A "Humans First" Approach to AI
Human-Centered AI Frameworks & Approaches
Value Sensitive Design (VSD)
- Developed by: Batya Friedman and Peter Kahn at the University of Washington
- Core approach: Systematically accounts for human values throughout the design process
- Methodology: Uses conceptual, empirical, and technical investigations to identify and address stakeholder values
- Key contribution: Provides a structured methodology to incorporate human values like privacy, autonomy, and trust directly into technical systems
- Resource: Value Sensitive Design: Shaping Technology with Moral Imagination
Center for Humane Technology (CHT)
- Founded by: Tristan Harris (former Google Design Ethicist) and colleagues
- Core mission: Realigning technology with humanity’s best interests
- Focus areas: Reducing digital addiction, combating misinformation, and promoting human-centered business models
- Notable work: The Social Dilemma documentary, Humane Tech design principles
- Key insight: Identifies attention extraction as the root problem of many tech harms
- Resource: CHT website and resources
Advancing Humans with AI (MIT Media Lab)
- Core approach: Focuses on AI systems that enhance human capabilities rather than replace them
- Research areas: Human-AI collaboration, AI literacy, augmented cognition
- Philosophy: Views AI as a tool for expanding human potential rather than a substitute
- Projects: Includes work on AI education, creative collaboration, and cognitive enhancement
- Resource: AHA at MIT
Responsible AI (Microsoft)
- Six guiding principles: Fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability
- Implementation: Comprehensive governance and public policy framework
- Practical tools: Fairlearn (fairness assessment), InterpretML (model interpretability), AI Fairness Checklist, and the Responsible AI Dashboard (see the sketch after this list)
- Impact assessment: Includes structured approaches to evaluate potential harms before deployment
- Resource: Responsible AI at Microsoft
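As a concrete taste of these tools, here is a minimal sketch of a fairness check with Fairlearn. The labels, predictions, and sensitive feature are hypothetical placeholders, and the exact API may vary between Fairlearn versions.

```python
# A minimal fairness check with Fairlearn; the data and the sensitive
# feature ("group") below are hypothetical placeholders.
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                  # ground-truth labels (made up)
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]                  # model predictions (made up)
group = ["a", "a", "a", "b", "b", "b", "b", "a"]   # sensitive feature per example

# Accuracy broken down by group: a large gap signals disparate performance.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# Demographic parity difference: 0.0 means both groups receive positive
# predictions at the same rate; larger values indicate more disparity.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```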
Human-Centered AI (Stanford HAI)
- Founded: 2019 as an interdisciplinary institute
- Mission: Advance AI research, education, policy, and practice to improve the human condition
- Core philosophy: Human-centered AI technologies should enhance human capabilities, not replace them
- Key areas: Augmenting human capabilities, addressing societal impact of AI, guiding AI’s development with human values
- Notable work: Research on AI ethics, policy recommendations, educational programs
- Resource: Stanford HAI
Ethically Aligned Design (IEEE)
- Scope: Comprehensive framework for ethical considerations in autonomous and intelligent systems
- Development process: Created through global, multidisciplinary collaboration
- Key principles: Human rights, wellbeing, data agency, effectiveness, transparency, accountability
- Implementation: Provides specific recommendations for standards bodies, policymakers, and engineers
- Technical focus: Includes detailed technical approaches for embedding ethics in AI systems
- Resource: IEEE Ethically Aligned Design
Montreal Declaration for Responsible AI
- Origin: Developed at the University of Montreal through collaborative deliberation
- Structure: 10 principles covering wellbeing, autonomy, privacy, solidarity, democracy, equity, diversity, prudence, responsibility, and sustainability
- Distinguishing feature: Created through public participation and consultation
- Implementation: Includes self-assessment tools and governance recommendations
- Goal: Guide the digital transition so that everyone benefits equitably from AI advancement
- Resource: Montreal Declaration
Additional Human-Centered Methodological Approaches
Participatory Design for AI
- Directly involves end users throughout the AI development process
- Emphasizes co-creation rather than designing “for” users
- Particularly valuable for AI systems serving marginalized or underrepresented communities
- Helps identify potential harms that developers might not anticipate
Algorithmic Impact Assessments
- Similar to environmental impact assessments
- Structured evaluation of potential societal impacts before deployment
- Often includes public disclosure requirements
- Increasingly being adopted in public sector AI governance
- Example: Canada’s Algorithmic Impact Assessment Tool (a toy scoring sketch follows this list)
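To make the questionnaire-to-risk-level idea concrete, here is a toy sketch loosely inspired by Canada's AIA tool. The questions, weights, and thresholds are illustrative inventions, not the official ones.

```python
# A toy questionnaire-to-risk-level mapping, loosely inspired by
# Canada's Algorithmic Impact Assessment tool. Questions, weights,
# and thresholds are illustrative inventions, not the official ones.
QUESTION_WEIGHTS = {
    "affects_legal_rights": 3,        # higher weight = higher potential impact
    "fully_automated_decision": 2,
    "uses_personal_data": 2,
    "outcome_easily_reversible": -1,  # mitigating factor lowers the score
}

def impact_level(answers: dict) -> str:
    """Sum the weights for every 'yes' answer and bucket into an impact level."""
    score = sum(w for q, w in QUESTION_WEIGHTS.items() if answers.get(q))
    if score >= 6:
        return "Level IV (very high impact)"
    if score >= 4:
        return "Level III (high impact)"
    if score >= 2:
        return "Level II (moderate impact)"
    return "Level I (little to no impact)"

# A hypothetical system that touches legal rights and personal data:
print(impact_level({"affects_legal_rights": True, "uses_personal_data": True}))
# -> Level III (high impact) under these illustrative weights
```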
Consequence Scanning
- Developed by Doteveryone (UK think tank)
- Structured workshop approach for development teams
- Asks three key questions:
  - What are the intended and unintended consequences of this product or service?
  - What are the positive consequences we want to focus on?
  - What are the negative consequences we need to mitigate?
- Integrated into regular development cycles, not just at the end
Ethics by Design
- Integrates ethical reasoning throughout the entire development lifecycle
- Uses tools like the ethics canvas and value proposition canvas
- Incorporates ethics-focused design patterns and best practices
- Emphasizes proactive rather than reactive ethical considerations
Human-in-the-Loop Systems
- Ensures humans maintain meaningful control and oversight in AI systems
- Particularly important in high-risk domains (healthcare, justice, etc.)
- Different models: human review, approval, oversight, or collaboration (a review-gate sketch follows this list)
- Recognizes that full automation isn’t always the goal
- Preserves human agency and accountability
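As one way to picture the "approval" model, here is a minimal sketch of a confidence-gated review loop. The model interface, the `request_human_review` callback, and the threshold are all hypothetical placeholders, not a prescribed design.

```python
# A minimal confidence-gated review loop for the "approval" model of
# human-in-the-loop oversight. The model interface and the
# request_human_review callback are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.90  # below this, a human must make the call

def decide(case, model, request_human_review):
    label, confidence = model.predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: record the automated decision so it stays auditable.
        return {"label": label, "decided_by": "model", "confidence": confidence}
    # Low confidence (or a high-stakes case): defer to a human reviewer,
    # preserving human agency and a clear line of accountability.
    human_label = request_human_review(case, suggestion=label)
    return {"label": human_label, "decided_by": "human", "confidence": confidence}

# Example wiring with stand-in implementations:
class StubModel:
    def predict(self, case):
        return ("approve", 0.72)  # deliberately low confidence

print(decide({"id": 1}, StubModel(), lambda case, suggestion: "deny"))
# -> {'label': 'deny', 'decided_by': 'human', 'confidence': 0.72}
```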
Common Principles Across Human-Centered AI Approaches
- Transparency & Explainability
  - AI systems should be understandable to those affected by them
  - Decisions should be explainable in human terms (a small sketch follows this list)
- Inclusive Design Processes
  - Diverse stakeholders should participate in development
  - Systems should work for people of all backgrounds and abilities
- Continuous Assessment
  - Ongoing evaluation of impacts rather than one-time assessments
  - Iterative improvement based on real-world effects
- Augmentation Over Automation
  - Focus on enhancing human capabilities rather than replacing humans
  - Preserve meaningful human agency and decision-making
- Accountability Structures
  - Clear lines of responsibility for AI outcomes
  - Mechanisms for redress when harms occur
- Contextual Deployment
  - Recognition that no single approach works in all contexts
  - Adaptation to specific cultural, social, and domain needs
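To illustrate "explainable in human terms" in miniature, the sketch below surfaces which features drove a linear model's decision. Feature names, data, and labels are hypothetical; dedicated tools such as InterpretML (mentioned above) go much further.

```python
# Surfacing which features drove a linear model's decision, as one
# simple form of "explainable in human terms". Feature names, data,
# and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([[50, 0.4, 2], [80, 0.2, 10], [30, 0.9, 1], [60, 0.3, 5]])
y = np.array([1, 1, 0, 1])  # 1 = application approved (made up)

model = LogisticRegression().fit(X, y)

applicant = np.array([40.0, 0.7, 3.0])
# For a linear model, coefficient * feature value gives each feature's
# contribution to the decision score, which can be read out in plain terms.
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```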
Lots more… ask an AI to expand on this!