AI for Humanity
Building ethical, trustworthy AI systems through global collaboration
Based on insights from the 2025 Global Digital Collaboration Conference
Sponsored by Global Open Source Innovation Meetup (GOSIM)
Our Mission
The AI for Humanity Program is founded on the belief that global collaboration can steer AI toward a more equitable, safe, and impactful future. We bring together researchers, technologists, and open source leaders to share insights, identify shared challenges, and chart pathways for collective action.
Executive Summary
Key insights from the 2025 Global Digital Collaboration Conference
Responsible AI
- Technical and governance imperative
- Nine dimensions, including transparency, privacy, accountability, safety, inclusion, and sustainability
- Need machine-readable standards, open tools, and cross-sector coordination
Model Collaboration & Openness
- Counter 'open-washing' with verifiable openness across models, data, and compute
- Adopt permissive licensing and the Model Openness Framework (Class I–III)
- Broaden access to datasets and compute; preserve linguistic/cultural diversity
Maturity & Evaluation
- AI maturity and evaluation remain fragmented; focus on human-centered value
- Standardize audits and testing; adopt context-aware metrics and open tools
- Prioritize real-world impact over hype; augment humans rather than automating blindly
Social & Economic Impact
- Tackle inequities in access, talent, and infrastructure—especially in the Global South
- Leverage open systems to reduce information asymmetries
- Finance and climate use-cases need transparent, non–black-box approaches
Agentic AI & Trust
- Agents amplify risks (misinfo, identity spoofing, unauthorized actions)
- Build trust via identity, provenance (C2PA), content credentials, and standards
- Architect for accountability, oversight, and secure delegation
Five Critical AI Themes
Explore comprehensive frameworks, tools, and standards for responsible AI development across five critical themes.
Responsible AI
Governance frameworks, open tooling, and engineering practices for fair, safe, and accountable AI systems.
Model Collaboration
Open models, data accessibility, and collaborative development for democratizing AI globally.
Maturity & Evaluation
Assessment frameworks, benchmarking standards, and maturity models for AI systems.
Social & Economic Impact
Labor implications, equity considerations, and economic effects of AI deployment across societies.
Agentic AI
Autonomous AI systems, trust protocols, security frameworks, and human-AI interaction patterns.
Session Key Points
Detailed insights and practical recommendations from expert presentations
Session 1 — Responsible AI
Key topics: RGAF (9 Dimensions) · TrustyAI Tooling · Identity & Provenance · FINOS Governance
- Engineer for fairness, safety, explainability, and alignment, not just state them as principles.
- Adopt machine-readable controls, risk/mitigation catalogs, and open auditability (see the sketch after this list).
- Use federated identity and traceability for sensitive domains (e.g., child safety).
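To make the "machine-readable controls" point concrete, here is a minimal sketch of a risk/mitigation catalog entry expressed as plain data that audit tooling could consume. The field names and the escalation rule are illustrative assumptions, not the FINOS or RGAF schema.

```python
# A minimal sketch of a machine-readable risk/mitigation catalog entry that
# audit tooling could consume. Field names and the escalation rule are
# illustrative assumptions, not the FINOS or RGAF schema.
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    id: str
    description: str
    evidence_url: str          # link to test results, audit logs, etc.

@dataclass
class RiskControl:
    id: str
    dimension: str             # e.g. "transparency", "safety"
    risk: str                  # plain-language risk statement
    severity: str              # "low" | "medium" | "high"
    mitigations: list[Mitigation] = field(default_factory=list)

    def requires_human_review(self) -> bool:
        # Illustrative policy: high-severity risks without documented
        # mitigation evidence must be escalated to a human reviewer.
        return self.severity == "high" and not self.mitigations

catalog = [
    RiskControl(
        id="RC-001",
        dimension="safety",
        risk="Model may generate unsafe instructions",
        severity="high",
    ),
]

for control in catalog:
    if control.requires_human_review():
        print(f"{control.id}: escalate for human review")
```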
Session 2 — Model Collaboration
Key topics: True Openness · MOF (Class I–III) · Open Data & Compute · Language Preservation
- Counter "open-washing" with permissive licensing and verifiable openness.
- Expand access to diverse datasets and affordable compute, including emerging economies.
- Partner with global bodies to preserve languages and cultures in AI.
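As one way to make openness verifiable rather than a marketing label, the sketch below checks a model release against a simplified, MOF-style component checklist. The class names follow the Class I–III structure cited above, but the specific component lists are illustrative assumptions, not the official MOF specification.

```python
# A minimal sketch of checking a model release against a simplified
# Model Openness Framework (MOF) style checklist. The component lists
# below are illustrative assumptions, not the official MOF specification.

CLASS_REQUIREMENTS = {
    "Class III (Open Model)": {"architecture", "weights", "model_card", "license"},
    "Class II (Open Tooling)": {"architecture", "weights", "model_card", "license",
                                "training_code", "evaluation_code"},
    "Class I (Open Science)": {"architecture", "weights", "model_card", "license",
                               "training_code", "evaluation_code",
                               "datasets", "research_paper"},
}

def classify_release(published_artifacts: set[str]) -> str:
    """Return the most open class whose requirements are fully met."""
    achieved = "Unclassified (possible open-washing)"
    for label, required in CLASS_REQUIREMENTS.items():
        if required <= published_artifacts:
            achieved = label
    return achieved

release = {"architecture", "weights", "model_card", "license",
           "training_code", "evaluation_code"}
print(classify_release(release))  # -> Class II (Open Tooling)
```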
Session 3 — Maturity & Evaluation
Key topics: Human-Centered Value · Standardized Testing · Context-Aware Metrics
- Move beyond hype; target augmenting human capabilities.
- Establish shared benchmarks and evaluation practices across domains.
- Use open-source tools and multi-stakeholder review to improve reliability and transparency (a sketch follows this list).
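A small illustration of context-aware evaluation: rather than reporting a single global score, the sketch below keeps every metric tied to the domain and language in which it was measured. The task, metric names, and example scores are illustrative assumptions.

```python
# A minimal sketch of context-aware evaluation records: each metric is stored
# with the domain and language in which it was measured, so results are not
# silently aggregated into one misleading global score.
from dataclasses import dataclass
import json

@dataclass
class EvalResult:
    model: str
    task: str
    domain: str      # e.g. "healthcare", "finance"
    language: str    # e.g. "en", "sw"
    metric: str      # e.g. "exact_match"
    score: float

def summarize(results: list[EvalResult]) -> dict:
    """Group average scores by (domain, language) instead of one global number."""
    buckets: dict[str, list[float]] = {}
    for r in results:
        buckets.setdefault(f"{r.domain}/{r.language}", []).append(r.score)
    return {key: sum(scores) / len(scores) for key, scores in buckets.items()}

results = [
    EvalResult("demo-model", "qa", "finance", "en", "exact_match", 0.82),
    EvalResult("demo-model", "qa", "finance", "sw", "exact_match", 0.61),
]
print(json.dumps(summarize(results), indent=2))
```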
Session 4 — Social & Economic Impact
Key topics: Equity & Access · Global South Capacity · Open Systems · Climate Finance
- Invest in capacity-building, infrastructure, and context-aware models for underserved regions.
- Use open data and tooling to reduce information monopolies.
- Apply transparent models in finance and climate to avoid black-box risks (illustrated in the sketch after this list).
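As an example of the transparency argument, the sketch below uses a linear scorecard whose per-feature contributions are fully visible, in contrast to a black-box model. The feature names and weights are illustrative assumptions, not a real credit or climate-risk model.

```python
# A minimal sketch of a transparent, non-black-box scoring approach for a
# finance-style use case: a linear scorecard whose per-feature contributions
# are fully visible. Feature names and weights are illustrative assumptions.

WEIGHTS = {
    "payment_history": 0.5,
    "debt_to_income": -0.3,
    "years_of_data": 0.2,
}

def score(applicant: dict) -> tuple[float, dict]:
    """Return the total score and the per-feature contributions that explain it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

total, explanation = score(
    {"payment_history": 0.9, "debt_to_income": 0.4, "years_of_data": 0.7}
)
print(f"score={total:.2f}")
for feature, value in explanation.items():
    print(f"  {feature}: {value:+.2f}")
```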
Session 5 — Agentic AI
Key topics: Identity · Provenance (C2PA) · Delegation & Oversight · Security ≠ Trust
- Deploy content credentials and provenance to combat mis/disinformation.
- Design accountability for agents: memory, identity, audit trails, and safe tool access.
- Adopt monitoring/critic agents and pre-action policy checks (a sketch follows this list).
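One possible shape for a pre-action policy check: every tool call an agent proposes is screened against an allow-list and a spending limit before it executes, and every decision is written to an audit trail. The tool names, limit, and rules are illustrative assumptions, not a specific agent framework's API.

```python
# A minimal sketch of a pre-action policy gate for an agent: tool calls are
# screened against an allow-list and a spending limit before they run, and
# every decision is appended to an audit trail. All names and limits here
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PolicyGate:
    allowed_tools: set[str]
    max_payment: float = 0.0
    audit_trail: list[dict] = field(default_factory=list)

    def check(self, tool: str, args: dict) -> bool:
        allowed = tool in self.allowed_tools
        if tool == "make_payment" and args.get("amount", 0) > self.max_payment:
            allowed = False  # over the limit: escalate to a human instead of acting
        self.audit_trail.append({"tool": tool, "args": args, "allowed": allowed})
        return allowed

gate = PolicyGate(allowed_tools={"search", "make_payment"}, max_payment=50.0)
print(gate.check("search", {"query": "C2PA spec"}))        # True
print(gate.check("make_payment", {"amount": 500.0}))       # False: over limit
print(gate.check("delete_records", {"table": "users"}))    # False: not allowed
```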
Conclusions & Suggested Actions
Translate conference insights into immediate next steps. Prioritize pilots, transparent reporting, and multi‑stakeholder alignment.
1 Transparency & Licensing
Publish transparent model/dataset cards; adopt permissive open licenses (e.g., Apache-2.0/MIT).
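A minimal sketch of what a transparent, machine-readable model card could contain follows; the fields are common model-card elements, but the exact schema here is an illustrative assumption rather than a prescribed standard.

```python
# A minimal sketch of a model card published as structured, machine-readable
# data. The fields and values below are illustrative assumptions.
import json

model_card = {
    "model_name": "example-summarizer",
    "version": "0.1.0",
    "license": "Apache-2.0",
    "intended_use": "Summarizing public news articles",
    "out_of_scope_use": ["Medical or legal advice"],
    "training_data": {"sources": ["public news corpora"], "languages": ["en", "fr"]},
    "evaluation": [{"benchmark": "demo-summeval", "metric": "rougeL", "score": 0.41}],
    "known_limitations": ["Quality degrades on low-resource languages"],
    "contact": "https://example.org/model-card",
}

print(json.dumps(model_card, indent=2))
```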
2 Independent AI Audits
Pilot independent, open audits for high-risk AI with machine-readable controls (e.g., FINOS-style).
3 Global Registry & Compute
Stand up a global registry of open models/datasets and a shared compute pilot for education/research.
4 Cross-Regional Working Groups
Launch cross-regional working groups on evaluation standards and language/culture preservation.
5 Content Credentials & Oversight
Deploy content credentials (C2PA) and agent oversight patterns in production pilots.
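To illustrate the intent of content credentials, the sketch below binds a content hash and an issuer identity to an asset and verifies the asset later. This is not the C2PA manifest format or the official c2pa tooling; the field names and the omission of cryptographic signing are simplifications for illustration only.

```python
# A minimal sketch of binding a content hash and issuer identity to a media
# asset, in the spirit of C2PA content credentials. NOT the C2PA manifest
# format; field names and the missing signature step are illustrative.
import hashlib
import json

def make_manifest(content: bytes, issuer: str, tool: str) -> dict:
    return {
        "claim": {
            "issuer": issuer,                  # who vouches for the asset
            "generator": tool,                 # e.g. the capture or editing tool
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }
        # a real content credential adds a cryptographic signature and
        # certificate chain here; omitted in this sketch
    }

def verify(content: bytes, manifest: dict) -> bool:
    """Check that the asset has not been altered since the manifest was made."""
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

asset = b"example image bytes"
manifest = make_manifest(asset, issuer="Example Newsroom", tool="demo-capture-app")
print(json.dumps(manifest, indent=2))
print(verify(asset, manifest))               # True
print(verify(b"tampered bytes", manifest))   # False
```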
6 Incident Response Playbooks
Create incident response playbooks for AI harms and run tabletop exercises with regulators.
Join the Global Collaboration
Be part of shaping the future of AI. Connect with researchers, technologists, and practitioners working on responsible AI development.
Resources
Download key reports and documentation from the AI for Humanity initiative
GDC AI For Humanity Track Report
Comprehensive report covering the five key AI themes from the 2025 Global Digital Collaboration Conference
Implementation Frameworks
Access key frameworks including RGAF, MOF, and TSP for implementing responsible AI practices
Community Guidelines
Guidelines for participation in working groups and collaborative development