Today, a new alliance launches Humanity AI, pledging $500 million over five years to steer artificial intelligence toward human-centered outcomes.
This initiative unites ten major philanthropic organizations, spanning fields from arts and education to labor and democracy, into a collective force aiming to ensure AI develops with human dignity, equity, and agency in mind. Among the founding partners are the Doris Duke Foundation, Ford Foundation, MacArthur Foundation, Mozilla Foundation, and Omidyar Network.
AI is no longer a distant frontier. As models influence how we learn, work, and express ourselves, critical questions emerge: Who holds the power behind the code? Whose values are embedded in the algorithms? How do we safeguard human creativity and connection?
These questions play out in everyday life. Workers wonder whether AI will support them or supplant them. Creators and artists worry their contributions may be overshadowed or co-opted by machines. Communities ask who benefits from AI, and who bears its risks.
A New Alliance for a Better AI Future
A group of America’s biggest foundations has decided to take action, forming a coalition called Humanity AI. The group is launching a $500 million AI philanthropy initiative over the next five years. Its goal is simple but ambitious: to build a people-centered future for AI.
You might recognize some of the names involved. The Ford Foundation, known for its work in social justice, is part of this. So is the MacArthur Foundation, which supports creative solutions to urgent challenges, and the David and Lucile Packard Foundation, a leader in conservation and science.
These organizations have a long history of tackling big societal problems. Now, they are turning their attention to one of the biggest of our time. They believe that the development of AI cannot be left solely to tech companies and market forces.
Together, they are driving new investments in people and projects that will help shape AI for good. It’s a clear signal that the future of this technology should be a conversation that includes everyone. Civil society, academics, workers, and communities all need a voice in how these powerful tools are built and deployed.
“People deserve to be part of shaping AI—not merely being subject to it,” said John Palfrey, President of MacArthur. “Our goal is to move away from a future defined by algorithms alone and toward one co-designed by people.”
Core Focus Areas & Grant Strategy
Humanity AI funders will invest in projects across five interconnected domains:
- Democracy — Supporting frameworks that preserve individual rights and collective voice in AI governance.
- Education — Ensuring AI tools foster access, equity, and learning opportunities for all.
- Humanities & Culture — Safeguarding creators’ rights and valuing human-driven cultural expression in an AI era.
- Labor & Economy — Promoting models in which AI enhances human work instead of displacing it.
- Security — Holding AI systems to strict standards to protect public safety, fairness, and trust.
Grantmaking will begin in earnest in 2026, with Rockefeller Philanthropy Advisors managing the pooled fund and hiring a core team including an Executive Director. Humanity AI aims to expand participation by inviting additional funders into its collaborative structure.
Michele L. Jawando of Omidyar Network commented, “The future is not written by algorithms—it’s written by people. This effort is about guiding AI toward human values, not letting technology dictate what we become.”
Protecting Democracy in the Digital Age
Our democracy is built on trust and shared information. AI poses new challenges to both.
Think about the rise of deepfakes, which can create convincing but completely fake videos of political figures, or the rapid spread of misinformation on social media. These tools can make it hard to know what’s real, eroding the public’s trust in institutions.
AI-powered algorithms can also create “filter bubbles” that show people only content that reinforces their existing beliefs. This dynamic can deepen political polarization and make compromise more difficult.
Humanity AI will fund projects that build new frameworks to protect our rights and freedoms. The coalition wants to promote a version of AI that strengthens democracy rather than undermining it. This work is about building a digital public square that is safe and reliable for everyone.
This includes funding research into technologies like digital watermarking, which can help identify synthetically generated content. It also means supporting media literacy programs to give people the skills to spot fake information. The health of our democracy might depend on winning this fight.
Shaping AI for Students, Not Just Schools
AI is entering our classrooms, and it could be a fantastic tool for learning. Imagine an AI tutor that gives every student personalized help exactly when they need it, adapting to their unique learning pace and style.
The potential is enormous, but there are risks, too. Will AI be used to monitor students constantly, collecting sensitive data about their performance and behavior? Could it create bigger gaps between well-funded schools that can afford the latest tech and those that cannot, worsening the digital divide?
The initiative aims to put the best interests of students first. The objective is to ensure AI becomes a tool that expands access to knowledge, not a new form of surveillance. The goal is equitable AI deployment that makes learning better and more engaging for all children, regardless of their background.
Valuing Human Creativity and Culture
Artists, musicians, and writers are feeling the impact of AI. Generative AI can now create stunning images, compose music, and even write stories. This has started a huge debate about art, originality, and ownership.
Whose work is it if an AI was trained by scraping thousands of human-created images from the internet without permission? Many artists feel like their work and style are being stolen. They worry that their creativity will be devalued if markets are flooded with AI-generated content.
Governments are now trying to figure out how copyright law applies to this new world. This AI philanthropy initiative wants to help fix this problem by supporting artists and cultural communities. The goal is not to stop AI art but to protect the human creators whose work forms the foundation of these systems.
Making Sure AI Works For Us, Not Against Us
Let’s talk about jobs. This is one of the biggest worries people have about artificial intelligence. You hear stories about robots replacing cashiers, truck drivers, and even some office workers, and it is a real concern for families everywhere.
But the story does not have to end there. AI could also be used to make our jobs better and safer. It could take care of boring, repetitive tasks, freeing us up to focus on the more creative and strategic parts of our work.
Humanity AI is focused on building an economy that values people. The initiative will support worker-centric projects in which AI enhances how people work rather than replacing them. The goal is a future where technology helps everyone thrive, not just a select few.
This includes supporting unions and worker organizations that are negotiating for better protections in an automated workplace. It also means investing in lifelong learning programs to help people gain new skills. Ultimately, it’s about giving workers a seat at the table when decisions about new technologies are made.
Putting Safety and Security First
Innovation should never come at the cost of our safety. As AI gets integrated into critical systems like cars, healthcare, and finance, we have to be certain it’s reliable. A self-driving car needs to be incredibly safe, and an AI that helps doctors diagnose disease must be accurate.
We can’t afford to get this wrong. That’s why this initiative is pushing for the highest possible standards and responsible AI governance. Those who build and use AI systems must be held accountable for their safety and security.
This means rigorous testing and transparency. People need to trust that the AI systems they interact with every day are safe, secure, and working in their best interests. Concepts like explainable AI (XAI) are important, allowing us to understand why an AI model made a particular decision.
It also means defending against new threats like adversarial attacks, where tiny, malicious changes to data can fool an AI system. Funding research and developing industry-wide safety protocols are essential steps in building a trustworthy AI ecosystem.
How You Can Get Involved
Humanity AI is just getting started. Grantmaking is set to begin in 2026, injecting new funding into projects across the five key areas discussed above and supporting researchers, advocates, and innovators.
The coalition is also actively looking for more partners. It wants to grow this movement and invites other funders to join its mission. Building a better AI future, the coalition believes, will take all of us working together to put humanity first.
Organizations interested in tracking or applying for funding can sign up for updates via the Humanity AI website.