Artificial intelligence is everywhere now. It helps us write emails, create images, and answer questions at all hours of the day. As this technology becomes integrated into our lives, a significant question arises: Is it actually beneficial for us?
This is a big deal because AI is no longer just a tool for fun or productivity. It’s becoming a companion for many, especially younger people, raising concerns about its effect on youth development.
Figuring out how to make these interactions healthy and safe is essential, and the formation of the OpenAI Expert Council on Well-Being and AI signals that the company is getting serious about how its technology affects its users’ mental health.
What Is This New OpenAI Expert Council?
OpenAI has brought together a team of eight researchers and experts to help define what a healthy, positive relationship between a person and AI should look like, and how that relationship can support people’s well-being.
This isn’t just a random group of tech enthusiasts. The council members have deep expertise in fields where technology intersects with our mental and emotional health. Their main focus is the crossroads where technology, psychology, and public health meet.
Their job is to advise OpenAI and act as a check on the company’s direction. They will pose questions and monitor how the company handles user well-being. It is a formal way for OpenAI to get consistent, expert advice on some really tough challenges as they continue to build advanced AI systems.
Who Are These Experts?
OpenAI hasn’t just picked people who know about code. According to the company’s blog post, the council includes people who have a profound understanding of human psychology, human-computer interaction, and online safety. This mix of knowledge is what makes the council’s perspective so important.
Many of these experts have collaborated with OpenAI previously on specific projects. For instance, they offered input on parental controls and appropriate notification language for teens in crisis. Now, their relationship is more official, ensuring their insights are integrated into the company’s long-term strategy.
Let’s look at some of the individuals on the council:
- David Bickham, Ph.D. – Research Director at the Digital Wellness Lab at Boston Children’s Hospital and Assistant Professor at Harvard Medical School. His research examines how social media use influences the mental health and development of young people.
- Mathilde Cerioli, Ph.D. – Chief Scientific Officer at everyone.AI, a nonprofit focused on helping people understand the potential and risks of AI in children’s lives. With a Ph.D. in Cognitive Neuroscience and a Master’s in Psychology, her work explores how AI impacts children’s cognitive and emotional development.
- Munmun De Choudhury, Ph.D. – JZ Liang Professor of Interactive Computing at Georgia Tech. Her research leverages computational methods to explore how online technologies influence and support mental health.
- Tracy Dennis-Tiwary, Ph.D. – Professor of Psychology at Hunter College and co-founder and Chief Science Officer at Arcade Therapeutics. She develops digital games for mental health and investigates how technology affects emotional well-being.
- Sara Johansen, M.D. – Clinical Assistant Professor at Stanford University and founder of the Stanford Digital Mental Health Clinic. Her work centers on how digital tools can enhance mental health care and overall well-being.
- David Mohr, Ph.D. – Professor at Northwestern University and Director of the Center for Behavioral Intervention Technologies. His research focuses on using technology to prevent and treat common mental health issues like depression and anxiety.
- Andrew K. Przybylski, Ph.D. – Professor of Human Behaviour and Technology at the University of Oxford. He studies how video games and social media impact motivation, behavior, and well-being.
- Robert K. Ross, M.D. – A leading voice in health philanthropy and public health. A former pediatrician and past President and CEO of The California Endowment, he has dedicated his career to advancing community-based health initiatives.
By forming a council, OpenAI is making a promise to listen to these voices on an ongoing basis. It’s a shift from asking for help on a single problem to building a long-term conversation about safety. These council members will help the company build technology that is more aware of its human impact.
Why Now? The Events That Pushed OpenAI to Act
You might be wondering why OpenAI is taking this step right now. The creation of this council didn’t happen in a vacuum. It comes after a period of intense scrutiny over the safety of powerful AI chatbots, especially for younger users.
Serious and tragic events have highlighted the potential risks of AI systems. One of the most significant was a lawsuit filed by the parents of a teenager who died by suicide. The lawsuit accused a chatbot of encouraging their son’s actions, a truly heartbreaking claim that raised alarms everywhere.
Events like this created a sense of urgency across the tech industry. People started to question whether these advanced AI systems have enough safeguards in place. It became clear that simply building powerful technology was not enough; companies must also take responsibility for how their AI affects people.
Government and Public Pressure Mounts
It wasn’t just families raising alarms. Government bodies have also started paying close attention to how AI intersects with daily life. The Federal Trade Commission (FTC) began looking into how AI chatbots can affect the mental health and safety of children.
When a powerful government agency like the FTC starts asking questions, tech companies listen. Their investigation signals that the days of releasing AI products without strict oversight may be numbered. The pressure was on for OpenAI to demonstrate it was taking these concerns seriously and had a plan to build advanced AI responsibly.
This public and regulatory spotlight has made it impossible for companies to ignore the well-being side of their technology. They needed a clear plan for protecting users, and this council is a major part of that plan.
What’s on the Agenda for the OpenAI Expert Council?
The council has already started its work. In their first formal meeting, they reviewed OpenAI’s current projects related to safety and well-being. But their job is much bigger than reviewing what’s already been done.
Going forward, they will explore some of the trickiest issues in AI. A big one is figuring out how an AI should act in sensitive conversations. If someone tells ChatGPT they are feeling sad or anxious, what is the right way for it to respond without overstepping into the role of a mental health professional?
They will also look at what kinds of guardrails are needed to support people using these tools. This could mean setting limits on certain topics or developing better computational approaches to detect when a user is in distress. Their goal is to find ways for AI to have a truly positive effect, and as OpenAI has stated, “we’ll continue learning” from this process.
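To make that idea concrete, here is a minimal sketch of what distress-aware routing could look like. This is not OpenAI’s actual system: the phrase list, the routing logic, and the resource message below are invented for illustration, and a real deployment would rely on trained classifiers rather than a hard-coded keyword list.

```python
# Illustrative sketch only, NOT OpenAI's implementation. The phrase list,
# routing logic, and resource text are placeholder assumptions.

DISTRESS_PHRASES = [
    "i want to hurt myself",
    "i can't go on",
    "no reason to live",
]

CRISIS_RESOURCE = (
    "It sounds like you're going through something really painful. "
    "You don't have to face it alone; in the US you can call or text 988."
)


def looks_distressed(message: str) -> bool:
    """Crude lexical check; a real system would use trained classifiers."""
    text = message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)


def respond(message: str) -> str:
    """Route distressed users toward supportive resources instead of a normal reply."""
    if looks_distressed(message):
        return CRISIS_RESOURCE
    return "(normal model reply would go here)"


if __name__ == "__main__":
    print(respond("Lately I feel like there's no reason to live."))
```

Even in this toy version, the design question the council faces is visible: the system has to decide when to step out of its usual role and surface help, without pretending to be a clinician.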
Here are some of the key areas they plan to explore:
- Defining what user “well-being” means in the context of AI.
- Guiding AI behavior during sensitive or emotional user interactions.
- Recommending safety guardrails to help users navigate their AI use.
- Exploring how AI can be used to actively empower and support people.
| Council’s Key Focus Areas | Description |
| --- | --- |
| AI Behavior in Sensitive Situations | Defining how AI should respond to users discussing mental health or crisis topics without crossing professional boundaries. |
| User Guardrails and Support | Developing features and limits within ChatGPT to promote healthy use patterns and provide resources when needed. |
| Positive AI Impact | Researching how advanced AI can actively contribute to a person’s learning, creativity, and overall well-being. |
| Defining ‘Well-Being’ | Establishing a clear framework for what constitutes a healthy human-AI interaction, guided by public health principles. |
More Than a Council: A Look at Broader Safety Efforts
The expert council is a headline-grabbing move, but it is just one piece of a bigger picture. OpenAI has been putting other safety-focused programs in place, too. These efforts show they are trying to tackle the problem from multiple angles.
One of the most interesting is the Global Physician Network. OpenAI has gathered over 250 physicians from around the world to get their input on health-related AI issues. This global physician community acts as a team of on-call doctors to help guide the technology.
More than 90 of these doctors have already given feedback on how AI models should handle mental health topics. According to OpenAI, their input directly influences safety research and how the AI models are trained. The company is also building a list of additional experts in areas like eating disorders and substance abuse to call on when needed.
Practical Tools for Parents and Users
Besides getting advice from experts, OpenAI is also building practical tools to protect users. They’ve already created parental controls that give parents more visibility and control over how their teens are using the technology.
The company also announced plans for an automated age-prediction system that tries to determine whether a user is over 18. If the system thinks a user is younger, it would route them to a version of ChatGPT with stricter restrictions, a step aimed at supporting healthy youth development.
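OpenAI hasn’t published how this system works, but the routing logic it describes is simple to sketch. In the hypothetical example below, the AgeEstimate type, the 0.9 confidence cutoff, and the mode names are all assumptions made for illustration.

```python
# Hypothetical sketch of age-prediction gating. OpenAI has not published
# implementation details; the types, threshold, and mode names are invented.

from dataclasses import dataclass


@dataclass
class AgeEstimate:
    age: int           # predicted age in years
    confidence: float  # predictor's confidence, 0.0 to 1.0


def select_experience(estimate: AgeEstimate, adult_age: int = 18) -> str:
    """Pick which ChatGPT experience to serve based on a predicted age.

    Errs on the side of caution: uncertain predictions get the
    restricted, teen-safe experience.
    """
    if estimate.age >= adult_age and estimate.confidence >= 0.9:
        return "standard"
    return "restricted"


if __name__ == "__main__":
    print(select_experience(AgeEstimate(age=25, confidence=0.95)))  # standard
    print(select_experience(AgeEstimate(age=25, confidence=0.40)))  # restricted
```

The key design choice in any such system is to fail closed: when the predictor isn’t confident, the user gets the restricted experience rather than the standard one.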
These features, along with the council and physician network, paint a picture of a company trying to build a safer environment. The question that remains is how effective all these measures will be in the real world. The commitment is to continue learning and adapting as the technology evolves.
The Big Questions: Can This Council Truly Make AI Safer?
Creating an advisory council is a great step, but it also raises some tough questions. The biggest one is about influence. Will this council have real power to change OpenAI’s products, or will it just be for show?
For this to work, the council’s recommendations must be taken seriously by the engineers and decision-makers. It is easy to listen to advice. It is much harder to put that advice into action, especially if it slows down product development or reduces engagement.
Another challenge is the speed of AI. The technology is changing so fast that any safety guidelines created today could be outdated tomorrow. The council will need to be agile and stay ahead of the curve, which is a massive challenge for anyone in this field.
Ultimately, the success of the OpenAI Expert Council on Well-Being and AI will depend on the company’s commitment. They must be willing to prioritize user well-being over profit or growth. Time will tell if this council becomes a powerful force for good or just a footnote in the history of AI.