
Coalition Urges OpenAI to Scrap AI Ballot Measure Amid Child Safety Fears


A coalition of tech-accountability and child-safety groups is urging California lawmakers to challenge an OpenAI-backed ballot measure over what it calls major gaps in protections for minors. The dispute centers on the Parents & Kids Safe AI Act, a California initiative that Common Sense Media and OpenAI publicly backed on January 9, 2026, and that critics say could narrow liability, weaken privacy safeguards, and limit public oversight even while presenting itself as a child-protection law. The fight matters because the proposal is already in California’s initiative pipeline and could shape how states regulate AI systems used by children.

⚠️ The core dispute is not whether children need AI protections.
Both sides say they support stronger safeguards. The conflict is over whether the OpenAI-backed measure strengthens California law or carves out exemptions that could leave children with weaker remedies.

California Initiative Snapshot (as of March 19, 2026)

Measure name: Parents & Kids Safe AI Act (OpenAI and Common Sense Media announced support on January 9, 2026)
Circulation deadline: June 24, 2026 (California Secretary of State deadline to submit signatures)
Signatures required: 546,651 valid signatures from registered voters for ballot eligibility

Sources: California Secretary of State; Common Sense Media

February 25 letter turns a child-safety campaign into a fight over liability

On February 25, 2026, Common Cause published a response from its AI watchdog group, CITED, saying it was “sounding the alarm” over OpenAI’s proposed ballot measure and the risks it poses to children’s safety. The underlying coalition letter, dated February 4, 2026, was sent to leaders of California’s Assembly and Senate privacy and judiciary committees. It asked lawmakers to scrutinize the initiative if it reaches the Legislature for review and argued that voters would need fuller information before any statewide vote.

The letter does not dispute that AI systems can harm minors. Instead, it argues that the initiative is framed as a safety measure while narrowing how harms can be challenged. According to the coalition’s PDF, the proposal defines “severe harms” narrowly around significant physical injury tied to suicide, attempted suicide, self-harm, or threats of violence. The signers argue that this leaves out mental and emotional distress, including harms linked to companion chatbots or age-inappropriate content that may not immediately produce physical injury.

The coalition also says the initiative would reduce accountability by limiting enforcement largely to the California attorney general, restricting class actions, capping penalties for certain violations, and blocking some claims under California’s unfair competition law. Those objections are central to the campaign against the measure because they shift the debate from broad child safety principles to the mechanics of enforcement, remedies, and corporate exposure.

Main Points of Dispute Over the Parents & Kids Safe AI Act

Age assurance: supporters say the measure creates child-protective settings for users under 18; critics say it could weaken California’s broader age-verification framework.
Privacy: supporters say it bars the sale or sharing of children’s data without parental consent; critics say it contains loopholes and may not fully apply to OpenAI’s nonprofit structure.
Audits: supporters say it requires independent child-safety audits; critics say audit results would not be fully public.
Enforcement: supporters say it allows attorney general enforcement and penalties; critics say it limits private lawsuits and class actions.

Sources: Common Sense Media press release (January 9, 2026); Common Cause coalition letter (February 4, 2026)

Why January 9 matters: OpenAI and Common Sense Media merged competing proposals

The present controversy starts with a public alliance. On January 9, 2026, Common Sense Media announced that it and OpenAI were backing the Parents & Kids Safe AI Act, describing it as the strongest youth AI safety effort in the United States. Common Sense Media said the measure consolidated two competing ballot initiatives into one proposal. That detail matters because it shows the measure was not simply an outside advocacy effort later endorsed by OpenAI; it was presented as a merged framework with OpenAI’s support.

Common Sense Media’s release says the initiative would require age assurance, prohibit child-targeted advertising, restrict emotional dependency and simulated romantic relationships, provide parental controls, mandate independent audits, and authorize enforcement by the California attorney general. It also says the measure applies to AI chatbots and systems that simulate conversation, including ChatGPT.

Supporters framed the proposal as a response to a failed legislative path in 2025. Common Sense Media had backed California legislation aimed at AI companion chatbot safety, but Governor Gavin Newsom vetoed that bill on October 13, 2025. The governor said he supported the goal of safeguards for minors but warned the bill’s restrictions could unintentionally amount to a broad ban on minors’ use of conversational AI tools. After that veto, the ballot route became more important for advocates seeking statewide rules.

That sequence explains why the coalition’s criticism is politically significant. It is not attacking a dormant draft. It is challenging a live initiative that emerged after a vetoed bill, has an official title and summary, and is already cleared for signature gathering.

How the California AI Child-Safety Fight Reached the Ballot Track

October 13, 2025
Governor vetoes chatbot bill

Governor Gavin Newsom vetoes legislation restricting children’s access to AI chatbots, saying the bill’s limits may be overly broad.

December 26, 2025
Initiative cleared for circulation

California’s Secretary of State says the child-safety AI initiative enters circulation after the attorney general prepares the official title and summary.

January 9, 2026
OpenAI and Common Sense Media announce support

The organizations say they have consolidated competing ballot initiatives into the Parents & Kids Safe AI Act.

February 4, 2026
Coalition letter sent to lawmakers

Advocacy groups warn the measure could weaken existing consumer and child protections.

February 25, 2026
Common Cause publishes response

CITED publicly urges scrutiny of the OpenAI-backed proposal over child-safety concerns.

546,651 signatures and a June 24 deadline put the measure on a real clock

California’s Secretary of State said on December 26, 2025 that the initiative had been cleared to begin collecting signatures. The office’s release states that the proponent must gather 546,651 valid signatures from registered voters to qualify the measure for the ballot. It also sets June 24, 2026 as the deadline to submit signatures to county elections officials.

That procedural status gives the story urgency. The coalition is not debating an abstract policy white paper. It is trying to influence lawmakers, voters, and public opinion while the measure is still moving through the initiative process. If supporters collect enough signatures, the proposal could become a statewide ballot question with legal language that is harder to revise than ordinary legislation.

The coalition’s letter highlights that concern directly. It says the initiative would “lock in” the law by requiring a supermajority of the Legislature to make changes and only allowing amendments that are consistent with the initiative’s purposes. Critics argue that such a structure is a poor fit for AI policy because the technology and its risks change quickly. In their view, a ballot statute can freeze definitions, exemptions, and enforcement choices at a moment when lawmakers may need flexibility.

That argument has practical weight in California. Ballot measures can be durable and politically difficult to amend. For AI governance, where lawmakers are still defining terms such as “companion chatbot,” “covered AI systems,” and “age assurance,” durability can be either a feature or a flaw depending on how broad or narrow the original text is.

📊 The initiative is already beyond the concept stage.
California’s Secretary of State says the proposal entered circulation on December 26, 2025, with 546,651 signatures required and a June 24, 2026 filing deadline.

How the coalition says the measure could exempt OpenAI and narrow privacy rules

One of the sharpest claims in the coalition letter is that the initiative’s privacy protections are weak and may exempt OpenAI in its current form. The letter says the proposal bars providers from selling or sharing children’s personal information, but ties that rule to the California Consumer Privacy Act definition of a covered business. Because OpenAI’s structure includes a nonprofit parent, the coalition argues the company is “likely exempt” from those CCPA obligations in its current configuration and therefore may fall outside some privacy protections in the initiative.

That claim is especially notable because OpenAI spent much of 2025 under scrutiny over its corporate structure. In May 2025, OpenAI said its nonprofit would remain in control of the company after abandoning a plan to convert itself more fully into a for-profit business. Later, OpenAI said its nonprofit would continue to oversee a public benefit corporation structure. Those governance details matter here because the coalition’s privacy critique depends on how California law applies to nonprofit versus business entities.

The coalition also points to a “business purpose” loophole. It argues that even where the initiative restricts sale or sharing of children’s data, it still allows use of that information for business purposes as defined under the CCPA. Critics say that could permit sensitive conversations with chatbots to be used for model training, product improvement, or research, depending on how the terms are interpreted and enforced.

Supporters of the initiative present the opposite picture. Common Sense Media says the measure would prohibit child-targeted advertising, stop the sale or sharing of children’s data without parental consent, and require parental controls and alerts. The gap between those descriptions is why the text of the initiative, not just the press release, has become the center of the dispute.

What the proposal says about audits, parental alerts, and “severe harms”

Common Sense Media’s January 9 release emphasizes several concrete safeguards: age assurance, restrictions on emotional dependency, parental controls, annual risk assessments, independent audits, and attorney general enforcement. Those are the provisions supporters use to argue that the measure is a strong baseline for youth AI safety.

The coalition does not say those mechanisms are absent. It says they are too narrow, too opaque, or too discretionary. On audits, the letter argues that although companies would have to obtain independent child-safety audits and submit them to the attorney general, the public would not get full access to those reports. Critics say that means parents, researchers, and watchdog groups could learn only generalized findings rather than identify which products or companies pose the greatest risks.

On parental alerts, the coalition argues the initiative gives AI companies too much discretion. The letter says providers would have to notify a parent in a “timely manner” when the system determines a child “will” suffer severe harm, but it criticizes both the timing standard and the threshold. In the coalition’s reading, “will” is too high a bar compared with “may,” and the initiative also leaves room for companies to decide that notifying a parent is not in the child’s best interest.

On harms, the coalition’s central complaint is definitional. By focusing on severe physical injury tied to self-harm or violence, critics say the measure misses a wider range of mental-health and developmental harms that can arise from manipulative chatbot interactions, grooming behavior, or prolonged emotional dependency. That is not a technical drafting dispute alone. It goes to which injuries count, who can sue, and what conduct companies must design against.

Key Provisions Supporters Highlight vs. Criticisms Raised by Opponents

Independent audits: supporters frame them as annual child-safety reviews reported to the attorney general; opponents say the reports would not be fully public, limiting accountability.
Parental alerts: supporters say parents can be notified if a child shows signs of self-harm; opponents say the notification standard is too narrow and discretionary.
Companion chatbot limits: supporters say the measure restricts emotional dependency and simulated romance; opponents say definitions and exemptions may be too broad or unclear.
Attorney general enforcement: supporters say it creates state oversight and penalties; opponents say it crowds out private enforcement and class actions.

Sources: Common Sense Media; Common Cause coalition letter (January and February 2026)

2025 warnings from attorneys general add pressure to OpenAI’s child-safety record

The ballot fight is unfolding against a broader backdrop of official concern about chatbot safety. In September 2025, the attorneys general of California and Delaware warned OpenAI they had serious concerns about the safety of ChatGPT, especially for children and teens. Reporting from AP and TechCrunch said the officials were reviewing OpenAI’s safety mission and pressing the company and the wider industry to improve protections.

OpenAI, for its part, has publicly emphasized child-safety work. The company has published a Teen Safety Blueprint, a child-safety-by-design statement, and policy materials describing prohibitions on exploiting or sexualizing minors and reporting apparent child sexual abuse material to the National Center for Missing and Exploited Children. It has also said it supports policy frameworks that promote partnerships among technology companies, government, and advocacy groups to protect children.

That creates a tension at the center of this story. OpenAI has publicly argued for stronger child protections and has backed a California initiative marketed as a leading youth AI safety measure. Yet a coalition of advocacy groups says the same proposal could weaken existing legal tools and create carve-outs that benefit AI companies. The disagreement is therefore less about whether child safety matters and more about whether the legal architecture is genuinely protective.

For readers tracking AI regulation, this is the important distinction: a measure can contain new obligations while still narrowing older remedies. That is why critics are focusing on class actions, unfair competition claims, penalty caps, audit secrecy, and amendment restrictions rather than only on the headline promises of age assurance and parental controls.

What happens next before California voters see any AI child-safety question

The next milestone is procedural, not rhetorical. Supporters must collect enough valid signatures by June 24, 2026 for the initiative to qualify. Before that, lawmakers, advocacy groups, and industry participants are likely to keep debating whether the proposal should move forward in its present form, be revised through legislative negotiations, or be abandoned.

The coalition’s public position is clear: it wants lawmakers to scrutinize the initiative closely and has framed the measure as one that should not advance without major changes. The title of this article uses “scrap” in the political sense raised by opponents, but the publicly available documents show a formal call for rigorous scrutiny and a warning that the initiative could undermine protections, rather than a completed legal action removing it from circulation.

That distinction matters for accuracy. As of Thursday, March 19, 2026, the measure remains an active California initiative in circulation. The Secretary of State’s records show the proposal is on the signature-gathering track, and Common Sense Media’s January announcement backing the measure remains public. No public source reviewed here shows that OpenAI has withdrawn support or that California officials have halted the initiative process.

If the measure qualifies, the debate will likely widen beyond Sacramento. Common Sense Media has described the proposal as a model with national significance, while critics have warned that its structure could be replicated elsewhere. That means the California fight may become a test case for a broader question in AI law: whether child-safety statutes should be built through flexible legislation, ballot initiatives, or company-backed hybrid models that combine both advocacy and industry support.

Conclusion

The dispute over the OpenAI-backed Parents & Kids Safe AI Act is a fight over legal design, not just messaging. Supporters say the initiative would impose age assurance, parental controls, audit requirements, and restrictions on manipulative chatbot behavior. Opponents say the same text narrows liability, weakens privacy protections, limits public transparency, and could even exempt OpenAI from some obligations. With 546,651 signatures required and a June 24, 2026 deadline already set, the argument is moving on a real election calendar. For now, the verified record shows an active California initiative, a public alliance between OpenAI and Common Sense Media, and a coalition campaign pressing lawmakers to stop or substantially rethink the measure before it reaches voters.

Frequently Asked Questions

What is the Parents & Kids Safe AI Act?

It is a California ballot initiative focused on child-safety requirements for AI products, including chatbots. Common Sense Media and OpenAI announced support for the measure on January 9, 2026, saying it would require age assurance, parental controls, independent audits, and limits on manipulative chatbot behavior.

Did a coalition really oppose the OpenAI-backed measure?

Yes. A coalition letter dated February 4, 2026 and later publicized by Common Cause on February 25, 2026 argues that the initiative could weaken child safety, privacy, and accountability protections. The signers urged California lawmakers to scrutinize the measure closely.

Why are critics saying the measure could hurt child safety?

Critics argue the initiative defines covered harms too narrowly, limits private lawsuits, caps some penalties, restricts public access to audit findings, and may create privacy loopholes. Their position is that these design choices could reduce accountability even if the measure adds some new obligations.

Is the ballot measure still active as of March 19, 2026?

Yes. California’s Secretary of State says the initiative entered circulation on December 26, 2025. The filing deadline for signatures is June 24, 2026, and 546,651 valid signatures are required for ballot eligibility.

Does the proposal apply to ChatGPT?

Common Sense Media’s January 9, 2026 release says the measure covers AI chatbots and systems that simulate conversation, “such as ChatGPT.” Critics, however, argue some definitions and exemptions in the initiative text may narrow how broadly it applies in practice.

Has OpenAI responded by withdrawing support?

No public source reviewed here shows that OpenAI has withdrawn support. The public announcement from January 9, 2026 backing the measure remains available, and the initiative remains in circulation in California’s ballot process.


Disclaimer: This article is for informational purposes only and is not legal advice. Ballot initiatives, regulatory interpretations, and corporate structures can change; readers should review official California filings and source documents independently.

Written by
Elizabeth Torres


