Good morning,
In this week’s edition, we break down OpenAI’s surprising shift back toward nonprofit control as pressure mounts to balance AI progress with public accountability, leaving investors like Microsoft to navigate new trade-offs.
Also, we examine why the newest AI models are hallucinating more than ever, challenging the belief that bigger, smarter models naturally deliver better results.
Have a great week!
📣 Announcements
💸 Economy
🧭 Ethics
- Karen
OpenAI Backtracks
OpenAI has quietly but significantly scaled back its plan to shift fully to a conventional for-profit model, opting instead to become a public benefit corporation (PBC) that remains under the control of its original nonprofit parent. The proposed structure is a retreat from OpenAI’s earlier plan to hand power over to investors, and it is likely a result of the significant tension between rapid AI advancement and societal responsibility.
OpenAI began as a nonprofit designed to resolve this tension by ensuring safe and ethical AI development. Its first break from that path came in 2019, when it created a for-profit subsidiary to raise venture capital. Last year, OpenAI planned to escalate this strategy by reorganizing around a fully for-profit structure, a move that was met with concern, litigation, and backlash from prominent leaders in the AI space. Some opponents, such as Elon Musk, took OpenAI to court, and Musk plans to continue his suit against reorganization even under the newly proposed structure.
OpenAI hopes the PBC model will give it more flexibility to raise capital while keeping societal benefit at the center of its mission. The structure is also a compromise with opponents: it eases some of the legal pressure while providing a clear path to further funding deals, such as the one OpenAI recently struck with SoftBank.
The decision to maintain nonprofit control under the PBC model underscores the growing public and regulatory scrutiny surrounding AI. Achieving public benefit from advanced AI models requires careful planning, which many critics argued would be absent in a fully investor-led OpenAI, since the structural incentives to prioritize safety would no longer exist. Under the PBC structure, the nonprofit board retains veto power over the most consequential decisions.
One aspect of the restructuring that surprised some investors was OpenAI’s announcement that it will share a smaller fraction of its revenue with major backer Microsoft, likely signaling a long-term decrease in OpenAI’s reliance on the company. Even so, investors such as Microsoft are not simply walking away: the planned removal of profit caps and improved financial terms will significantly benefit OpenAI’s backers, who stand to reap the rewards of new profit streams.
As a whole, the restructuring suggests that tech companies may cave to pressure from critics who argue that AI is too important to be left solely to market forces. With further AI breakthroughs on the horizon, even the most ambitious tech companies will be forced to reckon with the social and ethical risks of AI development despite their profit-seeking tendencies.
-Tobin
AI Gets More Powerful, But Hallucinations Get Worse?
For years, AI developers have promised that hallucinations, the term for chatbots inventing false or misleading information, would decrease as systems became more advanced. The promise was that better models, more data, and more sophisticated training methods would iron out the errors. Today, that promise rings hollow: as AI models have become more powerful, the hallucination problem has gotten surprisingly worse.
The trend is clearest in the latest reasoning models from companies such as OpenAI, Google, and DeepSeek. These models were designed to be more logical and better at complex problem-solving, yet they hallucinate more often than their predecessors. According to benchmark evaluations, OpenAI’s newest model, o4-mini, hallucinates nearly 79% of the time on general fact-based questions, compared with 44% for the earlier o1 model. Even on more focused tasks, like answering questions about public figures, o4-mini’s hallucination rate was 48%, triple that of o1.
This reversal calls into question one of the foundational assumptions about AI progress: that better performance naturally brings greater reliability. That assumption shaped early AI roadmaps, investor pitches, and research strategies. Now it is unraveling.
Researchers have a few theories for why hallucinations are increasing, though the underlying causes remain murky. Large language models (LLMs) have no true ability to fact-check; they rely on probability to generate responses. When an LLM thinks through a problem, it produces a plausible-sounding answer with no built-in way to verify that the answer is grounded in truth. As these models grow larger and reason through more steps, the opportunities for errors multiply, and even small errors can snowball into a final answer that is entirely fabricated.
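To make that snowballing concrete, here is a rough, illustrative sketch, not drawn from any model’s actual numbers: if each reasoning step is treated as independently correct with some fixed probability, the chance that a long chain contains no error at all decays quickly as steps are added.

```python
# Illustrative only: assumes each reasoning step is independently "correct"
# with probability p_step, which real models do not guarantee.

def chain_reliability(p_step: float, n_steps: int) -> float:
    """Probability that every one of n independent steps is error-free."""
    return p_step ** n_steps

if __name__ == "__main__":
    p_step = 0.98  # hypothetical 98% per-step accuracy
    for n in (1, 5, 10, 30, 60):
        print(f"{n:>2} steps -> {chain_reliability(p_step, n):.0%} chance of a fully correct chain")
```

Even at a hypothetical 98% per-step accuracy, a 30-step chain comes out fully correct only about 55% of the time, which is one intuition for why longer reasoning chains can produce more fabricated answers, not fewer.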
OpenAI acknowledges this in its system cards for o3 and o4-mini (essentially a baseball card of statistics for an AI model): while overall reasoning skills get stronger, more steps in the thinking process mean more claims being made, and therefore more opportunities for hallucinations to slip in.
These risks become real as models are integrated into real-world systems. A chatbot that invents legal precedents could mislead lawyers, false statements from consumer-facing chatbots could mislead buyers about products, and a customer support assistant that misstates a return policy might escalate a minor complaint into a reputational risk. While hallucinations do not render AI useless, they do create an impediment to its adoption for widespread use.
As hallucinations grow, the industry seems to be at a crossroads. Raw model performance is no longer enough to win widespread appeal or drive breakthroughs in the AI space. Instead, the next breakthroughs will likely come from innovations in grounding these models in reality. Until then, anyone using AI to generate factual information will have to remain skeptical.
-Tobin
Feel free to elaborate on any of your thoughts through this Anonymous Feedback Form.
All the best,
Tobin Wilson, Editorial Intern
Karen Harrison, Newsletter Manager
.
.
.
“The mind is everything. What you think you become.” – Buddha