Good morning Bruin Bots 🤖
Welcome back!
To UC students on the quarter system, good luck with finals!
We are diving into the slowdown in AI development that OpenAI is facing. We will also discuss the ethics of companies applying AI technology to weapons of war.
Today’s rundown👇
💸 Economy
🧭 Ethics
-Karen
An AI Slowdown and OpenAI
Investors have embraced AI with open arms, drawn by its rapid improvement in recent years. With fast progress fueling high expectations, companies have felt pressure to build on the hype and keep the trend going. Nevertheless, researchers in both academia and industry have warned that an AI slowdown is imminent.
The key reason for the slowdown is that models face limitations that are becoming far harder to overcome. For the past few years, most obstacles could be solved simply by drastically increasing the size of AI models, but this scale-based approach may no longer be as effective. As a result, the current pace of AI development may change significantly: “We have applications like OpenAI’s ChatGPT because of scaling laws. If that’s no longer true, then the future of AI development will look a lot different.”
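For context on what “scaling laws” means here: empirical work such as Kaplan et al. (2020) found that a model’s test loss tends to fall as a power law in its parameter count, data, and compute. Below is a minimal sketch of the parameter-count form; the constants are rough illustrative values from that paper, not exact figures.

% Rough power-law form of a neural scaling law (after Kaplan et al., 2020).
% L = test loss, N = number of parameters; N_c and alpha_N are empirically
% fitted constants -- the values shown are approximate illustrations.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad N_c \approx 8.8 \times 10^{13}, \quad \alpha_N \approx 0.076
\]

The tiny exponent is the crux of the slowdown argument: because loss falls so slowly in N, each further gain demands an outsized jump in model size and compute, which is why simply scaling up may stop paying off.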
Fortunately, experts do not expect this to shrink AI’s impact in the coming years. Most believe that AI has already advanced enough to have a large impact; it is now a question of effectively applying these systems to commercial work. The internet, for example, did not become profitable because its underlying technology kept improving rapidly; it was the later development of commercial applications that drove significant economic growth.
For OpenAI, the slowdown is set to hit particularly hard, with $5 billion in expected losses over the coming year. In response, OpenAI is working to become more attractive to investors by restructuring itself into a for-profit enterprise.
Elon Musk, a co-founder of OpenAI who has since left, has sued to block this change, arguing that it betrays OpenAI’s founding principles and its commitment to safety. He also claims that OpenAI prevented investors from putting money into rival startups such as his own xAI.
Despite slowing progress and such pushback, OpenAI CEO Sam Altman remains optimistic about the future of AI. At the recent New York Times DealBook conference, Altman said he believes AI could reach superintelligence, matching and then surpassing human capability, sooner than most people expect. While he admits the disruption may take “a little longer than people expect,” he argues that it will also be “more intense than people think.”
While it’s difficult to be sure of AI’s future, expecting development to continue at its current rate is likely unrealistic. Still, a slower development cycle may give commercial applications a chance to catch up, expanding productivity gains and boosting the economy.
-Tobin
When the Military Comes to Town
For all of Silicon Valley’s talk about the importance of community and bringing people together, the development of novel technologies has very often been directly tied to military funding and motives. GPS, lithium batteries, and even the internet were all driven forward by military involvement.
Companies such as Google have fired workers for protesting military contracts in the past, so this entanglement is by no means new. However, given the increasing risk of catastrophic destruction posed by AI, it seems particularly important that we truly ask: who is this AI being made for?
Companies such as OpenAI have reportedly expanded their deals with the US and foreign governments, including the United Arab Emirates, to integrate artificial intelligence into existing and novel tools of war. Unfortunately, many of the most powerful in Silicon Valley have turned a blind eye to the human destruction these tools create, eager to expand their portfolios even at the cost of people’s lives. Companies as a whole have refused to listen to their workers; Google, for example, fired around 50 workers this year for protesting “Project Nimbus, a contract to provide the Israeli government and its military with cloud computing services.”
The military-industrial complex’s expansion into AI has been explicitly set up to enable the weaponization of surveillance, the creation of semiautonomous weapons, and the development of entirely novel tools of war.
Given such applications, it is the duty of both workers at these firms and the general public to question what benefits these technologies actually bring to the general populace. Why does broad civilian application of a novel technology, as with the internet or lithium batteries, always seem to come second? More importantly, as tools grow more powerful, do developers have a greater duty to build for their communities first?
At such a defining stage in AI’s trajectory, it is key that we not get caught up in the myopia of short development cycles and our portfolios. At every step of AI development, we need to ask what these developments do for our communities. Most importantly, there needs to be greater scrutiny of how the military becomes involved with AI technologies, greater transparency from companies about their development, and greater discourse about whether this involvement keeps us safer or merely heightens the risk of conflict.
-Tobin
Feel free to elaborate on any of your thoughts through this Anonymous Feedback Form.
All the best,
Tobin Wilson, Editorial Intern
Karen Harrison, Newsletter Manager
.
.
.
"Ability may get you to the top, but it takes character to keep you there." - John Wooden