Good afternoon Bruin Bots 🤖
Happy Monday!
Welcome back to the Current Events newsletter.
If you're new here, thanks for joining our community!
In this newsletter, we provide a weekly deep dive into the latest AI developments worldwide.
Feel free to comment below with your thoughts on this edition!
Here's a rundown of what's inside today's letter 👇
📣 Announcements
💸 Economy
🧭 Ethics
-Karen
Announcements
Codeium Tech Talk
📣Calling UCLA students📣
Codeium, a Series C startup with a $1.25B valuation, is coming to UCLA this Wednesday, October 9th, for a tech talk, networking, and recruiting night.
Codeium’s engineers will talk about how Codeium works on a technical level, with the opportunity for people to ask questions. There will also be free food and merch!
You do not want to miss out!
Recruitment
Bruin AI sincerely thanks everyone who attended our info session and networking night. We're also incredibly grateful to everyone who applied for our Fall 2024 recruitment cycle. We are truly blown away by the sheer level of talent and excitement this year!
Invitations to coffee chats will be sent out by the end of today.
Intellectual Property Rights and AI
Behind the scenes, intellectual property rights (IPR) largely determine the direction of AI innovation. Unfortunately, the current regime may be unable to sustain strong AI innovation. Recent guidance from the Patent and Trademark Office (PTO) is making it harder to innovate: its requirement of human contribution (e.g., documenting every AI search throughout the invention process) complicates incentives for AI innovation. The issue is particularly litigious due to the subjective terms the PTO uses in denying patents where “the invention didn’t have a ‘significant contribution’ by a ‘natural person.’”
The devaluation of AI-assisted patents will discourage innovation and invite infringement suits, both of which threaten to stifle AI progress. Possibly worse, because a machine cannot claim inventorship over something it produces, its output falls into the public domain. This reduces incentives to invest in AI and leads companies simply not to disclose important innovations, opting instead to “seek protection [under] trade secret laws.”
One sector where these issues become particularly pertinent is medicine. The costs of drug discovery are often prohibitively high, which translates into high prices when drugs eventually reach the market. However, AI, which combines robust data sets with efficient computation, can reshape and optimize the drug discovery process, enabling novel drug designs and responses to novel diseases. COVID-19 demonstrated how new diseases complicate vaccine development, exemplifying the need for strong AI investment to prepare for future pandemics, which researchers warn will become more lethal due to urbanization and climate change.
The difficulties with AI intellectual property rights also extend to semiconductors, which have had a large economic and geopolitical impact in recent memory. AI applications in the semiconductor industry can optimize chip design, analyze data to predict quality issues, and save time, creating opportunities for creativity and innovation that exceed what humans can do alone. Unlocking this potential will be impossible without an overhaul of our IPR system, which currently allows patent trolls to target the semiconductor industry because corporations need tens of thousands of patents covering different aspects of semiconductor production. With this in mind, the ongoing failure of the US to build a burgeoning domestic semiconductor industry seems all too logical.
While some may argue that government subsidies and other innovation incentives, such as those in the CHIPS Act, will be enough, Elinor Hobbs, Distinguished Professor of International Business at Northwestern University, explains succinctly that the “point is not simply to reward any and all inventive efforts, but rather to provide incentives that direct inventive efforts based on the anticipated market value of inventions.” Thus, she concludes, more subsidies in the absence of IPR reform would fail to incentivize better innovation.
If the current path continues, it seems inevitable that US AI innovation in particular will be encumbered by outdated regulatory mechanisms.
-Tobin
Predatory Terms of Service?
Buried in thousands of pages of legal jargon, incomprehensible to the average person agreeing to a terms of service (TOS) they did not read, many corporations are making small changes that can massively expand their ability to surveil users. With the rise of generative AI, it seemed inevitable that large companies would tweak their TOS to include language that benefits their AI goals. The FTC warns that “[i]t may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”
In July, just an eight-word change to Google’s privacy policy allowed the company to begin using private information to train AI chatbots. This could help the company get ahead in the AI race, but this step forward was in no way unique to Google. Snapchat, for example, found itself sending warnings to users not to share confidential information with the application’s novel AI chatbot because the information “would be used in its training.”
The importance of private information can only be understood with some background on how generative AI is trained. There are, generally, two pools of information companies can draw from: public and private data. Public data includes easily accessible materials such as textbooks or articles, whereas private data includes sensitive information such as private texts and correspondence. Private data is protected by a patchwork of privacy laws that have made it difficult and risky to pursue; however, many companies face the prospect of running out of public data to train their AI models within just a few years. This has prompted the increasing pursuit of private data, which would unlock a vast new trove of material for models to train on.
The problem remains that most people are against their private information and conversations being used to train AI. For this reason, the small tweaks to a company’s TOS can hide important changes to how companies use data that users would prefer to remain private.
Adobe is another key example of a company that deceptively changed its terms of service, something especially pertinent given the recent addition of AI features to its programs. One of the most important issues raised with Adobe, and one likely to be raised with other corporations, concerns information under confidentiality agreements such as NDAs. Many professionals use services like Adobe's to store sensitive data, and such a change to the TOS could enter that sensitive information into an AI model's training data. Adobe has since denied training its AI on such data; however, the risk of a similar occurrence remains.
Whatever direction AI training data takes, it is critical that users remain vigilant about, and push back on, TOS changes that would allow vast corporate overreach in harvesting user data.
-Tobin
Feel free to elaborate on any of your thoughts through this Anonymous Feedback Form.
All the best,
Tobin Wilson, Editorial Intern
Karen Harrison, Newsletter Manager
.
.
.
"What we fear of doing most is usually what we most need to do." — Ralph Waldo Emerson