Good morning,
Hope you had a great weekend!
This week, we explore the environmental impact of artificial intelligence. We also examine how AI is reshaping the digital landscape, from how we search for information to how we build and engage online.
📣 Announcements
💸 Economy
🤖 Technology
- Karen
Reducing the Carbon Cost of Intelligence
AI has grown from a niche research tool into an everyday utility. It answers your homework questions, recommends playlists, and writes code. But powering that friendly chatbot carries a serious environmental cost: AI consumes a lot of electricity, and that electricity isn't always clean.
Training advanced AI models, including image generation systems like DALL·E, requires immense computational resources. A study by the University of Massachusetts Amherst found that training a single large AI model can emit over 284 tons of CO₂, roughly the lifetime emissions of five average cars. And once trained, running these models still requires energy-intensive data centers operating 24/7. As demand for AI grows rapidly, so does the pressure on the energy grid and the planet.
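To see where estimates like these come from, emissions can be approximated as GPU power draw × training hours × grid carbon intensity. Here is a minimal back-of-envelope sketch; every input figure below is an illustrative assumption, not a number from the Amherst study:

```python
# Rough back-of-envelope estimate of training emissions.
# All inputs are illustrative assumptions, not figures from
# the UMass Amherst study.

def training_emissions_kg(gpu_count, watts_per_gpu, hours, kg_co2_per_kwh):
    """Estimate CO2 in kg: energy used (kWh) times grid carbon intensity."""
    kwh = gpu_count * watts_per_gpu * hours / 1000.0
    return kwh * kg_co2_per_kwh

# Hypothetical run: 512 GPUs at 300 W for 30 days on a 0.4 kg CO2/kWh grid.
co2_kg = training_emissions_kg(512, 300, 24 * 30, 0.4)
print(f"{co2_kg / 1000:.1f} metric tons of CO2")  # → 44.2 metric tons of CO2
```

The real studies use measured energy draw and region-specific grid data, but the basic multiplication is the same, which is why training location and grid mix matter so much.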
This surge in AI adoption is creating a ripple effect across energy markets and infrastructure planning. According to The Wall Street Journal, utilities in high-growth states like Arizona and Georgia are now adjusting long-term forecasts because AI-driven data centers are consuming more electricity than entire cities. Goldman Sachs estimates that roughly $720 billion in grid spending may be needed globally by 2030 just to keep pace with AI-driven energy demand. Without coordinated investment in renewables and grid capacity, we face not only rising carbon emissions but also costly inefficiencies that ripple through the broader economy.
The good news? The AI industry isn't ignoring the problem; it's actively scouting for solutions. Google has committed to running all of its data centers on 24/7 carbon-free energy by 2030, and Amazon aims to be fully renewable-powered by 2025. There's also exciting progress on the hardware side: researchers at Stanford and MIT are developing low-power AI chips and more efficient algorithms that significantly reduce the energy footprint of AI.
However, technical fixes aren't enough on their own. Making AI truly sustainable will take collaboration between tech companies, energy providers, regulators, and researchers. Policymakers will need to create incentives for carbon-efficient AI development, and transparency standards will be crucial for verifying that sustainable practices are actually followed. Some researchers are now pushing for "model emission labeling," so users can see how much energy a given AI system consumes, kind of like a nutrition label, but for AI chatbots.
AI isn’t going away, and it shouldn’t have to, but its growth doesn’t need to come at the expense of the environment. With smarter infrastructure, greener tech, and a shared sense of responsibility, we can build AI systems that are as sustainable as they are helpful.
- Mihika B.
Answer engines are quietly rewriting the web's traffic map, and it's like watching the internet update itself in real time. New Similarweb data shows AI platforms sent 1.13B referrals in June, a 357% year-over-year increase. That's still far less than Google, but it's now large enough that publishers and product teams are optimizing for AI visibility: getting LLMs to cite and link their pages in summaries, structured answers, and FAQ-style results. Discovery is shifting from search queries to synthesized answers that hand users a few high-intent links, so if you care about growth, design content for extraction and attribution. Even small tweaks, like more links, clear citations, and FAQs, can help your pages surface in AI answer cards more often.
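One common way to make FAQ-style pages machine-extractable is schema.org FAQPage markup, which exposes question/answer pairs in a format crawlers (and, plausibly, answer engines) can parse cleanly. A minimal sketch; the questions and answers here are made up for illustration:

```python
import json

# Build schema.org FAQPage JSON-LD so machines can extract
# question/answer pairs cleanly. Content is illustrative only.
faqs = [
    ("What is AI visibility?",
     "How often LLM-powered answer engines cite and link to your pages."),
    ("How do I improve it?",
     "Use clear headings, concise answers, and explicit citations."),
]

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

# Embed the result in a page inside <script type="application/ld+json">.
print(json.dumps(faq_jsonld, indent=2))
```

Whether a given answer engine rewards this markup is not guaranteed, but it costs little and the same structure already powers rich results in traditional search.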
The White House released America's AI Action Plan, a 26-page roadmap with 90+ forthcoming actions and three executive orders, aiming to accelerate progress with a deregulatory bent on data-center build-out, procurement, and exports. Businesses view it as a green light for infrastructure and "hands-off" innovation, but critics warn of thin guardrails. Document what data and models you use, test privacy and safety before you ship, and favor open tools so you can adapt when rules change.
Security experts describe a wave of AI-generated bounty reports that fall apart under scrutiny. cURL's Daniel Stenberg calls it "death by a thousand slops," noting that roughly 20% of 2025 submissions showed AI fingerprints while the valid-report rate slid toward ~5%, a triage tax few community projects can afford. If you're running a bounty program, adding a bit of friendly friction (captchas, clear PoC templates) can save everyone headaches later.
Meanwhile, the AI data stack is consolidating under pressure. Databricks' purchase of Tabular (reported near $2B) and Snowflake's earlier acquisition of Neeva point to a future with fewer one-off tools that can't talk to each other. Pick tools that play well with others, like open table formats, easy export for embeddings and evals, and neutral feature stores, so switching later stays easy.
On the consumer front, AI companions are no longer niche, which raises design ethics questions you can't ignore. A new Common Sense Media study reported teens trying AI "friends," with many discussing sensitive issues with bots and some rating those chats as satisfying as talking to peers. Builders of this technology should avoid manipulative phrasing, label the bot clearly, and focus on features that supplement rather than substitute for real relationships. Bottom line: tech should lift people up, not box them in. Let's build accordingly!
- Connor
Feel free to elaborate on any of your thoughts through this Anonymous Feedback Form.
All the best,
Mihika Bhattacharjee, Editorial Intern
Connor H. Wong, Editorial Intern
Karen Harrison, Newsletter Director
.
.
.
"Live as if you were to die tomorrow. Learn as if you were to live forever." — Mahatma Gandhi