Recent updates 🗞️

📈 S&P 500 and NASDAQ at 6,800 and 25,500 (ATH)
🚢 US and China announced trade agreement following Trump’s Asia visit
🇮🇳 Indian women’s cricket team wins its first World Cup
🎥 Watching Allen Iverson’s documentary on Prime
📚 Reading “Empire of AI” by Karen Hao
YTD Portfolio Performance: +60.58%

I’ve been thinking a lot lately about the collision happening right now between two massive forces in our economy. On one side, we have artificial intelligence scaling at a pace that would have seemed absurd just a few years ago. On the other, we have an electrical grid that was designed for a different era, suddenly being asked to do things it was never quite built for. But here’s what caught me off guard when I started digging into this: the story we’ve been telling ourselves about AI’s energy consumption turns out to be full of holes.

The Efficiency Story Today

Recent comprehensive analysis reveals something counterintuitive. Right now, at this very moment, the energy footprint of AI might be the smallest it will ever be. When you look at the numbers for a single chatbot query, we’re talking about roughly the energy it takes to run a microwave for anywhere from a fraction of a second to a few seconds. That’s not nothing, but it’s also not the apocalyptic scenario many headlines suggest. The most popular platforms use about as much energy annually as a mid-sized town. For something that millions of people interact with daily, that’s actually remarkable from an efficiency standpoint.
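
To make that comparison concrete, here’s a rough back-of-envelope sketch. The per-query figure and the microwave wattage below are my own illustrative assumptions, not numbers from the analysis, so treat this as an order-of-magnitude check rather than a measurement.

```python
# Back-of-envelope: how long would a microwave need to run to use as much
# energy as one chatbot query? Both figures are assumed for illustration.
QUERY_ENERGY_WH = 0.3       # assumed energy per chatbot query, in watt-hours
MICROWAVE_POWER_W = 1100    # assumed power draw of a typical microwave, in watts

query_energy_joules = QUERY_ENERGY_WH * 3600      # 1 Wh = 3,600 joules
microwave_seconds = query_energy_joules / MICROWAVE_POWER_W

print(f"One query is roughly {microwave_seconds:.1f} seconds of microwave time")
# With these assumptions, about one second, squarely within the range above.
```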

The Coming Surge

But before we start celebrating, here’s the part that keeps me up at night. This efficiency story, this relatively modest current footprint, is about to change dramatically. Data centers consumed about 4% of total U.S. electricity in 2024. Projections suggest that by 2030, global data center electricity demand could more than double to around 945 terawatt-hours, slightly more than Japan’s entire annual electricity consumption today. In states like Virginia, data centers already consume roughly a quarter of the total electricity supply.

What strikes me about these numbers isn’t just their magnitude. It’s the uncertainty embedded in them. The firms building and deploying these AI systems keep exact figures about energy consumption closely guarded. This makes it nearly impossible for grid operators, regulators, and utility planners to prepare adequately for what’s coming. We’re trying to build infrastructure for a future we can’t quite see clearly.

The Carbon Problem

The carbon intensity dimension makes the picture even more complicated. It’s not just about how much energy AI uses—it’s about what kind of energy, and when. The carbon intensity of electricity used by data centers runs about 48% higher than the U.S. average. This isn’t an accident. It’s a function of where data centers are located, when they draw power, and what generation sources are available to meet that demand. Some estimates suggest that about 60% of the increasing electricity demands from data centers will be met by burning fossil fuels, potentially adding around 220 million tons to global carbon emissions. Data centers are trending toward using dirtier, more carbon-intensive forms of energy like gas to fill immediate needs precisely because the demand is growing so quickly.

Unlocking Hidden Capacity

What I find fascinating about this moment is how it exposes the gap between our infrastructure’s theoretical capacity and what we’re actually using. Recent research has explored something that seems almost counterintuitive at first: what if the solution to accommodating these massive new loads isn’t just building more generation and transmission capacity, but rather using what we already have more intelligently?

The U.S. power system runs at around 53% average utilization, with peaks that occur only during relatively brief periods of the year. We’ve built enough infrastructure to handle extreme weather events and absolute peak demand, but most of the time that capacity sits underutilized. This creates what researchers call “curtailment-enabled headroom”: the ability to accommodate new loads if they can be flexible enough to avoid contributing to system peaks. The analysis suggests that with relatively modest curtailment rates, somewhere around a quarter of a percent to one percent of the new load’s energy annually, the existing U.S. power system could accommodate between 76 and 126 gigawatts of new load without requiring major generation and transmission expansion.
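
To get a feel for how that headroom arithmetic works, here’s a minimal sketch, assuming a made-up hourly load curve and system capacity rather than the researchers’ actual data. It asks how much constant new load could be added before curtailing it around system peaks would blow past a given annual curtailment budget.

```python
import numpy as np

# Toy "curtailment-enabled headroom" estimate: how much flat new load fits if it
# agrees to curtail whenever total demand would exceed system capacity?
# The load shape and capacity below are synthetic, for illustration only.
HOURS = 8760
rng = np.random.default_rng(0)
t = np.arange(HOURS)
load = (70                                       # baseline demand (GW)
        + 15 * np.sin(2 * np.pi * t / HOURS)     # seasonal swing
        + 10 * np.sin(2 * np.pi * t / 24)        # daily swing
        + rng.normal(0, 3, HOURS))               # weather noise
capacity = 105.0                                 # GW the system can reliably serve

def curtailment_fraction(new_load_gw: float) -> float:
    """Share of the new load's annual energy that must be curtailed."""
    headroom = np.clip(capacity - load, 0, None)   # spare GW in each hour
    served = np.minimum(new_load_gw, headroom)     # new load actually served
    curtailed = new_load_gw * HOURS - served.sum()
    return curtailed / (new_load_gw * HOURS)

# Largest flat new load that stays within a 0.5% annual curtailment budget.
candidates = np.arange(0.5, 30.0, 0.5)
feasible = [gw for gw in candidates if curtailment_fraction(gw) <= 0.005]
print(f"Headroom at 0.5% curtailment: about {max(feasible):.1f} GW of flexible load")
```

The published analysis works from measured hourly load data rather than a toy curve, but the core idea is the same: the binding constraint is a handful of peak hours, and a load that can back off during those hours barely strains the system.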

Making Flexibility Work

When I first encountered this flexibility concept, I wondered what it actually means for data centers, which we tend to think of as needing constant, reliable power. It turns out the picture is more nuanced than that. The carbon intensity of electricity varies significantly throughout the day and across seasons, and not every AI workload needs to run in its entirety at the same moment. Engineers are starting to leverage this by splitting computing operations so that some are performed later, when more of the electricity fed into the grid comes from renewable sources like solar and wind. This temporal flexibility can go a long way toward reducing a data center’s carbon footprint without necessarily reducing total computational output.
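
Here’s a minimal sketch of what that temporal shifting might look like in practice. The hourly carbon-intensity forecast, job names, and scheduling policy are all invented for illustration; a real system would pull its forecast from a grid or carbon-data service rather than hard-coding one.

```python
from dataclasses import dataclass

# Toy carbon-aware scheduler: run latency-sensitive work immediately, and defer
# flexible batch work to the hours with the lowest forecast grid carbon intensity.
# The forecast values (gCO2 per kWh) and jobs below are assumptions for illustration.
forecast = [520, 510, 500, 495, 480, 450, 400, 340, 280, 230, 200, 190,
            195, 210, 250, 310, 380, 440, 480, 500, 510, 515, 520, 525]

@dataclass
class Job:
    name: str
    hours_needed: int   # one-hour slots the job requires
    deferrable: bool    # batch jobs can wait; interactive ones cannot

jobs = [
    Job("chat-inference", 1, deferrable=False),
    Job("nightly-finetune", 4, deferrable=True),
    Job("embedding-backfill", 3, deferrable=True),
]

available = sorted(range(24), key=lambda h: forecast[h])  # cleanest hours first
for job in jobs:
    if job.deferrable:
        slots, available = available[:job.hours_needed], available[job.hours_needed:]
        avg = sum(forecast[h] for h in slots) / len(slots)
        print(f"{job.name}: schedule for hours {sorted(slots)} (~{avg:.0f} gCO2/kWh)")
    else:
        print(f"{job.name}: run now, at hour 0 ({forecast[0]} gCO2/kWh)")
```

Nothing here is sophisticated; the point is simply that a few lines of policy can steer deferrable work toward the solar-heavy midday hours without touching the interactive workloads at all.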

There are also opportunities for spatial flexibility: distributing computational work across multiple data centers in different geographic locations to take advantage of regional variations in grid conditions and renewable availability. Some facilities are exploring onsite power generation and storage, from batteries and natural gas generators to renewable energy installations, that can help buffer their draw from the grid during peak periods. Researchers are building flexibility models that account for the differing energy demands of training a model versus deploying it, trying to uncover the best strategies for scheduling and streamlining computing operations. There are even software tools in development that treat carbon intensity as a scheduling parameter, recognizing peak energy periods and automatically making adjustments. Experiments with these approaches have shown potential for reducing carbon intensity by 80% to 90% for different types of operations.
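
And a similarly hand-wavy sketch of the spatial version of the idea: route a movable batch workload to whichever region currently has the cleanest grid and enough spare capacity. The region names, intensity values, and capacity figures are made up for illustration.

```python
# Toy spatial routing for a movable workload. All values are illustrative assumptions.
regions = {
    # name: (grid carbon intensity in gCO2/kWh, spare capacity in MW)
    "us-east":    (430, 12),
    "us-midwest": (520, 40),
    "nordics":    (60,  15),
    "us-west":    (250, 25),
}

def place_workload(demand_mw: float) -> str:
    """Pick the lowest-carbon region that still has room for the job."""
    feasible = {name: ci for name, (ci, cap) in regions.items() if cap >= demand_mw}
    if not feasible:
        raise ValueError("no region has enough spare capacity")
    return min(feasible, key=feasible.get)

print(place_workload(10))   # -> "nordics": cleanest grid with room to spare
print(place_workload(30))   # -> "us-midwest": the only region with 30 MW available
```

In practice the same decision would also have to weigh latency, data residency, and network costs, so real systems treat carbon as one input among several rather than the only objective.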

Who Pays the Bill

Here’s something that doesn’t get enough attention in the technical discussions: who pays for all this? In areas with high concentrations of data centers, electricity costs have increased dramatically—in some cases, wholesale prices are up to 267% higher than they were five years ago. These costs don’t just affect the data centers themselves; they ripple through to residential and commercial customers in those regions. The deals that utility companies are making with data centers will likely transfer many costs of the AI revolution to the rest of us, in the form of higher electricity bills.

The flexibility approach offers a potential path to mitigate these concerns by reducing the need for costly upgrades in the first place. What’s emerging in some jurisdictions is essentially a trade: flexibility in exchange for faster interconnection and lower infrastructure costs. If a data center commits to curtailing its load during peak stress periods, it may avoid triggering grid upgrades altogether, and regulators and utilities are starting to explore agreements built around exactly that kind of commitment, to everyone’s benefit.

The Scaling Challenge

There’s a paradox at the heart of this whole situation that I keep coming back to. Right now, the energy efficiency of current AI usage is actually impressive. But everything about where AI is headed suggests that today’s modest per-query footprint won’t hold as usage scales. Video generation, for instance, consumes over 700 times the energy needed to create a high-quality image. As AI moves toward more personalized experiences, gains the ability to reason and solve complex problems, and gets embedded into every application we use, the energy demands multiply.

Finding Our Path Forward

So where does all this leave us? I think the honest answer is: at a crossroads. The research suggests there’s significant untapped potential in our existing infrastructure if we can match flexible capabilities with the right regulatory frameworks and market structures. This isn’t about abandoning the need for new generation and transmission—we’ll certainly need both. But it’s about recognizing that flexibility could be a powerful tool in the toolkit.

What gives me hope is that the tools and technologies for enabling this flexibility—sophisticated power management systems, distributed energy resources, carbon-aware software, advanced computational scheduling—are maturing rapidly. What concerns me is whether we have the transparency, the regulatory frameworks, and the will to deploy them effectively before we lock ourselves into more expensive and carbon-intensive paths.

The firms building these AI systems are racing to scale, driven by competitive pressures and the transformative potential of the technology. Grid operators and regulators are trying to keep pace, balancing reliability, affordability, and environmental concerns. The conversation has evolved from “how big is AI’s energy footprint?” to “how do we make AI’s inevitable growth compatible with our energy and climate goals?” That shift in framing could make all the difference in how this story unfolds over the next decade. But it requires transparency, flexibility, and smarter planning—all of which remain frustratingly elusive in today’s landscape.

Cheers 🥂