

As utilities across the U.S. grapple with the rising electricity demands of AI data centers emerging in their service territories, integrating these large loads into the grid requires clear guiding principles. Utilities aim to avoid shifting unfair costs onto other ratepayers, to maintain grid reliability, and to preserve capacity for broader economic growth.
Arizona Public Service (APS) has committed to serving 4.7 gigawatts (GW) of new large customer load over the next decade, according to Jordan Tillinghast, manager of the utility’s data center strategy. Of that 4.7 GW, two-thirds comes from data centers.
For perspective, the utility’s current peak load is 8.2 GW, so the committed new load amounts to well over half of today’s peak.
“I expect us to break our peak this year,” said Tillinghast. “That’s mostly just due to large customer data center load growth. We’ll probably break it every year for the foreseeable future.”
Tillinghast spoke on a DTECH Data Centers and AI panel last week about rate design and interconnection considerations for data centers. Arizona is arguably the hottest U.S. data center market outside of Northern Virginia. APS currently has at least 10 GW of pending interconnection requests from data centers – and counting, Tillinghast said.
Panelists noted that traditional utility planning frameworks were never designed for the kind of multi-gigawatt, fast-ramping, load-intensive growth now being driven by hyperscale data centers.
“We’re in a fundamentally different planning environment now,” said Tillinghast. “We’ve always had to build ahead of need, but the risk of getting it wrong now is much larger.”
The sheer size of data center loads, and how those loads behave on the grid, have become a reliability concern for system planners. Panelists referenced a July 2024 incident in the Eastern Interconnection where a transmission line fault led to the unexpected, simultaneous loss of approximately 1,500 MW of data center load, disconnected not by utility action but by customer-side protections.
The event underscored that system reliability is at risk not only from large generation loss but also from sudden, unanticipated large load losses. These events can cause frequency and voltage fluctuations, requiring operational intervention even if they don’t immediately threaten grid stability.
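To see why a step change of that size demands operator attention, a back-of-envelope estimate using the textbook swing-equation approximation is enough. The Python sketch below is illustrative only: the inertia constant and synchronized system capacity are assumed values, not figures cited by the panel.

```python
# Rough estimate of the initial frequency impact of a sudden load loss,
# using the aggregate swing-equation approximation:
#   df/dt = (delta_P / S_sys) * f0 / (2 * H)
# All system parameters below are illustrative assumptions.

F0 = 60.0          # nominal frequency (Hz)
H = 4.0            # assumed average system inertia constant (s)
S_SYS = 500_000.0  # assumed synchronized capacity of the interconnection (MVA)

def rocof_hz_per_s(delta_p_mw: float) -> float:
    """Initial rate of change of frequency for a sudden power imbalance.

    A load *loss* leaves generation exceeding demand, so frequency
    initially rises rather than falls.
    """
    return (delta_p_mw / S_SYS) * F0 / (2 * H)

if __name__ == "__main__":
    # The ~1,500 MW simultaneous data center trip described above.
    print(f"Initial RoCoF: {rocof_hz_per_s(1500):+.4f} Hz/s")
```

Under these assumptions the deviation is modest, roughly +0.02 Hz/s, which matches the panel’s point: the event perturbs frequency enough to require operator response without immediately threatening stability.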
As a result, utilities are updating interconnection rules as data center loads surge. Panelists pointed to Southern Company’s draft guidelines, which would require developers to submit validated load models, the tools planners need to understand how fast-acting loads respond to grid disturbances.
“Data centers can ramp, as in, change their demand in seconds or less, very quickly,” said Jack Gibfried, an engineer of power systems modeling and analysis at the North American Electric Reliability Corporation (NERC). “If you did that in your house, not a big deal. Your refrigerator turns off, not a big deal. But 1,000 megawatts, that’s really where it starts to matter.”
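A minimal sketch suggests what a validated load model of the kind described above might capture: a large facility whose protection settings disconnect it when voltage sags below a ride-through threshold, which is the customer-side behavior behind the July 2024 event. The thresholds, timings, and function names here are assumptions for illustration, not Southern Company’s actual draft requirements.

```python
# Simplified data center load model with an undervoltage trip.
# Settings are illustrative assumptions, not any utility's requirements.

def facility_online(voltage_pu_trace: list[float], dt_s: float = 0.01,
                    trip_voltage_pu: float = 0.7,
                    trip_delay_s: float = 0.05) -> list[bool]:
    """Return the facility's connection state over a voltage trace.

    The facility trips (and stays offline) once voltage remains below
    trip_voltage_pu for longer than trip_delay_s -- the protective
    behavior that can drop hundreds of MW at once during a fault.
    """
    online = True
    below_since = None
    states = []
    for step, v in enumerate(voltage_pu_trace):
        t = step * dt_s
        if v < trip_voltage_pu:
            below_since = t if below_since is None else below_since
            if online and (t - below_since) >= trip_delay_s:
                online = False  # protection operates; load is lost instantly
        else:
            below_since = None
        states.append(online)
    return states

# A 100 ms voltage sag to 0.6 pu trips the facility after 50 ms.
sag = [1.0] * 10 + [0.6] * 10 + [1.0] * 10
print(facility_online(sag))
```

Aggregated across a cluster of facilities with similar settings, a model like this lets planners predict whether a single transmission fault could disconnect gigawatt-scale load simultaneously.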
One of the main concerns for utilities is stranded costs: investing in new generation or transmission assets based on expected load growth, only to find the demand never materializes. That risk, Tillinghast said, is driving a new wave of contract structures and rate design, especially in states like Arizona where data center growth is high.
To protect against financial risk and ensure fairness for its ratepayers, Tillinghast said APS introduced “load commitment agreements.” These contracts require data centers to guarantee energy use levels, meet minimum demand and energy thresholds, demonstrate creditworthiness and commit to long-term usage timelines.
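The mechanics of such an agreement can be sketched as a minimum-take billing rule. APS’s actual contract terms are not detailed here, so the class name, rates, and thresholds below are hypothetical, but they show how guaranteed demand and energy floors keep a slow-ramping facility from shifting fixed costs onto other ratepayers.

```python
# Hypothetical sketch of "minimum take" billing under a load commitment
# agreement. All names, rates, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class LoadCommitment:
    contracted_demand_mw: float  # demand level the customer guarantees
    min_load_factor: float       # share of contracted energy that must be paid for
    demand_charge: float         # $ per MW-month
    energy_charge: float         # $ per MWh

    def monthly_bill(self, metered_peak_mw: float, metered_energy_mwh: float,
                     hours_in_month: float = 730.0) -> float:
        # Demand is billed at the greater of metered peak or the contracted
        # level, so under-building does not strand the utility's investment.
        billed_demand = max(metered_peak_mw, self.contracted_demand_mw)
        # Energy is billed at no less than the contracted minimum take.
        min_energy = self.contracted_demand_mw * hours_in_month * self.min_load_factor
        billed_energy = max(metered_energy_mwh, min_energy)
        return billed_demand * self.demand_charge + billed_energy * self.energy_charge

deal = LoadCommitment(contracted_demand_mw=300, min_load_factor=0.8,
                      demand_charge=12_000.0, energy_charge=45.0)
# A facility ramping up slower than planned still pays for its commitment.
print(f"${deal.monthly_bill(metered_peak_mw=120, metered_energy_mwh=60_000):,.0f}")
```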
“We’re trying to help [data centers] make this work,” said Tillinghast. “But at the minimum, like we do need to make sure that we’re just protected, in case.”