The AI Energy Economy — Part 3: Where the Grid Meets the Machine
The Connective and Last-Meter Layers of AI Electrification
Part 2 of this series focused on the companies building the physical backbone of AI electrification: turbines, transformers, substations, transmission lines, grid-scale batteries, and the construction required to deploy them. These firms profit whenever utilities and hyperscalers invest in additional electrical capacity, regardless of which energy source ultimately produces the power.
But as AI demand scales, another reality becomes clear: electricity systems don’t just need more hardware — they need coherence.
Power must be generated, transmitted, conditioned, distributed, stabilized, and controlled all at the same time. At AI scale, the seams between those layers become just as important as the layers themselves. A failure, bottleneck, or inefficiency at any one of these junctions can limit the usefulness of power everywhere else.
This section focuses on the companies that connect those layers together — from grid-level infrastructure down to the point inside facilities where electricity must be safely delivered and heat must be continuously removed for AI systems to operate at all. This is the layer where the grid stops being an abstract system and starts becoming a physical constraint on computation.
1. What “Torque” Means in This Series
Throughout this series, torque is used as a metaphor borrowed from mechanics to describe how strongly a company’s earnings and stock price respond when AI drives electricity demand higher.
Low-torque businesses — such as regulated utilities — benefit steadily as demand grows, but their upside is constrained by regulation, rate cases, and long planning cycles. High-torque businesses sit closer to the system’s breaking points. When infrastructure becomes constrained or demand accelerates unexpectedly, their products become urgently needed, orders surge, and earnings can re-rate quickly.
In short: torque increases as you move closer to the constraint.
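For readers who prefer to see the idea in numbers, the toy sketch below contrasts the two profiles. Every figure in it is invented purely for illustration; it is not a forecast, a valuation model, or a description of any company discussed in this series.

```python
# Toy illustration of "torque": how two hypothetical earnings profiles respond
# to electricity demand growth. All numbers are invented for illustration.

def low_torque_earnings(demand_growth: float) -> float:
    """Regulated-utility style: earnings track demand roughly one-for-one,
    with upside capped by rate cases and planning cycles (assumed ~5%/yr)."""
    return 1.0 + min(demand_growth, 0.05)

def high_torque_earnings(demand_growth: float, capacity_growth: float = 0.03) -> float:
    """Bottleneck-supplier style: little changes until demand outruns available
    capacity, then pricing and volumes re-rate sharply (convexity is assumed)."""
    excess = max(demand_growth - capacity_growth, 0.0)
    return 1.0 + demand_growth + 4.0 * excess

for growth in (0.01, 0.03, 0.05, 0.08):
    print(f"demand growth {growth:.0%}: "
          f"low-torque {low_torque_earnings(growth):.2f}x, "
          f"high-torque {high_torque_earnings(growth):.2f}x")
```

The two profiles look almost identical while demand stays inside planned capacity; the divergence appears only once demand outruns the constraint, which is exactly the regime AI load growth keeps creating.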
2. GE Vernova (GEV): The Bridge Company of AI Electrification
GE Vernova doesn’t belong in just one part of this series because it doesn’t operate at just one layer of the power system. It does not own power plants, and it does not sell electricity to customers. Instead, it supplies the equipment, technology, and systems that allow electricity to be generated, moved, and managed at scale.
In practical terms, GE Vernova builds much of the machinery that makes modern power systems possible. Its businesses include gas turbines used for firm and fast-ramping power, nuclear reactor technology and services through GE Hitachi, grid equipment such as transformers and substations, high-voltage transmission components and power electronics, and software that utilities use to monitor and balance increasingly complex power flows. If electricity is being produced, transmitted, or stabilized somewhere in the system, GE Vernova is often involved.
For readers new to the sector, it can help to think of GE Vernova as a system integrator for electricity. Utilities and grid operators don’t just need more power — they need many different pieces of equipment to work together reliably and continuously under rising stress. GE Vernova sits at the junction where generation, transmission, and grid control intersect.
That positioning matters enormously in the AI era. AI demand doesn’t stress one isolated component of the grid. It increases load on generation, tightens transmission corridors, raises congestion risk, and requires more sophisticated control to keep voltage and frequency stable. GE Vernova benefits whenever utilities add new generation, expand transmission, upgrade substations, or invest in software and power electronics to manage instability. In other words, it benefits from system-wide stress, not just from one specific technology winning.
Investment view:
GE Vernova is not cheap on a headline basis, and it is no longer an under-the-radar name. Investors are already recognizing its role in the electrification cycle. However, its valuation looks more reasonable when viewed against three factors: a large and growing order backlog, improving margins following the GE restructuring, and its unusually broad exposure across the entire power system rather than a single niche.
Unlike companies tied to one technology — such as only renewables, only nuclear, or only data centers — GE Vernova participates in nearly every major category of AI-driven grid investment. That breadth reduces dependence on any single policy outcome or energy source and increases the durability of earnings over a multi-year build-out cycle.
Bottom line:
GEV is a Buy as a cornerstone, system-wide electrification play. It may not offer the explosive torque of narrow bottleneck specialists, but its unmatched reach across generation, transmission, and grid intelligence makes it one of the most resilient and strategically important beneficiaries of AI-driven electricity demand.
3. Prysmian Group (PRYMY): The Arteries of the AI Grid
Prysmian focuses on one deceptively simple task: moving electricity from where it is generated to where it is needed. It is the global leader in high-voltage and HVDC (high-voltage direct current) cabling — the specialized cables required for long-distance transmission, inter-regional grid connections, offshore wind, underground routes, and virtually every major new HVDC project worldwide.
As AI data centers scale, electricity increasingly has to travel farther. Many of the most reliable power sources for AI — nuclear plants, large gas facilities, hydro, and offshore wind — are not located next to dense compute hubs. Power must be transmitted across regions, states, and sometimes entire countries to reach AI clusters. In many cases, the binding constraint is no longer how much electricity can be generated, but whether it can physically reach the data center at all.
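A simplified loss calculation helps explain why those long routes demand specialized high-voltage and HVDC cable rather than ordinary conductors. The power level, line resistance, and route length below are assumptions chosen for illustration, not figures from any real project.

```python
# Simplified resistive-loss arithmetic for a long DC transmission route.
# Power level, line resistance, and route length are assumed for illustration.

def loss_fraction(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """For a DC link: current I = P / V, losses = I^2 * R, returned as a share of P."""
    current_a = power_w / voltage_v
    return (current_a ** 2) * resistance_ohm / power_w

power_w = 1_000_000_000          # 1 GW delivered to a hypothetical AI cluster
resistance_ohm = 0.008 * 1_000   # ~0.008 ohm/km over an assumed 1,000 km route

for kv in (320, 525):            # common HVDC cable voltage classes
    print(f"{kv} kV link: ~{loss_fraction(power_w, kv * 1_000, resistance_ohm):.1%} "
          f"of the gigawatt is lost as heat in the conductor")
```

The arithmetic is deliberately crude, but it captures the point: pushing the same gigawatt at a higher voltage class cuts resistive losses sharply, which is why long routes are built at the highest practical voltages and why the cable itself becomes such demanding engineering.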
This is where Prysmian becomes critical. High-voltage transmission cables are not interchangeable commodities. They are capital-intensive, custom-engineered products designed for specific voltages, routes, and environmental conditions. Manufacturing capacity is limited, projects require years of planning and permitting, and installation — especially offshore or underground — is slow and complex. Once a transmission project is approved, cables often become the pacing item that determines how fast power can be delivered.
Because of this, Prysmian enjoys unusually strong scarcity dynamics. Utilities and grid operators cannot easily substitute another supplier at scale, and delays in cabling can stall entire multi-billion-dollar generation or data-center projects. That gives Prysmian pricing power, long-dated order backlogs, and strong visibility into future revenue — characteristics that are rare in industrial manufacturing.
Investment view:
Prysmian benefits from AI electrification through physical bottlenecks rather than incremental demand. As grids are forced to expand and interconnect to serve hyperscale AI loads, demand for high-voltage and HVDC cabling rises faster than manufacturing capacity. While the stock has appreciated alongside the electrification theme, its valuation is supported by backlog strength, constrained supply, and the long-cycle nature of transmission spending.
Bottom line:
PRYMY is a Buy for exposure to grid-level physical constraints. It is one of the clearest ways to invest in the reality that electricity must travel — and that, in the AI era, the ability to move power across regions is becoming just as scarce and valuable as the power itself.
4. nVent Electric (NVT): Where Power Becomes Usable
As electricity volumes grow, so does the complexity of distributing and protecting that power inside data centers, substations, and industrial facilities. That complexity is nVent’s opportunity.
nVent sits downstream of generation and transmission, capturing value as projects move from concept to execution. The company specializes in electrical enclosures, busbars, power-distribution systems, grounding, and protection hardware that allow large amounts of electricity to be delivered safely in confined spaces. These are the systems that take high-voltage power arriving at a facility and make it usable inside the building.
At AI scale, power density rises sharply. Far more electricity is pushed through smaller physical footprints, which increases heat, fault risk, and the cost of failure. You cannot simply “plug in” a gigawatt-scale AI data center. Power must be carefully stepped down, routed, contained, and protected as it moves toward racks packed with GPUs running continuously. Without robust internal electrical infrastructure, power becomes unsafe, unreliable, or unusable long before upstream generation or transmission capacity is fully utilized.
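To get a feel for the scale involved, consider the current required once power has been stepped down for distribution inside the building. The load and voltage levels in the sketch below are illustrative, not a description of any specific facility.

```python
import math

# Back-of-envelope: three-phase current needed to serve a given load at
# different distribution voltages. Load and voltage levels are illustrative.

def three_phase_current_a(power_w: float, line_voltage_v: float,
                          power_factor: float = 1.0) -> float:
    """Balanced three-phase load: I = P / (sqrt(3) * V_line * pf)."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)

load_w = 1_000_000  # a hypothetical 1 MW block of AI racks
for voltage_v in (13_800, 480, 415):  # a medium-voltage feed vs. low-voltage distribution
    print(f"{load_w / 1e6:.0f} MW at {voltage_v:,} V: "
          f"~{three_phase_current_a(load_w, voltage_v):,.0f} A")
```

Tens of amperes at the medium-voltage feed become well over a thousand amperes once power is stepped down toward the racks, which is why the busbars, enclosures, and protection hardware described above carry so much of the engineering burden.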
After divesting its thermal-management business, nVent is now a more focused electrical-infrastructure company centered on Systems Protection and Electrical Connections, with infrastructure-related end markets accounting for more than 40% of revenue. This shift concentrates the business directly on the parts of the electrification stack that are most stressed by AI-driven load growth.
Investment view:
nVent offers a compelling combination of structural growth and operational stability. Its products sit at a true constraint point inside facilities, where electrical safety, reliability, and uptime are non-negotiable. Unlike broader electrical suppliers, nVent benefits directly from rising power density rather than just higher volumes of projects.
From a valuation perspective, nVent stands out as less fully priced than many other AI-infrastructure beneficiaries. The company does not yet carry the same premium multiple as more widely recognized data-center names, despite operating closer to the physical breaking points where AI demand turns into urgent spending. Margins are improving, cyclicality is lower than traditional industrials, and growth is tied to multi-year infrastructure build-outs rather than short-term hype cycles.
Bottom line:
NVT is a Buy — a quietly compounding AI-infrastructure play with direct exposure to last-meter electrical constraints, improving profitability, and more attractive valuation than many higher-profile AI beneficiaries.
5. Vertiv (VRT): Keeping AI from Overheating
Vertiv specializes in cooling, thermal management, and power-conditioning systems for data centers. While electricity is what enables computation, heat is what ultimately limits it. Nearly all the power consumed by GPUs and accelerators is converted into heat, and AI workloads generate far more heat than traditional enterprise or cloud computing ever did.
In older data centers, air-cooling systems were usually sufficient. AI changes that equation. Modern AI racks can generate many times the heat of legacy servers, often pushing well beyond what conventional air-based designs can handle. That is why the industry is rapidly moving toward liquid cooling, immersion cooling, and tightly integrated power-and-cooling architectures — areas where Vertiv has become a core supplier.
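A rough heat-removal calculation shows why air runs out of headroom. It uses the standard relationship between heat, flow, and temperature rise; the 100 kW rack and the temperature rises are assumptions chosen for illustration.

```python
# Rough heat-removal arithmetic, Q = m_dot * c_p * delta_T, comparing air and
# water cooling for an assumed 100 kW rack. Temperature rises are illustrative.

AIR_CP, AIR_DENSITY = 1005.0, 1.2        # J/(kg*K), kg/m^3, approximate at room conditions
WATER_CP, WATER_DENSITY = 4186.0, 997.0  # J/(kg*K), kg/m^3, approximate

def coolant_flow(heat_w: float, cp: float, density: float, delta_t_k: float):
    """Mass flow (kg/s) and volumetric flow (m^3/s) needed to absorb heat_w watts."""
    mass_flow = heat_w / (cp * delta_t_k)
    return mass_flow, mass_flow / density

rack_heat_w = 100_000  # hypothetical 100 kW AI rack, essentially all dissipated as heat

air_kg_s, air_m3_s = coolant_flow(rack_heat_w, AIR_CP, AIR_DENSITY, delta_t_k=15)
water_kg_s, water_m3_s = coolant_flow(rack_heat_w, WATER_CP, WATER_DENSITY, delta_t_k=10)

print(f"Air:   ~{air_m3_s:.1f} m^3/s (~{air_m3_s * 2118.9:,.0f} CFM) for a 15 K air-temperature rise")
print(f"Water: ~{water_m3_s * 1000:.1f} L/s (~{water_m3_s * 15850:,.0f} GPM) for a 10 K water-temperature rise")
```

Moving on the order of ten thousand cubic feet of air per minute through a single rack is impractical, while a couple of liters of water per second is routine plumbing. That gap is the physics behind the industry's shift toward liquid cooling.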
Once power density crosses certain thresholds, cooling stops being a background efficiency issue and becomes a hard constraint. If heat cannot be removed fast enough, servers automatically throttle performance or shut down entirely to prevent damage. At that point, it no longer matters how much electricity is available upstream. Compute becomes unusable because it cannot be kept within safe thermal limits.
This is what gives Vertiv its unusually high torque in the AI infrastructure stack. Cooling demand does not rise smoothly with compute — it accelerates sharply once density increases past key design limits. When hyperscalers push harder on performance, Vertiv’s products move from “nice to have” to “mission-critical,” and spending can ramp quickly.
Investment view:
Vertiv’s growth and valuation are tightly linked to rising compute density and the pace of AI infrastructure build-outs. The company has strong momentum, expanding margins, and multi-year visibility tied to hyperscaler investment plans. At the same time, much of that optimism is already reflected in the stock price. Vertiv trades at a premium multiple because investors understand how central cooling has become to AI — which means returns are likely to be more volatile and sensitive to changes in AI spending expectations.
Bottom line:
VRT is a Speculative Buy — a high-torque beneficiary of AI cooling constraints. It offers potentially significant upside if AI density and liquid-cooling adoption continue to accelerate, but its valuation leaves less room for error than more conservatively priced infrastructure names. Vertiv fits best as a targeted, higher-risk allocation rather than a defensive core holding.
6. How nVent and Vertiv Fit Together
Although both nVent and Vertiv benefit from AI-driven infrastructure growth, they solve different but tightly linked problems inside the same facilities. Their importance only emerges once electricity reaches extreme density inside a building — and neither problem can be solved by simply adding more power upstream.
nVent’s role begins the moment electricity enters a data-center facility. Power must be stepped down, routed, contained, and protected as it moves from high-voltage feeds into switchgear, panels, busbars, and distribution systems delivering electricity to thousands of servers packed into confined spaces. As AI workloads scale, far more electricity is pushed through smaller physical footprints, increasing the risk of overheating, electrical faults, and cascading failures. nVent’s enclosures, grounding systems, busway, and protection hardware are what make that power usable at all. If these systems fail, electricity cannot be safely delivered — and parts of the facility must shut down regardless of how much power is available outside the building.
Vertiv’s role begins once that electricity is successfully delivered and consumed. Nearly all of the power feeding GPUs and accelerators is immediately converted into heat. At AI scale, that heat load becomes extreme. Modern AI racks generate many times more heat than traditional servers, often well beyond what legacy air-cooling systems were designed to handle. Vertiv’s cooling, thermal-management, power-conditioning, and uninterruptible power systems determine whether that heat can be removed quickly and continuously enough to keep equipment operating. If cooling capacity is insufficient or a thermal system fails, servers will automatically throttle performance or shut down entirely — even if power delivery remains flawless.
A concrete example clarifies the distinction. Imagine a high-density AI data hall running at full load. If an electrical fault occurs — a short, arc, or overheating in a busbar or enclosure — nVent’s systems isolate the problem, prevent damage, and allow the rest of the facility to continue operating. Without that protection, power delivery itself becomes the bottleneck. But even if electricity flows perfectly, the facility can still fail if heat is not removed. If cooling capacity is overwhelmed or a thermal system goes down, GPUs will reduce output or shut off to avoid damage. In that case, Vertiv becomes the limiting factor, not electricity supply.
In plain terms, nVent governs whether power can safely enter and move through a facility, while Vertiv governs whether that power can continue to be used once it turns into computation. One controls electrical survivability; the other controls thermal survivability. As AI data centers push toward higher rack density, higher utilization, and tighter uptime requirements, both constraints bind at the same time — and failures at either point immediately limit usable compute.
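Framed in the simplest possible terms, usable compute in a hall is capped by whichever of the two constraints binds first. The sketch below is purely a framing device, with invented numbers.

```python
# Illustrative framing only: usable IT load is capped by whichever constraint
# binds first, safe electrical delivery or continuous heat rejection.

def usable_it_load_mw(deliverable_power_mw: float, cooling_capacity_mw: float) -> float:
    """Racks cannot draw more power than can be delivered safely, nor more than
    the cooling plant can continuously reject as heat."""
    return min(deliverable_power_mw, cooling_capacity_mw)

print(usable_it_load_mw(deliverable_power_mw=60, cooling_capacity_mw=45))  # cooling binds: 45
print(usable_it_load_mw(deliverable_power_mw=40, cooling_capacity_mw=45))  # delivery binds: 40
```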
This is why nVent and Vertiv often appear side by side in modern AI facilities, and why they exhibit higher torque than broader electrification players such as Eaton or Schneider Electric. They operate where abstract grid capacity turns into real, usable computing power — and where problems stop being theoretical and start taking systems offline.
7. Amphenol (APH): The Final Connection Point
As electricity and data move closer to where computing actually happens, another layer becomes critical: the physical connections that link everything together inside the equipment itself.
Amphenol designs the connectors and cabling systems that carry power and data between servers, GPUs, racks, cooling systems, sensors, and control electronics. In plain terms, these are the parts that make sure electricity and information can move reliably from one piece of hardware to another. They may look small compared with turbines, transformers, or cooling systems, but without them, nothing works.
As AI data centers grow denser, these connections become far more demanding. Higher power density means more electricity flowing through tighter spaces, higher operating temperatures, and continuous, round-the-clock use. A loose, overheated, or unreliable connection can shut down an entire rack, force systems to throttle performance, or trigger cascading failures. That’s why Amphenol’s products are engineered to tolerate high loads, extreme heat, vibration, and nonstop operation with almost no margin for error.
Amphenol sits at the final step of the electrification chain. It does not move power across regions or distribute it within buildings — those jobs happen earlier in the system. Instead, Amphenol ensures that all upstream investments in generation, transmission, last-meter electrical infrastructure, cooling, and automation actually reach the chips doing the work. If these final connections fail, all the power and cooling capacity upstream becomes irrelevant.
Investment view:
Amphenol is a high-quality, long-term compounder with meaningful exposure to AI infrastructure. Its products are essential, but they scale incrementally rather than acting as hard bottlenecks. Demand rises steadily as AI systems expand, rather than spiking abruptly when constraints appear.
That distinction matters for valuation. Amphenol is widely recognized as a best-in-class operator, and the stock already reflects that reputation. Shares trade at a premium to historical averages, pricing in strong execution and long-term growth. While the business fundamentals are excellent, this limits near-term upside compared with higher-torque names that sit directly at electrical or thermal breaking points.
Bottom line:
APH is a Hold — a durable, high-quality complement to AI infrastructure exposure, but with lower torque and less valuation-driven upside than last-meter electrical and cooling specialists.
8. Conclusion: Connection Is Value
As AI pushes electricity demand to unprecedented levels, the most valuable companies may not be those producing the electrons — but those making the system work under stress.
Together, the companies in this section show that AI does not strain one part of the electricity system in isolation. It stresses the entire system at once, exposing weaknesses at the points where generation, transmission, distribution, and computing meet.
Those stress points are exactly where the connective and last-meter layers sit — the places where electricity must be moved, transformed, contained, protected, and kept usable as it travels from the grid into computing equipment.
GE Vernova connects systems. Prysmian connects regions. nVent connects facilities. Vertiv keeps them operable. Amphenol connects components. Collectively, they operate between power plants, grids, and servers — in the layers where electricity stops being an abstract commodity and starts determining whether AI infrastructure can function at all.
This is where AI-driven power systems first encounter real limits. Density, heat, and reliability cease to be engineering details and become binding constraints. Electricity that cannot be routed safely cannot be used. Power that cannot be cooled cannot be sustained. As systems grow larger, denser, and less tolerant of failure, the value of coherence, protection, and control rises sharply.
In an AI-driven energy system, connection is value — and these companies sit squarely at that intersection.
The next layer explains how these increasingly stressed systems continue to operate at scale: the automation, controls, and intelligence that monitor, stabilize, and optimize power flows in real time. That is the focus of Part 4.