#33
There are two kinds of AI data centers. In training centers, researchers feed vast amounts of data through untrained neural networks until the weights converge on a useful model. Inference centers, on the other hand, house the finished model so that it can answer questions, write code, generate images, and so on. Both require large numbers of specialized chips (GPUs, TPUs, LPUs) and industrial-scale electricity. Many industry observers now think the bottleneck will flip from training to inference by 2027-2028, once GPT-5-class systems reveal just how much demand there is for artificial intelligence.
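As a toy sketch of the distinction: training means many forward-and-backward passes that keep rewriting the weights (plus gradient and optimizer state), while inference is a single forward pass over frozen weights. The PyTorch snippet below uses stand-in random data and a stand-in loss; real training and serving stacks are of course enormously larger:

```python
import torch

# Training: many forward + backward passes; gradients and optimizer state
# keep the weights (and the memory footprint) in motion.
model = torch.nn.Linear(512, 512)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(1000):
    x = torch.randn(64, 512)       # stand-in for a data batch
    loss = model(x).pow(2).mean()  # stand-in for a real loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: one forward pass over frozen weights per query.
with torch.no_grad():
    answer = model(torch.randn(1, 512))
```

This asymmetry is why the two kinds of data centers can be optimized differently: training clusters prize interconnect bandwidth and memory for gradients, while inference fleets prize throughput per watt.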
Paul Romer’s endogenous growth theory holds that long-run prosperity hinges on the discovery of new, non-rival ideas: recipes that anyone can reuse at virtually zero marginal cost. This is one reason most economists support high levels of high-skilled immigration: skilled immigrants raise the stock of human capital, boosting the rate at which ideas are discovered, shared, and recombined. In Romer’s framework, both the quality and the quantity of a population matter, because idea creation produces positive spillovers for everyone else.
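For reference, a minimal statement of the idea production function from Romer (1990), where $A$ is the stock of ideas, $H_A$ the human capital devoted to research, and $\delta$ a research-productivity parameter:

```latex
% Romer (1990): growth of the idea stock
\dot{A} = \delta \, H_A \, A
% Dividing by A gives the growth rate of ideas, g_A = \delta H_A:
% it scales directly with research human capital, which is the
% formal link between skilled people (or their substitutes) and
% long-run growth.
```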
If chip clusters are tomorrow’s workers, a country’s policy toward inference centers starts to resemble its immigration policy: both aim to attract scarce, productivity-boosting resources and keep them inside national borders. Companies like Shopify already talk about growing AI “headcount” faster than human headcount, suggesting that domestic demand for on-shore compute will continue to rise sharply.
A few weeks ago, Bell and Groq announced a $5 billion plan to install roughly 500 MW of AI compute across six Canadian sites, with four slated to go live by 2027. If ~385 of those 500 MW reach the chips, with the balance going mainly to cooling and networking, and assuming an aggressive (but plausible) 4-6 TFLOPS per watt for low-precision inference, the fleet would deliver roughly 1,500-2,500 exaflops of inference compute. For Canada that is enormous (the largest deal of its kind), but it is still only on the order of thousands, perhaps low tens of thousands, of human brains working in parallel.
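Here is that back-of-the-envelope arithmetic spelled out. The fleet-level numbers come straight from the estimate above; the per-brain figures of 10^17-10^18 FLOP/s are an assumption toward the high end of published estimates of brain compute, and are what make the brain-equivalent claim come out in the thousands:

```python
# Back-of-the-envelope: Bell/Groq fleet inference throughput vs. human brains.
# All figures are assumptions from the estimate above, not vendor specs.

IT_POWER_W = 385e6               # ~385 MW of the 500 MW reaching the chips
EFF_LOW, EFF_HIGH = 4e12, 6e12   # 4-6 TFLOPS per watt, low-precision inference

low = IT_POWER_W * EFF_LOW       # FLOP/s at the pessimistic efficiency
high = IT_POWER_W * EFF_HIGH     # FLOP/s at the optimistic efficiency

EXA = 1e18
print(f"Fleet throughput: {low/EXA:,.0f} - {high/EXA:,.0f} exaflops")
# -> Fleet throughput: 1,540 - 2,310 exaflops (i.e., the 1,500-2,500 range)

# Brain-equivalents depend heavily on the assumed per-brain compute;
# 1e17-1e18 FLOP/s sits at the high end of published estimates.
for brain in (1e17, 1e18):
    print(f"At {brain:.0e} FLOP/s per brain: "
          f"{low/brain:,.0f} - {high/brain:,.0f} brain-equivalents")
# -> thousands to low tens of thousands, matching the claim above
```

A lower per-brain estimate (some published figures run to 10^15 FLOP/s) would make the fleet look like millions of brains instead, which only strengthens the point that the comparison is sensitive to this one assumption.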
I think we’re going to need a lot more.