Westlake Village, Dec. 29, 2025 (GLOBE NEWSWIRE) — Microcaps.com curates and contextualizes news, analysis, and market data across the public micro- and small-cap ecosystem, with a focus on emerging growth themes shaping investor sentiment. As AI infrastructure spending accelerates and capital flows increasingly target data centers, GPU clouds, and adjacent platforms, Microcaps is examining how these developments are being reflected in public-market valuations, from established leaders like Nvidia (NVDA) to newer, infrastructure-adjacent entrants gaining investor attention.
The artificial intelligence data center industry has grown rapidly from a narrow niche into one of the most capital-intensive areas of technology. What began as a race to secure GPUs has broadened into a wider competition for real estate, long-term power contracts, energy capacity, cooling systems, and high-bandwidth network connectivity. Increasingly, this shift is playing out on public markets, where investors treat the physical infrastructure behind AI as the next major growth theme.
What makes AI data centers unique
Data centers built for AI differ sharply from conventional enterprise facilities. These environments are designed to support large-scale model training and inference, requiring accelerators, high rack power density, liquid or other advanced cooling, and low-latency networking. McKinsey & Co. estimates that worldwide investment in AI-ready data centers could exceed $5.2 trillion by 2030, underscoring the scale and durability of demand (McKinsey).
As AI moves from experimentation to large-scale deployment, these technical requirements are driving new alliances, facility designs, and financing structures.
The role of GPU supply
Access to high-performance compute, particularly graphics processing units, anchors most AI infrastructure designs. Nvidia, one of the largest GPU suppliers, powers many of the world’s largest AI compute clusters (Wikipedia). Hardware vendors matter, but the larger story is how power, physical capacity, and integration models determine the scale and reliability of AI deployments. This is especially true as some operators explore hybrid approaches that blend their own infrastructure with third-party and network-based GPU capacity.
Public market signals and valuation multiples
Valuation multiples increasingly reflect investor expectations. Companies operating AI infrastructure have commanded enterprise value-to-revenue ratios in the 20-to-30-times range in both public and private markets, particularly where growth is easy to track (Aventis Advisors). By contrast, the average price-to-sales ratio for S&P 500 companies remains roughly 2.8 times (Eqvista).
These valuation disparities signal market conviction in the long-term worth of AI-enabling infrastructure, even where buildouts are costly or profitability remains early-stage.
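To make the gap concrete, the multiples above can be translated into implied valuations with simple arithmetic. The $500 million revenue figure below is hypothetical, chosen only to illustrate the spread; the 20x–30x and roughly 2.8x multiples come from the sources cited above.

```python
def implied_enterprise_value(revenue: float, multiple: float) -> float:
    """Valuation implied by applying a revenue multiple."""
    return revenue * multiple

# Hypothetical AI-infrastructure company with $500M in annual revenue,
# valued at the 20x-30x EV/revenue range cited above:
low = implied_enterprise_value(500e6, 20)   # $10.0B
high = implied_enterprise_value(500e6, 30)  # $15.0B

# The same revenue at the ~2.8x average S&P 500 price-to-sales ratio:
baseline = implied_enterprise_value(500e6, 2.8)  # ~$1.4B

print(f"AI-infrastructure range: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
print(f"S&P 500 baseline:        ${baseline / 1e9:.1f}B")
```

The spread, roughly $10–15 billion versus $1.4 billion on identical revenue, is the premium the market is attaching to AI-infrastructure growth expectations.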
Data centers and AI infrastructure
GPU-focused cloud providers have emerged to handle large-scale training and inference workloads, drawing investor interest for their rapid growth and capital efficiency. A recent Wall Street Journal piece notes that such platforms have traded at revenue multiples as high as 13 times, reflecting investor appetite for scalable infrastructure models (Wall Street Journal).
Established operators of large-scale data center facilities have also benefited. According to Oliver Wyman, infrastructure specialists commonly trade at 20 to 30 times EBITDA, supported by long-term leases and strategic real estate positioning.
Valuations close to infrastructure
Supporting infrastructure providers, such as power supply, thermal systems, and high-performance fiber, are increasingly recognized as essential to AI’s growth. Though these companies do not provide compute themselves, they have traded at revenue multiples of 20 or more during periods of heightened interest (Skeptically Optimistic).
They form a critical layer of the AI infrastructure stack, enabling denser and more efficient computing environments.
The rise of “neoclouds”
A new class of cloud platform, frequently called “neoclouds,” has emerged to handle AI-native workloads. These companies focus on GPU infrastructure and orchestration solutions purpose-built for AI applications. Their ability to secure scarce GPU supply quickly and scale rapidly makes them influential examples of how infrastructure strategy meets AI compute demand (Wikipedia – CoreWeave).
Investors, however, continue to debate whether these models are too capital-intensive and operationally risky to execute.
Public entrants near infrastructure
Some newer publicly traded companies are entering the AI market through different routes, such as building data center infrastructure, supporting energy deployments, or offering highly specialized GPU-hosting services. As businesses demand more flexible access to GPUs, some companies that began in AI software or applied research have pivoted toward compute enablement. Axe Compute (NASDAQ: AGPU), which recently repositioned from a prior focus on health sciences, illustrates how new public-market entrants are realigning their objectives with the infrastructure layer of the AI economy. These companies are typically valued on prospective AI-driven revenue rather than current earnings, and their presence underscores how the infrastructure models underpinning the AI boom are growing more complex and varied (Global Equity Briefing).
The enabling layer: landlords, connections, and cooling
Even when they do not directly offer AI compute, data center landlords and connectivity platforms are essential parts of the ecosystem. These providers benefit from long-term contracts, scarce land, and strong interconnection demand from cloud platforms and hyperscalers. As AI scales, they are drawing growing attention from investors and partners seeking well-positioned capacity.
As AI density rises, vendors of cooling and energy delivery systems, such as those specializing in liquid or immersion cooling, become increasingly important. Their ability to support the latest hardware is emerging as a competitive edge.
The access layer: models that don’t need a lot of assets
Asset-light platforms that aggregate computing capacity rather than own it are gaining traction for meeting short-term needs. These businesses connect data center partners with end customers, offering flexible pricing and availability. Some public operators, including the newer Axe Compute, are using this model to monetize enterprise GPU access without bearing the full cost of building out a hyperscale network. Their value is generally judged on recurring revenue and partner relationships rather than owned hardware.
This model appeals to developers and small businesses that need scalable compute without long-term infrastructure commitments.
Capital intensity and execution risk define the space
AI data infrastructure is shaped by two opposing forces: surging demand and very high buildout costs. Land, power, cooling, and chips can run to billions of dollars per deployment. Operators also face execution risks, including grid access, supply chain constraints, permitting, and environmental compliance.
McKinsey expects AI workloads to keep growing, and with them the need for new approaches to planning and financing long-term infrastructure (McKinsey).
What lies ahead
Where 2023 and 2024 were defined by the GPU race, 2025 is shaping up as a test of infrastructure readiness. Power, cooling, interconnection, and capital planning now matter as much as the chips themselves. Companies that combine technical leadership with scalable infrastructure and financial discipline are expected to lead the next wave of public-market growth.
As AI infrastructure expands, organizations operating across the compute, enabling, and access layers will be central to the growth of the AI economy.
About Microcaps
Microcaps is a digital media and market intelligence platform focused on the public micro- and small-cap universe. The site aggregates company news, press releases, and third-party research, and provides editorial context on emerging industries, valuation trends, and investment themes. Microcaps was created to help investors, issuers, and other market participants better understand how emerging companies and industries are being priced and positioned in the public markets.

