As AI adoption has continued to surge in recent months, one thing has become abundantly clear: there isn’t enough computational horsepower to go around, a shortage made painfully obvious by the months-long waitlists cloud providers have accrued for high-end GPU instances. And, unlike the brief crypto-mining GPU craze of a few years ago, today’s crunch is driven by real demand from AI research and deployment.
For perspective, Amazon Web Services has been charging about $98 per hour for an 8-GPU server loaded with Nvidia’s top-tier H100 chips, while some decentralized GPU platforms offer comparable hardware for as little as $3 an hour, a price gap of roughly 30×. Against this backdrop, Singularity Compute, the infrastructure arm of decentralized AI pioneer SingularityNET, has announced the phase I deployment of its first enterprise-grade NVIDIA GPU cluster at a state-of-the-art data center in Sweden.
Under a partnership with Swedish operator Conapto, Singularity’s cluster runs on cutting-edge NVIDIA hardware, including H200 and L40S GPUs, in a Stockholm facility powered entirely by renewable energy.
What’s on offer exactly?
The cluster, high-density by design, serves as the foundation for both traditional enterprise workloads and the projects of the Artificial Superintelligence (ASI) Alliance, a decentralized AI ecosystem spearheaded by SingularityNET. It offers flexible access modes that mirror the needs of modern AI developers: companies can rent whole machines on bare metal, spin up GPU-powered virtual machines, or tap into dedicated API endpoints for AI inference.
In real-world terms, this means an organization can train large machine learning models from scratch, fine-tune existing models on custom datasets, or run heavy-duty inference for applications like generative AI, all on Singularity’s infrastructure.
On the operational front, the partnership will be managed by cloud provider and NVIDIA partner Cudo Compute, which will ensure the cluster delivers the enterprise-grade reliability and support that mission-critical AI projects demand. Commenting on the development, Dr. Ben Goertzel, founder of SingularityNET and co-chair of the ASI Alliance, said:
“As AI accelerates toward AGI and beyond, access to high-performance, ethically aligned compute is becoming a defining factor in who shapes the future. We need powerful compute that is configured for interoperation with decentralized networks running a rich variety of AI algorithms carrying out tasks for diverse populations. The new GPU deployment in Sweden is a meaningful milestone on the road to a truly open, global Artificial Superintelligence.”
A similar sentiment was echoed by Singularity Compute CEO Joe Honan, who sees the launch not merely as extra compute capacity but as a step toward a new paradigm in AI infrastructure. He emphasized that the cluster’s NVIDIA GPUs will deliver the performance and reliability modern AI demands while upholding principles of openness, security, and sovereignty in how the compute is provisioned.
In this broader context, the Swedish cluster will also serve as the backbone for ASI:Cloud, Singularity’s new AI model inference service developed in collaboration with Cudo. ASI:Cloud gives developers wallet-based access to an OpenAI-compatible API for model inference, offering a smooth path to scale from serverless functions up to dedicated GPU servers.
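To illustrate what “OpenAI-compatible” means in practice, here is a minimal sketch of how a developer might call such an endpoint using the standard `openai` Python client. The base URL, model name, and credential handling below are placeholders for illustration, not documented ASI:Cloud values.

```python
# Minimal sketch: calling an OpenAI-compatible inference endpoint.
# The base_url and model name are hypothetical placeholders, not
# official ASI:Cloud values; a wallet-derived key is assumed to be
# supplied the same way an API key normally would be.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-asi-cloud.io/v1",  # hypothetical endpoint
    api_key="WALLET_DERIVED_API_KEY",                # placeholder credential
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize the benefits of decentralized GPU compute."}
    ],
)

print(response.choices[0].message.content)
```

Because the API surface mirrors OpenAI’s, existing tooling built around that client could, in principle, be pointed at the new backend by changing only the base URL and credentials.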
Early customers are already being onboarded to the Swedish cluster, and the team has hinted that this is only the beginning, with additional hardware and new geographic locations to follow. For a community that has often been at the bleeding edge of the ongoing AI and blockchain revolution, the deployment is a tangible step toward the long-held goal of a decentralized, globally distributed AI infrastructure.
The race for AI compute is underway and heating up fast
Since the turn of the decade, the tech sector has poured major investment into AI infrastructure, with 2025 alone seeing over $1 trillion committed to new AI-focused data center projects. Even nation-states are wading in; France, for example, has unveiled a surprise €100+ billion plan to boost its AI infrastructure.
Yet not everyone can spend billions to solve the current compute shortage, which has driven the emergence of alternative approaches such as decentralized or distributed GPU networks that tap into hardware spread across many locations and operators.
In other words, if the 2010s rewarded those who accumulated data, the 2020s will seemingly reward those who control compute power. In that future, efforts like Singularity Compute’s new GPU cluster embody a growing determination to democratize who gets to shape AI’s next chapter, primarily by broadening where the compute behind it comes from. Interesting times ahead.