Nvidia has unveiled its most revolutionary hardware since its first graphics cards: the Project DIGITS mini supercomputer. This palm-sized device, priced at $3,000, delivers one petaflop of AI performance, putting serious AI capability within reach of researchers, developers, and businesses at a fraction of the previous cost.
At the heart of this new device is Nvidia CEO Jensen Huang’s bold vision to democratize AI. The GB10 system-on-chip (SoC) powering DIGITS marks a significant leap in AI computing, offering 1,000 teraflops (one petaflop) of performance at FP4 precision, the same low-precision format used by Nvidia’s high-end DGX systems. But while the original DGX-1 supercomputer launched at $129,000 in 2016, DIGITS comes in at just $3,000.
Introducing “Jensen’s Law”
The launch of Project DIGITS also inspires a new industry axiom: Jensen’s Law. Like Moore’s Law, it tracks performance gains and cost reductions over time. The concept suggests that at equal AI performance, it takes about 100 months (roughly eight years) for the price per FLOP to fall by a factor of 25.
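Taking the article’s own figures at face value (DGX-1 at $129,000 in 2016, DIGITS at $3,000 in 2025), the arithmetic can be sketched as follows. Note the assumption baked in here: the two systems are treated as delivering broadly comparable headline AI performance, as the comparison implies, even though the precision formats differ, so this is a rough sanity check rather than a like-for-like measurement.

```python
# Rough sanity check of the "Jensen's Law" figures quoted above.
# Assumption (not from Nvidia): DGX-1 and DIGITS are treated as
# roughly comparable in headline AI performance, as the article's
# price comparison implies.

dgx1_price = 129_000   # USD, 2016 (from the article)
digits_price = 3_000   # USD, 2025 (from the article)

price_ratio = dgx1_price / digits_price          # 43.0
months_elapsed = (2025 - 2016) * 12              # 108 months

print(f"Price ratio: {price_ratio:.0f}x over ~{months_elapsed} months")
```

Against the quoted rule of thumb of roughly 25x per 100 months, the raw price ratio here works out to about 43x over about 108 months, so the “law” reads as an order-of-magnitude heuristic rather than an exact fit.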
Project DIGITS represents the culmination of this trend, delivering AI performance previously reserved for hyperscalers like Google and Microsoft to individual researchers and businesses.
Jensen Huang emphasized this ambition in a statement: “AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers, placing an AI supercomputer on the desks of every data scientist, AI researcher, and student.”
Nvidia’s Strategic Moat
DIGITS also serves as a strategic move to strengthen Nvidia’s moat — the competitive advantage that keeps rivals at bay. By making AI computing more accessible, Nvidia ensures that developers, researchers, and companies become even more reliant on its CUDA platform and proprietary technologies.
One of the most notable features of DIGITS is its FP4 processing, optimized for AI inference. Nvidia claims that FP4 delivers 5x the performance of FP8, a key precision format for AI inference workloads. Notably, AMD, Nvidia’s closest competitor, does not yet offer FP4 processing in its latest MI300 series chips.
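The general idea behind 4-bit inference is trading numeric precision for memory and throughput. The sketch below is purely illustrative and is not Nvidia’s implementation: it simplifies “FP4” to signed 4-bit integer codes with a single per-tensor scale, which is the textbook form of low-bit quantization.

```python
# Illustrative sketch of 4-bit quantization (simplified to INT4
# codes with one scale factor; real FP4 is a floating-point format).

def quantize_int4(weights):
    """Map floats to signed 4-bit codes in [-8, 7] plus one scale."""
    scale = max(abs(w) for w in weights) / 7  # fit the range into 4 bits
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int4(codes, scale):
    """Recover approximate float values from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.53, 0.91, -0.07, 0.33]
codes, scale = quantize_int4(weights)
approx = dequantize_int4(codes, scale)
# Each code fits in 4 bits instead of 32 -- an 8x memory reduction;
# the cost is rounding error in the recovered values.
```

Smaller codes mean more values moved per memory transaction and more operations per cycle, which is where claims like “5x FP8 performance” come from, though the exact speedup depends on hardware support for the format.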
Nvidia’s moat strategy follows a proven formula seen in companies like Microsoft (with Windows), Apple (with the iPhone), and Google (with Gmail). By making its ecosystem indispensable, Nvidia hopes to cement its dominance in the AI space for years to come.
What’s Next?
The release of DIGITS hints at Nvidia’s broader strategy to bring AI to the masses. While it is unlikely to be marketed directly to consumers, it opens the door for PC manufacturers and partners to build similar systems for professional markets.
It also raises questions about Nvidia’s Jetson Orin platform, previously used for edge and DIY projects. With the GB10 SoC powering DIGITS, the company could shift toward more integrated, memory-on-chip architectures, similar to Apple’s M-series chips.
By disrupting itself before others do, Nvidia is once again pushing the boundaries of AI and computing — a move that could redefine what’s possible on desktops worldwide.