Dell Is Determined To Get Its Piece Of The AI Enterprise Pie
May 20, 2025 Jeffrey Burt
For much of the two-plus years since ChatGPT hit the market and kicked off the generative AI frenzy, the action tilted toward well-resourced hyperscalers like Google, Amazon Web Services, and Microsoft, as well as Tier 2 cloud service providers, with powerful (and expensive) accelerators and massive large language models like Meta's Llama with 405 billion parameters.
But that's beginning to change as enterprise adoption continues to rise. In a detailed report, early-stage venture capital firm Menlo Ventures wrote that 2024 was "the year that generative AI became a mission-critical imperative for the enterprise," noting that AI spending grew to $13.8 billion last year, more than six times the $2.3 billion spent in 2023.
The firm called it a "clear signal that enterprises are shifting from experimentation to execution, embedding AI at the core of their business strategies." Enterprises are still getting their AI feet under them, but popular use cases have emerged: code assistants, support chatbots, enterprise search and retrieval, and data extraction and transformation. Healthcare, financial services, and legal are among the sectors embracing generative AI as the models become more vertically oriented.
Making Their Moves
IT vendors are taking notice, with several announcements this month lending evidence. At Computex this week, Nvidia introduced NVLink Fusion, a computing fabric that others can license and sell into AI datacenters, while Hewlett Packard Enterprise unveiled updates to its portfolio of Nvidia AI Computing by HPE solutions.
Meanwhile, Qualcomm, which ditched the datacenter seven years ago after giving it a shot with its Arm-based Centriq chip, is getting back into the game, confirming it will build AI CPUs for enterprise datacenters that will connect with Nvidia's GPU accelerators. The plan first emerged this month when Qualcomm announced a deal with Saudi AI company HUMAIN. Cisco, for its part, will work with the AI Infrastructure Partnership, which includes Microsoft, Nvidia, xAI, and others.
Last month, Google said that starting next quarter, Gemini will become available on Google Distributed Cloud, which will allow enterprises to run the AI tool on premises.
Dell And Enterprise AI
Enterprise adoption was a central theme in Day 1 of Dell Technologies World this week in Las Vegas, with company founder and chief executive officer Michael Dell saying during his keynote that such adoption was key for AI to reach its economic potential and that the IT vendor's job was to make AI more accessible. He spoke of the work his company does with high-profile AI tech firms like CoreWeave, G42, and Mistral, and a project Dell is working on with an unnamed company that involves 110,000 GPUs, direct liquid cooling technologies, 240 megawatts of power, 27,500 GPU nodes, 2,800 racks, 6,000 network switches, and 27,000 miles of network cables.
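Those figures hang together arithmetically. As a rough back-of-the-envelope check (a sketch using only the numbers quoted above, not any breakdown Dell has published):

```python
# Back-of-the-envelope check on the unnamed project's published figures.
# All inputs come straight from the article; the derived ratios are
# illustrative, not Dell's own accounting.
gpus = 110_000
gpu_nodes = 27_500
racks = 2_800
power_kw = 240 * 1_000  # 240 megawatts expressed in kilowatts

print(f"GPUs per node:  {gpus / gpu_nodes:.0f}")       # ~4 GPUs per node
print(f"Nodes per rack: {gpu_nodes / racks:.1f}")      # ~9.8 nodes per rack
print(f"Power per GPU:  {power_kw / gpus:.1f} kW")     # ~2.2 kW, including facility overhead
```

The roughly 2.2 kilowatts per GPU implied by the totals covers not just the accelerators themselves but their share of CPUs, networking, and cooling.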
Future AI projects will be larger and denser, built to generate tens of trillions of tokens a month and to scale to a million GPUs. However, such companies are in the "business of pure intelligence," he said.
"For most of us, the reality is a little different. AI isn't your product, but AI can power your purpose. You don't need your own Colossus," Dell said, referring to the massive AI system built by xAI, "but you do need AI. We're taking all the learnings from these massive systems to make AI easy for you, from AI PCs to small, domain-specific models running on the edge to planetary-scale AI datacenters."
In a taped conversation with Dell, Nvidia co-founder and chief executive officer Jensen Huang backed up that thought, saying that "we're simultaneously teeing up for one of the largest opportunities ahead of us, which is enterprise AI."
The Move To On-Premises AI
The trends are running in that direction. Along with the move toward domain-specific AI models, businesses want to use their own data in AI training and inferencing, and they want to keep that data in-house for security and privacy reasons.
Dell last year introduced the Dell AI Factory, a program aimed at giving enterprises the hardware they need to quickly design and manage their AI infrastructures; more than 3,000 have been deployed since then. Michael Dell said the AI Factory is 60 percent more cost-efficient than public clouds.
Last month it rolled out several new Intel-powered PowerEdge servers and ObjectScale and PowerScale storage enhancements to further its disaggregated architecture, in which compute, storage, and networking are managed separately, aimed at enterprises with AI ambitions. This week in Las Vegas, the company added more servers and storage systems for AI workloads, along with cooling technologies to ease the heat they generate.
New Servers And More
The PowerEdge XE9780 and XE9785 are air-cooled systems, while the XE9780L and XE9785L use direct-to-chip liquid cooling. All support up to 192 Nvidia Blackwell Ultra GPUs and can be customized to fit up to 256 Blackwell Ultra GPUs per Dell IR7000 rack. They deliver up to four times faster LLM training with eight-way Nvidia HGX B300 accelerators.
The 10U air-cooled servers can run on Intel Xeon 6 or AMD Epyc 9005 CPUs, while the liquid-cooled systems are dense GPU-powered 3U machines.
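Those rack-level figures imply straightforward node counts (a quick sketch, assuming eight GPUs per node per the eight-way HGX B300 configuration the article cites):

```python
# Rough node-count math for the XE9780/XE9785 family in an IR7000 rack.
# GPU totals are from the article; eight GPUs per node follows from the
# eight-way HGX B300 configuration it describes.
gpus_per_node = 8
standard_rack_gpus = 192
custom_rack_gpus = 256

print(standard_rack_gpus // gpus_per_node)  # 24 nodes per rack (standard)
print(custom_rack_gpus // gpus_per_node)    # 32 nodes per rack (customized IR7000)
```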
Dell's PowerEdge XE9712 includes Nvidia's GB300 NVL72 liquid-cooled rack, while the PowerEdge XE7745, which supports up to eight GPUs in a 4U chassis, will arrive in July with Nvidia's RTX Pro 6000 Blackwell Server Edition GPUs and will be supported in Nvidia's Enterprise AI Factory validated design.
In storage, the ObjectScale object storage portfolio supports AI deployments with a denser software-defined system and integrated Nvidia BlueField-3 and Spectrum-4 networking to improve performance and scalability. ObjectScale also will support S3 over RDMA, promising 230 percent higher throughput and 80 percent lower latency than traditional S3.
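Those relative figures are easy to misread; "230 percent higher" means 3.3 times the baseline, not 2.3 times. A quick illustration (the baseline values here are placeholders, not measured numbers):

```python
# Converting the quoted relative improvements into absolute multiples.
# Baselines are arbitrary placeholders, not published benchmark results.
baseline_throughput = 1.0
baseline_latency = 1.0

rdma_throughput = baseline_throughput * (1 + 2.30)  # "230% higher" -> 3.3x
rdma_latency = baseline_latency * (1 - 0.80)        # "80% lower"   -> 0.2x

print(rdma_throughput, rdma_latency)  # 3.3 0.2
```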
A new reference architecture will include PowerScale storage and Project Lightning β an effort to create what the company says will be the fastest parallel file system in the world β and PowerEdge XE servers.
Dell also rolled out the PowerCool enclosed rear door heat exchanger (eRDHx), which the company says captures 100 percent of IT-generated heat via a self-contained airflow system. It uses warm water between 32 and 36 degrees Celsius, which Dell says cuts cooling costs by 60 percent and enables 16 percent more rack density. It also includes leak detection capabilities and real-time thermal monitoring.
Heat And Power Worries And Woes
Concerns about bringing AI workloads on premises in many ways come down to the same challenges the datacenter has faced for years: power and cooling, according to Seamus Jones, Dell's director of server engineering. Dell is looking for ways to ease those worries, Jones told The Next Platform.
That can be seen with the eRDHx and liquid-cooling technologies, said Armando Acosta, Dell product planner.
"Most customers have a threshold of power to their rack that the facility can support," Acosta told The Next Platform. "If we go beyond that threshold, then they're going to have to go through facility changes, possibly new generators within the framework."
He said one customer told him they had a hard ceiling of 15 kilowatts per rack and were trying to consolidate systems in their racks while also considering a future greenfield deployment. But even a higher ceiling, say 50 kilowatts per rack, is difficult.
"When you're looking at the PowerEdge XE9780 and things like that, it draws 12 kilowatts, just that single unit," he said. "You then multiply that out by however many units per rack, and it can be a lot of power for the entire system. The power and thermals are a major challenge."
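Acosta's arithmetic is easy to reproduce. A minimal sketch using the 12 kilowatt per-unit draw and the 15 and 50 kilowatt rack ceilings quoted above (real budgets would also reserve headroom for switches, storage, and cooling, which this ignores):

```python
# How many ~12 kW XE9780-class units fit under a given rack power ceiling,
# using the per-unit draw Acosta cites. Switches, storage, and cooling
# overhead are left out for simplicity.
node_draw_kw = 12.0

for ceiling_kw in (15.0, 50.0):
    nodes = int(ceiling_kw // node_draw_kw)
    print(f"{ceiling_kw:.0f} kW rack budget -> {nodes} unit(s), "
          f"{ceiling_kw - nodes * node_draw_kw:.0f} kW left over")
```

Under a 15 kilowatt ceiling, a single unit nearly exhausts the rack's budget; even at 50 kilowatts, only four units fit, which is why density pushes customers toward facility upgrades or liquid cooling.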
Dell also is trying to drive standards in direct liquid cooling systems, he said. Vendors like CoolIT, Motivair, Vertiv, and Carrier have good systems, but standards are needed in areas such as manifolds to ensure a consistent fit in Dell and other systems and to give enterprises greater flexibility. Groups are coming together to work on developing such standards, he said.
"What we're trying to do is lower that barrier of entry and where it's not essentially the datacenter that's your barrier of entry," Acosta said.
https://www.nextplatform.com/2025/05/20/dell-is-determined-to-gets-its-piece-of-the-ai-enterprise-pie/