NVIDIA DGX Cloud vs NVIDIA RTX Series comparison

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

NVIDIA DGX Cloud
Average Rating: 9.0
Number of Reviews: 1
Ranking in other categories: AI Infrastructure (2nd)

NVIDIA RTX Series
Average Rating: 8.0
Number of Reviews: 1
Ranking in other categories: Enterprise GPU (1st)
 

Mindshare comparison

NVIDIA DGX Cloud and NVIDIA RTX Series aren't in the same category and serve different purposes. NVIDIA DGX Cloud is designed for AI Infrastructure and holds a mindshare of 13.5%, down 29.4% from last year. NVIDIA RTX Series, on the other hand, focuses on Enterprise GPU and holds a mindshare of 24.1%, up 20.6% since last year.
AI Infrastructure Mindshare Distribution
Product                        Mindshare (%)
NVIDIA DGX Cloud               13.5%
Amazon Bedrock                 15.1%
GroqCloud Platform             12.4%
Other                          59.0%

Enterprise GPU Mindshare Distribution
Product                        Mindshare (%)
NVIDIA RTX Series              24.1%
Hailo-8                        27.9%
Intel Movidius Myriad X VPU    6.8%
Other                          41.2%
 

Featured Reviews

reviewer2309676 - PeerSpot reviewer
Team Lead, High-Performance Computing (HPC) at a manufacturing company with 1,001-5,000 employees
Versatile, well-built, and powerful
The initial setup of the DGX server was quite straightforward. We treated it like any other server during deployment. It went to the data center, where they set it up, placed it in the rack, and enabled it. The deployment process was familiar, using our standard tools like Foreman and Ansible. Since the operating system is supported, we didn't encounter any specific challenges. For deploying the DGX server, we typically need two people for software tasks and sometimes vendor assistance for hardware setup. The process takes about four hours, with NVIDIA firmware updates taking the most time (around two hours), and the rest dedicated to OS and Ansible deployment. Maintaining the DGX server is pretty straightforward. We treat it like any other server, with around 10% downtime, while the rest of the cluster remains up.
Khasim Mirza - PeerSpot reviewer
Independent IT Security Consultant at Kinetic IT
Local AI has protected sensitive data and enabled private RAG workflows for small clients
Any AI LLM is dependent on VRAM, and NVIDIA could provide more. Right now, the NVIDIA RTX Series 5090 comes with only 32 GB of VRAM. But Apple has come up with its Mac Studios, which have unified memory that can be used as VRAM. If unified memory systems take hold, NVIDIA might lose its value. Nowadays, people are buying Mac Minis with unified memory to run local AI and local LLMs. They are still not as fast as NVIDIA, but there is a chance. It is a horse race; you don't know which horse will win next. It all depends on each piece of hardware's capability, how many Tensor Cores it has, and at what frequency it is running. So I'm not sure how to assess it; it all depends on the architecture and how fast your system is.
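The VRAM ceiling the reviewer describes can be sketched with rough arithmetic: an LLM's weights alone need roughly (parameters × bytes per parameter), plus headroom for the KV cache and activations. The 1.2 overhead factor and the model sizes below are illustrative assumptions, not measured figures; real usage varies with context length, batch size, and runtime.

```python
def estimate_vram_gb(params_billion, bytes_per_param, overhead_factor=1.2):
    """Rough VRAM estimate for LLM inference: model weights plus ~20%
    headroom for KV cache and activations (an assumed simplification)."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb * overhead_factor

# A 70B-parameter model at FP16 (2 bytes/param) far exceeds a 32 GB card:
fp16_70b = estimate_vram_gb(70, 2)      # ~168 GB
# 4-bit quantization (~0.5 bytes/param) helps, but still does not fit:
q4_70b = estimate_vram_gb(70, 0.5)      # ~42 GB
# A 13B model at 4-bit fits comfortably on a 32 GB GPU:
q4_13b = estimate_vram_gb(13, 0.5)      # ~7.8 GB
```

This back-of-the-envelope view explains why unified-memory systems are attractive for local LLMs: capacity, not raw compute, is often the first constraint.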

Top Industries

By visitors reading reviews
NVIDIA DGX Cloud
Computer Software Company: 13%
University: 12%
Manufacturing Company: 11%
Comms Service Provider: 8%

NVIDIA RTX Series
Comms Service Provider: 14%
Computer Software Company: 10%
Manufacturing Company: 10%
University: 10%
 

Company Size

By reviewers
No data available for either solution (Large Enterprise, Midsize Enterprise, Small Business).
 

Questions from the Community

What is your experience regarding pricing and costs for NVIDIA RTX Series?
NVIDIA RTX Series 5090 is around, I am in Australia, and it will cost you around $4,500 to $5,000 per piece from the dealer. If you're going for the Pro series, NVIDIA RTX Series Pro 6000, those ar...
What needs improvement with NVIDIA RTX Series?
NVIDIA RTX Series, if they provide more, any AI LLM is dependent on VRAM. Right now, the NVIDIA RTX Series 5090 comes with only 32 GB VRAM. But luckily, Apple has come up with their Mac Studios whi...
What is your primary use case for NVIDIA RTX Series?
NVIDIA RTX Series cards are very useful for Edge AI, basically to run local AI and local LLMs. Instead of running LLMs on the cloud or using the general ChatGPT, you can run your own LLMs on-premis...
 

Also Known As

NVIDIA DGX-1, DGX Cloud, NVIDIA DGX Platform
TITAN V
 


Sample Customers

OpenAI, UC Berkeley, New York University, Massachusetts General Hospital
Information Not Available