Liquid Web launches new GPU hosting service for AI and HPC — and users can access a range of Nvidia GPUs (including H100s)

Nvidia H100
(Image credit: Nvidia)

Liquid Web has announced the launch of a new GPU hosting service designed to keep pace with growing high-performance computing (HPC) requirements.

The new offering will harness Nvidia GPUs and is geared specifically toward developers focused on AI and machine learning tasks, the company confirmed.

Users capitalizing on the new service can expect a range of benefits, according to Liquid Web, including “accelerated performance with Nvidia GPUs”.

"Untapped potential"

Sachin Puri, Chief Growth Officer of Liquid Web, said the new service will support developers at an “affordable price” and offer reserved hourly pricing options for users.

“AI has infinite untapped potential, and we want to empower developers and AI innovators to harness the full power of NVIDIA GPUs,” Puri said.

“With performance benchmarks that speak for themselves, this solution is the clear choice for accelerating AI workloads and maximizing value. And this is just the beginning. Stay tuned as we expand our lineup to deliver even more powerful solutions in the near future.”

Liquid Web CTO Ryan MacDonald noted the firm’s on-demand servers can offer “up to 15 percent” better GPU performance than virtualized environments, at an equal or lower cost.


“Our on-demand NVIDIA-powered servers with a pre-configured GPU stack let customers quickly deploy AI/ML and deep learning workloads, maximizing performance and value while focusing on results rather than setup,” says Ryan MacDonald, Chief Technology Officer of Liquid Web.

What users can expect from Liquid Web’s GPU hosting service

The service will leverage a range of top-tier GPUs, including Nvidia L4, L40S, and H100 GPUs. These, the company said, offer “exceptional processing speeds” that are ideal for AI and machine learning applications, large language model (LLM) development, deep learning, and data analytics.

As part of the service, users will also have access to on-demand bare metal options, which Liquid Web said will enable enterprises to “gain full control” over their infrastructure.

The move from Liquid Web comes amid a period of sharpened enterprise focus on AI, with companies ramping up development globally.

Analysis from Grand View Research shows the global AI market is expected to top $1.81 trillion in value by 2030, marking a significant increase compared to 2023, when the market value stood at $196.63 billion.

There have been growing concerns over both costs and hardware availability throughout this period, however, with GPU prices skyrocketing and wait times for hardware lengthening.

That’s why flexibility is a key focus for the service, according to Liquid Web. The company noted that users can customize GPU hosting to meet specific performance needs on an as-needed basis, and it intends to target companies ranging from startups to larger enterprises.


Ross Kelly is News & Analysis Editor at ITPro, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape.
