South Korean AI startup HyperAccel partnered with platform-based SoC and ASIC designer SEMIFIVE back in January 2024 to create the Bertha LPU.
Tailored for LLM inference, Bertha offers “low cost, low latency, and domain-specific features,” with the aim of replacing “high-cost and low-efficiency” GPUs. SEMIFIVE reports that work has now concluded, and the processor, designed using 4nm technology, is slated for mass production by early 2026.
HyperAccel claims Bertha can deliver up to double the performance and a 19 times better price-to-performance ratio than a typical supercomputer, but it faces tough competition in a market where Nvidia’s GPUs are so deeply entrenched.
Facing challenges
“We are delighted to work with SEMIFIVE, a leading provider of SoC platforms and comprehensive ASIC design solutions, for the development of Bertha to be mass-produced,” said Joo-Young Kim, CEO of HyperAccel. “By collaborating with SEMIFIVE, we are excited to offer customers AI semiconductors that provide more cost-effective and power-efficient LLM features than GPU platforms. This advancement will significantly reduce the operational expenses of data centers and expand our business scope to other industries that require LLMs.”
Groq, an AI challenger headquartered in Silicon Valley and led by ex-Google engineer and CEO Jonathan Ross, has already made strides with its own LPU product, focusing on high-speed AI inference.
Groq’s technology, which provides cloud and on-prem inference at scale for AI applications, has already found a sizable audience, with over 525K developers using the LPU since it launched in February. Bertha’s late entry might put it at a disadvantage.
Brandon Cho, CEO and co-founder of SEMIFIVE, is more upbeat about Bertha’s chances. He said, “HyperAccel is a company with the most efficient and scalable LPU technology for LLMs. As the demand for LLM computation is skyrocketing, HyperAccel has the potential to become a new powerhouse in the global processor infrastructure.”
Bertha’s focus on efficiency could attract enterprises looking for alternatives to reduce operational costs, but with Nvidia’s dominance unmatched, HyperAccel’s product may find itself fighting for a niche in an already crowded space, rather than becoming an AI leader.