Translation without borders: DeepL dramatically speeds up AI with Nvidia’s new chips

German startup DeepL has announced a major leap forward: thanks to the deployment of the latest Nvidia DGX SuperPOD system, it can now translate the entire internet in just 18 days. Not long ago, the same task would have taken 194 days.
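The two figures quoted above imply roughly an order-of-magnitude speedup; a quick back-of-the-envelope check (using only the article's own numbers) confirms this:

```python
# Sanity check of the speedup implied by the article's figures:
# 194 days on the previous hardware vs. 18 days on the new DGX SuperPOD.
old_days = 194
new_days = 18

speedup = old_days / new_days
print(f"Implied speedup: {speedup:.1f}x")  # roughly 10.8x
```

In other words, the new infrastructure delivers close to eleven times the throughput of the previous setup on this benchmark task.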
DeepL, which develops its own AI models for translation and competes with giants like Google Translate, is deploying cutting-edge computing technologies to become the global leader in both quality and speed of machine translation.
Behind the performance are Nvidia’s latest GB200 Grace Blackwell superchips, housed in SuperPOD server racks. Each rack contains 36 of these superchips, designed specifically for training and running giant AI models.
‘The goal is to supply our researchers with enough computing power to develop even more advanced models,’ explains DeepL’s chief scientist, Stefan Mesken. The new infrastructure also enables the development of tools such as Clarify – a translation feature that resolves ambiguities of meaning and context by asking the user clarifying questions.
This development underlines the crucial role advanced chips play in the next generation of AI. At the same time, Nvidia is looking to expand its customer base beyond traditional ‘hyperscalers’ such as Amazon and Microsoft – and DeepL is a prime example of a startup turning that technology into cutting-edge AI products.