Nvidia’s graphics processing units (GPUs) continue to excel as an infrastructure platform for state-of-the-art artificial intelligence language models. Recently, the company’s GPUs trained BERT, one of the world’s most advanced AI language models, in a record-breaking 53 minutes.1 Getting computers to understand all the nuances of human language and respond appropriately has long been a holy grail of natural language processing (NLP) technology.2 Language models can now complete inference—that is, understand input and reach a conclusion based on it—in just 2.2 milliseconds.3 These strides in NLP could prove significant, with potential adopters spread across numerous industries.
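To illustrate what a single BERT inference pass looks like in practice, the sketch below runs a pretrained question-answering model and times one forward pass. It assumes the open-source Hugging Face transformers library and PyTorch rather than Nvidia’s TensorRT-optimized pipeline cited above, so the measured latency on typical hardware will be far higher than the 2.2-millisecond figure; the model name and example inputs are illustrative only.

```python
# Minimal sketch of BERT inference latency measurement.
# Assumes the Hugging Face `transformers` library and PyTorch,
# not Nvidia's TensorRT-optimized BERT pipeline referenced in the text.
import time

import torch
from transformers import BertForQuestionAnswering, BertTokenizer

MODEL_NAME = "bert-large-uncased-whole-word-masking-finetuned-squad"  # illustrative choice
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForQuestionAnswering.from_pretrained(MODEL_NAME)
model.eval()

question = "How long did BERT training take?"
context = "Nvidia GPUs trained BERT in a record-breaking 53 minutes."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    start = time.perf_counter()
    outputs = model(**inputs)          # one inference pass
    elapsed_ms = (time.perf_counter() - start) * 1000

# Decode the predicted answer span from the start/end logits.
start_idx = outputs.start_logits.argmax()
end_idx = outputs.end_logits.argmax() + 1
answer = tokenizer.decode(inputs["input_ids"][0][start_idx:end_idx])
print(f"Answer: {answer} (inference took {elapsed_ms:.1f} ms)")
```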
1. Nvidia Developer Blog, “NVIDIA Clocks World’s Fastest BERT Training Time and Largest Transformer Based Model, Paving Path For Advanced Conversational AI,” Aug 13, 2019.
2. Ibid.
3. Nvidia Developer Blog, “Real-Time Natural Language Understanding with BERT Using TensorRT,” Aug 13, 2019.