Optimizing LLM Inference: Challenges and Best Practices
October 24, 2024
This presentation delves into the world of Large Language Models (LLMs), focusing on the efficiency of LLM inference. We will discuss the tradeoff between latency and bandwidth, followed by a deep dive into techniques for accelerating inference, such as KV caching, quantization, speculative decoding, and various forms of parallelism. We will compare popular inference frameworks and address the challenge of navigating the multitude of design choices. Finally, we will introduce NVIDIA Inference Microservices as a convenient one-stop solution for achieving efficient inference with many of the popular models.
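
As a rough illustration of the first technique mentioned above, the sketch below shows the core idea behind KV caching in autoregressive decoding: the key and value projections of already-generated tokens are stored and reused, so each new step only projects the newest token instead of recomputing attention inputs for the whole sequence. This is a minimal single-head NumPy sketch; the names (`attention`, `decode_with_kv_cache`, `project_qkv`) and shapes are hypothetical and are not taken from any of the frameworks covered in the talk.

```python
import numpy as np

def attention(q, K, V):
    # Single-head scaled dot-product attention:
    # q has shape (d,), K and V have shape (t, d) for t cached positions.
    scores = K @ q / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def decode_with_kv_cache(project_qkv, steps, d=64):
    # The cache grows by one row of keys and one row of values per generated token.
    K_cache = np.empty((0, d))
    V_cache = np.empty((0, d))
    token = np.random.randn(d)  # stand-in for the current token's hidden state
    for _ in range(steps):
        q, k, v = project_qkv(token)            # only the newest token is projected
        K_cache = np.vstack([K_cache, k])       # append instead of recomputing history
        V_cache = np.vstack([V_cache, v])
        token = attention(q, K_cache, V_cache)  # attend over all cached positions
    return token

# Toy usage with fixed random projection matrices (purely illustrative).
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((64, 64)) for _ in range(3))
out = decode_with_kv_cache(lambda x: (Wq @ x, Wk @ x, Wv @ x), steps=16)
```

The memory this cache consumes per token and per layer is exactly what quantization and paged-attention-style memory management in the discussed frameworks aim to reduce.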
Other videos that you might like

From Spark MLlib model to learning system with Watson Machine Learning
Łukasz Ćmielowski

Panel discussion on Reactive Systems
Samuel Weiss, Trevor Burton-McCreadie, Łukasz Biały, Alan Klikić, Paweł Dolega

Implementing Machine Learning Algorithms for Scale-Out Parallelism
William Benton

Meet bloop and get more productive with Scala
Martin Duhem