Ankit Patel
WWC24 - Ankit Patel - Unlocking the Future: Breakthrough Application Performance and Capabilities with NVIDIA
#1 (about 3 minutes)
Understanding accelerated computing and GPU parallelism
Accelerated computing offloads parallelizable tasks from the CPU to specialized GPU cores, executing them simultaneously for a massive speedup.
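The idea in this chapter can be sketched in plain Python. This is a CPU-only analogy, not NVIDIA's API: the point is that when each element of a workload is independent, the work can be handed to many workers at once, and a GPU applies the same pattern with thousands of cores instead of a handful of threads.

```python
# CPU-only analogy for GPU parallelism (illustrative sketch).
# Each per-element task is independent of every other element, which is
# exactly what makes a workload a good candidate for GPU offload.
from concurrent.futures import ThreadPoolExecutor

def scale_pixel(value: int) -> int:
    """Per-element task: depends only on its own input."""
    return value * 2

def run_sequential(data):
    # One worker walks the data element by element.
    return [scale_pixel(v) for v in data]

def run_parallel(data, workers=4):
    # Many workers process elements simultaneously; a GPU scales this
    # structure to thousands of cores. (Python threads here illustrate
    # the structure, not the actual speedup, because of the GIL.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(scale_pixel, data))

data = list(range(8))
assert run_sequential(data) == run_parallel(data)
```

Both paths produce identical results; the parallel one simply exploits the independence of each task.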
#2 (about 2 minutes)
Calculating the cost and power savings of GPUs
While a GPU-accelerated system costs more upfront, it can replace hundreds of CPU systems for parallel workloads, leading to significant cost and power savings.
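The trade-off above can be checked with back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not NVIDIA pricing: the point is that if a few GPU systems replace a large CPU fleet, total cost and power can fall even though each GPU system costs more upfront.

```python
# Hypothetical fleet comparison (all numbers are made-up assumptions).
def fleet_totals(num_systems, cost_per_system, watts_per_system):
    """Total purchase cost and power draw for a fleet of identical systems."""
    return num_systems * cost_per_system, num_systems * watts_per_system

# Assumed: 100 CPU servers vs. 4 GPU servers doing the same parallel work.
cpu_cost, cpu_watts = fleet_totals(100, 10_000, 500)
gpu_cost, gpu_watts = fleet_totals(4, 50_000, 3_000)

print(f"CPU fleet: ${cpu_cost:,}, {cpu_watts:,} W")
print(f"GPU fleet: ${gpu_cost:,}, {gpu_watts:,} W")
```

Under these assumed numbers the GPU fleet is a fifth of the cost and under a quarter of the power, despite each unit being pricier.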
#3 (about 4 minutes)
Using NVIDIA libraries to easily accelerate applications
NVIDIA provides domain-specific libraries like cuDF that allow developers to accelerate their code, such as pandas dataframes, with minimal changes.
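A minimal sketch of the "minimal changes" claim: cuDF ships a pandas accelerator that is enabled before pandas is imported. On a machine with a supported GPU and RAPIDS installed, the two guarded lines below route pandas operations to the GPU; everywhere else, the identical code runs on plain pandas.

```python
# cuDF's zero-code-change pandas accelerator (sketch).
try:
    import cudf.pandas      # requires RAPIDS + GPU; optional in this sketch
    cudf.pandas.install()   # must be called before importing pandas
except ImportError:
    pass                    # no GPU stack available: fall back to plain pandas

import pandas as pd

# Ordinary pandas code, unchanged either way.
df = pd.DataFrame({"city": ["A", "B", "A", "B"], "sales": [10, 20, 30, 40]})
totals = df.groupby("city")["sales"].sum()
print(totals)
```

The application code after the import is untouched, which is the workflow the chapter describes.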
#4 (about 3 minutes)
Shifting from traditional code to AI-powered logic
Modern AI development replaces complex, hard-coded logic with prompts to large language models, changing how developers implement functions like sentiment analysis.
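The shift can be sketched as follows: the sentiment "logic" is now a prompt rather than hand-coded keyword rules. `call_llm` is a hypothetical stand-in for a real model client (for example, an OpenAI-style chat endpoint), stubbed here so the example is self-contained.

```python
# Prompt-based sentiment analysis (sketch with a stubbed model call).

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to a hosted LLM
    # and return its completion. Hard-coded here for illustration.
    return "positive"

def classify_sentiment(text: str) -> str:
    # The "logic" is expressed as natural-language instructions,
    # not as branching code.
    prompt = (
        "Classify the sentiment of this review as exactly one word, "
        f"'positive' or 'negative':\n\n{text}"
    )
    return call_llm(prompt).strip().lower()

print(classify_sentiment("The talk was fantastic!"))  # stub returns "positive"
```

Swapping in a different model or refining behavior means editing the prompt, not rewriting the function.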
#5 (about 3 minutes)
Composing multiple AI models for complex tasks
Developers can now create sophisticated applications by chaining multiple AI models together, such as using a vision model's output to trigger an LLM that calls a tool.
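The chaining pattern described above can be sketched as a small pipeline: a vision model's caption feeds an LLM, and the LLM's decision triggers a tool. All three components here are hypothetical stubs standing in for real model endpoints.

```python
# Composing models into a pipeline (all components stubbed for illustration).

def vision_model(image_path: str) -> str:
    # Stub for an image-captioning model.
    return "a delivery truck parked at a loading dock"

def llm_decide(caption: str) -> str:
    # Stub for an LLM prompted to choose a tool based on the caption.
    return "notify_warehouse" if "truck" in caption else "ignore"

def notify_warehouse() -> str:
    # Stub for a tool the LLM can invoke.
    return "warehouse notified"

def pipeline(image_path: str) -> str:
    caption = vision_model(image_path)        # model 1: vision
    action = llm_decide(caption)              # model 2: LLM picks a tool
    if action == "notify_warehouse":
        return notify_warehouse()             # tool call triggered by the LLM
    return "no action"

print(pipeline("dock_cam.jpg"))
```

Each stage only consumes the previous stage's output, so individual models can be swapped without restructuring the application.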
#6 (about 2 minutes)
Deploying enterprise AI applications with NVIDIA NIM
NVIDIA NIM provides enterprise-grade microservices for deploying AI models with features like runtime optimization, stable APIs, and Kubernetes integration.
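As a sketch of what the stable API looks like in practice: NIM containers expose an OpenAI-compatible HTTP interface. The endpoint and model name below are illustrative assumptions for a locally deployed container; the request is built but deliberately not sent, so the example stands alone.

```python
# Building a request against a NIM microservice's OpenAI-compatible API.
# The base URL and model id are assumptions for a local deployment.
import json
import urllib.request

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # assumed model id
    "messages": [
        {"role": "user", "content": "Summarize NVIDIA NIM in one sentence."}
    ],
    "max_tokens": 64,
}

def build_request(base_url: str = "http://localhost:8000"):
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request()
print(req.full_url)  # request constructed, not sent, in this sketch
```

Because the interface follows the OpenAI chat-completions shape, existing client code can often point at a NIM endpoint by changing only the base URL.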
#7 (about 4 minutes)
Accessing NVIDIA's developer programs and training
NVIDIA offers a developer program with access to libraries, NIMs for local development, and free training courses through the Deep Learning Institute.