Vijay Krishan Gupta & Gauravdeep Singh Lotey
Creating Industry ready solutions with LLM Models
#1 · about 3 minutes
Understanding LLMs and the transformer self-attention mechanism
Large Language Models (LLMs) are defined by their parameters and training data, with the transformer's self-attention mechanism being key to resolving ambiguity in language.
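For readers who want to see the mechanism rather than just hear it named, below is a minimal NumPy sketch of scaled dot-product self-attention; the matrix names and toy dimensions are our own assumptions, not code from the talk.

```python
# A minimal sketch of scaled dot-product self-attention; illustrative only.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ v                              # each token becomes a weighted mix of all values

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```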
#2 · about 4 minutes
Exploring the business adoption and emergent abilities of LLMs
Businesses are rapidly adopting LLMs due to their emergent abilities like in-context learning, instruction following, and chain-of-thought reasoning, which go beyond their original design.
#3 · about 9 minutes
Demo of an enterprise assistant for integrated systems
The Simplify Path demo showcases a unified chatbot interface that integrates with various enterprise systems like HRMS, Jira, and Salesforce for both informational queries and transactional tasks.
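As a rough illustration of how such an assistant can hand a request to the right backend, here is a hypothetical intent-routing sketch; the system names match the demo, but every function and identifier is invented for illustration.

```python
# Hypothetical sketch: the LLM classifies a user message into an intent plus
# arguments, and the application dispatches to the matching enterprise system.
from typing import Callable

def fetch_leave_balance(employee_id: str) -> str:
    return f"Leave balance for {employee_id}: 12 days"   # stand-in for an HRMS API call

def create_jira_ticket(summary: str) -> str:
    return f"Created Jira ticket: {summary}"             # stand-in for a Jira API call

TOOLS: dict[str, Callable[..., str]] = {
    "hrms_leave_balance": fetch_leave_balance,
    "jira_create_ticket": create_jira_ticket,
}

def route(intent: str, **kwargs) -> str:
    """Call the tool matching the classified intent and return its result."""
    handler = TOOLS.get(intent)
    return handler(**kwargs) if handler else "Sorry, I can't handle that yet."

print(route("hrms_leave_balance", employee_id="E123"))
print(route("jira_create_ticket", summary="Laptop replacement request"))
```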
#4 · about 3 minutes
Demo of a document compliance checker for pharmaceuticals
The Doc Compliance tool validates pharmaceutical documents against a source-of-truth compliance document to ensure all parameters meet regulatory requirements.
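A hypothetical sketch of the core validation idea, checking extracted values against source-of-truth ranges; the parameter names and limits below are invented, not taken from the demo.

```python
# Invented example: compare values extracted from a document (e.g. by an LLM)
# against a source-of-truth compliance spec and flag anything out of range.
SPEC = {"ph": (6.5, 7.5), "temperature_c": (2.0, 8.0)}      # allowed ranges from the spec

document_values = {"ph": 7.1, "temperature_c": 9.3}          # values extracted from the document

for param, value in document_values.items():
    low, high = SPEC[param]
    status = "OK" if low <= value <= high else "OUT OF RANGE"
    print(f"{param}: {value} ({status}, allowed {low}-{high})")
```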
#5 · about 3 minutes
Demo of a chatbot builder for any website
Web Water is a product that converts any website into an interactive chatbot by scraping its HTML, text, and media content to answer user questions.
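A rough sketch of the scraping step such a product might start from, assuming requests and BeautifulSoup; the actual stack behind Web Water was not specified in the summary.

```python
# Assumed approach: download a page, strip it to visible text, and chunk it
# so the chunks can later be embedded and queried by a chatbot.
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Download a page and reduce it to visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()                      # drop non-content markup
    return soup.get_text(separator=" ", strip=True)

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split scraped text into overlapping chunks ready for embedding."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

chunks = chunk(scrape_page_text("https://example.com"))
print(f"{len(chunks)} chunks ready to embed into a vector store")
```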
#6 · about 5 minutes
Navigating the common challenges of building with LLMs
Key challenges in developing LLM applications include managing hallucinations, ensuring data privacy for sensitive industries, improving usability, and addressing the lack of repeatability.
#7 · about 7 minutes
Using prompt optimization to improve LLM usability
Prompt optimization techniques, such as defining a role, using zero-shot, few-shot, and chain-of-thought prompting, can significantly improve the quality and relevance of LLM outputs.
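The snippet below illustrates the prompt patterns named above with our own example wording; none of these prompts are taken from the talk.

```python
# Illustrative prompt patterns: role definition, few-shot examples, and
# chain-of-thought instruction. Wording is invented for demonstration.
role_prompt = "You are a senior support engineer for an HR platform. Answer concisely."

few_shot_prompt = """Classify the ticket as BUG, FEATURE, or QUESTION.

Ticket: "The export button crashes the app."  -> BUG
Ticket: "Can we get dark mode?"               -> FEATURE
Ticket: "Where do I download my payslip?"     -> """

chain_of_thought_prompt = (
    "A team of 4 reviews 12 documents a day. How many documents can 7 reviewers "
    "handle in 5 days? Think step by step before giving the final number."
)

for name, prompt in [("role", role_prompt), ("few-shot", few_shot_prompt),
                     ("chain-of-thought", chain_of_thought_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
```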
#8 · about 4 minutes
Advanced techniques like RAG, function calling, and fine-tuning
Overcome LLM limitations by using Retrieval-Augmented Generation (RAG) for domain-specific knowledge, function calling for real-time tasks, and fine-tuning for specialized models.
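To make the RAG idea concrete before the full walkthrough, here is a bare-bones, library-free sketch of retrieval followed by prompt grounding; the embedding function is a random stand-in and the documents are invented.

```python
# Bare-bones RAG illustration: embed documents, retrieve the most similar ones
# for a question, then ground the prompt in them before calling the LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. sentence-transformers)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

docs = ["Refund requests must be filed within 30 days.",
        "Enterprise plans include 24/7 phone support.",
        "The API rate limit is 100 requests per minute."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]   # top-k by cosine similarity

question = "How fast can I call the API?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this grounded prompt is what would be sent to the LLM
```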
#9 · about 10 minutes
Code walkthrough for building a RAG-based chatbot
A practical code demonstration shows how to build a RAG pipeline using LangChain, ChromaDB for vector storage, and an open-source Llama 2 model to answer questions from a specific document.
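The following condensed sketch shows the kind of pipeline the walkthrough describes (document, chunking, ChromaDB, retrieval, local Llama 2); file names, model paths, and parameters are assumptions, and LangChain import paths vary by version.

```python
# Sketch of a RAG pipeline in the spirit of the walkthrough; details are assumed.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

docs = TextLoader("company_policy.txt").load()                 # hypothetical source document
chunks = RecursiveCharacterTextSplitter(chunk_size=1000,
                                        chunk_overlap=100).split_documents(docs)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = Chroma.from_documents(chunks, embeddings)           # store chunk embeddings in ChromaDB

llm = LlamaCpp(model_path="llama-2-7b-chat.Q4_K_M.gguf",       # local open-source Llama 2 weights
               n_ctx=2048)

qa = RetrievalQA.from_chain_type(llm=llm,
                                 retriever=vectordb.as_retriever(search_kwargs={"k": 3}),
                                 chain_type="stuff")           # stuff retrieved chunks into the prompt
print(qa.run("What is the notice period in the policy?"))
```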
#10 · about 9 minutes
Q&A on integration, offline RAG, and the future of LLMs
The discussion covers integrating LLMs into organizations, running RAG offline, suitability for small businesses, and the evolution towards large action models (LAMs).
Matching moments
07:45 · Using large language models for voice-driven development (Speak, Code, Deploy: Transforming Developer Experience with Voice Commands)
25:25 · Exploring practical industry use cases for LLMs (Exploring LLMs across clouds)
00:04 · Three pillars for integrating LLMs in products (Using LLMs in your Product)
00:27 · Addressing the core challenges of large language models (Accelerating GenAI Development: Harnessing Astra DB Vector Store and Langflow for LLM-Powered Apps)
03:36 · The rapid evolution and adoption of LLMs (Building Blocks of RAG: From Understanding to Implementation)
00:37 · The challenge of applying general LLMs to enterprise problems (Give Your LLMs a Left Brain)
28:51 · Using large language models as a learning tool (Google Gemini: Open Source and Deep Thinking Models - Sam Witteveen)
01:31 · Understanding the core capabilities of large language models (Data Privacy in LLMs: Challenges and Best Practices)
Related Videos
Data Privacy in LLMs: Challenges and Best Practices
Aditi Godbole
How to Avoid LLM Pitfalls - Mete Atamel and Guillaume Laforge
Mete Atamel & Guillaume Laforge
Using LLMs in your Product
Daniel Töws
Lies, Damned Lies and Large Language Models
Jodie Burchell
From Traction to Production: Maturing your LLMOps step by step
Maxim Salnikov
Building Blocks of RAG: From Understanding to Implementation
Ashish Sharma
DevOps for AI: running LLMs in production with Kubernetes and KubeFlow
Aarno Aukia
Exploring LLMs across clouds
Tomislav Tipurić
From learning to earning
Jobs that call for the skills explored in this talk.
AI/ML Team Lead - Generative AI (LLMs, AWS)
Provectus · Remote · €96K · Senior
PyTorch, TensorFlow, Computer Vision, +2
R&D AI Software Engineer / End-to-End Machine Learning Engineer / RAG and LLM
Pathway · Remote · €72-75K
Git, Unit testing, Machine Learning, +1