Roberto Carratalá
One AI API to Power Them All
#1 · about 5 minutes
The challenge of building production-ready AI applications
The current AI landscape is fragmented across many tools, making applications with features like RAG and agents complex to build, scale, and maintain.
#2 · about 3 minutes
Introducing Llama Stack for a unified AI API
Llama Stack, an open-source project from Meta, provides a standardized, modular framework to simplify AI development with a single API for various components.
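The "one API over swappable providers" idea can be sketched in plain Python. This toy Protocol is illustrative only — the class and method names here are invented for the sketch, not Llama Stack's actual client interface:

```python
from typing import Protocol


class InferenceProvider(Protocol):
    """Any backend (a local runtime, a remote server, a hosted API) implements this."""
    def chat(self, model: str, prompt: str) -> str: ...


class LocalProvider:
    def chat(self, model: str, prompt: str) -> str:
        return f"[local:{model}] echo: {prompt}"


class RemoteProvider:
    def chat(self, model: str, prompt: str) -> str:
        return f"[remote:{model}] echo: {prompt}"


def ask(provider: InferenceProvider, model: str, prompt: str) -> str:
    # Application code depends only on the shared interface,
    # so swapping the backend requires no application changes.
    return provider.chat(model, prompt)


print(ask(LocalProvider(), "llama3.2:3b", "Hello"))   # [local:llama3.2:3b] echo: Hello
print(ask(RemoteProvider(), "llama3.2:3b", "Hello"))  # [remote:llama3.2:3b] echo: Hello
```

The point is the shape, not the names: components behind a common interface can be exchanged without touching calling code, which is the standardization the talk describes.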
#3 · about 3 minutes
Standardizing model inference and safety guardrails
Llama Stack abstracts away differences between local and remote LLMs and integrates safety shields to filter harmful inputs and outputs.
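A safety shield wraps the model on both sides: inputs are checked before they reach the model and outputs are checked before they reach the user. A minimal toy version (a keyword blocklist standing in for a real safety model; not Llama Stack's shield implementation):

```python
import re

# Toy blocklist; real shields use a dedicated safety model, not regexes.
BLOCKLIST = [r"\bsteal\b", r"\bbuild a bomb\b"]


def run_shield(message: str) -> dict:
    """Return a violation verdict for one message."""
    for pattern in BLOCKLIST:
        if re.search(pattern, message, re.IGNORECASE):
            return {"violation": True, "reason": f"matched {pattern!r}"}
    return {"violation": False, "reason": None}


def guarded_chat(generate, prompt: str) -> str:
    # Shield the input before inference, and the output after it.
    if run_shield(prompt)["violation"]:
        return "I can't help with that."
    reply = generate(prompt)
    if run_shield(reply)["violation"]:
        return "I can't help with that."
    return reply


print(guarded_chat(lambda p: f"echo: {p}", "How do I steal a car?"))
# I can't help with that.
```

Because the shield is a separate layer, the same model can run with stricter or looser guardrails depending on the deployment.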
#4 · about 2 minutes
Simplifying retrieval-augmented generation (RAG) pipelines
Llama Stack organizes the complex RAG process into three distinct, swappable layers for vector embeddings, retrieval, and agentic workflows.
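The three layers — embedding, retrieval, and the agentic step that consumes the retrieved context — can be sketched end to end with toy stand-ins (a bag-of-words "embedding" and cosine similarity; real pipelines use an embedding model and a vector database):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Layer 1: toy bag-of-words "embedding" (stand-in for a real embedding model).
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


DOCS = [
    "Llama Stack exposes a unified API for inference",
    "Jaeger visualizes distributed traces",
    "MCP standardizes how agents call tools",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]  # stand-in for a vector store


def retrieve(query: str, k: int = 1) -> list[str]:
    # Layer 2: rank stored documents by similarity to the query.
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]


def answer(query: str, generate) -> str:
    # Layer 3: the agentic step stitches retrieved context into the prompt.
    context = "\n".join(retrieve(query))
    return generate(f"Context:\n{context}\n\nQuestion: {query}")
```

Because each layer sits behind its own interface, any one of them — the embedder, the store, or the generation step — can be swapped without rewriting the other two, which is the modularity the chapter describes.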
#5 · about 4 minutes
Building AI agents using the Model Context Protocol
Llama Stack simplifies agent creation by integrating tools, orchestration, and reasoning models through the standardized Model Context Protocol (MCP).
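At its core, MCP-style tool use means tools advertise themselves with a name and description, the model emits a structured call, and the runtime routes it. A toy dispatch loop (the registry and `get_weather` tool are invented for illustration, not part of any MCP SDK):

```python
import json

# Tool registry: each entry carries a description, loosely mirroring
# how an MCP server advertises its tools to the model.
TOOLS: dict = {}


def tool(name: str, description: str):
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register


@tool("get_weather", "Return the current weather for a city")
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # a real server would call a weather API here


def dispatch(call_json: str) -> str:
    # The model emits a structured tool call as JSON; the runtime routes it.
    call = json.loads(call_json)
    return TOOLS[call["name"]]["fn"](**call["arguments"])


print(dispatch('{"name": "get_weather", "arguments": {"city": "Antwerp"}}'))
# Sunny in Antwerp
```

Standardizing the call format is what lets one agent talk to many independent tool servers — the same dispatch logic works whether the tool checks the weather or queries a CRM.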
#6 · about 3 minutes
Gaining application observability with built-in telemetry
Llama Stack provides out-of-the-box telemetry using OpenTelemetry, enabling developers to trace multi-step agent workflows with tools like Jaeger.
#7 · about 4 minutes
A local demo of inference, safety, and agents
This live demo showcases running Llama Stack locally to perform inference, block unsafe prompts, use an agent to check the weather, and inspect traces in Jaeger.
#8 · about 1 minute
Transitioning AI applications from local to production
Llama Stack enables a seamless transition from a local development setup to a scalable production environment on Kubernetes by maintaining a consistent API.
#9 · about 5 minutes
A production demo of a multi-agent business workflow
A complex agent interacts with multiple MCP servers to query a CRM, analyze customer data, send Slack notifications, and generate a PDF report.