Natan Silnitsky
Advanced Caching Patterns used by 2000 microservices
#1 · about 7 minutes
Why caching is critical for services at scale
Caching reduces latency, lowers infrastructure costs, and improves reliability by making services less dependent on databases or third-party services.
#2 · about 1 minute
Knowing when not to implement a cache
Avoid adding a cache prematurely for young products with low traffic, as it introduces unnecessary complexity, potential bugs, and additional failure points.
#3 · about 4 minutes
Caching critical configuration with an S3-backed cache
Use a read-through cache backed by S3 to store static, rarely updated configuration data, ensuring service startup reliability even when dependencies are down. A minimal sketch of this pattern follows the chapter outline.
#4 · about 6 minutes
Building a dynamic LRU cache with DynamoDB and CDC
Implement a cache-aside pattern using an in-memory LRU cache backed by DynamoDB and populated via Kafka CDC streams to reduce database load for frequently accessed data. A sketch of the read path and CDC loop appears after the outline.
#5 · about 5 minutes
Using Kafka compact topics for in-memory datasets
For smaller datasets, use Kafka's compact topics to maintain a complete, up-to-date copy of the data in memory for each service instance. A sketch of replaying a compacted topic appears after the outline.
#6 · about 6 minutes
Implementing an HTTP reverse proxy cache with Varnish
Use a reverse proxy like Varnish Cache with a robust invalidation strategy to dramatically reduce response times for services with expensive computations like server-side rendering. An invalidation sketch appears after the outline.
#7 · about 4 minutes
A decision tree for choosing the right caching pattern
Follow a simple flowchart to select the appropriate caching strategy based on whether the data is for startup, dynamic retrieval, or stable HTTP responses.
#8 · about 12 minutes
Q&A on caching strategies and implementation details
The discussion covers HTTP header caching, custom invalidation logic, handling the "thundering herd" problem, and the choice of JVM for high-performance services. A request-coalescing sketch for the thundering-herd case closes out the sketches below.
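
Chapter #3 describes a read-through cache backed by S3 for critical startup configuration. The sketch below is a minimal Java illustration of that idea using the AWS SDK for Java v2: try the live config source first, refresh the S3 backup on success, and fall back to the last good copy in S3 when the dependency is down. The class, bucket, and key names are assumptions for illustration, not the implementation shown in the talk.

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    // Hypothetical read-through config cache: live source first, S3 backup as fallback.
    public class S3ConfigCache {
        private final S3Client s3 = S3Client.create();
        private final String bucket = "my-service-config-backup";   // assumed bucket name
        private final String key = "critical-config.json";          // assumed object key

        public String loadConfig(ConfigSource source) {
            try {
                String fresh = source.fetch();                       // remote config service
                // Refresh the S3 backup so the next cold start can survive an outage.
                s3.putObject(PutObjectRequest.builder().bucket(bucket).key(key).build(),
                             RequestBody.fromString(fresh));
                return fresh;
            } catch (Exception e) {
                // Dependency unavailable: start up from the S3-backed copy instead.
                return s3.getObjectAsBytes(
                        GetObjectRequest.builder().bucket(bucket).key(key).build())
                    .asUtf8String();
            }
        }

        public interface ConfigSource { String fetch(); }
    }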
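
For chapter #4, a cache-aside read path with an in-memory LRU in front of DynamoDB, kept current by a Kafka CDC topic, could look roughly like the following. The table name, topic name, and attribute names are assumptions; a production version would use a purpose-built cache library rather than a LinkedHashMap.

    import java.time.Duration;
    import java.util.*;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
    import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

    // Hypothetical cache-aside store: in-memory LRU in front of DynamoDB,
    // with a CDC topic pushing updates and deletions into the cache.
    public class LruCacheAsideStore {
        private static final int MAX_ENTRIES = 10_000;

        // LinkedHashMap in access order gives a simple LRU eviction policy.
        private final Map<String, String> lru = Collections.synchronizedMap(
            new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

        private final DynamoDbClient dynamo = DynamoDbClient.create();
        private final String table = "user-settings";   // assumed table name

        // Cache-aside read path: check memory first, fall back to DynamoDB on a miss.
        public String get(String id) {
            String cached = lru.get(id);
            if (cached != null) return cached;
            String fromDb = dynamo.getItem(GetItemRequest.builder()
                    .tableName(table)
                    .key(Map.of("id", AttributeValue.builder().s(id).build()))
                    .build())
                .item().get("payload").s();              // assumes the item and attribute exist
            lru.put(id, fromDb);
            return fromDb;
        }

        // CDC loop: apply change events so cached entries never go stale.
        public void applyCdc(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(List.of("user-settings-cdc"));   // assumed CDC topic
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    if (rec.value() == null) lru.remove(rec.key());   // delete event
                    else lru.replace(rec.key(), rec.value());         // refresh only keys already cached
                }
            }
        }
    }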
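
Chapter #5's compacted-topic approach boils down to replaying the topic from the beginning on startup and then consuming forever, so each instance holds the full dataset in memory. The sketch below assigns partitions explicitly instead of joining a consumer group, because every instance needs its own complete copy; the class and topic handling are illustrative assumptions.

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical in-memory view of a small dataset kept in a Kafka compacted topic.
    public class CompactTopicView {
        private final Map<String, String> view = new ConcurrentHashMap<>();

        public void run(KafkaConsumer<String, String> consumer, String topic) {
            // Assign all partitions and rewind to the beginning: no consumer group,
            // because each instance must see every key.
            List<TopicPartition> partitions = new ArrayList<>();
            consumer.partitionsFor(topic).forEach(p ->
                partitions.add(new TopicPartition(topic, p.partition())));
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);

            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    if (rec.value() == null) view.remove(rec.key());   // tombstone: entry deleted
                    else view.put(rec.key(), rec.value());             // latest value per key wins
                }
            }
        }

        public String lookup(String key) { return view.get(key); }
    }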
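
Chapter #6 pairs Varnish with a robust invalidation strategy. One common shape for the invalidation side is the application telling Varnish to drop cached responses when the underlying content changes. The sketch below sends a BAN request with Java's built-in HttpClient; it assumes a Varnish VCL that accepts BAN requests from trusted callers and matches on a custom x-ban-url header, and the host address is made up for illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical invalidation hook: when content changes, ask Varnish to drop
    // the cached responses for that URL pattern so the next request re-renders.
    public class VarnishInvalidator {
        private final HttpClient http = HttpClient.newHttpClient();
        private final String varnishHost = "http://varnish.internal:6081";   // assumed address

        public void banPath(String pathPattern) throws Exception {
            HttpRequest ban = HttpRequest.newBuilder()
                .uri(URI.create(varnishHost))
                .method("BAN", HttpRequest.BodyPublishers.noBody())   // non-standard HTTP method
                .header("x-ban-url", pathPattern)                     // matched by custom VCL logic
                .build();
            HttpResponse<Void> resp = http.send(ban, HttpResponse.BodyHandlers.discarding());
            if (resp.statusCode() != 200) {
                throw new IllegalStateException("Varnish ban failed: " + resp.statusCode());
            }
        }
    }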
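
Finally, the thundering-herd problem raised in the Q&A (chapter #8) is often mitigated by coalescing concurrent misses for the same key into a single load, so only one request hits the backing store while the rest share its result. The generic "single flight" helper below is my own illustration of that technique, not necessarily the mitigation described in the talk.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Hypothetical single-flight helper: concurrent misses for the same key share one load.
    public class SingleFlightLoader<K, V> {
        private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

        public CompletableFuture<V> load(K key, Function<K, V> loader) {
            CompletableFuture<V> f = inFlight.computeIfAbsent(key, k ->
                CompletableFuture.supplyAsync(() -> loader.apply(k)));   // one backend call per key
            f.whenComplete((value, error) -> inFlight.remove(key, f));   // clear only our own entry
            return f;
        }
    }

A cache miss path would then call something like loader.load(id, this::fetchFromDb).join(), so a burst of identical misses produces a single backend request.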