Microservices Caching Strategies

Format: Live Virtual or In-Person Training
Duration: 3-Hour Workshop
Instructor: Mark Richards
Students: Up to 50

In this workshop, Mark Richards describes and demonstrates caching strategies and patterns you can use in microservices to increase performance and scalability, manage shared data in a highly distributed architecture, and even synchronize on-prem data from cloud-based microservices. Using live coding examples in Apache Ignite and Hazelcast, he explains the differences between distributed, replicated, and near caches; demonstrates various caching patterns within microservices; discusses the error conditions that arise from data collisions when using replicated caching; and demonstrates various cache eviction policies.

For more information about the pricing and availability of this workshop for private (corporate) training, please contact Mark Richards at info@developertoarchitect.com. For public training offerings of this course, please see the public schedule on the upcoming events page.


Workshop Agenda

Caching Topologies

  • Single in-memory data grid
  • Distributed caching
  • In-memory replicated caching
  • Near-cache hybrids
  • Guidelines for when to use each topology
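
To give a feel for the last topology above: a near cache keeps a local in-process copy of data in front of a distributed cache, so repeated reads avoid a network hop. The sketch below is illustrative only (the `NearCache` class and `distributedLookup` function are our assumptions, not workshop code); Hazelcast and Apache Ignite provide near caches natively through configuration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal near-cache sketch: reads check a local in-process map first and
// fall back to the (simulated) distributed cache on a miss. All names here
// are illustrative, not a real product API.
public class NearCache<K, V> {
    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final Function<K, V> distributedLookup;  // stands in for the remote cache
    private long remoteFetches = 0;

    public NearCache(Function<K, V> distributedLookup) {
        this.distributedLookup = distributedLookup;
    }

    public V get(K key) {
        return local.computeIfAbsent(key, k -> {
            remoteFetches++;                 // cache miss: go to the distributed cache
            return distributedLookup.apply(k);
        });
    }

    public long remoteFetches() { return remoteFetches; }
}
```

The trade-off demonstrated here is the same one covered in the workshop: the local copy makes reads fast, but it is eventually consistent with the backing distributed cache.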


Microservices Caching Patterns and Use Cases

  • Data sharing between services
  • Data sidecars 
  • Multi-instance caching
  • Tuple-space pattern


Data Collisions

  • Understanding data collisions
  • Avoiding data collisions
  • Calculating data collision probability
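
The collision-probability calculation taught in the workshop is Mark's own; as a rough back-of-the-envelope illustration only, if each of N instances issues updates uniformly across a dataset while replication takes some latency window, the chance that a given update conflicts with another instance's update to the same key can be estimated as follows (all parameter names and the uniform-update assumption are ours):

```java
// Back-of-the-envelope estimate of data-collision probability in a
// replicated cache. Assumptions (ours, not the workshop's): each of
// `instances` nodes issues `updateRate` updates/sec spread uniformly over
// `datasetSize` keys, and replication takes `latencySec` seconds.
public class CollisionEstimate {

    // Probability that a single update conflicts with at least one update
    // to the same key made by another instance inside the latency window.
    public static double perUpdateCollisionProbability(
            int instances, double updateRate, double latencySec, long datasetSize) {
        double competingUpdates = (instances - 1) * updateRate * latencySec;
        return 1.0 - Math.pow(1.0 - 1.0 / datasetSize, competingUpdates);
    }

    // Expected number of colliding updates per second across the cluster.
    public static double collisionsPerSecond(
            int instances, double updateRate, double latencySec, long datasetSize) {
        return instances * updateRate
                * perUpdateCollisionProbability(instances, updateRate, latencySec, datasetSize);
    }
}
```

Even this simplified model shows the levers discussed in this section: collision probability grows with instance count, update rate, and replication latency, and shrinks as the dataset gets larger.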


Cache Eviction Policies

  • Time to live (TTL) policy
  • Adaptive replacement cache (ARC) policy
  • Least frequently used (LFU) policy
  • Least recently used (LRU) policy
  • Random replacement (RR) policy
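
Of these policies, LRU is simple enough to sketch in plain Java. The class below is a minimal illustration (not workshop code); in practice, caches such as Hazelcast and Apache Ignite let you select an eviction policy declaratively rather than hand-rolling one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU eviction sketch using LinkedHashMap's access-order mode.
// Illustrative only; real caching products configure eviction policies
// rather than requiring you to implement them.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);       // true = order entries by access, not insertion
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;   // evict the least recently used entry on overflow
    }
}
```

A TTL policy, by contrast, evicts on elapsed time regardless of access pattern, which is why the two are often combined in real deployments.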


©2022 DeveloperToArchitect