JVM Garbage Collection Guide: Collectors, Tuning, and Selection
“The garbage collector is the invisible conductor of your application’s memory symphony. Choose the wrong maestro, and your performance hits dissonance; choose wisely, and even terabytes of data flow in perfect harmony.”
Prologue: The Symphony of Memory
Imagine your JVM application as a grand orchestra performing a complex symphony. The musicians are your objects - strings, brass, woodwinds, percussion - each playing their part in the performance. The garbage collector is the conductor, ensuring that when a musician’s part ends, they exit gracefully, making room for new performers without disrupting the music.
The JVM offers multiple garbage collectors because no single conductor suits every venue. Allocation rates, object lifespans, heap sizes - these memory patterns determine which collector will keep your performance in tune.
Most developers ignore GC until the music stops. Then they’re frantically searching for why their application hit pause times longer than a Mahler symphony. Master your GC choice now, and you’ll never face that 2 AM fire drill.
2026 JVM Reality Check:
- Java 25 LTS released September 2025 - current production standard
- G1 is the default for server-class machines (Java 9+) - but defaults aren’t always optimal
- ZGC is generational by default since Java 23 - the -XX:+ZGenerational flag is deprecated
- Non-generational ZGC removed in Java 24 (JEP 490) - any -XX:-ZGenerational flag will now cause startup failure
- Generational Shenandoah (JEP 521) added in Java 25
- Shenandoah delivers sub-millisecond pauses independent of heap size
- Parallel GC still dominates throughput-focused batch processing
- CMS is deprecated and removed - don’t build new systems on it
- Containerized environments demand GC tuning more than ever
Related: Learn about Java 25’s silent revolution - Project Leyden, Generational Shenandoah, and how modern JVM features can yield significant performance gains with minimal code changes.
Part I: The Generational Hypothesis - The Rhythm of Object Lifetimes
Most Objects Die Young
Before understanding specific collectors, grasp this fundamental truth: the vast majority of objects live brief, intense lives. Like fireflies that flash brightly for a moment and fade, temporary objects - request handlers, calculation intermediates, StringBuilder instances - sparkle into existence and disappear almost immediately.
This observation is the weak generational hypothesis, and it shapes every modern JVM garbage collector.
The heap is divided into two arenas:
- Young Generation: The nursery where new objects are born. Here, objects either mature quickly or perish in the first collection. Minor collections here are frequent but lightning-fast because most objects are already dead.
- Old Generation: The hall of survivors. Objects that endure multiple young collections earn tenure here. Full collections in this space are rare but more expensive.
[        Young Generation        ]   [ Old Generation ]
[  Eden  |  Survivor | Survivor  ]   [    Tenured     ]
    ↑          ↑          ↑                  ↑
  Birth      First      Second           Long-term
           survival    survival          residence
The JVM leverages this lifecycle pattern. Young generation collections (minor GC) happen often but quickly because they’re filled with garbage. Old generation collections (major/full GC) happen rarely but take longer because live data accumulates there.
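The generational churn described above can be observed from inside the JVM using the standard `GarbageCollectorMXBean` API. The sketch below - a minimal illustration, with collector names varying by GC in use (e.g. "G1 Young Generation") - allocates a flood of short-lived objects and reports how many collections ran while they died young:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: allocate short-lived objects and observe collector activity
// via the standard GarbageCollectorMXBean API.
public class GenerationalDemo {

    static long totalCollections() {
        long sum = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            sum += Math.max(0, gc.getCollectionCount()); // -1 means "not supported"
        }
        return sum;
    }

    public static void main(String[] args) {
        long before = totalCollections();
        // Churn: these StringBuilder instances die young, as the
        // weak generational hypothesis predicts.
        for (int i = 0; i < 1_000_000; i++) {
            StringBuilder sb = new StringBuilder("request-");
            sb.append(i);
            if (sb.length() == 0) System.out.println(sb); // defeat dead-code elimination
        }
        long after = totalCollections();
        System.out.println("Collections observed during churn: " + (after - before));
    }
}
```

Run it under different collectors (`-XX:+UseSerialGC`, `-XX:+UseG1GC`, ...) to see how each reports its young-generation activity.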
The Three Performance Dimensions
Every GC decision balances three competing priorities:
| Dimension | Description | When It Matters |
|---|---|---|
| Throughput | Total work done per unit time | Batch processing, data pipelines |
| Latency | Pause times during collection | Web services, trading systems, games |
| Footprint | Memory overhead of GC itself | Resource-constrained containers |
You cannot optimize all three simultaneously. Choose your priority, and the collector choice becomes clear.
Part II: Serial GC - The Solo Virtuoso
One Thread to Rule Them All
The Serial garbage collector is the minimalist’s choice - a single thread performs all GC work. When collection begins, the entire application pauses while this lone worker cleans house.
Use it when:
- Running on single-core machines
- Heap sizes under 100MB
- Embedded systems with minimal resources
- Testing and development environments
java -XX:+UseSerialGC -Xms512m -Xmx512m MyApp
The Performance Profile
| Metric | Serial GC |
|---|---|
| Pause Time | High (stop-the-world) |
| Throughput | Moderate |
| Memory Overhead | Minimal |
| Threads | 1 |
The Serial collector is the acoustic guitar of GCs - no amplification, no effects, just pure simplicity. It doesn’t scale, but for small venues, it’s perfectly adequate.
When to choose Serial over Parallel? Surprisingly, on single-core systems, Serial often outperforms Parallel because there’s no thread coordination overhead. Multiple threads fighting for one CPU core creates more contention than a single disciplined worker.
Part III: Parallel GC - The Full Orchestra
Throughput at All Costs
Parallel GC (also called Throughput Collector) is Serial’s multi-threaded sibling. During young generation collections, multiple threads race to clean up, dramatically reducing pause times compared to Serial. However, old generation collections still stop the world.
This collector prioritizes throughput - total work accomplished - over individual pause times. It’s the heavy metal of garbage collectors: loud, powerful, unapologetic about the noise it makes.
Use it when:
- Batch processing and ETL jobs
- Scientific computing
- Any workload where total throughput matters more than individual pauses
- Large heaps (4GB+) on multi-core systems
java -XX:+UseParallelGC -XX:ParallelGCThreads=8 -Xms8g -Xmx8g MyApp
Tuning the Parallel Collector
# Target maximum pause time (millis)
java -XX:MaxGCPauseMillis=200 ...
# Target throughput percentage (time spent in GC vs app)
java -XX:GCTimeRatio=19 # 1/(1+19) = 5% time in GC
# Explicit thread count (defaults to CPU count)
java -XX:ParallelGCThreads=16 ...
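The GCTimeRatio arithmetic in the comment above is easy to get backwards, so here is the formula made explicit as a tiny helper (an illustration, not a JVM API):

```java
// Sketch: GCTimeRatio=N asks the collector to keep GC time to roughly
// 1/(1+N) of total run time.
public class GcTimeRatio {
    static double gcShare(int gcTimeRatio) {
        return 1.0 / (1 + gcTimeRatio);
    }

    public static void main(String[] args) {
        // GCTimeRatio=19 -> 1/20 = 5% of time in GC
        System.out.printf("GCTimeRatio=19 -> %.1f%% in GC%n", gcShare(19) * 100);
        // Parallel GC's default, GCTimeRatio=99, targets just 1% in GC
        System.out.printf("GCTimeRatio=99 -> %.1f%% in GC%n", gcShare(99) * 100);
    }
}
```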
The Trade-Off
Parallel GC sacrifices latency for throughput. It will pause your application, sometimes for seconds, but when running, your application threads enjoy nearly all CPU resources. For non-interactive workloads, this is often the right choice.
Real-World Scenario: A nightly data transformation job processing terabytes of records doesn’t care about 2-second pauses. It cares about finishing by morning. Parallel GC is the workhorse here.
Part IV: G1 GC - The Section Leader
Region-Based Revolution
G1 (Garbage-First) represents a paradigm shift. Instead of treating the heap as two monolithic spaces (young/old), G1 divides it into many equal-sized regions - typically 1-32MB each. A region can be Eden, Survivor, Old, or Humongous (for objects larger than 50% of a region).
This regional architecture enables incremental collection - G1 can clean parts of the old generation without a full stop-the-world collection.
G1’s Core Strategy: Prioritize regions with the most garbage. Why waste time compacting regions full of live objects when others are 90% garbage?
The Collection Cycle
1. Young Collection (stop-the-world, parallel)
└── Copy live objects from Eden to Survivor regions
2. Concurrent Marking Cycle (mostly concurrent)
└── Mark live objects in Old regions while app runs
3. Mixed Collection (stop-the-world)
└── Evacuate both Young and selected Old regions
└── Target: regions with most garbage first
Configuration for G1
# Enable G1 (default on Java 9+ server-class machines)
java -XX:+UseG1GC -Xms8g -Xmx8g MyApp
# Target maximum pause time (default: 200ms)
java -XX:MaxGCPauseMillis=100 ...
# Region size (default: calculated based on heap, min 1MB, max 32MB)
java -XX:G1HeapRegionSize=16m ...
# Maximum tenuring threshold
java -XX:MaxTenuringThreshold=5 ...
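When you don't set -XX:G1HeapRegionSize explicitly, G1 derives it from the heap size. The sketch below approximates that heuristic - an assumption based on G1 targeting roughly 2048 regions, not the exact HotSpot source:

```java
// Sketch (approximation of HotSpot's heuristic, which targets about
// 2048 regions): region size ≈ heap/2048, rounded down to a power of
// two and clamped to the 1MB..32MB range.
public class G1RegionSize {
    static final long MB = 1024 * 1024;

    static long regionSizeBytes(long heapBytes) {
        long target = heapBytes / 2048;                       // aim for ~2048 regions
        long size = Long.highestOneBit(Math.max(target, MB)); // power of two, floor 1MB
        return Math.min(size, 32 * MB);                       // cap at 32MB
    }

    public static void main(String[] args) {
        System.out.println(regionSizeBytes(8L * 1024 * MB) / MB + "MB region for an 8GB heap");
    }
}
```

For an 8GB heap this yields 4MB regions; very large heaps hit the 32MB cap, which is one reason humongous-object problems get worse as heaps grow.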
When G1 Excels
| Scenario | G1 Advantage |
|---|---|
| Heaps 4-16GB | Efficient region management |
| Predictable pauses | Configurable target (though not guaranteed) |
| Mixed workloads | Balances throughput and latency |
| Humongous objects | Dedicated regions prevent fragmentation |
The Reality Check
G1 is the default for a reason - it’s the safe choice that works reasonably well for most applications. But “reasonable” isn’t “optimal.” Expecting G1 to consistently deliver pauses under 50ms is unrealistic. If you need sub-10ms pauses, look to ZGC or Shenandoah.
2026 Best Practice: Set -Xms equal to -Xmx for any production JVM. Heap resizing triggers full GCs and fragments memory. Pre-allocate and stay fixed.
Java 24 Improvements (JEP 475)
Java 24 brought a significant transparent optimization to G1: late barrier expansion for C2 compiler write barriers (JEP 475). This optimization moves G1 write barrier handling to a later stage in the C2 compilation pipeline, reducing compiler overhead by 10–20%. This is a transparent improvement - no configuration needed, users on Java 24+ benefit automatically.
Part V: ZGC - The Time Lord
Ultra-Low Latency at Scale
Z Garbage Collector (ZGC) is Oracle’s answer to the latency problem. Design goals:
- Pause times under 10ms regardless of heap size
- Scalability to 16TB heaps
- No more than 15% throughput overhead
ZGC achieves this through a combination of techniques:
- Concurrent operations: Marking, relocation, reference processing - all happen while application threads run
- Colored pointers: Pointer metadata tracks object state (marked, relocated, etc.) without touching the object itself
- Load barriers: Every object read checks if relocation is needed, enabling concurrent compaction
- Generational by default (Java 23+): Separates young and old collections for better efficiency - now the standard mode
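The colored-pointer and load-barrier interplay can be sketched as a toy model. This is not ZGC's real bit layout (ZGC uses specific metadata bits and multi-mapped memory); it only illustrates the idea of checking pointer metadata against a "good color" and healing stale pointers on load:

```java
// Toy model (not ZGC's real layout): pack GC metadata into the high
// bits of a 64-bit "pointer". The load barrier compares those bits to
// the current "good" color; a mismatch triggers slow-path healing.
public class ColoredPointerSketch {
    static final long ADDRESS_MASK = (1L << 44) - 1;  // low 44 bits: address
    static final long MARKED0  = 1L << 44;
    static final long MARKED1  = 1L << 45;
    static final long REMAPPED = 1L << 46;

    static long goodColor = REMAPPED;                 // flips as GC phases change

    static long color(long address, long colorBit) {
        return (address & ADDRESS_MASK) | colorBit;
    }

    // Load barrier: fast path when the color is already good,
    // slow path "heals" the pointer to the current good color.
    static long loadBarrier(long ptr) {
        if ((ptr & ~ADDRESS_MASK) == goodColor) {
            return ptr;                               // fast path: nothing to do
        }
        return color(ptr & ADDRESS_MASK, goodColor);  // slow path: heal the pointer
    }

    public static void main(String[] args) {
        long stale = color(0xCAFEL, MARKED0);         // colored during an earlier phase
        long healed = loadBarrier(stale);
        System.out.println((healed & ~ADDRESS_MASK) == goodColor); // prints true
    }
}
```

The key property: once healed, subsequent loads of the same pointer take the fast path, so the barrier cost amortizes away between GC phases.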
The ZGC Architecture
Application Thread GC Thread
| |
v v
┌─────────────┐ ┌──────────────┐
│ Load Barrier│────────────▶│ Colored Ptr │
│ (every │ Check │ Translation │
│ read) │ │ Table │
└─────────────┘ └──────────────┘
│
v
┌──────────────┐
│ Concurrent │
│ Relocation │
└──────────────┘
Enabling ZGC
# Java 15-21 (non-generational)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp
# Java 21-22 (generational ZGC - explicit flag required)
java -XX:+UseZGC -XX:+ZGenerational -Xms16g -Xmx16g MyApp
# Java 23+ (generational by default)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp
# Java 24+ (non-generational mode removed - JEP 490)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp
Note: ZGC is generational by default since Java 23. The -XX:+ZGenerational flag was deprecated in Java 23 and non-generational mode was completely removed in Java 24 (JEP 490). In Java 25, generational ZGC is mature and production-hardened.
Note: ZGC is currently incompatible with Compact Object Headers (JEP 519) - relevant if you’re evaluating COH for other collectors.
When to Choose ZGC
| Requirement | ZGC Advantage |
|---|---|
| Sub-10ms p99 latency | Consistently delivers |
| Large heaps (16GB+) | Pause time independent of heap size |
| Cloud/container environments | Efficient memory return to OS |
| Latency-sensitive services | No tuning typically needed |
The Trade-Off
ZGC uses more CPU and memory than G1 or Parallel. It reserves address space aggressively and has higher memory overhead. But for latency-critical applications, this is a fair exchange.
Container Tip: ZGC works well in containers, but set -XX:MaxRAMPercentage=75.0 to leave headroom for GC overhead. Don’t allocate 100% of container memory to the heap.
Part VI: Shenandoah - The Memory Whisperer
Concurrent Compaction Pioneer
Developed by Red Hat and merged into OpenJDK, Shenandoah shares ZGC’s goals but uses different techniques:
- Concurrent evacuation: Objects are moved while application threads run
- Brooks pointers: Forwarding pointers enable concurrent relocation
- Read barriers: Ensure correctness during concurrent operations
- Pause times independent of heap size - a 200GB heap collects as fast as a 2GB heap
Shenandoah’s Phases
1. Initial Mark (STW, <1ms)
└── Mark roots
2. Concurrent Marking
└── Traverse object graph concurrently
3. Final Mark (STW, <1ms)
└── Complete marking, identify collection set
4. Concurrent Evacuation
└── Copy live objects to new regions
└── Brooks pointers redirect old to new locations
5. Concurrent Update References
└── Fix pointers to relocated objects
└── Application threads help via read barriers
6. Final Update References (STW, <1ms)
└── Update root references
└── Cleanup evacuated regions
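The Brooks-pointer mechanism behind phases 4 and 5 can be sketched in plain Java. This is a conceptual toy, not Shenandoah's implementation (which embeds the forwarding word in the object header and coordinates with compare-and-swap):

```java
// Toy model of a Brooks forwarding pointer: every object carries a
// "forwardee" reference, initially pointing at itself. A read barrier
// always dereferences through the forwardee, so the evacuator can
// redirect readers to the new copy without stopping them.
public class BrooksPointerSketch {
    static final class Obj {
        volatile Obj forwardee = this;   // self-forwarded until evacuated
        int payload;
        Obj(int payload) { this.payload = payload; }
    }

    // Read barrier: always chase the forwarding pointer first.
    static Obj readBarrier(Obj o) { return o.forwardee; }

    // Concurrent evacuation: copy the object, then redirect the
    // old object's forwardee at the new location.
    static Obj evacuate(Obj o) {
        Obj copy = new Obj(o.payload);
        o.forwardee = copy;
        return copy;
    }

    public static void main(String[] args) {
        Obj original = new Obj(42);
        Obj copy = evacuate(original);
        // A stale reference to `original` still yields the new copy:
        System.out.println(readBarrier(original) == copy);  // prints true
    }
}
```

This is why evacuation can proceed while the application runs: readers holding stale references are transparently rerouted instead of blocked.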
Enabling Shenandoah
# Available in OpenJDK 12+, Red Hat/OpenLogic builds
java -XX:+UseShenandoahGC -Xms8g -Xmx8g MyApp
# Heuristics mode
java -XX:ShenandoahGCHeuristics=compact ... # Aggressive compaction
java -XX:ShenandoahGCHeuristics=throughput ... # Maximize throughput
ZGC vs Shenandoah
| Aspect | ZGC | Shenandoah |
|---|---|---|
| Max Heap | 16TB | 16TB |
| Pause Target | <10ms | <10ms |
| Availability | Oracle JDK, OpenJDK | OpenJDK, Red Hat builds |
| Generational | Yes (Java 23+ default, Java 24+ only mode) | Yes (Java 25+) |
| Memory Overhead | Higher | Lower than ZGC |
| Throughput | Slightly lower | Slightly higher |
Both are excellent choices. In 2026, both ZGC and Shenandoah have generational modes. ZGC’s generational implementation has been default since Java 23, while Shenandoah added generational support in Java 25 (JEP 521). For ultra-low latency workloads, both collectors now offer mature, production-ready generational collection.
Part VII: The Epsilon Collector - The Sound of Silence
No-Op for Performance Testing
Epsilon is the anti-collector - it allocates memory but never collects. When the heap fills up, the JVM exits.
Use it for:
- Performance testing (eliminate GC as a variable)
- Short-lived applications (one-shot tools)
- Determining if you need a collector at all (if Epsilon works, you have no GC pressure)
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC MyApp
Warning: Epsilon is not for production. Your application will crash when memory fills. This is by design.
Part VIII: Java 25 Innovations - The New Frontier
Generational Shenandoah (JEP 521)
While ZGC went generational by default in Java 23, Java 25 brought the same treatment to Shenandoah (JEP 521). This is significant because:
- Shenandoah has lower memory overhead than ZGC
- Both collectors now offer generational modes
- Applications with typical object lifetime patterns see reduced pause times
Enable it (Java 25+):
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -Xms8g -Xmx8g MyApp
The Java 25 LTS Recommendation
For production deployments in 2026, Java 25 LTS (released September 2025) should be your target:
# Recommended Java 25 configuration for low-latency services
java \
-XX:+UseZGC \
-XX:MaxGCPauseMillis=10 \
-XX:+AlwaysPreTouch \
-Xms4g \
-Xmx4g \
-jar application.jar
💡 Deep dive: For detailed coverage of Java 25’s performance improvements including Project Leyden’s AOT profiling and Scoped Values, see JVM: The Silent Revolution.
Part IX: Choosing Your Conductor - The Decision Matrix
Quick Selection Guide
┌─────────────────────────────────────────────────────────────┐
│ Question 1: What's your primary constraint? │
├─────────────────────────────────────────────────────────────┤
│ Throughput → Parallel GC │
│ Latency (<10ms) → ZGC or Shenandoah │
│ Balanced → G1 │
│ Minimal resources → Serial │
└─────────────────────────────────────────────────────────────┘
│
v
┌─────────────────────────────────────────────────────────────┐
│ Question 2: What's your heap size? │
├─────────────────────────────────────────────────────────────┤
│ < 1GB → Serial or Parallel │
│ 1-16GB → G1, ZGC, or Shenandoah │
│ > 16GB → ZGC or Shenandoah (only sensible options) │
└─────────────────────────────────────────────────────────────┘
│
v
┌─────────────────────────────────────────────────────────────┐
│ Question 3: What's your availability requirement? │
├─────────────────────────────────────────────────────────────┤
│ Oracle JDK only → ZGC, G1, Parallel, Serial │
│ OpenJDK/Red Hat builds → Shenandoah, ZGC, G1, others │
│ Container/Kubernetes → ZGC (low overhead), Shenandoah │
│ AWS Lambda/Serverless → Serial (platform managed) │
│ Need generational low-latency → ZGC (Java 23+) │
└─────────────────────────────────────────────────────────────┘
Collector Comparison Table
| Collector | Pause Time | Throughput | Heap Range | Use Case |
|---|---|---|---|---|
| Serial | High | Low | <100MB | Single-core, embedded |
| Parallel | Medium | Very High | 1GB-8GB | Batch processing, ETL |
| G1 | Medium (20-200ms) | High | 4GB-16GB | General purpose, default |
| ZGC | Low (<10ms) | High | 8GB-16TB | Low-latency services |
| Shenandoah | Low (<10ms) | High | 8GB-16TB | Low-latency, containers |
Part X: Tuning and Monitoring - Reading the Sheet Music
Essential GC Logging
# Unified logging (Java 9+)
java -Xlog:gc*:file=gc.log:time,uptime,level,tags:filecount=10,filesize=100m ...
# Key metrics to monitor
java -Xlog:gc+heap=info,gc+phases=debug,gc+age=trace ...
Critical Metrics
| Metric | Tool | Healthy Threshold |
|---|---|---|
| Pause Time | GC logs | < target (100ms for G1, 10ms for ZGC) |
| GC Frequency | GC logs | Minor GC every few seconds |
| Heap Usage | JMX, Prometheus | < 70% after full GC |
| Allocation Rate | GC logs | Steady state, not growing |
Common Problems and Solutions
Problem: Frequent Full GCs
- Cause: Heap too small, memory leak, or premature promotion
- Solution: Increase heap, tune young generation size, check for leaks
Problem: Long Pause Times with G1
- Cause: Heap too large, humongous objects, or target too aggressive
- Solution: Reduce heap, increase region size, raise pause target
Problem: ZGC Using Too Much CPU
- Cause: Allocation rate exceeds collection rate
- Solution: Reduce allocation, add heap, or tune concurrent threads
Problem: OutOfMemoryError despite free heap
- Cause: Native memory exhaustion (metaspace, direct buffers)
- Solution: Increase MetaspaceSize, review direct buffer usage
Safepoints - The Hidden Pause Driver
So far this article has focused on GC pauses, but GC is only one of several operations that require your application to pause. Understanding safepoints completes the picture of JVM latency.
A safepoint is a point in program execution where all application threads must pause so the JVM can perform operations that require a consistent view of the heap and thread stacks. These operations include:
- Garbage Collection (stop-the-world phases)
- Deoptimization (reverting optimized code to interpreted mode)
- Class redefinition (hotswap during debugging)
- Thread dumps and heap dumps
- JVMTI operations (profiling, debugging)
The Time-to-Safepoint (TTSP) Problem
When a safepoint operation is requested, the JVM must wait for all application threads to reach a safepoint. This delay is called Time-to-Safepoint (TTSP). Even with ZGC’s sub-millisecond GC pauses, a long TTSP can result in multi-second application pauses.
What causes long TTSP?
- Counted loops without safepoint polls - Tight loops executing billions of iterations without method calls or allocations don’t check for pending safepoints
- Large array copies - System.arraycopy() over large arrays blocks until completion
- JNI calls - Native code cannot be interrupted
- Memory-mapped I/O operations - Reading large mapped files synchronously
Real-world impact: A 2020 analysis of production JVMs showed TTSP delays ranging from 10 seconds to several minutes, even when GC pauses were under 1ms. The application threads simply couldn’t reach safepoints quickly enough.
Diagnosing TTSP Issues
# Enable safepoint logging (verbose)
java -Xlog:safepoint*:file=safepoint.log:time,uptime,level,tags:filecount=10,filesize=100m ...
# Key fields in unified log output:
# "Reaching safepoint" = TTSP (time waiting for threads)
# "At safepoint" = VM operation time (GC, deopt, etc.)
# High "Reaching safepoint" with low "At safepoint" = TTSP problem
Interpreting safepoint logs:
Modern unified logging (-Xlog:safepoint*) outputs entries showing the time breakdown for each safepoint operation. Look for these phases:
- Reaching safepoint - Time waiting for all threads to reach a safepoint (TTSP)
- At safepoint - Time spent performing the actual VM operation (GC, deoptimization, etc.)
- Total - Combined time from request to completion
If “Reaching safepoint” time exceeds your latency budget while “At safepoint” time is low, you have a TTSP problem, not a GC problem.
Fixing TTSP Issues
- Add Thread.yield() or Thread.sleep(0) in tight loops processing large datasets
- Chunk large array operations - Process 1MB at a time instead of 1GB
- Use async file I/O instead of blocking memory-mapped operations
- Review JNI usage - Native code cannot be interrupted; minimize time spent in JNI calls
Modern JVMs (Java 10+) use polling-page-based safepoint mechanisms, making timer-based tuning largely unnecessary. TTSP issues require application-level fixes rather than JVM flags.
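The "chunk large array operations" fix above can be sketched as follows - a minimal illustration where the chunk size is an arbitrary choice, not a prescribed value:

```java
import java.util.Arrays;

// Sketch of the chunking fix: copy a large array in bounded slices so
// control regularly returns from System.arraycopy, giving the JVM
// frequent opportunities to reach a safepoint between chunks.
public class ChunkedCopy {
    static final int CHUNK = 1 << 20;   // 1M elements per slice (tune to taste)

    static byte[] copyChunked(byte[] src) {
        byte[] dst = new byte[src.length];
        for (int pos = 0; pos < src.length; pos += CHUNK) {
            int len = Math.min(CHUNK, src.length - pos);
            System.arraycopy(src, pos, dst, pos, len);
            // A pending safepoint can be honored here, between chunks,
            // instead of waiting for one giant arraycopy to finish.
        }
        return dst;
    }

    public static void main(String[] args) {
        byte[] src = new byte[3 * CHUNK + 123];
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] dst = copyChunked(src);
        System.out.println(Arrays.equals(src, dst));  // prints true
    }
}
```

The same slicing idea applies to large buffer fills, serialization loops, and memory-mapped reads: bound the uninterruptible unit of work.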
Workload-Specific Tuning Profiles
Selecting the right collector is only the beginning - configuring it for your specific workload pattern is what separates production-ready applications from those that fail under load. Different workload archetypes have fundamentally different allocation patterns, object lifetimes, and latency requirements.
Here are four archetypal workload patterns with specific tuning recommendations:
Profile 1: High-Frequency REST API
Characteristics: Massive volume of short-lived request/response objects, strict latency SLAs (p99 < 100ms)
Collector Choice: ZGC (generational, Java 23+) or Shenandoah (Java 25+)
Recommended Configuration:
java -XX:+UseZGC \
-XX:MaxGCPauseMillis=5 \
-XX:+AlwaysPreTouch \
-XX:MaxRAMPercentage=75.0 \
-Xms4g -Xmx4g \
-jar api-service.jar
Tuning Notes:
- Set aggressive pause target (5ms) for API latency
- Pre-touch pages to avoid allocation pauses during traffic spikes
- Monitor allocation rate - if > 1GB/s, consider increasing heap or optimizing object creation
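The allocation-rate check above can be done in-process without parsing GC logs. The sketch below relies on an assumption: a HotSpot-based JVM whose ThreadMXBean implements the com.sun.management extension (getThreadAllocatedBytes); on other JVMs the cast may fail or the call may return -1.

```java
import java.lang.management.ManagementFactory;

// HotSpot-specific sketch: measure bytes allocated by the current
// thread via the com.sun.management.ThreadMXBean extension.
public class AllocationRate {
    static long allocatedByCurrentThread() {
        com.sun.management.ThreadMXBean bean =
            (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        return bean.getThreadAllocatedBytes(Thread.currentThread().getId());
    }

    public static void main(String[] args) {
        long before = allocatedByCurrentThread();
        byte[][] garbage = new byte[64][];
        for (int i = 0; i < garbage.length; i++) {
            garbage[i] = new byte[1024 * 1024];   // ~64MB of short-lived arrays
        }
        long delta = allocatedByCurrentThread() - before;
        System.out.printf("Allocated roughly %d MB%n", delta / (1024 * 1024));
    }
}
```

Sampling this per request thread and dividing by wall-clock time gives a live allocation rate you can alert on.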
Profile 2: Stateful WebSocket/Streaming Service
Characteristics: Long-lived connections (minutes to hours), gradual old generation growth, steady-state allocation
Collector Choice: G1 GC or Shenandoah
Recommended Configuration:
java -XX:+UseG1GC \
-XX:MaxGCPauseMillis=100 \
-XX:G1HeapRegionSize=16m \
-XX:InitiatingHeapOccupancyPercent=35 \
-XX:+UseStringDeduplication \
-Xms8g -Xmx8g \
-jar streaming-service.jar
Tuning Notes:
- Larger regions (16MB) reduce region management overhead for long sessions
- Lower IHOP (35%) starts old gen collection earlier, preventing full GC
- String deduplication helps with repeated JSON payloads in WebSocket frames
- Monitor tenuring threshold - long-lived session objects should promote quickly to old gen
Profile 3: Data Pipeline / ETL / Batch Processing
Characteristics: Throughput priority, predictable object lifetimes, large working sets, temporary spikes
Collector Choice: Parallel GC
Recommended Configuration:
java -XX:+UseParallelGC \
-XX:ParallelGCThreads=16 \
-XX:GCTimeRatio=19 \
-XX:MaxGCPauseMillis=1000 \
-Xms32g -Xmx32g \
-jar etl-job.jar
Tuning Notes:
- Maximize throughput over latency (1s pauses acceptable for batch)
- Match GC threads to available CPU cores
- Large heap accommodates working set without premature promotion
- Monitor GC overhead - should be < 5% of total runtime
Profile 4: Machine Learning Inference Service
Characteristics: Large primitive arrays, off-heap buffers (DirectByteBuffer), irregular allocation spikes, mixed object sizes
Collector Choice: ZGC (large heap support) or G1 with humongous region tuning
Recommended Configuration:
java -XX:+UseZGC \
-XX:MaxDirectMemorySize=8g \
-XX:+UseLargePages \
-XX:+AlwaysPreTouch \
-XX:MaxRAMPercentage=80.0 \
-Xms24g -Xmx24g \
-jar ml-inference.jar
Tuning Notes:
- Large pages improve TLB performance for array access
- Monitor direct memory separately from heap - native OOM can occur despite free heap
- ZGC’s scalability to 16TB heaps handles large model weights efficiently
- Consider off-heap memory management (Foreign Memory API) for model tensors to reduce GC pressure
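The off-heap suggestion above can be illustrated with a direct buffer - shown here with DirectByteBuffer for portability; Java 22+'s Foreign Memory API (Arena/MemorySegment) is the modern alternative:

```java
import java.nio.ByteBuffer;

// Sketch: keep a large, long-lived tensor off-heap in a direct buffer
// so its bytes never contribute to GC marking or evacuation work.
public class OffHeapTensor {
    public static void main(String[] args) {
        // 1M floats (~4MB) allocated outside the Java heap; counted
        // against -XX:MaxDirectMemorySize, not -Xmx.
        ByteBuffer tensor = ByteBuffer.allocateDirect(1_000_000 * Float.BYTES);

        tensor.putFloat(0, 3.14f);              // absolute put: no per-element object
        System.out.println(tensor.isDirect());  // prints true
        System.out.println(tensor.getFloat(0)); // prints 3.14
    }
}
```

Note the trade-off: off-heap memory is invisible to the collector, which is exactly why you must budget it with -XX:MaxDirectMemorySize and monitor it separately (see the diagnostic commands below).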
Key Diagnostic Commands:
# Check direct buffer usage
jcmd <pid> VM.native_memory summary
# Monitor allocation rate in real-time
jstat -gc <pid> 1s | awk '{print $6}' # EU (Eden Used) - Column 6
# Profile allocation by object type
java -XX:StartFlightRecording=filename=allocation-profile.jfr,duration=60s \
-XX:FlightRecorderOptions=stackdepth=128 \
-XX:+UnlockDiagnosticVMOptions \
-XX:+DebugNonSafepoints ...
Part XI: Container and Cloud Considerations
The Container Challenge
Containers break assumptions that traditional GC tuning relied on:
- CPU limits: GC threads may exceed cgroup limits, causing throttling
- Memory limits: OOM killer terminates containers before GC can react
- Noisy neighbors: Shared resources affect GC timing
Container-Optimized Settings
# Java 17+ container awareness
java -XX:+UseContainerSupport \
-XX:MaxRAMPercentage=75.0 \
-XX:InitialRAMPercentage=75.0 \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=100 \
MyApp
# ZGC in Kubernetes (Java 23+ - generational by default)
java -XX:+UseZGC \
-XX:MaxRAMPercentage=75.0 \
-XX:+AlwaysPreTouch \
MyApp
Cloud-Native Recommendations
| Environment | Recommended Collector | Configuration Notes |
|---|---|---|
| AWS Lambda | Serial (forced) | Minimal cold start |
| Kubernetes | ZGC or G1 | Set resource limits appropriately |
| Serverless | N/A (platform managed) | Focus on allocation reduction |
| EC2/VMs | Any | Full control, tune to workload |
Epilogue: The Art of Memory Management
Garbage collection in the JVM is not a necessary evil - it’s a sophisticated automation that eliminates entire classes of bugs (memory leaks, double frees, dangling pointers) that plague manual memory management languages. The cost is unpredictability, but modern collectors have reduced that cost to negligible levels for most applications.
The key insights:
- Start with defaults: G1 is the default because it works well enough for most
- Measure before tuning: Don’t optimize what you haven’t measured
- Match collector to constraints: Throughput? Parallel. Latency? ZGC. Balanced? G1
- Size your heap correctly: -Xms = -Xmx in production, always
- Monitor continuously: GC behavior changes with workload patterns
Your application is a symphony. The garbage collector ensures every musician - every object - enters and exits at the right moment. Choose your conductor wisely, tune your instruments properly, and your performance will resonate with users.
The Road Ahead: Project Leyden and AOT Profiling
The JVM continues to evolve toward even more intelligent memory management. Project Leyden’s AOT (Ahead-of-Time) profiling (JEPs 514 and 515) represents a significant advancement that indirectly benefits all garbage collectors. By capturing and reusing profiling data across JVM restarts, AOT profiling reduces the JIT compilation warmup period. This translates to reduced allocation rate variance during the application’s early phase, which means less pressure on the young generation and more predictable GC behavior from the moment your application starts serving traffic. For latency-sensitive services, this elimination of warmup-induced allocation spikes is another step toward truly consistent performance.
Updated for February 2026. JVM garbage collection has evolved rapidly - ZGC is generational by default since Java 23, non-generational ZGC was removed in Java 24 (JEP 490), Shenandoah added generational mode in Java 25 (JEP 521), and G1 gained compiler barrier optimizations in Java 24 (JEP 475). The future points toward zero-configuration, self-tuning collectors, but understanding these fundamentals remains essential for every JVM developer.
Quick Reference
Collector Selection Flowchart
Single core? ──Yes──▶ Serial GC
│
No
│
Heap < 1GB? ──Yes──▶ Serial or Parallel
│
No
│
Latency critical (<10ms)? ──Yes──▶ ZGC or Shenandoah
│
No
│
Throughput priority? ──Yes──▶ Parallel GC
│
No
▼
Default: G1 GC (works for most cases)
JVM Flags Cheat Sheet
| Goal | Flags |
|---|---|
| Enable G1 | -XX:+UseG1GC |
| Enable ZGC | -XX:+UseZGC |
| Enable Shenandoah | -XX:+UseShenandoahGC |
| Enable Parallel | -XX:+UseParallelGC |
| Target pause time | -XX:MaxGCPauseMillis=100 |
| Fixed heap size | -Xms8g -Xmx8g |
| Container support | -XX:+UseContainerSupport |
| GC logging | -Xlog:gc*:file=gc.log |
Further Reading
JVM Architecture & Performance:
- JVM: The Silent Revolution - Java 25’s Compact Object Headers, Project Leyden, and zero-code-change performance gains
- Java Performance: The Definitive Guide - Scott Oaks
Garbage Collector Documentation:
- ZGC Wiki - Official ZGC documentation
- Shenandoah Wiki - Shenandoah project page
- G1 GC Tuning Guide - Oracle’s official guide for Java 25
JEP References (Java 24-25):
- JEP 519: Compact Object Headers - Object header compression
- JEP 521: Generational Shenandoah - Generational mode for Shenandoah GC
- JEP 474: ZGC: Generational Mode by Default - made generational ZGC the default in Java 23
Research and Case Studies:
- Chirumamilla, P. et al. “Java Virtual Threads: a Case Study.” InfoQ, July 2024. Production analysis of virtual thread memory behavior and GC implications.
Master the orchestra of memory management, and your JVM applications will perform with the precision of a world-class symphony - responsive, efficient, and beautifully tuned to the demands of your users.