Michał Artur Marciniak

JVM Garbage Collection Guide: Collectors, Tuning, and Selection


“The garbage collector is the invisible conductor of your application’s memory symphony. Choose the wrong maestro, and your performance hits dissonance; choose wisely, and even terabytes of data flow in perfect harmony.”


Prologue: The Symphony of Memory

Imagine your JVM application as a grand orchestra performing a complex symphony. The musicians are your objects - strings, brass, woodwinds, percussion - each playing their part in the performance. The garbage collector is the conductor, ensuring that when a musician’s part ends, they exit gracefully, making room for new performers without disrupting the music.

The JVM offers multiple garbage collectors because no single conductor suits every venue. Allocation rates, object lifespans, heap sizes - these memory patterns determine which collector will keep your performance in tune.

Most developers ignore GC until the music stops. Then they’re frantically searching for why their application hit pause times longer than a Mahler symphony. Master your GC choice now, and you’ll never face that 2 AM fire drill.

2026 JVM Reality Check:

  • Java 25 LTS released September 2025 - current production standard
  • G1 is the default for server-class machines (Java 9+) - but defaults aren’t always optimal
  • ZGC is generational by default since Java 23 - -XX:+ZGenerational flag deprecated
  • Non-generational ZGC removed in Java 24 (JEP 490) - the -XX:±ZGenerational flag is now obsolete and ignored with a warning
  • Generational Shenandoah (JEP 521) added in Java 25
  • Shenandoah delivers sub-millisecond pauses independent of heap size
  • Parallel GC still dominates throughput-focused batch processing
  • CMS is deprecated and removed - don’t build new systems on it
  • Containerized environments demand GC tuning more than ever

Related: Learn about Java 25’s silent revolution - Project Leyden, Generational Shenandoah, and how modern JVM features can yield significant performance gains with minimal code changes.


Part I: The Generational Hypothesis - The Rhythm of Object Lifetimes

Most Objects Die Young

Before understanding specific collectors, grasp this fundamental truth: the vast majority of objects live brief, intense lives. Like fireflies that flash brightly for a moment and fade, temporary objects - request handlers, calculation intermediates, StringBuilder instances - sparkle into existence and disappear almost immediately.

This observation is the weak generational hypothesis, and it shapes every modern JVM garbage collector.

The heap is divided into two arenas:

[ Young Generation ]          [    Old Generation    ]
[ Eden | Survivor | Survivor ] [       Tenured        ]
  ↑      ↑          ↑              ↑
Birth   First      Second       Long-term
       survival   survival      residence

The JVM leverages this lifecycle pattern. Young generation collections (minor GC) happen often but quickly because they’re filled with garbage. Old generation collections (major/full GC) happen rarely but take longer because live data accumulates there.
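The die-young pattern is easy to observe from inside a running JVM. The following sketch uses the standard java.lang.management API (no special flags assumed) to allocate a burst of short-lived arrays and report how many collections the registered collectors performed during the burst:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class YoungGenDemo {
    public static void main(String[] args) {
        List<GarbageCollectorMXBean> gcs = ManagementFactory.getGarbageCollectorMXBeans();
        long before = gcs.stream().mapToLong(GarbageCollectorMXBean::getCollectionCount).sum();

        // Allocate a burst of short-lived objects: classic "die young" garbage.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] temp = new byte[1024]; // unreachable as soon as the iteration ends
            if (temp.length == 0) System.out.println(); // keep the allocation observable
        }

        long after = gcs.stream().mapToLong(GarbageCollectorMXBean::getCollectionCount).sum();
        System.out.println("GC beans: " + gcs.size() + ", collections during burst: " + (after - before));
    }
}
```

On most collectors the ~1GB of transient allocations triggers several minor collections, while old-generation counts barely move - the generational hypothesis in action.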

The Three Performance Dimensions

Every GC decision balances three competing priorities:

| Dimension  | Description                   | When It Matters                      |
|------------|-------------------------------|--------------------------------------|
| Throughput | Total work done per unit time | Batch processing, data pipelines     |
| Latency    | Pause times during collection | Web services, trading systems, games |
| Footprint  | Memory overhead of GC itself  | Resource-constrained containers      |

You cannot optimize all three simultaneously. Choose your priority, and the collector choice becomes clear.


Part II: Serial GC - The Solo Virtuoso

One Thread to Rule Them All

The Serial garbage collector is the minimalist’s choice - a single thread performs all GC work. When collection begins, the entire application pauses while this lone worker cleans house.

Use it when:

  • The heap is small - a few hundred megabytes or less
  • You run on a single-core (or single-vCPU) machine
  • GC memory and CPU overhead must stay minimal - embedded devices, simple CLI tools

java -XX:+UseSerialGC -Xms512m -Xmx512m MyApp

The Performance Profile

| Metric          | Serial GC             |
|-----------------|-----------------------|
| Pause Time      | High (stop-the-world) |
| Throughput      | Moderate              |
| Memory Overhead | Minimal               |
| Threads         | 1                     |

The Serial collector is the acoustic guitar of GCs - no amplification, no effects, just pure simplicity. It doesn’t scale, but for small venues, it’s perfectly adequate.

When to choose Serial over Parallel? Surprisingly, on single-core systems, Serial often outperforms Parallel because there’s no thread coordination overhead. Multiple threads fighting for one CPU core creates more contention than a single disciplined worker.


Part III: Parallel GC - The Full Orchestra

Throughput at All Costs

Parallel GC (also called Throughput Collector) is Serial’s multi-threaded sibling. During young generation collections, multiple threads race to clean up, dramatically reducing pause times compared to Serial. However, old generation collections still stop the world.

This collector prioritizes throughput - total work accomplished - over individual pause times. It’s the heavy metal of garbage collectors: loud, powerful, unapologetic about the noise it makes.

Use it when:

  • Throughput matters more than individual pause times
  • The workload is batch processing, ETL, or other non-interactive jobs
  • You have multiple cores and can tolerate multi-second stop-the-world pauses

java -XX:+UseParallelGC -XX:ParallelGCThreads=8 -Xms8g -Xmx8g MyApp

Tuning the Parallel Collector

# Target maximum pause time (millis)
java -XX:MaxGCPauseMillis=200 ...

# Target throughput percentage (time spent in GC vs app)
java -XX:GCTimeRatio=19  # 1/(1+19) = 5% time in GC

# Explicit thread count (defaults to CPU count)
java -XX:ParallelGCThreads=16 ...
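The GCTimeRatio arithmetic generalizes: for -XX:GCTimeRatio=N the target GC share of total time is 1/(1+N). A tiny helper makes the relationship explicit:

```java
public class GcTimeRatio {
    // Fraction of total time the JVM may spend in GC for -XX:GCTimeRatio=N:
    // the goal is app time : GC time = N : 1, so GC fraction = 1 / (1 + N).
    static double gcFraction(int gcTimeRatio) {
        return 1.0 / (1 + gcTimeRatio);
    }

    public static void main(String[] args) {
        System.out.println(gcFraction(19)); // 0.05 -> 5% GC budget
        System.out.println(gcFraction(99)); // 0.01 -> 1% GC budget (the Parallel GC default)
    }
}
```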

The Trade-Off

Parallel GC sacrifices latency for throughput. It will pause your application, sometimes for seconds, but when running, your application threads enjoy nearly all CPU resources. For non-interactive workloads, this is often the right choice.

Real-World Scenario: A nightly data transformation job processing terabytes of records doesn’t care about 2-second pauses. It cares about finishing by morning. Parallel GC is the workhorse here.


Part IV: G1 GC - The Section Leader

Region-Based Revolution

G1 (Garbage-First) represents a paradigm shift. Instead of treating the heap as two monolithic spaces (young/old), G1 divides it into many equal-sized regions - typically 1-32MB each. A region can be Eden, Survivor, Old, or Humongous (for objects larger than 50% of a region).

This regional architecture enables incremental collection - G1 can clean parts of the old generation without a full stop-the-world collection.

G1’s Core Strategy: Prioritize regions with the most garbage. Why waste time compacting regions full of live objects when others are 90% garbage?

The Collection Cycle

1. Young Collection (stop-the-world, parallel)
   └── Copy live objects from Eden to Survivor regions

2. Concurrent Marking Cycle (mostly concurrent)
   └── Mark live objects in Old regions while app runs

3. Mixed Collection (stop-the-world)
   └── Evacuate both Young and selected Old regions
   └── Target: regions with most garbage first
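The "garbage-first" selection in step 3 can be sketched as a greedy algorithm: sort old regions by reclaimable garbage and add them to the collection set until the estimated copying cost exhausts the pause budget. This is a toy model - the Region type and the live-data cost estimate are hypothetical, not G1's actual cost function - but it captures the idea:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class GarbageFirst {
    // Toy model of a heap region: fixed size, some fraction still live.
    record Region(String name, int sizeMb, int liveMb) {
        int garbageMb() { return sizeMb - liveMb; }
    }

    // Pick regions with the most garbage first, while the estimated
    // evacuation cost (copying live data) stays within the pause budget.
    static List<Region> chooseCollectionSet(List<Region> oldRegions, int liveBudgetMb) {
        List<Region> byGarbage = new ArrayList<>(oldRegions);
        byGarbage.sort(Comparator.comparingInt(Region::garbageMb).reversed());
        List<Region> cset = new ArrayList<>();
        int liveToCopy = 0;
        for (Region r : byGarbage) {
            if (liveToCopy + r.liveMb() > liveBudgetMb) break;
            cset.add(r);
            liveToCopy += r.liveMb();
        }
        return cset;
    }

    public static void main(String[] args) {
        List<Region> old = List.of(
                new Region("A", 16, 15),  // mostly live: expensive to copy, low payoff
                new Region("B", 16, 2),   // ~87% garbage: prime target
                new Region("C", 16, 5));
        System.out.println(chooseCollectionSet(old, 8)); // picks B, then C; skips A
    }
}
```

Region A never gets evacuated here: copying 15MB of live data for 1MB of reclaimed space is exactly the work G1 tries to avoid.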

Configuration for G1

# Enable G1 (default on Java 9+ server-class machines)
java -XX:+UseG1GC -Xms8g -Xmx8g MyApp

# Target maximum pause time (default: 200ms)
java -XX:MaxGCPauseMillis=100 ...

# Region size (default: calculated based on heap, min 1MB, max 32MB)
java -XX:G1HeapRegionSize=16m ...

# Maximum tenuring threshold
java -XX:MaxTenuringThreshold=5 ...

When G1 Excels

| Scenario           | G1 Advantage                                |
|--------------------|---------------------------------------------|
| Heaps 4-16GB       | Efficient region management                 |
| Predictable pauses | Configurable target (though not guaranteed) |
| Mixed workloads    | Balances throughput and latency             |
| Humongous objects  | Dedicated regions prevent fragmentation     |

The Reality Check

G1 is the default for a reason - it’s the safe choice that works reasonably well for most applications. But “reasonable” isn’t “optimal.” Consistently holding G1 pauses under 50ms on multi-gigabyte heaps is unrealistic. If you need sub-10ms pauses, look to ZGC or Shenandoah.

2026 Best Practice: Set -Xms equal to -Xmx for any production JVM. Heap resizing triggers full GCs and fragments memory. Pre-allocate and stay fixed.

Java 24 Improvements (JEP 475)

Java 24 brought a significant transparent optimization to G1: late barrier expansion for C2 compiler write barriers (JEP 475). This optimization moves G1 write barrier handling to a later stage in the C2 compilation pipeline, reducing compiler overhead by 10–20%. This is a transparent improvement - no configuration needed, users on Java 24+ benefit automatically.


Part V: ZGC - The Time Lord

Ultra-Low Latency at Scale

Z Garbage Collector (ZGC) is Oracle’s answer to the latency problem. Design goals:

  • Pause times below one millisecond
  • Pauses that do not increase with heap size
  • Heaps from a few hundred megabytes up to 16TB

ZGC achieves this through a combination of techniques:

  1. Concurrent operations: Marking, relocation, reference processing - all happen while application threads run
  2. Colored pointers: Pointer metadata tracks object state (marked, relocated, etc.) without touching the object itself
  3. Load barriers: Every object read checks if relocation is needed, enabling concurrent compaction
  4. Generational by default (Java 23+): Separates young and old collections for better efficiency - now the standard mode
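The colored-pointer idea (point 2) can be illustrated with plain bit masking. This is a deliberately simplified sketch - the bit positions below are hypothetical, not ZGC's actual layout - but it shows how GC metadata can ride along in otherwise unused pointer bits:

```java
public class ColoredPointer {
    // Hypothetical layout: low 44 bits hold the address, a few higher bits hold GC state.
    static final long ADDRESS_MASK = (1L << 44) - 1;
    static final long MARKED_BIT   = 1L << 44;
    static final long REMAPPED_BIT = 1L << 45;

    static long color(long address, boolean marked, boolean remapped) {
        long p = address & ADDRESS_MASK;
        if (marked)   p |= MARKED_BIT;
        if (remapped) p |= REMAPPED_BIT;
        return p;
    }

    static long address(long coloredPtr) { return coloredPtr & ADDRESS_MASK; }
    static boolean isMarked(long p)      { return (p & MARKED_BIT) != 0; }

    public static void main(String[] args) {
        long raw = 0x7f_0000_1000L;
        long p = color(raw, true, false);
        // The load-barrier idea: the state travels in the pointer itself,
        // so a read can test "is this object marked/relocated?" without
        // touching the object's memory at all.
        System.out.println(address(p) == raw); // true
        System.out.println(isMarked(p));       // true
    }
}
```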

The ZGC Architecture

Application Thread              GC Thread
      |                            |
      v                            v
┌─────────────┐             ┌──────────────┐
│ Load Barrier│────────────▶│ Colored Ptr  │
│  (every     │   Check     │ Translation  │
│   read)     │             │   Table      │
└─────────────┘             └──────────────┘

                                    v
                            ┌──────────────┐
                            │ Concurrent   │
                            │ Relocation   │
                            └──────────────┘

Enabling ZGC

# Java 15-21 (non-generational)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp

# Java 21-22 (generational ZGC - explicit flag required)
java -XX:+UseZGC -XX:+ZGenerational -Xms16g -Xmx16g MyApp

# Java 23+ (generational by default)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp

# Java 24+ (non-generational mode removed - JEP 490)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp

Note: ZGC is generational by default since Java 23. The -XX:+ZGenerational flag was deprecated in Java 23 and non-generational mode was completely removed in Java 24 (JEP 490). In Java 25, generational ZGC is mature and production-hardened.

Note: ZGC is currently incompatible with Compact Object Headers (JEP 519) - relevant if you’re evaluating COH for other collectors.

When to Choose ZGC

| Requirement                  | ZGC Advantage                       |
|------------------------------|-------------------------------------|
| Sub-10ms p99 latency         | Consistently delivers               |
| Large heaps (16GB+)          | Pause time independent of heap size |
| Cloud/container environments | Efficient memory return to OS       |
| Latency-sensitive services   | No tuning typically needed          |

The Trade-Off

ZGC uses more CPU and memory than G1 or Parallel. It reserves address space aggressively and has higher memory overhead. But for latency-critical applications, this is a fair exchange.

Container Tip: ZGC works well in containers but set -XX:MaxRAMPercentage=75.0 to leave headroom for GC overhead. Don’t allocate 100% of container memory to the heap.
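The percentage math behind that tip is worth making explicit. This sketch mirrors the basic heap-sizing ergonomic (deliberately ignoring MinRAMPercentage and the JVM's small-heap special cases):

```java
public class HeapSizing {
    // Max heap the JVM picks for -XX:MaxRAMPercentage inside a memory-limited container.
    static long maxHeapBytes(long containerLimitBytes, double maxRamPercentage) {
        return (long) (containerLimitBytes * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        long containerLimit = 4L * 1024 * 1024 * 1024; // 4 GiB cgroup limit
        long heap = maxHeapBytes(containerLimit, 75.0);
        // 3072 MiB heap, leaving 1 GiB for GC bookkeeping, metaspace,
        // thread stacks, and direct buffers - the "headroom" the tip refers to.
        System.out.println(heap / (1024 * 1024) + " MiB");
    }
}
```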


Part VI: Shenandoah - The Memory Whisperer

Concurrent Compaction Pioneer

Developed by Red Hat and merged into OpenJDK, Shenandoah shares ZGC’s goals but uses different techniques:

  • Forwarding (Brooks) pointers instead of colored pointers
  • Read barriers that let application threads follow relocated objects
  • A region-based heap, conceptually similar to G1’s

Shenandoah’s Phases

1. Initial Mark (STW, <1ms)
   └── Mark roots

2. Concurrent Marking
   └── Traverse object graph concurrently

3. Final Mark (STW, <1ms)
   └── Complete marking, identify collection set

4. Concurrent Evacuation
   └── Copy live objects to new regions
   └── Brooks pointers redirect old to new locations

5. Concurrent Update References
   └── Fix pointers to relocated objects
   └── Application threads help via read barriers

6. Final Update References (STW, <1ms)
   └── Update root references
   └── Cleanup evacuated regions
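The forwarding-pointer mechanism in phases 4-5 can be sketched with a toy object model: every reference is dereferenced through a forwarding slot, so readers transparently land on the evacuated copy. This is a simplification of the Brooks-pointer scheme (modern Shenandoah folds the check into load reference barriers), with a hypothetical `Obj` type:

```java
public class ForwardingPointer {
    // Toy object with a forwarding slot: initially self-referential,
    // set to the new copy during concurrent evacuation.
    static class Obj {
        Obj forwardee = this;
        int value;
        Obj(int value) { this.value = value; }
    }

    // Read barrier: always dereference through the forwarding pointer,
    // so readers see the new copy even while evacuation is in flight.
    static Obj resolve(Obj ref) { return ref.forwardee; }

    public static void main(String[] args) {
        Obj original = new Obj(42);
        System.out.println(resolve(original).value); // 42 - not yet evacuated

        Obj copy = new Obj(42);       // GC copies the object to a fresh region...
        original.forwardee = copy;    // ...and installs the forwarding pointer.
        copy.value = 43;              // mutations land on the copy

        System.out.println(resolve(original).value); // 43 - the stale reference transparently follows
    }
}
```

The price of this design is one extra indirection on reads, which is why Shenandoah trades a little throughput for its concurrent compaction.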

Enabling Shenandoah

# Available in OpenJDK 12+, Red Hat/OpenLogic builds
java -XX:+UseShenandoahGC -Xms8g -Xmx8g MyApp

# Heuristics mode (adaptive is the default)
java -XX:ShenandoahGCHeuristics=compact ...  # Trade throughput for a smaller footprint
java -XX:ShenandoahGCHeuristics=static ...   # Fixed thresholds instead of adaptive tuning

ZGC vs Shenandoah

| Aspect          | ZGC                                        | Shenandoah              |
|-----------------|--------------------------------------------|-------------------------|
| Max Heap        | 16TB                                       | 16TB                    |
| Pause Target    | <10ms                                      | <10ms                   |
| Availability    | Oracle JDK, OpenJDK                        | OpenJDK, Red Hat builds |
| Generational    | Yes (Java 23+ default, Java 24+ only mode) | Yes (Java 25+)          |
| Memory Overhead | Higher                                     | Lower than ZGC          |
| Throughput      | Slightly lower                             | Slightly higher         |

Both are excellent choices. In 2026, both ZGC and Shenandoah have generational modes. ZGC’s generational implementation has been default since Java 23, while Shenandoah added generational support in Java 25 (JEP 521). For ultra-low latency workloads, both collectors now offer mature, production-ready generational collection.


Part VII: The Epsilon Collector - The Sound of Silence

No-Op for Performance Testing

Epsilon is the anti-collector - it allocates memory but never collects. When the heap fills up, the JVM exits.

Use it for:

  • Performance baselines - measure application behavior with zero GC interference
  • Extremely short-lived jobs that exit before the heap fills
  • Testing memory pressure and allocation limits

java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC MyApp

Warning: Epsilon is not for production. Your application will crash when memory fills. This is by design.


Part VIII: Java 25 Innovations - The New Frontier

Generational Shenandoah (JEP 521)

While ZGC went generational by default in Java 23, Java 25 brought the same treatment to Shenandoah (JEP 521). This is significant because:

  • Most objects die young (Part I), so a dedicated young generation collects the bulk of garbage far more cheaply
  • Young collections cut CPU overhead compared to repeatedly scanning the entire heap each cycle
  • It brings Shenandoah to feature parity with generational ZGC for low-latency workloads

Enable it (Java 25+):

java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -Xms8g -Xmx8g MyApp

The Java 25 LTS Recommendation

For production deployments in 2026, Java 25 LTS (released September 2025) should be your target:

# Recommended Java 25 configuration for low-latency services
java \
  -XX:+UseZGC \
  -XX:+AlwaysPreTouch \
  -Xms4g \
  -Xmx4g \
  -jar application.jar

💡 Deep dive: For detailed coverage of Java 25’s performance improvements including Project Leyden’s AOT profiling and Scoped Values, see JVM: The Silent Revolution.


Part IX: Choosing Your Conductor - The Decision Matrix

Quick Selection Guide

┌─────────────────────────────────────────────────────────────┐
│  Question 1: What's your primary constraint?                │
├─────────────────────────────────────────────────────────────┤
│  Throughput → Parallel GC                                   │
│  Latency (<10ms) → ZGC or Shenandoah                        │
│  Balanced → G1                                              │
│  Minimal resources → Serial                                 │
└─────────────────────────────────────────────────────────────┘

                              v
┌─────────────────────────────────────────────────────────────┐
│  Question 2: What's your heap size?                         │
├─────────────────────────────────────────────────────────────┤
│  < 1GB → Serial or Parallel                                 │
│  1-16GB → G1, ZGC, or Shenandoah                            │
│  > 16GB → ZGC or Shenandoah (typically best suited)         │
└─────────────────────────────────────────────────────────────┘

                              v
┌─────────────────────────────────────────────────────────────┐
│  Question 3: What's your availability requirement?          │
├─────────────────────────────────────────────────────────────┤
│  Oracle JDK only → ZGC, G1, Parallel, Serial                │
│  OpenJDK/Red Hat builds → Shenandoah, ZGC, G1, others       │
│  Container/Kubernetes → ZGC (low overhead), Shenandoah      │
│  AWS Lambda/Serverless → Serial (platform managed)          │
│  Need generational low-latency → ZGC (Java 23+)             │
└─────────────────────────────────────────────────────────────┘

Collector Comparison Table

| Collector  | Pause Time        | Throughput | Heap Range | Use Case                 |
|------------|-------------------|------------|------------|--------------------------|
| Serial     | High              | Low        | <100MB     | Single-core, embedded    |
| Parallel   | Medium            | Very High  | 1GB-8GB    | Batch processing, ETL    |
| G1         | Medium (20-200ms) | High       | 4GB-16GB   | General purpose, default |
| ZGC        | Low (<10ms)       | High       | 8GB-16TB   | Low-latency services     |
| Shenandoah | Low (<10ms)       | High       | 8GB-16TB   | Low-latency, containers  |

Part X: Tuning and Monitoring - Reading the Sheet Music

Essential GC Logging

# Unified logging (Java 9+)
java -Xlog:gc*:file=gc.log:time,uptime,level,tags:filecount=10,filesize=100m ...

# Key metrics to monitor
java -Xlog:gc+heap=info,gc+phases=debug,gc+age=trace ...

Critical Metrics

| Metric          | Tool            | Healthy Threshold                     |
|-----------------|-----------------|---------------------------------------|
| Pause Time      | GC logs         | < target (100ms for G1, 10ms for ZGC) |
| GC Frequency    | GC logs         | Minor GC every few seconds            |
| Heap Usage      | JMX, Prometheus | < 70% after full GC                   |
| Allocation Rate | GC logs         | Steady state, not growing             |
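Pause times can be pulled straight out of unified GC logs with a few lines of code. A minimal extractor - the sample line below is representative of G1's -Xlog:gc output format, not taken from a real run:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogPause {
    // Matches the trailing "NNN.NNNms" pause duration on unified-logging pause lines.
    static final Pattern PAUSE_MS = Pattern.compile("(\\d+\\.\\d+)ms\\s*$");

    static double pauseMillis(String logLine) {
        Matcher m = PAUSE_MS.matcher(logLine);
        if (!m.find()) throw new IllegalArgumentException("no pause duration: " + logLine);
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        // Representative Java 9+ unified logging line (-Xlog:gc)
        String line = "[2.345s][info][gc] GC(7) Pause Young (Normal) (G1 Evacuation Pause) 24M->4M(256M) 3.456ms";
        System.out.println(pauseMillis(line)); // 3.456
    }
}
```

Feed every pause line through this and you can compute p99 pause time per rolling window - exactly the metric the table above asks you to watch.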

Common Problems and Solutions

Problem: Frequent Full GCs
Solution: The old generation is filling faster than concurrent collection can keep up. Grow the heap, lower -XX:InitiatingHeapOccupancyPercent (G1) so marking starts earlier, or fix premature promotion caused by an undersized young generation.

Problem: Long Pause Times with G1
Solution: Relax -XX:MaxGCPauseMillis to a realistic target, increase -XX:G1HeapRegionSize if humongous allocations dominate, and check the log for evacuation failures (“to-space exhausted”).

Problem: ZGC Using Too Much CPU
Solution: Cap concurrent GC threads with -XX:ConcGCThreads, or provision more CPU - ZGC deliberately trades CPU for latency.

Problem: OutOfMemoryError despite free heap
Solution: Look beyond the heap: Metaspace exhaustion, direct buffer limits (-XX:MaxDirectMemorySize), or G1 humongous-region fragmentation can all throw OOM while heap graphs look healthy.

Safepoints - The Hidden Pause Driver

This article has focused on GC pauses, but GC is only one of the operations that require your application to pause. Understanding safepoints completes the picture of JVM latency.

A safepoint is a point in program execution where all application threads must pause so the JVM can perform operations that require a consistent view of the heap and thread stacks. These operations include:

  • Stop-the-world garbage collection phases
  • Code deoptimization
  • Thread dumps and stack-trace sampling
  • Class redefinition (agents, debuggers)

The Time-to-Safepoint (TTSP) Problem

When a safepoint operation is requested, the JVM must wait for all application threads to reach a safepoint. This delay is called Time-to-Safepoint (TTSP). Even with ZGC’s sub-millisecond GC pauses, a long TTSP can result in multi-second application pauses.

What causes long TTSP?

  1. Counted loops without safepoint polls - Tight loops executing billions of iterations without method calls or allocations don’t check for pending safepoints
  2. Large array copies - System.arraycopy() over large arrays blocks until completion
  3. JNI calls - Native code cannot be interrupted
  4. Memory-mapped I/O operations - Reading large mapped files synchronously

Real-world impact: A 2020 analysis of production JVMs showed TTSP delays ranging from 10 seconds to several minutes, even when GC pauses were under 1ms. The application threads simply couldn’t reach safepoints quickly enough.

Diagnosing TTSP Issues

# Enable safepoint logging (verbose)
java -Xlog:safepoint*:file=safepoint.log:time,uptime,level,tags:filecount=10,filesize=100m ...

# Key fields in unified log output:
# "Reaching safepoint" = TTSP (time waiting for threads)
# "At safepoint" = VM operation time (GC, deopt, etc.)
# High "Reaching safepoint" with low "At safepoint" = TTSP problem

Interpreting safepoint logs:

Modern unified logging (-Xlog:safepoint*) outputs entries showing the time breakdown for each safepoint operation. Look for these phases:

  • “Reaching safepoint” - the TTSP: how long the JVM waited for every thread to stop
  • “At safepoint” - how long the VM operation itself ran (GC, deoptimization, etc.)

If “Reaching safepoint” time exceeds your latency budget while “At safepoint” time is low, you have a TTSP problem, not a GC problem.

Fixing TTSP Issues

  1. Add Thread.yield() or Thread.sleep(0) in tight loops processing large datasets
  2. Chunk large array operations - Process 1MB at a time instead of 1GB
  3. Use async file I/O instead of blocking memory-mapped operations
  4. Review JNI usage - Native code cannot be interrupted; minimize time spent in JNI calls
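Fix 2 - chunking - looks like this in practice. Copying in bounded chunks means the copying thread returns to normal code frequently, bounding the time between safepoint polls (chunk and array sizes here are illustrative):

```java
public class ChunkedCopy {
    // Copy in bounded chunks so the thread exits System.arraycopy often,
    // giving the JVM regular opportunities to bring it to a safepoint.
    static void chunkedCopy(byte[] src, byte[] dst, int chunkSize) {
        for (int pos = 0; pos < src.length; pos += chunkSize) {
            int len = Math.min(chunkSize, src.length - pos);
            System.arraycopy(src, pos, dst, pos, len);
            // Each small copy bounds the uninterruptible work; one giant
            // arraycopy over the whole array would block until completion.
        }
    }

    public static void main(String[] args) {
        byte[] src = new byte[10_000_000];
        java.util.Arrays.fill(src, (byte) 7);
        byte[] dst = new byte[src.length];
        chunkedCopy(src, dst, 1 << 20); // 1 MiB chunks
        System.out.println(java.util.Arrays.equals(src, dst)); // true
    }
}
```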

Modern JVMs (Java 10+) use polling-page-based safepoint mechanisms, making timer-based tuning largely unnecessary. TTSP issues require application-level fixes rather than JVM flags.

Workload-Specific Tuning Profiles

Selecting the right collector is only the beginning - configuring it for your specific workload pattern is what separates production-ready applications from those that fail under load. Different workload archetypes have fundamentally different allocation patterns, object lifetimes, and latency requirements.

Here are four archetypal workload patterns with specific tuning recommendations:

Profile 1: High-Frequency REST API

Characteristics: Massive volume of short-lived request/response objects, strict latency SLAs (p99 < 100ms)

Collector Choice: ZGC (generational, Java 23+) or Shenandoah (Java 25+)

Recommended Configuration:

java -XX:+UseZGC \
     -XX:+AlwaysPreTouch \
     -XX:MaxRAMPercentage=75.0 \
     -Xms4g -Xmx4g \
     -jar api-service.jar

Tuning Notes:

  • -XX:+AlwaysPreTouch commits heap pages at startup so first-touch page faults never land on the request path
  • -Xms = -Xmx eliminates resize-triggered pauses
  • ZGC does not honor -XX:MaxGCPauseMillis the way G1 does - its pauses are sub-millisecond by design, so invest in reducing allocation in hot request paths instead

Profile 2: Stateful WebSocket/Streaming Service

Characteristics: Long-lived connections (minutes to hours), gradual old generation growth, steady-state allocation

Collector Choice: G1 GC or Shenandoah

Recommended Configuration:

java -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=100 \
     -XX:G1HeapRegionSize=16m \
     -XX:InitiatingHeapOccupancyPercent=35 \
     -XX:+UseStringDeduplication \
     -Xms8g -Xmx8g \
     -jar streaming-service.jar

Tuning Notes:

  • -XX:InitiatingHeapOccupancyPercent=35 starts concurrent marking early, before the slowly growing old generation forces a full GC
  • -XX:+UseStringDeduplication pays off when thousands of long-lived sessions hold duplicate strings
  • 16m regions keep large message buffers below the humongous threshold (half a region)

Profile 3: Data Pipeline / ETL / Batch Processing

Characteristics: Throughput priority, predictable object lifetimes, large working sets, temporary spikes

Collector Choice: Parallel GC

Recommended Configuration:

java -XX:+UseParallelGC \
     -XX:ParallelGCThreads=16 \
     -XX:GCTimeRatio=19 \
     -XX:MaxGCPauseMillis=1000 \
     -Xms32g -Xmx32g \
     -jar etl-job.jar

Tuning Notes:

  • -XX:GCTimeRatio=19 budgets 5% of total time for GC (1/(1+19)), prioritizing throughput
  • The relaxed 1-second pause target gives the collector freedom to favor throughput
  • Match -XX:ParallelGCThreads to the cores actually available to the job, not to the host

Profile 4: Machine Learning Inference Service

Characteristics: Large primitive arrays, off-heap buffers (DirectByteBuffer), irregular allocation spikes, mixed object sizes

Collector Choice: ZGC (large heap support) or G1 with humongous region tuning

Recommended Configuration:

java -XX:+UseZGC \
     -XX:MaxDirectMemorySize=8g \
     -XX:+UseLargePages \
     -XX:+AlwaysPreTouch \
     -XX:MaxRAMPercentage=80.0 \
     -Xms24g -Xmx24g \
     -jar ml-inference.jar

Tuning Notes:

  • -XX:MaxDirectMemorySize caps off-heap DirectByteBuffer usage - the GC only reclaims these buffers indirectly, when their owning objects are collected
  • -XX:+UseLargePages reduces TLB pressure when iterating over large tensor and feature arrays
  • Track native memory separately: off-heap model weights never appear in heap metrics

Key Diagnostic Commands:

# Check direct buffer usage
jcmd <pid> VM.native_memory summary

# Monitor allocation rate in real-time
jstat -gc <pid> 1s | awk '{print $6}'  # EU (Eden Used) - Column 6

# Profile allocation by object type
java -XX:StartFlightRecording=filename=allocation-profile.jfr,duration=60s \
     -XX:FlightRecorderOptions=stackdepth=128 \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+DebugNonSafepoints ...

Part XI: Container and Cloud Considerations

The Container Challenge

Containers break assumptions that traditional GC tuning relied on:

  1. CPU limits: GC threads may exceed cgroup limits, causing throttling
  2. Memory limits: OOM killer terminates containers before GC can react
  3. Noisy neighbors: Shared resources affect GC timing

Container-Optimized Settings

# Container-aware sizing (UseContainerSupport is on by default since Java 10)
java -XX:+UseContainerSupport \
     -XX:MaxRAMPercentage=75.0 \
     -XX:InitialRAMPercentage=75.0 \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=100 \
     MyApp

# ZGC in Kubernetes (Java 23+ - generational by default)
java -XX:+UseZGC \
     -XX:MaxRAMPercentage=75.0 \
     -XX:+AlwaysPreTouch \
     MyApp

Cloud-Native Recommendations

| Environment | Recommended Collector  | Configuration Notes               |
|-------------|------------------------|-----------------------------------|
| AWS Lambda  | Serial (forced)        | Minimal cold start                |
| Kubernetes  | ZGC or G1              | Set resource limits appropriately |
| Serverless  | N/A (platform managed) | Focus on allocation reduction     |
| EC2/VMs     | Any                    | Full control, tune to workload    |

Epilogue: The Art of Memory Management

Garbage collection in the JVM is not a necessary evil - it’s a sophisticated automation that eliminates entire classes of bugs (memory leaks, double frees, dangling pointers) that plague manual memory management languages. The cost is unpredictability, but modern collectors have reduced that cost to negligible levels for most applications.

The key insights:

  1. Start with defaults: G1 is the default because it works well enough for most
  2. Measure before tuning: Don’t optimize what you haven’t measured
  3. Match collector to constraints: Throughput? Parallel. Latency? ZGC. Balanced? G1
  4. Size your heap correctly: -Xms = -Xmx in production, always
  5. Monitor continuously: GC behavior changes with workload patterns

Your application is a symphony. The garbage collector ensures every musician - every object - enters and exits at the right moment. Choose your conductor wisely, tune your instruments properly, and your performance will resonate with users.

The Road Ahead: Project Leyden and AOT Profiling

The JVM continues to evolve toward even more intelligent memory management. Project Leyden’s AOT (Ahead-of-Time) profiling (JEPs 514 and 515) represents a significant advancement that indirectly benefits all garbage collectors. By capturing and reusing profiling data across JVM restarts, AOT profiling reduces the JIT compilation warmup period. This translates to reduced allocation rate variance during the application’s early phase, which means less pressure on the young generation and more predictable GC behavior from the moment your application starts serving traffic. For latency-sensitive services, this elimination of warmup-induced allocation spikes is another step toward truly consistent performance.


Updated for February 2026. JVM garbage collection has evolved rapidly - ZGC is generational by default since Java 23, non-generational ZGC was removed in Java 24 (JEP 490), Shenandoah added generational mode in Java 25 (JEP 521), and G1 gained compiler barrier optimizations in Java 24 (JEP 475). The future points toward zero-configuration, self-tuning collectors, but understanding these fundamentals remains essential for every JVM developer.


Quick Reference

Collector Selection Flowchart

Single core? ──Yes──▶ Serial GC

    No

Heap < 1GB? ──Yes──▶ Serial or Parallel

    No

Latency critical (<10ms)? ──Yes──▶ ZGC or Shenandoah

    No

Throughput priority? ──Yes──▶ Parallel GC

    No

Default: G1 GC (works for most cases)

JVM Flags Cheat Sheet

| Goal              | Flags                    |
|-------------------|--------------------------|
| Enable G1         | -XX:+UseG1GC             |
| Enable ZGC        | -XX:+UseZGC              |
| Enable Shenandoah | -XX:+UseShenandoahGC     |
| Enable Parallel   | -XX:+UseParallelGC       |
| Target pause time | -XX:MaxGCPauseMillis=100 |
| Fixed heap size   | -Xms8g -Xmx8g            |
| Container support | -XX:+UseContainerSupport |
| GC logging        | -Xlog:gc*:file=gc.log    |



Master the orchestra of memory management, and your JVM applications will perform with the precision of a world-class symphony - responsive, efficient, and beautifully tuned to the demands of your users.

