
Kafka MirrorMaker 2 Guide

Master Kafka MirrorMaker 2 for cross-cluster replication. Learn architecture, configuration, and monitoring best practices for disaster recovery and multi-datacenter Kafka deployments.

Published: January 10, 2026 • 16 min read • Replication Guide

What is Kafka MirrorMaker 2?

MirrorMaker 2 (MM2) is Kafka's built-in solution for replicating data between Kafka clusters. Built on Kafka Connect, it provides continuous, reliable replication for disaster recovery, data migration, and multi-datacenter architectures.

MirrorMaker 2 vs MirrorMaker 1

MirrorMaker 2 (Recommended)
  • Built on Kafka Connect
  • Preserves offsets and partitions
  • Automatic topic creation
  • Exactly-once semantics support
MirrorMaker 1 (Legacy)
  • Consumer-producer pair
  • No offset preservation
  • Manual topic management
  • At-least-once only

MirrorMaker 2 Use Cases

Disaster Recovery

Replicate data to a standby cluster in another region. If the primary cluster fails, consumers can fail over to the DR cluster with minimal data loss.
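One failover detail worth planning for: with MM2's DefaultReplicationPolicy, replicated topics on the DR cluster are renamed with the source cluster's alias as a prefix (orders becomes source.orders), so failover consumers must subscribe to the prefixed names. A minimal sketch of that mapping; the helper names are illustrative, not part of MM2:

```python
def remote_topic(source_alias: str, topic: str) -> str:
    """Topic name on the target cluster under MM2's DefaultReplicationPolicy,
    which prefixes each replicated topic with the source cluster alias."""
    return f"{source_alias}.{topic}"

def failover_subscriptions(source_alias: str, topics: list[str]) -> list[str]:
    """Topic names a DR consumer should subscribe to after failing over."""
    return [remote_topic(source_alias, t) for t in topics]

# A consumer of orders and payments on the primary switches to the
# prefixed names on the DR cluster:
print(failover_subscriptions("source", ["orders", "payments"]))
# → ['source.orders', 'source.payments']
```

If you need topic names to stay identical across clusters (common for active/passive DR), Kafka also ships an IdentityReplicationPolicy that skips the prefixing, at the cost of making bidirectional replication loops harder to detect.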

Multi-Datacenter

Aggregate data from multiple regional clusters to a central cluster for analytics, or distribute global data to regional clusters for local consumption.

Cloud Migration

Migrate data from on-premises Kafka to cloud-based Kafka (MSK, Confluent Cloud) with zero downtime using continuous replication.

Cluster Upgrade

Migrate to a new Kafka cluster with different configuration or version while maintaining continuous data availability.

MirrorMaker 2 Configuration

Basic Configuration

# mm2.properties

# Define clusters
clusters = source, target

# Source cluster
source.bootstrap.servers = source-broker1:9092,source-broker2:9092

# Target cluster
target.bootstrap.servers = target-broker1:9092,target-broker2:9092

# Replication flows
source->target.enabled = true
source->target.topics = .*

# Sync consumer group offsets
source->target.sync.group.offsets.enabled = true
source->target.sync.group.offsets.interval.seconds = 60

With Authentication

# Source cluster with SASL
source.security.protocol = SASL_SSL
source.sasl.mechanism = PLAIN
source.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user" password="password";

# Target cluster with SASL
target.security.protocol = SASL_SSL
target.sasl.mechanism = PLAIN
target.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user" password="password";

Advanced Settings

# Topic filtering
source->target.topics = orders.*, payments.*
source->target.topics.exclude = .*-internal

# Replication settings
replication.factor = 3
offset-syncs.topic.replication.factor = 3
heartbeats.topic.replication.factor = 3
checkpoints.topic.replication.factor = 3

# Performance tuning
tasks.max = 10
producer.batch.size = 524288
producer.linger.ms = 100
consumer.fetch.max.bytes = 52428800

Running MirrorMaker 2

Dedicated Mode (Standalone)

# Run MirrorMaker 2 in dedicated mode
./bin/connect-mirror-maker.sh mm2.properties

Simplest deployment method. MM2 runs as a standalone process managing all connectors.

Kafka Connect Distributed Mode

# Deploy MM2 connectors to existing Kafka Connect cluster
curl -X POST http://connect:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mm2-source-target",
    "config": {
      "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
      "source.cluster.alias": "source",
      "target.cluster.alias": "target",
      "source.cluster.bootstrap.servers": "source:9092",
      "target.cluster.bootstrap.servers": "target:9092",
      "topics": ".*",
      "tasks.max": "10"
    }
  }'

Deploy as Kafka Connect connectors for better scalability and management.

Docker Deployment

# docker-compose.yml
services:
  mirror-maker:
    image: confluentinc/cp-kafka:7.5.0
    command: connect-mirror-maker /etc/kafka/mm2.properties
    volumes:
      - ./mm2.properties:/etc/kafka/mm2.properties
    environment:
      KAFKA_HEAP_OPTS: "-Xms2g -Xmx2g"

Monitoring MirrorMaker 2

Key Metrics to Monitor

Metric | Description | Alert Threshold
replication-latency-ms-avg | Average replication lag time | > 1000 ms
record-count | Records replicated | Drops to 0
byte-rate | Bytes replicated per second | Significant drops
checkpoint-latency | Consumer offset sync delay | > 5 minutes

Critical: Monitor Replication Lag

For disaster recovery, replication lag determines your Recovery Point Objective (RPO). High lag means more data loss in a failover scenario. Monitor both record lag and time-based lag.
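The thresholds in the table above can be wired into a simple DR-readiness check. A sketch, assuming metrics have already been collected into a plain dict (the function name, messages, and the 300-second checkpoint threshold are choices of this example):

```python
def dr_alerts(metrics: dict) -> list[str]:
    """Return alert messages when DR-readiness thresholds are breached.

    `metrics` maps metric name -> current value; latency is in ms,
    checkpoint latency in seconds, mirroring the table above.
    """
    alerts = []
    if metrics.get("replication-latency-ms-avg", 0) > 1000:
        alerts.append("replication lag above 1s: RPO at risk")
    if metrics.get("record-count", 1) == 0:
        alerts.append("no records replicated: flow may be stalled")
    if metrics.get("checkpoint-latency", 0) > 300:
        alerts.append("offset sync delayed over 5 minutes: failover offsets stale")
    return alerts

# A stalled flow with high lag trips two alerts:
print(dr_alerts({"replication-latency-ms-avg": 2500,
                 "record-count": 0,
                 "checkpoint-latency": 30}))
```

In practice these values come from MM2's JMX metrics or from a monitoring system scraping them; the point is that each threshold maps directly to a failover risk.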

MirrorMaker 2 Best Practices

Size MM2 appropriately

Use enough tasks to handle your throughput. Rule of thumb: 1 task per 10 partitions being replicated.
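The rule of thumb above can be expressed as a small sizing helper (the helper name and the cap guarding against runaway task counts are assumptions of this sketch, not MM2 behavior):

```python
import math

def recommended_tasks(replicated_partitions: int,
                      partitions_per_task: int = 10,
                      cap: int = 50) -> int:
    """Suggest a tasks.max value using the 1-task-per-10-partitions
    rule of thumb, bounded below by 1 and above by an arbitrary cap."""
    tasks = math.ceil(replicated_partitions / partitions_per_task)
    return max(1, min(cap, tasks))

print(recommended_tasks(95))   # → 10
print(recommended_tasks(240))  # → 24
```

Treat the result as a starting point: if replication lag stays high at the suggested value, raise tasks.max and re-measure rather than trusting the formula.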

Enable offset syncing

For DR scenarios, enable consumer offset synchronization to allow seamless failover.

Use topic filters wisely

Exclude internal topics and only replicate what's needed to reduce overhead.

Test failover regularly

Periodically test DR failover to ensure your consumers can switch to the replica cluster.

Set appropriate replication factor

MM2 internal topics should have the same replication factor as your production topics.

Monitor MirrorMaker 2 with KLogic

KLogic provides comprehensive MirrorMaker 2 monitoring with real-time replication lag tracking, cross-cluster visibility, and intelligent alerting for DR readiness.

Real-time replication lag monitoring
Cross-cluster topic comparison
Offset synchronization tracking
DR readiness alerts