By default, YugabyteDB provides synchronous replication and strong consistency across geo-distributed data centers. However, many use cases do not require synchronous replication, or justify the additional complexity and operating costs associated with managing three or more data centers. A cross-universe (xCluster) deployment provides asynchronous replication across two data centers or cloud regions. Using an xCluster deployment, you can set up unidirectional (master-follower) or bidirectional (multi-master) asynchronous replication between two universes (also known as data centers).

For information on xCluster deployment architecture and replication scenarios, refer to [xCluster architecture](../../../architecture/docdb-replication/async-replication/).

Before deploying xCluster, review the [limitations](../../../architecture/docdb-replication/async-replication/#limitations).

{{<index/block>}}

  {{<index/item
    title="Deploy transactional xCluster"
    body="Set up transactional unidirectional replication."
    href="async-replication-transactional/"
    icon="fa-thin fa-money-from-bracket">}}

  {{<index/item
    title="Deploy non-transactional xCluster"
    body="Set up non-transactional unidirectional or bidirectional replication."
    href="async-deployment/"
    icon="fa-thin fa-copy">}}

{{</index/block>}}

## Prerequisites

- If the root certificates for the source and target universes are different (for example, the node certificates for the target and source nodes were not created on the same machine), copy the `ca.crt` for the source universe to all target nodes, and vice versa. If the root certificate for both source and target universes is the same, you can skip this step.

    1. For each YB-Master and YB-TServer on both the source and target universe, set the `certs_for_cdc_dir` flag to the parent directory (for example, `<home>/xcluster-certs`) where you want to store the other universe's certificates for replication.
    1. Find the certificate authority file used by the source universe (`ca.crt`). This file is stored in the directory specified by [--certs_dir](../../../reference/configuration/yb-master/#certs-dir).
    1. Copy this file to each node on the target universe, into a directory named `<home>/xcluster-certs/<xcluster-replication-id>/` (create the directory if it does not exist).
    1. Similarly, copy the `ca.crt` file for the target universe from any target universe node at `--certs_dir` to the source universe nodes at `<home>/xcluster-certs/<xcluster-replication-id>/` (create the directory if it does not exist).

- Global objects such as users, roles, and tablespaces are not managed by xCluster. You must explicitly create and manage these objects on both source and target universes.

- To move data out of YugabyteDB, set up CDC on the xCluster source universe. CDC on the xCluster target universe is not supported. CDC is also not supported in bidirectional xCluster setups.
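The certificate copy described in the first prerequisite can be rehearsed locally. The sketch below simulates the steps with temporary directories and `cp` (on real clusters you would copy between nodes with `scp` or similar); the replication ID and directory names are placeholders, not values prescribed by these docs.

```shell
# Hypothetical values -- substitute your actual replication group name
# and use <home>/xcluster-certs (the certs_for_cdc_dir value) on real nodes.
REPLICATION_ID="xcluster-replication-1"   # assumption: your <xcluster-replication-id>
XCLUSTER_CERTS_DIR="$(mktemp -d)"         # stands in for <home>/xcluster-certs

# Stand-in for the source universe's --certs_dir containing ca.crt.
SRC_CERTS_DIR="$(mktemp -d)"
echo "dummy-root-cert" > "$SRC_CERTS_DIR/ca.crt"

# Create the per-replication directory on the target node and copy the cert.
# On a real target node, this copy would be an scp from a source node.
mkdir -p "$XCLUSTER_CERTS_DIR/$REPLICATION_ID"
cp "$SRC_CERTS_DIR/ca.crt" "$XCLUSTER_CERTS_DIR/$REPLICATION_ID/ca.crt"

ls "$XCLUSTER_CERTS_DIR/$REPLICATION_ID"   # prints: ca.crt
```

Repeat the same copy in the opposite direction for the target universe's `ca.crt`, as in step 4.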

## Best practices

- Set the YB-TServer [cdc_wal_retention_time_secs](../../../reference/configuration/all-flags-yb-tserver/#cdc-wal-retention-time-secs) flag to 86400 on both source and target.

    This flag determines how long the write-ahead log (WAL) is retained on the source in case of a network partition or a complete outage of the target. For xCluster replication, set the flag to a value greater than the default, so that WALs are retained until replication can be restarted. Setting this value to 86400 (24 hours) is a good starting point, but you should also consider how quickly you can recover from a network partition or target outage.

- Make sure all YB-Master and YB-TServer flags are set to the same values on both the source and target universes.

- Monitor CPU usage and ensure it remains under 65%. Note that xCluster replication typically incurs a 20% CPU overhead.

- Monitor disk space usage and ensure it remains under 65%. Allocate sufficient disk space to accommodate WALs generated based on `cdc_wal_retention_time_secs`.
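    To size that disk space, a back-of-envelope calculation multiplies the retention window by your WAL write rate. The rate below is a hypothetical value for illustration; measure it on your own workload.

    ```shell
    RETENTION_SECS=86400      # value of cdc_wal_retention_time_secs
    WAL_RATE_MB_PER_SEC=2     # assumption: measured per-node WAL write rate
    echo "$(( RETENTION_SECS * WAL_RATE_MB_PER_SEC / 1024 )) GiB of WAL per node"
    # prints: 168 GiB of WAL per node
    ```

    Treat the result as a lower bound, and leave enough headroom that total disk usage stays under 65%.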
