Migration
You can migrate data from your on-premises MongoDB deployments to Atlas using one of several methods. We recommend Atlas live migration when possible because it automates many of the migration tasks and minimizes downtime, but you can also use other tools that accommodate the variety and complexity inherent in database migration.
Live Migration Overview
Atlas live migration automates moving data from on-premises MongoDB databases to Atlas. You can pull data from an on-premises MongoDB database or push data using Cloud Manager or Ops Manager, but with either method, Atlas live migration includes the following features:
- The migration host always encrypts traffic to the Atlas cluster. To encrypt data end-to-end, enable TLS on your source cluster. Only users with specific Role-Based Access Control (RBAC) roles (such as `backup`, `readAnyDatabase`, or `clusterMonitor`) can initiate live migration. Users authenticate to clusters using SCRAM-SHA-1 or SCRAM-SHA-256.
- Live migration automates most tasks. For the fully managed "pull" and "push" methods, live migration monitors key metrics, provisions the host servers, and enforces the strict sequencing of migration commands. You specify the resource requirements and scaling options to prevent over-provisioning.
- Detailed instructions help you provision migration hosts and scale destination clusters to control costs. Recommendations include appropriate cluster sizing and temporary scaling, followed by resizing to optimal levels after migration.
- Live migration uses mongosync to facilitate fast cutover through parallel data copying. Its processes manage temporary network interruptions and cluster elections, using continuous data synchronization and a final cutover phase to achieve minimal downtime. Retry mechanisms and pre-migration validations improve resilience against interruptions.
- You can monitor migrations with real-time status updates and notifications.
Live Migration Methods
You can use Cloud Manager or Ops Manager to push data into Atlas or use a live migration server to pull data into Atlas.
Ensure you allocate adequate CPU and network resources for the migration host. While you can run multiple concurrent migrations, each deployment must have a dedicated migration host.
All live migration methods require that the source and destination databases run MongoDB 6.0.13+ or MongoDB 7.0.8+. To migrate data from databases using prior versions of MongoDB, see Legacy Migration or Manual Migration Methods.
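As a quick pre-flight check, you can confirm the source deployment's server version before planning a live migration. The following is a minimal sketch using the PyMongo driver; the connection string is a hypothetical placeholder, and the user must be allowed to run the buildInfo command.

```python
from pymongo import MongoClient

# Hypothetical source connection string; replace with your deployment's URI.
SOURCE_URI = "mongodb://admin:password@source-host:27017/?authSource=admin"

client = MongoClient(SOURCE_URI)
version = client.server_info()["version"]          # e.g. "7.0.12" or "7.0.12-rc1"
numeric = version.split("-")[0]                     # drop any pre-release suffix
major, minor, patch = (int(part) for part in numeric.split(".")[:3])

# Mirrors the stated minimums: MongoDB 6.0.13+ or 7.0.8+ on source and destination.
eligible = (major, minor, patch) >= (7, 0, 8) or ((major, minor) == (6, 0) and patch >= 13)
print(f"Source runs MongoDB {version}; live migration eligible: {eligible}")
```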
Pull data into Atlas. Atlas pulls data from the source MongoDB deployment and requires access to the source deployment through the deployment's firewall. When the clusters are nearly synced, you must stop write operations on the source, redirect applications to the Atlas cluster, and restart them. The following considerations apply:
- Best for deployments not monitored by Cloud Manager or Ops Manager.
- The source database must be publicly accessible to allow inbound access from the live migration server.
- Doesn't support VPC peering or private endpoints for either the source or destination cluster.
- Source and destination cluster topologies must match. For example, both must be replica sets or sharded clusters with the same number of shards.
- Plan for minimal downtime to stop writes and restart applications with a new connection string.

The migration process is CPU-intensive and requires significant network bandwidth. MongoDB recommends running each migration host on its own dedicated server with high network bandwidth. Provision the migration host with the following minimum configuration:
| Requirement | Minimum configuration |
|---|---|
| Number of VMs | 3 total: 2 for sharded clusters, 1 for replica sets |
| Purpose | Runs mongosync |
| Location | Must be able to access both the on-premises deployment and the public cloud |
| CPU | 8 CPUs |
| Memory | 32 GB |
| OS | 64-bit operating system |
| Disk size | Enough disk space for logging |
To ensure a smooth migration process, confirm that the source cluster's oplog size is adequate to cover the entire migration duration.
For full migration recommendations and instructions, see Live Migrate (Pull) a Cluster into Atlas.
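Before starting a pull migration, it helps to measure the current oplog window and compare it against your expected migration duration. The sketch below assumes the PyMongo driver, a replica set source, and a hypothetical connection string for a user with read access to the local database.

```python
from pymongo import MongoClient

# Hypothetical source connection string; the user needs read access to the local database.
SOURCE_URI = "mongodb://admin:password@source-host:27017/?authSource=admin"

client = MongoClient(SOURCE_URI)
oplog = client.local["oplog.rs"]

first = oplog.find_one(sort=[("$natural", 1)])   # oldest entry still retained
last = oplog.find_one(sort=[("$natural", -1)])   # most recent entry

# BSON timestamps expose seconds since the epoch via .time.
window_hours = (last["ts"].time - first["ts"].time) / 3600
print(f"Current oplog window: {window_hours:.1f} hours")
```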
Push data into Atlas. Cloud Manager or Ops Manager pushes data to Atlas using a secure link-token without requiring access to the source cluster through the cluster's firewall. During migration, Atlas continuously syncs real-time data between the source and destination clusters until cutover. The following considerations apply:
- Data is synchronized in one direction only: changes made to the destination won't reflect back on the source.
- Supports VPC peering and private endpoints.
- Source and destination cluster topologies must match. For example, both must be replica sets or sharded clusters with the same number of shards.
- Ensure that you can connect to your Atlas cluster from all client hardware where your applications run. Testing your connection string helps ensure that your data migration process completes with minimal downtime.
For full migration recommendations and instructions, see Live Migrate (Push) a Cluster Monitored by Cloud Manager into Atlas.
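To confirm that every client host can reach the destination cluster before cutover, you can run a short connectivity check from each machine where your applications run. This is a minimal sketch that assumes the PyMongo driver (installed with the `srv` extra for SRV connection strings) and a hypothetical Atlas connection string.

```python
from pymongo import MongoClient
from pymongo.errors import PyMongoError

# Hypothetical Atlas connection string; replace with your destination cluster's SRV URI.
# SRV URIs require pymongo to be installed as pymongo[srv].
ATLAS_URI = "mongodb+srv://appuser:password@cluster0.example.mongodb.net/"

try:
    client = MongoClient(ATLAS_URI, serverSelectionTimeoutMS=5000)
    client.admin.command("ping")  # round-trip to the cluster
    print("This host can reach the Atlas cluster.")
except PyMongoError as exc:
    print(f"Connection failed; check network access and credentials: {exc}")
```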
Monitoring Migrations
To review both ongoing and past migrations, navigate to the Migration Home page in Atlas.
You can click each migration process for more detailed information, including the initial data copy time estimate and comprehensive progress reports. Use the cluster card to create, cutover, or cancel a migration.
To learn more, see Monitor Migrations.
Manual Migration Methods
If Atlas live migration can't satisfy your migration requirements, you can bring data from existing MongoDB deployments, JSON files, or CSV files into Atlas using one of the following tools that you run outside of Atlas.
| Tool | Description |
|---|---|
| mongosync | The mongosync binary is the primary process used in Atlas live migration. You can also use mongosync directly for self-managed migrations into an Atlas cluster. |
| mongomirror | Migrate from a MongoDB replica set into an Atlas cluster without shutting down your existing replica set or applications. mongomirror does not import user/role data. |
| mongorestore | Seed an Atlas cluster with a backup created with mongodump. |
| mongoimport | Load data from a JSON or CSV file. |
| MongoDB Compass | Use a GUI to load data from a JSON or CSV file. |
You can also restore data from an Atlas cluster backup to another Atlas cluster. For information, see Restore Your Cluster.
If you are required to use Atlas VPC/VNet peering or Private Link configurations, if you don't want to allow a direct connection from a third party to your source cluster, or if you don't already have (or don't want to import) the source cluster in Ops Manager or Cloud Manager, then MongoDB recommends the mongosync approach.
If you have relatively small datasets (<300 GB) to migrate, and can afford application downtime for an extended time period, then MongoDB recommends the mongodump and mongorestore approach.
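For illustration, the sketch below wraps the two commands with Python's subprocess module; equivalently, you can run mongodump and mongorestore directly from a shell. The connection strings and dump directory are hypothetical placeholders, and the tools must be installed on the machine running the script.

```python
import subprocess

SOURCE_URI = "mongodb://admin:password@source-host:27017"                   # hypothetical
ATLAS_URI = "mongodb+srv://appuser:password@cluster0.example.mongodb.net"   # hypothetical
DUMP_DIR = "dump"

# Dump the source deployment to a local directory of BSON files.
subprocess.run(["mongodump", f"--uri={SOURCE_URI}", f"--out={DUMP_DIR}"], check=True)

# Restore the dump into the Atlas cluster. --drop removes destination collections
# of the same name before restoring them from the dump.
subprocess.run(["mongorestore", f"--uri={ATLAS_URI}", "--drop", DUMP_DIR], check=True)
```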
If you have relatively small datasets (<300 GB) to migrate, no index concerns, and can afford application downtime for an extended time period, then MongoDB recommends the mongoexport and mongoimport approach.
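In the same spirit, a per-collection export and import can be scripted as below. The database name, collection names, and connection strings are hypothetical; note that mongoexport and mongoimport exchange JSON (or CSV) and do not carry over indexes, which is why this approach suits datasets without index concerns.

```python
import subprocess

SOURCE_URI = "mongodb://admin:password@source-host:27017/mydb"                   # hypothetical
ATLAS_URI = "mongodb+srv://appuser:password@cluster0.example.mongodb.net/mydb"   # hypothetical

for collection in ["orders", "customers"]:  # hypothetical collection names
    json_file = f"{collection}.json"
    # Export one collection from the source to a JSON file.
    subprocess.run(["mongoexport", f"--uri={SOURCE_URI}",
                    f"--collection={collection}", f"--out={json_file}"], check=True)
    # Import the same collection into the Atlas cluster.
    subprocess.run(["mongoimport", f"--uri={ATLAS_URI}",
                    f"--collection={collection}", f"--file={json_file}"], check=True)
```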
Cutover
When a migration reaches the Ready for Cutover status, click Prepare to Cutover on the cluster card and then click Cutover to cut over to the target cluster. Upon successful completion of the cutover, reconfigure your application to point to the new destination cluster.
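One common pattern for the post-cutover reconfiguration is to source the connection string from configuration rather than hard-coding it, so switching to the destination cluster only requires updating one setting and restarting the application. Below is a minimal sketch with the PyMongo driver, assuming a hypothetical MONGODB_URI environment variable.

```python
import os
from pymongo import MongoClient

# After cutover, set MONGODB_URI (hypothetical variable name) to the new Atlas
# SRV connection string and restart the application.
uri = os.environ["MONGODB_URI"]

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
client.admin.command("ping")  # verify the application now reaches the destination cluster
print("Connected to the cluster configured in MONGODB_URI.")
```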
To learn more, see Monitor Migrations.
Next Steps
See the Guidance for Atlas Orgs, Projects, and Clusters page to learn about the building blocks of your Atlas enterprise estate or use the left navigation to find features and best practices for each Well-Architected Framework pillar.