Guidance for Atlas Network Security
Atlas provides secure network configuration defaults for your database deployments, such as:
Mandatory TLS/SSL connection encryption
Virtual Private Clouds (VPCs) for all projects with one or more dedicated clusters
IP access lists, so that Atlas accepts database connections only from sources you explicitly declare
You can further configure these protections to meet your unique security needs and preferences.
Use the recommendations on this page to plan for the network security configuration of your clusters.
Features for Atlas Network Security
Atlas enforces TLS/SSL encryption for all connections to your databases.
We recommend using M10+ dedicated clusters because all Atlas projects with one or more M10+ dedicated clusters receive their own dedicated:
VPC on AWS or Google Cloud.
VNet on Azure.
Atlas deploys all dedicated clusters inside this VPC or VNet.
By default, all access to your clusters is blocked. You must explicitly allow an inbound connection by one of the following methods:
Add private endpoints, which Atlas adds automatically to your IP access list. No other access is automatically added.
Use VPC or VNet peering to add private IP addresses.
Add public IP addresses to your IP access list.
You can also use multiple methods together for added security.
TLS
Atlas enforces mandatory TLS encryption of connections to your databases. TLS 1.2 is the default protocol. To learn more, see the Set Minimum TLS Protocol Version section of Configure Additional Settings.
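For example, connection strings that use the DNS seed list (mongodb+srv) format enable TLS by default, and you can also request it explicitly. The following mongosh command is a minimal sketch; the hostname and username are placeholders:
# TLS is on by default for mongodb+srv connection strings; tls=true makes it explicit.
mongosh "mongodb+srv://cluster0.example.mongodb.net/admin?tls=true" --username myUser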
IP access lists
As an Atlas administrator, you can configure IP access lists to limit which IP addresses can attempt authentication to your database.
Your Atlas clusters allow access only from the IP addresses and CIDR block IP ranges that you add to your IP access list. We recommend that you permit access to the smallest network segments possible, such as an individual /32 address.
Application servers and other clients can't access your Atlas clusters if their IP addresses aren't included in your IP access list.
You can configure temporary access list entries that expire automatically after a user-defined period.
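For example, the following Atlas CLI command is a sketch of a temporary entry. It assumes the --deleteAfter flag, which takes an ISO 8601 timestamp after which Atlas removes the entry, and uses placeholder values:
# Temporary entry that Atlas removes automatically at the specified time (placeholder values).
atlas accessLists create 203.0.113.10 --type ipAddress --deleteAfter 2025-01-31T17:00:00Z --comment "Temporary break-glass access" --projectId <project-id>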
Firewall Configuration
If your client application servers connect to Atlas through a firewall that blocks outbound network connections, you must configure the firewall to allow outbound TCP traffic to Atlas hosts. This grants your applications access to your clusters.
Atlas cluster public IPs remain the same in most cases of cluster changes, such as vertical scaling, topology changes, or maintenance events. However, certain topology changes, such as converting a replica set to a sharded cluster, adding shards, or changing a region, require new IP addresses.
If you convert a replica set to a sharded cluster or add new shards, failing to reconnect your application clients might cause your application to suffer a data outage. If you use a DNS seed list connection string, your application automatically connects to the mongos routers for your sharded cluster. If you use a standard connection string, you must update your connection string to reflect your new cluster topology.
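The two connection string formats look like the following; the hostnames and replica set name are placeholders:
# DNS seed list format: the driver resolves the host list from the SRV record,
# so topology changes such as added shards don't require a connection string update.
mongodb+srv://cluster0.example.mongodb.net/
# Standard format: hosts are listed explicitly, so you must update the string after a topology change.
mongodb://host0.example.mongodb.net:27017,host1.example.mongodb.net:27017,host2.example.mongodb.net:27017/?tls=true&replicaSet=atlas-abc123-shard-0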
Private Endpoints
A private endpoint facilitates a one-way connection from a VPC that you manage directly to your Atlas VPC, without permitting Atlas to initiate a reciprocal connection. This allows you to make use of secure connections to Atlas without extending your network trust boundary. The following private endpoints are available:
AWS PrivateLink, for connections from AWS VPCs
Microsoft Azure Private Link, for connections from Microsoft Azure VNets
Private Service Connect, for connections from Google Cloud VPCs
Multi-Region Considerations
For global private endpoints, Atlas automatically generates an SRV record that points to all Atlas cluster nodes. The MongoDB driver in your application attempts to connect to each host that the SRV record returns. This allows the driver to handle a failover event without waiting for DNS replication and without requiring you to update the driver's connection string.
To facilitate automatic SRV record generation for all nodes in your Atlas cluster, you must establish a VPC peering connection between the VPC in which your application is deployed and your MongoDB VPC with PrivateLink or equivalent.
Private endpoints must be enabled in every region in which you have an Atlas cluster deployed.
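You can verify which hosts the private endpoint-aware SRV record resolves to with a DNS lookup. The hostname below is a placeholder for the -pl- hostname that appears in your cluster's private endpoint connection string:
# Each returned host corresponds to a cluster node reachable through a private endpoint.
dig +short _mongodb._tcp.cluster0-pl-0.example.mongodb.net SRV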
VPC/VNet Peering
Network peering allows you to connect your own VPCs with an Atlas VPC to route traffic privately and isolate your data flow from the public Internet. Atlas maps VPCs one-to-one to Atlas projects.
Most operations performed over a VPC connection originate from your application environment, minimizing the need for Atlas to make outbound access requests to peer VPCs. However, if you configure Atlas to use LDAP authentication, you must enable Atlas to connect outbound to the authentication endpoint of your peer VPC over the LDAP protocol. Note that LDAP authentication is deprecated on Atlas with 8.0. We recommend that you use Workforce Identity Federation and Workload Identity Federation instead.
You can choose your Atlas CIDR block with the VPC peering wizard before you deploy your first cluster. The Atlas VPC CIDR block must not overlap with the CIDR block of any VPC you intend to peer with. Atlas limits the number of MongoDB instances per VPC based on the CIDR block. For example, a project with a /24 CIDR block is limited to the equivalent of 27 three-node replica sets.
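To review the CIDR block of your project's network container before peering, you can use the Atlas CLI. This sketch assumes the atlas networking containers list subcommand in a recent CLI version and uses a placeholder project ID:
atlas networking containers list --projectId <project-id> --output json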
Recommendations for Atlas Network Security
All Deployment Paradigm Recommendations
The following recommendations apply to all deployment paradigms.
Private Endpoints
We recommend that you set up private endpoints for all new staging and production projects to limit the extension of your network trust boundary.
In general, we recommend using private endpoints for every Atlas project because this approach provides the most granular security and eases the administrative burden of managing IP access lists and large blocks of IP addresses as your cloud network scales. Because each endpoint has an associated cost, you can omit private endpoints in lower environments, but you should use them in higher environments to limit the extension of your network trust boundary.
If VPCs or VNets in which your application is deployed can't be peered with one another, potentially due to a combination of on-premises and cloud deployments, you might want to consider a regional private endpoint.
With regional private endpoints, you can do the following:
Connect a single private endpoint to multiple VNets or VPCs without peering them directly to each other.
Mitigate partial region failures in which one or more services within a region fails.
To network with regional endpoints, you must do the following:
Perform regular and robust health checks to detect an outage and update routing.
Use a distinct connection string for each region.
Manage cross-region routing in Atlas.
Deploy a Mongos server and an additional metadata server if you are running a MongoDB version older than v8.0.
To learn more about private endpoints in Atlas, including limitations and considerations, see Learn About Private Endpoints in Atlas. To learn how to set up private endpoints for your clusters, see Set Up a Private Endpoint for a Dedicated Cluster.
Cloud Provider-Specific Guidance
AWS: We recommend VPC peering across all of your self-managed VPCs that need to connect to Atlas. Leverage global private endpoints.
Azure: We recommend VNet peering across all of your self-managed VNets that need to connect to Atlas. Leverage global private endpoints.
GCP: Peering is not required across your self-managed VPCs when using GlobalConnect. All Atlas regions must be networked with private endpoints to your self-managed VPC in each region.
GCP Private Endpoints Considerations and Limitations
Atlas services are accessed through GCP Private Service Connect endpoints on ports 27015 through 27017. The ports can change under specific circumstances, including (but not limited to) cluster changes.
GCP Private Service Connect must be active in all regions into which you deploy a multi-region cluster. You will receive an error if GCP Private Service Connect is active in some, but not all, targeted regions.
You can do only one of the following:
Deploy nodes in more than one region, and have one private endpoint per region.
Have multiple private endpoints in one region, and no other private endpoints.
Important
This limitation applies across cloud providers. For example, if you create more than one private endpoint in a single region in GCP, you can't create private endpoints in AWS or any other GCP region.
See (Optional) Regionalized Private Endpoints for Multi-Region Sharded Clusters for an exception for multi-region and global sharded clusters.
Atlas creates 50 service attachments, each with a subnet mask value of 27. You can change the number of service attachments and the subnet masks that Atlas creates by setting the following limits with the Set One Project Limit Atlas Administration API endpoint:
Set the atlas.project.deployment.privateServiceConnectionsPerRegionGroup limit to change the number of service attachments.
Set the atlas.project.deployment.privateServiceConnectionsSubnetMask limit to change the subnet mask for each service attachment.
To learn more, see Set One Project Limit.
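As a hedged sketch, the following curl call raises the service attachment limit through the Atlas Administration API. The path and versioned Accept header follow v2 API conventions and should be confirmed against the Set One Project Limit reference; the credentials, project ID, and value are placeholders:
curl --user "<public-key>:<private-key>" --digest \
  --header "Content-Type: application/json" \
  --header "Accept: application/vnd.atlas.2023-01-01+json" \
  --request PATCH \
  "https://cloud.mongodb.com/api/atlas/v2/groups/<project-id>/limits/atlas.project.deployment.privateServiceConnectionsPerRegionGroup" \
  --data '{ "value": 10 }'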
You can have up to 50 nodes when you create Atlas projects that use GCP Private Service Connect in a single region. If you need to change the number of nodes, perform one of the following actions:
Remove existing private endpoints and then change the limit using the Set One Project Limit Atlas Administration API endpoint.
Contact MongoDB Support.
Use additional projects or regions to connect to nodes beyond this limit.
Important
Each private endpoint in GCP reserves an IP address within your GCP VPC and forwards traffic from the endpoint's IP address to the service attachments. You must create a number of private endpoints equal to the number of service attachments. The number of service attachments defaults to 50.
You can have up to 40 nodes when you create Atlas projects that use GCP Private Service Connect across multiple regions. This total excludes the following instances:
GCP regions communicating with each other
Free clusters or Shared clusters
GCP Private Service Connect supports up to 1024 outgoing connections per virtual machine. As a result, you can't have more than 1024 connections from a single GCP virtual machine to an Atlas cluster.
To learn more, see the GCP cloud NAT documentation.
GCP Private Service Connect is region-specific. However, you can configure global access to access private endpoints from a different region.
To learn more, see Multi-Region Support.
IP Access Lists
We recommend that you configure an IP access list for your API keys and programmatic access, allowing access only from trusted IP addresses such as your CI/CD pipeline or orchestration system. These access lists are set on the Atlas control plane when you provision a service account and are separate from the IP access lists that you set on the Atlas project data plane for connections to your clusters.
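For example, the following Atlas CLI command is a sketch that restricts an existing programmatic API key to a single trusted address. The subcommand and flag names (organizations apiKeys accessLists create, --apiKey, --ip) are assumptions to confirm against your CLI version, and the IDs are placeholders:
atlas organizations apiKeys accessLists create --apiKey <api-key-id> --ip 198.51.100.4 --orgId <org-id> --output json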
When you configure your IP access list, we recommend that you:
Use temporary access list entries in situations where team members require access to your environment from temporary work locations, or during break-glass scenarios where human access to production is required to resolve a production-down incident. We recommend that you build an automation script so you can quickly grant temporary access during these incidents.
Define IP access list entries covering the smallest network segments possible. To do this, favor individual IP addresses where possible, and avoid large CIDR blocks.
VPC/VNet Peering
If you configure VPC or VNet peering, we recommend that you:
To maintain tight network trust boundaries, configure security groups and network ACLs to prevent inbound access to systems inside your application VPCs from the Atlas-side VPC.
Create new VPCs to act as intermediaries between sensitive application infrastructure and your Atlas VPCs. Because VPC peering is not transitive, this lets you expose only the components of your application that need access to Atlas.
Automation Examples: Atlas Network Security
The following examples configure connections between your application environment and your Atlas clusters using IP access lists, VPC Peering, and Private Endpoints.
These examples also apply other recommended configurations, including:
Cluster tier set to M10 for a dev/test environment or M30 for a medium-sized application. Use the cluster size guide to learn the recommended cluster tier for your application size.
Single Region, 3-Node Replica Set / Shard deployment topology.
Our examples use AWS, Azure, and Google Cloud interchangeably. You can use any of these three cloud providers, but you must change the region name to match the cloud provider. To learn about the cloud providers and their regions, see Cloud Providers.
Note
Before you can configure connections with the Atlas CLI, you must:
Create your paying organization and create an API key for the paying organization.
Connect from the Atlas CLI using the steps for Programmatic Use.
Create an IP Access List Entry
Run the following command for each connection you want to allow. Change the entries to use the appropriate options and your actual values:
atlas accessLists create 192.0.2.15 --type ipAddress --projectId 5e2211c17a3e5a48f5497de3 --comment "IP address for app server 2" --output json
For more configuration options and information about this example, see atlas accessLists create.
For information on how to create an IP access list entry with AWS, GCP, and Azure, see Set Up a Private Endpoint for a Dedicated Cluster.
Create a VPC Peering Connection
Run the following command for each VPC you want to peer with your Atlas VPC. Replace aws with azure or gcp as appropriate, and change the options and values to the appropriate ones for your VPC or VNet:
atlas networking peering create aws --accountId 854333054055 --atlasCidrBlock 192.168.0.0/24 --region us-east-1 --routeTableCidrBlock 10.0.0.0/24 --vpcId vpc-078ac381aa90e1e63
For more configuration options and information about this example, see:
atlas networking peering create aws, for AWS VPCs
atlas networking peering create azure, for Microsoft Azure VNets
atlas networking peering create gcp, for Google Cloud VPCs
Create a Private Endpoint
Run the following command for each private endpoint you want to create. Replace aws with azure or gcp as appropriate, and change the options and values to the appropriate ones for your VPC or VNet:
atlas privateEndpoints aws create --region us-east-1 --projectId 5e2211c17a3e5a48f5497de3 --output json
For more configuration options and information about this example, see:
atlas privateEndpoints aws create, for connections from AWS VPCs
atlas privateEndpoints azure create, for connections from Microsoft Azure VNets
atlas privateEndpoints gcp create, for connections from GCP Private Service Connect
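After Atlas provisions the endpoint service, you typically create the interface endpoint in your own VPC and register it with Atlas. The following sketch assumes the atlas privateEndpoints aws interfaces create subcommand; the endpoint service ID, VPC endpoint ID, and project ID are placeholders:
atlas privateEndpoints aws interfaces create <endpoint-service-id> --privateEndpointId vpce-0123456789abcdef0 --projectId <project-id> --output json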
Note
Before you can create resources with Terraform, you must:
Create your paying organization and create an API key for the paying organization. Store your API key as environment variables by running the following command in the terminal:
export MONGODB_ATLAS_PUBLIC_KEY="<insert your public key here>"
export MONGODB_ATLAS_PRIVATE_KEY="<insert your private key here>"
We also suggest creating a workspace for your environment.
Create an IP Access List Entry
To add an entry to your IP access list, create the following file and place it in the directory of the project you want to grant access to. Change the IDs and names to use your values:
accessEntryForAddress1.tf
# Add an entry to your IP Access List
resource "mongodbatlas_access_list_api_key" "address_1" {
  org_id     = "<org-id>"
  ip_address = "2.3.4.5"
  api_key_id = "a29120e123cd"
}
After you create the files, navigate to your project directory and run the following command to initialize Terraform:
terraform init
Run the following command to view the Terraform plan:
terraform plan
Run the following command to add one entry to the IP access list for your project. The command uses the file and the MongoDB Atlas provider for HashiCorp Terraform to add the entry.
terraform apply
When prompted, type yes and press Enter to apply the configuration.
Create a VPC Peering Connection
To create a peering connection between your application VPC and your Atlas VPC, create the following file and place it in the directory of the project you want to grant access to. Change the IDs and names to use your values:
vpcConnection.tf
# Define your application VPC
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}

# Create the peering connection request
resource "mongodbatlas_network_peering" "mongo_peer" {
  accepter_region_name   = "us-east-2"
  project_id             = local.project_id
  container_id           = one(values(mongodbatlas_advanced_cluster.test.container_id))
  provider_name          = "AWS"
  route_table_cidr_block = "172.31.0.0/16"
  vpc_id                 = aws_default_vpc.default.id
  aws_account_id         = local.AWS_ACCOUNT_ID
}

# Accept the connection
resource "aws_vpc_peering_connection_accepter" "aws_peer" {
  vpc_peering_connection_id = mongodbatlas_network_peering.mongo_peer.connection_id
  auto_accept               = true

  tags = {
    Side = "Accepter"
  }
}
After you create the file, navigate to your project directory and run the following command to initialize Terraform:
terraform init
Run the following command to view the Terraform plan:
terraform plan
Run the following command to add a VPC peering connection from your application to your project. The command uses the file and the MongoDB Atlas provider for HashiCorp Terraform to add the connection.
terraform apply
When prompted, type yes and press Enter to apply the configuration.
Create a Private Link
To create a PrivateLink from your application VPC to your Atlas VPC, create the following file and place it in the directory of the project you want to connect to. Change the IDs and names to use your values:
privateLink.tf
resource "mongodbatlas_privatelink_endpoint" "test" { project_id = "<project-id>" provider_name = "AWS/AZURE" region = "US_EAST_1" timeouts { create = "30m" delete = "20m" } }
After you create the file, navigate to your project directory and run the following command to initialize Terraform:
terraform init
Run the following command to view the Terraform plan:
terraform plan
Run the following command to add a PrivateLink endpoint from your application to your project. The command uses the file and the MongoDB Atlas provider for HashiCorp Terraform to add the endpoint.
terraform apply
When prompted, type yes and press Enter to apply the configuration.