Deploying a Production-Level MongoDB Replica Set

Written by: Wanli Xing [email protected]

Date: January 21, 2025

「Environment Description」

  • Operating System: Anolis OS 8
  • Kernel: 4.18.0-477.13.1.0.1.an8.x86_64
  • MongoDB Version: 7.0.16
  • Dedicated User for Running Program: apprun (replace as needed; this document includes the user-creation steps)
  • Dedicated Directory for Running Program: /apprun (replace as needed and ensure the directory ownership is correct; this document includes the permission-change steps)

MongoDB binary file download link: https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel8-7.0.16.tgz

Mongosh binary file download link: https://downloads.mongodb.com/compass/mongosh-2.3.8-linux-x64.tgz

「Prepare three servers for deployment: Primary Node, Secondary Node and Arbiter Node」

「Important」

To avoid updating the configuration due to IP address changes, use DNS hostnames instead of IP addresses. This is particularly important when configuring replica set members or sharded cluster members.

In a split-horizon DNS configuration, use hostnames instead of IP addresses to configure the cluster. Starting in MongoDB 5.0, nodes configured only with IP addresses fail startup validation and will not start.

If there is no DNS, static resolution records can be added by editing /etc/hosts. For example:

172.21.48.101 mongo1
172.21.48.102 mongo2
172.21.48.103 mongo3
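If you maintain these records by hand, an idempotent append avoids duplicate entries when the script is re-run. A minimal sketch, writing to a local stand-in file (on a real server, use /etc/hosts as root):

```shell
# Sketch: add the static records only if they are not already present.
# HOSTS_FILE is a local stand-in; on a real server use /etc/hosts (as root).
HOSTS_FILE=./hosts.example
touch "$HOSTS_FILE"
for entry in "172.21.48.101 mongo1" "172.21.48.102 mongo2" "172.21.48.103 mongo3"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
wc -l < "$HOSTS_FILE"    # 3, even if the script is run more than once
```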

Step One: Preparation

  1. Download the MongoDB binary file provided in the links above to any directory:
wget -O mongodb.tgz  \
    https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel8-7.0.16.tgz

# or

curl -o mongodb.tgz \
    https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel8-7.0.16.tgz
  2. Create a directory for running mongodb:

All software for this document is placed in the /apprun directory and runs as the apprun user. Please ensure that /apprun exists and has the correct ownership:

id apprun || useradd apprun      # create the user only if it does not exist
                                 # A `/sbin/nologin` shell is recommended: this
                                 # document deploys as the apprun user, so for
                                 # security the shell can be switched to
                                 # `/sbin/nologin` after deployment.
passwd apprun                

mkdir -p /apprun
chown apprun:apprun /apprun
  3. Extract the downloaded mongodb binary archive (note the key directory, needed for the keyfile in Step Two):
tar -xzf mongodb.tgz
mv mongodb-linux-x86_64-rhel8-7.0.16 /apprun/mongodb-27017
mkdir -v /apprun/mongodb-27017/{config,data,key,logs,run}
chown -R apprun:apprun /apprun/mongodb-27017

Step Two: Generate KEYFILE

Create a keyfile for intra-cluster authentication:

su - apprun     # Switch to the apprun user first
cd /apprun/mongodb-27017
openssl rand -base64 756 > ${PWD}/key/mongod.key
chmod 400 ${PWD}/key/mongod.key
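Before distributing the keyfile it is worth verifying its size and permissions, since mongod refuses keyfiles that are group- or world-readable. A sketch using a local stand-in path (the real path is /apprun/mongodb-27017/key/mongod.key):

```shell
# Sketch: generate and verify a keyfile before distributing it.
KEYFILE=./mongod.key.example
openssl rand -base64 756 > "$KEYFILE"
chmod 400 "$KEYFILE"
stat -c '%a' "$KEYFILE"    # 400 -- mongod rejects more permissive modes
wc -c < "$KEYFILE"         # 1024 bytes (1008 base64 chars + 16 newlines)
```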

Step Three: Generate Configuration File

Use --outputConfig to convert command-line options and parameters into a YAML-style configuration file; the output needs a few edits before it is usable:

「Tips」: You can run mongod --help for details on each option.

# Example for mongo1 node
./bin/mongod --port 27017 \
    --pidfilepath ${PWD}/run/mongod.pid \
    --timeZoneInfo /usr/share/zoneinfo \
    --unixSocketPrefix=${PWD}/run \
    --networkMessageCompressors=zstd \
    --fork \
    --logpath=${PWD}/logs/mongod.log \
    --logappend \
    --logRotate=rename \
    --timeStampFormat iso8601-local \
    --bind_ip localhost,mongo1 \
    --slowms 200 \
    --slowOpSampleRate 0.2 \
    --auth \
    --keyFile=${PWD}/key/mongod.key \
    --clusterAuthMode=keyFile \
    --replSet=rs0 \
    --enableMajorityReadConcern=1 \
    --storageEngine=wiredTiger \
    --dbpath=${PWD}/data --outputConfig > ${PWD}/config/mongod.conf

The generated raw file is as follows:

# Example for mongo1 node
net:
  bindIp: localhost,mongo1
  compression:
    compressors: zstd
  port: 27017
  unixDomainSocket:
    pathPrefix: /apprun/mongodb-27017/run
operationProfiling:
  slowOpSampleRate: 0.2
  slowOpThresholdMs: 200
outputConfig: true                     # Corresponds to the --outputConfig option; delete this line
processManagement:
  fork: true
  pidFilePath: /apprun/mongodb-27017/run/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
replication:
  enableMajorityReadConcern: true
  replSet: rs0                         # Corresponds to --replSet=rs0; rename the key to `replSetName`
security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /apprun/mongodb-27017/key/mongod.key
storage:
  dbPath: /apprun/mongodb-27017/data
  engine: wiredTiger
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /apprun/mongodb-27017/logs/mongod.log
  timeStampFormat: iso8601-local
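The two required edits (delete the outputConfig line, rename replSet to replSetName) can also be scripted. A sketch against a stand-in file (the real file is /apprun/mongodb-27017/config/mongod.conf):

```shell
# Sketch: apply the two edits with sed against a stand-in config fragment.
CONF=./mongod.conf.example
printf 'outputConfig: true\nreplication:\n  replSet: rs0\n' > "$CONF"
sed -i -e '/^outputConfig/d' -e 's/replSet:/replSetName:/' "$CONF"
cat "$CONF"
```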

The final configuration is as follows:

net:
  bindIp: localhost,mongo1
  compression:
    compressors: zstd
  port: 27017
  unixDomainSocket:
    pathPrefix: /apprun/mongodb-27017/run
operationProfiling:
  slowOpSampleRate: 0.2
  slowOpThresholdMs: 200
processManagement:
  fork: true
  pidFilePath: /apprun/mongodb-27017/run/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
replication:
  enableMajorityReadConcern: true
  replSetName: rs0
security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /apprun/mongodb-27017/key/mongod.key
storage:
  dbPath: /apprun/mongodb-27017/data
  engine: wiredTiger
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /apprun/mongodb-27017/logs/mongod.log
  timeStampFormat: iso8601-local

Step Four: Validate Configuration File and Create systemd Unit File

  1. Start mongod from the command line to test the configuration:
./bin/mongod -f ./config/mongod.conf

Check the process status:

$ ps --forest -C mongod -o pid,user,cmd

  PID USER     CMD
  1255547 apprun   ./bin/mongod -f ./config/mongod.conf

  2. Create the unit file using the root account:
su - root
vim /etc/systemd/system/mongod.service
  3. The unit file content is as follows:
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target

[Service]
User=apprun
Group=apprun
Environment="OPTIONS=-f /apprun/mongodb-27017/config/mongod.conf"
Environment="MONGODB_CONFIG_OVERRIDE_NOFORK=1"
#EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/apprun/mongodb-27017/bin/mongod $OPTIONS
RuntimeDirectory=mongodb
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings

[Install]
WantedBy=multi-user.target
  4. Control mongod using the unit file:
  • First, terminate the old mongod process:
kill $(cat /apprun/mongodb-27017/run/mongod.pid)
  • If the apprun user has sudo privileges, run:
sudo systemctl daemon-reload
sudo systemctl enable --now mongod.service
  • If the apprun user cannot use sudo, run as the root account:
systemctl daemon-reload
systemctl enable --now mongod.service
  • Check the mongod service status:
systemctl status mongod.service 
  • Example:
$ systemctl status mongod.service 
● mongod.service - MongoDB Database Server
   Loaded: loaded (/etc/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2025-01-21 15:07:33 CST; 1s ago
     Docs: https://docs.mongodb.org/manual
 Main PID: 1262262 (mongod)
   Memory: 259.2M
   CGroup: /system.slice/mongod.service
           └─1262262 /apprun/mongodb-27017/bin/mongod -f /apprun/mongodb-27017/config/mongod.conf

If everything goes smoothly, deploy the other two nodes in the same way.

Note: the whole replica set shares a single keyfile. It was already generated on the first server during deployment; simply copy it to the other two servers, making sure the file permissions and ownership are correct.

Example:

# Adjust the target hosts and paths to match your environment
for host in mongo2 mongo3; do
    scp /apprun/mongodb-27017/key/mongod.key \
        apprun@${host}:/apprun/mongodb-27017/key/mongod.key
done
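After copying, it is worth confirming that every copy is byte-identical to the original. On the real servers you would compare md5sum output over ssh; a sketch with local stand-in files:

```shell
# Sketch: compare checksums of the original keyfile and a copy.
# Two local files stand in for the remote hosts; in practice run
#   ssh apprun@mongo2 md5sum /apprun/mongodb-27017/key/mongod.key
# and compare against the local checksum.
openssl rand -base64 756 > key.orig
cp key.orig key.copy
[ "$(md5sum < key.orig)" = "$(md5sum < key.copy)" ] && echo "keyfile copies match"
```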

Step Five: Build Cluster

  1. Download mongosh to one of the mongo servers and extract it:
tar -xf mongosh-2.3.8-linux-x64.tgz
mv mongosh-2.3.8-linux-x64/bin/* /apprun/mongodb-27017/bin/
  2. Stop all mongod instances and modify the configuration as in the following example to temporarily disable authentication:
net:
  bindIp: localhost,mongo1
  compression:
    compressors: zstd
  port: 27017
  unixDomainSocket:
    pathPrefix: /apprun/mongodb-27017/run
operationProfiling:
  slowOpSampleRate: 0.2
  slowOpThresholdMs: 200
processManagement:
  fork: true
  pidFilePath: /apprun/mongodb-27017/run/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
replication:
  enableMajorityReadConcern: true
  replSetName: rs0
# security:
#   authorization: enabled
#   clusterAuthMode: keyFile
#   keyFile: /apprun/mongodb-27017/key/mongod.key
storage:
  dbPath: /apprun/mongodb-27017/data
  engine: wiredTiger
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /apprun/mongodb-27017/logs/mongod.log
  timeStampFormat: iso8601-local
  • Restart the mongod service:
systemctl restart mongod.service
  3. On the mongo server expected to become the PRIMARY node, execute:
./bin/mongosh mongo1
  4. Initialize:
rs.initiate({_id: "rs0",members: [{ _id: 0 , host: "mongo1:27017" }]})
  5. After the command returns, press Enter again and the prompt shows that the current node has become PRIMARY:
mongo1> rs.initiate({_id: "rs0",members: [{ _id: 0 , host: "mongo1:27017" }]})
{ ok: 1 }
rs0 [direct: other] mongo1> 

rs0 [direct: primary] mongo1> 
  6. Add a secondary node:
rs.add("mongo2:27017")
rs0 [direct: primary] mongo1> rs.add("mongo2:27017")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1737448647, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1737448647, i: 1 })
}
  7. Add an arbiter node:
rs.addArb("mongo3:27017")
rs0 [direct: primary] admin> rs.addArb("mongo3:27017")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1737449112, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1737449112, i: 1 })
}

「Note」 If adding the arbiter prompts the following error:

MongoServerError[NewReplicaSetConfigurationIncompatible]: Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again.

Execute the following command, then add the arbiter node again:

use admin

db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: "majority" }
})
  8. Final cluster status:
rs0 [direct: primary] admin> rs.status()
{
set: 'rs0',
  date: ISODate('2025-01-21T08:45:43.798Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2025-01-21T08:45:39.404Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2025-01-21T08:45:39.404Z'),
    lastDurableWallTime: ISODate('2025-01-21T08:45:39.404Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1737449122, i: 2 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2025-01-21T08:26:29.068Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1737447988, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1737447988, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2025-01-21T08:26:29.278Z'),
    wMajorityWriteAvailabilityDate: ISODate('2025-01-21T08:26:29.399Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1223,
      optime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2025-01-21T08:45:39.000Z'),
      lastAppliedWallTime: ISODate('2025-01-21T08:45:39.404Z'),
      lastDurableWallTime: ISODate('2025-01-21T08:45:39.404Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1737447989, i: 1 }),
      electionDate: ISODate('2025-01-21T08:26:29.000Z'),
      configVersion: 4,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 495,
      optime: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1737449139, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2025-01-21T08:45:39.000Z'),
      optimeDurableDate: ISODate('2025-01-21T08:45:39.000Z'),
      lastAppliedWallTime: ISODate('2025-01-21T08:45:39.404Z'),
      lastDurableWallTime: ISODate('2025-01-21T08:45:39.404Z'),
      lastHeartbeat: ISODate('2025-01-21T08:45:42.316Z'),
      lastHeartbeatRecv: ISODate('2025-01-21T08:45:42.316Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo1:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 4,
      configTerm: 1
    },
    {
      _id: 2,
      name: 'mongo3:27017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 31,
      lastHeartbeat: ISODate('2025-01-21T08:45:42.971Z'),
      lastHeartbeatRecv: ISODate('2025-01-21T08:45:42.970Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 1
    }
  ],
  ok: 1,
'$clusterTime': {
    clusterTime: Timestamp({ t: 1737449139, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1737449139, i: 1 })
}
  9. Create a MongoDB super admin account:
rs0 [direct: primary] test> use admin
  • Execute the following commands:

「Note」: Make the password as complex as possible: mix uppercase and lowercase letters, digits, and special characters, and use at least 8 characters.
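One way to generate a random password that satisfies the length recommendation (a sketch, not required; base64 output mixes upper/lower case, digits, and the `+` and `/` characters):

```shell
# Sketch: generate a 32-character random password with openssl.
PASS=$(openssl rand -base64 24 | tr -d '\n')
echo "${#PASS}"    # 32 characters
```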

Create user administrator

admin = db.getSiblingDB("admin")
admin.createUser(
  {
    user: "root",
    pwd: "THE_ROOT_PASSWORD", 
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

Create cluster administrator

db.getSiblingDB("admin").createUser(
  {
    "user" : "admin",
    "pwd" :  "THE_ADMIN_PASSWORD",    
    roles: [ { "role" : "clusterAdmin", "db" : "admin" } ] 
  }
)
  10. Restore the configuration file and restart the MongoDB cluster

Uncomment the security section and restart the servers in this order: secondary node and arbiter first, then the primary node.

# Example configuration for mongo2 node
net:
  bindIp: localhost,mongo2
  compression:
    compressors: zstd
  port: 27017
  unixDomainSocket:
    pathPrefix: /apprun/mongodb-27017/run
operationProfiling:
  slowOpSampleRate: 0.2
  slowOpThresholdMs: 200
processManagement:
  fork: true
  pidFilePath: /apprun/mongodb-27017/run/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
replication:
  enableMajorityReadConcern: true
  replSetName: rs0
security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /apprun/mongodb-27017/key/mongod.key
storage:
  dbPath: /apprun/mongodb-27017/data
  engine: wiredTiger
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: /apprun/mongodb-27017/logs/mongod.log
  timeStampFormat: iso8601-local

Step Six: Testing

  • Log in as cluster administrator:
./bin/mongosh mongodb://mongo1:27017 --username admin --password

Enter password: ********************
Current Mongosh Log ID: 678f6b30872123953e544ca6
Connecting to:          mongodb://<credentials>@mongo1:27017/?directConnection=true&appName=mongosh+2.3.8
Using MongoDB:          7.0.16
Using Mongosh:          2.3.8

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

------
   The server generated these startup warnings when booting
   2025-01-21T17:38:36.932+08:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never' in this binary version
   2025-01-21T17:38:36.932+08:00: vm.max_map_count is too low
------

rs0 [direct: secondary] test> rs.status()
{
set: 'rs0',
  date: ISODate('2025-01-21T09:39:01.509Z'),
  myState: 2,
  term: Long('9'),
  syncSourceHost: 'mongo2:27017',
  syncSourceId: 1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
    lastCommittedWallTime: ISODate('2025-01-21T09:38:57.817Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
    appliedOpTime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
    durableOpTime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
    lastAppliedWallTime: ISODate('2025-01-21T09:38:57.817Z'),
    lastDurableWallTime: ISODate('2025-01-21T09:38:57.817Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1737452242, i: 1 }),
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 26,
      optime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
      optimeDate: ISODate('2025-01-21T09:38:57.000Z'),
      lastAppliedWallTime: ISODate('2025-01-21T09:38:57.817Z'),
      lastDurableWallTime: ISODate('2025-01-21T09:38:57.817Z'),
      syncSourceHost: 'mongo2:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 9,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 24,
      optime: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
      optimeDurable: { ts: Timestamp({ t: 1737452337, i: 1 }), t: Long('9') },
      optimeDate: ISODate('2025-01-21T09:38:57.000Z'),
      optimeDurableDate: ISODate('2025-01-21T09:38:57.000Z'),
      lastAppliedWallTime: ISODate('2025-01-21T09:38:57.817Z'),
      lastDurableWallTime: ISODate('2025-01-21T09:38:57.817Z'),
      lastHeartbeat: ISODate('2025-01-21T09:39:00.357Z'),
      lastHeartbeatRecv: ISODate('2025-01-21T09:39:01.409Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1737452297, i: 1 }),
      electionDate: ISODate('2025-01-21T09:38:17.000Z'),
      configVersion: 4,
      configTerm: 9
    },
    {
      _id: 2,
      name: 'mongo3:27017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 24,
      lastHeartbeat: ISODate('2025-01-21T09:39:00.356Z'),
      lastHeartbeatRecv: ISODate('2025-01-21T09:39:01.409Z'),
      pingMs: Long('2'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 4,
      configTerm: 9
    }
  ],
  ok: 1,
'$clusterTime': {
    clusterTime: Timestamp({ t: 1737452337, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('3YacVBI5ntPDfnJOL6S/BKfi+SU=', 0),
      keyId: Long('7462282291255967750')
    }
  },
  operationTime: Timestamp({ t: 1737452337, i: 1 })
}
  • Optional: restore mongo1 as the primary node by asking the current primary to step down:
./bin/mongosh mongodb://mongo2:27017 --username admin --password
rs.stepDown()

Your support is my motivation. If you like this article, please 「like」, 「bookmark」 and 「share」 it with others.
