How to Install Plesk in a High-Availability Cluster

HA cluster: software configuration

Hosting-related system services

The software responsible for the HA cluster manages system services such as Plesk, nginx, and so on, so these services must not start automatically on the system. Disable autostart for the services in question on both nodes using the following command:

ha-node1 and ha-node2# for i in \
plesk-ip-remapping \
plesk-php74-fpm.service \
plesk-php82-fpm.service \
plesk-web-socket.service \
plesk-task-manager.service \
plesk-ssh-terminal.service \
plesk-repaird.socket \
sw-engine.service \
sw-cp-server.service \
psa.service \
cron.service \
xinetd.service \
nginx.service \
apache2.service httpd.service \
mariadb.service mysql.service postgresql.service \
named.service bind9.service named-chroot.service \
postfix.service; \
do systemctl disable $i && systemctl stop $i; done

In the output, you might see lines like “Failed to disable unit: Unit file bind9.service does not exist”. This is not a fatal error: the command lists different names for the same service on different operating systems (for example, CentOS versus Ubuntu), as well as names of alternative services that provide similar functionality (such as MySQL and MariaDB). If you have installed additional components such as “php80” with Plesk, you will also need to disable any services that those components add to the server.

You can run `ps ax` to double-check that no services related to Plesk or any of its components are still running.
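
For a quick confirmation, you can query systemd for a unit's state and filter the process list (a convenience sketch; extend the grep pattern to match the components you have installed):

ha-node1 and ha-node2# systemctl is-enabled nginx.service
disabled
ha-node1 and ha-node2# ps ax | grep -E 'plesk|sw-engine|sw-cp|psa' | grep -v grep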

Plesk files

In a previous blog post, you can see how to copy the Plesk “vhosts” directory to NFS storage. For an HA cluster, you will need to perform the same steps, plus a few extra ones, so that the rest of the Plesk directories are available to the nodes in the HA cluster.

On the NFS server, configure the “/var/nfs/plesk-ha/plesk_files” export the same way as “/var/nfs/plesk-ha/vhosts”. After configuration, you should see both directories available for remote mounting on your internal network.
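
If you have not created the second export yet, the corresponding /etc/exports entries might look like the following sketch (assuming the 10.0.0.0/24 internal network used throughout this guide); apply the changes with `exportfs -ra`:

ha-nfs# cat /etc/exports
/var/nfs/plesk-ha/vhosts       10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
/var/nfs/plesk-ha/plesk_files  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
ha-nfs# exportfs -ra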

ha-nfs# exportfs -v
/var/nfs/plesk-ha/vhosts
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/nfs/plesk-ha/plesk_files
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
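
From the cluster nodes, you can verify that both exports are visible (10.0.0.12 is the NFS server address used in this guide):

ha-node1 and ha-node2# showmount -e 10.0.0.12
Export list for 10.0.0.12:
/var/nfs/plesk-ha/plesk_files 10.0.0.0/24
/var/nfs/plesk-ha/vhosts      10.0.0.0/24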

Once the export is configured, the Plesk files need to be copied to the NFS storage. For this purpose, a mount point directory must be created in advance on each node:

ha-node1 and ha-node2# mkdir -p /nfs/plesk_files

File: vhosts

Since we previously designated ha-node1 as the active node, run the following commands on ha-node1 to copy the existing vhosts directory:

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/vhosts /mnt
ha-node1# cp -aRv /var/www/vhosts/* /mnt
ha-node1# umount /mnt

File: Plesk-related

Again, since ha-node1 is the active node, run the following commands on ha-node1:

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/plesk_files /nfs/plesk_files

ha-node1# mkdir -p /nfs/plesk_files/etc/{apache2,nginx,psa,sw,sw-cp-server,sw-engine,domainkeys,psa-webmail}
ha-node1# cp -a /etc/passwd /nfs/plesk_files/etc/
ha-node1# cp -aR /etc/apache2/. /nfs/plesk_files/etc/apache2
ha-node1# cp -aR /etc/nginx/. /nfs/plesk_files/etc/nginx
ha-node1# cp -aR /etc/psa/. /nfs/plesk_files/etc/psa
ha-node1# cp -aR /etc/sw/. /nfs/plesk_files/etc/sw
ha-node1# cp -aR /etc/sw-cp-server/. /nfs/plesk_files/etc/sw-cp-server
ha-node1# cp -aR /etc/sw-engine/. /nfs/plesk_files/etc/sw-engine
ha-node1# cp -aR /etc/domainkeys/. /nfs/plesk_files/etc/domainkeys
ha-node1# cp -aR /etc/psa-webmail/. /nfs/plesk_files/etc/psa-webmail

ha-node1# mkdir -p /nfs/plesk_files/var/{spool,named}
ha-node1# cp -aR /var/named/. /nfs/plesk_files/var/named
ha-node1# cp -aR /var/spool/. /nfs/plesk_files/var/spool

ha-node1# mkdir -p /nfs/plesk_files/opt/plesk/php/{7.4,8.2}/etc
ha-node1# cp -aR /opt/plesk/php/7.4/etc/. /nfs/plesk_files/opt/plesk/php/7.4/etc
ha-node1# cp -aR /opt/plesk/php/8.2/etc/. /nfs/plesk_files/opt/plesk/php/8.2/etc

ha-node1# mkdir -p /nfs/plesk_files/usr/local/psa/{admin/conf,admin/plib/modules,etc/modules,var/modules,var/certificates}
ha-node1# cp -aR /usr/local/psa/admin/conf/. /nfs/plesk_files/usr/local/psa/admin/conf
ha-node1# cp -aR /usr/local/psa/admin/plib/modules/. /nfs/plesk_files/usr/local/psa/admin/plib/modules
ha-node1# cp -aR /usr/local/psa/etc/modules/. /nfs/plesk_files/usr/local/psa/etc/modules
ha-node1# cp -aR /usr/local/psa/var/modules/. /nfs/plesk_files/usr/local/psa/var/modules
ha-node1# cp -aR /usr/local/psa/var/certificates/. /nfs/plesk_files/usr/local/psa/var/certificates
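
Before unmounting, you can spot-check that the copies are complete, for example:

ha-node1# diff -rq /etc/psa /nfs/plesk_files/etc/psa
ha-node1# diff -rq /usr/local/psa/admin/conf /nfs/plesk_files/usr/local/psa/admin/conf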

ha-node1# umount /nfs/plesk_files

Event handlers to keep /etc/passwd up to date

Whenever Plesk updates the system users, the copies of the passwd and group files in the NFS storage must be updated as well. To do this, we create event handlers for scenarios such as domain creation and subscription updates. Event handlers are stored in the Plesk database, so the following commands need to be run only on the active node.

Since we previously designated ha-node1 as the active node, run the following commands on ha-node1. Note that they copy only /etc/passwd; if your setup also relies on /etc/group, add matching handlers for it.

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event phys_hosting_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event ftpuser_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_delete
ha-node1# plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event site_subdomain_move
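
For brevity, the same ten handlers can also be created with a single shell loop (equivalent to the individual commands above):

ha-node1# for ev in phys_hosting_{create,update,delete} ftpuser_{create,update,delete} site_subdomain_{create,update,delete,move}; do \
    plesk bin event_handler --create -command "/bin/cp /etc/passwd /nfs/plesk_files/etc/passwd" -priority 50 -user root -event $ev; \
done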
