In this post I will walk you through installing a three-node test Percona XtraDB Cluster (PXC) environment using VirtualBox. I will be using VirtualBox 4.3.26, and CentOS 6.6 as the operating system on all three nodes. To get started, perform a minimal install of CentOS 6.6 on three virtual machines. I named mine pxc-node1, pxc-node2, and pxc-node3. I also set up port forwarding in VirtualBox so I can ssh to each machine via a local port on the host: 22201 points to pxc-node1, 22202 to pxc-node2, and 22203 to pxc-node3. The last thing I did was add a second host-only network interface to each node, making sure the same host-only network name was used on all of them. You can do all of this work yourself, or simply download an already configured appliance from here.
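If you prefer to script the VirtualBox side, the port forwarding and the second NIC can be configured with VBoxManage while the VMs are powered off. Here is a sketch for pxc-node1 (repeat for the other nodes with their own ports); it assumes your host-only network is the default vboxnet0, so adjust the name to match your setup.
VBoxManage modifyvm "pxc-node1" --natpf1 "ssh,tcp,127.0.0.1,22201,,22"
VBoxManage modifyvm "pxc-node1" --nic2 hostonly --hostonlyadapter2 vboxnet0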
Now that we have the images created, it is time to discuss the plan for installing our Percona XtraDB Cluster. The actual installation process is very similar to the one I used for installing Percona Server. But once we have the binaries laid down, the configuration of the instances will be different, and we will need to start the first node of the cluster in a special way. While this quick overview does not give a lot of detail, I think it best to learn by doing. With that in mind, here we go.
First, ensure that all of the nodes are running. To keep things simple we will be installing as root, but I have done installs at work on RHEL 6 using sudo and the process is pretty much identical. The first thing we need to do is enable the network interfaces on each machine and disable SELinux. Galera does not work with SELinux, and since XtraDB Cluster is Percona's Galera implementation, we will need to disable it. In the console on each of the nodes, run the following commands.
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
You will want to change ONBOOT to yes. The file should look like this when finished.
DEVICE=eth0
HWADDR=08:00:27:24:FD:19
TYPE=Ethernet
UUID=b3c128fe-335e-437d-bf3b-32f3706d9273
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
Now save the file. We also need to add a config file for the secondary NIC, which we do not want on DHCP but on static IP addresses. We will use 192.168.70.11 for pxc-node1, 192.168.70.12 for pxc-node2, and 192.168.70.13 for pxc-node3. The contents of /etc/sysconfig/network-scripts/ifcfg-eth1 for pxc-node1 would look like this.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.70.11
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
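Save the file. Once the matching file is in place on a node (each with its own IP address), you can bring the interface up and check that it took the address; the ping will only succeed once the other nodes have their eth1 configured as well.
[root@localhost ~]# ifup eth1
[root@localhost ~]# ip addr show eth1
[root@localhost ~]# ping -c 2 192.168.70.12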
To disable SELinux we need to run.
[root@localhost ~]# vi /etc/selinux/config
You will want to set it to disabled. The file should look like this when finished.
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Now save the file. Note that the disabled setting only takes effect after a reboot, which we will do at the end of this prep work. We will also stop the firewall and set it not to start on boot. In production we would want to add rules to allow the cluster traffic through, but this is just a test environment so we'll take the shortcut.
[root@localhost ~]# service iptables stop
[root@localhost ~]# chkconfig iptables off
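For reference, if you would rather keep the firewall running, the rules need to allow the standard PXC ports: 3306 for MySQL, 4567 for Galera group communication (TCP and UDP), 4568 for IST, and 4444 for SST. A rough sketch, scoped to the host-only subnet we set up above, might look like the following; adjust it to your own policy before using it anywhere real.
[root@localhost ~]# iptables -A INPUT -s 192.168.70.0/24 -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
[root@localhost ~]# iptables -A INPUT -s 192.168.70.0/24 -p udp --dport 4567 -j ACCEPT
[root@localhost ~]# service iptables save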
To make it easier to know which system you are on, I modify the shell prompt to show the node name. To do that, edit the bash configuration file using.
vi /etc/bashrc
Find the line that says
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
Comment that line out and add the following
PS1='\u@pxc-node1:\w\$ '
The section should look like this when finished. Then save the file.
# Turn on checkwinsize
shopt -s checkwinsize
# [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
PS1='\u@pxc-node1:\w\$ '
After making these changes you will want to restart the node using the following command and then perform the same steps on the other two nodes, substituting each node's own name in the prompt and its own static IP address for eth1.
[root@localhost ~]# shutdown -r 0
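When the nodes come back up, it is worth a quick check that the prep stuck: getenforce should report Disabled, the iptables status should show the firewall is not running, and eth1 should be carrying its static 192.168.70.x address.
root@pxc-node1:~# getenforce
root@pxc-node1:~# service iptables status
root@pxc-node1:~# ip addr show eth1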
Once we are finished with the prep of the three nodes we are ready to start the installation of the PXC cluster. First we want to make sure the MySQL libraries are not already on the box.
root@pxc-node1:~# yum -y remove mysql-libs
Then we want to install the EPEL and Percona repositories.
root@pxc-node1:~# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
root@pxc-node1:~# rpm -Uvh http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
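If you want to confirm the new repositories are active before installing anything, a quick check is the following; you should see both epel and the Percona repository listed.
root@pxc-node1:~# yum repolist enabled | grep -i -E 'epel|percona'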
Now that we have the repositories registered it is time to lay down the binaries.
root@pxc-node1:~# yum install -y socat
root@pxc-node1:~# yum install -y Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-shared-56 percona-toolkit percona-xtrabackup
root@pxc-node1:~# touch /etc/my.cnf
root@pxc-node1:~# /usr/bin/mysql_install_db --defaults-file=/etc/my.cnf --force --datadir=/var/lib/mysql --basedir=/usr/ --user=mysql
You will need to run these same steps on all three nodes to install the binaries. The mysql_install_db step also creates the data directory and populates it with the system tables.
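Before moving on, you can verify that the packages landed and that the Galera library we will point wsrep_provider at in a moment is in place.
root@pxc-node1:~# rpm -qa | grep -i percona
root@pxc-node1:~# ls -l /usr/lib64/libgalera_smm.so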
Now that we have the binaries on each machine it is time to set up the cluster. This involves editing the /etc/my.cnf file. To start with, let’s configure the first node in our cluster. On pxc-node1 edit the /etc/my.cnf file so that it looks like this.
[mysqld]
datadir = /var/lib/mysql
log_error = error.log
log-bin
server-id = 1
query_cache_size=0
query_cache_type=0
innodb_buffer_pool_size = 48M
innodb_log_file_size = 24M
innodb_flush_method = O_DIRECT
innodb_file_per_table
innodb_flush_log_at_trx_commit = 0
performance_schema=OFF
binlog_format = ROW
# galera settings
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://192.168.70.11,192.168.70.12,192.168.70.13
wsrep_node_name = node1
wsrep_node_address = 192.168.70.11
wsrep_sst_auth = sst:secret
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = ON
[mysql]
prompt = "node1 mysql> "
[client]
user = root
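Before starting anything, it doesn't hurt to confirm that mysqld will actually pick these settings up. my_print_defaults reads the named group from the standard option files, including /etc/my.cnf, and you should see the settings above echoed back one per line.
root@pxc-node1:~# my_print_defaults mysqld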
If we try to start the cluster now using the same method as for a normal MySQL instance, it will fail.
root@pxc-node1:~# service mysql start
Starting MySQL (Percona XtraDB Cluster)................................... ERROR! The server quit without updating PID file (/var/lib/mysql/localhost.localdomain.pid).
ERROR! MySQL (Percona XtraDB Cluster) server startup failed!
This is because, as a safety feature, the first node in a cluster has to be forced online, since a single node cannot form a quorum on its own. To force it online run the following.
root@pxc-node1:~# service mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster)...... SUCCESS!
You may see a message similar to the following since we just tried to start the cluster without bootstrapping first.
root@pxc-node1:~# service mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster) ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)......... SUCCESS!
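Either way, you can confirm that the bootstrapped node is up and healthy by checking a couple of wsrep status variables (this relies on the root client defaults in /etc/my.cnf). You should see wsrep_cluster_size at 1 and wsrep_local_state_comment reporting Synced.
root@pxc-node1:~# mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_local_state_comment');"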
You now have a cluster up and running with one node. Before we add any more nodes, let's put some things in place to verify the cluster once it is up. We will install sysbench so we can drive a load at the cluster, and myq_tools so we can watch the flow of transactions through it. Run this on all three nodes to set up sysbench and myq_tools and get them ready for our use. (The mysql steps will only succeed on node 1 for now, since MySQL is not yet running on the other two nodes; the test user will be copied over when they join the cluster.)
root@pxc-node1:~# yum install -y sysbench
root@pxc-node1:~# mysql
node1 mysql> CREATE USER 'test'@'localhost' IDENTIFIED BY 'test';
node1 mysql> GRANT ALL PRIVILEGES ON test.* TO 'test'@'localhost';
node1 mysql> \q
root@pxc-node1:~# yum install -y wget
root@pxc-node1:~# cd /usr/local/bin
root@pxc-node1:/usr/local/bin# wget https://github.com/jayjanssen/myq-tools/releases/download/v0.5/myq_tools.tgz
root@pxc-node1:/usr/local/bin# tar -xzvf myq_tools.tgz
root@pxc-node1:/usr/local/bin# cd bin
root@pxc-node1:/usr/local/bin/bin# mv * ../
root@pxc-node1:/usr/local/bin/bin# cd ..
root@pxc-node1:/usr/local/bin# rm -rf bin
root@pxc-node1:/usr/local/bin# ln -s /usr/local/bin/myq_status.linux-amd64 myq_status
root@pxc-node1:/usr/local/bin# vi /usr/local/bin/run_sysbench_oltp.sh
#!/bin/bash
sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=test --mysql-password=test --mysql-db=test --mysql-host=localhost --mysql-ignore-errors=all --oltp-tables-count=1 --oltp-table-size=250000 --oltp-auto-inc=off --num-threads=1 --report-interval=1 --max-requests=0 --tx-rate=10 run | grep tps
root@pxc-node1:/usr/local/bin# chmod +x run_sysbench_oltp.sh
root@pxc-node1:/usr/local/bin# sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=test --mysql-password=test --mysql-db=test --mysql-host=localhost --mysql-ignore-errors=all --oltp-table-size=250000 --num-threads=1 prepare
You can verify that you have sysbench set up correctly by running the following. You should not see any errors, and there should be numbers in the reads and writes columns.
root@pxc-node1:/usr/local/bin# run_sysbench_oltp.sh
[ 1s] threads: 1, tps: 10.99, reads: 153.85, writes: 43.96, response time: 12.64ms (95%), errors: 0.00, reconnects: 0.00
[ 2s] threads: 1, tps: 17.02, reads: 238.28, writes: 68.08, response time: 12.84ms (95%), errors: 0.00, reconnects: 0.00
[ 3s] threads: 1, tps: 10.00, reads: 153.02, writes: 40.01, response time: 10.10ms (95%), errors: 0.00, reconnects: 0.00
[ 4s] threads: 1, tps: 19.00, reads: 253.00, writes: 76.00, response time: 10.26ms (95%), errors: 0.00, reconnects: 0.00
[ 5s] threads: 1, tps: 9.00, reads: 125.99, writes: 36.00, response time: 11.42ms (95%), errors: 0.00, reconnects: 0.00
[ 6s] threads: 1, tps: 6.00, reads: 83.98, writes: 23.99, response time: 12.19ms (95%), errors: 0.00, reconnects: 0.00
[ 7s] threads: 1, tps: 6.00, reads: 84.01, writes: 24.00, response time: 63.33ms (95%), errors: 0.00, reconnects: 0.00
Now that we have the cluster running we need to add the other two nodes. But before we do that, we need to create a user with permissions for the SST process. We'll talk more about SST in the future, but for now it's important to know that this process allows the other nodes to get a copy of the database when they join the cluster. To do that we need to run the following on node 1.
root@pxc-node1:/usr/local/bin# mysql
node1 mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';
node1 mysql> \q
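You can double-check that the grant lines up with the wsrep_sst_auth = sst:secret setting in /etc/my.cnf.
root@pxc-node1:/usr/local/bin# mysql -e "SHOW GRANTS FOR 'sst'@'localhost';"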
Since we already have the binaries installed, we just need to configure the other two instances and start mysql on those nodes. The instructions for node 2 are below. Notice that we do not bootstrap these instances. In fact, if we did bootstrap them we would end up with three separate clusters instead of one.
root@pxc-node2:~# vi /etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
log_error = error.log
log-bin
server-id = 2
query_cache_size=0
query_cache_type=0
innodb_buffer_pool_size = 48M
innodb_log_file_size = 24M
innodb_flush_method = O_DIRECT
innodb_file_per_table
innodb_flush_log_at_trx_commit = 0
performance_schema=OFF
binlog_format = ROW
# galera settings
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://192.168.70.11,192.168.70.12,192.168.70.13
wsrep_node_name = node2
wsrep_node_address = 192.168.70.12
wsrep_sst_auth = sst:secret
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = ON
[mysql]
prompt = "node2 mysql> "
[client]
user = root
root@pxc-node2:~# service mysql restart
Shutting down MySQL (Percona XtraDB Cluster).. SUCCESS!
Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher
... SUCCESS!
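At this point it is worth confirming that node 2 really joined and that writes flow across the cluster. A quick sketch: check the cluster size on node 2, kick off the sysbench script on node 1, and watch the wsrep view of myq_status on node 2 (this assumes myq_status can connect as root with no password, which is the default on these fresh installs). You should see wsrep_cluster_size at 2 and replicated writes ticking up on node 2 while sysbench runs on node 1.
root@pxc-node2:~# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
root@pxc-node1:/usr/local/bin# run_sysbench_oltp.sh
root@pxc-node2:~# myq_status wsrep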
At this point we now have a two node cluster. To add the last node you need to modify /etc/my.cnf on that node and then start the mysql service. You can use the configuration file for node 2 as an example; you will need to change the server-id, wsrep_node_name, wsrep_node_address, and prompt to the values for pxc-node3.
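For reference, these are the only lines that change from the node 2 configuration.
server-id = 3
wsrep_node_name = node3
wsrep_node_address = 192.168.70.13
prompt = "node3 mysql> "
Then start the service on pxc-node3 (or restart it, as we did on node 2, if it is already running).
root@pxc-node3:~# service mysql start
Once the state transfer completes you will have a full three-node cluster. If you have any issues let me know.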
In a future post, I will show you how to set up HAProxy to provide a single connection (actually we will use 2 connections but more on that later) for your applications. We will also play around with some of the more interesting features of Galera and Percona XtraDB Cluster.