# Percona
This guide will show how to install the Percona XtraDB Cluster on three CentOS servers, using the packages from Percona repositories.
The cluster will be assembled from three servers/nodes:

- node #1: hostname percona1, IP 192.168.70.71
- node #2: hostname percona2, IP 192.168.70.72
- node #3: hostname percona3, IP 192.168.70.73
### Prerequisites
- All three nodes have a CentOS 6 installation.
- The firewall has been set up to allow connections to ports 3306, 4444, 4567 and 4568.
- SELinux is disabled (example commands for the firewall and SELinux items follow this list).
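These prerequisites are not covered by the steps below. A minimal sketch of how they might be handled on CentOS 6 follows, assuming the stock iptables firewall is in use (adapt the rules to your own firewall policy):

```
# Open the ports used by the cluster for incoming TCP connections:
# 3306 (MySQL), 4444 (SST), 4567 (group communication), 4568 (IST).
for port in 3306 4444 4567 4568; do
    iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
done
service iptables save

# Switch SELinux to permissive for the current session and disable it
# persistently; the persistent change takes effect after a reboot.
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```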
On all three nodes execute the commands:
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
rpm --import http://www.percona.com/redir/downloads/percona-release/RPM-GPG-KEY-percona
rpm -e --nodeps mysql-libs
yum -y install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-client-56 Percona-XtraDB-Cluster-galera-3
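As a quick check (not part of the original steps), you can confirm on each node that the packages were installed:

```
# The server, client and Galera packages installed above should all be listed.
rpm -qa | grep -i percona-xtradb-cluster
```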
Individual nodes should be configured to be able to bootstrap the cluster. On all three nodes execute the following commands (substitute the value of IPLIST with the actual comma-separated IPs of your cluster; the MYIP line below assumes the cluster interface is eth0, so adjust it if your nodes use a different interface):
export IPLIST="192.168.70.71,192.168.70.72,192.168.70.73"
export MYIP=`ip -f inet addr show eth0 | grep inet | awk '{print $2}'|awk -F"/" '{print $1}'`
export CLUSTER_NAME="mysql_cluster"
cat << EOF > /etc/my.cnf
[mysqld]
#set-variable = max_connect_errors=999999999
datadir=/var/lib/mysql
user=mysql
innodb_log_file_size=64M
innodb_file_per_table=1
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://$IPLIST
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This is a recommended tuning variable for performance
innodb_locks_unsafe_for_binlog=1
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# This node's address
wsrep_node_address=$MYIP
# SST method
wsrep_sst_method=xtrabackup
# Cluster name
wsrep_cluster_name=$CLUSTER_NAME
# Authentication for SST method
wsrep_sst_auth="SSTUSER_NAME:SSTUSER_PASSWORD"
EOF
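Because the heredoc above is unquoted, the shell expands $IPLIST, $MYIP and $CLUSTER_NAME into the generated file. As a sanity check (not part of the original steps), verify on each node that the expected values ended up in /etc/my.cnf:

```
# wsrep_cluster_address should list all three IPs, wsrep_node_address should be
# this node's own IP, and wsrep_cluster_name should match CLUSTER_NAME.
grep -E 'wsrep_cluster_address|wsrep_node_address|wsrep_cluster_name' /etc/my.cnf
```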
Temporarily choose one node as the primary. On that node, execute the command:
/etc/init.d/mysql bootstrap-pxc
This command will start the cluster with the initial wsrep_cluster_address set to gcomm://. This way the cluster will be bootstrapped, and in case the node or MySQL has to be restarted later, there will be no need to change the configuration file.
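At this point the bootstrapped node should report a Primary cluster with a single member. Assuming the fresh installation still has an empty root password, this can be verified from the shell (a check that is not part of the original steps):

```
# wsrep_cluster_size should be 1 and wsrep_cluster_status should be Primary.
mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster%';"
```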
Change the MySQL root password:
mysqladmin -u root password 'NEW_PASSWORD'
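From this point on, the new password is required when connecting to the server. The statements in the next step can be entered from an interactive client session on the primary node, for example:

```
# Open a mysql client session as root; enter NEW_PASSWORD when prompted.
mysql -u root -p
```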
In order to perform a successful State Snapshot Transfer using XtraBackup, a new user needs to be set up with the proper privileges:
mysql> CREATE USER 'SSTUSER_NAME'@'localhost' IDENTIFIED BY 'SSTUSER_PASSWORD';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'SSTUSER_NAME'@'localhost';
mysql> FLUSH PRIVILEGES;
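Optionally (this check is not part of the original steps), confirm that the SST user can authenticate locally and has the expected grants, substituting the placeholder name and password chosen above:

```
# Should print the RELOAD, LOCK TABLES and REPLICATION CLIENT grants.
mysql -u SSTUSER_NAME -pSSTUSER_PASSWORD -e "SHOW GRANTS;"
```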
Make sure the /etc/my.cnf file is in place on each server of the cluster (it was generated on all three nodes in an earlier step), then start the mysql service on the remaining two nodes with the command:
service mysql start
After the server has been started, it should receive the state snapshot transfer automatically. The cluster status can now be checked on all nodes:
mysql> show status like 'wsrep%';
+---------------------------+--------------------------------------+
| Variable_name             | Value                                |
+---------------------------+--------------------------------------+
| wsrep_local_state_uuid    | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state         | 4                                    |
| wsrep_local_state_comment | Synced                               |
...
| wsrep_cluster_size        | 3                                    |
| wsrep_cluster_status      | Primary                              |
| wsrep_connected           | ON                                   |
...
| wsrep_ready               | ON                                   |
+---------------------------+--------------------------------------+
40 rows in set (0.01 sec)
On each server, enable the automatic start of mysql on boot:
chkconfig mysql on
The server log file is located in /var/lib/mysql/HOSTNAME.err.
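To watch a node join the cluster and receive the state snapshot transfer, the log can be followed while the service starts (a convenience sketch, assuming the file name matches the output of hostname):

```
# Follow this node's MySQL error log.
tail -f /var/lib/mysql/$(hostname).err
```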
Although the password change from the first node has already replicated successfully, this example shows that a write on any node replicates to the whole cluster. To check this, a new database will be created on the second node and a table for that database will be created on the third node.
Creating the new database on the second node:
mysql@percona2> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec)
Creating the example table on the third node:
mysql@percona3> USE percona;
Database changed
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
Inserting records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec)
Retrieving all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)
This small example shows that all nodes in the cluster are synchronized and working as intended.
See the Percona How Tos