Setting up a load-balanced MySQL cluster with only two servers

This document is a how-to for installing a MySQL 5 cluster with only two server machines. Our cluster combines two data nodes, two management nodes, and two MySQL nodes:
every machine runs one data node, one management node, and one MySQL server.

To avoid network problems from our internet connection, switches, etc., we networked the two machines with a dual switch connection plus one crossover cable between them.
The MySQL cluster communicates over the crossover cable, so the only possible point of network failure is the crossover link itself.

To communicate with MySQL we will use a shared IP, made available through Linux-HA (Heartbeat); the MySQL clients connect to that IP and get very high uptime.

This is an experimental guide that I worked out on my own; I did not find a similar setup documented anywhere on the web. So before relying on it, test it yourself, and use it only if it works well for you.

Section 1: The Servers

  • Server1: two public IPs for the internet connection, and 192.168.0.1 for MySQL (crossover cable to Server2)
  • Server2: two public IPs for the internet connection, and 192.168.0.2 for MySQL (crossover cable to Server1)

       192.168.0.10 is the shared IP, owned by the active Heartbeat node.

First we configure the crossover Ethernet interface on each machine to be in the private network (192.168.0.1 and 192.168.0.2) and add 192.168.0.10 as a resource in Heartbeat ( /etc/ha.d/haresources ),
then just restart Heartbeat ( /etc/init.d/heartbeat restart on our systems ).
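For reference, the haresources entry for the shared IP might look like the line below (the node name server1, the /24 netmask, and the eth2 interface are assumptions for this particular setup; adjust them to yours):

```
server1 IPaddr::192.168.0.10/24/eth2
```

Heartbeat brings this address up on the active node and moves it to the surviving node on failover.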
Now check with ifconfig -a that the private IPs are listed on each server.

The output will be something like this(server1):
eth2      Link encap:Ethernet  HWaddr 00:01:A5:4A:1C:55
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::202:a5ff:fe4c:1a80/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:591826624 errors:0 dropped:0 overruns:0 frame:0
          TX packets:426209146 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:130775885 (124.7 MiB)  TX bytes:2244545947 (2.0 GiB)
          Base address:0x4000 Memory:f02e0000-f0300000
eth2:0    Link encap:Ethernet  HWaddr 00:01:A5:4A:1C:55
          inet addr:192.168.0.10  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Base address:0x4000 Memory:f02e0000-f0300000

and server2:
eth2      Link encap:Ethernet  HWaddr 00:01:B5:33:3A:A7
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::202:a5ff:fe4c:3167/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:119407048 errors:0 dropped:0 overruns:0 frame:0
          TX packets:155201486 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:766437200 (730.9 MiB)  TX bytes:299855299 (285.9 MiB)
          Base address:0x4040 Memory:f7c60000-f7c80000

Server1 and Server2 are now ready for the MySQL installation. Try pinging Server2 from Server1 and vice versa, then let's proceed.

Section 2: MySQL 5 Installation

The following procedure must be done on both servers.

[root@server1]# cd /usr/local/
[root@server1]# wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-5.0.51a-linux-i686-icc-glibc23.tar.gz/from/http://ftp.ntua.gr/pub/databases/mysql/
[root@server1]# tar -zxvf mysql-5.0.51a-linux-i686-icc-glibc23.tar.gz
[root@server1]# rm mysql-5.0.51a-linux-i686-icc-glibc23.tar.gz
[root@server1]# mv mysql-5.0.51a-linux-i686-icc-glibc23 mysql
[root@server1]# ln -s mysql mysql-5.0.51a-linux-i686-icc-glibc23
[root@server1]# cd mysql
[root@server1]# chown -R mysql .
[root@server1]# chgrp -R mysql .
[root@server1]# scripts/mysql_install_db --user=mysql
[root@server1]# cp support-files/mysql.server /etc/init.d/mysql.server
[root@server1]# ./bin/mysqladmin -u root password '<your_chosen_mysql_password>'
[root@server1]# ./bin/mysql_secure_installation

Config files:
[root@server1]# more /etc/my.cnf
[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring = "host=192.168.0.1,host=192.168.0.2"
ndb_force_send = 0
ndb_use_exact_count = 0
query_cache_size = 128M
thread_cache_size = 1024
table_cache = 1024
key_buffer_size = 32M
sort_buffer_size = 8M
read_buffer_size = 8M
thread_concurrency = 8
max_connect_errors=10000

[ndb_mgm]
connect-string = "host=192.168.0.1,host=192.168.0.2"

[ndbd]
connect-string = "host=192.168.0.1,host=192.168.0.2"

[client]

If the memory options do not suit your hardware, use other values; our Server1/2 are each equipped with 4GB of RAM.
The ndb-connectstring option refers to the IP addresses of the cluster management nodes. We list both server1/2 addresses, in case we lose one of the two.
The [mysqld] section is for the MySQL server node,
the [ndb_mgm] section is for the NDB management client, and
the [ndbd] section is for the data node.
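Each daemon reads only its own group from my.cnf. If you ever want to double-check what a given group contains, a few lines of awk can extract one section; this is a sketch demonstrated on an inline copy of the file (on a real server you would point it at /etc/my.cnf instead):

```shell
#!/bin/sh
# extract_section NAME FILE -- print the body of group [NAME] from an
# ini-style file, skipping blank lines.
extract_section() {
    awk -v s="[$1]" '$0 == s {f=1; next} /^\[/ {f=0} f && NF' "$2"
}

# Demo on an inline copy of the relevant my.cnf groups.
cat > /tmp/my.cnf.demo <<'EOF'
[mysqld]
ndbcluster
ndb-connectstring = "host=192.168.0.1,host=192.168.0.2"

[ndb_mgm]
connect-string = "host=192.168.0.1,host=192.168.0.2"
EOF

extract_section ndb_mgm /tmp/my.cnf.demo
# prints: connect-string = "host=192.168.0.1,host=192.168.0.2"
```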

OK, let's look at the MySQL Cluster configuration:
[root@server1]# mkdir /var/lib/mysql-cluster
[root@server1]# cd /var/lib/mysql-cluster
[root@server1]# more config.ini
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=1500M
IndexMemory=512M
RedoBuffer=256M
TimeBetweenLocalCheckpoints=20 #4MB writes
NoOfFragmentLogFiles=8 #512MB of REDO log — reasonable, and we could most likely do with less
# MaxNoOfAttributes – Number of columns over all tables
# These are somewhat arbitrary, although make sense for the most part
MaxNoOfAttributes=20000
MaxNoOfConcurrentOperations=50000
MaxNoOfLocalOperations=50000
MaxNoOfOrderedIndexes = 5000
MaxNoOfUniqueHashIndexes = 5000
# NoOfReplicas=2
# MaxNoOfAttributes=100000
# datamemory=1500M
# MaxNoOfConcurrentOperations=150000
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
Id=1
Hostname=192.168.0.1

[NDB_MGMD]
Id=2
Hostname=192.168.0.2

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.1
DataDir=/var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.0.2
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

In this configuration file you can see that we gave the management nodes (ndb_mgmd) explicit Ids 1 and 2, just to be sure there is no mix-up between them. We also define the data nodes (ndbd) with the IPs of server1/2,
and left the mysql node sections blank (for no particular reason; if you want, you can specify the hostnames/IPs of the MySQL servers).
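As a sanity check on the sizes above: NDB allocates NoOfFragmentLogFiles sets of four 16MB redo files (16MB is the default file size), and the main in-RAM consumers here are DataMemory, IndexMemory, and RedoBuffer. A quick sketch of the budget, using the values from this config.ini:

```shell
#!/bin/sh
# Budget check for the NDB settings used above (values from config.ini).
DATA_MEMORY=1500       # DataMemory, MB
INDEX_MEMORY=512       # IndexMemory, MB
REDO_BUFFER=256        # RedoBuffer, MB
FRAG_LOG_FILES=8       # NoOfFragmentLogFiles

# Redo log on disk: NoOfFragmentLogFiles * 4 files * 16MB per file
REDO_LOG=$((FRAG_LOG_FILES * 4 * 16))
RAM_USED=$((DATA_MEMORY + INDEX_MEMORY + REDO_BUFFER))

echo "redo log on disk: ${REDO_LOG}M"
echo "ndbd memory use: ${RAM_USED}M of the 4096M installed"
```

This reproduces the "512MB of REDO log" noted in the comment, and shows the ndbd buffers staying comfortably inside the 4GB of RAM.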

You can use scp or rsync to copy configuration files from server1 to server2 to save some time.
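After copying, a quick way to confirm the files really are identical on both machines is to compare checksums. The sketch below simulates this with two local copies; on a real setup you would first fetch server2's files, e.g. scp root@192.168.0.2:/etc/my.cnf /tmp/my.cnf.server2 (paths as used in this guide):

```shell
#!/bin/sh
# Compare checksums of the two servers' config files (simulated locally
# with two identical temp copies).
tmpdir=$(mktemp -d)
printf '[mysqld]\nndbcluster\n' > "$tmpdir/my.cnf.server1"
printf '[mysqld]\nndbcluster\n' > "$tmpdir/my.cnf.server2"

sum1=$(md5sum "$tmpdir/my.cnf.server1" | awk '{print $1}')
sum2=$(md5sum "$tmpdir/my.cnf.server2" | awk '{print $1}')

if [ "$sum1" = "$sum2" ]; then
    echo "my.cnf: match"
else
    echo "my.cnf: DIFFER"
fi
rm -rf "$tmpdir"
# prints: my.cnf: match
```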

So we have finished with the config files. Be sure to have the same config files on server1 and server2.
I have also symlinked the mysql client and ndb_mgm (the NDB management console) from the mysql/bin directory into /usr/bin:
[root@server1]# cd /usr/bin
[root@server1]# ln -s /usr/local/mysql/bin/mysql .
[root@server1]# ln -s /usr/local/mysql/bin/ndb_mgm .

Section 3: Running for the first time

Now we will run the MANAGEMENT node, DATA node, and MYSQL node for the first time. The management node must be started first, because the data node fetches its configuration from it.
Running the management node:
[root@server1]# /usr/local/mysql/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Running the data node for the first time:
[root@server1]# /usr/local/mysql/bin/ndbd --initial
Running the MySQL server:
[root@server1]# /etc/init.d/mysql.server start

Be sure to run all of these steps on Server2 too.

Connect to the management console to check that your MySQL cluster is up and running:
server1:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.0.1  (Version: 5.0.37, Nodegroup: 0, Master)
id=4    @192.168.0.2  (Version: 5.0.37, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @192.168.0.1  (Version: 5.0.37)
id=2    @192.168.0.2  (Version: 5.0.37)

[mysqld(API)]   2 node(s)
id=5    @192.168.0.1  (Version: 5.0.37)
id=6    @192.168.0.2  (Version: 5.0.37)

Everything looks good. Let's try some tests. First we create a database and use ENGINE=NDBCLUSTER so that the table is clustered.

server1:~# mysql -u root -p
mysql> CREATE DATABASE testt;
mysql> USE testt;
mysql> CREATE TABLE test_table (num INT) ENGINE=NDBCLUSTER;
mysql> INSERT INTO test_table VALUES (1);
mysql> SELECT * FROM test_table;
+------+
| num  |
+------+
|    1 |
+------+
1 row in set (0.03 sec)

mysql> quit;

Now we connect to server2, create the same database there (in MySQL 5.0, each SQL node needs the database created locally, while the NDB tables themselves are shared), and see what happens:
server2:~# mysql -u root -p
mysql> CREATE DATABASE testt;
mysql> USE testt;
mysql> SELECT * FROM test_table;

+------+
| num  |
+------+
|    1 |
+------+
1 row in set (0.04 sec)

If you get the same result on server2, your MySQL cluster is installed correctly.
Now, what about high availability? Linux Heartbeat is the answer, together with our custom scripts.

Section 4: Configure Linux Heartbeat

First of all, we will need init scripts for ndbd (the data node) and ndb_mgmd (the management node).
Be sure to create these scripts on both servers; only /etc/ha.d/resource.d/ndbd differs between them.

[root@server1]#vi /etc/init.d/ndbd
#!/bin/sh
#
# ndbd: MySQL Cluster Storage node
#
# chkconfig: 2345 63 25
# description: MySQL Cluster Storage node
#
# Source function library.

. /etc/rc.d/init.d/functions

start()
{
echo -n $"Starting ndbd: "
cd /var/lib/mysql-cluster/
daemon /usr/local/mysql/bin/ndbd --ndb-optimized-node-selection --ndb-shm
touch /var/lock/subsys/ndbd
echo
}

stop()
{
echo -n $"Shutting down ndbd: "
killproc ndbd
rm -f /var/lock/subsys/ndbd
echo
}

[ -f /usr/local/mysql/bin/ndbd ] || exit 0

# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
restart|reload)
stop
start
;;
condrestart)
[ -e /var/lock/subsys/ndbd ] && (stop; start)
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|condrestart}"
exit 1
esac
exit 0

Save & Exit.
chmod u+x /etc/init.d/ndbd
chkconfig --add ndbd
chkconfig --level 345 ndbd on

Now let's create the ndb_mgmd start script for the MySQL Cluster management node:
[root@server1]# vi /etc/init.d/ndb_mgmd
#!/bin/sh
#
# ndb_mgmd: MySQL Cluster management node
#
# chkconfig: 2345 62 20
# description: MySQL Cluster Management node
#

# Source function library.
. /etc/rc.d/init.d/functions

start()
{
echo -n $"Starting ndb_mgmd: "
cd /var/lib/mysql-cluster/
daemon /usr/local/mysql/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
touch /var/lock/subsys/ndb_mgmd
echo
}

stop()
{
echo -n $"Shutting down ndb_mgmd: "
killproc ndb_mgmd
rm -f /var/lock/subsys/ndb_mgmd
echo
}

[ -f /usr/local/mysql/bin/ndb_mgmd ] || exit 0

# See how we were called.

case "$1" in
start)
start

;;

stop)
stop

;;

restart|reload)
stop

start
;;

condrestart)
[ -e /var/lock/subsys/ndb_mgmd ] && (stop; start)
;;
*)

echo $"Usage: $0 {start|stop|restart|reload|condrestart}"
exit 1
esac
exit 0

Save & Exit.
chmod u+x /etc/init.d/ndb_mgmd
chkconfig --add ndb_mgmd
chkconfig --level 345 ndb_mgmd on

Now everything is in place for the MySQL cluster to start at system boot.

Edit the file /etc/ha.d/haresources and add ndbd and mysql.server to your configuration. I assume that you already have the IP address 192.168.0.10 as a resource in haresources. This IP address will be used as the cluster IP, so your local programs can connect to the MySQL cluster through it.
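The resulting haresources line might then look like this (the node name server1 is an assumption for this setup; resources after the node name are started left to right on the active node and stopped in reverse order on failover):

```
server1 192.168.0.10 ndbd mysql.server
```

Here 192.168.0.10 is the shared cluster IP, ndbd is the resource script from /etc/ha.d/resource.d, and mysql.server is the init script installed in Section 2.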

Save and exit. Now let's add the resources:

cd /etc/ha.d/resource.d

vi ndbd (this is for server1)

#!/bin/sh
#
# ndbd: MySQL Cluster data node
#
#

# Source function library.
start()
{
echo -n $"Starting ndbd: "
/etc/init.d/ndb_mgmd restart
sleep 10
/etc/init.d/ndbd restart
/etc/init.d/mysql.server restart
echo
}

stop()
{

echo -n $"Shutting down ndbd: "
#/etc/init.d/ndbd restart
#/etc/ha.d/resource.d/ndbd.sleep &

}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
*)
echo $"Usage: $0 {start|stop}"
exit 1
esac
exit 0

Save and exit
chmod u+x ndbd

vi ndbd.sleep
#!/bin/bash
/bin/sleep 10
/etc/init.d/ndbd stop
/bin/sleep 10
/etc/init.d/ndbd start
/etc/init.d/mysql.server restart

Save and exit
chmod u+x ndbd.sleep

vi ndbd.restart
#!/bin/sh
#
# ndbd: MySQL Cluster data node
#
#
# Source function library.

start()
{
echo -n $"Starting ndbd: "
/etc/init.d/ndbd restart
echo
}

stop()
{
echo -n $"Shutting down ndbd: "
/etc/ha.d/resource.d/ndbd.sleep &
}

# See how we were called.

case "$1" in
start)
start
;;
stop)
stop
;;
*)
echo $"Usage: $0 {start|stop}"
exit 1
esac
exit 0

Save and exit
chmod u+x ndbd.restart

Now on server2 create all of these scripts as well; only /etc/ha.d/resource.d/ndbd differs. On server2 it goes as follows:

vi /etc/ha.d/resource.d/ndbd

#!/bin/sh
#
# ndbd: MySQL Cluster data node
#
#

# Source function library.

start()
{
echo -n $"Starting ndbd: "
/etc/init.d/ndb_mgmd restart
sleep 10
/etc/init.d/ndbd restart
/etc/init.d/mysql.server restart
echo
}
stop()
{
echo -n $"Shutting down ndbd: "
/etc/ha.d/resource.d/ndbd.sleep &
}
# See how we were called.

case "$1" in
start)
start
;;
stop)
stop
;;
*)
echo $"Usage: $0 {start|stop}"
exit 1
esac
exit 0

Section 5: Conclusion

This set-up is completely experimental, and I recommend doing a lot of testing to make sure it is the right one for you.
It worked well for us, and we have already deployed it in some real server environments.

Good luck

Christos