Docker Images for MySQL Cluster

We’re constantly working to improve the packaging and distribution of MySQL products. We have official Docker images for MySQL Server, and we use Docker images to provide easy-to-use previews of upcoming and experimental setups and features in our products. Today we’re dockerizing another major product by releasing preview Docker images for MySQL Cluster. In this blog post, we’ll see just how easy it is to have your own dockerized cluster up and running in less than five minutes.

First, a brief but important note on the status of these images: while the MySQL Cluster version in these images is a fully tested and supported GA release, the Docker image setup itself is still experimental and should not yet be used in production.

The Docker image for MySQL Cluster comes with a default configuration that gives you a cluster consisting of one management node, two data nodes, and one MySQL server node. We’ll use this default configuration in our first example and then show how you can easily override these defaults to customize your setup.

Now, let us start the stopwatch and get going.

Running with the default config

This setup will consist of one management node at IP 192.168.0.2, two data nodes at 192.168.0.3 and 192.168.0.4 respectively, and a MySQL server node at 192.168.0.10.

Since we will run our containers on a separate network, we go ahead and create that as follows:

docker network create cluster --subnet=192.168.0.0/16

We’re now ready to launch our containers. As you will see from the commands below, the first argument after the image name specifies the process to be started in the container (ndb_mgmd, ndbd or mysqld) and thus the type or role of the container. Any subsequent arguments will be forwarded directly to the respective process.
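As a sketch of this argument forwarding (the extra node name and IP here are purely illustrative, and the default config already supplies the connect string), you could launch an additional data node with an explicit --ndb-connectstring:

```shell
# Any arguments after the role name are forwarded to that process, so
# standard ndbd options such as --ndb-connectstring work here.
# The guard line skips the example when the "cluster" network doesn't exist.
docker network inspect cluster >/dev/null 2>&1 || exit 0
docker run -d --net=cluster --name=ndb-extra --ip=192.168.0.5 \
  mysql/mysql-cluster ndbd --ndb-connectstring=192.168.0.2
```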

Launching our management node:

docker run -d --net=cluster --name=management1 --ip=192.168.0.2 mysql/mysql-cluster ndb_mgmd

Launching our first data node:

docker run -d --net=cluster --name=ndb1 --ip=192.168.0.3 mysql/mysql-cluster ndbd

Launching our second data node:

docker run -d --net=cluster --name=ndb2 --ip=192.168.0.4 mysql/mysql-cluster ndbd

Launching our MySQL node:

docker run -d --net=cluster --name=mysql1 --ip=192.168.0.10 mysql/mysql-cluster mysqld

By default, the MySQL node generates a one-time password for the MySQL admin user root@localhost, which can be retrieved with

docker logs mysql1 2>&1 | grep password

For security reasons, you must reset this password upon the first connection to the MySQL server. Proceed as follows:

docker exec -it mysql1 mysql -uroot -p

On the resulting mysql client command line, input

ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

… where MyNewPass is the new root password.

You now have a running MySQL Cluster setup. To verify that and in general to monitor and administer your cluster, you can spin up an interactive management client by running

docker run -it --net=cluster mysql/mysql-cluster

Type SHOW and you should see output like this:

ndb_mgm> SHOW
    Connected to Management Server at: 192.168.0.2:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2    @192.168.0.3  (mysql-5.7.18 ndb-7.6.2, Nodegroup: 0, *)
    id=3    @192.168.0.4  (mysql-5.7.18 ndb-7.6.2, Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @192.168.0.2  (mysql-5.7.18 ndb-7.6.2)

    [mysqld(API)]   1 node(s)
    id=4    @192.168.0.10  (mysql-5.7.18 ndb-7.6.2)
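Beyond SHOW, a quick sanity check is to create a table in the NDB storage engine through the SQL node, which confirms that SQL traffic actually reaches the data nodes. A minimal sketch, assuming the root password was reset to MyNewPass as above (the database name clusterdb is just an example):

```shell
# Create an NDB-backed table via the SQL node and list it.
# The guard line skips the check when the mysql1 container isn't running.
docker inspect mysql1 >/dev/null 2>&1 || exit 0
docker exec -i mysql1 mysql -uroot -pMyNewPass <<'SQL'
CREATE DATABASE IF NOT EXISTS clusterdb;
CREATE TABLE IF NOT EXISTS clusterdb.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'clusterdb';
SQL
```

The final SELECT should list t1 with the ndbcluster engine.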

Using a custom config

The walkthrough above utilizes a default set of minimal config files embedded in the image, which gets you going with a simple and lean cluster for sandboxing and prototyping. We’ll now go on to show how you can run with a custom MySQL Cluster config.

The approach we’ll use is to create the config files on the host computer, then proceed to mount them into the Docker containers afterwards. In effect we’re going to be replacing the default config files that are present in the image with custom ones mounted from outside the container. For the sake of a simple example, we will describe the setup of a cluster with data and index memory increased from the rather anaemic default config used above.

Remember to clean out any running containers and networks from the previous example before you proceed.
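A teardown along these lines does the trick (container and network names match the first example):

```shell
# Remove the walkthrough's containers and network; errors for
# already-removed objects are ignored so the script is re-runnable.
command -v docker >/dev/null 2>&1 || exit 0
for c in mysql1 ndb2 ndb1 management1; do
  docker rm -f "$c" 2>/dev/null || true
done
docker network rm cluster 2>/dev/null || true
```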

On the Docker host machine, create <your-path>/mysql-cluster.cnf and paste the following into it:

[ndbd default]
NoOfReplicas=2
DataMemory=1536M
IndexMemory=192M

[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql

[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql

[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql

[mysqld]
NodeId=4
hostname=192.168.0.10

We also need a my.cnf file for the MySQL server in this cluster. Create <your-path>/my.cnf and paste the following into it.

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.2

[mysql_cluster]
ndb-connectstring=192.168.0.2

Create the Docker private network we’ll need:

docker network create cluster --subnet=192.168.0.0/16

We’re now ready to launch our containers.

Launching our management node:

docker run -d --net=cluster --name=management1 --ip=192.168.0.2 -v <your-path>/mysql-cluster.cnf:/etc/mysql-cluster.cnf mysql/mysql-cluster ndb_mgmd

Launching our first data node:

docker run -d --net=cluster --name=ndb1 --ip=192.168.0.3 -v <your-path>/my.cnf:/etc/my.cnf mysql/mysql-cluster ndbd

Launching our second data node:

docker run -d --net=cluster --name=ndb2 --ip=192.168.0.4 -v <your-path>/my.cnf:/etc/my.cnf mysql/mysql-cluster ndbd

Launching our MySQL node:

docker run -d --net=cluster --name=mysql1 --ip=192.168.0.10 -v <your-path>/my.cnf:/etc/my.cnf mysql/mysql-cluster mysqld

Now you can go on to retrieve and reset the MySQL admin user password and check that you have an actual running cluster by using the procedure described in the first example in this post.
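One way to confirm that the raised DataMemory and IndexMemory settings took effect is the management client's MEMORYUSAGE report, shown here non-interactively via docker exec:

```shell
# Ask every data node to report its data and index memory usage.
# The guard line skips the check when the management1 container isn't running.
docker inspect management1 >/dev/null 2>&1 || exit 0
docker exec management1 ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
```

Each data node should report usage against the configured 1536M of data memory and 192M of index memory.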

Conclusion

The Docker image for MySQL Cluster aims to provide an easy and frictionless way to get up and running for sandbox development and rapid prototyping. Please do remember that this is a preview, and shouldn’t be used in production at this stage. We very much welcome feedback on this image. Please give us your comments below and we’ll factor your input into our ongoing work to take these images forward to production readiness.

About Trond Humborstad

Trond Humborstad is a contractor at Oracle and a CS student at NTNU in Trondheim, Norway. He has a passion for programming, distributed systems and virtualisation. Hobbies include music, sound reproduction and recording techniques.
