Docker Compose Setup for InnoDB Cluster

In the following we show how InnoDB Cluster can be deployed in a container context. The official documentation (Introducing InnoDB Cluster) describes it as:

MySQL InnoDB cluster provides a complete high availability solution for MySQL. MySQL Shell includes AdminAPI which enables you to easily configure and administer a group of at least three MySQL server instances to function as an InnoDB cluster. Each MySQL server instance runs MySQL Group Replication, which provides the mechanism to replicate data within InnoDB clusters, with built-in failover.

In this blog post we show how to set up an InnoDB cluster using the official MySQL Docker images and run them with docker-compose. We want to show a full example, including how to connect to the cluster through MySQL Router from a simple example application. We end up with the following components:

  • three mysql-server containers
  • one temporary mysql-shell container (to set up the InnoDB cluster)
  • one mysql-router container (to access the cluster)
  • one simple db application using the router to access the cluster

In order to run the example you need Docker as well as docker-compose. The full example is available here (and works out of the box on Linux):

Docker compose files on Github

A short overview of containers and dependencies is given in the following:

The files in this example are organised around a docker-compose file:

|-- docker-compose.yml
|-- dbwebapp.env
|-- mysql-router.env
|-- mysql-server.env
|-- mysql-shell.env
`-- scripts
    |-- db.sql
    `-- setupCluster.js

The docker-compose.yml file describes the individual containers, the .env files contain configuration for the individual parts, and the scripts folder contains the JavaScript and SQL scripts used to set up the cluster and databases.
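As a rough sketch (service names and images are the ones used in this post; the repo's file carries the full configuration), the compose file wires the services together like this:

```yaml
# Hedged sketch of docker-compose.yml; keys abbreviated, see the repo
# for the full version.
version: "2"
services:
  mysql-server-1:
    image: mysql/mysql-server:5.7
  mysql-server-2:
    image: mysql/mysql-server:5.7
  mysql-server-3:
    image: mysql/mysql-server:5.7
  mysql-shell:                      # temporary provisioning container
    image: neumayer/mysql-shell-batch
    depends_on: [mysql-server-1, mysql-server-2, mysql-server-3]
  mysql-router:                     # entry point for clients
    image: mysql/mysql-router
    depends_on: [mysql-server-1, mysql-server-2, mysql-server-3, mysql-shell]
  dbwebapp:                         # example application
    image: neumayer/dbwebapp
    depends_on: [mysql-router]
```

The depends_on ordering only controls startup order; waiting for the servers to actually accept connections is handled by the provisioning image itself.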

MySQL Server Images as a Basis

In our docker-compose.yml file we first start three containers based on the official mysql-server image (mysql-server-1, mysql-server-2, mysql-server-3). All three use the same startup command to satisfy the InnoDB cluster requirements; the only difference is the unique --server_id parameter:

  mysql-server-1:
    image: mysql/mysql-server:5.7
    env_file:
      - mysql-server/mysql.env
    ports:
      - "3301:3306"
    command: ["mysqld", …]

This is based on Production Deployment of InnoDB Cluster, where more details can be found. In addition, we pass $MYSQL_ROOT_PASSWORD and $MYSQL_ROOT_HOST, which we will use later on to provision the cluster. NOTE: this is not recommended in a production setting; sound security practice would be to create less privileged users for this, but we omit that here for the sake of simplicity and clarity.
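The command list is abbreviated above; the Group Replication prerequisites from the MySQL documentation translate to roughly the following mysqld flags (a sketch — consult the repo for the exact list used there):

```yaml
    command: ["mysqld",
      "--server_id=1",                              # unique per instance
      "--binlog_checksum=NONE",
      "--gtid_mode=ON",
      "--enforce_gtid_consistency=ON",
      "--log_bin",
      "--log_slave_updates=ON",
      "--master_info_repository=TABLE",
      "--relay_log_info_repository=TABLE",
      "--transaction_write_set_extraction=XXHASH64"]
```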

MySQL Shell to Provision the Cluster

Then we start a fourth container, neumayer/mysql-shell-batch, which sets up the cluster via MySQL Shell. This is not an official MySQL image; it simply waits until the given MySQL server is up and then runs the given scripts against it. We use it to keep our example self-contained.

The image is available at MySQL Shell batch image.

  mysql-shell:
    image: neumayer/mysql-shell-batch
    env_file:
      - mysql-server.env
    volumes:
      - ./mysql-shell/scripts/:/scripts/
    depends_on:
      - mysql-server-1
      - mysql-server-2
      - mysql-server-3

Internally it runs the following JavaScript (via the mounted scripts directory):

var dbPass = "mysql";
var clusterName = "devCluster";

try {
  print('Setting up InnoDB cluster...\n');
  shell.connect('root@mysql-server-1:3306', dbPass);
  var cluster = dba.createCluster(clusterName);
  print('Adding instances to the cluster...\n');
  cluster.addInstance({user: "root", host: "mysql-server-2", password: dbPass});
  cluster.addInstance({user: "root", host: "mysql-server-3", password: dbPass});
  print('Instances successfully added to the cluster.\n');
  print('InnoDB cluster deployed successfully.\n');
} catch(e) {
  print('\nThe InnoDB cluster could not be created.\n\nError: ' + e.message + '\n');
}

And the following SQL to set up a database and user for the example app:

CREATE DATABASE IF NOT EXISTS dbwebappdb;
CREATE USER 'dbwebapp'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON dbwebappdb.* TO 'dbwebapp'@'%';

If all goes according to plan, the cluster is ready for use, the user for our example app is created, and the temporary container exits.

MySQL Router

Further, we set up a mysql-router container, bootstrapping it against one of the existing mysql-server containers (we use the official MySQL Router image from Docker Hub):

  mysql-router:
    image: mysql/mysql-router
    env_file:
      - mysql-shell.env
    ports:
      - "6446:6446"
    depends_on:
      - mysql-server-1
      - mysql-server-2
      - mysql-server-3
      - mysql-shell

Internally it makes the following call:

mysqlrouter --bootstrap $MYSQL_USER@$MYSQL_HOST:$MYSQL_PORT --user=mysqlrouter <<< "$MYSQL_PASSWORD"

This call contacts one of the mysql-server instances and acquires information about the other servers from it. A config file is written and then used for the normal startup of the router.
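For illustration, the generated configuration file (mysqlrouter.conf) roughly takes the following shape; the exact section names, generated router user, and TTL will differ, so treat this purely as an illustrative sketch:

```ini
[metadata_cache:devCluster]
router_id=1
bootstrap_server_addresses=mysql://mysql-server-1:3306,mysql://mysql-server-2:3306,mysql://mysql-server-3:3306
user=mysql_router1_example
metadata_cluster=devCluster
ttl=300

[routing:devCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://devCluster/default?role=PRIMARY
mode=read-write
protocol=classic
```

The routing section is what makes port 6446 always point at the current primary: the router resolves the destination list from the metadata cache rather than from a static host list.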

Example App

Finally, we start an application container that uses the mysql-router container as its database endpoint. The application is described in more detail in Docker Compose and App Deployment with MySQL.

  dbwebapp:
    image: neumayer/dbwebapp
    env_file:
      - dbwebapp.env
    ports:
      - "8057:8080"
    depends_on:
      - mysql-router

The dbwebapp.env file contains the parameters needed to connect to the router container on the right host and port (DBHOST and DBPORT).
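For illustration, such an env file might look like the following. DBHOST and DBPORT are the parameters named above; the credential variables are hypothetical and depend on how the app reads its configuration:

```
# DBHOST/DBPORT point the app at the router container
DBHOST=mysql-router
DBPORT=6446
# hypothetical credential variables, matching the user created in db.sql;
# never commit real secrets to version control
DBUSER=dbwebapp
DBPASSWORD=password
```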

Putting it Together

To run this example, first check out the example repo. Running docker-compose up should pull all the needed images and spin up your test cluster. If successful, the MySQL Shell container displays the following output:

mysql-shell_1     | Adding instances to the cluster...
mysql-shell_1     | Instances successfully added to the cluster.
mysql-shell_1     | InnoDB cluster deployed successfully.

The MySQL Router will report that it successfully contacted the cluster and that it is ready to accept incoming connections:

mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Connected with metadata server running on mysql-server-1:3306
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Connected to replicaset 'default' through mysql-server-1:3306
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Changes detected in cluster 'devCluster' after metadata refresh
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Metadata for cluster 'devCluster' has 1 replicasets:
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] 'default' (3 members, single-master)
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-1:3306 / 33060 - role=HA mode=RW
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-2:3306 / 33060 - role=HA mode=RO
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-3:3306 / 33060 - role=HA mode=RO

And our example app:

dbwebapp_1        | 2018/03/05 12:34:19 Pinging db mysql-router.
dbwebapp_1        | 2018/03/05 12:34:19 Connected to db.
dbwebapp_1        | 2018/03/05 12:34:19 Starting dbwebapp server.


We showed how to provision an InnoDB cluster locally with docker-compose, using the official MySQL Server and MySQL Router Docker images. We also showed how to configure the cluster and access it from an example app. Real-world deployment requirements may vary, but this approach can be adapted to any dockerised environment.

Further, we want to be clear that these examples are not suitable for a production setting without adjustments. We did not focus on the security of the MySQL instances themselves, on the distribution of secrets to the temporary provisioning container or our application, or on general network-level security. Most of these security questions should be addressed by the design of your cloud environment or production setting. Also note that stopping docker-compose effectively kills your test cluster: a cluster cannot survive a full outage, which this would amount to. To take down the cluster and start from scratch, run docker-compose down.
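If you stopped the containers but kept their data, the cluster can often be revived instead of recreated. This is a hedged sketch, not part of the example repo: dba.rebootClusterFromCompleteOutage() is the AdminAPI call for exactly this situation, run from mysqlsh against one of the restarted servers:

```js
// Hedged sketch: reviving the cluster after a full outage.
// Run inside mysqlsh, not as standalone JavaScript.
shell.connect('root@mysql-server-1:3306', 'mysql');
// Rebuilds the group from the metadata stored on this instance:
var cluster = dba.rebootClusterFromCompleteOutage('devCluster');
print(cluster.status());
```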


References

  • Introducing InnoDB Cluster
  • Example on Github
  • Official MySQL Router image on Docker Hub
  • Docker Compose and App Deployment with MySQL
  • MySQL Shell batch image

Robert Neumayer

About Robert Neumayer

Software engineer with a passion for infrastructure and automation. I have hands-on experience in all steps of the development process, particularly focusing on streamlining deployment processes and supporting agile teams. I am working on release engineering, devops integration, and deployment of MySQL products across the MySQL engineering organisation.

6 thoughts on “Docker Compose Setup for InnoDB Cluster”

  1. Very beautiful article Robert, thank you!

    Do you think this setup would work in a configuration with 30 instances and 90 nodes in a production environment?

    Each instance has a cluster with 3 nodes.

    Thank you so much!

    1. For production settings I’d recommend taking a look at the MySQL operator for Kubernetes:

      I’m no expert on high-availability deployments of InnoDB Cluster per se, but multiple nodes per instance may not be the best choice for a production deployment.

  2. Hello Robert,
    Very nice blog post, thank you!
    I’m a newbie to MySQL InnoDB Cluster and I’m discovering it these days, especially with Docker. I found your example and it will be very useful for me to test it and see things in real time 😉
    I downloaded your code from Github and executed the docker-compose up command, and it works well.
    Now, when the creation of the cluster is done, it displays this information in my Terminal: “mysql-router_1 | 2018-07-25 09:58:49 metadata_cache INFO [7ff4ad821700] Connected to replicaset ‘default’ through mysql-server-1:3306”.
    My question is: how can I connect with MySQL Shell and access the servers so that I can manage the database?
    Thank you a lot in advance.

    Best regards,

    1. Hi!

      If you have MySQL Shell installed on your host machine you can talk to the mysql-servers via the mapped ports (in the docker-compose file the MySQL port 3306 is mapped to 3301, 3302, and 3303 on the host machine). So mysqlsh should be able to connect to localhost:3301, localhost:3302, and localhost:3303, for example:
      “mysqlsh root@localhost:3301”

      If you want to use one of the existing containers (i.e. the mysql-server containers) you can find their ids (with “docker ps”), get a shell in them, and then connect via the docker-compose network names:
      “docker exec -it <container-id> /bin/bash”
      “mysqlsh mysql-server-1” or “mysqlsh mysql-server-2” or “mysqlsh mysql-server-3”

      You can also start a new container (outside of the docker-compose setup) and connect to the existing containers, in that case you need to use the docker0 network. For more info see:


  3. Hi Robert,

    thank you for the very nice article. I downloaded it, but when I run docker-compose ps or docker-compose up, I get the following error:

    ERROR: In file ‘./docker-compose.yml’ service ‘version’ doesn’t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.

    Is the file updated, or am I doing something wrong?

    Best regards,
