Oracle Linux/MySQL

Simple Cluster configuration

The cluster will consist of five components:

Management Node
holds the configuration for MySQL Cluster in a file called config.ini. It also writes a cluster log, and takes part in arbitration to prevent split-brain or network partitioning.
(2) Data Nodes
store the data and the tables. The data nodes take care of managing the transactions, recovery, etc.
(2) SQL Nodes
the most common way to read/write data to the Data Nodes is by using the MySQL Server (aka SQL Node).
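
For reference, the hostnames used throughout this guide map to these roles as follows:

Management Node   rncv-sqlmgmt01.virtual.local
Data Node 1       rncv-sqldata01.virtual.local
Data Node 2       rncv-sqldata02.virtual.local
SQL Node 1        rncv-sql01.virtual.local
SQL Node 2        rncv-sql02.virtual.local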

Management Node Configuration

dependencies

There are some packages you may want to install, though they are not required. More info can be found in Oracle_Linux/Quick_Reference#Common_packages_to_install_on_fresh_.28minimal.29_install.

~$ sudo yum install -y wget telnet vim net-tools bind-utils

While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository (https://dev.mysql.com/downloads/repo/yum/).

~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm

Verify you can see the new cluster packages and search them.

~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql

Enable the cluster repo by setting enabled=1

~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

Do a quick check-update to refresh the package list cache

~$ sudo yum check-update

We also need to open a port and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.

~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=1186/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
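
The last command is just a sanity check; its output should contain lines something like this (abridged):

internal (active)
  interfaces: eth1
  ports: 1186/tcp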

package installation

There are a couple of dependencies we need to install.

~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker

Install the packages

 ~$ sudo yum install -y mysql-cluster-community-management-server mysql-cluster-community-client mysql-cluster-community-libs

configure management node cluster

Now we can set up a basic configuration file for the cluster, loosely following MySQL's recommended starting configuration (https://dev.mysql.com/doc/mysql-cluster-excerpt/8.0/en/mysql-cluster-config-starting.html).

~$ sudo mkdir -p /var/lib/mysql-cluster
~$ sudo vim /var/lib/mysql-cluster/config.ini
...
# TCP PARAMETERS
[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

# MANAGEMENT NODE PARAMETERS
[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
#Management Node db1
HostName=rncv-sqlmgmt01.virtual.local

# DATA NODE PARAMETERS
[ndbd default]
NoOfReplicas=2      # Number of replicas
DataMemory=256M     # Memory allocated for data storage
IndexMemory=128M    # Memory allocated for index storage
#LockPagesInMainMemory=1 # Locks data node processes into memory  ##causes crash
#Directory for Data Node
DataDir=/var/lib/mysql-cluster

#NoOfFragmentLogFiles=300 ##causes crash
MaxNoOfConcurrentOperations=100000
#SchedulerSpinTimer=400  ##causes crash
#SchedulerExecutionTimer=100  ##causes crash
#RealTimeScheduler=1  ##causes crash
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
RedoBuffer=32M
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

[ndbd]
#Data Node db2
HostName=rncv-sqldata01.virtual.local
#LockExecuteThreadToCPU=1  ##causes crash
#LockMaintThreadsToCPU=0  ##causes crash

[ndbd]
#Data Node db3
HostName=rncv-sqldata02.virtual.local
#LockExecuteThreadToCPU=1  ##causes crash
#LockMaintThreadsToCPU=0  #causes crash

[mysqld]
#SQL Node db4
HostName=rncv-sql01.virtual.local

[mysqld]
#SQL Node db5
HostName=rncv-sql02.virtual.local

[mysqld]
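#Free slot for an additional SQL/API node
#(no HostName set, so any host may connect; the [api] slot below works the same way)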

[api]

Now we can apply the config:

~$ sudo ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
MySQL Cluster Management Server mysql-8.0.19 ndb-8.0.19
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- Sucessfully created config directory
2020-02-24 17:54:57 [MgmtSrvr] WARNING  -- at line 12: [DB] IndexMemory is deprecated, will use Number bytes on each ndbd(DB) node allocated for storing indexes instead

And finally we can view the resulting cluster configuration:

~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2 (not connected, accepting connect from rncv-sqldata01.virtual.local)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]	2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)

ndb_mgm> quit

autostart

A quick systemd service to manage ndb_mgmd. First, shut down the cluster and kill the running management server:

~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHUTDOWN
Connected to Management Server at: localhost:1186
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
3 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
Node 3: Node shutdown completed.
ndb_mgm> quit
~$ sudo pkill -f ndb_mgmd

Create the service

~$ sudo vim /etc/systemd/system/ndb_mgmd.service
...
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload

Start/Enable

~$ sudo systemctl start ndb_mgmd.service
~$ sudo systemctl status ndb_mgmd.service
ndb_mgmd.service - MySQL NDB Cluster Management Server
   Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-09 17:59:55 EDT; 6s ago
  Process: 91567 ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload (code=exited, status=0/SUCCESS)
 Main PID: 91568 (ndb_mgmd)
   CGroup: /system.slice/ndb_mgmd.service
           └─91568 /usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-clu...

Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Starting MySQ...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local ndb_mgmd[91567]: MySQL Cl...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Started MySQL...
Hint: Some lines were ellipsized, use -l to show in full.

~$ sudo systemctl enable ndb_mgmd.service
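
To double-check that the unit will start at boot:

~$ systemctl is-enabled ndb_mgmd.service
enabled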

Data Node Configuration

The data node configuration is very similar to the Management node configuration.

dependencies

Again, same as with the management node configuration, there are some packages you may want to install, though they are not required. More info can be found in Oracle_Linux/Quick_Reference#Common_packages_to_install_on_fresh_.28minimal.29_install.

~$ sudo yum install -y wget telnet vim net-tools bind-utils

While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository (https://dev.mysql.com/downloads/repo/yum/).

~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm

Verify you can see the new cluster packages and search them.

~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql

Enable the cluster repo by setting enabled=1

~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

Do a quick check-update to refresh the package list cache

~$ sudo yum check-update

We also need to open a port and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.

~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=32768-60999/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all

package installation

There are a couple of dependencies we need to install.

~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper

Install the packages

~$ sudo yum install -y mysql-cluster-community-data-node
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs

configure data node

We now need to tell the data node where to find the management node.

~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local     # Address of the management node
 
[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local     # Address of the management node

Create a folder for the configuration files and data.

~$ sudo mkdir -p /var/lib/mysql-cluster

I had an issue starting the ndbd client because of the permissions set on the folder above. Since ndbd runs as whichever user launches it, make sure that user has read, write, and execute permissions on the folder.

I also had to configure SELinux, as it was blocking ndbd from starting. It is best to follow the steps in Oracle_Linux/Quick_Reference#Creating_a_SELinux_Policy, but ultimately you'll just be creating a policy allowing ndbd to manipulate files. It should look something like this:

$ cat ndbd1-01.te

module ndbd-01 1.0;

require {
	type var_lib_t;
	type mysqld_t;
	type ephemeral_port_t;
	class file { open append read write lock getattr };
	class tcp_socket name_connect;
}

#============= mysqld_t ==============

#!!!! WARNING: 'var_lib_t' is a base type.
allow mysqld_t var_lib_t:file { open append read write lock getattr };
allow mysqld_t ephemeral_port_t:tcp_socket name_connect;
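
If you don't want to walk through the full linked procedure, compiling and loading a type enforcement module generally looks like this sketch (filenames match the .te above; checkmodule ships in the checkpolicy package):

~$ checkmodule -M -m -o ndbd-01.mod ndbd1-01.te
~$ semodule_package -o ndbd-01.pp -m ndbd-01.mod
~$ sudo semodule -i ndbd-01.pp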

start data node

manually

Now we can start the data node and confirm it's connected.

~$ ndbd
2020-03-04 13:35:04 [ndbd] INFO     -- Angel connected to 'rncv-sqlmgmt01.virtual.local:1186'
2020-03-04 13:35:04 [ndbd] INFO     -- Angel allocated nodeid: 2

autostart

If you prefer to auto-start ndbd and manage it through systemctl, you can do this. First, kill the running process:

~$ sudo pkill -f ndbd

Create the service

~$ sudo vim /etc/systemd/system/ndbd.service
...
[Unit]
Description=MySQL NDB Data Node Daemon
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/bin/sh -c "/usr/sbin/ndbd"
ExecReload=/bin/kill -HUP $MAINPID
KillMode=control-group
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload

Start/Enable

~$ sudo systemctl start ndbd
~$ sudo systemctl enable ndbd

confirm connectivity

From the management node you can also confirm this:

#management_node~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@10.10.0.41  (mysql-8.0.19 ndb-8.0.19, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]	2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)


If you have a 2nd data node, repeat this process

SQL Node Configuration

dependencies

Again, same as with the management node configuration, there are some packages you may want to install, though they are not required. More info can be found in Oracle_Linux/Quick_Reference#Common_packages_to_install_on_fresh_.28minimal.29_install.

~$ sudo yum install -y wget telnet vim net-tools bind-utils

While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository (https://dev.mysql.com/downloads/repo/yum/).

~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm

Verify you can see the new cluster packages and search them.

~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql

Enable the cluster repo by setting enabled=1

~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

Do a quick check-update to refresh the package list cache

~$ sudo yum check-update

We also need to open a port and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.

~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --permanent --zone=internal --add-service=mysql
~$ sudo firewall-cmd --permanent --zone=internal --add-port=33060/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all

package installation

There are a couple of dependencies we need to install.

~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper

Install the packages

~$ sudo yum install -y mysql-cluster-community-server
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs mysql-cluster-community-common

configure sql node

We now need to tell the SQL node where to find the management node. When I did this, my.cnf already existed and had a [mysqld] section.

~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local       # Address of the management node
default-storage-engine=ndbcluster     # Default storage engine used by MySQL
 
[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local      # Address of the management node

I also had to configure SELinux, as it was blocking mysqld from connecting to the management node. It is best to follow the steps in Oracle_Linux/Quick_Reference#Creating_a_SELinux_Policy, but ultimately you'll just be creating a policy allowing mysqld to make TCP socket connections over ephemeral ports. It should look something like this:

$ cat local-mysql1.te

module local-mysql1 1.0;

require {
	type ephemeral_port_t;
	type mysqld_t;
	class tcp_socket name_connect;
}

#============= mysqld_t ==============

#!!!! This avc can be allowed using one of the these booleans:
#     nis_enabled, mysql_connect_any
allow mysqld_t ephemeral_port_t:tcp_socket name_connect;
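
This module can be compiled and loaded the same way as the data node module above. Alternatively, as the generated comment notes, the same access can be granted by flipping one of the listed booleans, e.g.:

~$ sudo setsebool -P mysql_connect_any 1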

start sql node

~$ sudo systemctl restart mysqld.service
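
As with the data nodes, you can confirm from the management node that the SQL node has claimed one of the [mysqld] slots; ndb_mgm can run a single command with -e:

#management_node~$ ndb_mgm -e show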


If you have a 2nd sql node, repeat this process

MySQL 8 Distributed Privileges

At the time of this writing, I still have not gotten distributed privileges working, but I'm following the steps at https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-privilege-distribution.html.
So far, you have to grant the NDB_STORED_USER privilege to the user in question, like so:

mysql> GRANT NDB_STORED_USER ON *.* TO 'cluster_app_user'@'localhost';
or
mysql> GRANT NDB_STORED_USER ON *.* TO 'admin'@'%';
Query OK, 0 rows affected (0.26 sec)

With the privilege in place, you can verify the appropriate values are in the ndb_sql_metadata NDB table.

~$ ndb_select_all -d mysql ndb_sql_metadata
type	name	seq	note	sql_ddl_text
11	"'admin'@'%'"	0	2	"CREATE USER 'admin'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$MGBmokOj
                                m!<=P9q7BhmH0ANKg1Fs3JGEv.ITwLdXa7KC0n2GOogn6Xc/' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT PASSWORD REQUIRE CURRENT DEFAULT"
12	"'admin'@'%'"	1	[NULL]	"GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,NDB_STORED_USER,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `admin`@`%`"
12	"'admin'@'%'"	0	[NULL]	"GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `admin`@`%`"
3 rows returned

You should only need to restart the mysqld.service on the rest of the SQL nodes for them to pull in the changes.

Whenever a MySQL server starts up and joins the cluster as an SQL node it executes these stored CREATE USER and GRANT statements as part of the cluster schema synchronization process.
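
A quick sanity check after restarting is to look for the user on another SQL node (an ordinary query, nothing NDB-specific):

mysql> SELECT user, host FROM mysql.user WHERE user = 'admin';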

NDB cluster status script

Using HAProxy in front of a MySQL 8 NDB cluster, I ran into a couple of challenges.

  1. I could not get user/pass data to sync across the mysql nodes.
  2. This in turn made it impossible for HAProxy's built-in mysql status checking plugin to work.
  3. Something else I'm forgetting as well.

So, in light of these issues, I opted to write my own NDB cluster status service that HAProxy's httpchk option can query to determine availability. This is based on this script: https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck
This is expected to work in combination with xinetd.

Script

NDB cluster status script:

#!/bin/bash
#
# Description: Script that reports status of ndb sql cluster nodes via HTTP, allowing HAProxy monitoring.
# Author: <[email protected]>
#

# Package name must be exact. Generally this is in the format <name>-<version>-<release>.<arch>;
# use yum info <package> to get the above values.

# Default values for parameters in case they are not specified.
sqlhost="localhost"
sqluser="root"
sqlpass='7,q(hf`U+3=?STg4]5a4xG!7DQL%Le'
logfile="/var/log/ndbclustercheck/xinetd.log"

# Independent parameters for the script, passed in any order, e.g. ~$ script.sh -b -a -c
while getopts s:u:p:l: option
do
    case "${option}"
    in
        s) sqlhost=${OPTARG};;
        u) sqluser=${OPTARG};;
        p) sqlpass=${OPTARG};;
        l) logfile=${OPTARG};;
    esac
done

# Dependent parameters
dateprefix=$(date +%F.%T)

# function to check if log file is writable
function check_log {
    logdir=$(dirname $logfile)
    if [ -d $logdir ]; then
        touch $logdir/test
        if [ $? -eq 0 ]; then
            rm $logdir/test
        else
            echo "$dateprefix unable to write to log directory: $logdir" &>> /tmp/xinetd.log
            #exit 1
        fi
    else
        echo "$dateprefix log directory does not exist: $logdir" &>> /tmp/xinetd.log
        #exit 1
    fi
}

# function to check if the specified package exists
function check_packages {
    echo "$dateprefix Checking Packages" &>> $logfile
    packages=("xinetd" "nmap")
    for package in ${packages[@]}
    do
        rpm -qa | grep $package &>/dev/null
        if [ $? -ne 0 ]; then
            echo "$dateprefix package: $package is not installed.  EOL" &>> $logfile
            #exit 1
        fi
    done
}

# function to check if sql is accepting connections on port 3306 and if data nodes are available.
# return a 503 HTTP response if either condition is false, otherwise return a 200 HTTP response.
function check_sql {
    echo "$dateprefix Checking SQL" &>> $logfile
    #change default field separator, white space, to newline character so arrays are parsed properly
    saveifs=$IFS
    IFS=$'\n'
    declare -a sql_available=("HTTP/1.1 200 OK\r\n" 
        "Content-Type: text/plain\r\n" 
        "Connection: close\r\n" 
        "Content-Length: 26\r\n" 
        "\r\n" 
        "SQL server is available.\r\n"
        "\r\n")
    declare -a no_datanodes=("HTTP/1.1 503 Service Unavailable\r\n" 
        "Content-Type: text/plain\r\n" 
        "Connection: close\r\n" 
        "Content-Length: 40\r\n" 
        "\r\n" 
        "No available data nodes from SQL node.\r\n"
        "\r\n")
    declare -a sql_unavailable=("HTTP/1.1 503 Service Unavailable\r\n" 
        "Content-Type: text/plain\r\n" 
        "Connection: close\r\n" 
        "Content-Length: 27\r\n" 
        "\r\n" 
        "SQL Node is unresponsive.\r\n"
        "\r\n")
    nc -z $sqlhost 3306
    if [ $? -eq 0 ]; then
        data_nodes=$(mysql --user=$sqluser --password=$sqlpass -sN -e "select count(*) from ndbinfo.nodes where status = 'STARTED';" 2>> $logfile)
        if [ -n "$data_nodes" ]; then
            for http in ${sql_available[@]}
            do
                echo -en $http | tee -a $logfile
            done
            sleep 0.1
            #exit 0
        else
            for http in ${no_datanodes[@]}
            do
                echo -en $http | tee -a $logfile
            done
            sleep 0.1
            #exit 1
        fi
    else
        for http in ${sql_unavailable[@]}
        do
            echo -en $http | tee -a $logfile
        done 
        sleep 0.1
        #exit 1
    fi
    #restore default field separator to a whitespace
    IFS=$saveifs
}

check_log
check_packages
check_sql
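
Before wiring the script into xinetd, it's worth running it by hand and confirming it prints one of the three HTTP responses above; the getopts flags override the defaults baked into the script (the password here is a placeholder):

~$ /home/r00t/ndbclustercheck -s localhost -u root -p '<password>' -l /tmp/ndbcheck.log
HTTP/1.1 200 OK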

xinetd

~$ sudo cat /etc/xinetd.d/ndbclusterstatus
service ndbclusterstatus
{
	port		= 5900
	socket_type	= stream
	protocol	= tcp
	wait		= no
	user		= root
	server		= /home/r00t/ndbclustercheck
	type		= UNLISTED
}
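
After restarting xinetd, the check should answer on port 5900 with one of the HTTP responses from the script, something like:

~$ sudo systemctl restart xinetd
~$ curl -s http://localhost:5900/
SQL server is available.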

firewalld rules

~$ sudo firewall-cmd --zone=internal --add-port=5900/tcp --permanent
success
~$ sudo firewall-cmd --reload
success

haproxy

Example config:

listen mysql-cluster 0.0.0.0:3306
	mode tcp
	balance roundrobin
	option tcplog
	option httpchk GET /
	server sql01 rncv-sql01.virtual.local:3306 check port 5900
	server sql02 rncv-sql02.virtual.local:3306 check port 5900

mysql access

Keep in mind that you'll need to either create a user specifically for haproxy with the host defined as its name/IP, or specify the host as % so that the user can log in from anywhere.

 mysql> create user `admin`@`%` identified by '^R5^2Jzz@$La-]x[H;3}';
Query OK, 0 rows affected (0.02 sec)

mysql> grant all privileges on *.* to `admin`@`%` with grant option;
Query OK, 0 rows affected (0.33 sec)
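
With the user in place, you can verify the full path through the proxy; substitute your haproxy host (the listener is on 3306 per the config above):

~$ mysql -h <haproxy_host> -P 3306 -u admin -p -e "select 1;"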

If you've tried to log into MySQL too many times through haproxy, haproxy can get locked out. Flush the hosts from MySQL to reset:

mysql --user='root' --password='7,q(hf`U+3=?STg4]5a4xG!7DQL%Le' -sN -e "flush hosts;"