Oracle Linux/MySQL
Revision as of 09:58, 18 March 2020
Simple Cluster configuration
The cluster will consist of five components:
- (1) Management Node
- Holds the configuration for MySQL Cluster in a file called config.ini. It also writes a cluster log, and takes part in arbitration to prevent split-brain scenarios during network partitioning.
- (2) Data Nodes
- Store the data and the tables. The data nodes take care of managing transactions, recovery, etc.
- (2) SQL Nodes
- The most common way to read/write data on the Data Nodes is by using the MySQL Server (aka SQL Node).
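For reference, a name-resolution sketch of the five hosts used throughout this guide. Only the 10.10.0.40 and 10.10.0.41 addresses are confirmed by the ndb_mgm output later on; the other addresses are assumptions for illustration.

```
# /etc/hosts sketch -- hostnames are the ones used in this guide.
# 10.10.0.42-.44 are hypothetical placeholders.
10.10.0.40  rncv-sqlmgmt01.virtual.local   # Management Node
10.10.0.41  rncv-sqldata01.virtual.local   # Data Node 1
10.10.0.42  rncv-sqldata02.virtual.local   # Data Node 2 (assumed IP)
10.10.0.43  rncv-sql01.virtual.local       # SQL Node 1 (assumed IP)
10.10.0.44  rncv-sql02.virtual.local       # SQL Node 2 (assumed IP)
```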
Management Node Configuration
dependencies
There are some packages you may want to install, but they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open a port and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=1186/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker
Install the packages
~$ sudo yum install -y mysql-cluster-community-management-server mysql-cluster-community-client mysql-cluster-community-libs
configure management node cluster
Now we can set up a basic configuration file for the cluster, following the recommended settings.
~$ sudo mkdir -p /var/lib/mysql-cluster
~$ sudo vim /var/lib/mysql-cluster/config.ini
...
# TCP PARAMETERS
[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

# MANAGEMENT NODE PARAMETERS
[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
# Management Node db1
HostName=rncv-sqlmgmt01.virtual.local

# DATA NODE PARAMETERS
[ndbd default]
NoOfReplicas=2                    # Number of replicas
DataMemory=256M                   # Memory allocated for data storage
IndexMemory=128M                  # Memory allocated for index storage
#LockPagesInMainMemory=1          # Locks data node processes into memory ##causes crash
# Directory for Data Node
DataDir=/var/lib/mysql-cluster
#NoOfFragmentLogFiles=300         ##causes crash
MaxNoOfConcurrentOperations=100000
#SchedulerSpinTimer=400           ##causes crash
#SchedulerExecutionTimer=100      ##causes crash
#RealTimeScheduler=1              ##causes crash
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
RedoBuffer=32M
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

[ndbd]
# Data Node db2
HostName=rncv-sqldata01.virtual.local
#LockExecuteThreadToCPU=1         ##causes crash
#LockMaintThreadsToCPU=0          ##causes crash

[ndbd]
# Data Node db3
HostName=rncv-sqldata02.virtual.local
#LockExecuteThreadToCPU=1         ##causes crash
#LockMaintThreadsToCPU=0          ##causes crash

[mysqld]
# SQL Node db4
HostName=rncv-sql01.virtual.local

[mysqld]
# SQL Node db5
HostName=rncv-sql02.virtual.local

[mysqld]
[api]
Now we can implement the config
~$ sudo ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
MySQL Cluster Management Server mysql-8.0.19 ndb-8.0.19
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- Sucessfully created config directory
2020-02-24 17:54:57 [MgmtSrvr] WARNING  -- at line 12: [DB] IndexMemory is deprecated, will use Number bytes on each ndbd(DB) node allocated for storing indexes instead
And finally we can view the implementation:
~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2 (not connected, accepting connect from rncv-sqldata01.virtual.local)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]	2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)

ndb_mgm> quit
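If you want the same status check without an interactive session (handy for scripts or cron), ndb_mgm can execute a single management client command via -e. A minimal sketch:

```shell
# Run "show" non-interactively against the local management server.
ndb_mgm -e show

# Or point it at a specific management node by hostname.
ndb_mgm rncv-sqlmgmt01.virtual.local -e show
```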
autostart
A quick systemd service to manage ndb_mgmd.
Kill it.
~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHUTDOWN
Connected to Management Server at: localhost:1186
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
3 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
Node 3: Node shutdown completed.
ndb_mgm> quit
~$ sudo pkill -f ndb_mgmd
Create the service
~$ sudo vim /etc/systemd/system/ndb_mgmd.service
...
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload
Start/Enable
~$ sudo systemctl start ndb_mgmd.service
~$ sudo systemctl status ndb_mgmd.service
ndb_mgmd.service - MySQL NDB Cluster Management Server
   Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-09 17:59:55 EDT; 6s ago
  Process: 91567 ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload (code=exited, status=0/SUCCESS)
 Main PID: 91568 (ndb_mgmd)
   CGroup: /system.slice/ndb_mgmd.service
           └─91568 /usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-clu...

Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Starting MySQ...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local ndb_mgmd[91567]: MySQL Cl...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Started MySQL...
Hint: Some lines were ellipsized, use -l to show in full.
~$ sudo systemctl enable ndb_mgmd.service
Data Node Configuration
The data node configuration is very similar to the Management node configuration.
dependencies
Again, same as with the management node configuration, there are some packages you may want to install, but they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open a port range and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=32768-60999/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper
Install the packages
~$ sudo yum install -y mysql-cluster-community-data-node
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs
configure data node
We now need to tell the data node where to find the sql management node.
~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of Management Node

[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of Management Node
Create a folder for the configuration files and data.
~$ sudo mkdir -p /var/lib/mysql-cluster
I had an issue starting the ndbd client because of the permissions set on the folder above. Since ndbd runs as whatever user launched it, make sure that user has read, write, and execute permissions on the folder.
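Since the mkdir above ran as root, a quick ownership fix is worth doing before launching ndbd. A sketch, assuming ndbd will be launched by a non-root login account ("clusteruser" is a placeholder, not from the original setup):

```shell
# Give the launching user ownership and rwx access to the data directory.
# Substitute the account that will actually run ndbd for "clusteruser".
sudo chown -R clusteruser:clusteruser /var/lib/mysql-cluster
sudo chmod -R u+rwx /var/lib/mysql-cluster
```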
I also had to configure SELinux, as it was blocking ndbd from starting. It is best to follow the steps listed here, but ultimately you'll just be creating a policy that allows ndbd to manipulate files. It should look something like this:
$ cat ndbd1-01.te
module ndbd-01 1.0;

require {
	type var_lib_t;
	type mysqld_t;
	type ephemeral_port_t;
	class file { open append read write lock getattr };
	class tcp_socket name_connect;
}

#============= mysqld_t ==============
#!!!! WARNING: 'var_lib_t' is a base type.
allow mysqld_t var_lib_t:file { open append read write lock getattr };
allow mysqld_t ephemeral_port_t:tcp_socket name_connect;
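The .te source above has to be compiled and loaded before it takes effect. The usual toolchain (from the checkpolicy and policycoreutils packages) looks like this, assuming the source file is saved under the name shown above:

```shell
# Compile the type-enforcement source into a policy module,
# package it, and load it into the running policy.
checkmodule -M -m -o ndbd-01.mod ndbd1-01.te
semodule_package -o ndbd-01.pp -m ndbd-01.mod
sudo semodule -i ndbd-01.pp
```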
start data node
manually
Now we can start the data node and confirm it's connected.
~$ ndbd
2020-03-04 13:35:04 [ndbd] INFO     -- Angel connected to 'rncv-sqlmgmt01.virtual.local:1186'
2020-03-04 13:35:04 [ndbd] INFO     -- Angel allocated nodeid: 2
autostart
If you prefer to auto start ndbd and manage it through systemctl, you can do this:
Kill it.
~$ sudo pkill -f ndbd
Create the service
~$ sudo vim /etc/systemd/system/ndbd.service
...
[Unit]
Description=MySQL NDB Data Node Daemon
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/bin/sh -c "/usr/sbin/ndbd"
ExecReload=/bin/kill -HUP $MAINPID
KillMode=control-group
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload
Start/Enable
~$ sudo systemctl start ndbd
~$ sudo systemctl enable ndbd
confirm connectivity
From the management node you can also confirm this:
#management_node
~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@10.10.0.41  (mysql-8.0.19 ndb-8.0.19, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]	2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)
If you have a 2nd data node, repeat this process
SQL Node Configuration
dependencies
Again, same as with the management node configuration, there are some packages you may want to install, but they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open the MySQL ports and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --permanent --zone=internal --add-service=mysql
~$ sudo firewall-cmd --permanent --zone=internal --add-port=33060/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper
Install the packages
~$ sudo yum install -y mysql-cluster-community-server
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs mysql-cluster-community-common
configure sql node
We now need to tell the SQL node where to find the management node. When I did this, my.cnf already existed and had a [mysqld] section.
~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address for server management node
default-storage-engine=ndbcluster               # Define default Storage Engine used by MySQL

[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address for server management node
I also had to configure SELinux, as it was blocking mysqld from connecting to the management node. It is best to follow the steps listed here, but ultimately you'll just be creating a policy that allows mysqld to create TCP socket connections over ephemeral ports. It should look something like this:
$ cat local-mysql1.te
module local-mysql1 1.0;

require {
	type ephemeral_port_t;
	type mysqld_t;
	class tcp_socket name_connect;
}

#============= mysqld_t ==============
#!!!! This avc can be allowed using one of these booleans:
#     nis_enabled, mysql_connect_any
allow mysqld_t ephemeral_port_t:tcp_socket name_connect;
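As the generated comment above points out, this particular AVC can also be allowed by flipping an existing boolean instead of loading a custom module, if you are comfortable with the coarser switch:

```shell
# Allow mysqld to connect to any TCP port; -P persists the change across reboots.
sudo setsebool -P mysql_connect_any 1
```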
start sql node
~$ sudo systemctl restart mysqld.service
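To confirm the SQL node actually registered with the cluster, you can query the ndbinfo schema from the node itself. A sketch (credentials are whatever you configured for this server):

```shell
# All cluster nodes and their status, as seen from this SQL node.
mysql -u root -p -e "SELECT node_id, status FROM ndbinfo.nodes;"

# The ndbcluster engine should also show up as supported.
mysql -u root -p -e "SHOW ENGINES;" | grep -i ndbcluster
```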
If you have a 2nd sql node, repeat this process
MySQL 8 Distributed Privileges
At the time of this writing, I still have not gotten distributed privileges working, but I'm following the steps listed here.
So far, you have to grant the NDB_STORED_USER
privilege to said user like so:
mysql> GRANT NDB_STORED_USER ON *.* TO 'cluster_app_user'@'localhost';
or
mysql> GRANT NDB_STORED_USER ON *.* TO 'admin'@'%';
Query OK, 0 rows affected (0.26 sec)
With the privilege in place, you can verify the appropriate values are in the ndb_sql_metadata
NDB table.
~$ ndb_select_all -d mysql ndb_sql_metadata
type	name	seq	note	sql_ddl_text
11	"'admin'@'%'"	0	2	"CREATE USER 'admin'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$MGBmokOj m!<=P9q7BhmH0ANKg1Fs3JGEv.ITwLdXa7KC0n2GOogn6Xc/' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT PASSWORD REQUIRE CURRENT DEFAULT"
12	"'admin'@'%'"	1	[NULL]	"GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,NDB_STORED_USER,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `admin`@`%`"
12	"'admin'@'%'"	0	[NULL]	"GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `admin`@`%`"
3 rows returned
You should only need to restart the mysqld.service on the rest of the SQL nodes for them to pull in the changes.
Whenever a MySQL server starts up and joins the cluster as an SQL node it executes these stored CREATE USER and GRANT statements as part of the cluster schema synchronization process.
NDB cluster status script
Using HAProxy in front of a MySQL 8 NDB cluster, I ran into a couple of challenges:
- I could not get user/pass data to sync across the MySQL nodes.
- This in turn made it impossible for HAProxy's built-in MySQL status checking plugin to work.
- Something else I'm forgetting as well.
So, in light of these issues, I opted to write my own NDB cluster status service that HAProxy's HTTP check can query to determine availability. This is based on this script.
It is expected to work in combination with xinetd.
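On the HAProxy side, the matching check points httpchk at port 5900 while client traffic still flows to MySQL on 3306. A sketch only: the backend name, timers, and server lines are illustrative assumptions, not taken from the original setup.

```
# haproxy.cfg fragment -- names and timers are placeholders.
backend mysql_ndb
    mode tcp
    balance roundrobin
    option httpchk GET /
    # "check port 5900" sends the HTTP health probe to the xinetd status
    # service; client connections still go to MySQL on 3306.
    server sql01 rncv-sql01.virtual.local:3306 check port 5900 inter 2s fall 3 rise 2
    server sql02 rncv-sql02.virtual.local:3306 check port 5900 inter 2s fall 3 rise 2
```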
Script
NDB cluster status script:
#!/bin/bash
#
# Description: Script that reports status of NDB SQL cluster nodes via HTTP, allowing HAProxy monitoring.
# Author: <[email protected]>
#
# Package name must be exact. Generally this is in the format <name>-<version>-<release>.<arch>
# Use yum info <package> to get the above values.

# Default values for parameters in case they are not specified.
sqlhost="localhost"
sqluser="root"
sqlpass='7,q(hf`U+3=?STg4]5a4xG!7DQL%Le'
logfile="/var/log/ndbclustercheck/xinetd.log"

# Independent parameters for the script, passed in any order, ie. ~$ script.sh -b -a -c
while getopts s:u:p:l: option
do
	case "${option}" in
		s) sqlhost=${OPTARG};;
		u) sqluser=${OPTARG};;
		p) sqlpass=${OPTARG};;
		l) logfile=${OPTARG};;
	esac
done

# Dependent parameters
dateprefix=$(date +%F.%T)

# Function to check if the log file is writable
function check_log {
	logdir=$(dirname $logfile)
	if [ -d $logdir ]; then
		touch $logdir/test
		if [ $? -eq 0 ]; then
			rm $logdir/test
		else
			echo "$dateprefix unable to write to log directory: $logdir" &> /tmp/xinetd.log
			#exit 1
		fi
	else
		echo "$dateprefix log directory does not exist: $logdir" &> /tmp/xinetd.log
		#exit 1
	fi
}

# Function to check if the specified packages exist
function check_packages {
	echo "$dateprefix Checking Packages" &> $logfile
	packages=("xinetd-2.3.15-13.el7.x86_64" "nmap.x86_64 2:6.40-19.el7")
	for package in ${packages[@]}
	do
		rpm -qa | grep $package &>/dev/null
		if [ $? -ne 0 ]; then
			# Append so earlier log entries from this run are preserved.
			echo "$dateprefix package: $package is not installed. EOL" &>> $logfile
			#exit 1
		fi
	done
}

# Function to check if SQL is accepting connections on port 3306 and if data nodes are available.
# Returns a 503 HTTP response if either condition is false, otherwise returns a 200 HTTP response.
function check_sql {
	echo "$dateprefix Checking SQL" &>> $logfile
	# Change the default field separator (whitespace) to the newline character so arrays are parsed properly
	saveifs=$IFS
	IFS=$'\n'
	declare -a sql_available=("HTTP/1.1 200 OK\r\n" "Content-Type: text/plain\r\n" "Connection: close\r\n" "Content-Length: 40\r\n" "\r\n" "SQL server is available.\r\n")
	declare -a no_datanodes=("HTTP/1.1 503 Service Unavailable\r\n" "Content-Type: text/plain\r\n" "Connection: close\r\n" "Content-Length: 44\r\n" "\r\n" "No available data nodes from SQL node.\r\n")
	declare -a sql_unavailable=("HTTP/1.1 503 Service Unavailable\r\n" "Content-Type: text/plain\r\n" "Connection: close\r\n" "Content-Length: 44\r\n" "\r\n" "SQL Node is unresponsive.\r\n")
	nc -z $sqlhost 3306
	if [ $? -eq 0 ]; then
		data_nodes=$(mysql --user=$sqluser --password=$sqlpass -sN -e "select count(*) from ndbinfo.nodes where status = 'STARTED';" 2>> $logfile)
		# Require at least one STARTED data node; a count of 0 means none are available.
		if [ -n "$data_nodes" ] && [ "$data_nodes" -gt 0 ]; then
			for http in ${sql_available[@]}
			do
				echo -en $http | tee -a $logfile
			done
			sleep 0.1
			#exit 0
		else
			for http in ${no_datanodes[@]}
			do
				echo -en $http | tee -a $logfile
			done
			sleep 0.1
			#exit 1
		fi
	else
		for http in ${sql_unavailable[@]}
		do
			echo -en $http | tee -a $logfile
		done
		sleep 0.1
		#exit 1
	fi
	# Restore the default field separator (whitespace)
	IFS=$saveifs
}

check_log
check_packages
check_sql
xinetd
~$ sudo cat /etc/xinetd.d/ndbclusterstatus
service ndbclusterstatus
{
	port        = 5900
	socket_type = stream
	protocol    = tcp
	wait        = no
	user        = root
	server      = /home/r00t/ndbclustercheck
	type        = UNLISTED
}
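A quick way to exercise the xinetd service end-to-end is to hit port 5900 from another host and inspect the status line:

```shell
# Expect "HTTP/1.1 200 OK" while the cluster is healthy,
# and a 503 when the SQL node or data nodes are down.
curl -i http://rncv-sql01.virtual.local:5900/
```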
firewalld rules
~$ sudo firewall-cmd --zone=internal --add-port=5900/tcp --permanent
success
~$ sudo firewall-cmd --reload
success