Oracle Linux/MySQL
Simple Cluster configuration
The cluster will consist of five components (a skeleton config.ini mapping them out appears right after this list):
- Management Node
- holds the configuration for MySQL Cluster in a file called config.ini. It also writes a cluster log and takes part in arbitration to prevent split-brain during a network partition.
- (2) Data Nodes
- store the data and the tables. The data nodes take care of managing transactions, recovery, etc.
- (2) SQL Nodes
- the most common way to read/write data to the Data Nodes is by using the MySQL Server (aka SQL Node).
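As a rough sketch, assuming the hostnames used throughout this guide, config.ini maps one section to each of those components (the full file is built out under "configure management node cluster"):

# skeleton only -- the complete config.ini is developed below
[ndb_mgmd]
HostName=rncv-sqlmgmt01.virtual.local  # Management Node

[ndbd]
HostName=rncv-sqldata01.virtual.local  # Data Node 1
[ndbd]
HostName=rncv-sqldata02.virtual.local  # Data Node 2

[mysqld]
HostName=rncv-sql01.virtual.local      # SQL Node 1
[mysqld]
HostName=rncv-sql02.virtual.local      # SQL Node 2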
Management Node Configuration
dependencies
There are some packages you may want to install, though they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open the required ports and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=1186/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker
Install the packages
~$ sudo yum install -y mysql-cluster-community-management-server mysql-cluster-community-client mysql-cluster-community-libs
configure management node cluster
Now we can setup a basic configuration file for the cluster.
~$ sudo mkdir -p /var/lib/mysql-cluster
~$ sudo vim /var/lib/mysql-cluster/config.ini
...
# TCP PARAMETERS
[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

# MANAGEMENT NODE PARAMETERS
[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
# Management Node db1
HostName=rncv-sqlmgmt01.virtual.local

# DATA NODE PARAMETERS
[ndbd default]
NoOfReplicas=2              # Number of replicas
DataMemory=256M             # Memory allocated for data storage
IndexMemory=128M            # Memory allocated for index storage
#LockPagesInMainMemory=1    # Locks data node processes into memory ##causes crash
# Directory for Data Node
DataDir=/var/lib/mysql-cluster
#NoOfFragmentLogFiles=300   ##causes crash
MaxNoOfConcurrentOperations=100000
#SchedulerSpinTimer=400     ##causes crash
#SchedulerExecutionTimer=100 ##causes crash
#RealTimeScheduler=1        ##causes crash
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
RedoBuffer=32M
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

[ndbd]
# Data Node db2
HostName=rncv-sqldata01.virtual.local
#LockExecuteThreadToCPU=1   ##causes crash
#LockMaintThreadsToCPU=0    ##causes crash

[ndbd]
# Data Node db3
HostName=rncv-sqldata02.virtual.local
#LockExecuteThreadToCPU=1   ##causes crash
#LockMaintThreadsToCPU=0    ##causes crash

[mysqld]
# SQL Node db4
HostName=rncv-sql01.virtual.local

[mysqld]
# SQL Node db5
HostName=rncv-sql02.virtual.local

[mysqld]
[api]
Now we can implement the config
~$ sudo ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
MySQL Cluster Management Server mysql-8.0.19 ndb-8.0.19
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2020-02-24 17:54:57 [MgmtSrvr] INFO     -- Sucessfully created config directory
2020-02-24 17:54:57 [MgmtSrvr] WARNING  -- at line 12: [DB] IndexMemory is deprecated, will use Number bytes on each ndbd(DB) node allocated for storing indexes instead
And finally we can view the resulting cluster configuration:
~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from rncv-sqldata01.virtual.local)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)

ndb_mgm> quit
autostart
A quick systemd service to manage ndb_mgmd.
Kill it.
~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHUTDOWN
Connected to Management Server at: localhost:1186
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
3 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
Node 3: Node shutdown completed.
ndb_mgm> quit
~$ sudo pkill -f ndb_mgmd
Create the service
~$ sudo vim /etc/systemd/system/ndb_mgmd.service
...
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload
Start/Enable
~$ sudo systemctl start ndb_mgmd.service
~$ sudo systemctl status ndb_mgmd.service
ndb_mgmd.service - MySQL NDB Cluster Management Server
   Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-09 17:59:55 EDT; 6s ago
  Process: 91567 ExecStart=/usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini --reload (code=exited, status=0/SUCCESS)
 Main PID: 91568 (ndb_mgmd)
   CGroup: /system.slice/ndb_mgmd.service
           └─91568 /usr/sbin/ndb_mgmd --config-file=/var/lib/mysql-clu...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Starting MySQ...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local ndb_mgmd[91567]: MySQL Cl...
Mar 09 17:59:55 rncv-sqlmgmt01.virtual.local systemd[1]: Started MySQL...
Hint: Some lines were ellipsized, use -l to show in full.
~$ sudo systemctl enable ndb_mgmd.service
Data Node Configuration
The data node configuration is very similar to the Management node configuration.
dependencies
Again, same as with the management node configuration, there are some packages you may want to install, though they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open the required ports and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --zone=internal --permanent --add-port=32768-60999/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper
Install the packages
~$ sudo yum install -y mysql-cluster-community-data-node
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs
configure data node
We now need to tell the data node where to find the management node.
~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of Management Node

[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of Management Node
Create a folder for the configuration files and data.
~$ sudo mkdir -p /var/lib/mysql-cluster
I had an issue starting ndbd because of the permissions set on the folder above. Since ndbd runs as whatever user launches it, make sure that user has read, write, and execute permissions on the folder.
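One way to satisfy that, assuming you launch ndbd as your regular login user (an assumption on my part, not a step from the original setup), is to hand that user ownership of the directory:

~$ sudo chown -R $(whoami): /var/lib/mysql-cluster  # give the launching user ownership
~$ ls -ld /var/lib/mysql-cluster                    # confirm owner and permissions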
I also had to configure SELinux, as it was blocking ndbd from starting. It is best to follow the steps listed here, but ultimately you'll just be creating a policy allowing ndbd to manipulate its files. It should look something like this:
$ cat ndbd-01.te
module ndbd-01 1.0;

require {
        type var_lib_t;
        type mysqld_t;
        class file { open append read write lock getattr };
}

#============= mysqld_t ==============
#!!!! WARNING: 'var_lib_t' is a base type.
allow mysqld_t var_lib_t:file { open append read write lock getattr };
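The linked steps cover compiling and loading the module; with the stock SELinux tools, the sequence looks roughly like this (the .mod and .pp file names are arbitrary):

~$ checkmodule -M -m -o ndbd-01.mod ndbd-01.te    # compile the type enforcement file
~$ semodule_package -o ndbd-01.pp -m ndbd-01.mod  # package it into a policy module
~$ sudo semodule -i ndbd-01.pp                    # install and load the module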
start data node
manually
Now we can start the data node and confirm it's connected.
~$ ndbd
2020-03-04 13:35:04 [ndbd] INFO     -- Angel connected to 'rncv-sqlmgmt01.virtual.local:1186'
2020-03-04 13:35:04 [ndbd] INFO     -- Angel allocated nodeid: 2
autostart
If you prefer to auto-start ndbd and manage it through systemctl, you can do this:
Kill it.
~$ sudo pkill -f ndbd
Create the service
~$ sudo vim /etc/systemd/system/ndbd.service
...
[Unit]
Description=MySQL NDB Data Node Daemon
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/bin/sh -c "/usr/sbin/ndbd"
ExecReload=/bin/kill -HUP $MAINPID
KillMode=control-group
Restart=on-failure

[Install]
WantedBy=multi-user.target
~$ sudo systemctl daemon-reload
Start/Enable
~$ sudo systemctl start ndbd
~$ sudo systemctl enable ndbd
confirm connectivity
From the management node you can also confirm this:
#management_node~$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.10.0.41  (mysql-8.0.19 ndb-8.0.19, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from rncv-sqldata02.virtual.local)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.0.40  (mysql-8.0.19 ndb-8.0.19)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from rncv-sql01.virtual.local)
id=5 (not connected, accepting connect from rncv-sql02.virtual.local)
If you have a second data node, repeat this process.
SQL Node Configuration
dependencies
Again, same as with the management node configuration, there are some packages you may want to install, though they are not required. More info can be found here.
~$ sudo yum install -y wget telnet vim net-tools bind-utils
While most of the guides I found perform installation directly using the RPM packages, I prefer to install packages from a repository whenever available. MySQL provides a YUM repository.
~$ wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
~$ sudo yum localinstall -y mysql80-community-release-el7-3.noarch.rpm
Verify you can see the new cluster packages and search them.
~$ yum repolist all | grep mysql
~$ yum --enablerepo=mysql-cluster-8.0-community search mysql
Enable the cluster repo by setting enabled=1
~$ sudo vim /etc/yum.repos.d/mysql-community.repo
...
[mysql-cluster-8.0-community]
name=MySQL Cluster 8.0 Community
baseurl=http://repo.mysql.com/yum/mysql-cluster-8.0-community/el/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Do a quick check-update to refresh the package list cache.
~$ sudo yum check-update
We also need to open the required ports and make sure our networking interface is set up correctly. In my environment, servers communicate with each other over a virtual 10Gb network rather than traversing the physical 1Gb network.
~$ sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
~$ sudo firewall-cmd --permanent --zone=internal --add-service=mysql
~$ sudo firewall-cmd --permanent --zone=internal --add-port=33060/tcp
~$ sudo firewall-cmd --reload
~$ sudo firewall-cmd --zone=internal --list-all
package installation
There are a couple of dependencies we need to install.
~$ sudo yum install -y epel-release && sudo yum install -y perl-Class-MethodMaker perl-Data-Dumper
Install the packages
~$ sudo yum install -y mysql-cluster-community-server
~$ sudo yum install -y mysql-cluster-community-client mysql-cluster-community-libs mysql-cluster-community-common
configure sql node
We now need to tell the SQL node where to find the management node. When I did this, my.cnf already existed and had a [mysqld] section.
~$ sudo vim /etc/my.cnf
...
[mysqld]
ndbcluster
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of the management node
default-storage-engine=ndbcluster               # Default storage engine used by MySQL

[mysql_cluster]
ndb-connectstring=rncv-sqlmgmt01.virtual.local  # IP address of the management node
I also had to configure SELinux, as it was blocking mysqld from connecting to the management node. It is best to follow the steps listed here, but ultimately you'll just be creating a policy allowing mysqld to create TCP socket connections over ephemeral ports. It should look something like this:
$ cat local-mysql1.te
module local-mysql1 1.0;

require {
        type ephemeral_port_t;
        type mysqld_t;
        class tcp_socket name_connect;
}

#============= mysqld_t ==============
#!!!! This avc can be allowed using one of the these booleans:
#     nis_enabled, mysql_connect_any
allow mysqld_t ephemeral_port_t:tcp_socket name_connect;
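Note the boolean hint in the generated comments above: instead of loading a custom module, it may be enough to flip the mysql_connect_any boolean (the -P flag makes it persistent across reboots):

~$ sudo setsebool -P mysql_connect_any 1
~$ getsebool mysql_connect_any  # should now report 'on'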
start sql node
~$ sudo systemctl restart mysqld.service
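Once it restarts cleanly, a quick sanity check (my own addition, not part of the original steps) is to confirm that mysqld registered ndbcluster as its default engine:

~$ mysql -u root -p -e "SHOW ENGINES;" | grep -i ndbcluster

ndbcluster should be listed with Support set to DEFAULT, matching the default-storage-engine line in my.cnf.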
If you have a second SQL node, repeat this process.
MySQL 8 Distributed Privileges
At the time of this writing, I still have not gotten distributed privileges working, but I'm following the steps listed here.
So far, you have to grant the NDB_STORED_USER privilege to said user like so:
mysql> GRANT NDB_STORED_USER ON *.* TO 'cluster_app_user'@'localhost';
or
mysql> GRANT NDB_STORED_USER ON *.* TO 'admin'@'%';
Query OK, 0 rows affected (0.26 sec)
With the privilege in place, you can verify the appropriate values are in the ndb_sql_metadata NDB table.
~$ ndb_select_all -d mysql ndb_sql_metadata
type    name    seq     note    sql_ddl_text
11      "'admin'@'%'"   0       2       "CREATE USER 'admin'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$MGBmokOj m!<=P9q7BhmH0ANKg1Fs3JGEv.ITwLdXa7KC0n2GOogn6Xc/' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT PASSWORD REQUIRE CURRENT DEFAULT"
12      "'admin'@'%'"   1       [NULL]  "GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,NDB_STORED_USER,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `admin`@`%`"
12      "'admin'@'%'"   0       [NULL]  "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `admin`@`%`"
3 rows returned
You should only need to restart mysqld.service on the rest of the SQL nodes for them to pull in the changes.
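On each remaining SQL node that is just:

~$ sudo systemctl restart mysqld.service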
Whenever a MySQL server starts up and joins the cluster as an SQL node, it executes these stored CREATE USER and GRANT statements as part of the cluster schema synchronization process.