This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
MySQL Cluster NDB 6.3.21 was withdrawn due to issues discovered after its release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.3 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.31 (see Section C.1.21, “Changes in MySQL 5.1.31 (19 January 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually allowed to disconnect. Now in such cases the disconnection occurs immediately. (Bug#41965)
See also Bug#41713.
It is now possible while in Single User Mode to restart all data nodes using ALL RESTART in the management client, as shown in the example following this entry. Restarting of individual nodes while in Single User Mode remains disallowed. (Bug#31056)
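For illustration, a management client session such as the following should now succeed; this is a sketch only, and assumes an SQL node with node ID 5 that is to be granted access during Single User Mode:

  ndb_mgm> ENTER SINGLE USER MODE 5
  ndb_mgm> ALL RESTART
  ndb_mgm> EXIT SINGLE USER MODE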
Formerly, when using MySQL Cluster Replication, records for “empty” epochs (that is, epochs in which no changes to NDBCLUSTER data or tables took place) were inserted into the ndb_apply_status and ndb_binlog_index tables on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13, this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option, as in the sketch following this entry. For more information, see Section 16.1.3.3, “Replication Slave Options and Variables”.
See also Bug#37472.
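As a sketch only (the authoritative syntax is given in the section referenced above), the option might be enabled in the slave mysqld's my.cnf file like this:

  [mysqld]
  ndb-log-empty-epochs=1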
Bugs fixed:
A maximum of 11 TUP scans were allowed in parallel. (Bug#42084)
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug#41905)
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug#41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug#41462)
It was not possible to perform online upgrades from a MySQL Cluster NDB 6.2 release to MySQL Cluster NDB 6.3.8 or a later MySQL Cluster NDB 6.3 release. (Bug#41435)
Cluster log files were opened twice by internal log-handling code, resulting in a resource leak. (Bug#41362)
An abort path in the DBLQH kernel block failed to release a commit acknowledgement marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow. (Bug#41296)
During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block when discovering that a transaction actually belonged to an epoch which was already completed. (Bug#41295)
Issuing EXIT in the management client sometimes caused the client to hang. (Bug#40922)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted. (Bug#33626)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space left in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes. (Bug#28077)
Cluster Replication: Sometimes, when using the --ndb_log_orig option, the orig_epoch and orig_server_id columns of the ndb_binlog_index table on the slave contained the ID and epoch of the local server instead. (Bug#41601)
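As context (a sketch, not part of the original change entry), origin tracking is enabled on the slave by setting the option named above, for example in my.cnf:

  [mysqld]
  ndb_log_orig=1

With the fix, the orig_server_id and orig_epoch columns should again record the server ID and epoch of the originating server rather than those of the local server.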
Cluster API: mgmapi.h contained constructs which only worked in C++, but not in C. (Bug#27004)
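To illustrate what the fix enables (a minimal sketch under assumed include paths and connection details, not an official example), a plain C translation unit should now be able to include mgmapi.h directly:

  /* check_mgmapi.c: compile with a C compiler, for example
     gcc check_mgmapi.c -I/path/to/ndb/include/mgmapi -lndbmgmclient */
  #include <stdio.h>
  #include <mgmapi.h>

  int main(void)
  {
      NdbMgmHandle handle = ndb_mgm_create_handle();
      if (handle == NULL) {
          fprintf(stderr, "could not create MGM API handle\n");
          return 1;
      }
      /* "localhost:1186" is an assumed connection string */
      ndb_mgm_set_connectstring(handle, "localhost:1186");
      if (ndb_mgm_connect(handle, 3, 5, 1) != 0) { /* 3 retries, 5s apart, verbose */
          fprintf(stderr, "connect failed: %s\n",
                  ndb_mgm_get_latest_error_msg(handle));
          ndb_mgm_destroy_handle(&handle);
          return 1;
      }
      printf("connected to management server\n");
      ndb_mgm_disconnect(handle);
      ndb_mgm_destroy_handle(&handle);
      return 0;
  }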