This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.3 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.35 (see Section C.1.15, “Changes in MySQL 5.1.35 (13 May 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
On Solaris platforms, the MySQL Cluster management server and NDB API applications now use CLOCK_REALTIME as the default clock. (Bug#46183)
A new option --exclude-missing-columns has been added for the ndb_restore program. In the event that any tables in the database or databases being restored to have fewer columns than the same-named tables in the backup, the extra columns in the backup's version of the tables are ignored. For more information, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”. (Bug#43139)
This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.
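For example, assuming a hypothetical node ID, backup ID, and backup directory, the new ndb_restore option might be used as follows:

    shell> ndb_restore --nodeid=1 --backupid=1 --restore-data \
               --exclude-missing-columns \
               --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1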
Bugs fixed:
Restarting the cluster following a local checkpoint and an online ALTER TABLE on a non-empty table caused data nodes to crash. (Bug#46651)
Full table scans failed to execute when the cluster contained more than 21 table fragments. The number of table fragments in the cluster can be calculated as the number of data nodes, times 8 (that is, times the value of the internal constant MAX_FRAG_PER_NODE), divided by the number of replicas. Thus, when NoOfReplicas = 1, at least 3 data nodes (3 × 8 / 1 = 24 fragments) were required to trigger this issue, and when NoOfReplicas = 2, at least 6 data nodes (6 × 8 / 2 = 24 fragments) were required to do so.
Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.
The exact sequence of events causing this issue was as follows:
1. A local checkpoint (LCP) occurs.
2. Immediately following the LCP, kill the master data node.
3. Kill the remaining data nodes within a few seconds of killing the master.
4. Attempt to restart the cluster.
Ending a line in the config.ini file with an extra semicolon character (;) caused reading the file to fail with a parsing error. (Bug#46242)
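For example, a hypothetical config.ini entry such as the following, with a stray trailing semicolon, formerly caused the parsing error:

    [ndbd default]
    NoOfReplicas=2;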
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug#46069)
OPTIMIZE TABLE on an NDB table could in some cases cause SQL and data nodes to crash. This issue was observed with both ndbd and ndbmtd. (Bug#45971)
The AutoReconnect configuration parameter for API nodes (including SQL nodes) has been added. This is intended to prevent API nodes from re-using allocated node IDs during cluster restarts. For more information, see Section 17.3.2.7, “Defining SQL and Other API Nodes in a MySQL Cluster”.
This fix also introduces two new methods of the Ndb_cluster_connection class in the NDB API. For more information, see Ndb_cluster_connection::set_auto_reconnect() and Ndb_cluster_connection::get_auto_reconnect(). (Bug#45921)
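The following minimal NDB API sketch shows how these two methods might be called; the connect string and the choice to disable automatic reconnection are assumptions made for illustration:

    #include <NdbApi.hpp>
    #include <cstdio>

    int main()
    {
        ndb_init();    // required before using any other NDB API call

        // Hypothetical connect string for a management server on localhost.
        Ndb_cluster_connection conn("localhost:1186");

        // Disable automatic reconnection; this corresponds to the behavior
        // described above, preventing this API node from re-using its
        // allocated node ID across a cluster restart.
        conn.set_auto_reconnect(0);

        if (conn.connect(4, 5, 1) != 0)    // 4 retries, 5 s delay, verbose
        {
            std::fprintf(stderr, "Could not connect to the cluster\n");
            ndb_end(0);
            return 1;
        }

        std::printf("auto-reconnect setting: %d\n", conn.get_auto_reconnect());

        ndb_end(0);
        return 0;
    }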
The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible that these signals could be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug#45646)
Problems could arise when using VARCHAR columns whose size was greater than 341 characters and which used the utf8_unicode_ci collation. In some cases, this combination of conditions could cause certain queries and OPTIMIZE TABLE statements to crash mysqld. (Bug#45053)
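A hypothetical table definition meeting these conditions (the table and column names are invented for illustration) would be:

    CREATE TABLE t1 (
        c1 VARCHAR(350)
            CHARACTER SET utf8
            COLLATE utf8_unicode_ci
    ) ENGINE=NDBCLUSTER;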
An internal NDB API buffer was not properly initialized. (Bug#44977)
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with filesystem errors. (Bug#44952)
The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug#44563)
In some cases, OPTIMIZE TABLE on an NDB table did not free any DataMemory. (Bug#43683)
If the cluster crashed during the execution of a CREATE LOGFILE GROUP statement, the cluster could not be restarted afterwards. (Bug#36702)
See also Bug#34102.
Disk Data: Partitioning: An NDBCLUSTER table created with a very large value for the MAX_ROWS option could cause ndbd to crash when performing a system restart, if that table had been dropped and a new table with fewer partitions, but having the same table ID, had been created. This was because the server attempted to examine each partition whether or not it actually existed. (Bug#45154)
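The following hypothetical statements (table and column names invented for illustration) show the kind of sequence that could lead to the crash on a subsequent system restart:

    CREATE TABLE t1 (c1 INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER
        MAX_ROWS=100000000;
    DROP TABLE t1;
    -- The new table has fewer partitions but may receive the same table ID.
    CREATE TABLE t1 (c1 INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER;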
Disk Data: If the value set in the config.ini file for FileSystemPathDD, FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug#46243)
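For example, in a hypothetical configuration such as the following (the path shown is invented for illustration), FileSystemPathDD was formerly ignored when the data node was started with --initial, so that Disk Data files under /var/lib/mysql-cluster were not removed:

    [ndbd default]
    FileSystemPath=/var/lib/mysql-cluster
    FileSystemPathDD=/var/lib/mysql-cluster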
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include a row that should not have been in the snapshot, or exclude a row that should have been. This would later lead to a crash during or following recovery. (Bug#41915)
See also Bug#47832.