This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.2 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.34 (see Section C.1.17, “Changes in MySQL 5.1.34 (02 April 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Bugs fixed:
Important Change: Partitioning: User-defined partitioning of an NDBCLUSTER table without any primary key sometimes failed, and could cause mysqld to crash.
Now, if you wish to create an NDBCLUSTER table with user-defined partitioning, the table must have an explicit primary key, and all columns listed in the partitioning expression must be part of the primary key. The hidden primary key used by the NDBCLUSTER storage engine is not sufficient for this purpose. However, if the list of columns is empty (that is, the table is defined using PARTITION BY [LINEAR] KEY()), then no explicit primary key is required.
This change does not affect the partitioning of tables using any storage engine other than NDBCLUSTER. (Bug#40709)
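As a minimal sketch of the new requirement (the table and column names are illustrative, not taken from the bug report):

    -- Accepted: every column in the partitioning expression (c1)
    -- is part of the explicit primary key.
    CREATE TABLE t1 (
        c1 INT NOT NULL,
        c2 INT NOT NULL,
        PRIMARY KEY (c1, c2)
    ) ENGINE=NDBCLUSTER
    PARTITION BY KEY (c1);

    -- Also accepted: an empty column list, so no explicit
    -- primary key is required.
    CREATE TABLE t2 (
        c1 INT NOT NULL,
        c2 INT NOT NULL
    ) ENGINE=NDBCLUSTER
    PARTITION BY KEY ();

    -- Rejected after this change: c2 appears in the partitioning
    -- expression but is not part of the primary key.
    CREATE TABLE t3 (
        c1 INT NOT NULL PRIMARY KEY,
        c2 INT NOT NULL
    ) ENGINE=NDBCLUSTER
    PARTITION BY KEY (c2);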
An internal NDB API buffer was not properly initialized. (Bug#44977)
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP (local checkpoint), the data node could fail with file system errors. (Bug#44952)
Inspection of the code revealed that several assignment operators (=) were used in place of comparison operators (==) in DbdihMain.cpp. (Bug#44567)
See also Bug#44570.
It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug#44132)
TransactionDeadlockDetectionTimeout values less than 100 were treated as 100. This could cause scans to time out unexpectedly. (Bug#44099)
The file ndberror.c contained a C++-style comment, which caused builds to fail with some C compilers. (Bug#44036)
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug#43888)
When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug#43699)
Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug#43226)
In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug#43156)
Some queries using combinations of logical and comparison operators on an indexed column in the WHERE clause could fail with the error Got error 4541 'IndexBound has no bound information' from NDBCLUSTER. (Bug#42857)
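The release note does not include a reproducing query; purely as a hypothetical illustration of the query shape described (the table, column, and index names are assumed):

    -- Hypothetical: combining logical (OR, AND) and comparison
    -- operators on the indexed column c1 in the WHERE clause.
    CREATE TABLE t (
        id INT NOT NULL PRIMARY KEY,
        c1 INT NOT NULL,
        INDEX idx_c1 (c1)
    ) ENGINE=NDBCLUSTER;

    SELECT * FROM t
    WHERE (c1 < 10 OR c1 > 20) AND c1 <> 15;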
ndb_restore --print_data did not handle DECIMAL columns correctly. (Bug#37171)
The output of ndbd --help did not provide clear information about the program's --initial and --initial-start options. (Bug#28905)
It was theoretically possible for the value of a nonexistent column to be read as NULL, rather than causing an error. (Bug#27843)
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row. In addition, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, and the row was examined after the insert had been aborted but before the delete was aborted. In some cases, this would leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Disk Data: Partitioning: An NDBCLUSTER table created with a very large value for the MAX_ROWS option could cause ndbd to crash during a system restart if that table had been dropped and a new table with fewer partitions, but having the same table ID, had been created. This was because the server attempted to examine each partition whether or not it actually existed. (Bug#45154)
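As a hedged sketch of the triggering sequence (the names and values here are illustrative only):

    -- A very large MAX_ROWS value causes NDB to allocate more
    -- partitions (fragments) for the table.
    CREATE TABLE t_big (
        id INT NOT NULL PRIMARY KEY,
        val VARCHAR(255)
    ) ENGINE=NDBCLUSTER
    MAX_ROWS=1000000000;

    -- Dropping the table and creating another could reuse the same
    -- internal table ID with fewer partitions; a system restart at
    -- that point could crash ndbd before this fix.
    DROP TABLE t_big;
    CREATE TABLE t_small (
        id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDBCLUSTER;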
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include a row that should not have been in the snapshot, or exclude a row that should have been. This would later lead to a crash during or following recovery. (Bug#41915)
See also Bug#47832.
Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.
As a result of this fix, the minimum allowed INITIAL_SIZE for an undo log file is now 1M (1 megabyte). (Bug#29574)
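For reference, a minimal sketch of creating an undo log file at the new minimum size (the group and file names are illustrative):

    -- Creates a log file group whose undo log file uses the new
    -- minimum permitted INITIAL_SIZE of 1 megabyte.
    CREATE LOGFILE GROUP lg_1
        ADD UNDOFILE 'undo_1.log'
        INITIAL_SIZE 1M
        UNDO_BUFFER_SIZE 2M
        ENGINE NDBCLUSTER;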
Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug#24521)
Cluster Replication: A failure when setting up replication events could lead to subsequent data node failures. (Bug#44915)
Cluster API: If the largest offset of a RecordSpecification used for an NdbRecord object was for the NULL bits (and thus not a column), this offset was not taken into account when calculating the size used for the RecordSpecification. This meant that the space for the NULL bits could be overwritten by key or other information. (Bug#43891)
Cluster API: The default NdbRecord structures created by NdbDictionary could have overlapping null bits and data fields. (Bug#43590)
Cluster API: When performing insert or write operations, NdbRecord allows key columns to be specified in both the key record and in the attribute record. Only one key column value for each key column should be sent to the NDB kernel, but this was not guaranteed. This is now ensured as follows: For insert and write operations, key column values are taken from the key record; for scan takeover update operations, key column values are taken from the attribute record. (Bug#42238)
Cluster API: Ordered index scans using NdbRecord formerly expressed a BoundEQ range as separate lower and upper bounds, resulting in 2 copies of the column values being sent to the NDB kernel.
Now, when a range is specified by NdbScanOperation::setBound(), the passed pointers, key lengths, and inclusive bits are compared, and only one copy of the equal key columns is sent to the kernel. This makes such operations more efficient, as only half as much KeyInfo is now sent for a BoundEQ range as before. (Bug#38793)