MySQL Cluster NDB 7.0.11 was withdrawn shortly after release, due to Bug#51027. Users should upgrade to MySQL Cluster NDB 7.0.11a, which fixes this issue.
This release incorporates new features in the NDBCLUSTER storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.10.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 7.0 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.39 (see Section C.1.7, “Changes in MySQL 5.1.41 (05 November 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Important Change: The maximum allowed value of the ndb_autoincrement_prefetch_sz system variable has been increased from 256 to 65536. (Bug#50621)
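For illustration, a value above the old 256 limit can now be used; the value 1024 shown below is arbitrary, and both forms assume a mysqld acting as a MySQL Cluster SQL node:

    # my.cnf (illustrative value only)
    [mysqld]
    ndb_autoincrement_prefetch_sz = 1024

    -- or, at runtime, from the mysql client:
    SET GLOBAL ndb_autoincrement_prefetch_sz = 1024;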
Added multi-threaded ordered index building capability during system restarts or node restarts, controlled by the BuildIndexThreads data node configuration parameter (also introduced in this release). A configuration sketch follows.
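As a sketch, this parameter is set in the config.ini file read by the management server; the thread count shown is an arbitrary example:

    [ndbd default]
    # Build ordered indexes in parallel during system or node restarts
    # (the value 4 is illustrative only)
    BuildIndexThreads=4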
Cluster Replication: Because no timestamp is available for delete operations, a delete using NDB$MAX() is actually processed as NDB$OLD. However, because this is not optimal for some use cases, NDB$MAX_DELETE_WIN() has been added as a conflict resolution function: if the “timestamp” column value for a row coming from the master that adds or updates an existing row is higher than that on the slave, the change is applied (as with NDB$MAX()); however, delete operations are treated as always having the higher value. See NDB$MAX_DELETE_WIN(column_name), for more information. (Bug#50650)
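A minimal sketch of selecting this function for a table follows; the database name test, table name t1, and timestamp column X are hypothetical, and it assumes the mysql.ndb_replication table has already been created on the master as described in the conflict resolution documentation (the binlog_type value shown is likewise only an example):

    -- Hypothetical names: database "test", table "t1", timestamp column "X"
    INSERT INTO mysql.ndb_replication
        (db, table_name, server_id, binlog_type, conflict_fn)
    VALUES
        ('test', 't1', 0, 7, 'NDB$MAX_DELETE_WIN(X)');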
Bugs fixed:
Initial start of partitioned nodes did not work correctly. This issue was observed in MySQL Cluster NDB 7.0 only. (Bug#50661)
The CREATE NODEGROUP client command in ndb_mgm could sometimes cause the forced shutdown of a data node. (Bug#50594)
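For reference, this command takes a comma-separated list of data node IDs in the ndb_mgm client; the node IDs shown here are placeholders:

    ndb_mgm> CREATE NODEGROUP 3,4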
Local query handler information was not reported or written to the cluster log correctly. This issue is thought to have been introduced in MySQL Cluster NDB 7.0.10. (Bug#50467)
Online upgrades from MySQL Cluster NDB 7.0.9b to MySQL Cluster NDB 7.0.10 did not work correctly. Current MySQL Cluster NDB 7.0 users should upgrade directly to MySQL Cluster NDB 7.0.11 or later.
This issue is not known to have affected MySQL Cluster NDB 6.3, and it should be possible to upgrade from MySQL Cluster NDB 6.3 to MySQL Cluster NDB 7.0.10 without problems. See Section 17.2.6.2, “MySQL Cluster 5.1 and MySQL Cluster NDB 6.x/7.x Upgrade and Downgrade Compatibility”, for more information. (Bug#50433)
Dropping unique indexes in parallel while they were in use could cause node and cluster failures. (Bug#50118)
When attempting to join a running cluster whose management server had been started with the --nowait-nodes option and which had SQL nodes with dynamically allocated node IDs, a second management server failed with spurious INTERNAL ERROR: Found dynamic ports with value in config... error messages. (Bug#49807)
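The scenario involved a management server started along the following lines; the configuration file path and node ID are placeholders:

    shell> ndb_mgmd -f /path/to/config.ini --nowait-nodes=2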
When setting the LockPagesInMainMemory configuration parameter failed, only the error Failed to memlock pages... was returned. Now in such cases the operating system's error code is also returned. (Bug#49724)
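For context, a sketch of how this parameter appears in config.ini; whether to lock pages, and for which nodes, depends on the deployment, so the setting shown is an example only:

    [ndbd default]
    # Request that data node memory be locked (memlocked) in RAM
    # (value shown is illustrative)
    LockPagesInMainMemory=1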
If a query on an NDB table compared a constant string value to a column, and the length of the string was greater than that of the column, condition pushdown did not work correctly. (The string was truncated to fit the column length before being pushed down.) Now in such cases, the condition is no longer pushed down. (Bug#49459)
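A hedged illustration of the affected case, using a hypothetical table and column; the Extra column of the EXPLAIN output indicates whether a condition was pushed down to the data nodes:

    -- Hypothetical table: c1 is shorter than the constant compared below.
    CREATE TABLE t1 (c1 VARCHAR(5)) ENGINE=NDBCLUSTER;
    SET engine_condition_pushdown = ON;
    -- The string constant is longer than c1, so the condition is not pushed down.
    EXPLAIN SELECT * FROM t1 WHERE c1 = 'abcdefghij'\G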
ndbmtd was not built on Windows (CMake did not provide a build target for it). (Bug#49325)
Performing intensive inserts and deletes in parallel with a high scan load could cause data node crashes due to a failure in the DBACC kernel block. This was because the check for when to perform bucket splits or merges considered only the first 4 scans. (Bug#48700)
During Start Phases 1 and 2, the STATUS command sometimes (falsely) returned Not Connected for data nodes running ndbmtd. (Bug#47818)
When performing a DELETE that included a left join from an NDB table, only the first matching row was deleted. (Bug#47054)
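This refers to multi-table DELETE statements of the following general form; the table and column names are illustrative only:

    -- Previously, only the first matching row was removed from the NDB table t1.
    DELETE t1 FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE t2.id IS NULL;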
Under some circumstances, the DBTC kernel block could send an excessive number of commit and completion messages, which could lead to the job buffer filling up and node failure. This was especially likely to occur when using ndbmtd with a single data node. (Bug#45989)
When setting LockPagesInMainMemory, the stated memory was not allocated when the node was started, but rather only when the memory was used by the data node process for other reasons. (Bug#37430)
Trying to insert more rows than would fit into an NDB table caused data nodes to crash. Now in such situations, the insert fails gracefully with error 633 Table fragment hash index has reached maximum possible size. (Bug#34348)
On Mac OS X or Windows, sending a SIGHUP signal to the server or an asynchronous flush (triggered by flush_time) caused the server to crash. (Bug#47525)
The ARCHIVE storage engine lost records during a bulk insert. (Bug#46961)
When using the ARCHIVE storage engine, SHOW TABLE STATUS displayed incorrect information for Max_data_length, Data_length, and Avg_row_length. (Bug#29203)
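These figures can be checked with a statement along the following lines; the table name is a placeholder:

    -- Check Max_data_length, Data_length, and Avg_row_length in the output.
    SHOW TABLE STATUS LIKE 't_archive'\G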