Do not convert MySQL system tables in the
mysql database from
MyISAM to InnoDB type.
This is an unsupported operation. If you do this, MySQL does
not restart until you restore the old system tables from a
backup or re-generate them with the
mysql_install_db script.
A table cannot contain more than 1000 columns.
The internal maximum key length is 3500 bytes, but MySQL
itself restricts this to 1024 bytes.
The maximum row length, except for variable-length columns
(VARBINARY, VARCHAR,
BLOB and TEXT),
is slightly less than half of a database page. That is, the
maximum row length is about 8000 bytes.
LONGBLOB and LONGTEXT
columns must be less than 4GB, and the total row length,
including also BLOB and
TEXT columns, must be less than 4GB.
InnoDB stores the first 768 bytes of a
VARCHAR, BLOB, or
TEXT column in the row, and the rest into
separate pages.
Although InnoDB supports row sizes larger
than 65535 internally, you cannot define a row containing
VARCHAR columns with a combined size larger
than 65535:
CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the
used table type, not counting BLOBs, is 65535. You have to change some
columns to TEXT or BLOBs
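As the error message suggests, one way to stay under the limit is to
declare the long columns as TEXT instead of
VARCHAR (a sketch; the table and column names are
illustrative):

```sql
-- Same seven columns, but the long ones declared as TEXT:
-- only a 768-byte prefix of each TEXT value is stored in the row,
-- so the in-row size stays well under the 65535-byte limit.
CREATE TABLE t (a VARCHAR(8000), b TEXT, c TEXT, d TEXT,
                e TEXT, f TEXT, g TEXT) ENGINE=InnoDB;
```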
On some older operating systems, files must be less than 2GB.
This is not a limitation of InnoDB itself,
but if you require a large tablespace, you will need to
configure it using several smaller data files rather than one
or a few large data files.
The combined size of the
InnoDB log files
must be less than 4GB.
The minimum tablespace size is 10MB. The maximum tablespace
size is four billion database pages (64TB). This is also the
maximum size for a table.
InnoDB tables do not support
FULLTEXT indexes.
ANALYZE TABLE determines index cardinality
(as displayed in the
Cardinality column of
SHOW INDEX output) by doing eight random
dives to each of the index trees and updating index
cardinality estimates accordingly. Note that because these are
only estimates, repeated runs of ANALYZE
TABLE may produce different numbers. This makes
ANALYZE TABLE fast on
InnoDB tables but not 100% accurate as it
doesn't take all rows into account.
MySQL uses index cardinality estimates only in join
optimization. If some join is not optimized in the right way,
you can try using
ANALYZE TABLE. In the few cases that
ANALYZE TABLE doesn't produce
values good enough for your particular tables, you can use
FORCE INDEX with your queries to force the
use of a particular index, or set the
max_seeks_for_key system variable to ensure
that MySQL prefers index lookups over table scans. See
Section 5.2.2, “Server System Variables”, and
Section A.6, “Optimizer-Related Issues”.
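These techniques can be combined as follows (the table and index
names are hypothetical):

```sql
-- Refresh the index cardinality estimates for the table.
ANALYZE TABLE orders;

-- Force the optimizer to use a particular index for one query.
SELECT * FROM orders FORCE INDEX (idx_customer)
WHERE customer_id = 42;

-- Make index lookups look cheap relative to table scans.
SET max_seeks_for_key = 100;
```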
SHOW TABLE STATUS does not give accurate
statistics on InnoDB tables, except for the
physical size reserved by the table. The row count is only a
rough estimate used in SQL optimization.
InnoDB does not keep an internal count of
rows in a table. (In practice, this would be somewhat
complicated due to multi-versioning.) To process a
SELECT COUNT(*) FROM t statement,
InnoDB must scan an index of the table,
which takes some time if the index is not entirely in the
buffer pool. To get a fast count, you have to use a counter
table you create yourself and let your application update it
according to the inserts and deletes it does. If your table
does not change often, using the MySQL query cache is a good
solution. SHOW TABLE STATUS also can be
used if an approximate row count is sufficient. See
Section 14.2.11, “InnoDB Performance Tuning Tips”.
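The counter-table approach can be sketched as follows (table names
are illustrative); the counter must be updated in the same
transaction as each insert or delete so that it stays consistent:

```sql
CREATE TABLE t_rowcount (cnt BIGINT NOT NULL) ENGINE=InnoDB;
INSERT INTO t_rowcount VALUES (0);

-- On each insert into t, within the same transaction:
UPDATE t_rowcount SET cnt = cnt + 1;
-- On each delete from t, within the same transaction:
UPDATE t_rowcount SET cnt = cnt - 1;

-- Fast count, no index scan:
SELECT cnt FROM t_rowcount;
```

Note that a single counter row serializes concurrent writers on its
row lock, so this trade-off suits tables with moderate write
concurrency.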
On Windows, InnoDB always stores database
and table names internally in lowercase. To move databases in
binary format from Unix to Windows or from Windows to Unix,
you should always use explicitly lowercase names when creating
databases and tables.
For an AUTO_INCREMENT column, you must
always define an index for the table, and that index must
contain just the
AUTO_INCREMENT column. In
MyISAM tables, the
AUTO_INCREMENT column may be part of a
multi-column index.
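For example (illustrative table names), the following definitions
contrast the two engines:

```sql
-- InnoDB: the AUTO_INCREMENT column needs an index of its own.
CREATE TABLE t1 (id INT AUTO_INCREMENT, v CHAR(10),
                 PRIMARY KEY (id)) ENGINE=InnoDB;

-- MyISAM: the AUTO_INCREMENT column may be the second column of a
-- multi-column index, in which case numbering restarts for each
-- distinct grp value.
CREATE TABLE t2 (grp CHAR(10), id INT AUTO_INCREMENT,
                 PRIMARY KEY (grp, id)) ENGINE=MyISAM;
```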
While initializing a previously specified
AUTO_INCREMENT column on a table,
InnoDB sets an exclusive lock on the end of
the index associated with the
AUTO_INCREMENT column. In accessing the
auto-increment counter, InnoDB uses a
specific table lock mode
AUTO-INC where the
lock lasts only to the end of the current SQL statement, not
to the end of the entire transaction. Note that other clients
cannot insert into the table while the
AUTO-INC table lock is held; see
“AUTO_INCREMENT Handling in InnoDB”.
When you restart the MySQL server,
InnoDB may reuse an old value that was generated for an
AUTO_INCREMENT column but never stored
(that is, a value that was generated during an old transaction
that was rolled back).
When an AUTO_INCREMENT column runs out of
values, InnoDB wraps a
BIGINT to -9223372036854775808 and a
BIGINT UNSIGNED to 18446744073709551615.
However, BIGINT values have 64 bits, so note that
if you were to insert one million rows per second, it would
still take nearly three hundred thousand years before
BIGINT reached its upper bound. With all
other integer type columns, a duplicate-key error results.
This is similar to how MyISAM works,
because it is mostly general MySQL behavior and not about any
storage engine in particular.
DELETE FROM tbl_name does not
regenerate the table but instead deletes all rows, one by one.
Under some conditions, TRUNCATE
tbl_name for an
InnoDB table is mapped to DELETE
FROM tbl_name and doesn't reset the
AUTO_INCREMENT counter. See
Section 13.2.9, “TRUNCATE Syntax”.
In MySQL 5.1, the MySQL LOCK
TABLES operation acquires two locks on each table if
innodb_table_locks=1 (the default). In
addition to a table lock on the MySQL layer, it also acquires
an InnoDB table lock. Older versions of
MySQL did not acquire
InnoDB table locks;
the old behavior can be selected by setting
innodb_table_locks=0. If no
InnoDB table lock is acquired,
LOCK TABLES completes even if some records
of the tables are being locked by other transactions.
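A sketch of the two behaviors (the table name is illustrative):

```sql
-- Default: LOCK TABLES also takes an InnoDB table lock, so it
-- waits until other transactions release their locks on t.
SET innodb_table_locks = 1;
LOCK TABLES t WRITE;
UNLOCK TABLES;

-- Old behavior: only the MySQL-layer lock is taken; LOCK TABLES
-- completes even if other transactions hold record locks in t.
SET innodb_table_locks = 0;
LOCK TABLES t WRITE;
UNLOCK TABLES;
```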
All InnoDB locks held by a transaction are
released when the transaction is committed or aborted. Thus,
it does not make much sense to invoke
LOCK TABLES on InnoDB
tables in AUTOCOMMIT=1 mode, because the
acquired InnoDB table locks would be
released immediately.
Sometimes it would be useful to lock further tables in the
course of a transaction. Unfortunately, LOCK
TABLES in MySQL performs an implicit
COMMIT and UNLOCK
TABLES. An InnoDB variant of
LOCK TABLES has been planned that can be
executed in the middle of a transaction.
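In the meantime, row-level locks on additional tables can still be
acquired mid-transaction with locking reads, without using
LOCK TABLES (a sketch; the table name is illustrative):

```sql
START TRANSACTION;
-- Lock the matching rows in another table mid-transaction;
-- the row locks are held until COMMIT or ROLLBACK.
SELECT * FROM accounts WHERE id = 7 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 7;
COMMIT;
```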
The LOAD TABLE FROM MASTER statement for
setting up replication slave servers does not yet work for
InnoDB tables. A workaround is to alter the
table to MyISAM on the master, then do the
load, and after that alter the master table back to
InnoDB. Do not do this if the tables use
InnoDB-specific features such as foreign
keys.
The default database page size in InnoDB is
16KB. By recompiling the code, you can set it to values
ranging from 8KB to 64KB. You must update the values of
UNIV_PAGE_SIZE and
UNIV_PAGE_SIZE_SHIFT in the
univ.i source file.
Currently, triggers are not activated by cascaded foreign key
actions.