Count null_bits separately from field offsets and adjust them in case of primary key parts.
(Previously a CREATE TABLE with many nullable fields that were part of a primary key caused MySQL to miscount the number of bytes needed to store the null bits.)
This is a more complete bug fix for #6236
if we fall back to mysql_alter_table() (for InnoDB), don't do binlogging in mysql_alter_table(), as mysql_admin_table()
is not supposed to do any binlogging (it is done by the caller).
The CREATE DATABASE statement used the current database instead of the
database being created when checking conditions for replication.
CREATE/DROP/ALTER DATABASE statements are now replicated based on
the manipulated database.
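As a hedged illustration (the database names and the --binlog-do-db filter below are hypothetical), the statement is now checked against 'db2', the database being created, rather than against the current database 'db1':

    USE db1;
    CREATE DATABASE db2;  -- with --binlog-do-db=db2 this is now replicated;
                          -- previously the decision was based on 'db1'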
'CREATE TABLE ... SELECT' was written to the binlog even if the
insertion of new records partially failed. It got logged because of the
logic to log a partially-failed 'INSERT ... SELECT' (which can't be rolled back
in non-transactional tables), but 'CREATE TABLE ... SELECT' is always rolled
back on failure, even for non-transactional tables. (Bug #6682)
(Original fix reimplemented after review by Serg and Guilhem.)
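A minimal sketch of the scenario, with hypothetical table names: if the insertion fails partway, the new table is rolled back (dropped), so with this fix nothing is written to the binlog.

    CREATE TABLE t2 (a INT PRIMARY KEY) SELECT a FROM t1;
    -- if t1 holds duplicate values of 'a', the insert fails partway,
    -- t2 is dropped, and the statement is no longer binlogged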
Added auto-correction of the field length for enum/set columns in ALTER TABLE.
This is because of a bug in previous MySQL 4.1 versions where the length for enum/set was set incorrectly after ALTER TABLE.
- add_field_to_list() now uses List<String>
instead of TYPELIB, to be able to distinguish
string literals 'aaa' from hex literals 0xaabbcc
(see the example after this list).
- move some code from add_field_to_list(), where
we don't know the column charset yet, to
mysql_prepare_table(), where we do.
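A hedged example of why the distinction matters (names hypothetical): both kinds of literal can appear in an ENUM/SET value list, and the hex literal can only be interpreted once the column character set is known in mysql_prepare_table().

    CREATE TABLE t1 (
      c ENUM('aaa', 0xAABBCC) CHARACTER SET latin1
    );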
When we write a transaction to the binlog, we log BEGIN/COMMIT with a zero error code.
Example: all statements of the transaction succeeded, then the connection was lost, causing an
implicit rollback: we don't want ER_NET* errors to be logged in the BEGIN/ROLLBACK events
while the statement events have 0. If there really was a serious error code, it's already in the
statement events.
The idea of the fix is that the administrative statements
OPTIMIZE TABLE, REPAIR TABLE and ANALYZE TABLE should not
generate binlog errors if there are no errors on the master.
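For illustration (table name hypothetical), these are the statements affected; when they succeed on the master, the corresponding binlog events now carry a zero error code, so the slave does not stop on a spurious error:

    OPTIMIZE TABLE t1;
    REPAIR TABLE t1;
    ANALYZE TABLE t1;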
Added a fallback to a normal repair() if repair_by_sort() failed.
This was not done with ENABLE KEYS and OPTIMIZE TABLE.
Fixed error code handling in mysql_alter_table().
(Bug #4315: GROUP_CONCAT with ORDER BY returns strange results for TEXT fields
Bug #5564: Strange behaviour with group_concat and distinct
Bug #5970: group_concat doesn't print warnings)
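A hedged example of the kind of query these reports cover (table and column names hypothetical), combining DISTINCT, ORDER BY and a TEXT column in one GROUP_CONCAT call:

    SELECT grp, GROUP_CONCAT(DISTINCT txt ORDER BY txt)
    FROM t1 GROUP BY grp;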
to auto_increment in 4.1".
Now we enforce NO_AUTO_VALUE_ON_ZERO mode during ALTER TABLE only
if we are converting one auto_increment column to another auto_increment
column (this also includes the most common case, when we don't do anything
with such a column).
Also, when we convert some column to a TIMESTAMP NOT NULL column with
ALTER TABLE, we now convert NULL values to the current timestamp (as we do
in INSERT). One can still get the old behavior by setting the TIMESTAMP
system variable to 0.
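A minimal sketch of the new conversion behaviour, with hypothetical names:

    CREATE TABLE t1 (d DATETIME);
    INSERT INTO t1 VALUES (NULL);
    ALTER TABLE t1 MODIFY d TIMESTAMP NOT NULL;
    -- the existing NULL is now converted to the current timestamp,
    -- just as it would be on INSERT into a TIMESTAMP NOT NULL column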
Unlike other column types, TIMESTAMP is NOT NULL by default, so in order to have
a TIMESTAMP column holding NULL values you have to specify NULL as
one of its attributes (this is needed for backward compatibility).
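For example (hypothetical table):

    CREATE TABLE t1 (
      ts1 TIMESTAMP,        -- NOT NULL by default
      ts2 TIMESTAMP NULL    -- must be declared NULL explicitly to hold NULL values
    );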
Main changes:
Replaced the TABLE::timestamp_default_now/on_update_now members with
the TABLE::timestamp_auto_set_type flag, which is used everywhere
to determine whether we should auto-set the value of a TIMESTAMP field
during this operation or not. We also now use Field_timestamp::set_time()
instead of handler::update_timestamp() in handlers.
BUG#4335 - one name can be handler open'ed many times.
Reworked the HANDLER functions and interface.
Using a HASH to store information on open tables that
survives FLUSH TABLE.
HANDLER table alias names must now be unique, though it
is allowed in 4.0 to qualify them with the database name
of the base table.
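A hedged sketch of the new rule (table and alias names hypothetical): opening the same base table twice now requires two distinct aliases, and the aliases are what later HANDLER statements refer to.

    HANDLER t1 OPEN AS h1;
    HANDLER t1 OPEN AS h2;   -- a second open of t1 needs its own alias
    HANDLER h1 READ FIRST;
    HANDLER h1 CLOSE;
    HANDLER h2 CLOSE;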
Changed the semantics of open_count so that it is decremented
at every unlock (if it was incremented due to data changes).
So a non-zero value after a lock indicates a crash.
The table will then be repaired.
added tests to alter table for "large" alter tables and truncates in ndbcluster
added debug printout in restart() in ndbcluster
added flag THD::transaction.on to enable/disable transaction
Fix for BUG#4971 "CREATE TABLE ... TYPE=HEAP SELECT ... stops slave (wrong DELETE in binlog)":
replacing the no_log argument of mysql_create_table() by some safer method
(temporarily setting OPTION_BIN_LOG to 0) which guarantees that even the automatic
DELETE FROM heap_table does not get into the binlog when a not-yet-existing HEAP table
is opened by mysql_create_table().
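A hedged illustration (table names hypothetical): the automatic DELETE issued while the freshly created, not-yet-populated HEAP table is opened stays out of the binlog, so the slave no longer sees a DELETE for a table it has not created yet.

    CREATE TABLE h1 TYPE=HEAP SELECT * FROM t1;
    -- the implicit "DELETE FROM h1" generated while opening h1
    -- is no longer written to the binlog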
Bug #4810 "deadlock with KILL when the victim was in a wait state"
and BUG#4827 "KILL while START SLAVE may lead to replication slave crash".
(I included the mutex unlock into exit_cond() for future safety.)
BUG#4506 "mysqlbinlog --position --read-from-remote-server has wrong "# at" lines",
BUG#4553 "Multi-table DROP TABLE replicates improperly for nonexistent table" with a test file.
It was not possible to add a test for BUG#4506 as in the test suite we must use --short-form
which does not display the "# at" lines.
more logical table/index_flags
return HA_ERR_WRONG_COMMAND instead of abstract methods where appropriate
max_keys and other limits renamed to max_supported_keys/etc
max_keys/etc are now wrappers to max_supported_keys/etc
ha_index_init/ha_rnd_init/ha_index_end/ha_rnd_end are now wrappers to real {index,rnd}_{init,end} to enforce strict pairing
binlog even if they changed nothing, and a test for this.
This is useful when users use these commands to clean up their master and slave by issuing
one command on master (assume master and slave have slightly different data for some
reason and you want to clean up both).
Note that I have not changed multi-table DELETE and multi-table UPDATE because their
error-reporting mechanism is more complicated.
which occurred because we were not lowering the case of file names
for temporary tables, although the handler assumes so if
lower_case_table_names==2. Now we are lowering the case for them.
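For example (hypothetical table name), with lower_case_table_names=2:

    CREATE TEMPORARY TABLE MyTemp (i INT);
    -- the underlying files for this temporary table are now
    -- created with a lowercased name, as the handler expects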