Merge of fixes from 5.0 -> 5.1
Moved restoration of concurrent_insert's original value to the end of the 5.1 tests
Re-recorded .result file to account for changes to test file.
Moved fix for this bug to 5.0 as other mysqldump bugs seem tied to concurrent_insert being on
Setting concurrent_insert off during this test as INSERTs weren't being
completely processed before the calls to mysqldump, resulting in failing tests.
Altered the .test file to turn concurrent_insert off during the test and to
restore it, once the test completes, to the value it had at the start.
Re-recorded .result file to account for changes to variables in the test.
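For reference, the save/set/restore pattern described above looks roughly
like this in the .test file (the name of the saved-value variable is
hypothetical):

  # remember the original setting so it can be restored afterwards
  SET @old_concurrent_insert= @@global.concurrent_insert;
  SET @@global.concurrent_insert= 0;
  # ... the INSERTs and the calls to mysqldump run here ...
  # restore whatever the value was at the start of the test
  SET @@global.concurrent_insert= @old_concurrent_insert;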
The problem here is that symbols cannot be loaded, because the symbol
path is not set and the default path does not include the directory
where the PDB file is located.
The problem is _not_ reproducible on the machine where mysqld.exe is
built - if the PDB is not found in the symbol path, dbghelp falls back
to the fully qualified PDB path given in the executable header, and on
the build host this succeeds.
The solution is to calculate the symbol path and pass it to the
SymInitialize() call.
Problem: with @@sql_mode=pad_char_to_full_length
a CHAR column returned additional garbage
after the trailing space characters due to
an incorrect my_charpos() call.
Fix: call my_charpos() with the correct arguments.
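A minimal illustration of the symptom, assuming a multi-byte character set
(table and column names are made up):

  CREATE TABLE t1 (a CHAR(10)) CHARACTER SET utf8;
  INSERT INTO t1 VALUES ('abc');
  SET @@sql_mode= 'PAD_CHAR_TO_FULL_LENGTH';
  # the padded value must be 'abc' followed by spaces only, no trailing garbage
  SELECT HEX(a) FROM t1;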
If [NOT] PRESERVE was not given, the parser always defaulted to NOT
PRESERVE, making it impossible for the "not given = no change"
rule to work in ALTER EVENT. Leaving out the PRESERVE clause now
defaults to NOT PRESERVE on CREATE, and to "no change" in ALTER.
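In SQL terms the intended behaviour is (event name and schedule are
arbitrary examples):

  # no ON COMPLETION clause on CREATE: defaults to NOT PRESERVE
  CREATE EVENT e1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY
  DO SELECT 1;
  # explicit change
  ALTER EVENT e1 ON COMPLETION PRESERVE;
  # no ON COMPLETION clause on ALTER: the setting is left unchanged
  ALTER EVENT e1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 2 DAY;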
mysqldump creates stand-in tables before dumping the actual view.
Those tables were of the default storage engine type; if the view had
more columns than that engine supports (a pathological case, arguably),
loading the dump would fail. We now make the temporary stand-ins
MyISAM tables to prevent this.
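A rough sketch of the dump layout this refers to (object names are
hypothetical):

  # stand-in table emitted before the real view definition; forcing MyISAM
  # keeps it loadable even when the view has more columns than the default
  # engine supports
  CREATE TABLE `v1` (`c1` int, `c2` int) ENGINE=MyISAM;
  # later in the dump the stand-in is dropped and the real view is created
  DROP TABLE IF EXISTS `v1`;
  CREATE VIEW `v1` AS SELECT `t1`.`c1` AS `c1`, `t1`.`c2` AS `c2` FROM `t1`;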
If a delayed insert thread was aborted by a concurrent 'truncate table'
statement, the diagnostic area would fail with an assert in a debug build
because no actual error message was pushed onto the stack despite the
thread being killed.
This patch adds an error message to the stack.
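A sketch of the scenario (table name hypothetical; two connections needed):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  # connection 1: the row is handled by a delayed insert thread
  INSERT DELAYED INTO t1 VALUES (1);
  # connection 2, concurrently: aborts the delayed insert thread, which must
  # now report a real error instead of tripping the debug assert
  TRUNCATE TABLE t1;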
statement/stored procedure
View privileges are properly checked after the fix for bug no. 36086,
so the method TABLE_LIST::get_db_name() must be used instead of the
field TABLE_LIST::db, which is only valid for tables.
The bug appears when accessing views in prepared statements.
SUPER is not required to change binlog format for session
A user without SUPER privileges can change the value of the
session variable BINLOG_FORMAT, causing problems for a DBA.
This changeset requires a user to have SUPER privileges to
change the value of the session variable BINLOG_FORMAT as well,
not only the global variable BINLOG_FORMAT.
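With this change both of the following require the SUPER privilege:

  SET GLOBAL binlog_format = ROW;    # already required SUPER
  SET SESSION binlog_format = ROW;   # previously allowed for any user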
The size of Innodb_buffer_pool_pages differs by one byte under row-based
versus statement-based logging, so neuter the last position of the
stringified decimal representation. Innobase says the size isn't very
important in any case.
Also, split the "mixed" format out into its own file, as mtr seems to
dislike having only stm and row but not mix.
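One possible way to mask that digit in the test; the actual pattern and
status variables used may differ:

  # make row- and statement-based runs produce identical output by
  # replacing the final digit of the reported value
  --replace_regex /[0-9]$/#/
  SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';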
The problem was a mutex added in bug#27405 to solve a problem with
auto_increment in partitioned InnoDB tables
(in ha_partition::write_row, around the per-partition file->ha_write_row calls).
The solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.
Backport of bug#33479 from 6.0:
Bug#33479: auto_increment failures in partitioning
Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, multi-row INSERTs
not handled properly, etc.).
Changed the auto_increment handling for partitioning:
added an ha_data variable in TABLE_SHARE for storage-engine-specific data,
such as the auto_increment value handling in partitioning (also see WL 4305),
and use ha_data->mutex to lock around read + update.
The idea is this:
Store the table's reserved auto_increment value in the TABLE_SHARE
and use a mutex to lock it, read and update it, and unlock it
in one block. All partitions are accessed only when the value
is not yet initialized.
Also allow reservations of ranges; if no one has made a reservation
afterwards, lower the reservation to what was actually used after
the statement is done (via release_auto_increment from WL 3146).
The lock is kept from the first reservation under statement-based
replication with a multi-row INSERT statement where the number of
candidate rows to insert is not known in advance (such as INSERT ... SELECT
or LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)).
This should also lead to better concurrency (no need for mutex
protection around write_row in all cases)
and works with any local storage engine.
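The kinds of statements that exercise these code paths look like this
(table and file names are made up):

  CREATE TABLE t2 (b INT);
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT)
  ENGINE=MyISAM
  PARTITION BY HASH (a) PARTITIONS 4;
  # number of candidate rows known in advance
  INSERT INTO t1 (b) VALUES (1), (2), (3);
  # number of candidate rows not known in advance, so under statement-based
  # replication the lock is kept from the first reservation
  INSERT INTO t1 (b) SELECT b FROM t2;
  LOAD DATA INFILE 'rows.txt' INTO TABLE t1 (b);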
SET col
When reporting a duplicate key error the server was making incorrect
assumptions about the state of the value string included in the error
message.
Fixed by accessing the data in this string in a "safe" way (without
relying on it having a terminating 0).
Detected by code analysis; a similar problem in reporting foreign key
duplicate errors was fixed as well.
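For example, the kind of message whose construction is being fixed
(names are hypothetical):

  CREATE TABLE t1 (c SET('a','b','c'), UNIQUE KEY (c));
  INSERT INTO t1 VALUES ('a,b');
  # fails with: ERROR 1062 (23000): Duplicate entry 'a,b' for key ...
  # the quoted value comes from the string the server handled unsafely
  INSERT INTO t1 VALUES ('a,b');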
Added a rule that uses gcc to generate preprocessor
output (gcc -E) that can be compared to an already
generated output using the diff utility.
icheck has been removed and replaced by gcc -E
because icheck does not support C++.
The check_table_access function initializes per-table grant info and
performs the access rights check. It wasn't called for the SHOW STATUS
statement and thus left the grant info uninitialized. In some cases this
led to a server crash. In other cases it allowed a user to check for the
presence/absence of arbitrary values in any table.
Now the check_table_access function is called prior to the statement
processing.
The assertion indicates that some data was left in the transaction
cache when the server was shut down, which means that a previous
statement did not commit or roll back correctly.
What happened was that a bug in the rollback of a transactional
table caused the transaction cache to be emptied, but not reset.
The error can be triggered by a failing UPDATE or INSERT on a
transactional table, which causes an implicit rollback.
Fixed by always flushing the pending event to reset the state
properly.
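One way the failing-statement path can be hit, assuming binary logging is
enabled (names hypothetical):

  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  # the second row hits a duplicate key, the statement fails and is rolled
  # back implicitly; the pending event must be flushed so the transaction
  # cache is properly reset
  INSERT INTO t1 VALUES (2), (1);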