The assertion was caused by missing initialization of unsigned_flag
for the Item_func_set_user_var object, which led to an incorrect
calculation of the decimal field size.
The fix is to add initialization of unsigned_flag.
When executing row-ordered-retrieval index merge,
the handler was cloned, but it used the wrong
memory root: instead of allocating memory
on the thread/query's mem_root, it used the table's
mem_root, resulting in memory that stayed attached to
the table object and was not freed until the table was
closed.
The solution was to ensure that memory used during cloning
of a handler is allocated from the correct memory root.
This was implemented by changing handler::clone() to also
take a name argument, so that it can be used with partitioning,
and by making ha_partition::clone() allocate only the
ha_partition's own ref, then call clone() on each of the original
partition handlers and set them as the cloned partitions.
Fix of .bzrignore on Windows with VS 2010
ARE NOT BEING HONORED
max_allowed_packet works in conjunction with net_buffer_length.
max_allowed_packet is an upper bound of net_buffer_length,
so it makes no sense to set max_allowed_packet below net_buffer_length.
Added a warning (using ER_UNKNOWN_ERROR and a specific message)
when this is done (in the log at startup and when setting either
the max_allowed_packet or the net_buffer_length variable).
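For illustration, a hypothetical session of the kind that now produces the
warning (exact message text may differ):
SET GLOBAL net_buffer_length = 1048576;
SET GLOBAL max_allowed_packet = 16384; -- below net_buffer_length: warning issued
SHOW WARNINGS;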
Added a test case.
Fixed several tests that broke the above rule.
(aka BUG#11766883)
- fix review comments
- Rewrite last usage of handler::get_tablespace_name to use
table->s->tablespace directly
- Remove (revert) the addition of a default implementation for
handler::get_tablespace_name
- Add comments describing the new TABLE_SHARE members default_storage_media
and tablespace
- Fix usage of incorrect mask for column_format bits, i.e. COLUMN_FORMAT_MASK
The slave was not able to find the correct row in the innodb
table, because the row fetched from the innodb table would not
match the before image. This happened because the (don't care)
bytes in the NULLed fields would change once the row was stored
in the storage engine (from zero to the default value). This
would make the bulk memory comparison (using memcmp) fail.
We fix this by taking a preventive measure and avoiding memcmp
for tables that contain nullable fields, thereby protecting
the slave search routine from engines that return arbitrary
values for don't care bytes (in the nulled fields). Instead, the
slave thread will only check null_bits and those fields that are
not set to NULL when comparing the before image against the
storage engine row.
prepared statements with cursor protocol).
The problem was a bug in Materialized-cursor implementation.
Materialized_cursor::open() called send_result_metadata()
with items pointing to already closed table.
The fix is to send metadata when the table is still open.
NOTE: this is a "partial" fix: metadata are different with
and without --cursor-protocol, but that's a different large
problem, one indication of which is reported as Bug 24176.
Issue:
------
Due to prefix match, a database like 'k' matched 'ka', and the events of 'ka' were displayed for SHOW EVENTS on 'k'.
Resolution:
-----------
The scan that lists the events in a schema now requires an exact match on the database (schema) name instead of just a prefix match.
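A minimal sketch of the old misbehaviour (database and event names hypothetical):
CREATE DATABASE k;
CREATE DATABASE ka;
CREATE EVENT ka.e1 ON SCHEDULE EVERY 1 DAY DO SELECT 1;
SHOW EVENTS FROM k; -- before the fix this also listed ka.e1; now it returns an empty set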
configuration wizard to fail
Made the fields mysql.user.plugin and mysql.user.authentication_string
nullable to conform with some older clients doing inserts instead of
using the commands.
Regression from bug#11766232.
m_last_part could be set beyond the last partition.
Fixed by only setting it if within the limit.
Also added check in print_error.
Currently, rpl_semi_sync is failing in PB due to the warning message:
"Slave SQL: slave SQL thread is being stopped in the middle of "
"applying of a group having updated a non-transaction table; "
"waiting for the group completion ..."
The problem started happening after the fix for BUG#11762407, which
automatically suppressed some warning messages.
To fix the current issue, we suppress the aforementioned warning message
and exploit the opportunity to make the sentence clearer.
Bug#28928: UNIX_TIMESTAMP() should be considered unary monotonic by partition pruning
Made UNIX_TIMESTAMP MONOTONIC_INCREASING when it has a TIMESTAMP argument (only).
The problem was that server didn't check resulting size of prepared
statement argument which was set using mysql_send_long_data() API.
By calling mysql_send_long_data() several times it was possible
to create an overly big string and thus force the server to allocate
memory for it. There was no way to limit this allocation.
The solution is to add a check of the result string's size against the
value of the max_long_data_size start-up parameter. When the intermediate
string exceeds the max_long_data_size value, an appropriate error message
is emitted.
We can't use existing max_allowed_packet parameter for this purpose
since its value is limited by 1GB and therefore using it as a limit
for data set through mysql_send_long_data() API would have been an
incompatible change. Newly introduced max_long_data_size parameter
gets value from max_allowed_packet parameter unless its value is
specified explicitly. This new parameter is marked as deprecated
and will be eventually replaced by max_allowed_packet parameter.
Value of max_long_data_size parameter can be set only at server
startup.
FAILED DROP DATABASE CAN BREAK STATEMENT BASED REPLICATION
The first phase of DROP DATABASE is to delete the tables in the database.
If deletion of one or more of the tables fails (e.g. due to a FOREIGN KEY
constraint), DROP DATABASE will be aborted. However, some tables could
still have been deleted. The problem was that nothing would be written
to the binary log in this case, so any slaves would not delete these tables.
Therefore the master and the slaves would get out of sync.
This patch fixes the problem by making sure that DROP TABLE is written
to the binary log for the tables that were in fact deleted by the failed
DROP DATABASE statement.
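A sketch of the scenario (schema and table names hypothetical):
CREATE DATABASE db1;
CREATE DATABASE db2;
CREATE TABLE db1.t1 (a INT) ENGINE=InnoDB;
CREATE TABLE db1.parent (a INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE db2.child (a INT, FOREIGN KEY (a) REFERENCES db1.parent (a)) ENGINE=InnoDB;
DROP DATABASE db1; -- fails on db1.parent due to the foreign key, but db1.t1 may
                   -- already be gone; its removal is now binlogged as DROP TABLE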
Test case added to binlog.binlog_database.test.
FAILS IN SET_FIELD_ITERATOR
(Former 59299)
When a PROCEDURE does a natural join, resolving of which columns are
used in the join is done only once; consecutive CALLs to the procedure
will reuse this information:
CREATE PROCEDURE proc() SELECT * FROM t1 NATURAL JOIN v1;
CALL proc(); <- natural join columns resolved here
CALL proc(); <- reuse resolved NJ columns from first CALL
The second CALL knows that it can reuse the resolved NJ columns because
the first CALL sets st_select_lex::first_natural_join_processing=false.
The problem in this bug was that the table the view v1 depends on
changed between CREATE PROCEDURE and the first CALL:
CREATE PROCEDURE...
ALTER TABLE t2 CHANGE COLUMN a b CHAR;
CALL proc(); <- error when resolving natural join columns
CALL proc(); <- tries to reuse from first CALL => crash
The fix for this bug is to set first_natural_join_processing= FALSE iff
resolving of the natural join columns was successful.
INSTALLATION
When starting mysqld as an MS Windows NT service, it crashed
with "Error 1067: The process terminated unexpectedly".
The problem is that thread local variables are not allocated
and initialized properly when started as a service. When the
server is started as a regular executable, the problem does
not occur.
Analysis showed that this is a regression after the patch for
Bug#11765237/Bug#11763065. Before, the thread local storage
was initialized by the call chain:
win_main->my_basic_init->my_thread_basic_global_init->
my_thread_init
When the my_init() structure was changed, this initialization
was moved from win_main to mysqld_main. When started as
a service win_main is run in a separate thread, which does
not have mysqld_main in its call path, so my_thread_init
is never called for this thread.
Added a call to my_thread_init / my_thread_end in the service
handler function, which solves the problem.
pre-locking list caused by triggers).
The thing is that CREATE TRIGGER / DROP TRIGGER may actually
change the pre-locking list of (some) stored routines.
The SP-cache does not detect such changes. Thus, if an sp_head instance
is cached in the SP-cache, subsequent executions of the cached
sp_head will use an inaccurate pre-locking list.
The patch is to invalidate SP-cache on CREATE TRIGGER / DROP TRIGGER.
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE' to return
a different error message if the engine does not support REPAIR. Instead of
"Table upgrade required. Please do "REPAIR TABLE ..." it will report
"Table rebuild required. Please do "ALTER TABLE ... FORCE ..."
Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (See Bug#11746162)
Test case added to mysqlcheck.test
FLUSH TABLES under FLUSH TABLES <list> WITH READ LOCK leads
to assert failure.
This assert was triggered if a statement tried to upgrade a metadata
lock with an active FLUSH TABLE <list> WITH READ LOCK. The assert
checks that the connection already holds a global intention exclusive
metadata lock. However, FLUSH TABLE <list> WITH READ LOCK does not
acquire this lock in order to be compatible with FLUSH TABLES WITH
READ LOCK. Therefore any metadata lock upgrade caused the assert to
be triggered.
This patch fixes the problem by preventing metadata lock upgrade
if the connection has an active FLUSH TABLE <list> WITH READ LOCK.
ER_TABLE_NOT_LOCKED_FOR_WRITE will instead be reported to the client.
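A sketch of the statement sequence now producing this error:
FLUSH TABLES t1 WITH READ LOCK;
FLUSH TABLES; -- needs a metadata lock upgrade; previously hit the assert,
              -- now fails with ER_TABLE_NOT_LOCKED_FOR_WRITE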
Test case added to flush.test.
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_utf8.result
@ mysql-test/t/ctype_latin1.test
@ mysql-test/t/ctype_utf8.test
Adding tests
@ sql/mysqld.h
@ sql/item.cc
@ sql/sql_parse.cc
@ sql/sql_view.cc
Refactoring (thanks to Guilhem for the idea):
Item_string::print() was hard to understand because of the different
QT_ constants: in "query_type==QT_x", QT_x is explicitly included
but the other two QT_ constants are implicitly excluded. The combinations
with '||' and '&&' made this even harder.
- logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
QT_ORDINARY: no change,
QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
(QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
- Item_string::print() is rewritten using those flags
Bugfix itself:
When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
we print string literals as follows:
- display introducers if they were in the original query
- print ASCII characters as is
- print non-ASCII characters using hex-escape
Note: as "EXPLAIN" output is only for human readability purposes
and does not need to be a pasrable SQL, so using hex-escape is Ok.
ErrConvString class perfectly suites for hex escaping purposes.
from 5.1 to 5.5
(Former 59405)
In this bug, args[0] in an Item_func_find_in_set stored an
Item_func_weekday that was constant. In
Item_func_find_in_set::fix_length_and_dec(), args[0]->val_str()
was called. Later, when Item_func_find_in_set::val_int() was
called, args[0]->null_value was checked. However, the
Item_func_weekday in args[0] had now been replaced with an
Item_cache. No val_*() calls had been made to this Item_cache,
thus null_value was incorrectly 'true', resulting in missing
rows in the result set.
enum_value gets a value in fix_length_and_dec() iff args[0]
is both constant and non-null. It is therefore unnecessary
to check the null_value of args[0] in val_int().
An alternative fix would be to call args[0]->val_int() inside
Item_func_find_in_set::val_int(). This would ensure
args[0]->null_value was set correctly (always false in this case),
but that would have to be done for every record this const value
is checked against.
- Add new "format section" in extra data segment with additional table and
column properties. This was originally introduced in 5.1.20 based MySQL Cluster
- Remove hardcoded STORAGE DISK for table and instead
output the real storage format used. Keep both TABLESPACE
and STORAGE inside same version guard.
- Implement default version of handler::get_tablespace_name() since tablespace
is now available in share and it's unnecessary for each handler to implement.
(the function could actually be removed totally now).
- Add test for combinations of TABLESPACE and STORAGE with CREATE TABLE
and ALTER TABLE
- Add test to show that 5.5 now can read a .frm file created by MySQL Cluster
7.0.22. Although it does not yet show the column level attributes, they are read.
Part 2. Function QUOTE() was not multi-byte safe.
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_strfunc.cc
Fixing Item_func_quote::val_str to be multi-byte safe.
@ sql/item_strfunc.h
Multiplying the size needed for quote characters by mbmaxlen
This was a buffer overrun in do_div_mod(), overwriting the internal buffer
of auto variable 'tmp' in Item_func_int_div::val_int.
Result on Windows: 'this' was set to zero, and a crash followed.
It ran fine on other platforms (no valgrind warnings),
but this is of course undefined behaviour on any platform.
Problem: wrong character set pointer was passed to my_strtoll10_mb2,
which led to DBUG_ASSERT failure in some cases.
@ mysql-test/r/func_encrypt_ucs2.result
@ mysql-test/t/func_encrypt_ucs2.test
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_func.cc
"cs" initialization was wrong (res does not necessarily point to &str_value)
@ sql/item_strfunc.cc
Item_func_des_encrypt::val_str() and Item_func_des_decrypt::val_str()
did not set character set for tmp_value (the returned value),
so the old value, which was previously copied from args[1]->val_str(),
was incorrectly returned with tmp_value.
Bug#11765108 - Bug#58036: CLIENT UTF32, UTF16, UCS2 SHOULD BE DISALLOWED, THEY CRASH SERVER
Fixing wrong usage of DBUG_ASSERT.
In non-debug version thd_init_client_charset
was not executed at all.
Before this fix, all the performance schema instrumentation for both the binary log
and the relay log would use the following instruments:
- wait/io/file/sql/binlog
- wait/io/file/sql/binlog_index
- wait/synch/mutex/sql/MYSQL_BIN_LOG::LOCK_index
- wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond
This instrumentation is too general and can be more specific.
With this fix, the binlog instrumentation is identical,
and the relay log instrumentation is changed to:
- wait/io/file/sql/relaylog
- wait/io/file/sql/relaylog_index
- wait/synch/mutex/sql/MYSQL_RELAY_LOG::LOCK_index
- wait/synch/cond/sql/MYSQL_RELAY_LOG::update_cond
With this change, the performance instrumentation for the binary log and the relay log,
which share the same structure but have different uses, is more detailed.
This is especially important for hosts in the middle of a replication chain,
that are both masters (binlog) and slaves (relaylog).
Problem: in case of string CASE/WHEN arguments with different
character sets, Item_func_case::find_item() called the comparator
cmp_items[x] on mixed character set Items, so an 8-bit value could
be erroneously treated as a utf16/utf32 value,
which led to a crash on DBUG_ASSERT() because of the wrong value length.
This was wrong, as the string comparator expects arguments in the same
character set.
Fix: modify Item_func_case's argument list after calling
agg_arg_charsets_for_comparison() - put the Items in "agg" array
back to "args", because some of the Items in the "agg" array might
have been changed to character set converters:
- to Item_func_conv_charset for non-constant items
- to Item_string for constant items
In other words, perform the same substitution which is done in
all other string comparison and string result operations:
Replace
CASE latin1_item WHEN utf16_item THEN ... END
with
CASE CONVERT(latin1_item USING utf16) WHEN utf16_item THEN ... END
Replace
CASE utf16_item WHEN latin1_item THEN ... END
with
CASE utf16_item WHEN CONVERT(latin1_item USING utf16) THEN ... END
@ mysql-test/r/ctype_utf16.result
@ mysql-test/r/ctype_utf32.result
@ mysql-test/t/ctype_utf16.test
@ mysql-test/t/ctype_utf32.test
Adding tests
@ sql/item_cmpfunc.cc
Put "agg" back to "args".
@ sql/sql_string.cc
Backporting a fix for String::set_or_copy_aligned() from 5.6,
for better test coverage:
"SELECT _utf16 0x61" should expand the string to 0x0061 rather
than to 0x000061.
This fix was made in 5.6 under terms of "WL#4616 Implement UTF16-LE".
Bug#11763065 - 55730: KILL_SERVER() CALLS SETEVENT ON A NULL
HANDLE, SMEM_EVENT_CONNECT_REQUEST
Application Verifier is a Microsoft tool used for
detecting certain classes of programming errors.
In particular, MS Windows OS resource usage is
monitored for wrong usage (handles, thread local
storage, critical sections, ...)
In MySQL 5.5.x, an error was introduced where an
object in thread local storage was used before the
TLS and the object had been created.
The fix has been to move the mysys initialization
to an earlier stage in the boot process when built for
Windows. For non-win builds, the init already happens
early.
Some un-tangling of calls to my_init(), my_basic_init()
and my_thread_global_init() was done. There is no
longer a need to do init in steps, so the full my_init()
is called instead of my_init_basic().
In addition, Bug#11763065 was fixed. The event handle
'smem_event_connect_request' is only created if
'opt_enable_shared_memory' is set. When killing the
server, an event was flagged on the handle
unconditionally. Added a test, so it will only be
flagged if created.
This is a backport of the patch for MySQL Bug#50574.
Adding a SPATIAL INDEX on non-geometrical columns caused a
segmentation fault when the table was subsequently
inserted into.
A test was added in mysql_prepare_create_table to explicitly
check whether non-geometrical columns are used in a
spatial index, and throw an error if so.
For MySQL 5.5 and later, a new and more meaningful error
message was introduced. For 5.1, we (re-)use an existing
error code.
UPDATES THE TABLE ENTRIES (formerly 55385)
BUG#11764529: MULTI UPDATE+INNODB REPORTS ER_KEY_NOT_FOUND
IF A TABLE IS UPDATED TWICE (formerly 57373)
If multiple-table update updates a row through two aliases and
the first update physically moves the row, the second update will
fail to locate the row. This results in different errors
depending on storage engine:
* MyISAM: Got error 134 from storage engine
* InnoDB: Can't find record in 'tbl'
None of these errors accurately describe the problem.
Furthermore, since MyISAM is non-transactional, the update
executed first will be performed while the second will not.
In addition, of two identical multiple-table update statements,
one could succeed and the other fail based on whether or not
the record actually moved. This was inconsistent.
Two update operations may physically move a row:
1) Update of a column in a clustered primary key
2) Update of a column used to calculate which partition the
row belongs to
BUG#11764529 is about case 1) above, BUG#11762751 was about case 2).
The fix for these bugs is to return with an error if multiple-table
update is about to:
a) Update a table through multiple aliases, and
b) Perform an update that may physically move the row
in at least one of these aliases
This avoids:
* partial updates as described for MyISAM above,
* different error messages from different SEs for what is really
the same problem,
* inconsistent behavior where a statement fails or succeeds based on
e.g. the partitioning algorithm of the table.
A sketch of a statement that is now rejected follows below.
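For illustration (table and column names hypothetical; pk is the clustered
primary key):
CREATE TABLE t1 (pk INT PRIMARY KEY, a INT) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, 1);
UPDATE t1 AS x, t1 AS y SET x.pk = 2, y.a = 10 WHERE x.pk = y.pk;
-- the update through alias x may physically move the row, so the update
-- through alias y could fail to locate it; an error is now returned instead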
A separate fix was made for 5.1 (as 5.1 and 5.5 have seriously
diverged in the related pieces of the code).
A patch for 5.5 was approved earlier.
Problem: ucs2 was correctly disallowed in "SET NAMES" only,
while mysql_real_connect() and mysql_change_user() still allowed
to use ucs2, which made server crash.
Fix: disallow ucs2 in mysql_real_connect() and mysql_change_user().
@ sql/sql_priv.h
- changing return type for thd_init_client_charset() to bool,
to return errors to the caller
@ sql/sql_var.cc
- using new function
@ sql/sql_connect.cc
- thd_client_charset_init:
in case of unsupported client character set send error and return true;
in case of success return false
- check_connection:
Return error if character set initialization failed
@ sql/sql_parse.cc
- check the charset at the very beginning of the COM_CHANGE_USER handling code
@ tests/mysql_client_test.c
- adding tests
The loop over a subquery's references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementor failed to reset the variable after each iteration. Thus a field
that was not directly aggregated appeared to be.
Fixed by resetting the variable upon each new iteration.
Problem: ucs2 was correctly disallowed in "SET NAMES" only,
while mysql_real_connect() and mysql_change_user() still allowed
to use ucs2, which made server crash.
Fix: disallow ucs2 in mysql_real_connect() and mysql_change_user().
@ sql/set_var.cc
Using new function.
@ sql/sql_acl.cc
- Return error if character set initialization failed
- Getting rid of pointer aliasing:
Initialize user_name to NULL, to avoid double free().
@ sql/sql_connect.cc
- in case of unsupported client character set send error and return true
- in case of success return false
@ sql/sql_connect.h
- changing return type for thd_init_client_charset() to bool,
to return errors to the caller
@ sql/sql_parse.h
- introducing a new function, to reuse in all places where we need
to check client character set.
@ tests/mysql_client_test.c
Adding test
MONTHNAME(0) claims that it is about to return a NOT NULL
value, whereas it actually returns NULL.
As a result storage_engine variable (which cannot be NULL)
protection was bypassed and NULL value was accepted, causing
server crash.
Fixed MONTHNAME(0) to report valid NULL flag.
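For illustration (the SET statement sketches the kind of assignment that
could previously crash the server):
SELECT MONTHNAME(0); -- returns NULL, although the item claimed it could never be NULL
SET SESSION storage_engine = MONTHNAME(0); -- NULL could sneak past the NOT NULL protection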
Problem:
IF() did not copy collation derivation and repertoire from
an argument if the opposite argument was NULL:
IF(cond, res1, NULL)
IF(cond, NULL, res2)
only CHARSET_INFO pointer was copied.
This resulted in illegal mix of collations error.
Fix:
copy all collation parameters from the non-NULL argument:
CHARSET_INFO pointer, derivation, repertoire.
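A rough illustration of the kind of expression affected (table and column
hypothetical; the exact failing statement may differ):
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET latin1);
SELECT IF(1, a, NULL) = _utf8'x' FROM t1;
-- before the fix, the IF() result lost a's derivation/repertoire, so a
-- comparison like this could fail with an illegal mix of collations error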
This assumption in Item_cache_datetime::cache_value_int
was wrong:
- /* Assume here that the underlying item will do correct conversion.*/
- int_value= example->val_int_result();
memory reference
There are two issues present here.
1) There is a possibility that we test a byte beyond the
allocated buffer
2) We compare a byte that might never have been
initialized to see if it's 0.
The first issue is not triggered by existing code, but an
ASSERT has been added to safe-guard against introducing
new code that triggers it.
The second issue is what triggers the Valgrind warnings
reported in the bug report. A buffer is allocated in
class String to hold the value. This buffer is populated
by the character data constituting the string, but is not
zero-terminated in most cases. Testing if it is indeed
zero-terminated means that we check a byte that has never
been explicitly set, thus causing Valgrind to trigger.
Note that issue 2 is not a serious problem. The variable
is read, and if it's not zero, we will set it to zero.
There are no further consequences.
Note that this patch does not fix the underlying problems
with issue 1, as it is deemed too risky to fix at this
point (as noted in the bug report). As discussed in
the report, the c_ptr() method should probably be
replaced, but this requires a thorough analysis of the
~200 calls to the method.
Assert in Diagnostics_area::set_ok_status() for XA COMMIT
This assert was triggered if XA COMMIT was issued when an XA transaction
already had encountered an error (e.g. a deadlock) which required
the XA transaction to be rolled back.
In general, the assert is triggered if a statement tries to send OK to
the client when an error has already been reported. It was triggered
in this case because the trans_xa_commit() function first reported an
error, then rolled back the transaction and finally returned FALSE,
indicating success. Since trans_xa_commit() reported success,
mysql_execute_command() tried to report OK, triggering the assert.
This patch fixes the problem by fixing trans_xa_commit() to return TRUE
if it encounters an error that requires rollback, even if the rollback
itself is successful.
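A sketch of the statement flow that triggered the assert:
XA START 'trx';
UPDATE t1 SET a = a + 1; -- suppose this hits a deadlock, so the XA
                         -- transaction must be rolled back
XA END 'trx';
XA COMMIT 'trx' ONE PHASE; -- previously reported the error and then returned
                           -- success, hitting the assert; now it returns TRUE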
Test case added to xa.test.
"set optimizer_switch to e or d causes invalid memory writes/valgrind warnings":
due to prefix support, the argument "e" was overwritten with its full value
"engine_condition_pushdown", which caused a buffer overrun.
This was wrong usage of find_type(); other wrong usages are fixed here too.
Please start reading with the comment of typelib.c.
The crash happens because the Item_cache which is the result
holder for Item_subselect can't correctly convert
a DATETIME value from string to int representation.
The fix is to disable constant item conversion for
subselects (a partial rollback of the bug52157 fix).
privileges".
The first problem was that DROP USER didn't properly remove privileges
on stored functions from in-memory structures. So the dropped user
could have called stored functions on which he had privileges before
being dropped while his connection was still around.
Even worse, if a new user with the same name was created, he would
inherit privileges on stored functions from the dropped user.
Similar thing happened with old user name and function privileges
during RENAME USER.
This problem stemmed from the fact that the handle_grant_data() function
which handled DROP/RENAME USER didn't take any measures to update
in-memory hash with information about function privileges after
updating them on disk.
This patch solves this problem by adding code doing just that.
The second problem was that RENAME USER didn't properly update in-memory
structures describing table-level privileges and privileges on stored
procedures. As a result such privileges could have been lost after a rename
(i.e. not associated with the new name of user) and inherited by a new
user with the same name as the old name of the original user.
This problem was caused by code handling RENAME USER in
handle_grant_struct() which [sic!]:
a) tried to update the wrong (tables) hash when updating stored procedure
privileges for the new user name.
b) passed wrong arguments to the function performing the hash update and
didn't take into account the way in which such an update could have
changed the order of the hash elements.
This patch solves this problem by ensuring that a) the correct hash
is updated, b) correct arguments are used for the hash_update()
function and c) we take into account possible changes in the order
of hash elements.
Also fixed bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.
The root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had already been updated to reflect a 'skip' of the sort order.
This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.
The original 'save_quick' was then restored after the QEP had been modified,
which caused:
- An incorrect 'precomputed_group_by= TRUE' may have been set,
and not reverted, as part of the already modified QEP (Bug#59308)
- A 'select->quick' might have been created which we fail to delete (bug#59110).
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by argument 'bool no_changes') is
moved to the end of test_if_skip_sort_order(), and done after *all* 'test_if_skip'
checks have been performed - including the 'check_reverse_order:' checks.
The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
'test_if_skip_sort_order()' (save_quick) - and of
new select->quick's created during test_if_skip_sort_order():
- Before a new select->quick may be created by calling ::test_quick_select(), we
set 'select->quick= 0' to avoid that ::test_quick_select() prematurely
deletes the save_quick. (After this call we may have both a 'save_quick'
and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a
'save_quick' and a 'select->quick' have been changed to goto's to the
exit points 'skiped_sort_order:' or 'need_filesort:', where we
decide which of the QUICK_SELECTs to keep and delete the other.
handling.
The problem was that parsing of nested regular expression involved
recursive calls. Such recursion didn't take into account the amount of
available stack space, which ended up leading to stack overflow crashes.
Bug #55755 : Date STD variable signness breaks server on FreeBSD and OpenBSD
* Added a check to configure on the size of time_t
* Created a macro to check for a valid time_t that is safe to use with datetime
functions and store in TIMESTAMP columns.
* Used the macro consistently instead of the ad-hoc checks introduced by 52315
* Fixed compilation warnings on platforms where the size of time_t is smaller than
the size of a long (e.g. OpenBSD 4.8, 64-bit amd64).
Bug #52315: utc_date() crashes when system time > year 2037
* Added a correct check for the timestamp range, instead of just a variable size
check, to SET TIMESTAMP.
* Added overflow checking before converting to time_t.
* Using a correct localized error message in this case instead of the generic error.
* Added a test suite.
* fixed the checks so that they check for unsigned time_t as well. Used the checks
consistently across the source code.
* fixed the original test case to expect the new error code.
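For illustration, on an affected platform with a 32-bit time_t:
SET TIMESTAMP = 2147483648; -- one past the 32-bit range; now rejected with the
                            -- localized range error instead of breaking utc_date()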
primary_key_no == 0".
Attempt to create InnoDB table with non-nullable column of
geometry type having an unique key with length 12 on it and
with some other candidate key led to server crash due to
assertion failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in table/.FRM, before any legit
candidate keys. This resulted in assertion failure in InnoDB
engine which assumes that primary key should either be the
first key in table/.FRM or should not exist at all.
The reason behind such an incorrect sorting was a wrong
value of Create_field::key_length member for geometry field
(which was set to its pack_length == 12) which confused code
in mysql_prepare_create_table(), so it would skip marking
such key as a key with partial segments.
This patch fixes the problem by ensuring that this member
gets the same value as the Create_field::key_length member
of other blob fields (from which the geometry field class is
inherited), and as a result unique keys on geometry fields
are correctly marked as having partial segments.
Write an additional warning message to the server log,
explaining why a sort operation is aborted.
The output in mysqld.err will look something like:
110127 15:07:54 [ERROR] mysqld: Sort aborted: Out of memory (Needed 24 bytes)
110127 15:07:54 [ERROR] mysqld: Out of sort memory, consider increasing server sort buffer size
110127 15:07:54 [ERROR] mysqld: Sort aborted: Out of sort memory, consider increasing server sort buffer size
110127 15:07:54 [ERROR] mysqld: Sort aborted: Incorrect number of arguments for FUNCTION test.f1; expected 0, got 1
If --log-warnings=2 is enabled, we output information about host/user/query as well.
in combination with IS NULL'
As this bug is a duplicate of bug#49322, the patch also includes test cases
covering that bug report.
Qualifying an OUTER JOIN with the condition 'WHERE <column> IS NULL',
where <column> is declared as 'NOT NULL' causes the
'not_exists_optimize' to be enabled by the optimizer.
In evaluate_join_record() the 'not_exists_optimize' caused
'NESTED_LOOP_NO_MORE_ROWS' to be returned immediately
when a matching row was found.
However, as the 'not_exists_optimize' is derived from
'JOIN_TAB::select_cond', the usual rules for condition guards
also apply to 'not_exists_optimize'. It is therefore incorrect
to check 'not_exists_optimize' without ensuring that all guards
protecting it are 'open'.
This fix uses the fact that 'not_exists_optimize' is derived from
an 'is_null' predicate term in 'tab->select_cond'. Furthermore,
'is_null' will evaluate to 'false' for any 'non-null' rows
once all guards protecting the is_null are open.
We can use this knowledge as an implicit guard check for the
'not_exists_optimize' by moving 'if (...not_exists_optimize)'
inside the handling of 'select_cond==false'. It will then
not take effect before its guards are open.
We also add an assert which requires that a
'not_exists_optimize' always comes together with
a select_cond (containing 'is_null').
The root cause of this bug is that the optimizer tries to detect &
optimize the special case:
'<field> BETWEEN c1 AND c1' and handle this as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to the key Field, and 'value[]' to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to cheat add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves this optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields() which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
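A worked example of the wrong rewrite:
SELECT * FROM t1 WHERE 5 BETWEEN 5 AND key_col;
-- was 'optimized' to: key_col = 5
-- but the predicate means: 5 >= 5 AND 5 <= key_col, i.e. key_col >= 5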
In SBR, if a statement does not fail, it is always written to the binary
log, regardless of whether rows are changed or not. If there is a failure, a
statement is only written to the binary log if a non-transactional (e.g.
MyISAM) engine is updated.
INSERT ON DUPLICATE KEY UPDATE and INSERT IGNORE were not following the
rule above and were not written to the binary log if the engine was
InnoDB.
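A sketch of an affected statement (SBR, InnoDB table, names hypothetical):
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1);
INSERT IGNORE INTO t1 VALUES (1); -- changes no rows, but since it does not fail
                                  -- it must still be written to the binary log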
There are two calls to read_log_event() on master in mysql_binlog_send().
Each call reads 19 bytes in this test case and the error of the second
read_log_event() is reported to the slave.
The second read_log_event() starts from position 94 (75 + 19) to 113
(75 + 19 + 19). Usually, there are two events in the binary log:
. 0 - 3 - Header
. 4 - 105 - Format Descriptor Event
. 106 - 304 - Query Event
and both reads fail because operations are reading from invalid positions
as expected.
However, mysql_binlog_send() does not use the same IO_CACHE that is used to
write into binary log (i.e. mysql_bin_log.log_file) for the hot binary log.
It opens the binary log file directly by calling open_binlog() and creates a
separated IO_CACHE. So there is a possibility that after a master has flushed
the binary log file, the content has been cached by the filesystem and has
not yet reached the disk file. If this happens, then a slave will only see part
of the file, and thus the second read_log_event() will report an event truncated
error.
To fix the problem, if the first read_log_event() has failed, we ensure that
the second one will try to read from the same position.
that implement add_index
The problem was that ALTER TABLE blocked reads on an InnoDB table
while adding a secondary index, even if this was not needed. It is
only needed for the final step where the .frm file is updated.
The reason queries were blocked, was that ALTER TABLE upgraded the
metadata lock from MDL_SHARED_NO_WRITE (which blocks writes) to
MDL_EXCLUSIVE (which blocks all accesses) before index creation.
The way the server handles index creation, is that storage engines
publish their capabilities to the server and the server determines
which of the following three ways this can be handled: 1) build a
new version of the table; 2) change the existing table but with
exclusive metadata lock; 3) change the existing table but without
metadata lock upgrade.
For InnoDB and secondary index creation, option 3) should have been
selected. However this failed for two reasons. First, InnoDB did
not publish this capability properly.
Second, the ALTER TABLE code failed to make proper use of the
information supplied by the storage engine. A variable
need_lock_for_indexes was set accordingly, but was not used later.
This patch fixes this problem by only doing metadata lock upgrade
before index creation/deletion if this variable has been set.
This patch also changes some of the related terminology used
in the code. Specifically the use of "fast" and "online" with
respect to ALTER TABLE. "Fast" was used to indicate that an
ALTER TABLE operation could be done without involving a
temporary table. "Fast" has been renamed "in-place" to more
accurately describe the behavior.
"Online" meant that the operation could be done without taking
a table lock. However, in the current implementation writes
are always prohibited during ALTER TABLE and an exclusive
metadata lock is held while updating the .frm, so ALTER TABLE
is not completely online. This patch replaces "online" with
"in-place", with additional comments indicating if concurrent
reads are allowed during index creation/deletion or not.
An important part of this update of terminology is renaming
of the handler flags used by handlers to indicate if index
creation/deletion can be done in-place and if concurrent reads
are allowed. For example, the HA_ONLINE_ADD_INDEX_NO_WRITES
flag has been renamed to HA_INPLACE_ADD_INDEX_NO_READ_WRITE,
while HA_ONLINE_ADD_INDEX is now HA_INPLACE_ADD_INDEX_NO_WRITE.
Note that this is a rename to clarify current behavior, the
flag values have not changed and no flags have been removed or
added.
Test case added to innodb_mysql_sync.test.
Regression introduced in bug#52455. The problem was that the
fixed function did not set the last used partition variable, resulting
in the wrong partition being used when storing the position of the newly
retrieved row.
Fixed by setting the last used partition in ha_partition::index_read_idx_map.
ZERO
When dates are represented internally as strings, i.e. when a string constant
is compared to a date value, both values are converted to long integers,
ostensibly for fast comparisons. DATE typed integer values are converted to
DATETIME by multiplying by 1,000,000 (each digit pair representing hour,
minute and second, respectively). But the mechanism did not distinguish
cached INTEGER values, already in the correct format, from newly converted
strings.
Fixed by marking the INTEGER cache as being of DATETIME format.
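A worked example of the conversion described above:
-- DATE 2001-01-02 is held as the integer 20010102;
-- as DATETIME it becomes 20010102 * 1000000 = 20010102000000,
-- i.e. 2001-01-02 00:00:00 (zero hour, minute and second appended)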
rpl_packet got a timeout failure sporadically on PB when stopping
slave. The real reason for this bug is that STOP SLAVE stopped the
IO thread first and then stopped the SQL thread. It was
possible for the IO thread to stop after replicating part of a
transaction which the SQL thread was executing. The SQL thread would
hang if the transaction could not be rolled back safely.
After this patch, STOP SLAVE will stop the SQL thread first and then stop the IO
thread, which guarantees that the IO thread will fetch the rest of the
events of the transaction that the SQL thread is executing, so that the SQL
thread can finish the transaction if it cannot be rolled back safely.
Added the auxiliary files below to make the test code neater.
restart_slave_sql.inc
rpl_connection_master.inc
rpl_connection_slave.inc
rpl_connection_slave1.inc
Introduced by the fix for bug#44766.
Problem: it's not correct to use args[0]->str_value as a buffer,
because args[0] may need this buffer for its own purposes.
Fix: adding a new class member tmp_value to use as return value.
@ mysql-test/r/ctype_many.result
@ mysql-test/t/ctype_many.test
Adding tests
@ sql/item_strfunc.cc
Changing code into traditional style:
use "str" as a buffer for the argument and tmp_value for the result value.
@ sql/item_strfunc.h
Adding tmp_value
Problem: when processing a query like:
SELECT '' LIKE '1' ESCAPE COUNT(1);
escape_item->val_str() was never executed and the "escape" class member
stayed uninitialized, which led to a valgrind uninitialized memory error.
Note, a query with some tables in "FROM" clause
returns ER_WRONG_ARGUMENTS in the same situation:
SELECT '' LIKE '1' ESCAPE COUNT(1) FROM t1;
ERROR 1210 (HY000): Incorrect arguments to ESCAPE
Fix: disallow using aggregate functions in the ESCAPE clause,
even if there are no tables used. There is not much use for that anyway.
Backport to 5.0.
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the mysql server version (e.g. /*!50200 ... */),
is a special comment: the query in it is executed on those
servers whose version is no older than the version appearing in the
comment. This leads to a security issue when the slave's version is newer
than the master's: a malicious user can elevate his privileges on slaves.
Because the slave SQL thread runs with SUPER privileges, it can
execute queries that he/she does not have privileges for on the master.
This bug is fixed with the logic below:
- To replace '!' with ' ' in the magic comments which are not applied on
master. So they become common comments and will not be applied on slave.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/
When the server sends the name of the plugin it's using in the handshake
packet, it was null-terminating it in its buffer, but was sending a packet
length one byte short.
Fixed to send the terminating 0 as well by increasing the length of the
packet to include it.
In this way the handshake packet becomes similar to the change user packet,
where the plugin name is null-terminated.
No test suite added as the fix can only be observed by analyzing the bytes
sent over the wire.
other crashes
Some string manipulating SQL functions use a shared string object intended to
contain an immutable empty string. This object was used by the SQL function
SUBSTRING_INDEX() to return an empty string when one argument was of the wrong
datatype. If the string object was then modified by the sql function INSERT(),
undefined behavior ensued.
Fixed by instead modifying the string object representing the function's
result value whenever string manipulating SQL functions return an empty
string.
Relevant code has also been documented.
Fixed incorrect checks in join_read_const_table() for when to
accept a non-existing, or empty const-row as a part of the const'ified
set of tables.
The intention of this test is to only accept NULL-rows if this table is outer joined
into the resultset. (In case of an inner join we can conclude at this point that the
resultset will be empty, and we want to return 'error' to signal this.)
Initially 'maybe_null' is set to the same value as 'outer_join' in
setup_table_map(), mysql_priv.h ~line 2424. Later simplify_joins() will
attempt to replace outer joins by inner joins whenever possible. This
will cause 'outer_join' to be updated. However, 'maybe_null' is *not* updated
to reflect this rewrite, as this field is used to correctly set the 'nullability'
property for the columns in the resultset.
We should therefore change join_read_const_table() to check the 'outer_join'
property instead of 'maybe_null', as it correctly reflects the nullability of
the *execution plan* (not the *resultset*).
An incorrect 'table_map', containing both the table itself
and possibly any outer-refs if this was the last table in
the subquery, was presented to make_cond_for_table().
As a pushed condition is only able to refer to columns from the table
the condition is pushed to, nothing other than columns from the
table itself (tab->table->map) may be referred to in the pushed condition
constructed by 'push_cond= make_cond_for_table()'.
Also fix a minor 'copy and paste' bug in a comment
inside make_cond_for_table().
No testcase is possible on mainbranch as the NDB engine is not available (yet)
on mysql >= 5.5
Item_equal::val_int() checked for NULL values by checking Item::null_value
*before* the respective ::store_value() and ::cmp(Item*) methods were called.
As Item::null_value is set by these methods, the value of 'null_value'
is not valid until *after* ::store_value() or ::cmp() has
been called for the Item object.
The fix is to swap the order of ::store_value()/::cmp() and the check of Item::null_value.
This pattern is widely used in other places inside item_cmpfunc.cc.
INVOKER-security view access check wrong".
When privilege checks were done for tables used from an
INVOKER-security view which in its turn was used from
a DEFINER-security view, the connection's active security
context was incorrectly used instead of the security context
with the privileges of the second view's creator.
This meant that users who had enough rights to access
the DEFINER-security view, and as a result were supposed to
be able to access it successfully, were unable to do so in
cases when they didn't have privileges on the underlying tables
of the INVOKER-security view.
This problem was caused by the fact that for INVOKER-security
views the TABLE_LIST::security_ctx member for underlying tables
was set to 0 even in cases when the particular view was used from
another DEFINER-security view. This meant that when privilege
checks on these underlying tables were done in
setup_tables_and_check_access(), the active connection security
context was used instead of the context corresponding to the
creator of the caller view.
This fix addresses the problem by ensuring that underlying
tables of an INVOKER-security view inherit security context
from the view and thus correct security context is used for
privilege checks on underlying tables in cases when such view
is used from another view with DEFINER-security.
Item_func_spatial_collection::fix_length_and_dec didn't call the parent's method, so
maybe_null was set to '0' after it. But in this case the result can be
just NULL, which caused wrong behaviour.
per-file comments:
mysql-test/r/gis.result
Bug #57321 crashes and valgrind errors from spatial types
test result updated.
mysql-test/t/gis.test
Bug #57321 crashes and valgrind errors from spatial types
test case added.
sql/item_geofunc.h
Bug #57321 crashes and valgrind errors from spatial types
Item_func_geometry::fix_length_and_dec() called in
Item_func_spatial_collection::fix_length_and_dec().
TIMESTAMP.
Item_cache::get_cache wasn't treating TIMESTAMP as a DATETIME value thus
returning string cache for items with TIMESTAMP type. This led to incorrect
TIMESTAMP -> INT conversion and to a wrong query result.
Fixed by using Item::is_datetime function to check for DATETIME type group.
If ::single_value_transformer() found an existing HAVING condition, it
performed the transformation:
1) HAVING cond -> (HAVING Cond) AND (cond_guard (Item_ref_null_helper(...)))
As the AND condition in 1) is McCarthy (short-circuit) evaluated, the
right side of the AND cond should be executed only if the
original HAVING evaluated to true.
However, as we failed to set 'top_level' for the transformed HAVING condition,
'abort_on_null' was FALSE after transformation. An
UNKNOWN HAVING condition would then not terminate evaluation of the
transformed HAVING condition, and we incorrectly continued
into the Item_ref_null_helper() part.
get_year_value() contains code to convert 2-digits year to
4-digits. The fix for Bug#49910 added a check on the size of
the underlying field so that this conversion is not done for
YEAR(4) values. (Since otherwise one would convert invalid
YEAR(4) values to valid ones.)
The existing check does not work when Item_cache is used, since
it is not detected when the cache is based on a Field. The
reported change in behavior is due to Bug#58030 which added
extra cached items in min/max computations.
The elegant solution would be to implement
Item_cache::real_item() to return the underlying Item.
However, some side effects are observed (change in explain
output) that indicates that such a change is not straight-
forward, and definitely not appropriate for an MRU.
Instead, a Item_cache::field() method has been added in order
to get access to the underlying field. (This field() method
eliminates the need for Item_cache::eq_def() used in
test_if_ref(), but in order to limit the scope of this fix,
that code has been left as is.)
On windows, an #endif in a wrong place was causing an early
return from mysql_load and thus the LOAD DATA LOCAL was not
executed. This problem was fixed by moving the #endif to the
right place.
The following code was missing
if ((stat_info.st_mode & S_IFIFO) == S_IFIFO)
  is_fifo = 1;
which is required to properly configure and read from the
IO_CACHE when a named pipe is used. So it was re-introduced
before the #endif.
tmptable needed
The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.
Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
if max_allowed_packet >= 16M.
This bug was introduced by the patch for bug#42503.
This patch restores the behaviour that existed before the patch
for bug#42503 was applied.
Problem: DATE_ADD() is a hybrid function and can return
DATE, DATETIME or VARCHAR data type depending on arguments.
In case of VARCHAR data type, DATE_ADD() reported "binary" character set,
which was wrong.
Fix: make DATE_ADD() return @character_set_connection in VARCHAR context.
@ mysql-test/include/ctype_numconv.inc
Adding tests
@ mysql-test/r/ctype_binary.result
Adding tests
@ mysql-test/r/ctype_cp1251.result
Adding tests
@ mysql-test/r/ctype_latin1.result
Adding tests
@ mysql-test/r/ctype_ucs.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ sql/item_strfunc.cc
- Moving code from Item_str_ascii_func::val_str() to
Item_str_func::val_str_from_val_str_ascii(), as
this code needs to be shared by Item_date_add_interval.
- Adding str2 parameter to be used as a buffer, instead of
using private ascii_buf member.
@ sql/item_strfunc.h
- Moving code from Item_str_ascii_func::val_str() to
Item_str_func::val_str_from_val_str_ascii()
- Removing "String *val_str_convert_from_ascii(String *str, String *ascii_buf)"
prototype as it was neither used nor declared.
@ sql/item_timefunc.h
- Overriding the parent's charset_for_protocol() method,
because we need to behave differently in VARCHAR and DATE/DATETIME context.
- Adding ascii_buf for conversion.
- Adding val_str_ascii() prototype.
- Adding val_str() which uses newly added
Item_str_func::val_str_from_val_str_ascii(),
passing ascii_buf as a conversion buffer.
to crash mysqld".
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.
handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
and 'THREAD_SAFE_CLIENT'.
As of MySQL 5.5, we no longer support non-threaded
builds. This patch removes all references to the
obsolete THREAD and THREAD_SAFE_CLIENT preprocessor
symbols. These were used to distinguish between
threaded and non-threaded builds.
multiple columns in the partition key
ndb crashed if there were duplicate columns in the partitioning key.
Backport from mysql-5.1-telco-7.0, see bug#53354.
Also changed the field name comparison from case sensitive
to case insensitive.
OPTIMIZE TABLE
OPTIMIZE TABLE for InnoDB tables is handled as recreate + analyze.
The triggered assert checked that an error had been reported if either
recreate or analyze failed. However the assert failed to take into
account that they could have failed because OPTIMIZE TABLE had been
victim of KILL QUERY, KILL CONNECTION or server shutdown.
This patch adjusts the assert to take this possibility into account.
The problem was only noticeable on debug versions of the server.
Test case added to innodb_mysql_sync.test.
and Order By
When having a UNION statement in a subquery, with no
referenced tables (or only a reference to the virtual
table 'dual'), the UNION did not allow an ORDER BY clause.
i.e.:
SELECT(SELECT 1 AS a UNION
SELECT 0 AS a
ORDER BY a) AS b or
SELECT(SELECT 1 AS a FROM dual UNION
SELECT 0 as a
ORDER BY a) AS b
In addition, an ORDER BY / LIMIT clause was not accepted
in subqueries even for single SELECT statements with no
referenced tables (or with 'dual' as table reference)
i.e.:
SELECT(SELECT 1 AS a ORDER BY a) AS b or
SELECT(SELECT 1 AS a FROM dual ORDER BY a) AS b
The fix was to allow an optional ORDER BY/LIMIT clause to
the grammar for these cases.
See also: Bug#57986
if embedded in a SELECT
An ORDER BY clause was bound to the incorrect
(sub-)statement when used in a UNION context.
In a query like:
SELECT * FROM a UNION SELECT * FROM b ORDER BY c
the result of SELECT * FROM b is sorted, and then
combined with a. The correct behaviour is that
the ORDER BY clause should be applied on the
final set. Similar behaviour was seen on LIMIT
clauses as well.
In a UNION statement, there will be a select_lex
object for each of the two selects, and a
select_lex_unit object that describes the UNION
itself. Similarly, the same behaviour was also
seen on derived tables.
The bug was caused by using a grammar rule for
ORDER BY and LIMIT that bound these elements
to thd->lex->current_select, which points to the
last of the two selects, instead of to the
fake_select_lex member of the master select_lex_unit
object.
Truncate the maximum result length (64-bit wide type) to fit into
the item maximum length (32-bit wide type). This is possible as
this specific branch is only used if the maximum result length
is less than 0x1000000 (MAX_BLOB_WIDTH), which fits comfortably
in a 32-bit wide type.
From a user perspective, the problem is that a FLUSH LOGS or SIGHUP
signal could end up associating the stdout and stderr to random
files. In the case of this bug report, the streams would end up
associated to InnoDB ibd files.
The freopen(3) function is not thread-safe on FreeBSD. What this
means is that if another thread calls open(2) while freopen()
is executing, that other thread's fd returned by open(2) may get
re-associated with the file being passed to freopen(3). See FreeBSD
PR number 79887 for reference:
http://www.freebsd.org/cgi/query-pr.cgi?pr=79887
This problem is worked around by substituting an internal hook within
the FILE structure. This avoids the loss of atomicity by not having
the original fd closed before it is duplicated.
Patch based on the original work by Vasil Dimov.
After the fix of bug#25192, load_defaults() will add an args separator
to distinguish options loaded from configuration files from those provided
on the command line. One problem with this is that the args separator
would be added whether the application needed it or not.
Fixed the problem by adding an option:
bool my_getopt_use_args_separator;
to control whether the separator will be added or not. And also
added functions:
bool my_getopt_is_args_separator(const char* arg);
to check if the argument is the separator or not.
mysqlbinlog only prints "use $database" statements to its output stream
when the active default database changes between events. This could cause
a "No Database Selected" error when dropping and recreating that database.
To fix the problem, we clear print_event_info->db when printing an event
of a CREATE/DROP/ALTER database statement, so that the Query_log_event
following such statements will always be printed with a use 'db'
statement (except for transaction keywords).
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
The ASSERT happened due to improper calculation of max_length
in the Item_func_div object: if the dividend has max_length == 0 then
Item_func_div::max_length was set to 0 under some circumstances.
The fix:
If decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to the maximum possible
DOUBLE length value.
multiline inserts into partition
Bug#57071: EXTRACT(WEEK from date_col) cannot be
allowed as partitioning function
Renamed the function according to the reviewers' comments.
Bug#57071: EXTRACT(WEEK from date_col) cannot be allowed as partitioning function
There were functions allowed as partitioning functions
that implicitly allowed casts. That could result in unacceptable
behaviour.
Solution was to check that the arguments of date and time functions
have allowed types (field and date/datetime/time depending on function).
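A sketch of a table definition of the kind that is now rejected
(names hypothetical):
CREATE TABLE t1 (d DATE)
PARTITION BY RANGE (EXTRACT(WEEK FROM d))
(PARTITION p0 VALUES LESS THAN (26),
 PARTITION p1 VALUES LESS THAN MAXVALUE);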
Item_sum_max/Item_sum_min incorrectly set the null_value flag, and an
attempt to get the result in parent functions led to a crash.
This happened due to double evaluation of the function argument:
the first evaluation happens in the comparator and the second one
happens in Item_cache::cache_value().
The fix is to introduce a new Item_cache object which
holds the result of the argument and to use this cached value
as an argument of the comparator.
Normally, auto_increment value is generated for the column by
inserting either NULL or 0 into it. NO_AUTO_VALUE_ON_ZERO
suppresses this behavior for 0 so that only NULL generates
the auto_increment value. This behavior is also followed by
a slave, specifically by the SQL Thread, when applying events
in the statement format from a master. However, when applying
events in the row format, the flag was ignored thus causing
an assertion failure:
"Assertion failed: next_insert_id == 0, file .\handler.cc"
In fact, we never need to generate a auto_increment value for
the column when applying events in row format on slave. So we
don't allow it to happen by using 'MODE_NO_AUTO_VALUE_ON_ZERO'.
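For illustration of the mode's effect:
SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY);
INSERT INTO t1 VALUES (0);    -- 0 is stored as-is under this mode
INSERT INTO t1 VALUES (NULL); -- only NULL generates a new auto_increment value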
Refactoring: Get rid of all the sql_mode checks in rows_log_event
when applying it, to avoid problems caused by the inconsistency
of the sql_mode on slave and master, as the sql_mode is not set for
Rows_log_event.