integer constants.
This bug was introduced by the fix for bug #16377. Before that fix the
Item_func_between::fix_length_and_dec method converted the second and third
arguments to the type of the first argument if they were constant and the first
argument was of the DATE/DATETIME type. That approach worked well for integer
constants but sometimes produced bad results for string constants. The fix for
bug #16377 wrongly removed that code altogether; as a result, the comparison of
a DATETIME field and an integer constant was carried out incorrectly and
sometimes led to wrong result sets.
Now the Item_func_between::fix_length_and_dec method converts the second and
third arguments to the type of the first argument if they are constant, the
first argument is of the DATE/DATETIME type and the DATETIME comparator isn't
applicable.
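A hypothetical sketch of the affected pattern (table name and values are
illustrative, not from the original report):
CREATE TABLE t1 (d DATETIME);
INSERT INTO t1 VALUES ('2007-01-01 00:00:00');
-- The integer bounds are constants, so they are again converted to the
-- type of the DATETIME field before the comparison.
SELECT * FROM t1 WHERE d BETWEEN 20061231000000 AND 20070102000000;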
In acl_getroot_no_password(), use a separate variable for traversing the acl_users list so that the last entry is not used when no matching entries are found.
The value of the "low-priority-updates" option and the LOW_PRIORITY
prefix were taken into account at parse time.
This caused triggers (among others) to ignore this flag (if
supplied for the DML statement).
Moved reading of the LOW_PRIORITY flag to run time.
Fixed an inconsistency in handling
SET GLOBAL LOW_PRIORITY_UPDATES: now it is in effect for
delayed INSERTs.
Tested by checking the effect of the LOW_PRIORITY flag via a
trigger.
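A possible illustration (schema and names are mine, not the actual test):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
SET GLOBAL low_priority_updates = 1;
-- The INSERT performed by the trigger body now honours the flag, because
-- LOW_PRIORITY is read when the statement executes, not when it is parsed.
CREATE TRIGGER t1_ai AFTER INSERT ON t1
FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
INSERT INTO t1 VALUES (1);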
This is an additional fix.
Item::val_xxx methods are supposed to use the original data source and
Item::val_xxx_result methods to use the item's result field. But for the
Item_func_set_user_var class the val_xxx_result methods were mapped to the
val_xxx methods. This led, in particular, to producing bad sort keys and thus
a wrong order of the result set of queries with GROUP BY/ORDER BY clauses.
The set of val_xxx_result methods is added to the Item_func_set_user_var
class. It is the same as the val_xxx set of methods but uses the result_field
to return a value.
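One hypothetical shape of an affected query (my guess at the pattern, not
the original test case):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (2), (1), (3);
-- When @v := a is materialized in a temporary table, the sort key must be
-- built via a val_xxx_result method from the result field, not via plain
-- val_xxx.
SELECT @v := a FROM t1 GROUP BY a ORDER BY @v := a;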
using a derived table over a grouping subselect.
This crash happens only when materialization of the derived tables
requires creation of an auxiliary temporary table, for example when
a grouping operation is carried out with the use of a temporary table.
The crash happened because EXPLAIN EXTENDED, when printing the query
expression, made an attempt to use objects created in the mem_root
of the temporary table, which had already been freed by the time
printing was called.
This bug appeared after the method Item_field::print() had been
introduced.
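A sketch of a query of the affected kind (names are illustrative):
CREATE TABLE t1 (a INT);
-- The GROUP BY inside the derived table forces an auxiliary temporary
-- table during materialization, the case that used to crash.
EXPLAIN EXTENDED
SELECT * FROM (SELECT a, COUNT(*) FROM t1 GROUP BY a) AS dt;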
Problem: we may create a deadlock committing changes in mysql_alter_table() when
LOCK_open is set. Moreover, "in some variants of the ALTER TABLE commit
happens earlier, outside of LOCK_open, in other later - inside. It's no good, a storage
engine code that is called in between could expect a consistency - either there is a
transaction or there is not".
Fix: move the commit to happen earlier and outside of LOCK_open.
corruption errors: 126, 134, 145
When one thread attempts to lock two (or more) tables and another
thread executes a statement that aborts these locks (e.g. REPAIR
TABLE), we may get a table object with a wrong lock type in the table
cache.
For example, if a SELECT FROM t1,t2 was aborted, a subsequent INSERT
INTO t1 may be executed under a read lock.
As a result we may get various table corruptions and even a server
crash.
This is fixed by resetting the lock type in case the lock was aborted by
another thread.
I failed to create a reasonable test case for this bug.
The implementation of mysql_multi_update did not call the
multi_update::send_error method in some cases (see the test reported on the
bug page and the test cases in the changeset).
Fixed by deploying the method; ::send_error() is refined to contain
binlogging code that works whenever a non-transactional table has been
modified.
The thd->no_trans_update.stmt flag is set to TRUE to ease testing; this is
also the beginning of the related bug#27417 fix (it addresses a part of those
issues).
Also eliminated two minor issues (small bugs) in multi_update methods.
This patch for multi-update also addresses a part of the issues reported in
bug#13270 and bug#23333.
The end_update() function uses the Item::save_org_in_field() function to
save original values of items into the group buffer. But for the
Item_func_set_user_var this method was mapped to the save_in_field method.
The latter function wrongly decides to use the result_field. This leads to
saving an incorrect value in the grouping buffer and a wrong result of the
whole query.
A new argument of bool type, can_use_result_field, is added to the
Item_func_set_user_var::save_in_field() function. If it is set to FALSE
then the item's result field won't be used. Otherwise it is detected
whether the result field will be used (the old behaviour).
Two wrapper methods for the function above are added to the
Item_func_set_user_var class:
save_in_field(Field *field, bool no_conversions) - calls the above
function with can_use_result_field set to TRUE;
save_org_in_field(Field *field) - same, but can_use_result_field
is set to FALSE.
ON conditions from JOIN expression were ignored at CHECK OPTION
check when updating a multi-table view with CHECK OPTION.
The st_table_list::prep_check_option function has been
modified to take ON conditions into account at the CHECK OPTION check.
It was also changed to build the check option condition only once
for any update used in PS/SP.
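A hypothetical example of the affected construct (schema is illustrative):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT, b INT);
CREATE VIEW v1 AS
SELECT t1.a FROM t1 JOIN t2 ON t1.a = t2.a AND t2.b > 0
WITH CHECK OPTION;
-- The ON condition (t2.b > 0) is now part of the CHECK OPTION test
-- applied to the updated rows.
UPDATE v1 SET a = 3;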
When the same VIEW was created at the master side twice,
a malformed (truncated after the word 'AS') query string
was forwarded to the slave side, so the error messages on the
master and slave were different, and replication was
broken.
The mysql_register_view function call happened
too early: the fields of the `view' output argument of this
function were not yet filled with the correct data required
for query replication.
The mysql_register_view function also copied pointers to
local buffers into memory allocated by the caller.
problem #1: udf_example.so does not get built on AIX
solution #1: build it yourself using
cd sql; gcc -g -I ../include/ -I /usr/include/ -lpthread \
-shared -o udf_example.so udf_example.c; mv udf_example.so \
.libs/
problem #2 (the bug): udf_example fails because dlopen() does not
look at the LD_LIBRARY_PATH variable on this platform;
it looks at LIBPATH
solution #2: add the library path to LIBPATH
problem #3: udf_example returns the wrong result length since
it relies on strmov to return a pointer to the end of the
string that it copies. On AIX builds, where m_string.h (which
defines strmov as a macro expanding to stpcpy) is not included,
there is a macro expanding strmov to strcpy, which returns a
pointer to the first character.
solution #3: define strmov as stpcpy
problem #4: #2 applies on HP-UX as well, but this platform
looks at SHLIB_PATH
solution #4: add the library path to SHLIB_PATH
Problem:
HASH indexes on VARCHAR columns with binary collations did not ignore trailing spaces from strings before comparisons. This could result in duplicate records being successfully inserted into a MEMORY table with unique key constraints.
As a direct consequence of the above, internal MEMORY tables used for GROUP BY calculation in test cases for bug #27643 contained duplicate rows, which resulted in duplicate key errors when converting those temporary tables to MyISAM. Additionally, that error was incorrectly converted to the 'table is full' error.
Solution:
- ignore trailing spaces in VARCHAR fields with binary collations when calculating hashes.
- return a proper error from create_myisam_from_heap() when conversion fails.
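A minimal sketch of the first problem (names and values are illustrative):
CREATE TABLE t1 (v VARCHAR(10) COLLATE latin1_bin,
UNIQUE KEY (v) USING HASH) ENGINE=MEMORY;
INSERT INTO t1 VALUES ('a');
-- 'a ' equals 'a' under the collation (trailing spaces are insignificant),
-- so this insert must now fail with a duplicate-key error instead of
-- succeeding.
INSERT INTO t1 VALUES ('a ');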
mysqld crashed when a long-running explain query was killed from
another connection.
When the current thread caught a kill signal executing the function
best_extension_by_limited_search it just silently returned to
the calling function greedy_search without initializing elements of
the join->best_positions array.
However, the greedy_search function ignored the thd->killed status
after calls to the best_extension_by_limited_search function, and
after several calls the greedy_search function used uninitialized
data from join->best_positions[idx] to search for a position in the
join->best_ref array.
That search failed, and greedy_search tried to call the swap_variables
function with a NULL argument, which caused the crash.
ENUM fields internally store their values as integers and may use integer
values as indexes to their values. Invalid values are mapped to the zero value.
When storing an empty string the ENUM field fails to find an appropriate value
and tries to convert the provided string to an integer. The conversion also
fails, and an error is returned even if thd->count_cuted_fields is set to
CHECK_FIELD_IGNORE. This makes the range optimizer wrongly decide that an
impossible range is present.
Now Field_enum::store() returns an error while storing an empty string only
if thd->count_cuted_fields isn't set to CHECK_FIELD_IGNORE.
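A hypothetical repro sketch (schema is illustrative):
CREATE TABLE t1 (e ENUM('a','b') NOT NULL, KEY (e));
-- '' is not a valid ENUM member; with CHECK_FIELD_IGNORE in effect the
-- store must not return an error, otherwise the range optimizer treats
-- the condition as an impossible range.
SELECT * FROM t1 WHERE e = '';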
The result of the CHECK OPTION condition evaluation over an
updated record and records of merged tables was arbitrary and
dependent on the order of records in the merged tables during
the execution of the SELECT statement.
The CHECK OPTION expression was evaluated over expired record
buffers (with arbitrary data in the fields).
Rowids of tables used in the CHECK OPTION expression were
added to temporary table rows. The multi_update::do_updates()
method was modified to restore the necessary record buffers
before the evaluation of the CHECK OPTION condition.
Integer values with 10 digits may or may not fit into an int column
(e.g. 2147483647 vs 6147483647).
Thus, when creating a temporary table column for such an integer, we must
use BIGINT instead.
Fixed to use BIGINT.
Also replaced a "magic number" with a named constant.
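For illustration (the statement is mine, not the original test case):
-- 2147483647 fits into INT while 6147483647 does not, so the column
-- created for the latter constant must be BIGINT.
CREATE TABLE t1 SELECT 6147483647 AS val;
SHOW COLUMNS FROM t1;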
type assertion.
The bug was introduced by the patch for bug #16377.
The "+ INTERVAL" (Item_date_add_interval) function detects its result type
by the type of its first argument. But in some cases it returns STRING
as the result type. This happens when, for example, the first argument is a
DATE represented as string. All this makes the get_datetime_value()
function misinterpret such result and return wrong DATE/DATETIME value.
To avoid such cases in the fix for #16377 the code that detects correct result
field type on the first execution was added to the
Item_date_add_interval::get_date() function. Due to this the result
field type of the Item_date_add_interval item stored by the send_fields()
function differs from item's result field type at the moment when
the item is actually sent. It causes an assertion failure.
Now the get_datetime_value() detects that the DATE value is returned by
some item not only by checking the result field type but also by comparing
the returned value with the 100000000L constant - any DATE value should be
less than this value.
Removed result field type adjusting code from the
Item_date_add_interval::get_date() function.
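A sketch of an expression of the affected kind (my example):
-- The first argument is a DATE given as a string, so the result field
-- type recorded by send_fields() could differ from the type at the
-- moment the value was actually sent, triggering the assertion.
SELECT '2007-04-25' + INTERVAL 1 DAY;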
Refining the tests since pushbuild revealed the older version's fragility:
the error from SF() due to a kill may differ between environments.
DBUG_ASSERT instead of assert.
longer showing SP names.
SHOW CREATE VIEW uses Item::print() methods to reconstruct the
statement text from the parse tree.
The print() method for stored procedure calls needs to allocate
space to print the function's quoted name.
It was incorrectly calculating the length of the buffer needed
(it was too short).
Fixed to reflect the actual space needed.
When a Windows console application that has an open console (e.g. mysqld-nt
started with the --console option) encounters certain types of errors
(like no floppy disk in a floppy drive) the OS will pop up an
"Abort/Retry/Ignore" dialog and block the application (depending on a
registry setting: see http://msdn2.microsoft.com/en-us/embedded/aa731206.aspx
for details).
Fixed by disabling the dialog popups for every error except a GPF and
alignment errors. This is safe to do as the actual error gets reported
to (and handled by) mysqld.
The reason for the bug was that replaying a query on the slave could be
impossible because its event was recorded with the killed error. Due to the
specifics of INSERT handling, whose per-row loop cannot be interrupted by a
kill, a query on a transactional table should not have appeared in the binlog
at all unless it called a stored routine that was interrupted by the kill
(and then an error must be returned out of the loop).
The offered solution adds the following rule for binlogging INSERT, which
accounts for the above specifics:
For an INSERT on a transactional table, if no error was set, the raised
KILLED flag alone is harmless and is masked out when the binlog event is
created.
For both table types, the combination of a raised error and the KILLED flag
indicates potentially partial execution on the master, putting consistency
into question.
In that case the code continues to binlog an event with an appropriate
killed error.
The fix relies on the specified behaviour of stored routines: they must
propagate the error to the top-level query handling if the thd->killed flag
was raised during routine execution.
The patch adds an argument with a default killed-status-unset value to
Query_log_event::Query_log_event.
- A race condition caused brief unavailability when trying to access
a table.
- The unprotected variable 'grant_option' wasn't intended to change
during normal execution. Variable initialization is moved to grant_init,
and the lines responsible for changing it are removed.
When storing a large number into a FLOAT or DOUBLE field with a fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanups to be able to reuse code which is common between Field_float::store() and Field_double::store().
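A hypothetical illustration (column definition and value are mine):
-- A field length above 31 used to take the incorrect truncation path.
CREATE TABLE t1 (d DOUBLE(40,2));
INSERT INTO t1 VALUES (1e20);
SELECT d FROM t1;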
constant outer tables did not return null-complemented
rows when the conditions evaluated to FALSE.
Wrong results were returned because the conditions over constant
outer tables, when being pushed down, were erroneously enclosed
into the guard function used for WHERE conditions.
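A minimal sketch of the affected pattern (schema and data are mine):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (2);
-- t2 is a constant (single-row) table; when the ON condition evaluates
-- to FALSE the t1 row must still be returned with NULLs for t2.
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;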
The root cause of this bug is related to the function skip_rear_comments,
in sql_lex.cc
Recent code changes in skip_rear_comments changed the prototype from
"const uchar*" to "const char*", which had an unforeseen impact on this test:
(endp[-1] < ' ')
With unsigned characters, this code filters bytes of value [0x00 - 0x20].
With *signed* characters, it also filters bytes of value [0x80 - 0xFF].
This caused the reported regression: cyrillic characters in the
parameter name were considered whitespace, and the name was truncated.
Note that the regression is present both in 5.0 and 5.1.
With this fix:
- [0x80 - 0xFF] bytes are no longer considered whitespace.
This alone fixes the regression.
In addition, filtering [0x00 - 0x20] was found bogus and abusive,
so the code now uses my_isspace() when looking for whitespace.
Note that this fix is only addressing the regression affecting UTF-8
in general, but does not address a more fundamental problem with
skip_rear_comments: parsing a string *backwards*, starting at end[-1],
is not safe with multi-byte characters, since the last byte of a
multi-byte character can be confused with a character to filter out.
The only known impact of this remaining issue affects objects that have to
meet all the conditions below:
- the object is a FUNCTION / PROCEDURE / TRIGGER / EVENT / VIEW
- the body consists of only *1* instruction, and does *not* contain a
BEGIN-END block
- the instruction ends, lexically, with <ident> <whitespace>* ';'?
For example, "select <ident>;" or "return <ident>;"
- the last character of <ident> is a multi-byte character
- the last byte of this character is ';', '*', '/' or whitespace
In this case, the body of the object will be truncated after parsing,
and stored in an invalid format.
This last issue has not been fixed in this patch, since the real fix
will be implemented by Bug 25411 (trigger code truncated), which is caused
by the very same code.
The real problem is that the function skip_rear_comments is only a
work-around, and should be removed entirely: see the proposed patch for
bug 25411 for details.
On some Linux distributions with both LinuxThreads and NPTL glibc versions available, statically built binaries can crash, because the linker defaults to LinuxThreads when linking statically, but calls to external libraries (like libnss) are resolved to the NPTL versions.
Since there is nothing we can do in the code to work around that, just give the user advice on how to fix it if a crash happens on such a binary/OS combination.
bug #26842: master binary log contains invalid queries - replication fails
bug #12826: Possible to get inconsistent slave using SQL syntax Prepared Statements
Problem:
when binlogging prepared statements we could produce syntactically incorrect
queries in the binlog by replacing some parameters with variable names
(instead of variable values).
E.g. in the reported case of a "limit ?" clause: replacing "?" with "@var"
produces "limit @var", which is not correct SQL syntax.
Also it may lead to different query execution on the slave if we
set and use a variable in the same statement, e.g.
"insert into t1 values (@x:=@x+1, ?)"
Fix: make the stored statement string created upon its execution use variable values
(instead of names) to fill placeholders.
- The "mysql client in mysqld"(which is used by
replication and federated) should use alarms instead of setting
socket timeout value if the rest of the server uses alarm. By
always calling 'my_net_set_write_timeout'
or 'my_net_set_read_timeout' when changing the timeout value(s), the
selection whether to use alarms or timeouts will be handled by
ifdef's in those two functions.
- Move declaration of 'vio_timeout' into "vio_priv.h"
CHECK OPTION and a subquery in WHERE condition.
The abort was triggered by setting the value of join->tables for
subqueries in the function JOIN::cleanup. This function was called
after an invocation of the JOIN::join_free method for subqueries
used in the WHERE condition.
If a stored function or a trigger was killed, it aborted, but no error
was thrown. This allowed the calling statement to continue without notice.
This could lead to wrong data being inserted, updated or deleted, as in such
cases the correct result of a stored function isn't guaranteed. In the case
of triggers it allowed the calling statement to ignore the kill signal and
to waste time re-evaluating triggers that will always fail
because the thd->killed flag is still on.
Now the Item_func_sp::execute() and sp_head::execute_trigger() functions
check whether the function or trigger was killed during execution and
throw an appropriate error if so.
The fill_record() function now stops filling the record if an error was
reported through thd->net.report_error.
being used without being defined.
Inside the method Item_func_unsigned::val_int, the variable 'value'
can be returned without being initialized when the CAST argument
is of type DECIMAL and has a NULL value. This gives a run-time
error when building debug binaries using Visual C++ 2005.
Solution: initialize 'value' to 0.
Bug #23667 "CREATE TABLE LIKE is not isolated from alteration
by other connections"
Bug #18950 "CREATE TABLE LIKE does not obtain LOCK_open"
As well as:
Bug #25578 "CREATE TABLE LIKE does not require any privileges
on source table".
The first and the second bugs resulted in various errors and a wrong
binary log order when one tried to concurrently execute a CREATE TABLE LIKE
statement and DDL statements on the source table or DML/DDL statements on its
target table.
The problem was caused by the incomplete protection/table-locking against
concurrent statements implemented in the mysql_create_like_table() routine.
We solve it by simply implementing such protection in a proper way (see the
comment in sql_table.cc for details).
The third bug allowed a user who didn't have any privileges on a table to
create its copy and therefore circumvent the privilege check for SHOW CREATE
TABLE.
This patch solves this problem by adding the privilege check, which was
missing.
Finally, it also removes some duplicated code from mysql_create_like_table().
Note that, although tests covering concurrency-related aspects of CREATE TABLE
LIKE behaviour will only be introduced in 5.1, they were run manually for
this patch as well.
When processing the USE/FORCE index hints
the optimizer was not checking if the indexes
specified are enabled (see ALTER TABLE).
Fixed by:
Backporting the fix for bug 20604 to 5.0
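A hypothetical example (schema is illustrative):
CREATE TABLE t1 (a INT, KEY idx_a (a)) ENGINE=MyISAM;
ALTER TABLE t1 DISABLE KEYS;
-- The optimizer must now ignore the hinted index because it is disabled.
SELECT * FROM t1 FORCE INDEX (idx_a) WHERE a = 1;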
mode.
When a new DATE/DATETIME field without a default value is added by
ALTER TABLE, the '0000-00-00' value is used as the default. But it wasn't
checked whether such a value was allowed by the current SQL mode. Due to this,
'0000-00-00' values were allowed for DATE/DATETIME fields even in the
NO_ZERO_DATE mode.
Now the mysql_alter_table() function checks whether the '0000-00-00' value
is allowed for DATE/DATETIME fields by the current SQL mode.
The new error_if_not_empty flag is used in the mysql_alter_table() function
to indicate that it should abort if the table being altered isn't empty.
The new new_datetime_field field is used in the mysql_alter_table() function
for error-throwing purposes.
The new error_if_not_empty parameter is added to the copy_data_between_tables()
function to indicate that it should return an error if the source table isn't
empty.
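A sketch of the now-rejected case (schema is illustrative):
SET sql_mode = 'NO_ZERO_DATE';
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
-- The new column would get the implicit default '0000-00-00', which the
-- NO_ZERO_DATE mode forbids, so the ALTER must fail on a non-empty table.
ALTER TABLE t1 ADD COLUMN d DATE NOT NULL;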
my_decimal can in some cases contain more decimal digits than
are officially supported (DECIMAL_MAX_PRECISION), so we need to
prepare a bigger buffer for the resulting string.
Problem: we may get syntactically incorrect queries in the binary log
if we use a string-valued user variable when executing a PS which
contains a '... limit ?' clause, e.g.
prepare s from "select 1 limit ?";
set @a='qwe'; execute s using @a;
Fix: raise an error in such cases.
is involved.
The Arg_comparator::compare_datetime() comparator caches its arguments if
they are constants, i.e. const_item() returns true. The
Item_func_get_user_var::const_item() returns true or false based on
the current query_id and the query_id where the variable was created.
Thus, even if a query can change the variable's value, its const_item()
will still return true. All this leads to a wrong comparison result when an
object of the Item_func_get_user_var class is involved.
Now the Arg_comparator::can_compare_as_dates() and the
get_datetime_value() functions never cache the result of the GET_USER_VAR()
function (the Item_func_get_user_var class).
Conversion errors when constructing the condition for an
IN predicate were treated as if the affected column contained
NULL. If such an IN predicate is inside NOT, we get wrong
results.
Corrected the handling of conversion errors in an IN predicate
that is resolved by unique_subquery (through
subselect_uniquesubquery_engine).
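One hypothetical shape of an affected query (my guess at the pattern):
CREATE TABLE t1 (pk INT PRIMARY KEY);
INSERT INTO t1 VALUES (1), (2);
-- The outer value cannot be stored into the INT key column; that
-- conversion failure must not be treated as a NULL match, otherwise
-- NOT IN yields NULL instead of TRUE.
SELECT 99999999999999999999 NOT IN (SELECT pk FROM t1);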
The command-line and configuration-file option 'key_cache_block_size'
was reduced by MALLOC_OVERHEAD (8 in a production server, 36 in a
debug server) from the user-supplied value and then restricted to
the greatest multiple of 512 less than or equal to the reduced value.
This patch changes the option 'key_cache_block_size' not to deduct
MALLOC_OVERHEAD from the input value. However, the restriction
to a multiple of 512 is still applied.
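For illustration, assuming the production MALLOC_OVERHEAD of 8: a supplied
value of 1536 used to become 1536 - 8 = 1528 and was then rounded down to
1024; with this patch the value stays 1536, which is already a multiple
of 512.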
Made year 2000 handling more uniform.
Removed year 2000 handling from calc_days().
The above removes some bugs in dates/datetimes with year between 0 and 200.
Now we get a note when we insert a datetime value into a date column.
For default values to CREATE, don't give errors for warning level NOTE.
Fixed some compiler failures.
Added library ws2_32 for Windows compilation (needed if we want to compile with IOCP support).
Removed the duplicate typedef TIME and replaced it with MYSQL_TIME.
Better (more complete) fix for: Bug#21103 "DATE column not compared as DATE"
Fixed properly Bug#18997 "DATE_ADD and DATE_SUB perform year2K autoconversion magic on 4-digit year value"
Fixed Bug#23093 "Implicit conversion of 9912101 to date does not match cast(9912101 as date)"
- Since isinf() portability across various platforms and
compilers is a complicated question, we should not use
it directly. Instead, the my_isinf() macro should be used,
which is defined as an alias to the system-defined isinf()
if it is safe to use, or a workaround implementation otherwise.
Bug#21483 "Server abort or deadlock on INSERT DELAYED with another
implicit insert"
Also fixes and adds test cases for bugs:
20497 "Trigger with INSERT DELAYED causes Error 1165"
21714 "Wrong NEW.value and server abort on INSERT DELAYED to a
table with a trigger".
Post-review fixes.
Problem:
In MySQL INSERT DELAYED is a way to pipe all inserts into a
given table through a dedicated thread. This is necessary for
simplistic storage engines like MyISAM, which do not have internal
concurrency control or threading and thus cannot
achieve efficient INSERT throughput without support from the SQL layer.
DELAYED INSERT works as follows:
For every distinct table that can accept DELAYED inserts and has
pending data to insert, a dedicated thread is created to write the data
to disk. All user connection threads that attempt to
delayed-insert into this table interact with the dedicated thread in
producer/consumer fashion: all records to be inserted are pushed
into the queue of the dedicated thread, which fetches the records and
writes them.
In this design, client connection threads never open or lock
the delayed insert table.
This functionality was introduced in version 3.23 and does not take
into account the existence of triggers, views, or pre-locking.
E.g. if INSERT DELAYED is called from a stored function which,
in turn, is called from another stored function that uses the delayed
table, a deadlock can occur, because delayed locking bypasses
pre-locking. Besides:
* the delayed thread works directly with the subject table through
the storage engine API and does not invoke triggers
* even if it were patched to invoke triggers, then if the triggers,
in turn, used other tables, the delayed thread would
have to open and lock the involved tables (use pre-locking).
* even if it were patched to use pre-locking, without deadlock
detection the delayed thread could easily lock out user
connection threads in the case when the same table is used both
in a trigger and on the right side of the insert query:
the delayed thread would not release its locks until all inserts
are complete, and the user connection cannot complete the inserts
without having locks on the tables used on the right side of the
query.
Solution:
These considerations suggest two general alternatives for the
future of INSERT DELAYED:
* it is considered a full-fledged alternative to normal INSERT
* it is regarded as an optimisation that is only relevant
for simplistic engines.
Since we missed our chance to provide complete support for new
features when 5.0 was in development, the first alternative
is currently infeasible.
However, even the second alternative, which is to detect
new features and convert DELAYED insert into a normal insert,
is not easy to implement.
The catch-22 is that we don't know if the subject table has triggers
or is a view before we open it, and we only open it in the
delayed thread. We don't know if the query involves pre-locking
until we have opened all tables, and we always first create
the delayed thread, and only then open the remaining tables.
This patch detects the problematic scenarios and converts
DELAYED INSERT to a normal INSERT using the following approach:
* if the statement is executed under pre-locking (e.g. from
within a stored function or trigger) or the right
side may require pre-locking, we detect the situation
before creating a delayed insert thread and convert the statement
to a conventional INSERT.
* if the subject table is a view or has triggers, we shut down
the delayed thread and convert the statement to a conventional
INSERT.
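A sketch of the first conversion above (names are illustrative; client
delimiter handling is omitted):
CREATE TABLE t1 (a INT);
CREATE FUNCTION f1() RETURNS INT
BEGIN
  INSERT DELAYED INTO t1 VALUES (1);
  RETURN 1;
END;
-- f1() executes under pre-locking, so the INSERT DELAYED inside it is
-- now converted to a conventional INSERT instead of deadlocking.
SELECT f1();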
function.
A wrong condition was used to check whether the
Arg_comparator::can_compare_as_dates() function had calculated the value of
the string constant. When comparing a non-const STRING function with a
constant DATETIME function this led to saving an arbitrary value as the
cached value of the DATETIME function.
Now the Arg_comparator::set_cmp_func() function initializes the const_value
variable to the impossible DATETIME value (-1) and this const_value is
cached only if it was changed by the Arg_comparator::can_compare_as_dates()
function.