Bug #12652385 - "61493: REORDERING COLUMNS TO POSITION FIRST CAN
CAUSE DATA TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except rename
columns if the new version of the table had exactly the same
structure as the old one (i.e. as a result of such a statement the
column names changed their order as specified, but the data in the
columns did not). The same thing happened for ALTER TABLE DROP
COLUMN/ADD COLUMN statements which were supposed to produce a new
version of the table with exactly the same structure as the old
one. In the latter case the result was the same as if the old
column had been renamed instead of being dropped and a new column
created with the default value.
Both these problems were caused by the fact that the ALTER TABLE
implementation incorrectly interpreted both situations as a simple
renaming of columns and assumed that the in-place ALTER TABLE
algorithm could be used for them.
This patch fixes the problem by ensuring that whenever some column
is moved to the first position or some column is dropped, the
default ALTER TABLE algorithm involving table copying is always
used. This is achieved by detecting such situations in
mysql_prepare_alter_table() and setting Alter_info::change_level
to ALTER_TABLE_DATA_CHANGED for them.
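A minimal sketch of the affected pattern (illustrative table and
data, not taken from the original bug report):
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 2);
-- The new table version has the same structure (two INT columns),
-- so before the fix this was treated as a mere rename: the column
-- names swapped places but the data stayed where it was.
ALTER TABLE t1 MODIFY b INT FIRST;
SELECT * FROM t1;
With the fix the copying algorithm is used and the data follows
the reordered columns.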
mysql-test/r/alter_table.result:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
mysql-test/t/alter_table.test:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
sql/sql_table.cc:
Changed mysql_prepare_alter_table() to detect situations in
which some column is moved to the first position or some column
is dropped, and to ensure that such ALTER TABLE statements are
not carried out using the in-place algorithm. The latter could
happen before this patch if the new version of the table had the
same structure as the old one (apart from the column names).
SYNTAX TRIGGERS IN ANY WAY
A table with triggers which used deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to a 5.1 server.
An attempt to execute any statement on such a table resulted in a
parsing error being reported. Since this included DROP TRIGGER and
DROP TABLE statements (actually, the latter was allowed but did not
function properly for such tables), it was impossible to fix the
problem without manual operations on the .TRG and .TRN files in the
data directory.
The problem was that a failure to parse a trigger body (due to
5.0-only syntax) when opening the trigger file for a table prevented
the table from being opened. This made all operations on the table
impossible (except DROP TABLE, which due to a peculiarity in its
implementation dropped the table but left the trigger files around).
This patch solves the problem by silencing the error which occurs
when we parse the trigger body during table open. The error message
is preserved for future use and the table is marked as having a
broken trigger. We also try to analyze the parse tree to recover
the trigger name, which will be needed in order to drop the broken
trigger. DML statements which invoke triggers on a table marked as
having a broken trigger are prohibited and emit the saved error
message. The same happens for DDL which changes triggers, except
DROP TRIGGER and DROP TABLE, which try their best to do what was
requested. The table is no longer marked as having a broken trigger
once the last such trigger is dropped.
mysql-test/r/trigger-compat.result:
Add results for test case for bug#45235
mysql-test/t/trigger-compat.test:
Add test case for bug#45235.
sql/sp_head.cc:
Added protection against MEM_ROOT double restoring to
sp_head::restore_thd_mem_root() method. Since this
method can be sometimes called twice during parsing
of stored routine (the first time during normal flow
of parsing, and the second time when a syntax error
is detected) we need to shortcut execution of the
method to avoid damaging MEM_ROOT by the second
consecutive call to this method.
sql/sql_trigger.cc:
Added error handler Deprecated_trigger_syntax_handler to
catch non-OOM errors during parsing of trigger body.
Added handling of parse errors to the
Table_triggers_list::check_n_load() method.
sql/sql_trigger.h:
Added new members to handle broken triggers and error messages.
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally even
if a new value was not specified explicitly. As a consequence the
field was set to the default status value, which corresponds to
ENABLE.
The solution is to check whether the status field was explicitly
specified in the ALTER EVENT statement before assigning a new value
to it.
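An illustrative sequence (hypothetical event name, not the original
test):
CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY DISABLE DO SET @a = 1;
-- This ALTER does not mention the status at all, yet before the
-- fix it silently switched the event back to ENABLED.
ALTER EVENT e1 COMMENT 'unrelated change';
SHOW EVENTS;  -- with the fix, the status stays DISABLED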
mysql-test/r/events_bugs.result:
Test result for Bug#11764334 was added.
mysql-test/t/events_bugs.test:
New test for Bug#11764334 was added.
sql/event_db_repository.cc:
mysql_event_fill_row() was modified: set the value of the status
field in the events table only if a CREATE EVENT statement
is being processed or if this value was explicitly set in the
ALTER EVENT statement.
Event_db_repository::create_event was modified: removed the
redundant setting of the status field after the return from the
call to mysql_event_fill_row().
sql/event_parse_data.h:
The Event_parse_data structure was modified: added a flag
status_changed that is set to true if the status value
was changed in the ALTER EVENT statement.
sql/sql_yacc.yy:
Set the status_changed flag if the status was set in the
ALTER EVENT statement.
Problem: when wrong data was inserted into indexed GEOMETRY fields
(e.g. a NULL value for a NOT NULL field), MyISAM reported
"ERROR 126 (HY000): Incorrect key file for table; try to repair it"
due to misuse of the key deletion function.
Fix: always use R-tree key functions for R-tree based indexes
and B-tree key functions for B-tree based indexes.
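A sketch of the kind of insert involved, assuming a MyISAM table
with a spatial index (the exact reproducer in the bug report may
differ):
CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL INDEX(g)) ENGINE=MyISAM;
-- Inserting wrong data (here a NULL for the NOT NULL spatial
-- column) makes the write fail; before the fix the error handling
-- could apply B-tree key routines to the R-tree index, leaving the
-- key file looking corrupted.
INSERT IGNORE INTO t1 VALUES (NULL);
CHECK TABLE t1;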
mysql-test/r/gis-rtree.result:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- test result.
mysql-test/t/gis-rtree.test:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- test case.
storage/myisam/mi_update.c:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- when handling update errors, check for HA_ERR_NULL_IN_SPATIAL as
well, to be consistent with mi_write();
- always use keyinfo->ck_delete()/ck_insert() instead of
_mi_ck_delete()/_mi_ck_write() to handle the index properly, as it
may be of B-tree or R-tree type.
storage/myisam/mi_write.c:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- always use keyinfo->ck_delete() instead of _mi_ck_delete() to
handle the index properly, as it may be of B-tree or R-tree type.
will create multiple running events.
A CREATE EVENT IF NOT EXISTS on an event that existed and was
enabled caused multiple instances of the event to run. Disabling
the event didn't help. If the event was dropped, it stopped
running, but when it was created again, multiple instances of the
event were running once more. The only way out of this situation
was to restart the server.
The problem was that Event_db_repository::create_event() didn't
return enough information to discriminate between the situation
when the event didn't exist and was created and the situation when
the event did exist and was not created (but a warning was
emitted). As a result, in the latter case the event was added to
the in-memory queue of events a second time, which led to
unwarranted multiple executions of the same event.
The solution is to add an out-parameter to the
Event_db_repository::create_event() method which signals that the
event was not created because it already exists and therefore
should not be added to the in-memory queue.
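A sketch of the scenario (hypothetical event, for illustration):
CREATE EVENT e1 ON SCHEDULE EVERY 1 SECOND DO SET @cnt = @cnt + 1;
-- Only an "event already exists" warning should result, but before
-- the fix this statement also queued a second in-memory copy of
-- e1, so the body started running twice per second.
CREATE EVENT IF NOT EXISTS e1 ON SCHEDULE EVERY 1 SECOND DO SET @cnt = @cnt + 1;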
mysql-test/r/events_bugs.result:
Added results for test for Bug#12546938.
mysql-test/t/events_bugs.test:
Added test for Bug#12546938.
sql/event_db_repository.cc:
Event_db_repository::create_event was modified: set the newly added
out-parameter event_already_exists to true if the event wasn't
created because it already existed and the IF NOT EXISTS clause was
present.
sql/event_db_repository.h:
Added out-parameter 'event_already_exists' to create_event() method.
sql/events.cc:
Events::create_event was modified: insert a new element into the
event queue only if the event was actually created.
The assertion happened due to a missing NULL value check in the
Item_func_round::fix_length_and_dec() function.
The fix: added a NULL value check for the second parameter.
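A minimal statement of the affected form (an assumed reproducer
shape; the expression in the original report may differ):
-- A NULL second argument reaches fix_length_and_dec(); without
-- the NULL check this could hit the assertion in debug builds.
SELECT ROUND(1.5, NULL);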
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
added NULL value check for second parameter.
LEAK WITH PARTITIONED ARCHIVE TABLES
CHECK TABLE against an archive table, when file descriptors
were exhausted, caused a server crash.
Archive didn't handle errors when opening the data file for
CHECK TABLE.
mysql-test/r/archive_debug.result:
A test case for BUG#12402794.
mysql-test/t/archive_debug.test:
A test case for BUG#12402794.
storage/archive/azio.c:
A test case for BUG#12402794.
storage/archive/ha_archive.cc:
Handle init_archive_reader() failure.
There are two problems:
1. There is a missing check of the 'year' parameter (the year cannot
be greater than 9999) in the MAKEDATE() function. Fix: added a check
that the year cannot be greater than 9999.
2. There is a missing check for a zero date in the FROM_DAYS()
function. Fix: added a zero date check to the
Item_func_from_days::get_date() function.
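Illustrative calls exercising both checks (a sketch, not the
original test case):
SELECT MAKEDATE(10000, 1);  -- year above 9999 is rejected by the new check
SELECT FROM_DAYS(0);        -- zero date, now guarded in get_date()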
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
--added a check that the year cannot be greater than 9999 for the
MAKEDATE() function
--added a zero date check to the Item_func_from_days::get_date()
function
Implementing test review comment.
Bug test scenario:
SELECT does not return a result set for the "equal" (=) and "NULL
safe equal" (<=>) operators on the BIT data type. Extending this
scenario to all data types.
WORK WITH --START-POSITION
If --start-position is set to start after the FD (format
description) event, mysqlbinlog will output an error stating that
it has not found an FD event. However, it is not that mysqlbinlog
does not find it, but rather that it does not process it in the
regular way (i.e., it does not print it). Given that
--base64-output=DECODE-ROWS is used, not printing it is actually
fine.
To fix this, we make mysqlbinlog not complain when it has not
printed the FD event but is outputting in base64 and decoding the
rows.
The problem was that a wrong structure of mysql.event was not
detected and the server continued to use the wrongly-structured
data.
The fix is to check the structure of mysql.event after opening it,
before any use. That makes operations with events more strict --
some operations that might have worked before throw errors now.
That seems to be OK.
Another side-effect of the patch is that if mysql.event is
corrupted, unrelated DROP DATABASE statements issue an SQL warning
about the inability to open the mysql.event table.
The partitioning engine checked the auto_increment column even if
it was not going to be written, triggering a DBUG_ASSERT.
Fixed by checking whether table->write_set for that column was set.
Partitions can have different ref_length (position data length).
Removed a DBUG_ASSERT which crashed debug builds when MAX_ROWS was
used on some partitions.
The calc_daynr() function returns a negative result
if a malformed date with a zero year and month is used.
An attempt to calculate the week day from a negative value
leads to a crash. The fix is to return NULL for the
'W', 'a', 'w' specifiers if a zero year and month is used.
Additional fix for calc_daynr():
--added an assertion that the result cannot be negative
--return 0 if a zero year and month is used
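An illustrative call (a sketch; the original test may use other
inputs):
-- A malformed date with zero year and month; the weekday-related
-- specifiers now yield NULL instead of working on a negative day
-- number from calc_daynr().
SELECT DATE_FORMAT('0000-00-01', '%W %a %w');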
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql-common/my_time.c:
--added an assertion that the result cannot be negative
--return 0 if a zero year and month is used
sql/item_timefunc.cc:
return NULL for the 'W', 'a', 'w' specifiers
if a zero year and month is used.
A SELECT from a view with an underlying HAVING clause failed with
the message: "1356: View '...' references invalid table(s) or
column(s) or function(s) or definer/invoker of view lack rights to
use them".
The bug is a regression of the fix for bug 11750328 - 40825 (a
similar case, but there the HAVING clause references an aliased
field).
In the old fix for bug 40825 the Item_field::name_length value was
used in place of the real length of Item_field::name. However,
in some cases Item_field::name_length is not in sync with the
actual name length (TODO: combine name and name_length into a
solid String field).
The Item_ref::print() method has been modified to calculate the
actual name length every time.
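An illustrative shape of such a view (not the exact query from the
report):
CREATE TABLE t1 (c INT);
CREATE VIEW v1 AS SELECT c FROM t1 GROUP BY c HAVING c > 0;
-- Before the fix, selecting from a view like this could fail with
-- ERROR 1356 even though the underlying query ran fine on its own.
SELECT * FROM v1;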
mysql-test/r/view.result:
Test case for bug #11829681
mysql-test/t/view.test:
Test case for bug #11829681
sql/item.cc:
Bug #11829681 - 60295: ERROR 1356 ON VIEW THAT EXECUTES FINE AS A QUERY
The Item_ref::print() method has been modified to calculate actual
name length every time.
sql/item.h:
Minor commentary.
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
mysqlslap uses a schema to run its tests on and later
drops it if auto-generate-sql is used. This can be a
problem if the schema is an already existing one.
If create-schema is used with the auto-generate-sql option,
mysqlslap, while performing the cleanup, drops the specified
database.
Fixed by introducing an option, --no-drop, which, if used,
prevents the dropping of the schema at the end of the test.
client/client_priv.h:
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
Added an option.
client/mysqlslap.c:
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
Introduced an option 'no-drop' to forbid the removal of the schema
even if the 'create' or 'auto-generate-sql' options are used.
mysql-test/r/mysqlslap.result:
Added a testcase for Bug#11765157.
mysql-test/t/mysqlslap.test:
Added a testcase for Bug#11765157.
mysql-test/t/loaddata.test:
Test for the bug; without the fix, running the test with --valgrind
would show the leak and make the test fail.
sql/sql_load.cc:
* In the READ_INFO class, 'need_end_io_cache' is true as long as
init_io_cache() was called, so if it is true we need to call
end_io_cache() to free the memory allocated by init_io_cache(),
no matter the value of 'error'. In the bug's scenario, 'error' was
set to true in read_sep_field() because '1' (read from the file)
isn't suitable to load into a geometry column (see the sketch after
these notes). Because of 'error', end_io_cache() was not called.
Note: end_io_cache() calls my_b_flush_io_cache(), which will do
nothing wrong given that the file is opened for reads only; see the
init_io_cache() call, which uses only those read-only types:
(get_it_from_net) ? READ_NET : (is_fifo ? READ_FIFO : READ_CACHE).
If the cache were instead used to write to the file,
my_b_flush_io_cache() might write to it, and it might be
questionable to write to the file if 'error' is true. But here
there is no problem.
* Now that 'need_end_io_cache' is checked even if 'error' is true, it needs
to be initialized in all cases.
* Bonus: move some variables to the initialization list.
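A sketch of the failing scenario (hypothetical file name; the data
file is assumed to contain a plain '1'):
CREATE TABLE t1 (g GEOMETRY);
-- The value read from the file cannot be converted to a GEOMETRY,
-- so read_sep_field() sets 'error'; before the fix that skipped
-- end_io_cache() and leaked the buffer allocated by
-- init_io_cache().
LOAD DATA INFILE 'bad_geometry.txt' INTO TABLE t1;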
Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest is the
remaining HAVING part. Extraction of the HAVING part did not take
into account the fact that some of the conditions might be
non-const but have 'used_tables' == 0 (independent subqueries),
and because of that these conditions were cut off by the
make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting elements which belong to the sorted
table and, in addition, elements which are independent
subqueries.
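A sketch of the affected query shape (illustrative tables, not the
original test case):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (1);
-- The subquery is independent of t1 (used_tables() == 0) but not
-- const; before the fix it could be dropped when the HAVING clause
-- was split ahead of the sort.
SELECT a FROM t1 GROUP BY a
HAVING a > (SELECT MIN(b) FROM t2)
ORDER BY a;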
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting elements which belong to the sorted
table and, in addition, elements which are independent
subqueries.
Bug#11764671 57533: UNINITIALISED VALUES IN COPY_AND_CONVERT (SQL_STRING.CC) WITH CERTAIN CHA
When ROUND evaluates a decimal result it uses the Item::decimals
value as the fraction value for the result. In some cases
Item::decimals is greater than the real result fraction value,
and uninitialised memory of the result (decimal) buffer can be
used in further calculations. The issue was introduced by the
fix for Bug 33143. The fix is to remove the erroneous assignment.
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
remove erroneous assignment
The bug was fixed by the patch for BUG 11763109 - 55779: SELECT
DOES NOT WORK PROPERLY IN MYSQL SERVER VERSION "5.1.42 SUSE MYSQL
(the exact same fix as was proposed for this bug). Since the
motivation for the two bug reports was completely different,
however, it still makes sense to push the test case.
This patch contains only the test case.
Some multibyte sequences could be considered by the my_mbcharlen()
function to be a multibyte character, while the more exact
my_ismbchar() does not think so. In such a case the multibyte
sequence is pushed into the 'stack' buffer, which is too small to
accommodate it.
The fix is to allocate the stack buffer in
compliance with the maximum character length.
mysql-test/r/loaddata.result:
test case
mysql-test/t/loaddata.test:
test case
sql/sql_load.cc:
allocate stack buffer in compliance with max character length.
Valgrind warnings were caused by comparing index values to an
uninitialized field.
mysql-test/r/subselect.result:
New test cases.
mysql-test/t/subselect.test:
New test cases.
sql/opt_sum.cc:
Add thd to opt_sum_query enabling it to test for errors.
If we have a non-nullable index, we cannot use it to match null values,
since set_null() will be ignored, and we might compare uninitialized data.
sql/sql_select.cc:
Add thd to opt_sum_query, enabling it to test for errors.
sql/sql_select.h:
Add thd to opt_sum_query, enabling it to test for errors.
There are two problems with ANALYSE():
1. Memory leak.
It happens because do_select() can overwrite the
JOIN::procedure field (with a zero value in our case) and the
JOIN destructor doesn't free the memory allocated for
JOIN::procedure. The fix is to save the original JOIN::procedure
before the do_select() call and restore it after do_select()
execution.
2. Wrong result.
If the ANALYSE() procedure is used for a statement with a LIMIT
clause, it could return an empty result set. It happens because of
a missing analyse::end_of_records() call: the first end_send() call
returns NESTED_LOOP_QUERY_LIMIT and the second call of end_send()
with the end_of_records flag enabled does not happen. The fix is to
return NESTED_LOOP_OK from end_send() if a procedure is active.
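A sketch of the second problem (illustrative table):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (3);
-- Before the fix, hitting the LIMIT made end_send() return
-- NESTED_LOOP_QUERY_LIMIT and the final end_of_records call never
-- happened, so the ANALYSE() summary row was missing.
SELECT a FROM t1 LIMIT 1 PROCEDURE ANALYSE();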
mysql-test/r/analyse.result:
test case
mysql-test/t/analyse.test:
test case
sql/sql_select.cc:
--save the original JOIN::procedure before the do_select() call and
restore it after do_select() execution
--return NESTED_LOOP_OK from end_send() if a procedure is active
When we create a temporary result table for a UNION,
an incorrect max_length for the YEAR field is used, which
leads to an incorrect field value and an incorrect
result string length, as the YEAR field value calculation
depends on the field length.
The fix is to use the underlying item's max_length for the
Item_sum_hybrid::max_length initialization.
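A sketch of the affected pattern (illustrative table and value):
CREATE TABLE t1 (y YEAR);
INSERT INTO t1 VALUES (2011);
-- MAX(y) is materialized in the UNION's temporary table; with the
-- wrong max_length the YEAR value and the result string length
-- could come out incorrect.
SELECT MAX(y) FROM t1 UNION SELECT MAX(y) FROM t1;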
mysql-test/r/func_group.result:
test case
mysql-test/t/func_group.test:
test case
sql/field.cc:
added assert
sql/item_sum.cc:
init Item_sum_hybrid::max_length with
use underlying item max_length for
INT result type.
The Valgrind warning happens due to an early null value check
in Item_func_in::fix_length_and_dec() (before item evaluation).
As a result, null-valued items with uninitialized values are
placed into the array, which leads to Valgrind warnings during
value array sorting.
The fix is to check the null value after item evaluation; the item
is evaluated in the in_array::set() method.
mysql-test/r/func_in.result:
test case
mysql-test/t/func_in.test:
test case
sql/item_cmpfunc.cc:
The fix is to check null value after item evaluation.
on lctn2 systems
There was a local variable in get_all_tables() to store the
"original" value of the database name, as it can get lowercased
depending on the lower_case_table_names value.
get_all_tables() iterates over database names and for each
database iterates over the tables in it.
The "original" db name was assigned in the table names loop.
Thus the first table was OK, but the second and subsequent tables
got the lowercased name left over from processing the first table.
Fixed by moving the assignment of the original database name
from the inner (table name) to the outer (database name) loop.
Test suite added.
In a string context the MIN() and MAX() functions don't take
into account the unsignedness of an UNSIGNED BIGINT argument
column.
For example:
CREATE TABLE t1 (a BIGINT UNSIGNED);
INSERT INTO t1 VALUES (18446668621106209655);
SELECT CONCAT(MAX(a)) FROM t1;
returns -75452603341961.
mysql-test/r/func_group.result:
Test case for bug #11766094.
mysql-test/t/func_group.test:
Test case for bug #11766094.
sql/item.cc:
Bug #11766094 - 59132: MIN() AND MAX() REMOVE UNSIGNEDNESS
The Item_cache_int::val_str() method has been modified to
take into account the unsigned_flag value when converting
data to string.
The Valgrind warning happens due to a missing NULL value check in
Item::get_date. The fix is to add this check.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item.cc:
added check for NULL value
The Valgrind warning happens because the null value check happens
too late in Item_func_month::val_str() (after the result string
calculation). The fix is to check the null value before the result
string calculation.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.h:
check null value before result string calculation.
ASSERTION TABLE->DB_STAT FAILED IN
SQL_BASE.CC::OPEN_TABLE() DURING I_S QUERY
This assert could be triggered if a statement requiring a name
lock on a table (e.g. DROP TRIGGER) executed concurrently
with an I_S query which also used the table.
One connection first started an I_S query that opened a given table.
Then another connection started a statement requiring a name lock
on the same table. This statement was blocked since the table was
in use by the I_S query. When the I_S query resumed and tried to
open the table again as part of get_all_tables(), it would encounter
a table instance with an old version number representing the pending
name lock. Since I_S queries ignore version checks and thus pending
name locks, it would try to continue. This caused it to encounter
the assert. The assert checked that the TABLE instance found with a
different version was a real, open table. However, since this TABLE
instance instead represented a pending name lock, the check would
fail and trigger the assert.
This patch fixes the problem by removing the assert. It is ok for
TABLE::db_stat to be 0 in this case since the TABLE instance can
represent a pending name lock.
Test case added to lock_sync.test.
Issue:
======
Test case correction for bug#11751148.
mysql-test/r/events_bugs.result:
Result file correction for bug#11751148.
mysql-test/t/events_bugs.test:
Test case correction for bug#11751148.
The Valgrind warning happens due to a missing NULL value check in
Item_func::val_decimal. The fix is to add this check.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_func.cc:
added check for NULL value
The Valgrind warning happens due to the uninitialized
cached_format_type field, which is used later in the
Item_func_str_to_date::val_str() method.
The fix is to initialize the cached_format_type field.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
init cached_format_type field