BOGUS "THE TABLE MYSQL.PROC IS MISSING,..."
There was a race condition between loading a stored routine
(function/procedure/trigger) specified by its fully qualified name
SCHEMA_NAME.PROC_NAME and dropping the schema that contains it.
The race window opens when one server thread tries to load a stored
routine that is being executed while another thread tries to drop the
routine's schema.
The race window lies in the implementation of mysql_change_db(), which is
called by db_load_routine() while loading a stored routine into the cache.
mysql_change_db() calls check_db_dir_existence(), which can fail because the
specified database was dropped by a concurrently executing DROP SCHEMA
statement. db_load_routine() calls mysql_change_db() with the 'force_switch'
flag set to true, so when the referenced database is not found, my_error()
is not called and mysql_change_db() returns OK.
This hides the schema-open error from db_load_routine(), whose subsequent
attempt to parse the stored routine then fails. The error is returned to
sp_cache_routines_and_add_tables_aux(), but since my_error() was never
called and THD::main_da was therefore never set, the generic "mysql.proc
table corrupt" error is reported by sp_cache_routines_and_add_tables_aux().
The fix is to install an error handler inside db_load_routine() around the
mysql_opt_change_db() call, and to check afterwards whether ER_BAD_DB_ERROR
was caught.
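A minimal sketch of the racing statements, assuming a database db1 with a
procedure p1 (the names are illustrative); the "before_db_dir_check" sync
point added below is what lets the test hold the CALL inside the database
directory check while the other connection drops the schema:

  -- connection 1
  CREATE DATABASE db1;
  CREATE PROCEDURE db1.p1() SELECT 1;
  CALL db1.p1();       -- loads db1.p1 into the stored routine cache

  -- connection 2, run while connection 1 is still loading db1.p1
  DROP DATABASE db1;   -- before the fix, connection 1 could report the bogus
                       -- "mysql.proc is missing or corrupt" error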
sql/sql_db.cc:
Added synchronization point "before_db_dir_check" to emulate a race condition during
processing of CALL/DROP SCHEMA.
Bug #12652385 - "61493: REORDERING COLUMNS TO POSITION FIRST CAN CAUSE DATA
TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except rename columns if the
new version of the table had exactly the same structure as the old one (i.e.
as a result of such a statement the column names changed their order as
specified, but the data in the columns did not). The same thing happened for
ALTER TABLE DROP COLUMN/ADD COLUMN statements which were supposed to produce
a new version of the table with exactly the same structure as the old one.
In the latter case the result was the same as if the old column had been
renamed instead of being dropped and a new column with a default value had
been created.
Both these problems were caused by the fact that the ALTER TABLE
implementation incorrectly interpreted both situations as a simple renaming
of columns and assumed that the in-place ALTER TABLE algorithm could be used
for them.
This patch fixes the problem by ensuring that whenever some column is moved
to the first position or some column is dropped, the default ALTER TABLE
algorithm involving table copying is always used. This is achieved by
detecting such situations in mysql_prepare_alter_table() and setting
Alter_info::change_level to ALTER_TABLE_DATA_CHANGED for them.
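A hedged illustration of the first scenario (table and column names are made
up); before the patch the statement below only swapped the column names,
leaving the data in its old positions, whereas with the copying algorithm the
data moves together with the columns:

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (2, 20);
  ALTER TABLE t1 MODIFY b INT FIRST;
  SELECT * FROM t1;   -- expected: column b first, with values 10 and 20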
mysql-test/r/alter_table.result:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
mysql-test/t/alter_table.test:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
sql/sql_table.cc:
Changed mysql_prepare_alter_table() to detect situations in which some
column is moved to the first position or some column is dropped, and to
ensure that such ALTER TABLE statements are not carried out using the
in-place algorithm. The latter could happen before this patch if the new
version of the table had the same structure as the old one (except for
the column names).
SYNTAX TRIGGERS IN ANY WAY
A table with triggers that used deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to version 5.1 of the
server. An attempt to execute any statement on such a table resulted in a
parse error being reported. Since this included DROP TRIGGER and DROP TABLE
statements (actually, the latter was allowed but did not work properly for
such tables), it was impossible to fix the problem without manual operations
on the .TRG and .TRN files in the data directory.
The problem was that a failure to parse a trigger body (due to 5.0-only
syntax) when opening the trigger file for a table prevented the table
from being opened. This made all operations on the table impossible
(except DROP TABLE, which due to a peculiarity in its implementation
dropped the table but left the trigger files around).
This patch solves the problem by silencing the error which occurs when
we parse a trigger body during table open. The error message is preserved
for future use and the table is marked as having a broken trigger.
We also try to analyze the parse tree to recover the trigger name, which
will be needed in order to drop the broken trigger. DML statements
which invoke triggers on a table marked as having a broken trigger
are prohibited and emit the saved error message. The same happens for
DDL statements which change triggers, except DROP TRIGGER and DROP TABLE,
which try their best to do what was requested. The table stops being
marked as having a broken trigger once the last such trigger is dropped.
mysql-test/r/trigger-compat.result:
Add results for test case for bug#45235
mysql-test/t/trigger-compat.test:
Add test case for bug#45235.
sql/sp_head.cc:
Added protection against double restoring of MEM_ROOT to the
sp_head::restore_thd_mem_root() method. Since this method can
sometimes be called twice during parsing of a stored routine
(the first time during the normal flow of parsing, and the
second time when a syntax error is detected), we need to
shortcut execution of the method to avoid damaging MEM_ROOT
with the second consecutive call.
sql/sql_trigger.cc:
Added error handler Deprecated_trigger_syntax_handler to
catch non-OOM errors during parsing of trigger body.
Added handling of parse errors to the
Table_triggers_list::check_n_load() method.
sql/sql_trigger.h:
Added new members to handle broken triggers and error messages.
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally, even if a
new value was not specified explicitly. As a consequence the field
was set to the default status value, which corresponds to ENABLE.
The solution is to check whether the status field was explicitly specified
in the ALTER EVENT statement before assigning a new value to it.
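A small sketch of the reported behaviour (event name, schedule and body are
arbitrary); before the fix the ALTER below switched the event back to
ENABLED even though it does not touch the status:

  CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY DISABLE DO SET @a = 1;
  ALTER EVENT e1 COMMENT 'no status change requested';
  SELECT STATUS FROM INFORMATION_SCHEMA.EVENTS WHERE EVENT_NAME = 'e1';
  -- expected (and returned after the fix): DISABLED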
mysql-test/r/events_bugs.result:
Added test result for Bug#11764334.
mysql-test/t/events_bugs.test:
Added test for Bug#11764334.
sql/event_db_repository.cc:
mysql_event_fill_row() was modified: the value of the status field
in the events table is set only if a CREATE EVENT statement
is being processed or if the value was set explicitly in the
ALTER EVENT statement.
Event_db_repository::create_event() was modified: removed the redundant
setting of the status field after the call to mysql_event_fill_row().
sql/event_parse_data.h:
The Event_parse_data structure was modified: added a status_changed
flag that is set to true if the status value was changed in an
ALTER EVENT statement.
sql/sql_yacc.yy:
Set flag status_changed if status was set in ALTER EVENT
statement.
Problem: when wrong data was inserted into indexed GEOMETRY fields
(e.g. a NULL value for a NOT NULL field), MyISAM reported
"ERROR 126 (HY000): Incorrect key file for table; try to repair it"
due to misuse of the key deletion function.
Fix: always use R-tree key functions for R-tree-based indexes
and B-tree key functions for B-tree-based indexes.
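A hedged sketch of the kind of statements involved (the exact statements in
the original report may differ); with IGNORE, the invalid value reaches the
engine and forces the error path that previously used the wrong key deletion
function:

  CREATE TABLE t1 (g GEOMETRY NOT NULL, SPATIAL KEY (g)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (GeomFromText('POINT(1 1)'));
  INSERT IGNORE INTO t1 VALUES (NULL);
  -- before the fix this path could leave the index broken and later
  -- statements failed with ERROR 126; after the fix the bad row is
  -- rejected and the index stays intact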
mysql-test/r/gis-rtree.result:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- test result.
mysql-test/t/gis-rtree.test:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- test case.
storage/myisam/mi_update.c:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- when handling update errors, check for HA_ERR_NULL_IN_SPATIAL as well, to be
consistent with mi_write();
- always use keyinfo->ck_delete()/ck_insert() instead of _mi_ck_delete()/_mi_ck_write()
to handle the index properly, as it may be of B-tree or R-tree type.
storage/myisam/mi_write.c:
Bug#11764487: myisam corruption with insert ignore and invalid spatial data
- always use keyinfo->ck_delete() instead of _mi_ck_delete() to handle
the index properly, as it may be of B-tree or R-tree type.
will create multiple running events.
A CREATE EVENT IF NOT EXISTS statement on an event that existed and was
enabled caused multiple instances of the event to run. Disabling the event
didn't help. If the event was dropped, it stopped running, but when it was
created again, multiple instances were still running. The only way out of
this situation was to restart the server.
The problem was that Event_db_repository::create_event() didn't return
enough information to discriminate between the situation when the event
didn't exist and was created and the situation when the event did exist and
was not created (but a warning was emitted). As a result, in the latter case
the event was added to the in-memory queue of events a second time, which
led to unwarranted multiple executions of the same event.
The solution is to add an out-parameter to the
Event_db_repository::create_event() method which signals that the event was
not created because it already exists and so should not be added to the
in-memory queue.
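A hedged sketch of the scenario (event name, schedule and body are
illustrative); before the fix the second statement added a second copy of e1
to the in-memory queue even though only an "already exists" warning was
reported:

  CREATE TABLE event_log (ts DATETIME);
  CREATE EVENT IF NOT EXISTS e1 ON SCHEDULE EVERY 1 SECOND
    DO INSERT INTO event_log VALUES (NOW());
  CREATE EVENT IF NOT EXISTS e1 ON SCHEDULE EVERY 1 SECOND
    DO INSERT INTO event_log VALUES (NOW());
  -- before the fix event_log grew roughly twice per second;
  -- after the fix only one instance of e1 runs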
mysql-test/r/events_bugs.result:
Added results for test for Bug#12546938.
mysql-test/t/events_bugs.test:
Added test for Bug#12546938.
sql/event_db_repository.cc:
Event_db_repository::create_event() was modified: the newly added
out-parameter event_already_exists is set to true if the event wasn't
created because it already existed and the IF NOT EXISTS clause was present.
sql/event_db_repository.h:
Added out-parameter 'event_already_exists' to create_event() method.
sql/events.cc:
Events::create_event() was modified: a new element is inserted into the
event queue only if the event was actually created.
The assertion happened due to a missing NULL value check in the
Item_func_round::fix_length_and_dec() function.
The fix: added a NULL value check for the second parameter.
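A minimal illustration, assuming the second argument is a literal NULL (the
exact query that hit the assertion may have been different):

  SELECT ROUND(1.5, NULL);
  -- expected result: NULL; before the fix a debug build could hit the
  -- assertion in Item_func_round::fix_length_and_dec()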
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
added NULL value check for second parameter.
LEAK WITH PARTITIONED ARCHIVE TABLES
CHECK TABLE against an ARCHIVE table caused a server crash when file
descriptors were exhausted.
The ARCHIVE engine didn't handle errors when opening the data file for
CHECK TABLE.
mysql-test/r/archive_debug.result:
A test case for BUG#12402794.
mysql-test/t/archive_debug.test:
A test case for BUG#12402794.
storage/archive/azio.c:
A test case for BUG#12402794.
storage/archive/ha_archive.cc:
Handle init_archive_reader() failure.
There are two problems:
1. A missing check of the 'year' parameter (the year cannot be greater than
9999) in the MAKEDATE() function. Fix: added a check that the year cannot
be greater than 9999.
2. A missing check for a zero date in the FROM_DAYS() function.
Fix: added a zero-date check to the Item_func_from_days::get_date()
function.
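Two statements that exercise the new checks (the pre-fix symptoms may have
differed between debug and release builds):

  SELECT MAKEDATE(10000, 1);   -- year greater than 9999: returns NULL after the fix
  SELECT FROM_DAYS(0);         -- zero date: goes through the new check in
                               -- Item_func_from_days::get_date()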
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
--added check that year can not be greater than 9999 for makedate() function
--added zero date check into Item_func_from_days::get_date() function
Implementing test review comment.
Bug test scenario:
SELECT does not return a result set for the "equal" (=) and "NULL-safe
equal" (<=>) operators on the BIT data type. Extending this scenario to
all data types.
WORK WITH --START-POSITION
If --start-position is set to start after the FD (format description)
event, mysqlbinlog outputs an error stating that it has not found an FD
event. However, it is not that mysqlbinlog does not find the event, but
rather that it does not process it in the regular way (i.e., it does not
print it). Given that --base64-output=DECODE-ROWS is in use, not printing
it is actually fine.
To fix this, we make mysqlbinlog not complain when it has not printed the
FD event, is outputting in base64, but is decoding the rows.
The query was re-written *after* we had tagged it with NON_AGG_FIELD_USED.
Remove the flag before continuing.
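A hedged guess at the shape of the affected query (the exact statements are
in the new subselect test case); the subquery is rewritten from
'1 < some (...)' into '1 < max(...)', and with the stale NON_AGG_FIELD_USED
flag this could raise a spurious ONLY_FULL_GROUP_BY error:

  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT);
  SELECT 1 FROM t1 WHERE 1 < SOME (SELECT a FROM t1);
  -- before the fix this could fail with ER_MIX_OF_GROUP_FUNC_AND_FIELDS
  -- even though no aggregates and non-aggregated columns are actually mixed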
mysql-test/r/explain.result:
Update test case for Bug#48295.
mysql-test/r/subselect.result:
New test case.
mysql-test/t/explain.test:
Update test case for Bug#48295.
mysql-test/t/subselect.test:
New test case.
sql/item.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
sql/item_subselect.cc:
Remove non_agg_field_used when we rewrite query '1 < some (...)' => '1 < max(...)'
sql/item_sum.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
sql/mysql_priv.h:
Remove unused #defines.
sql/sql_lex.cc:
Initialize new member variables.
sql/sql_lex.h:
Replace full_group_by_flag with two boolean flags,
and introduce accessors for manipulating them.
sql/sql_select.cc:
Use accessor functions for non_agg_field_used/agg_func_used.
The problem was that a wrong structure of mysql.event was not detected and
the server continued to use the wrongly structured data.
The fix is to check the structure of mysql.event after opening it and before
any use. That makes operations with events more strict -- some operations
that might have worked before now throw errors. That seems to be OK.
Another side effect of the patch is that if mysql.event is corrupted,
unrelated DROP DATABASE statements issue an SQL warning about the inability
to open the mysql.event table.
The partitioning engine checked the auto_increment column even if it was not to be written,
triggering a DBUG_ASSERT.
Fixed by checking if table->write_set for that column was set.
USING '..' ON WINDOWS
Backport of the fix to 5.0 (to be null-merged to 5.1).
Moved the test into the main test suite.
Made mysql-test-run.pl not use symlinks for std_data, as the symlinks
are now properly recognized by secure_file_priv.
Made sure the paths in load_file(), LOAD DATA and SELECT .. INTO OUTFILE
are checked against secure_file_priv in a correct way, similarly to 5.1,
by applying the backported, extended is_secure_file_path() before the
comparison.
Added an extensive test with all the variants of upper/lower case,
slash/backslash and case sensitivity.
Added a few comments to the code.
Partitions can have different ref_length (position data length).
Removed a DBUG_ASSERT which crashed debug builds when MAX_ROWS was used
on some partitions.
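A hedged example of a table where partitions can end up with different
ref_length values, since per-partition MAX_ROWS makes MyISAM choose
different row-pointer sizes (the exact table that triggered the assert is
not shown here):

  CREATE TABLE t1 (a INT) ENGINE = MyISAM
  PARTITION BY RANGE (a) (
    PARTITION p0 VALUES LESS THAN (100)    MAX_ROWS = 100,
    PARTITION p1 VALUES LESS THAN MAXVALUE MAX_ROWS = 100000000
  );
  INSERT INTO t1 VALUES (1), (1000);
  -- before the fix, debug builds could hit the removed DBUG_ASSERT on
  -- tables like this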
The calc_daynr() function returns a negative result
if a malformed date with a zero year and month is used.
An attempt to calculate the week day from the negative value
leads to a crash. The fix is to return NULL for the
'W', 'a', 'w' specifiers if a zero year and month is used.
Additional fix for calc_daynr():
--added an assertion that the result cannot be negative
--return 0 if a zero year and month is used
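A hedged example of the affected specifiers (the original crash may have
been reached through a different entry point into calc_daynr()):

  SELECT DATE_FORMAT('0000-00-01', '%W');
  SELECT DATE_FORMAT('0000-00-01', '%a');
  SELECT DATE_FORMAT('0000-00-01', '%w');
  -- after the fix each of these returns NULL instead of operating on the
  -- negative day number produced by calc_daynr()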
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql-common/my_time.c:
--added assertion that result can not be negative
--return 0 if zero year and month is used
sql/item_timefunc.cc:
Return NULL for the 'W', 'a', 'w' specifiers
if a zero year and month is used.
DEFINITION OF ANY ROUTINE.
This follow-up patch removes SHOW PROCEDURE CODE from the test
case as this command is only available on debug versions of the
server and therefore caused the test to fail on release builds.
DEFINITION OF ANY ROUTINE.
The problem was that having the SELECT privilege on any column of the
mysql.proc table by mistake allowed the user to see the definition
of all routines (using SHOW CREATE PROCEDURE/FUNCTION and SHOW
PROCEDURE/FUNCTION CODE).
This patch fixes the problem by making sure that those commands
are only allowed if the user has the SELECT privilege on the
mysql.proc table itself.
Test case added to sp-security.test.
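A hedged sketch of the privilege boundary after the patch (user, host and
routine names are made up; the GRANT statements are run by a privileged
account):

  GRANT SELECT (db, name) ON mysql.proc TO 'limited'@'localhost';
  -- as 'limited'@'localhost':
  SHOW CREATE PROCEDURE test.p1;   -- denied: only column privileges on mysql.proc

  GRANT SELECT ON mysql.proc TO 'limited'@'localhost';
  -- as 'limited'@'localhost':
  SHOW CREATE PROCEDURE test.p1;   -- allowed: SELECT on the mysql.proc table itself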
A SELECT from a view with an underlying HAVING clause failed with the
message: "1356: View '...' references invalid table(s) or column(s)
or function(s) or definer/invoker of view lack rights to use them".
The bug is a regression of the fix for bug 11750328 - 40825 (a similar
case, but there the HAVING clause references an aliased field).
In the old fix for bug 40825 the Item_field::name_length value was
used in place of the real length of Item_field::name. However,
in some cases Item_field::name_length is not in sync with the
actual name length (TODO: combine name and name_length into a
solid String field).
The Item_ref::print() method has been modified to calculate the actual
name length every time.
mysql-test/r/view.result:
Test case for bug #11829681
mysql-test/t/view.test:
Test case for bug #11829681
sql/item.cc:
Bug #11829681 - 60295: ERROR 1356 ON VIEW THAT EXECUTES FINE AS A QUERY
The Item_ref::print() method has been modified to calculate actual
name length every time.
sql/item.h:
Minor commentary.
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
mysqlslap uses a schema to run its tests on and later
drops it if auto-generate-sql is used. This can be a
problem if the schema already exists.
If create-schema is used together with the auto-generate-sql option,
mysqlslap drops the specified database while performing the cleanup.
Fixed by introducing an option, --no-drop, which, if used,
prevents the schema from being dropped at the end of the test.
client/client_priv.h:
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
Added an option.
client/mysqlslap.c:
Bug#11765157 - 58090: mysqlslap drops schema specified in
create_schema if auto-generate-sql also set.
Introduced an option 'no-drop' to forbid the removal of schema
even if 'create' or 'auto-generate-sql' options are used.
mysql-test/r/mysqlslap.result:
Added a testcase for Bug#11765157.
mysql-test/t/mysqlslap.test:
Added a testcase for Bug#11765157.
mysql-test/t/loaddata.test:
Test for the bug; without the fix, running the test with --valgrind would
show the leak and make the test fail.
sql/sql_load.cc:
* In the READ_INFO class, 'need_end_io_cache' is true as long as init_io_cache()
was called, so if it's true we need to call end_io_cache() to free the memory
allocated by init_io_cache(), no matter the value of 'error'. In the bug's
scenario, 'error' was set to true in read_sep_field() because
'1' (read from the file) isn't suitable to load into a geometry column. Because
of 'error', end_io_cache() was not called.
Note: end_io_cache() calls my_b_flush_io_cache(), which will do nothing wrong
given that the file is opened for reads only; see the init_io_cache() call,
which uses only those read-only types:
(get_it_from_net) ? READ_NET : (is_fifo ? READ_FIFO : READ_CACHE).
If the cache were instead used to write to the file, my_b_flush_io_cache()
might write to it, and it might be questionable to write to the file
if 'error' is true. But here there is no problem.
* Now that 'need_end_io_cache' is checked even if 'error' is true, it needs
to be initialized in all cases.
* Bonus: moved some variables to the initialization list.
Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest is the
HAVING part. The extraction of the HAVING part does not take into
account the fact that some of the conditions might be non-constant
but have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by the
make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting the elements which belong to the sorted
table and, in addition, the elements which are independent
subqueries.
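A hedged sketch of the class of query affected (the tables and the
independent subquery are illustrative): the HAVING condition combines a
column of the sorted table with a subquery that references no outer tables,
and the ORDER BY is what triggers the split before sorting:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  SELECT a FROM t1
  GROUP BY a
  HAVING a > 0 AND (SELECT MAX(b) FROM t2) IS NULL
  ORDER BY a DESC;
  -- before the fix the independent-subquery part of the HAVING condition
  -- could be lost when the condition was split for sorting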
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
This allows extracting the elements which belong to the sorted
table and, in addition, the elements which are independent
subqueries.
Bug#11764671 57533: UNINITIALISED VALUES IN COPY_AND_CONVERT (SQL_STRING.CC) WITH CERTAIN CHA
When ROUND evaluates a decimal result, it uses the Item::decimals
value as the fraction (scale) of the result. In some cases
Item::decimals is greater than the real fraction of the result,
and uninitialised memory of the result (decimal) buffer can be
used in further calculations. The issue was introduced by the fix
for Bug#33143. The fix is to remove the erroneous assignment.
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
remove erroneous assignment
The bug was fixed by the patch for Bug#11763109 - 55779: SELECT
DOES NOT WORK PROPERLY IN MYSQL SERVER VERSION "5.1.42 SUSE MYSQL..."
(the exact same fix as was proposed for this bug). Since the motivation
for the two bug reports was completely different, however, it still makes
sense to push the test case.
This patch contains only the test case.
Some multibyte sequences could be considered by the my_mbcharlen() function
to be a multibyte character, while the more exact my_ismbchar() did not
agree. In such a case the multibyte sequence was pushed into the 'stack'
buffer, which is too small to accommodate the sequence.
The fix is to allocate the stack buffer in
compliance with the maximum character length.
mysql-test/r/loaddata.result:
test case
mysql-test/t/loaddata.test:
test case
sql/sql_load.cc:
allocate stack buffer in compliance with max character length.