Removes the regression bug#38751.
sql/ha_partition.cc:
Post-push fix for bug#38804 (backport of bug#33479).
Removes the regression bug#38751.
The archive engine relies on an ha_archive::info() call to flush data before
the copy takes place in ALTER TABLE.
This ensures that all partitions get an info() call, without having
to always forward info(HA_STATUS_AUTO) to all partitions.
Rework code to use the DBUG_EXECUTE_IF macro instead of _db_script_keyword_;
the usage of the latter caused breakage in other
trees as it got removed from the dbug library.
sql/sql_base.cc:
Rework code to remove unreliable usage of _db_script_keyword_.
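For reference, the macro-based form looks roughly like the following sketch
(the keyword string and the action are made-up placeholders, not the actual
code in sql_base.cc):

  /* run the debug-only action only when the named dbug keyword is enabled */
  DBUG_EXECUTE_IF("my_debug_keyword", some_debug_only_action(););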
The optimizer pulls up aggregate functions which should be aggregated in
an outer select. At some point it may substitute such a function for a field
in the temporary table. The setup_copy_fields function doesn't take this
into account and may overrun the copy_field buffer.
Fixed by filtering out the fields referenced through the specialized
reference for aggregates (Item_aggregate_ref).
Added an assertion to make sure bugs that cause a similar discrepancy
don't go undetected.
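A rough sketch of the filtering idea, assuming the fix exposes the aggregate
ref type through Item_ref::ref_type() (the surrounding loop, the variable name
and the exact enum value are paraphrased, not the actual setup_copy_fields()
code):

  /* skip items only reachable through an Item_aggregate_ref: they are
     aggregated in the outer select and must not consume copy_field slots */
  if (real_pos->type() == Item::REF_ITEM &&
      ((Item_ref *) real_pos)->ref_type() == Item_ref::AGGREGATE_REF)
    continue;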
mysql-test/r/func_group.result:
Bug #37348: test case
mysql-test/t/func_group.test:
Bug #37348: test case
sql/item.cc:
Bug #37348: Added a way to distinguish Item_aggregate_ref from the other types of refs
sql/item.h:
Bug #37348: Added a way to distinguish Item_aggregate_ref from the other types of refs
sql/sql_select.cc:
Bug #37348:
- Don't consider copying field references
seen through Item_aggregate_ref
- check for discrepancies between the number of expected
fields that need copying and the actual fields copied.
The '@' symbol cannot be used in a host name according to RFC 952.
The fix:
added the function check_host_name(LEX_STRING *str),
which checks that all characters in the host name string are valid and
that the host name length does not exceed the maximum host name length
(essentially the check_string_length() check moved from the parser into check_host_name()).
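A minimal, self-contained sketch of the kind of check described above (the
"_ish" names are stand-ins for the real LEX_STRING/HOSTNAME_LENGTH symbols,
and the accepted character set shown here is illustrative, not exhaustive):

  #include <cctype>
  #include <cstddef>

  struct LEX_STRING_ish { const char *str; size_t length; };
  static const size_t HOSTNAME_LENGTH_ish = 60;

  /* returns true on error: name too long or contains an invalid character */
  bool check_host_name_ish(const LEX_STRING_ish *name)
  {
    if (name->length > HOSTNAME_LENGTH_ish)
      return true;
    for (size_t i = 0; i < name->length; i++)
    {
      unsigned char c = (unsigned char) name->str[i];
      /* '@' in particular is rejected */
      if (c == '@' ||
          !(isalnum(c) || c == '.' || c == '-' || c == '_' || c == '%'))
        return true;
    }
    return false;
  }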
mysql-test/r/create.result:
test result
mysql-test/t/create.test:
test case
sql/mysql_priv.h:
added function check_host_name(LEX_STRING *str)
sql/sql_parse.cc:
added function check_host_name(LEX_STRING *str)
which checks that all characters in the host name string are valid and
that the host name length does not exceed the maximum host name length (HOSTNAME_LENGTH).
sql/sql_yacc.yy:
using newly added function check_host_name()
The problem:
the I_S.VIEWS table does not check the presence of the SHOW_VIEW_ACL|SELECT_ACL
privileges for a view. This leads to a discrepancy between SHOW CREATE VIEW
and I_S.VIEWS.
The fix:
added the appropriate check.
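Illustrative only (the privilege constants are shown as stand-in bits and the
exact privilege combination is the one implemented in sql_show.cc; this sketch
requires both bits, mirroring what SHOW CREATE VIEW asks for):

  enum { SELECT_ACL_ish = 1U << 0, SHOW_VIEW_ACL_ish = 1U << 1 };

  /* show the view definition only when the required privileges are present */
  bool may_show_view_definition(unsigned long access)
  {
    unsigned long need = SHOW_VIEW_ACL_ish | SELECT_ACL_ish;
    return (access & need) == need;
  }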
mysql-test/r/information_schema_db.result:
test result
mysql-test/t/information_schema_db.test:
test case
sql/sql_show.cc:
The problem:
I_S views table does not check the presence of SHOW_VIEW_ACL|SELECT_ACL
privileges for a view. It leads to discrepancy between SHOW CREATE VIEW
and I_S.VIEWS.
The fix:
added appropriate check.
The Blackhole engine did not support row-based replication
since the delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support for the
Blackhole engine by implementing the two functions mentioned
above, and making the engine pretend that it has found the
correct row to delete or update when executed from the slave
SQL thread by implementing index and range searching functions.
It is necessary to only pretend this for the SQL thread, since
a SELECT executed on the Blackhole engine will otherwise never
return EOF, causing a livelock.
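A simplified illustration of the "pretend we found the row" behaviour for the
slave SQL thread (stand-in types and error codes; the real handler detects the
applier via thd->query == NULL while processing row events, as noted in the
per-file comments below):

  #include <cstddef>

  struct Thd_ish { const char *query; bool slave_thread; };

  static const int OK_ish = 0;
  static const int ERR_WRONG_COMMAND_ish = 1;   /* stand-in error code */

  int blackhole_update_or_delete_row(const Thd_ish *thd)
  {
    /* only the replication applier gets a fake success */
    if (thd->slave_thread && thd->query == NULL)
      return OK_ish;
    return ERR_WRONG_COMMAND_ish;
  }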
mysql-test/extra/binlog_tests/blackhole.test:
Blackhole now handles row-based replication.
mysql-test/extra/rpl_tests/rpl_blackhole.test:
Test helper file for testing that blackhole actually
writes something to the binary log on the slave.
mysql-test/suite/binlog/t/binlog_multi_engine.test:
Blackhole now handles row-based replication.
mysql-test/suite/rpl/t/rpl_blackhole.test:
Test that Blackhole works with primary key, index, or none.
sql/log_event.cc:
Correcting code to only touch filler bits and leave
all other bits alone. It is necessary since there is
no guarantee that the engine will be able to fill in
the bits correctly (e.g., the blackhole engine).
storage/blackhole/ha_blackhole.cc:
Adding definitions for update_row() and delete_row() to return OK
when executed from the slave SQL thread with thd->query == NULL
(indicating that row-based replication events are being processed).
Changing rnd_next(), index_read(), index_read_idx(), and
index_read_last() to return OK when executed from the slave SQL
thread (faking that the row has been found so that processing
proceeds to update/delete the row).
storage/blackhole/ha_blackhole.h:
Enabling row capabilities for engine.
Defining write_row(), update_row(), and delete_row().
Making write_row() private (as it should be).
When analyzing the possible index use cases the server was re-using an internal structure.
This is wrong, as this internal structure gets updated during the analysis.
Fixed by making a copy of the internal structure for every place it needs to be used.
Also stopped the generation of empty SEL_TREE structures that unnecessarily
complicate the analysis.
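The general technique, shown on a toy structure (this is not opt_range.cc
code; it only illustrates "copy before merging" so that the cached original
is never mutated):

  struct Tree_ish
  {
    int keys_used;
    Tree_ish(int k) : keys_used(k) {}
    Tree_ish(const Tree_ish &other) : keys_used(other.keys_used) {} /* copy ctor */
  };

  /* OR-merge into a fresh copy instead of updating the shared input */
  Tree_ish *tree_or_ish(const Tree_ish *cached, const Tree_ish *other)
  {
    Tree_ish *result = new Tree_ish(*cached);
    result->keys_used += other->keys_used;   /* stand-in for the real merge */
    return result;
  }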
mysql-test/r/index_merge.result:
Bug#37943: test case
mysql-test/t/index_merge.test:
Bug#37943: test case
sql/opt_range.cc:
Bug#37943:
- Make copy constructors for SEL_TREE and sub-structures and use them when OR-ing trees.
- don't generate empty SEL_TREEs. Return NULL instead.
Bug#37536: Thread scheduling causes performance degradation at low thread count
Deprecated --skip-thread-priority startup option as newer versions of
the server won't change the thread priorities by default.
Giving threads different priorities might yield marginal improvements
on some platforms (where it actually works), but on the other hand it
might cause significant degradation depending on the thread count and
number of processors. Meddling with the thread priorities is not a
safe bet as it is very dependent on the behavior of the CPU scheduler
and the system where MySQL is being run.
From MySQL 6.0 and up the default behavior is to not modify
the thread priorities.
sql/mysqld.cc:
Deprecate --skip-thread-priority
This patch contains fixes for two problems:
1. As originally reported, the server crashed on Mac OS X when trying to access
an EXAMPLE table after the EXAMPLE plugin was installed.
It turned out that the dynamically loaded EXAMPLE plugin called the
function hash_search() from a Mac OS X system library, instead of
hash_search() from MySQL's mysys library. Makefile.am in storage/example
does not include libmysys, so the Mac OS X linker arranged for the hash_search()
function to be resolved against the system library when the shared object is
loaded.
One possible solution would be to include libmysys into the linkage of
dynamic plugins. But then we must have a libmysys.so, which must be
used by the server too. This could have a minimal performance impact,
but foremost the change seems to be too risky at the current state of
MySQL 5.1.
The selected solution is to rename MySQL's hash_search() to my_hash_search(),
as has been done before with hash_insert() and hash_reset().
Since this is the third time we need to rename a hash_*() function,
I renamed all hash_*() functions to my_hash_*().
To avoid changing a zillion calls to these functions, and announcing
this to hundreds of developers, I added defines that map the old names
to the new names.
This change is in hash.h and hash.c.
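The compatibility mapping is plain #defines from the old names to the new
ones, for example (a representative subset; the real header may spell the
mappings out with full argument lists):

  #define hash_init    my_hash_init
  #define hash_free    my_hash_free
  #define hash_search  my_hash_search
  #define hash_delete  my_hash_delete
  #define hash_element my_hash_element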
2. The other problem was improper implementation of the handlerton-to-plugin
mapping. We use a fixed-size array to hold a plugin reference for each
handlerton. On every install of a handler plugin, we allocated a new slot
of the array. On uninstall we did not free it. After some uninstall/install
cycles the array overflowed. We did not check for overflow.
One fix is to check for overflow to stop the crashes.
Another fix is to free the array slot at uninstall and search for a free slot
at plugin install.
This change is in handler.cc.
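A self-contained sketch of the slot handling (array name, size and element
type are stand-ins for the real hton2plugin array):

  #include <cstddef>

  static const size_t MAX_HA_ish = 64;
  static void *hton2plugin_ish[MAX_HA_ish];   /* NULL means "free slot" */

  /* install: reuse a freed slot if any, otherwise fail instead of overrunning */
  int find_free_hton_slot()
  {
    for (size_t i = 0; i < MAX_HA_ish; i++)
      if (hton2plugin_ish[i] == NULL)
        return (int) i;
    return -1;                                /* array full: report an error */
  }

  /* uninstall: give the slot back */
  void free_hton_slot(int slot)
  {
    hton2plugin_ish[slot] = NULL;
  }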
include/hash.h:
Bug#37958 - test main.plugin crash on Mac OS X when selecting from EXAMPLE engine.
Renamed hash_*() functions to my_hash_*().
Added defines that map old names to new names.
mysys/hash.c:
Bug#37958 - test main.plugin crash on Mac OS X when selecting from EXAMPLE engine.
Renamed hash_*() functions to my_hash_*().
sql/handler.cc:
Bug#37958 - test main.plugin crash on Mac OS X when selecting from EXAMPLE engine.
Protect against a failing ha_initialize_handlerton() in ha_finalize_handlerton().
Free hton2plugin slot on uninstall of a handler plugin.
Reuse freed slots of the hton2plugin array.
Protect against array overrun.
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
Problem: we replace all references to local variables in stored procedures
with NAME_CONST(name, value) when writing to the binary log. However, if the
value's collation differs we might get an 'illegal mix of collations'
error as we don't pass the collation to the function.
Fix: pass the value's collation to NAME_CONST().
Note: actually we should pass the value's derivation to NAME_CONST() as well.
That is impossible without modifying the parser. For now we always set the
derivation to DERIVATION_IMPLICIT, the same as local variables have.
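As an illustration of the intended shape of the logged expression (the
variable name, value and character set here are made up; the accepted form is
the _charset'foo' COLLATE 'bar' string mentioned in the item.cc comment
below):

  before:  ... NAME_CONST('s', 'text') ...
  after:   ... NAME_CONST('s', _koi8r'text' COLLATE 'koi8r_general_ci') ...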
mysql-test/r/binlog.result:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- test result.
mysql-test/r/ctype_cp932_binlog.result:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- results adjusted.
mysql-test/r/rpl_sp.result:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- results adjusted.
mysql-test/t/binlog.test:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- test case.
sql/item.cc:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- allow NAME_CONST() to get _charset'foo' COLLATE 'bar' strings
(see Item_func_set_collation).
sql/sp_head.cc:
Fix for bug#39182: Binary log producing incompatible character set query
from stored procedure.
- pass the value's collation to NAME_CONST().
Post-merge bug fix: lock_type is an enumeration type and not a bit mask.
sql/sql_cache.cc:
Check for lock type explicitly. Also err on the safe side and
invalidate the query cache for any write lock.
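A tiny self-contained illustration of the point (the enum only mirrors the
ordering idea of thr_lock_type; the names are stand-ins):

  enum lock_type_ish
  {
    TL_READ_ish, TL_READ_NO_INSERT_ish,       /* read locks ...       */
    TL_WRITE_ALLOW_WRITE_ish, TL_WRITE_ish    /* ... then write locks */
  };

  /* correct: compare ordinals; a bit-and test against an enum is meaningless */
  bool is_write_lock(lock_type_ish t)
  {
    return t >= TL_WRITE_ALLOW_WRITE_ish;
  }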
Server created "arc" directories inside database directories and
maintained there useless copies of .frm files.
Creation and renaming procedures of those copies as well as
creation of "arc" directories has been discontinued.
Removal procedure has been kept untouched to be able to
cleanup existent database directories by the DROP DATABASE
query. Also view renaming procedure has been updated to remove
these directories.
sql/parse_file.cc:
Fixed bug #17823: 'arc' directories inside database directories.
View/table creation and renaming procedures maintained
backup copies of .frm files. Those copies were never used,
so this feature was incomplete and unnecessary.
1. Unwanted code has been hidden by FRM_ARCHIVE ifdefs
(the FRM_ARCHIVE macro is not defined).
2. Renaming procedure has been modified to remove obsolete
"arc" directories.
sql/parse_file.h:
Fixed bug #17823: 'arc' directories inside database directories.
The "thd" parameter has been added to the rename_in_schema_file()
function.
sql/sql_db.cc:
Fixed bug #17823: 'arc' directories inside database directories.
Scope of the mysql_rm_arc_files() function has been changed to
global for use from the parse_file.cc file.
sql/sql_view.cc:
Fixed bug #17823: 'arc' directories inside database directories.
Added the "thd" argument to rename_in_schema_file() calls.
The JOIN for the subselect wasn't cleaned up if we came upon an error
during sub_select() execution. That leads to an assertion failure
in close_thread_tables().
Part of the 6.0 code backported.
per-file comments:
mysql-test/r/sp-error.result
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test result
mysql-test/t/sp-error.test
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
test case
sql/sp_head.cc
Bug#37949 Crash if argument to SP is a subquery that returns more than one row
lex->unit.cleanup() call added if not substatement
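In code terms the added cleanup is essentially the following (paraphrased
from the sp_head.cc comment above, not a verbatim quote):

  if (!thd->in_sub_stmt)
    lex->unit.cleanup();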
The problem is that when statement-based replication is enabled,
statements such as INSERT INTO .. SELECT FROM .. and CREATE TABLE
.. SELECT FROM need to grab a read lock on the source table that
does not permit concurrent inserts, which would in turn be denied
if the source table is a log table because log tables can't be
locked exclusively.
The solution is to not take such a lock when the source table is
a log table, as it is unsafe to replicate log tables under
statement-based replication. Furthermore, the read lock that does not
permit concurrent inserts is now only taken if statement-based
replication is enabled and the source table is not a log table.
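A self-contained sketch of the decision rule described above (boolean inputs
stand in for the real THD/TABLE checks):

  enum read_lock_ish { READ_ALLOW_CONCURRENT_INSERT, READ_NO_INSERT };

  read_lock_ish source_table_read_lock(bool binlog_enabled,
                                       bool statement_based_format,
                                       bool source_is_log_table)
  {
    /* only SBR on a non-log source table needs to block concurrent inserts */
    if (binlog_enabled && statement_based_format && !source_is_log_table)
      return READ_NO_INSERT;
    return READ_ALLOW_CONCURRENT_INSERT;
  }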
include/thr_lock.h:
Introduce yet another lock type that may get upgraded depending
on the binary log format. This is not an optimal solution but
can be easily improved later.
mysql-test/r/log_tables.result:
Add test case result for Bug#34306
mysql-test/suite/binlog/r/binlog_stm_row.result:
Add test case result for Bug#34306
mysql-test/suite/binlog/t/binlog_stm_row.test:
Add test case for Bug#34306
mysql-test/t/log_tables.test:
Add test case for Bug#34306
sql/lock.cc:
Assert that TL_READ_DEFAULT is not a real lock type.
sql/mysql_priv.h:
Export new function.
sql/mysqld.cc:
Remove using_update_log.
sql/sql_base.cc:
Introduce a function that returns the appropriate read lock type
depending on how the statement is going to be replicated. It will
only take a TL_READ_NO_INSERT lock if the binary log is enabled, the
binary log format is statement-based, and the table is not a log table.
sql/sql_parse.cc:
Remove using_update_log.
sql/sql_update.cc:
Use new function to choose read lock type.
sql/sql_yacc.yy:
The lock type is now decided at open_tables time. The old behavior was
actually misleading, as the binary log format can be switched dynamically,
yet the lock type would not change for statements that had already been
parsed when the binary log format was changed (i.e. prepared statements).
In order to improve performance when replicating to partitioned
MyISAM tables with the row-based format, the number of rows in the current
rows log event is estimated and used to set up the storage engine for bulk
inserts.
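Conceptually the applier now does something like the following around one
rows event (paraphrased; the estimate and variable names are illustrative,
not the actual log_event.cc code):

  ha_rows estimated_rows = remaining_event_bytes / average_row_length;
  table->file->start_bulk_insert(estimated_rows);
  /* ... apply each row of the event ... */
  table->file->end_bulk_insert();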
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address range
overlapped that of the new field value. This caused problems when the
cached value storage was reallocated to provide access for a new
character set representation. The patch checks the address ranges, and if
they overlap, the new field value is copied to new storage before it is
converted to the new character set.
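The overlap test itself is simple; a self-contained sketch of the guard
(pointer and length names are stand-ins for the real Field_blob members):

  #include <cstddef>

  /* true when [from, from+from_length) intersects [buf, buf+buf_length);
     in that case 'from' must be copied out before the buffer is reallocated
     for the character set conversion */
  bool ranges_overlap(const char *from, size_t from_length,
                      const char *buf, size_t buf_length)
  {
    return from < buf + buf_length && buf < from + from_length;
  }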
mysql-test/r/sp.result:
Added result set
mysql-test/t/sp.test:
Added test case
sql/field.cc:
The source and destination address ranges of a character conversion must not
overlap, or the 'from' address will be invalidated as the temporary value
object is re-allocated to fit the new character set.
sql/field.h:
Added comments
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
and
Bug#33555: Group By Query does not correctly aggregate partitions
Backport of bug-33257 which is the same bug.
read_range_*() calls were not passed to the partition handlers,
but were translated into index_read/next family calls,
resulting in duplicate rows and wrong aggregations.
mysql-test/r/partition_range.result:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Updated result file
mysql-test/t/partition_range.test:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
Re-enabled the test
sql/ha_partition.cc:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
backport of bug-33257, correct handling of read_range_* calls,
without converting them to index_read/next calls
sql/ha_partition.h:
Bug#30573: Ordered range scan over partitioned tables returns some rows twice
backport of bug-33257, correct handling of read_range_* calls,
without converting them to index_read/next calls
The fix for bug 31887 was incomplete: it assumes that all the
field types returned by the IS_NUM macro are descendants of
Field_num and tries to zero-fill the values before doing constant
substitution with such fields when they are compared to constant string
values.
The only exception to this is Field_timestamp: it's in the IS_NUM
macro, but is not a descendant of Field_num.
Fixed by excluding timestamp fields (Field_timestamp) from zero-filling
when converting the constant to compare with into a string.
Note that this will not exclude the timestamp columns from const
propagation.
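A minimal sketch of the added exclusion (boolean inputs stand in for the
IS_NUM() macro result and the field type check; this is not the actual
item.cc code):

  /* zero-fill only genuinely numeric fields; TIMESTAMP satisfies IS_NUM()
     but is compared as a date/time string, so it must be left alone */
  bool should_zero_fill(bool is_num_field, bool is_timestamp_field)
  {
    return is_num_field && !is_timestamp_field;
  }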
mysql-test/r/compare.result:
Bug #39353: test case
mysql-test/t/compare.test:
Bug #39353: test case
sql/item.cc:
Bug #39353: don't zero-fill timestamp fields when const propagating
to a string: they'll be converted to a string in date/time format
and not as an integer.
Fix for bug #26020: User-Defined Variables are not consistent with
columns data types.
The "SELECT @lastId, @lastId := Id FROM t" query returns
different result sets depending on the type of the Id column
(INT or BIGINT).
Note: this fix doesn't cover the case when a select query
references a user variable and a stored function that
updates the value of that variable; in that case the result
is indeterminate.
The server used an incorrect assumption about the constness of
a user variable value used as a select list item:
the server caches the last query number in which that variable
was changed and compares this number with the current query
number. If these numbers are different, the server assumes
that the variable is not being updated in the current query, so
the respective select list item is a constant. However, in some
common cases the server updates the cached query number too late.
The server has been modified to memorize user variable
assignments during the parse phase and take them into account
in the next (query preparation) phase, independently of the
order of user variable references/assignments in the select
item list.
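Sketched in code, the new preparation step is roughly the following
(paraphrased from the per-file comments below; the iterator form and the
set_entry() argument list are assumptions, not verbatim server code):

  List_iterator<Item_func_set_user_var> it(thd->lex->set_var_list);
  Item_func_set_user_var *var;
  while ((var= it++))
    var->set_entry(thd, FALSE);   /* bind each assignment to its entry
                                     before any fix_fields() runs */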
mysql-test/r/user_var.result:
Added test case for bug #26020.
mysql-test/t/user_var.test:
Added test case for bug #26020.
sql/item_func.cc:
An update of entry and update_query_id variables has been
moved from Item_func_set_user_var::fix_fields() to a separate
method, Item_func_set_user_var::set_entry().
sql/item_func.h:
1. The Item_func_set_user_var::set_entry() method has been
added to update Item_func_set_user_var::entry.
2. The Item_func_set_user_var::entry_thd field has been
added to update Item_func_set_user_var::entry only when
needed.
sql/sql_base.cc:
Fix: setup_fields() calls Item_func_set_user_var::set_entry()
for all items from thd->lex->set_var_list before the first
call of ::fix_fields().
sql/sql_lex.cc:
The lex_start function has been modified to reset
the st_lex::set_var_list list.
sql/sql_lex.h:
New st_lex::set_var_list field has been added to
memorize all user variable assignments in the current
select query.
sql/sql_yacc.yy:
The variable_aux rule has been modified to memorize
in-query user variable assignments in the
st_lex::set_var_list list.
NO_BACKSLASH_ESCAPES was not heeded in LOAD DATA INFILE
and SELECT INTO OUTFILE. It is now.
mysql-test/r/loaddata.result:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
mysql-test/t/loaddata.test:
Show that SQL-mode NO_BACKSLASH_ESCAPES is heeded in
INFILE/OUTFILE, and that dump/restore cycles work!
sql/sql_class.cc:
Add function to enquire whether ESCAPED BY was given.
When doing SELECT...OUTFILE, use ESCAPED BY if specifically
given; otherwise use sensible default value depending on
SQL-mode features NO_BACKSLASH_ESCAPES.
sql/sql_class.h:
Add function to enquire whether ESCAPED BY was given.
sql/sql_load.cc:
When doing LOAD DATA INFILE, use ESCAPED BY if specifically
given; otherwise use sensible default value depending on
SQL-mode features NO_BACKSLASH_ESCAPES.
Fix the write_record function to record auto increment
values in a consistent way.
mysql-test/r/auto_increment.result:
Updated the test result file with the output of the
new test case added to verify this bug.
mysql-test/t/auto_increment.test:
Added a new test case to verify this bug.
sql/sql_insert.cc:
The algorithm for the write_record function
in sql_insert.cc is (more emphasis given to
the parts that deal with the autogenerated values)
1) If a write fails
1.1) save the autogenerated value to keep
thd->insert_id_for_cur_row from becoming 0.
1.2) <logic to handle INSERT ON DUPLICATE KEY
UPDATE and REPLACE>
2) record the first successful insert id.
explanation of the failure
--------------------------
As long as 1.1) was executed, 2) worked fine.
1.1) was always executed when REPLACE worked
with the last-row-update optimization, but
in cases where 1.1) was not executed, 2)
would fail, resulting in the autogenerated
value not being saved.
solution
--------
repeat a check for thd->insert_id_for_cur_row
being zero, similar to 1.1), before 2), and ensure
that the correct value is saved.
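In code terms, the added guard is along these lines (paraphrased; the name of
the saved value is illustrative, not the actual variable in sql_insert.cc):

  /* before step 2): make sure the row's autogenerated value is not lost */
  if (thd->insert_id_for_cur_row == 0)
    thd->insert_id_for_cur_row= saved_autogenerated_insert_id;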