had broken the 5.1 behaviour of --log-error: --log-error without an argument sent errors to stderr instead of writing
them to a file with an autogenerated name.
mysql-test/suite/sys_vars/t/log_error_func.test:
test that error log is created and shown in SHOW VARIABLES.
Interestingly, the error log's path is apparently relative if --log-error=argument is used, but
may be absolute or relative if --log-error (no argument) is used, because then the path is derived from
that of pidfile_name, which can itself be absolute or relative depending on whether it was autogenerated
(a sketch of what these tests check follows the file notes below).
mysql-test/suite/sys_vars/t/log_error_func2.test:
test that error log is created and shown in SHOW VARIABLES
mysql-test/suite/sys_vars/t/log_error_func3.test:
test that error log is empty in SHOW VARIABLES
sql/mysql_priv.h:
id for option --log-error
sql/mysqld.cc:
No --log-error means "write errors to stderr", whereas --log-error
without an argument means "write errors to a file". So we cannot use the default logic
of class sys_var_charptr, which treats "option not used" the same as "option used
without argument" and uses the same default for both. We need to catch "option used"
in mysqld_get_one_option(), and then "used without argument". Setting the value to "" makes sure
that init_server_components() will create the log with an autogenerated name.
sql/sys_vars.cc:
need to give the option a numeric id so that we can catch it in mysqld_get_one_option()
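A minimal sketch of what the three tests above check, assuming a plain client session; the server command lines themselves are supplied by the test framework and are not shown here:

  # started with --log-error=/path/file.err : value is the given path
  # started with --log-error (no argument)  : value is an autogenerated .err name
  # started without --log-error             : value is '' (errors go to stderr)
  SHOW VARIABLES LIKE 'log_error';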
Bug#51676 Server crashes on SELECT, ORDER BY on 'utf8mb4' column
An additional fix. We should use 0xFFFD as a weight for supplementary
characters, not the "weight for character U+FFFD".
Bug#51675 Server crashes on inserting 4 byte char. after ALTER TABLE to 'utf8mb4'
Bug#51676 Server crashes on SELECT, ORDER BY on 'utf8mb4' column
include/m_ctype.h:
Defining MY_CS_REPLACEMENT_CHARACTER
mysql-test/r/ctype_utf8mb4.result:
Adding tests
mysql-test/t/ctype_utf8mb4.test:
Adding tests
strings/ctype-uca.c:
Don't use UCA data for characters higher than 0xFFFF.
strings/ctype-ucs2.c:
Using newly defined MY_CS_REPLACEMENT_CHARACTER
strings/ctype-utf8.c:
Using newly defined MY_CS_REPLACEMENT_CHARACTER
Removing unused variable "plane".
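A minimal sketch of the kind of statements the added tests exercise; the table name is illustrative, and 0xF0909080 is the utf8mb4 encoding of the supplementary character U+10400:

  CREATE TABLE t1 (a VARCHAR(10)) CHARACTER SET utf8;
  ALTER TABLE t1 CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
  -- Inserting and sorting a 4-byte character used to crash the server;
  -- with the fix such characters simply get the weight 0xFFFD.
  INSERT INTO t1 VALUES (0xF0909080);
  SELECT HEX(a) FROM t1 ORDER BY a;
  DROP TABLE t1;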
autotools runs
- Fix recognition of --with-debug=full in configure wrapper
- Remove CMakeCache.txt in configure wrapper, to match the original
- Fix recognition of max-no-ndb
- Fix broken dependencies of mysql_fix_privilege_tables.sql on
mysql_system_tables.sql and mysql_system_tables_fix.sql
- Add "distclean target" that informs user about appropriate bzr command
cmake/configure.pl:
- Recognize --with-debug=full, map to WITH_DEBUG_FULL
- Remove CMakeCache.txt, so the configuration is no longer sticky
(to match the original configure behavior)
cmake/plugin.cmake:
- Recognize WITH_MAX_NO_NDB; this fixes missing storage engines after BUILD/*max-no-ndb scripts
mysql-test/CMakeLists.txt:
test-force uses the same macros (MTR_FORCE) as test-bt* now
scripts/CMakeLists.txt:
- Fix broken dependency when producing mysql_fix_privilege_tables.sql, reported by Davi.
We now concatenate the two scripts in a custom command that
has a dependency on both scripts rather than concatenating them at cmake time.
sql/CMakeLists.txt:
Address the frequently asked question "where is distclean?" by implementing a distclean target
that does nothing except point to the appropriate bzr command.
It is better not to call "bzr clean-tree" automatically, without user consent.
It could remove new files that were meant to be added.
Diagnostics_area::set_ok_status on DROP FUNCTION
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a followup patch for Bug#45225 pushed after this
bug was reported, changed accessing of system tables such
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the followup patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing (see the sketch below).
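A minimal sketch of that scenario; the function name is illustrative, and renaming the grant table away is just one way to make it "missing":

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  RENAME TABLE mysql.procs_priv TO mysql.procs_priv_backup;
  -- The implicit revoke now fails; with this patch the error is reported
  -- to the client instead of the server trying to send "ok".
  DROP FUNCTION f1;
  RENAME TABLE mysql.procs_priv_backup TO mysql.procs_priv;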
Add deprecation warning when variable optimizer_search_depth is given
the value 63.
mysql-test/r/greedy_optimizer.result:
Updated with warning text.
mysql-test/r/mysqld--help-notwin.result:
Updated with warning from mysqld --help --verbose.
mysql-test/r/mysqld--help-win.result:
Updated with warning from mysqld --help --verbose.
sql/sys_vars.cc:
Added an update check function to the constructor invocation for
the optimizer_search_depth variable. The function emits a
warning message for the value 63.
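A minimal sketch of the new behaviour; the warning text itself is not quoted here:

  SET SESSION optimizer_search_depth = 63;
  -- Expect a deprecation warning about the value 63.
  SHOW WARNINGS;
  SET SESSION optimizer_search_depth = DEFAULT;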
The slave was auto-reconnecting earlier than prescribed by the slave_net_timeout value.
The issue happened on 64-bit Solaris and was caused by an incorrect cast of
the ulong slave_net_timeout into the uint mysql.options.read_timeout.
Note that there is no reason for slave_net_timeout to be of type ulong.
Since it is primarily passed as an argument to mysql_options(), the type can be made
uint to avoid all conversion hassles, which is what this fix does.
A "side" effect of the patch is a new maximum for slave_net_timeout:
the maximum of the unsigned int type (which therefore varies across platforms).
Note that a regression test cannot be made to run reliably without making it last some
20 seconds; that is why it is placed in suite/large_tests.
mysql-test/suite/large_tests/r/rpl_slave_net_timeout.result:
the new test results.
mysql-test/suite/large_tests/t/rpl_slave_net_timeout-slave.opt:
Initialization of the option that yields slave_net_timeout's default.
sql/mysql_priv.h:
changing type for slave_net_timeout from ulong to uint
sql/mysqld.cc:
changing type for slave_net_timeout from ulong to uint
sql/sys_vars.cc:
Refining the max value for slave_net_timeout to be the max of the uint type.
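A minimal sketch of the variable's new range behaviour; the exact maximum depends on the platform's unsigned int, as noted above:

  SET GLOBAL slave_net_timeout = 4294967295;  -- accepted where uint is 32 bits
  SHOW VARIABLES LIKE 'slave_net_timeout';
  SET GLOBAL slave_net_timeout = DEFAULT;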
Extend and implement the grammar so that FLUSH ... WITH READ LOCK can be applied to
a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone on the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires the respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for flush_tables_with_read_lock()
function in sql_parse.cc
mysql-test/r/flush.result:
Update test results (WL#5000).
mysql-test/t/flush.test:
Add test coverage for WL#5000.
sql/sql_yacc.yy:
Allow FLUSH TABLES <table_list> WITH READ LOCK.
Disallow FLUSH TABLES <table_list>, flush_options.
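A minimal sketch of the new syntax and of a form that is no longer accepted; table names are illustrative only:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  -- Atomically flush and read-lock just these two tables; needs RELOAD
  -- globally plus LOCK TABLES and SELECT on t1 and t2.
  FLUSH TABLES t1, t2 WITH READ LOCK;
  UNLOCK TABLES;
  -- No longer accepted: FLUSH TABLES combined with other flush options,
  -- e.g. FLUSH TABLES, HOSTS, PRIVILEGES;
  DROP TABLE t1, t2;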
Before this fix, the performance schema file instrumentation would treat:
- a relative path to a file
- an absolute path to the same file
as two different files.
This would lead to:
- separate aggregation counters
- file leaks when a file is removed.
With this fix, a relative and absolute path are resolved to the same file instrument.
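A minimal sketch of how the fix can be observed, assuming the performance_schema.file_summary_by_instance table is available; the file name pattern is illustrative only:

  -- After touching the same data file via a relative and via an absolute path,
  -- only one instrumented file instance (one row) should remain:
  SELECT FILE_NAME, COUNT_READ, COUNT_WRITE
    FROM performance_schema.file_summary_by_instance
   WHERE FILE_NAME LIKE '%t1.MYD';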
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE mistakenly gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLEs mode, this lock
must already exist (i.e. taken by LOCK TABLE) as new locks of this
type cannot be acquired for fear of deadlock. So in LOCK TABLEs
mode, open_table() tried to find an existing upgradable lock for
the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLES mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables could by mistake be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
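A minimal sketch of the statement sequence that used to fail; table names are illustrative only:

  CREATE TABLE c1 (a INT) ENGINE=MyISAM;
  CREATE TABLE c2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(c1, c2) INSERT_METHOD=LAST;
  LOCK TABLE m1 WRITE;
  ALTER TABLE m1 UNION=(c1);   -- used to fail with ER_TABLE_NOT_LOCKED_FOR_WRITE
  UNLOCK TABLES;
  DROP TABLE m1, c1, c2;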
The problem is that cond->fix_fields(thd, 0) breaks the
condition (cuts off 'having'). The reason is that a
NULL-valued Item pointer is present in the middle of the Item
list, which breaks the Item processing loop.
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/item_cmpfunc.h:
added an assert to make sure that we do not add a NULL-valued Item pointer
sql/sql_select.cc:
Skip adding an item to the condition if the Item pointer is NULL.
Skip adding a list to the condition if the list is empty.
Bug#50843: Filesort used instead of clustered index led to performance degradation.
The filesort + join cache combination is preferred to a full index scan because it
is usually faster. But that is not the case when the index is a clustered one.
Now the test_if_skip_sort_order function prefers filesort only if the index isn't
clustered.
mysql-test/r/innodb_mysql.result:
Added a test case for the bug#50843.
mysql-test/t/innodb_mysql.test:
Added a test case for the bug#50843.
sql/sql_select.cc:
Bug#50843: Filesort used instead of clustered index led to
performance degradation.
Now the test_if_skip_sort_order function prefers filesort only if the index isn't
clustered.
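A minimal sketch of the kind of query affected; schema and query are illustrative rather than the exact test case:

  CREATE TABLE t1 (pk INT PRIMARY KEY, b INT, KEY(b)) ENGINE=InnoDB;
  CREATE TABLE t2 (c INT, KEY(c)) ENGINE=InnoDB;
  -- With the fix, ordering by t1's clustered primary key may now be resolved
  -- by reading t1 in primary-key order instead of filesort + join cache.
  EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.b = t2.c ORDER BY t1.pk;
  DROP TABLE t1, t2;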
Attempts to execute RESET statements within a transaction that
had acquired metadata locks, led to an assertion failure on
debug servers. This bug didn't cause any problems on release
builds.
The triggered assert is designed to check that caches are not
flushed or reset while having active transactions. It is triggered
if acquired metadata locks exist that are not from LOCK TABLE or
HANDLER statements.
In this case it was triggered by RESET QUERY CACHE while having
an active transaction that had acquired locks. The reason the
assertion was triggered was that RESET statements, unlike the
similar FLUSH statements, did not cause an implicit commit.
This patch fixes the problem by making sure RESET statements
commit the current transaction before executing. The commit
causes acquired metadata locks to be released, preventing the
assertion from being triggered.
Incompatible change: This patch changes RESET statements so
that they cause an implicit commit.
Test case added to query_cache.test.
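A minimal sketch of the previously asserting scenario; it requires a server built with the query cache, and the table name is illustrative only:

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  START TRANSACTION;
  SELECT * FROM t1;    -- the transaction now holds a metadata lock on t1
  RESET QUERY CACHE;   -- now causes an implicit commit, releasing the lock
  DROP TABLE t1;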
Propagation of a large unsigned numeric constant
in the WHERE expression led to a wrong result.
For example,
"WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND FOO(a)",
where a is an UNSIGNED BIGINT and FOO() accepts strings,
was transformed to "... AND FOO('-1')".
That has been fixed.
Also EXPLAIN EXTENDED printed incorrect numeric constants in
transformed WHERE expressions like above. That has been
fixed too.
mysql-test/r/bigint.result:
Added test case for bug #45360.
mysql-test/t/bigint.test:
Added test case for bug #45360.
sql/item.cc:
Bug #45360: wrong results
Since the Item_int_with_ref class (and the underlying Item_int)
accepts both signed and unsigned 64-bit values, the
Item_int::val_str and Item_int::print methods have been
modified to take unsigned_flag into account.
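A minimal sketch of the affected construct; LENGTH() simply stands in for any function that takes string arguments:

  CREATE TABLE t1 (a BIGINT UNSIGNED);
  INSERT INTO t1 VALUES (CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED));
  -- Before the fix, the propagated constant reached the string context
  -- as '-1'; EXPLAIN EXTENDED also printed it as a negative number.
  SELECT * FROM t1
   WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND LENGTH(a) = 20;
  EXPLAIN EXTENDED SELECT * FROM t1
   WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND LENGTH(a) = 20;
  SHOW WARNINGS;
  DROP TABLE t1;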
bool MDL_context::try_acquire_lock(MDL_request*)
This assert was triggered in the following way:
1) HANDLER t1 OPEN from connection 1
2) DROP TABLE t1 from connection 2. This will block due to the metadata lock
held by the open handler in connection 1.
3) DML statement (e.g. INSERT) from connection 1. This will close the table
opened by the HANDLER in 1) and release its metadata lock. This is done due
to the pending exclusive metadata lock from 2).
4) DROP TABLE t1 from connection 2 now completes and removes table t1.
5) HANDLER READ from connection 1. Since the handler table was closed in 3),
the handler code will try to reopen the table. First a new metadata lock on
t1 will be granted before the command fails since the table was removed in 4).
6) HANDLER READ from connection 1. This caused the assert.
The reason for the assert was that the MDL_request's pointer to the lock
ticket was not reset when the statement failed. HANDLER READ then tried to
acquire a lock using the same MDL_request object, triggering the assert.
This bug was only noticeable on debug builds and did not cause any problems
on release builds.
This patch fixes the problem by assuring that the pointer to the metadata
lock ticket is reset when reopening of handler tables fails.
Test case added to handler.inc
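A minimal sketch of the six steps above as plain SQL; the DML in step 3 is shown against an unrelated table t2 (an illustrative choice), and the connection-switching directives of the real test are reduced to comments:

  # connection 1
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  HANDLER t1 OPEN;
  # connection 2
  DROP TABLE t1;               -- blocks on the handler's metadata lock
  # connection 1
  INSERT INTO t2 VALUES (1);   -- closes the handler table, releasing its lock
  # connection 2: DROP TABLE now completes and removes t1
  # connection 1
  HANDLER t1 READ FIRST;       -- reopen fails, t1 no longer exists
  HANDLER t1 READ FIRST;       -- used to hit the assert on debug builds
  DROP TABLE t2;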
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
mysql-test/r/join.result:
Added a test case for bug #50335.
mysql-test/t/join.test:
Added a test case for bug #50335.
sql/sql_select.cc:
Removing the assertion in eq_ref_table() as it does not make
any sense. We also have to take care of the 'found' counter so
that we count multiple references only once. We rely on this
fact later in eq_ref_table().
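A minimal sketch of the general query shape described above, not the exact test case: identical WHERE equalities referencing one table whose column also appears in ORDER BY.

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT);
  -- Duplicate, identical equalities on t1.a, which is also in the ORDER BY,
  -- used to trip the (now removed) assertion on debug builds.
  SELECT t1.a FROM t1, t2
   WHERE t1.a = t2.a AND t1.a = t2.a
   ORDER BY t1.a, t1.b;
  DROP TABLE t1, t2;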