All numeric operators and functions on integer, floating-point
and DECIMAL values now throw an 'out of range' error, rather
than returning an incorrect value or NULL, when the result is
outside the supported range of the corresponding data type.
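For illustration (not part of the patch itself; the exact error
text may differ):

  SELECT 9223372036854775807 + 1;
  -- Before: silently returned a wrapped or NULL result
  -- Now:    fails with an error such as
  --         ERROR 1690 (22003): BIGINT value is out of range
  --         in '(9223372036854775807 + 1)'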
Some test cases in the test suite had to be updated
accordingly either because the test case itself relied on a
value returned in case of a numeric overflow, or because a
numeric overflow was the root cause of the corresponding bugs.
The latter tests are no longer relevant, since the expressions
used to trigger the corresponding bugs are not valid anymore.
However, such test cases have been adjusted and kept "for the
record".
In BUG#49562 we fixed the case where numeric user var events
would not serialize the flag stating whether the value was signed
or unsigned (unsigned_flag). This fixed the case that the slave
would get an overflow while treating the unsigned values as
signed.
In this bug, we find that the unsigned_flag can sometimes change
between the moment that the user value is recorded for binlogging
purposes and the actual binlogging time. Since we take the
unsigned_flag from the runtime variable data at binlogging time,
while the variable value comes from the copy taken earlier in
the execution, there may be an inconsistency in the
User_var_log_event between the variable value and its
unsigned_flag.
We fix this by also copying the unsigned_flag of the
user_var_entry when its value is copied, for binlogging
purposes. Later, at binlogging time, we use the copied
unsigned_flag and not the one in the runtime user_var_entry
instance.
DDL no longer aborts mysql_lock_tables(), and hence
we no longer need to support the need_reopen flag of this
call.
Remove the flag, and all the code in the server
that was responsible for handling the case when
it was set. This allowed us to simplify:
open_and_lock_tables_derived(), the delayed thread,
multi-update.
Rename MYSQL_LOCK_IGNORE_FLUSH to MYSQL_OPEN_IGNORE_FLUSH,
since we now only support this flag in open_table().
Rename MYSQL_LOCK_PERF_SCHEMA to MYSQL_LOCK_LOG_TABLE,
to avoid confusion.
Move the wait for the global read lock, for cases when we do
updates in SELECT f1() or DO (UPDATE), from mysql_lock_tables()
to open_table(). When waiting for the read lock we could raise
the need_reopen flag, which is no longer present in
mysql_lock_tables().
Since the block responsible for waiting for GRL
was moved, MYSQL_LOCK_IGNORE_GLOBAL_READ_LOCK
was renamed to MYSQL_OPEN_IGNORE_GLOBAL_READ_LOCK.
This deadlock could occur between one connection executing
SET GLOBAL EVENT_SCHEDULER= ON and another executing SET GLOBAL
EVENT_SCHEDULER= OFF. The bug was introduced by WL#4738.
The first connection would hold LOCK_event_metadata (protecting
the global variable) while trying to lock LOCK_global_system_variables
when starting the event scheduler thread (in THD::init()).
The second connection would hold LOCK_global_system_variables
while trying to get LOCK_event_scheduler after stopping the event
scheduler inside event_scheduler_update().
This patch fixes the problem by not using LOCK_event_metadata to
protect the event_scheduler variable. It is still protected using
LOCK_global_system_variables. This fixes the deadlock as it removes
one of the two mutexes used to produce it.
However, this patch opens up the possibility that the event_scheduler
variable and the real event_scheduler state can become out of sync
(e.g. variable = OFF, but scheduler running). But this can only
happen under very unlikely conditions - two concurrent SET GLOBAL
statements, with one thread interrupted at exactly the wrong moment.
This is preferable to having the possibility of a deadlock.
This patch also fixes a bug where it was possible to exit create_event()
without releasing LOCK_event_metadata if running out of memory during
its execution.
No test case added since a repeatable test case would have required
excessive use of new sync points. Instead we rely on the fact that
this bug was easily reproducible using RQG tests.
There are two issues fixed here:
1. We needed to update the result files for some of the
mysqlbinlog_* tests, because some padding chars are no longer
output.
2. We needed to change Field_string::pack so that for BINARY
types the padding chars are not packed (lengthsp will return the
full length for these types).
The problem is introduced by WL#4435 "Support OUT-parameters in
prepared statements".
When a statement that has out parameters was reprepared,
the reprepare request error was ignored, and an
attempt to send out parameters to the client was made.
Since the out parameter list was not initialized in case
of an error, this attempt led to a crash.
Don't try to send out parameters to the client
if an error occurred in statement execution.
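A sketch of the kind of scenario involved (procedure and
statement names are hypothetical, and the exact way to force a
failing re-prepare may differ):

  CREATE PROCEDURE p1(OUT x INT) SET x = 1;
  PREPARE stmt FROM 'CALL p1(?)';
  EXECUTE stmt USING @a;    -- ok
  DROP PROCEDURE p1;        -- makes the next execution re-prepare and fail
  EXECUTE stmt USING @a;    -- used to crash; the re-prepare error is now returned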
In BUG#51787 we were using the wrong charset to print out the
data. We were using the field charset for the string that would
hold the information. This caused the assertion, because the
string length was not aligned with the UTF32 byte requirements
for storage.
We fix this by using &my_charset_latin1 in the string object
instead of field->charset(). As a side effect, we needed to
extend the show_sql_type interface so that the field charset is
now passed as a parameter, making it possible to calculate the
correct field size.
In BUG#51716 we had issues with Field_string::pack and
Field_string::unpack. When packing, the length was incorrectly
calculated. When unpacking, the string would be padded with the
wrong bytes (a few bytes fewer than it should). We fix this by
resorting to charset abstractions (functions) that calculate the
correct length when packing and pad the string correctly when
unpacking.
LOCK kills the server.
Prohibit FLUSH TABLES WITH READ LOCK application to views or
temporary tables.
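For illustration (hypothetical object names), both of the
following are now rejected with an error:

  CREATE VIEW v1 AS SELECT 1;
  FLUSH TABLES v1 WITH READ LOCK;      -- rejected: v1 is a view
  CREATE TEMPORARY TABLE tmp1 (a INT);
  FLUSH TABLES tmp1 WITH READ LOCK;    -- rejected: tmp1 is a temporary table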
Fix a subtle bug in the implementation where we actually
did not remove table share objects from the table cache after
acquiring exclusive locks.
The problem was that in read only mode (read_only enabled),
the server would mistakenly deny data modification attempts
for temporary tables which belong to a transactional storage
engine (eg. InnoDB).
The solution is to allow transactional temporary tables to be
modified under read only mode. As a whole, the read only mode
does not apply to any kind of temporary table.
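For illustration (hypothetical names), with read_only enabled by
an administrator, a user without the SUPER privilege can now do:

  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=InnoDB;
  INSERT INTO tmp1 VALUES (1);   -- previously denied with a
                                 -- "--read-only option" error, now allowed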
Ensure that we store the correct cached_field_type whenever we cache Field items
(in this case it allows us to compare dates as dates, rather than strings).
Before this fix, the performance schema instrumentation
in mdl.h / mdl.cc was incomplete, causing:
- build warnings,
- no data collection for the performance schema
This fix:
- added instrumentation helpers for the new reader-preferring
read-write lock, mysql_prlock_*
- completed the performance schema instrumentation of
mdl.h / mdl.cc
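For example, the MDL synchronization objects should now be
visible through the performance schema (the instrument name
pattern below is only indicative):

  SELECT NAME, ENABLED, TIMED
  FROM performance_schema.setup_instruments
  WHERE NAME LIKE 'wait/synch/%MDL%';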
on decimal column
The problem was that there was no check to disallow DECIMAL
columns in the code (they were accepted as if they were INTEGER).
The solution was to correctly disallow DECIMAL columns in
COLUMNS partitioning, as documented.
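For illustration (hypothetical table; the exact error message may
differ), the following is now rejected:

  CREATE TABLE t1 (d DECIMAL(10,2))
    PARTITION BY RANGE COLUMNS (d)
    (PARTITION p0 VALUES LESS THAN (10.0));
  -- fails, since DECIMAL is not an allowed column type for
  -- COLUMNS partitioning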
when cmake is used for building in a symlinked directory,
and the configuration is later adjusted with "cmake-gui .". After that,
GenServerSource fails with "no rule for <filename>". The reason
for the error is that cmake-gui resolves "." as a realpath and rules
are generated accordingly, while "cmake" used the symlinked path.
The fix uses ${CMAKE_CURRENT_BINARY_DIR} instead of
${CMAKE_BINARY_DIR}/sql for generated files.
This causes CMake to use relative file names when generating
make rules. Using relative file names avoids the problem of
referring to the same directory using two different paths.
Besides, using ${CMAKE_CURRENT_BINARY_DIR} is
a commonly used style when working with generated
files.
autotools runs
- Fix recognition of --with-debug=full in configure wrapper
- Remove CMakeCache.txt in configure wrapper, to match the original
- Fix recognition of max-no-ndb
- Fix broken dependencies of mysql_fix_privilege_table.sql from
mysql_system_tables.sql and mysql_system_tables_fix.sql
- Add "distclean target" that informs user about appropriate bzr command
Diagnostics_area::set_ok_status on DROP FUNCTION
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a followup patch for Bug#45225 pushed after this
bug was reported, changed accessing of system tables such
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the followup patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing.
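A sketch of the kind of scenario the test exercises (the exact
steps are in sp-destruct.test; the specific grant table and
function name below are only illustrative):

  RENAME TABLE mysql.procs_priv TO mysql.procs_priv_backup;
  DROP FUNCTION f1;   -- the implicit revoke fails; the error is now
                      -- reported instead of triggering the assert
  RENAME TABLE mysql.procs_priv_backup TO mysql.procs_priv;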
Add deprecation warning when variable optimizer_search_depth is given
the value 63.
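For illustration (the exact warning text is recorded in the
updated result files):

  SET SESSION optimizer_search_depth = 63;
  SHOW WARNINGS;   -- now reports a deprecation warning for the value 63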
mysql-test/r/greedy_optimizer.result
Updated with warning text.
mysql-test/r/mysqld--help-notwin.result
Updated with warning from mysqld --help --verbose.
mysql-test/r/mysqld--help-win.result
Updated with warning from mysqld --help --verbose.
sql/sys_vars.cc
Added an update check function to the constructor invocation for
the optimizer_search_depth variable. The function emits a
warning message for the value 63.
The slave was auto-reconnecting earlier than prescribed by the
slave_net_timeout value.
The issue happened on 64-bit Solaris, which exposed a rather incorrect
cast of the ulong slave_net_timeout into the uint
mysql.options.read_timeout.
Notice that there is no reason for slave_net_timeout to be of type
ulong. Since it is primarily passed as an argument to mysql_options(),
the type can be made uint to avoid all conversion hassles.
That is what the fix does.
A "side" effect of the patch is that the maximum of slave_net_timeout
becomes the maximum of the unsigned int type (and therefore varies
across platforms).
Note that a regression test cannot be made to run reliably without
making it last over some 20 seconds. That is why it is placed in
suite/large_tests.
on Windows".
On platforms where read-write lock implementation does not
prefer readers by default (Windows, Solaris) server might
have deadlocked while detecting MDL deadlock.
MDL deadlock detector relies on the fact that read-write
locks which are used in its implementation prefer readers
(see new comment for MDL_lock::m_rwlock for details).
So far the MDL code assumed that the default implementation of
read/write locks for the system has this property.
This turned out to be wrong, for example, for
Windows and Solaris. Thus the MDL deadlock detector might have
deadlocked on these systems.
This fix simply adds a portable implementation of a read/write
lock which prefers readers, and changes the MDL code to use this
new type of synchronization primitive.
No test case is added, as the existing rqg_mdl_stability test can
serve as one.
Extend and implement the grammar that allows FLUSH TABLES WITH
READ LOCK to be applied to a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone on the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for flush_tables_with_read_lock()
function in sql_parse.cc
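For illustration (hypothetical table names):

  FLUSH TABLES t1, t2 WITH READ LOCK;
  -- t1 and t2 are flushed and remain read-locked; requires the
  -- RELOAD privilege plus LOCK TABLES and SELECT on t1 and t2
  SELECT COUNT(*) FROM t1;
  UNLOCK TABLES;

  FLUSH TABLES, HOSTS;   -- the old combined form is no longer accepted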
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE, by mistake gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
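For illustration, a sequence like the following (hypothetical
table names) hit the error:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  LOCK TABLE m1 WRITE;
  ALTER TABLE m1 UNION=(t1);   -- gave ER_TABLE_NOT_LOCKED_FOR_WRITE before the fix
  UNLOCK TABLES;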
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLEs mode, this lock
must already exist (i.e. taken by LOCK TABLE) as new locks of this
type cannot be acquired for fear of deadlock. So in LOCK TABLEs
mode, open_table() tried to find an existing upgradable lock for
the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLEs mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables by mistake could be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
The problem is that cond->fix_fields(thd, 0) breaks the
condition (cuts off 'having'). The reason is that a NULL-valued
Item pointer is present in the middle of the Item list and it
breaks the Item processing loop.
performance degradation.
The filesort + join cache combination is preferred to a full index
scan because it is usually faster. But this is not the case when the
index is a clustered one. Now the test_if_skip_sort_order function
prefers filesort only if the index isn't clustered.