Item_in_subselect::pushed_cond_guards[] array is allocated only when
left_expr->maybe_null. And it is used (for row expressions) when
left_expr->element_index(i)->maybe_null.
For left_expr being a multi-column subquery, its maybe_null is
always false when the subquery doesn't use tables (see
Item_singlerow_subselect::fix_length_and_dec()
and subselect_single_select_engine::fix_length_and_dec()),
otherwise it's always true.
But row elements can be NULL regardless, so let's always allocate
pushed_cond_guards for multi-column subqueries, no matter whether
its maybe_null was forced to true or false.
In get_mm_tree() we have to change Field_geom::geom_type to
GEOMETRY, as we have to allow storing all types of spatial features
in the field. So now we restore the original geom_type when we are
done.
Problem was that in a circular replication setup the master remembers
the position of events it has generated itself when reading from a slave.
If there are no new events in the queue from the slave, a
Gtid_list_log_event is generated to remember the last skipped event.
The problem happens if there is a network delay and we generate a
Gtid_list_log_event in the middle of the transaction, in which case there
will be an implicit commit and a new transaction with server_id=0 will be
logged.
The fix was to not generate any Gtid_list_log_events in the middle of a
transaction.
log_calc_max_ages(): Use the requested size in the check, instead of
the detected redo log size. The redo log will be resized at startup
if it differs from what has been requested.
The bug happens because of a combination of unfortunate circumstances:
1. Arguments args[0] and args[2] of Item_func_concat point recursively
(through Item_direct_view_ref's) to the same Item_func_conv_charset.
Both args[0]->args[0]->ref[0] and args[2]->args[0]->ref[0] refer to
this Item_func_conv_charset.
2. When Item_func_concat::args[0]->val_str() is called,
Item_func_conv_charset::val_str() writes its result to
Item_func_conv_charset::tmp_value.
3. Then, for optimization purposes (to avoid copying),
Item_func_substr::val_str() initializes Item_func_substr::tmp_value
to point to the buffer fragment owned by Item_func_conv_charset::tmp_value.
Item_func_substr::tmp_value is returned as a result of
Item_func_concat::args[0]->val_str().
4. Due to optimization to avoid memory reallocs,
Item_func_concat::val_str() remembers the result of args[0]->val_str()
in "res" and further uses "res" to collect the return value.
5. When Item_func_concat::args[2]->val_str() is called,
Item_func_conv_charset::tmp_value gets overwritten (see #1),
which effectively overwrites args[0]'s Item_func_substr::tmp_value (see #3),
which effectively overwrites "res" (see #4).
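For illustration, a query of roughly this shape (not necessarily the exact
testcase) produces the item tree described above, with CONCAT, SUBSTR and a
shared CONVERT reached through view references of the merged derived table:
SELECT CONCAT(SUBSTR(t2,1,3),'-',t2) c1
FROM (SELECT CONVERT(t USING latin1) t2 FROM t1) sub;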
This patch does the following:
a. Changes Item_func_conv_charset::val_str(String *str) to use
tmp_value and str the other way around. After this change tmp_value
is used to store a temporary result, while str is used to return the value.
This fixes the second problem (without SUBSTR):
SELECT CONCAT(t2,'-',t2) c2
FROM (SELECT CONVERT(t USING latin1) t2 FROM t1) sub;
As Item_func_concat::val_str() supplies two different buffers when calling
args[0]->val_str() and args[2]->val_str(), with the new code the result
created during args[0]->val_str() does not get overwritten by
args[2]->val_str().
b. Fixing the same problem in val_str() for similar classes
Item_func_to_base64
Item_func_from_base64
Item_func_weight_string
Item_func_hex
Item_func_unhex
Item_func_quote
Item_func_compress
Item_func_uncompress
Item_func_des_encrypt
Item_func_des_decrypt
Item_func_conv_charset
Item_func_reverse
Item_func_soundex
Item_func_aes_encrypt
Item_func_aes_decrypt
Item_func_buffer
c. Fixing Item_func::val_str_from_val_str_ascii() the same way.
Now Item_str_ascii_func::ascii_buff is used for temporary value,
while the parameter passed to val_str() is used to return the result.
This fixes the same problem when conversion (from ASCII to e.g. UCS2)
takes place. See the ctype_ucs.test for example queries that returned
wrong results before the fix.
d. Some Item_func descendant classes had temporary String buffers
(tmp_value and tmp_str), but did not really use them.
Removing these temporary buffers from:
Item_func_decode_histogram
Item_func_format
Item_func_binlog_gtid_pos
Item_func_spatial_collection
e. Removing Item_func_buffer::tmp_value, because it's not used any more.
f. Renaming Item_func_[un]compress::buffer to "tmp_value",
for consistency with other classes.
Note, this patch does not fix the following classes
(although they have a similar problem):
Item_str_conv
Item_func_make_set
Item_char_typecast
They have complex implementations, and simply swapping between "tmp_value"
and "str" won't work. These classes will be fixed separately.
The problem lies in how CURRENT_ROLE is defined. The
Item_func_current_role inherits from Item_func_sysconst, which defines
a safe_charset_converter to be a const_charset_converter.
During view creation, if there is no role previously set, the current_role()
function returns NULL.
This is captured on item instantiation and the
const_charset_converter call subsequently returns an Item_null.
In turn, the function is replaced with Item_null and the view is
then created with an Item_null instead of Item_func_current_role.
Without this patch, the first SHOW CREATE VIEW from the testcase would
have a where clause of WHERE role_name = NULL, while the second SHOW
CREATE VIEW would show a correctly created view.
The same applies for the DATABASE function, as it can change as well.
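For illustration, a sequence of roughly this shape (table and column names are
hypothetical; the actual testcase may differ) shows the CURRENT_ROLE symptom:
CREATE TABLE t1 (role_name VARCHAR(64));
CREATE VIEW v1 AS SELECT * FROM t1 WHERE role_name = CURRENT_ROLE();
SHOW CREATE VIEW v1;
-- before the fix, with no role set at view creation time, the view text
-- contained WHERE role_name = NULL instead of CURRENT_ROLE()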
There is an additional problem with CURRENT_ROLE() when used in a
prepared statement. During prepared statement creation we used to set
the string_value of the function to the current role as well as the
null_value flag. During execution, if CURRENT_ROLE was not null, the
null_value flag was never set to not-null during fix_fields.
Item_func_current_user however can never be NULL so it did not show this
problem in a view before. At the same time, CURRENT_USER() cannot
be changed between prepared statement creation and execution, so the
implementation where the value is stored during fix_fields is
sufficient.
Note also that the DATABASE() function behaves differently during prepared
statements. See bug 25843 for details or commit
7e0ad09edf
The problem lies in not checking role privileges as well during SHOW
DATABASES command. This problem is also apparent for SHOW CREATE
DATABASE command.
Other SHOW commands make use of check_access, which in turn makes use of
acl_get for both priv_user and priv_role parts, which allows them to
function correctly.
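For illustration, a scenario of roughly this shape (role, user and database
names are hypothetical) is where the missing role privilege check shows up:
CREATE ROLE r1;
GRANT SELECT ON db1.* TO r1;
GRANT r1 TO 'user1'@'localhost';
-- as user1:
SET ROLE r1;
SHOW DATABASES;            -- should now list db1
SHOW CREATE DATABASE db1;  -- should now succeed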
Add a test case for corrupting SYS_TABLES.TYPE,
and for ROW_FORMAT=REDUNDANT, the unused field SYS_TABLES.MIX_LEN
that must be ignored (InnoDB before MySQL 5.5 wrote uninitialized
garbage to this column).
MariaDB 10.0 appears to validate the SYS_TABLES.TYPE properly.
This is a test-only change.
The hang occurred in innodb_read_only mode.
The reason for the hang is that no notification was received about
completed read I/O. File handles are bound to completion_port, and there
were no background "write" threads that would be waiting on completion_port;
only 2 "read" threads waiting on read_completion_port were active.
The fix is to use a single IO completion port for all IOs, if
innodb_read_only is set.
When the server is started in innodb_read_only mode, there cannot be
any writes to persistent InnoDB/XtraDB files. Just like the creation
of buf_flush_page_cleaner_thread is skipped in this case, also
the creation of the XtraDB-specific buf_flush_lru_manager_thread
should be skipped.
When a slow shutdown is performed soon after spawning some work for
background threads that can create or commit transactions, it is possible
that new transactions are started or committed after the purge has finished.
This is violating the specification of innodb_fast_shutdown=0, namely that
the purge must be completed. (None of the history of the recent transactions
would be purged.)
Also, it is possible that the purge threads would exit in slow shutdown
while there exist active transactions, such as recovered incomplete
transactions that are being rolled back. Thus, the slow shutdown could
fail to purge some undo log that becomes purgeable after the transaction
commit or rollback.
srv_undo_sources: A flag that indicates if undo log can be generated
or the persistent state changed, whether by background threads or by user SQL.
Even when this flag is clear, active transactions that already exist
in the system may be committed or rolled back.
innodb_shutdown(): Renamed from innobase_shutdown_for_mysql().
Do not return an error code; the operation never fails.
Clear the srv_undo_sources flag, and also ensure that the background
DROP TABLE queue is empty.
srv_purge_should_exit(): Do not allow the purge to exit if
srv_undo_sources are active or the background DROP TABLE queue is not
empty, or in slow shutdown, if any active transactions exist
(and are being rolled back).
srv_purge_coordinator_thread(): Remove some previous workarounds
for this bug.
innobase_start_or_create_for_mysql(): Set buf_page_cleaner_is_active
and srv_dict_stats_thread_active directly. Set srv_undo_sources before
starting the purge subsystem, to prevent immediate shutdown of the purge.
Create dict_stats_thread and fts_optimize_thread immediately
after setting srv_undo_sources, so that shutdown can use this flag to
determine if these subsystems were started.
dict_stats_shutdown(): Shut down dict_stats_thread. Backported from 10.2.
srv_shutdown_table_bg_threads(): Remove (unused).
This is actually a legacy bug:
SQL_SELECT::test_quick_select() was called
with SQL_SELECT::head not set.
It looks like this problem can be
reproduced only on queries with ORDER BY
that use IN predicates converted to semi-joins.
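For illustration, a hypothetical query of the affected shape:
SELECT * FROM t1
WHERE t1.a IN (SELECT t2.b FROM t2)
ORDER BY t1.c;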
This patch corrects the fix for bug mdev-7599.
When the min/max optimization of the function
opt_sum_query() optimizes away all tables of
a subquery, this optimization should never be rolled back.
If the optimizer chose an execution plan where
a semi-join nest was materialized and the
result of materialization was scanned to access
other tables by ref access, it could build a key
over columns of the tables from the nest that
were actually inaccessible.
The patch performs a proper check whether a key
that uses columns of the tables from a materialized
semi-join nest can be employed to access outer tables.
innodb_page_size_small: A new set of combinations, for
innodb_page_size up to 16k. In MariaDB 10.0, this does not
make a difference, but in 10.1 and later, innodb_page_size
would cover 32k and 64k, for which ROW_FORMAT=COMPRESSED
is not available.
Enable these combinations in a few InnoDB tests.
InnoDB shutdown assumes that once the server has entered
SRV_SHUTDOWN_FLUSH_PHASE, no change to persistent data is allowed.
It was possible for the master thread to wake up while shutdown
is executing in SRV_SHUTDOWN_FLUSH_PHASE or
even in SRV_SHUTDOWN_LAST_PHASE.
We do not yet know if further crashes at shutdown are possible.
Also, we do not know if all the observed crashes could be explained
by the race conditions that we are now fixing.
srv_shutdown_print_master_pending(): Remove a redundant ut_time() call.
srv_shutdown(): Renamed from srv_master_do_shutdown_tasks().
srv_master_thread(): Do not resume after shutdown has been initiated.
RPL_SEMI_SYNC_MASTER_CLIENTS=1
Analysis: Uninstalling rpl_semi_sync_slave on the slave
will trigger removing the slave logic on the Master, which
will reduce Rpl_semi_sync_master_clients by one.
But this happens asynchronously on the Master. Having an assert
that checks this value against zero will have problems on
slow pb2 machines.
Fix: Change the assert into a wait_for_status_var condition.
Problem:-
This crash happens because the logged statement is quite big and while writing
the Annotate_rows_log_event it throws an EFBIG error, but we ignore this error
and do not call cache_data->set_incident().
Solution:-
When we normally write a Binlog_log_event we check for the EFBIG error, but we
did not do this for Annotate_rows_log_event. Now we check for this error and
call cache_data->set_incident() accordingly.
This is another correction of the patch for bug mdev-12670.
If a derived table is merged into a select with a STRAIGHT_JOIN
modifier, then any IN subquery predicates contained in the
specification of the derived table cannot be subject to
conversion to semi-joins.
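For illustration, a hypothetical query shape where the conversion must now
be blocked:
SELECT STRAIGHT_JOIN *
FROM t1,
     (SELECT * FROM t2 WHERE t2.a IN (SELECT t3.b FROM t3)) dt
WHERE t1.c = dt.a;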
This patch is a correction of the patch for bug mdev-12670.
With the current code handling semi-joins the following must
be taken into account.
Conversion of an IN subquery predicate into semi-join
has to be blocked if the predicate occurs:
(a) in the ON expression of an outer join
(b) in the ON expression of an inner join embedded directly
or indirectly in the inner nest of an outer join.
The patch for mdev-12670 blocked conversion to semi-joins only
in the case (a), but not in the case (b). This patch blocks
the conversion in both cases.
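For illustration, hypothetical query shapes for the two cases:
-- (a) IN predicate in the ON expression of an outer join
SELECT * FROM t1 LEFT JOIN t2
ON t2.a IN (SELECT t3.b FROM t3);
-- (b) IN predicate in the ON expression of an inner join embedded
-- in the inner nest of an outer join
SELECT * FROM t1 LEFT JOIN (t2 JOIN t4
ON t4.a IN (SELECT t3.b FROM t3)) ON t1.c = t2.c;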
When an IN subquery predicate was converted to a semi-join that was
materialized and the result of the materialization happened to be
the last table in the execution plan, then any conjunctive condition
with RAND() turned out to be lost.
Fixed by attaching this condition to the last top base table.
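For illustration, a hypothetical query of the affected shape, assuming the
optimizer materializes the semi-join and scans the materialized table last:
SELECT * FROM t1
WHERE t1.a IN (SELECT t2.b FROM t2)
AND RAND() < 0.5;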
A bunch of bugs when external_lock() fails on unlock:
* mi_lock_database() used mi_mark_crashed() under share->intern_lock,
  but mi_mark_crashed() itself locks this mutex.
* handler::close() required the table to be unlocked, but a failed
  external_lock() didn't count as an unlock.
* mysql_unlock_tables() ignored all unlock errors, but they still set
  the error status in stmt_da.
Under some conditions the function opt_sum_query() can apply MIN/MAX
optimizations to Item_sum objects of a select. These optimizations
become invalid if this select is the subquery of an IN subquery
predicate that is converted to an EXISTS subquery. Thus in this case
the MIN/MAX optimizations that have been applied in opt_sum_query()
must be rolled back.
This bug appeared in 5.3 when the code for the cost-based choice between
materialization and in-to-exists transformation of non-correlated
IN subqueries was introduced. Before this code, in-to-exists
transformations were always performed before the call of opt_sum_query().
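For illustration, a hypothetical non-correlated IN subquery where
opt_sum_query() can apply the MIN/MAX optimization inside the subquery
before the materialization vs. in-to-exists choice is made:
SELECT * FROM t1
WHERE t1.a IN (SELECT MAX(t2.b) FROM t2);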