a "if"
Bug #41913 mysqltest cannot source files from if inside while
Some commands require additional processing which only works the first
time they are executed.
Keep the content for write_file or append_file with the st_command struct.
Add tests for those cases to mysqltest.test
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
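A hedged sketch of a query shape that could reach this code path (table,
column, and index names are hypothetical); the bare column reference in the
WHERE clause is the FIELD_ITEM condition, equivalent to "b <> 0":

  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));
  -- GROUP BY on the index prefix plus MIN() on the next key part makes the
  -- loose index scan a candidate; analyzing the bare "b" condition could
  -- previously hit the debug assertion.
  SELECT a, MIN(b) FROM t1 WHERE b GROUP BY a;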
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. In particular it
will not call make_join_statistics(), which is what fills in various
structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However, if this SELECT list happens to contain a correlated
subquery, that subquery is evaluated in normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at that level,
we get a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case, since
for outer references to constant tables Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
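A hedged sketch matching the three factors listed above (table and column
names are hypothetical): the outermost WHERE is impossible, MAX() is the
aggregate, and t2.b = t1.a is a top-level sargable predicate containing an
outer field reference:

  SELECT MAX(t1.a),
         (SELECT COUNT(*) FROM t2 WHERE t2.b = t1.a) AS sub
  FROM t1
  WHERE 1 = 0;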
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary log as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if
the field count exceeds 10 (the number of values for PROCEDURE ANALYSE())
then we get a crash in the fill_record() function.
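A hedged sketch of a statement shape that could hit this path, assuming a
derived table with more than 10 columns (the column list is hypothetical):

  SELECT * FROM (SELECT 1 AS c1, 2 AS c2, 3 AS c3, 4 AS c4, 5 AS c5,
                        6 AS c6, 7 AS c7, 8 AS c8, 9 AS c9, 10 AS c10,
                        11 AS c11) AS d
  PROCEDURE ANALYSE();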
binlog
Mixing transactional (T) and non-transactional (N) tables on behalf of a
transaction may lead to inconsistencies among masters and slaves in STATEMENT
mode. The problem stems from the fact that although modifications done to
non-transactional tables on behalf of a transaction become immediately visible
to other connections, they do not immediately get into the binary log and therefore
consistency is broken. Although there may be issues in mixing T and N tables in
STATEMENT mode, there are safe combinations that clients find useful.
In this bug, we fix the following issue. Mixing N and T tables in multi-level
statements (e.g. a statement that fires a trigger) or multi-table statements (e.g.
update t1, t2 ...) was not handled correctly. In such cases, it was not possible
to tell whether a T table had been updated when the sequence of changes was N and T.
In a nutshell, the flag "modified_non_trans_table" alone was not enough to reflect
that both N and T tables were changed. To circumvent this issue, we check whether an
engine is registered in the handler's list and has changed something, which means that
a T table was modified.
Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or
ROW modes completely safe.
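A hedged sketch of the multi-table case described above, assuming t_innodb is
a transactional (T) table and t_myisam a non-transactional (N) one (names are
hypothetical):

  BEGIN;
  -- The N table change is visible to other connections at once, while the
  -- T table change only becomes permanent (and binlogged) at COMMIT.
  UPDATE t_myisam, t_innodb
     SET t_myisam.a = 1, t_innodb.a = 1
   WHERE t_myisam.id = t_innodb.id;
  COMMIT;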
Problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
Fixed by also scanning the NULL partition for RANGE/LIST
partitioning on the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
There was a problem since pruning uses the field
for comparison (while evaluate_join_record uses longlong),
resulting in pruning failures when comparing DATE to DATETIME.
The fix was to always compare DATE vs DATETIME as DATETIME,
by appending ' 00:00:00' to the DATE string,
and to add an optimization for comparing with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' ->
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead
of '>='.
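A hedged sketch of the pruning scenario (table and partition names are
hypothetical), assuming RANGE partitioning on TO_DAYS(); rows whose
partitioning value is NULL are stored in the first partition:

  CREATE TABLE t (d DATE)
  PARTITION BY RANGE (TO_DAYS(d))
    (PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
     PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t VALUES ('2001-01-01');
  -- TO_DAYS('2001-02-00') is NULL, so before the fix pruning could skip p0
  -- (the partition that also holds NULL values) and miss the matching row.
  SELECT * FROM t WHERE d < '2001-02-00';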
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
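A hedged sketch of the trigger case (the literal is hypothetical); the
decimal value has a fractional part longer than 30 digits, which exceeds the
maximum scale a DECIMAL column can have:

  -- Before the fix this could hit the debug assertion; with the fix the
  -- deduced DECIMAL column gets its fractional part truncated to fit.
  CREATE TABLE t1 SELECT 1.000000000000000000000000000000001 AS c;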
INSERT ... SELECT ...
Problem was that when bulk insert is used on an empty
table/partition, it disables the indexes for better
performance, but in this specific case it also tries
to read from that partition using an index, which is
not possible since it has been disabled.
Solution was to allow index reads on disabled indexes
if there are no records.
Also reverted the patch for bug#38005, since that was a workaround
in the partitioning engine instead of a fix in myisam.
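A hedged sketch, not the original repro, of a shape that could exercise the
described interaction, assuming a MyISAM partitioned table with one empty
partition and a SELECT part that reads from the same table through an index
(names are hypothetical):

  CREATE TABLE t (a INT, KEY (a)) ENGINE=MyISAM
  PARTITION BY RANGE (a)
    (PARTITION p0 VALUES LESS THAN (10),
     PARTITION p1 VALUES LESS THAN MAXVALUE);
  INSERT INTO t VALUES (1), (2), (3);
  -- p1 is empty, so bulk insert disables its indexes while the SELECT part
  -- wants an index read; before the fix this could fail.
  INSERT INTO t SELECT a + 10 FROM t WHERE a > 0;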
(temporary) TABLE, crash
Problem: if one has an open "HANDLER t1", a further "TRUNCATE t1"
doesn't close the handler and leaves the handler table hash in an
inconsistent state, which may lead to a server crash.
Fix: TRUNCATE should implicitly close all open handlers.
Doc. request: the fact should be described in the manual accordingly.
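A hedged sketch of the sequence (table name hypothetical):

  CREATE TABLE t1 (a INT);
  HANDLER t1 OPEN;
  TRUNCATE t1;            -- with the fix, this implicitly closes the handler
  HANDLER t1 READ FIRST;  -- before the fix, the stale handler entry could crash the server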
view that has Group By
The table access rights checking function check_grant() assumed
that no view is open when it is called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking whether a view is already materialized and, if it
is, checking table level grants using the original table name
(not that of the materialized temp table).
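A hedged sketch of the nested-view shape (database, view, and user names are
hypothetical); the inner view needs materialization because of its GROUP BY:

  CREATE VIEW db1.v_inner AS SELECT a, COUNT(*) AS cnt FROM db1.t1 GROUP BY a;
  CREATE VIEW db1.v_outer AS SELECT * FROM db1.v_inner;
  GRANT SELECT ON db1.v_outer TO 'someuser'@'localhost';
  -- Run as 'someuser': before the fix this could fail with a privilege error,
  -- because check_grant() looked at the materialized temp table instead of
  -- the original table name.
  SELECT * FROM db1.v_outer;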
on SHOW CREATE TRIGGER + MERGE table
Problem: SHOW CREATE TRIGGER erroneously relies on the fact
that a trigger has only one underlying table
(which is wrong for MERGE tables).
Fix: remove the erroneous assert().
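A hedged sketch (names hypothetical); the trigger's MERGE table has more
than one underlying table, which is what tripped the assert:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE tm (a INT) ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=LAST;
  CREATE TRIGGER trg BEFORE INSERT ON tm FOR EACH ROW SET @x = 1;
  SHOW CREATE TRIGGER trg;  -- previously hit the assert in debug builds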
The replication SQL thread did not set the database default charset on
thd->variables.collation_database properly when executing a LOAD DATA binlog event.
This bug can be reproduced by using the "LOAD DATA" command in STATEMENT mode.
This patch adds code to find the default character set of the current database and
assign it to thd->db_charset when the slave server begins to execute a relay log.
A test for this bug is added to rpl_loaddata_charset.test
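A hedged sketch of the master-side statements (database, table, and file
names are hypothetical); the slave SQL thread must pick up db1's default
character set when it applies the binlogged LOAD DATA:

  CREATE DATABASE db1 DEFAULT CHARACTER SET gbk;
  USE db1;
  CREATE TABLE t1 (c VARCHAR(20));
  -- With binlog_format=STATEMENT, the slave applies this event using the
  -- database default charset on thd->variables.collation_database.
  LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1;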