Problem:
If there is a predicate on a column referenced by MIN/MAX and
that predicate is not present in all the disjuncts over the
keyparts earlier in the compound index, Loose Index Scan does
not return correct results.
Analysis:
When Loose Index Scan is chosen, the range optimizer currently
groups the predicates that contain group keyparts separately from
those that contain the min/max keypart. It therefore applies all
the conditions on the group parts first to the fetched row.
Then, in the call to next_max, it processes the conditions
which have the min/max keypart.
For example, in the following query:
Select f1, max(f2) from t1 where (f1 = 10 and f2 = 13) or
(f1 = 3) group by f1;
The condition (f2 = 13) would be applied even to rows that
satisfy (f1 = 3), thereby giving wrong results.
Solution:
Do not choose Loose Index Scan for such cases. A new rule,
WA2, is introduced to take care of this.
WA2: "If there are predicates on C, these predicates must
be in conjunction with all predicates on all earlier keyparts
in I."
To do this, the fix reuses the function get_constant_key_infix().
Since this function would fail for all multi-range conditions, it
is rewritten to recognize when the sub-conditions are
equivalent across the disjuncts; in that case it now succeeds.
To achieve this, a new helper function called all_same() is
introduced.
The fix also moves the test of NGA3 up into its former only
caller, get_constant_key_infix().
mysql-test/r/group_min_max_innodb.result:
Added test result change for Bug#17909656
mysql-test/t/group_min_max_innodb.test:
Added test cases for Bug#17909656
sql/opt_range.cc:
Introduced Rule WA2 because of Bug#17909656
A typo led to the last list values (partition) not being included.
Also improved pruning to skip the last partition if it is not used.
rb#4762 approved by Aditya and Marko.
- Fixed a bug where we were using the wrong checksum algorithm when using VARCHAR with fixed length rows
- Ensure in myisampack that HA_OPTION_NULL_FIELDS is set for tables with null fields.
mysql-test/r/myisampack.result:
Updated results
mysql-test/t/myisampack.test:
Added more tests
storage/myisam/mi_open.c:
Use correct checksum algorithm when we have VARCHAR fields with fixed length records
storage/myisam/myisampack.c:
Ensure HA_OPTION_NULL_FIELDS is set for tables with null fields.
(This was not set by default for uncompressed tables without checksums, to keep MyISAM tables compatible with MySQL)
Problem: Uninstallation of semi sync plugin causes replication to
break.
Analysis: Semisync replication is a mutual agreement between
master and slave established when the connection (I/O thread) is
set up. Once the I/O thread is started, and if semisync is enabled
on both master and slave, the master appends a special magic
header to events using semisync plugin functions and sends them
to the slave. The slave expects each event to have that special
magic header format and reads those bytes using semisync plugin functions.
If users uninstall the plugin on the master while semisync
replication is in use, the slave gets confused while
interpreting the event's content because it expects the special
magic header at the beginning of the event. The slave SQL thread
stops with a "Missing magic number in the header" error.
A similar problem occurs if the plugin is uninstalled on the
slave while semisync replication is in use: the master sends
the events with the magic header, the slave does not know about
the added magic header and thinks that it received a corrupted
event. Hence the slave SQL thread stops with a "Found corrupted event" error.
Fix: Uninstallation of the semisync plugin is blocked when semisync
replication is in use and fails with an 'ER_UNKNOWN_ERROR' error.
To detect that semisync replication is in use, this patch uses
semisync status variable values.
> On Master, it checks for 'Rpl_semi_sync_master_status' to be OFF
before allowing the uninstallation of rpl_semi_sync_master plugin.
>> Rpl_semi_sync_master_status is OFF when
>>> there is no dump thread running
>>> there are no semisync slaves
> On Slave, it checks for 'Rpl_semi_sync_slave_status' to be OFF
before allowing the uninstallation of rpl_semi_sync_slave plugin.
>> Rpl_semi_sync_slave_status is OFF when
>>> there is no I/O thread running
>>> replication is asynchronous.
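A hedged illustration of the resulting behaviour on the master (status variable and
plugin names as referenced above):
SHOW STATUS LIKE 'Rpl_semi_sync_master_status';
-- if the value is ON (semisync replication in use), the following statement is now
-- rejected with ER_UNKNOWN_ERROR; it only succeeds once the status is OFF:
UNINSTALL PLUGIN rpl_semi_sync_master;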
This is a triple bug with one test suite:
1. Incorrect outer table detection
2. Incorrect leaf table processing for multi-update (should be full, like for usual updates and inserts)
3. ON condition fix_fields() should be called for all tables of the query.
do_start_slave_sql() is called from maybe_exit().
We should not recurse when maybe_exit() is called for an error during do_start_slave_sql().
Also remove a meaningless (but safe) "goto err".
ARE PERMANENTLY SKIPPED IN 5.5/5.6).
The problem was that some result files were not updated,
so the tests were skipped.
The fix is to record updated result files.
BREAKS RBR
Analysis:
--------
A table created using a query of the format:
CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
breaks the Row Based Replication.
The query above creates a table having a field of datatype
'bigint' with a display width of 3000 which is beyond the
maximum acceptable value of 255.
In the RBR mode, CREATE TABLE SELECT statement is
replicated as a combination of CREATE TABLE statement
equivalent to the one returned by SHOW CREATE TABLE and
row events for rows inserted. When this CREATE TABLE event
is executed on the slave, an error is reported:
Display width out of range for column 'a' (max = 255)
The following is the output of 'SHOW CREATE TABLE t1':
CREATE TABLE t1(`a` bigint(3000) DEFAULT NULL)
ENGINE=InnoDB DEFAULT CHARSET=latin1;
The problem is due to the combination of two facts:
1) The above CREATE TABLE SELECT statement uses the display
width of the result of DIV operation as the display width
of the column created without validating the width for out
of bound condition.
2) The DIV operation incorrectly returns the length of its first
argument as the display width of its result; thus allowing
creation of a table with an incorrect display width of 3000
for the field.
Fix:
----
This fix changes the DIV operation implementation to correctly
evaluate the display width of its result. We check if the
estimated width of DIV's result exceeds the maximum width for an
integer value (21) and, if so, set it to this maximum value.
This patch also fixes the maximum display width evaluation
for the DIV function when its first argument is in UCS2.
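A hedged sketch of the expected effect of the fix (the exact output may differ):
CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
SHOW CREATE TABLE t1;
-- the column definition should now be capped at the maximum integer display width,
-- i.e. something like `a` bigint(21) DEFAULT NULL rather than bigint(3000).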
Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64, PPC/PPC64
The recorded results for the failing tests were wrong.
They were introduced by the patch for
Bug#30946 mysqldump silently ignores --default-character-set when used with --tab
Correct results were returned for platforms where 'char' is implemented as unsigned.
This was reported as
Bug#46895 Test "outfile_loaddata" fails (reproducible)
Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
The patch for that bug fixed only parts of the problem,
leaving the incorrect results in the .result file.
Solution: use 'uchar' for field_terminator and line_terminator on all platforms.
Also: remove some unnecessary casts, leaving the ones we actually need.
MDEV-5980: EITS: if condition is used for REF access, its selectivity is still in filtered%
MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
MDEV-6003: EITS: ref access, keypart2=const vs keypart2=expr - inconsistent filtered% value
- Made a number of fixes in table_cond_selectivity() so that it returns
correct selectivity estimates.
- Added comments in related code.
Better comments
A typo in an include file caused the timeout counter to be 10x smaller than
expected. It was fixed in 5.5.20+ along with a bigger change; now also
fixing it in 5.1-5.3.
MDEV-5985: EITS: selectivity estimates look illogical for join and non-key equalities
MDEV-6003: EITS: ref access, keypart2=const vs keypart2=expr - inconsistent filtered% value
- Made a number of fixes in table_cond_selectivity() so that it returns
correct selectivity estimates.
- Added comments in related code.
The problem is the async binlog checkpointing; this could on rare
occasions occur too late, causing SHOW BINLOG EVENTS to show the
wrong events and causing a .result file difference.
Replication caches the character sets used in a query, to be able to quickly
reuse them for the next query in the common case of them not having changed.
In parallel replication, this caching needs to be per-worker-thread. The
code was not modified to handle this correctly, so the caching in one worker
could cause another worker to run a query using the wrong character set,
causing replication corruption.
Back-ported from the mysql 5.6 code line the patch with
the following comment:
Fix for Bug#11757108 CHANGE IN EXECUTION PLAN FOR COUNT_DISTINCT_GROUP_ON_KEY
CAUSES PEFORMANCE REGRESSION
The cause for the performance regression is that the access strategy for the
GROUP BY query changed from using "index scan" in mysql-5.1 to using "loose
index scan" in mysql-5.5. The index used for group by is unique and thus each
"loose scan" group will only contain one record. Since loose scan needs to
re-position on each "loose scan" group this query will do a re-position for
each index entry. Compared to just reading the next index entry as a normal
index scan does, the use of loose scan for this query becomes more expensive.
The cause for selecting to use loose scan for this query is that in the current
code when the size of the "loose scan" group is one, the formula for
calculating the cost estimates becomes almost identical to the cost of using
normal index scan. Differences in use of integer versus floating point arithmetic
can cause one or the other access strategy to be selected.
The main issue with the formula for estimating the cost of using loose scan is
that it does not take into account that it is more costly to do a re-position
for each "loose scan" group compared to just reading the next index entry.
Both index scan and loose scan estimate the CPU cost as:
"number of entries needed to read/scan" * ROW_EVALUATE_COST
The results from testing with the query in this bug indicate that the real
cost of doing a re-position is four to eight times higher than just reading the
next index entry. Thus, the CPU cost estimate for loose scan should be increased.
To account for the extra work to re-position in the index we increase the
cost for loose index scan to include the cost of navigating the index.
This is modelled as a function of the height of the b-tree:
navigation cost= ceil(log(records in table)/log(indexes per block))
* ROWID_COMPARE_COST;
This will avoid loose index scan being used for indexes where the "loose scan"
group contains very few index entries.
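As a rough worked example of the formula above (numbers are illustrative only): for a
table with 1,000,000 index entries and about 100 entries per index block, the navigation
cost is ceil(log(1000000)/log(100)) * ROWID_COMPARE_COST = ceil(3) * ROWID_COMPARE_COST
= 3 * ROWID_COMPARE_COST per "loose scan" group, added on top of the per-entry
ROW_EVALUATE_COST. This makes loose scan noticeably more expensive than a plain
index scan when each group contains only one or a few entries.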
back-ported the patch for bug #13256831 from mysql-5.6 code line.
Here's the comment this patch was provided with:
Fixed bug#13256831 - ERROR 1032 (HY000): CAN'T FIND RECORD.
This bug only occurs if a user tries to update a base table using
an updatable view and this view was created as a join for which
the clause 'WITH CHECK OPTION' was specified.
The reason for the bug was that when such an update was
executed, row positions were not properly handled for tables
that were not updated but had constraints that had to be
checked due to the 'WITH CHECK OPTION' clause.
In other words, when such an update is executed, then for tables
specified in the view definition and also listed in the
'WITH CHECK OPTION' clause, the positioning to the
row being updated is not performed.
Both bugs are caused by the same problem: the function optimize_cond() should
update the value of *cond_equal rather than the value of join->cond_equal,
because it is called not only for the WHERE condition, but for the HAVING
condition as well.
WRITTEN WHILE ROWS REMAINS
Problem:
========
When TRUNCATE TABLE fails on a transactional engine,
even though the operation errors out we still
continue and log it to the binlog. Because of this the master
keeps its data, but the truncate is written to the binary log,
which causes inconsistency.
Analysis:
========
Truncate table can happen either through drop and re-create of
the table or by deleting rows. In the second case the existing
code is written in such a way that even if an error occurs
the truncate statement is always binlogged, which is not
correct.
Binlogging of TRUNCATE TABLE statement should check whether
truncate is executed "transactionally or not". If the table
is transaction based we log the TRUNCATE TABLE only on
successful completion.
If the table is non-transactional, partial changes may have been
made when an error occurs; in such cases we log the statement in
spite of the error, as some of the rows might have
been removed and the statement has to be sent to the slave.
Fix:
===
Using the table handler, we identify whether the truncate is being
executed in transactional mode or not, and the statement
is binlogged accordingly.
mysql-test/suite/binlog/r/binlog_truncate_kill.result:
Added test case to test the fix for Bug#17942050.
mysql-test/suite/binlog/t/binlog_truncate_kill.test:
Added test case to test the fix for Bug#17942050.
sql/sql_truncate.cc:
Check whether truncation is successful or not and return appropriate
return values so that binlogging can be done based on that.
sql/sql_truncate.h:
Added a new enum.
Add a testcase and backport this fix:
Bug#14338686: MYSQL IS GENERATING DIFFERENT AND SLOWER
(IN NEWER VERSIONS) EXECUTION PLAN
PROBLEM:
While checking for an index to sort for the order by clause
in this query
"SELECT datestamp FROM contractStatusHistory WHERE
contract_id = contracts.id ORDER BY datestamp asc limit 1;"
we do not calculate the number of rows to be examined correctly.
As a result we choose index 'idx_contractStatusHistory_datestamp'
defined on the 'datestamp' field, rather than choosing index
'contract_id'. And hence the lower performance.
ANALYSIS:
While checking if an index is present to return the records in
sorted order (datestamp), we consider the selectivity of the
'ref_key' ('contract_id' here) using 'table->quick_condition_rows'.
'ref_key' here can be an index from 'REF_ACCESS' or from 'RANGE'.
As this is a 'REF_ACCESS', 'table->quick_condition_rows' is not
set to the actual value, which is 2. Instead it is set to the number
of tuples present in the table, indicating that every row that
is selected would satisfy the condition present in the query.
Hence the selectivity becomes 1 even when we choose the index
on the ORDER BY column instead of the join condition.
But in reality, as only 2 rows satisfy the condition, we need to
examine half of the entire data set to get one tuple when we
choose the index on the ORDER BY column.
Had we chosen the 'REF_ACCESS' we would have examined only 2 tuples.
Hence the delay in executing the specified query.
FIX:
While calculating the selectivity of the ref_key:
For REF_ACCESS consider quick_rows[ref_key] if range
optimizer has an estimate for this key. Else consider
'rec_per_key' statistic.
For RANGE ACCESS consider 'table->quick_condition_rows'.
- Make JOIN::const_key_parts include keyparts for which
the WHERE clause has an equality of the form
"t.key_part=reference_outside_this_select"
- This allows filesort to be avoided in some cases (and also
avoids a difficult choice between using filesort or using an index);
see the example below.
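A hedged sketch of the pattern (hypothetical tables t1, t2 and a composite index on t2(kp1, kp2)):
select (select t2.c from t2
        where t2.kp1 = t1.a       -- equality with a reference outside this select
        order by t2.kp2 limit 1)  -- kp1 is constant per outer row, so the index on
from t1;                          -- (kp1, kp2) can return rows in kp2 order without filesort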
The problem was that the view substitutes its fields (on prepare) and reverts the change
after execution. After prepare, during optimization, the exists2in conversion substituted
the arguments of '=' with the constant '1', but then one of the arguments of '=' was
reverted back to the view field reference. This led to an incorrect WHERE condition on
the second execution.
To fix the problem we replace the whole '=' with '1' permanently.
MDEV-5984: EITS: Incorrect filtered% value for single-table select with range access
- Fix calculate_cond_selectivity_for_table() to work correctly with range accesses
over multi-component keys:
= First, take the selectivity of all possible range scans into account. Remember which
fields were used by the range scans.
= Then, calculate selectivity produced by sargable predicates on fields. If a
field was used in a possible range access, assume its selectivity is already
taken into account.
- Fix table_cond_selectivity(): when quick select is used, selectivity of
COND(table) is taken into account in matching_candidates_in_table(). In
table_cond_selectivity() we should not apply it for the second time.
MDEV-5971 Asymmetry between CAST(DATE'2001-00-00') to INT and TO CHAR in prepared statements
Consistently set the maybe_null flag: even a not-NULL temporal literal may become NULL
in a restrictive sql_mode.
This will help the following replication scenario:
MariaDB 10.0 master (statement replication) -> MariaDB 10.0 slave (row based replication) -> MySQL or MariaDB 5.x slave
mysql-test/r/mysqld--help.result:
Updated help text
mysql-test/suite/rpl/r/create_or_replace_mix.result:
Added more tests
mysql-test/suite/rpl/r/create_or_replace_row.result:
Added more tests
mysql-test/suite/rpl/r/create_or_replace_statement.result:
Added more tests
mysql-test/suite/rpl/t/create_or_replace.inc:
Added more tests
sql/handler.h:
Added org_options so that we can detect what came from the query and what was possibly added later.
sql/sql_insert.cc:
Only write CREATE OR REPLACE if it was originally specified or if we delete a conflicting table as part of the create
sql/sql_parse.cc:
Remember original create options
sql/sql_table.cc:
Only write CREATE OR REPLACE if it was originally specified or if we delete a conflicting table as part of the create
sql/sys_vars.cc:
Updated help text
Backported multi_update_check_table_access() from 5.6
The code is slightly different in MariaDB, because we instantiate fields in merged tables earlier.
mysql-test/mysql-test-run.pl:
Fixed comment
mysql-test/r/view_grant.result:
Merged test case from 5.6
mysql-test/t/view_grant.test:
Merged test case from 5.6
sql/sql_parse.cc:
Reset orig_want_privilege as this will be rechecked later.
If not, we will have a problem in mysql_multi_update_prepare() for the call to mysql_handle_derived()
sql/sql_update.cc:
Backport multi_update_check_table_access() from 5.6
Fixed use-copy option to mysql-test-run
mysql-test/mysql-test-run.pl:
Fixed use-copy and added comment
sql/log.cc:
Make copy_up_file_and_fill() safe for disk full
mysql-test/r/create_or_replace.result:
More tests for create or replace
mysql-test/t/create_or_replace.test:
More tests for create or replace
sql/log.cc:
Don't use binlog_hton if the binlog is not enabled
sql/sql_base.cc:
We have to call restart_trans_for_tables also if tables were not locked with LOCK TABLES.
If not, we will get a crash in TokuDB
sql/sql_insert.cc:
Don't call binlog_reset_cache() if we don't have binary log open
sql/sql_table.cc:
Don't log to binary log if not open
Better test of whether we were using create or replace ... select
storage/tokudb/mysql-test/tokudb_mariadb/r/create_or_replace.result:
More tests for create or replace
storage/tokudb/mysql-test/tokudb_mariadb/t/create_or_replace.test:
More tests for create or replace
Copied relevant test cases and code from the MySQL 5.6 tree
Testing of my_use_symdir moved to engines.
mysql-test/r/partition_windows.result:
Updated result file
mysql-test/suite/archive/archive_no_symlink-master.opt:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.test:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.test:
Testing of symlinks with archive
sql/log_event.cc:
Updated comment
sql/partition_info.cc:
Don't test my_use_symdir here
sql/sql_parse.cc:
Updated comment
sql/sql_table.cc:
Don't test my_use_symdir here
sql/table.cc:
Added more DBUG_PRINT
storage/archive/ha_archive.cc:
Give warnings for index_file_name and if we can't use data directory
storage/myisam/ha_myisam.cc:
Give warnings if we can't use data directory or index directory
Bug #3329 Incomplete lower_case_table_names=2 implementation
The problem was that check_db_name() converted database names to lower case also in the case of lower_case_table_names=2.
Fixed by removing the conversion in check_db_name() for lower_case_table_names=2 and instead converting the db name to
lower case at the same places where table names are converted.
Also fixed a bug where SHOW CREATE DATABASE FOO showed information for database 'foo'.
I also removed some checks of lower_case_table_names when it was enough to use table_alias_charset.
mysql-test/mysql-test-run.pl:
Added --use-copy argument to force mysql-test-run to copy files instead of doing symlinks. This is needed when you run
with the test directory on another file system
mysql-test/r/lowercase_table.result:
Updated results
mysql-test/r/lowercase_table2.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_innodb.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_memory.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_myisam.result:
Updated results
mysql-test/t/lowercase_table.test:
Added tests with mixed case databases
mysql-test/t/lowercase_table2.test:
Added tests with mixed case databases
sql/log.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_base.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_db.cc:
Use cmp_db_names() for checking if current database changed.
mysql_rm_db() now converts db to lower case if lower_case_table_names was used.
Changed database options cache to use table_alias_charset. This fixed a bug where SHOW CREATE DATABASE showed wrong information.
sql/sql_parse.cc:
Also change the db name to lower case when file names are changed.
No need to store a copy of the database name anymore when lower_case_table_names == 2, as check_db_name() doesn't convert in this case.
Updated arguments to mysqld_show_create_db().
When adding a table to TABLE_LIST, also convert the db name to lower case if needed (the same way as we do with table names).
sql/sql_show.cc:
mysqld_show_create_db() now also takes the original name as an argument, for output to the user.
sql/sql_show.h:
Updated prototype for mysqld_show_create_db()
sql/sql_table.cc:
In mysql_rename_table(), do the same conversions to the database name as we do for the file name
- Histogram::find_bucket() should not walk off the end of the value range.
- Address review feedback in Histogram::point_selectivity(): different handling
for zero-width buckets, and explanations.
The reason was that a couple of variables that hold the number of rows used to calculate buffers were uint, which caused an overflow.
Fixed by changing variables that can hold a number of rows from uint to ulong, and also added a cast for this test.
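As a rough illustration of the kind of overflow (numbers made up): a buffer size computed
as rows * record_length with 300,000,000 rows of 16 bytes gives 4,800,000,000, which no
longer fits in a 32-bit uint (maximum 4,294,967,295) and wraps around to a far too small
value; with the wider ulong type used after the fix the calculation no longer wraps.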
include/heap.h:
Reorder to get better alignment. Changed variables that could hold number of rows from uint to ulong
mysql-test/suite/heap/heap.result:
Added test case
mysql-test/suite/heap/heap.test:
Added test case
mysql-test/suite/plugins/t/server_audit.test:
Added sleep as we want to have disconnect logged before we try a new connect
storage/heap/ha_heap.cc:
Changed variables that could hold number of rows from uint to ulong
Limit number of rows to 4G (as most of the variables that holds rows are ulong anyway)
reset records_changed when key_stat_version is changed to not cause increments for every row changed
storage/heap/ha_heap.h:
changed records_changed to ulong as this can get big
storage/heap/hp_create.c:
Changed variables that could hold number of rows from uint to ulong
Added cast (fixed the original bug)
storage/heap/hp_delete.c:
Changed variables that could hold number of rows from uint to ulong
storage/heap/hp_open.c:
Removed not needed cast
storage/heap/hp_write.c:
Changed variables that could hold number of rows from uint to ulong
support-files/compiler_warnings.supp:
Removed extra : from suppression
- Fix Histogram::point_selectivity() to work in the case where the
passed value_pos=0 (or 1) and the first (or the last) bucket in the
histogram has zero value-range (i.e. one value).
[Attempt #2]
- Use a new selectivity calculation formula in Histogram::point_selectivity().
The formula is different from the old one because it was developed from scratch,
and it doesn't have any possible division-by-zero problems.
Don't use the mysql-5.6 change.
Correct fix: zero out the rounded tail after the number was shifted because
of the carry digit (otherwise the carry digit would be zeroed out too).
Let TABLE_SHARE::tdc.free_tables, TABLE_SHARE::tdc.all_tables,
TABLE_SHARE::tdc.flushed and corresponding invariants be protected by
per-share TABLE_SHARE::tdc.LOCK_table_share instead of global LOCK_open.
Now if CREATE OR REPLACE fails but we have deleted a table already, we will generate a DROP TABLE in the binary log.
This fixes this issue.
In addition, for a failing CREATE OR REPLACE TABLE ... SELECT we don't generate a log of all the inserted rows, only the DROP TABLE.
I added code for not logging DROP TEMPORARY TABLE for tables where the CREATE TABLE was not logged. This code will be activated in 10.1
by removing the code protected by DONT_LOG_DROP_OF_TEMPORARY_TABLES.
mysql-test/suite/rpl/r/create_or_replace_mix.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_row.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_statement.result:
More test cases
mysql-test/suite/rpl/t/create_or_replace.inc:
More test cases
sql/log.cc:
Added binlog_reset_cache() to clear the binary log.
sql/log.h:
Added prototype
sql/sql_insert.cc:
If CREATE OR REPLACE TABLE ... SELECT fails:
- Don't log anything if nothing changed
- If table was deleted, log a DROP TABLE.
Remember whether the creation of temporary tables was logged.
sql/sql_table.cc:
Added log_drop_table()
Remember whether the creation of temporary tables was logged.
If CREATE OR REPLACE TABLE ... SELECT fails and a table was deleted, log a DROP TABLE.
sql/sql_table.h:
Added prototype
sql/sql_truncate.cc:
Remember whether the creation of temporary tables was logged.
sql/table.h:
Added table_creation_was_logged
mysql-test/r/create_or_replace2.result:
Added test case
mysql-test/t/create_or_replace.test:
Fixed comment
mysql-test/t/create_or_replace2.test:
Added test case
sql/sql_base.cc:
Safety fix:
Don't let threads with query_id=0 free temporary tables, as this may free temporary tables not in use.
This is mostly the case for the slave io threads, as most other threads have thd->query_id != 0.
sql/sql_table.cc:
Added comment.
Ignore kill when opening temporary table for CREATE ... LIKE.
This fixed the original issue
- MDEV-5689 ExtractValue(xml, 'substring(/x,/y)') crashes
- MDEV-5709 ExtractValue() with XPath variable references returns wrong result.
Description:
1. The main problem was that nodeset_func->fix_fields() was
called in Item_func_xml_extractvalue::val_str() and
Item_func_xml_update::val_str(), which in some cases led to
execution of the XPath engine *before* having a parsed XML value.
Moved to Item_xml_str_func::fix_fields().
2. Cleanup: added a new method Item_xml_str_func::fix_fields() and moved
most of the code from Item_xml_str_func::fix_length_and_dec()
to Item_xml_str_func::fix_fields(), to follow the usual Item layout.
3. Cleanup: a parsed XML value is useless without the raw XML value
it was built from.
Previously the parsed and the raw values were stored in separate String
instances. It was hard to follow how they were synchronized.
Added a helper class XML which contains both parsed and raw values.
Makes things easier to read and modify.
4. MDEV-5709: const_item() could incorrectly return a "true"
result when the XPath expression contains user/SP variable references.
Now nodeset_func->const_item() is also taken into account to
catch such cases.
5. Minor code enhancements.
A huge number in the "day" part of an interval made the code
return a negative date erroneously. A test is added to return an error on a too
large "day" value.
After constant table row substitution the WHERE condition may be converted
to always true. The function calculate_cond_selectivity_for_table() should
take this possibility into account.
Due to how gap locks work, two transactions could group commit together on the
master, but get lock conflicts and then deadlock due to different thread
scheduling order on slave.
For now, remove these deadlocks by running the parallel slave in READ
COMMITTED mode. And let InnoDB/XtraDB allow statement-based binlogging for the
parallel slave in READ COMMITTED.
We are also investigating a different solution long-term, which is based on
relaxing the gap locks only between the transactions running in parallel for
one slave, but not against possibly external transactions.
When a transaction fails in parallel replication, it should signal the error
to any following transactions doing wait_for_prior_commit() on it. But the
code for this was incorrect, and would not correctly remember a prior error
when sending the signal. This caused corruption when slave stopped due to an
error.
Fix by remembering the error code when we first get an error, and passing the
saved error code to wakeup_subsequent_commits().
Thanks to nanyi607rao who reported this bug on
maria-developers@lists.launchpad.net and analysed the root cause.
LOCAL AND IMPORT ERRORS
Description:
-----------
This bug happens because the current algorithm is designed
so that, in the case of a LOCAL load of data, when an error occurs the
remaining part of the file is read in order to return the proper
error message to the client side.
But the problem with the current implementation is that the data stream
from the client side is cleared only in the case where line delimiters
exist, which is not the case with, for example, fixed width
fields.
Fix:
----
Ported the patch provided by Sinisa Milivojevic in the bug report for this
issue to 5.5+ versions.
As part of this patch code is changed to clear the data stream
by calling new member function "READ_INFO::skip_data_till_eof".
mysql-test/suite/rpl/t/rpl_000011-slave.opt:
Renamed test case as it's the slave that needs to be restarted
support-files/compiler_warnings.supp:
Fixed bad characters in suppression
mysql-test/suite/rpl/t/rpl_000011-master.opt:
Added master.opt file to ensure that other tests don't interfere with rpl_000011
plugin/server_audit/server_audit.c:
Fixed compiler error on Solaris
support-files/compiler_warnings.supp:
Ignore warning from xtradb
- If an UPDATE 1) modifies the key it is using, and 2) has an ORDER BY ... LIMIT
which matches the key it is using, then we should use "Using buffer", not
"Using filesort".
The reason for the bug was an optimization for higher connect speed that moved where the global status is updated,
but forgot to update the status when a slave thread dies.
Fixed by adding thd->add_status_to_global() before deleting the slave thread's thd.
mysys/my_delete.c:
Added missing newline
sql/mysqld.cc:
Use add_status_to_global()
sql/slave.cc:
Added missing add_status_to_global()
sql/sql_class.cc:
Use add_status_to_global()
sql/sql_class.h:
Simplify adding local status to global by adding add_status_to_global()
Automatic merge, except for server_audit.cc, which had to be modified slightly.
Changes to xtradb and innobase were ignored as these made no sense for 10.0
- With big_tables=ON, the materialized table will use the Aria (or MyISAM) SE, which
allows prefix key reads. However, the temp. table has rec_per_key=NULL, which
causes the optimizer to crash when attempting to read index statistics for a
prefix index read.
- Fixed by providing a rec_per_key array with zeros (i.e. "no statistics data")
mysql-test/r/create_or_replace.result:
Added test of releasing of metadata locks
mysql-test/t/create_or_replace.test:
Added test of releasing of metadata locks
sql/handler.h:
Added marker if table was deleted as part of CREATE OR REPLACE
sql/sql_base.cc:
Added Locked_tables_list::unlock_locked_table()
sql/sql_class.h:
New prototypes
sql/sql_insert.cc:
Unlock metadata locks for deleted table in case of error. Also do unlock tables if this was the only locked table.
sql/sql_table.cc:
Unlock metadata locks for deleted table in case of error. Also do unlock tables if this was the only locked table.
Remove memory warnings if mysql client aborts early
Changed copyright for clients
client/mysql.cc:
Free memory if get_options fails, so that we don't get warnings from safemalloc
include/welcome_copyright_notice.h:
Added SkySQL to client copyrights
mysql-test/valgrind.supp:
Added suppressions for memory leaks from dlopen() for OpenSUSE 12.3
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.result:
Suppress warning
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.test:
Suppress warning
The problem was that a big record was allocated on the stack, which caused the stack to run out.
Fixed by using my_safe_alloca() instead of my_alloca() when allocating records.
Now only records <= 16384 are allocated on the stack.
mysql-test/r/stack-crash.result:
Added test case
mysql-test/t/stack-crash.test:
Added test case
storage/maria/ma_blockrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/ma_dynrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/maria_def.h:
Added MARIA_MAX_RECORD_ON_STACK
storage/maria/maria_pack.c:
Use my_safe_alloca() instead of my_alloca()
Before, the arrival of the same GTID twice in multi-source replication
would cause a double-apply, or in gtid strict mode an error.
Keep the behaviour, but add an option --gtid-ignore-duplicates which
allows duplicates to be handled correctly, ignoring all but the first.
This relies on the user ensuring correct configuration so that
sequence numbers are strictly increasing within each replication
domain; then duplicates can be detected simply by comparing the
sequence numbers against what is already applied.
Only one master connection (but possibly multiple parallel worker
threads within that connection) is allowed to apply events within
one replication domain at a time; any other connection that
receives a GTID in the same domain either discards it (if it is
already applied) or waits for the other connection to not have
any events to apply.
Intermediate patch, as proof-of-concept for testing. The main limitation
is that currently it is only implemented for parallel replication,
@@slave_parallel_threads > 0.
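A hedged usage sketch (assuming the option is also exposed as the global system
variable gtid_ignore_duplicates, names as referenced above):
SET GLOBAL gtid_ignore_duplicates = 1;  -- ignore all but the first arrival of each GTID
SET GLOBAL slave_parallel_threads = 4;  -- currently required, as the handling is only
                                        -- implemented for parallel replication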
The problem was that a big record was allocated on the stack, which caused the stack to run out.
Fixed by using my_safe_alloca() instead of my_alloca() when allocating records.
Now only records <= 16384 are allocated on the stack.
mysql-test/r/stack-crash.result:
Added test case
mysql-test/t/stack-crash.test:
Added test case
storage/maria/ma_blockrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/ma_dynrec.c:
Use my_safe_alloca() instead of my_alloca()
storage/maria/maria_def.h:
Added MARIA_MAX_RECORD_ON_STACK
storage/maria/maria_pack.c:
Use my_safe_alloca() instead of my_alloca()
The issue was that the create...trigger part of the test suite used a debug_sync point that was never triggered before (in other words, a wrong, meaningless test).
With the new create ... replace code the debug sync point is triggered and the test case could not handle that.
I fixed this by adding a wait and go for the debug syncpoint in the test.
Removed some compiler warnings from mysql_cond_timedwait
include/mysql/psi/mysql_thread.h:
Removed compiler warnings
mysql-test/r/create-big.result:
New test result
mysql-test/t/create-big.test:
Fixed test case as create_table_select_before_check_if_exists was not previously triggered by the code.
When an rpl_group_info object was returned from the free list,
rgi->deferred_events_collecting and rgi->deferred_events were not correctly
re-initialised. Additionally, rgi->deferred_events was incorrectly freed in
free_rgi(), which causes unnecessary malloc/free (or a crash when the re-init is not
done).
Thanks to user nanyi607rao, who reported this bug on maria-developers@.
The calls of the function remove_eq_conds() may change the and/or structure
of the where conditions. So JOIN::equal_cond should be updated for non-recursive
calls of remove_eq_conds().
Reduced number of my_hash_sort_bin() calls from 4 to 1 per query.
Reduced number of memory accesses done by my_hash_sort_bin().
Details:
- let MDL subsystem use pre-calculated hash value for hash
inserts and deletes
- let table cache use pre-calculated MDL hash value
- MDL namespace is excluded from hash value calculation, so that
hash value can be used by table cache as is
- hash value for MDL is calculated as resulting hash value + MDL
namespace
- extended hash implementation to accept user defined hash function
The problem was when a GTID event was part of a group commit, and so contained
a commit id. The code that replaces GTID with a BEGIN event for old slaves did
not correctly handle this case.
Fix the code so that the GTID with commit id can also be properly replaced
with a BEGIN query event. The extra two bytes are in the BEGIN event replaced
with a dummy, empty time zone string.
- Make do_fill_table() use join_tab->cache_select->cond if it is present. When
join_tab->cache_select->cond is present, join_tab->select_cond doesn't have any
conditions that are usable for I_S optimizations.
Older master has no GTID events, so such events are not available for
deciding on scheduling of event groups and so on.
With this patch, we run such events from old masters single-threaded, in the
sql driver thread.
This seems better than trying to make the parallel code handle the data from
older masters; while possible, this would require a lot of testing (as well as
possibly some extra overhead in the scheduling of events), which hardly seems
worthwhile.
Use the Alter_inplace_info::ALTER_COLUMN_OPTION flag if
field/column flags were altered.
Change ha_example to use check_if_supported_inplace_alter() instead
of the obsolete check_if_incompatible_data()
- Table locks now end with state "After table lock"
- Open table now ends with state "After opening tables"
- All calls to close_thread_tables(), not only from mysql_execute_command(), have state "closing tables"
- Added state "executing" for mysql admin commands, like CACHE INDEX, REPAIR TABLE etc.
- Added state "Finding key cache" for CACHE INDEX
- Added state "Filling schema table" when we generate temporary table for SHOW commands and information schema.
Other things:
Add limit from innobase for thread_sleep_delay. This fixed a failing test case.
Added db.opt to support-files to make 'make package' work
mysql-test/suite/funcs_1/datadict/processlist_val.inc:
Use new state
mysql-test/suite/funcs_1/r/processlist_priv_no_prot.result:
Updated test result because of new state
mysql-test/suite/funcs_1/r/processlist_val_no_prot.result:
Updated test result because of new state
sql/CMakeLists.txt:
Have option files in support-files
sql/lock.cc:
Added new state 'After table lock'
sql/sql_admin.cc:
Added state "executing" and "Sending data" for mysql admin commands, like CACHE INDEX, REPAIR TABLE etc.
Added state "Finding key cache"
sql/sql_base.cc:
Opening tables now ends with state "After table lock", instead of NULL
sql/sql_parse.cc:
Moved state "closing tables" to close_thread_tables()
sql/sql_show.cc:
Added state "Filling schema table" when we generate temporary table for SHOW commands and information schema.
storage/xtradb/buf/buf0buf.c:
Removed compiler warning
storage/xtradb/handler/ha_innodb.cc:
Add limit from innobase for thread_sleep_delay. This fixed a failing test case.
support-files/db.opt:
cmakes needs this to create data/test directory
The problem was that on opening, all tables get TL_WRITE; then local tables which are not updated are set to TL_READ, but the underlying tables of a MyISAM MERGE table were left untouched.
The partition engine does not have this problem.
Not all places where lock_type is assigned were changed, because the call of the virtual function is not cheap.
Fixed a crashing bug for union queries where there were no real tables.
mysql-test/r/group_by.result:
Added test case
mysql-test/t/group_by.test:
Added test case
sql/db.opt:
Removed generated file
sql/item.cc:
Handled the case when table_list->pos_in_tables is not set. Can only happen when there are no real tables in the query