Fixed an unclean prototype declaration. According to ISO/IEC 9899:TC2:
...
10. The special case of an unnamed parameter of type void as the only item in
the list specifies that the function has no parameters.
...
14. An identifier list declares only the identifiers of the parameters of the
function. An empty list in a function declarator that is part of a
definition of that function specifies that the function has no parameters.
The empty list in a function declarator that is not part of a definition of
that function specifies that no information about the number or types of the
parameters is supplied. 124)
...
6.11.6 Function declarators
The use of function declarators with empty parentheses (not prototype-format
parameter type declarators) is an obsolescent feature.
...
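For illustration, a minimal example of the difference the quoted
clauses describe (the function names are arbitrary):

  int f();      /* empty parentheses: no information about the number
                   or types of the parameters; obsolescent (6.11.6) */
  int g(void);  /* prototype: g has no parameters */
  int h(int x); /* prototype: h has exactly one int parameter */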
Patch contributed by Michal Hrusecky.
The test restarts the server and expects that the feedback plugin
will send a report on shutdown, and will write about it in the error
log. But the server is only given 10 sec to shut down properly,
which is not always enough.
Added a parameter to restart_mysqld.inc and set it to a larger
value in feedback_plugin_send.
The culprit is the feedback_plugin_load test, which is run both separately
and from inside feedback_plugin_send.test. The test queries I_S.FEEDBACK,
and every time it does, the 'FEEDBACK used' value increases.
Fixed by checking that the value has increased instead of recording the
actual value in the result file.
The real problem is that when innodb.xa_recovery test intentionally
crashes the server, system tables can be opened and marked as crashed,
and the next test in line gets blamed for the error which appears
in the error log.
Fixed by flushing the tables before crashing the server.
Use sync_with_master_gtid.inc instead of --sync_with_master. The latter is
not correct because the test case uses RESET MASTER; this invalidates the
existing binlog positions on the slave.
In this particular case, there was a small window where --sync_with_master
could trigger too early (on the old position), causing the test case to miss
one event.
The crash was caused by range optimizer using RANGE_OPT_PARAM::min_key
(and max_key) to store keys. Buffer size was a good upper bound for
range analysis and partition pruning, but not for EITS selectivity
calculations.
Fixed by making these buffers variable-size. The sizes are calculated
from [pseudo]indexes used for range analysis.
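A minimal sketch of the approach, assuming hypothetical member and
variable names (the actual RANGE_OPT_PARAM fields differ):

  /* Size min_key/max_key from the indexes actually used for range
     analysis, instead of a fixed compile-time upper bound (sketch). */
  size_t key_buf_len= 0;
  for (uint i= 0; i < n_used_keys; i++)
    key_buf_len= MY_MAX(key_buf_len, used_key_info[i]->key_length);
  param->min_key= (uchar*) alloc_root(param->mem_root, key_buf_len * 2);
  param->max_key= param->min_key + key_buf_len;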
Adjust the test case to try to avoid some sporadic failures on loaded test
hosts.
The wait for the SQL thread to stop may complete before the worker
threads have completed.
The code was using the wrong variable when comparing the binlog name
for the UNTIL position. This could cause the comparison to fail after
binlog rotation, in turn causing the UNTIL clause to not trigger slave
stop.
If a transaction T1 needs to wait for a transaction T2, T2's commit will
skip the normal binlog_commit_wait_usec delay, in order not to needlessly
stall throughput.
This works by checking if T2 is already ready to commit. If so, it is woken
up. If not, we set a flag in T2 so that when it gets ready to commit, it
will do so immediately.
But there was a potential race due to insufficient locking, if T2 gets ready
to commit just at the point where T1 does the check. If the race hits, the
wakeup (and early commit) of T2 might be lost.
The race is only theoretical (from code inspection, no known test case), but
seems best to fix it anyway, by properly locking LOCK_prepare_ordered around
the check.
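A minimal sketch of the fixed pattern, assuming hypothetical flag names
(only LOCK_prepare_ordered is from the actual code):

  /* T1, checking whether T2 can be woken for early commit: */
  mysql_mutex_lock(&LOCK_prepare_ordered);
  if (T2->ready_to_commit)
    wake_up(T2);                   /* T2 commits immediately */
  else
    T2->skip_commit_wait= true;    /* T2 will commit at once when ready */
  mysql_mutex_unlock(&LOCK_prepare_ordered);

  /* T2, on becoming ready to commit, sets its flag under the same
     mutex, so the check and the update cannot interleave: */
  mysql_mutex_lock(&LOCK_prepare_ordered);
  T2->ready_to_commit= true;
  skip_delay= T2->skip_commit_wait;
  mysql_mutex_unlock(&LOCK_prepare_ordered);

With both sides under LOCK_prepare_ordered, the wakeup can no longer be
lost.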
The assertion is there to catch cases where we rollback while
mark_start_commit() is active. This can allow following event groups
to be replicated too early, causing conflicts.
But in this case, we have an _explicit_ ROLLBACK event in the binlog,
which should not assert.
We fix this by delaying the mark_start_commit() in the explicit
ROLLBACK case. It seems safest to delay this in the ROLLBACK case
anyway, and there should be no reason to try to optimise this corner
case.
Problem: Not all permanent Item_direct_view_ref objects were in the permanent list of used items of the view.
Solution: Detect the creation of a permanent view/derived-table reference and put it in the permanent list at once.
The test failed because it hit net_write_timeout. That can happen
under various circumstances and is not what the test case tests,
so the timeout is now set to a larger value.
PROBLEMS
Description:- The server variable "--lower_case_table_names",
when set to "0" on Windows, a platform that does not support
case-sensitive file operations, leads to problems. A warning
message is printed in the error log while starting the
server with "--lower_case_table_names=0". Also, according to
the documentation, setting "lower_case_table_names" to "0"
on a case-insensitive filesystem might lead to index
corruption.
Analysis:- The problem reported in the bug is this: creating
an InnoDB table 'a' and executing the query "INSERT
INTO a SELECT a FROM A;" on a server started with
"--lower_case_table_names=0" and running on a
case-insensitive filesystem sends InnoDB into an endless
loop. The optimizer thinks that "a" and "A" are two
different tables, as the variable "lower_case_table_names"
is set to "0". As a result, the optimizer comes up with a
plan which does not need a temporary table. If the same
table is used in both the SELECT and the INSERT, a temporary
table is needed. This incorrect plan leads to infinite
insertions.
Fix:- If the server is started with
"--lower_case_table_names" set to 0 on a case-insensitive
filesystem, the error "The server option
'lower_case_table_names' is configured to use case sensitive
table names but the data directory is on a case-insensitive
file system which is an unsupported combination. Please
consider either using a case sensitive file system for your
data directory or switching to a case-insensitive table name
mode." is printed in the server error log and the server
exits.
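A sketch of the shape of that startup check (the helper name
test_if_case_insensitive() and its use here are illustrative
assumptions, not necessarily the exact server code):

  /* At startup: refuse lower_case_table_names=0 when the data
     directory sits on a case-insensitive file system (sketch). */
  if (lower_case_table_names == 0 &&
      test_if_case_insensitive(mysql_real_data_home) == 1)
  {
    sql_print_error("The server option 'lower_case_table_names' is "
                    "configured to use case sensitive table names but "
                    "the data directory is on a case-insensitive file "
                    "system which is an unsupported combination.");
    unireg_abort(1);  /* exit instead of risking index corruption */
  }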
- Make the semi-join optimizer not choose LooseScan when 1) the index
  is not covering and 2) a full index scan would be required (see the
  sketch after this list).
- Make sure that the code in make_join_select() that may change a full
  index scan into a range scan is not invoked when the table uses a
  full scan.
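A sketch of the added guard, with hypothetical variable names:

  /* Do not pick LooseScan if the index does not cover the needed
     columns and access would degenerate into a full index scan. */
  if (!table->covering_keys.is_set(keynr) && is_full_index_scan)
    continue;  /* skip this key as a LooseScan candidate */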
DESCRIPTION
===========
Inability of the MySQL LOAD XML command to handle empty XML
tags, i.e. <row><tag/></row>. The behaviour is also wrong
(and different from the above) when there is a space in the
empty tag, i.e. <row><tag /></row>.
ANALYSIS
========
In read_xml(), in the case where we encounter a close tag
('/'), we decrease 'level' blindly, which is wrong. When it
is an empty tag without a space (the succeeding char is
'>'), we need to skip the decrement. In other words,
whenever we hit a close tag ('/'), decrease 'level' only
when (i) it is not an empty tag without a space, i.e.
<tag/>, or (ii) it is of the format <row col="val" .../>.
FIX
===
The switch case for '/' is modified (see the sketch below).
We removed the blind decrement of 'level' and do it only
when it is not an empty tag without a space. We also set
'in_tag' to false to let the program know that we are done
reading the current tag (required in the case of the format
<row col="val" .../>).
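A simplified sketch of the modified case (read_next_char() is a
hypothetical helper; the condition paraphrases the rule above):

  case '/':                    /* close marker: </tag>, <tag/>, <row .../> */
    chr= read_next_char();
    if (chr != '>' || in_tag)  /* (i) not a without-space empty tag, or
                                  (ii) a tag like <row col="val" .../> */
      level--;
    in_tag= false;             /* done reading the current tag */
    break;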
VIEW
It appears that the code refactoring done as part of the
patch for MySQL BUG#11749859 fixed this issue. This issue
is no longer reproducible on MySQL 5.5+ versions.
As part of this patch, the test file "mysqldump.test" has
been updated to remove the comment referring to the bug
and the line which suppresses the warning.
Analysis:
==========
During JOIN::prepare of a sub-query which creates the
derived tables, we call setup_procedure. Here we call
fix_fields for the parameters of the procedure clause.
Calling setup_procedure at this point may cause issues:
if a sub-query is one of the parameters being fixed, it
might lead to complicated dependencies on the derived
tables being prepared.
SOLUTION:
==========
In 5.6, with WL#6242, we made procedure clause parameters
accept only NUM, so sub-queries are not allowed as
parameters. So in 5.5 we can block sub-queries in procedure
clause parameters (see the sketch below).
This eliminates the conflicting dependencies described above.
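A sketch of the kind of check this implies, with assumed item types
and placement (not the literal 5.5 patch):

  /* While fixing PROCEDURE clause parameters, reject anything that
     is not a numeric literal, so sub-queries can never get this far. */
  for (ORDER *p= proc_param_list; p; p= p->next)
  {
    Item::Type t= (*p->item)->type();
    if (t != Item::INT_ITEM && t != Item::REAL_ITEM &&
        t != Item::DECIMAL_ITEM)
    {
      my_error(ER_WRONG_PARAMETERS_TO_PROCEDURE, MYF(0), "ANALYSE");
      return true;
    }
  }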
of more than 45G with a key_cache_block_size of 1024 or less.
The problem was that some of the arguments to my_multi_malloc() got to be
more than 4G.
Fix:
- Introduced my_multi_malloc_large(), which can handle big regions.
- Changed the MyISAM and Aria key caches to use my_multi_malloc_large().
I didn't change the default my_multi_malloc(), as that would have been
too big a patch, and we don't allocate 4G blocks anywhere else.
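A sketch of the intended usage, assuming the conventional
my_multi_malloc() calling pattern with 64-bit sizes (the real signature
may differ in detail, e.g. by an instrumentation key; the buffer names
are hypothetical):

  /* my_multi_malloc() takes uint sizes, which overflow above 4G;
     the large variant takes 64-bit sizes (sketch). */
  uchar *cache_buf, *block_info;
  if (!my_multi_malloc_large(MYF(MY_WME),
                             &cache_buf,  (ulonglong) n_blocks * block_size,
                             &block_info, (ulonglong) n_blocks * info_size,
                             NullS))
    return 1;  /* allocation failed */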
PROBLEM
Whenever we insert into a unique secondary index, we take
shared locks on all possible duplicate records present in
the table. But during a replace on the unique secondary
index, we take exclusive locks on all the duplicate records.
When records are deleted, they are first delete-marked and
later purged by the purge thread. While purging a record we
call lock_update_delete(), which in turn calls
lock_rec_inherit_to_gap() to inherit the locks of the
deleted record. In repeatable read mode we inherit all the
locks from the record to the next record, but in read
committed mode we skip inheriting them as gap type locks. We
make an exception here: if the lock on the record is in
shared mode, we assume it was set during an insert into a
unique secondary index and needs to be inherited to stop
constraint violations.
We did not handle the case when exclusive locks are set
during a replace: we skip inheriting the locks of these
records, hence causing constraint violations.
FIX
While inheriting the locks, check whether the transaction is
allowed to do TRX_DUP_REPLACE/TRX_DUP_IGNORE; if true,
inherit the locks (see the sketch below).
[Reviewed by Jimmy #rb9709]
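A sketch of the resulting inheritance decision (simplified; the real
lock_rec_inherit_to_gap() guard combines more conditions):

  /* Decide whether a lock on the purged record is inherited as a gap
     lock on the next record (sketch). */
  bool inherit;
  if (lock->trx->isolation_level > TRX_ISO_READ_COMMITTED)
    inherit= true;                       /* repeatable read: always */
  else
    /* read committed: only locks protecting duplicate checks --
       shared locks set by INSERT, or any lock when the transaction
       does REPLACE / INSERT ... ON DUPLICATE KEY UPDATE */
    inherit= lock_get_mode(lock) == LOCK_S ||
             (lock->trx->duplicates & (TRX_DUP_REPLACE | TRX_DUP_IGNORE));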
The root cause is that x86 has a stronger memory model than ARM
processors, and the GCC builtins didn't issue the correct fences when
setting/unsetting the lock word, in particular during mutex release.
The solution is to rewrite the atomic TAS operations: replace the
'__sync_' builtins with '__atomic_' ones where possible.
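For example, a test-and-set lock word moves from the '__sync_'
builtins, whose barrier semantics are fixed and implicit, to the
'__atomic_' builtins, where the required ordering is stated
explicitly (the mutex structure here is illustrative):

  /* Before: TAS with implicit, fixed barrier semantics. */
  lock_word = __sync_lock_test_and_set(&mutex->lock_word, 1);
  /* critical section */
  __sync_lock_release(&mutex->lock_word);

  /* After: acquire on lock, release on unlock, stated explicitly. */
  lock_word = __atomic_exchange_n(&mutex->lock_word, 1, __ATOMIC_ACQUIRE);
  /* critical section */
  __atomic_store_n(&mutex->lock_word, 0, __ATOMIC_RELEASE);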
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
Reviewed-by: Bin Su <bin.x.su@oracle.com>
Reviewed-by: Debarun Banerjee <debarun.banerjee@oracle.com>
Reviewed-by: Krunal Bauskar <krunal.bauskar@oracle.com>
RB: 9782
RB: 9665
RB: 9783
send_result_set_metadata
Analysis
--------
A cursor inside a trigger accessing the NEW/OLD row leads to a server
exit.
The reason for the bug was that the implementation of the function
create_tmp_table() was not considering Item::TRIGGER_FIELD_ITEM as a
possible alternative for the type of the class being instantiated.
This resulted in a mismatch between the number of columns in the
result list and the temp table definition. This mismatch leads to the
failure of the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata in debug
mode.
Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as a valid type while
creating fields (see the sketch below).
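A sketch of the change in the item-type dispatch used when creating
temp table fields (simplified; create_field_for_item() is a
hypothetical helper and the surrounding cases are elided):

  switch (item->type()) {
  case Item::FIELD_ITEM:
  case Item::DEFAULT_VALUE_ITEM:
  case Item::TRIGGER_FIELD_ITEM:  /* the newly accepted alternative:
                                     NEW/OLD row fields in a trigger */
    /* create the column from the underlying field, exactly as for a
       plain field reference */
    field= create_field_for_item(item);
    break;
  default:
    break;                        /* other item types unchanged */
  }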