The problem was with deleting a non-existent .frm file for a storage engine that
doesn't have .frm files (yet).
Fixed by not giving an error for non-existent .frm files for storage engines
that use discovery.
Also fixed a valgrind suppression related to the given test case.
Added plugin-dir to the [client] section of the generated my.ini,
so that installed services (MSI or mysql_install_db.exe) would be able to
find the plugin directory in a default installation.
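For illustration, the generated [client] section could look roughly like this
(the actual plugin-dir path depends on the chosen installation directory; the
path below is only an example):

    [client]
    plugin-dir=C:/Program Files/MariaDB/lib/plugin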
Cherry-pick: f4a0af070ce49abae60040f6f32e1074309c27fb
Author: Dmitry Lenev <dmitry.lenev@oracle.com>
Date: Mon Jul 25 16:06:52 2016 +0300
Fix for bug #16672723 "CAN'T FIND TEMPORARY TABLE".
An attempt to execute a prepared CREATE TABLE SELECT statement which used a
temporary table in a subquery in the FROM clause and a stored function
failed with an unwarranted ER_NO_SUCH_TABLE error. The same happened
when such a statement was used in a stored procedure and this procedure
was re-executed.
The problem occurred because execution of such a prepared statement (or its
re-execution as part of a stored procedure) incorrectly set the
Query_table_list::query_tables_own_last marker, which indicates the last
table directly used by the statement. As a result, the temporary table
used in the subquery was treated as indirectly used (belonging to the
prelocking list) and was not pre-opened by the open_temporary_tables()
call before statement execution. This caused ER_NO_SUCH_TABLE errors,
since our code assumes that temporary tables are correctly
pre-opened before statement execution.
This problem became visible only in version 5.6 after patches related to
bug 11746602/27480 "EXTEND CREATE TEMPORARY TABLES PRIVILEGE TO ALLOW
TEMP TABLE OPERATIONS", since they introduced pre-opening of temporary
tables for statements.
The incorrect setting of Query_table_list::query_tables_own_last happened
in the LEX::first_lists_tables_same() method, which is called by the CREATE TABLE
SELECT implementation as part of LEX::unlink_first_table(), which temporarily
excludes the table list element for the table being created from the query table
list before handling the SELECT part.
LEX::first_lists_tables_same() tries to ensure that the global table list of
the statement starts with the first table list element from the statement's
first select. To do this it moves that table list element to the head
of the global table list. If this table happens to be the last directly-used
table for the statement, the query_tables_own_last marker points to it.
Since this marker was not updated when the table list element was moved, all
tables except the first ended up on the wrong side of it, as if
they were not directly used by the statement (i.e. belonged to the prelocked
tables list).
This fix changes LEX::first_lists_tables_same() to update the
query_tables_own_last marker when it points to the table
being moved. In this case it is set to the table which precedes the table
being moved.
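The list manipulation at the heart of this can be sketched with a small
self-contained example (plain C++, not the actual TABLE_LIST/LEX code; names
are illustrative): a marker designating the last directly-used element must be
retargeted when that element is moved to the head of the list, otherwise the
"directly used" part of the list shrinks to just the moved element.

    #include <cassert>
    #include <vector>

    struct Node { const char *name; Node *next; };

    // Elements considered "directly used": from head up to and including *own_last;
    // everything after own_last plays the role of the prelocking set.
    static std::vector<const char*> own_tables(Node *head, Node *own_last)
    {
      std::vector<const char*> out;
      for (Node *n= head; n; n= n->next)
      {
        out.push_back(n->name);
        if (n == own_last)
          break;
      }
      return out;
    }

    // Move 'tab' (not the current head) to the head of the list and, as the fix
    // does, retarget the marker to the preceding element if it pointed to 'tab'.
    static void move_to_head(Node *&head, Node *tab, Node *&own_last)
    {
      Node *prev= head;
      while (prev->next != tab)
        prev= prev->next;
      prev->next= tab->next;
      tab->next= head;
      head= tab;
      if (own_last == tab)
        own_last= prev;               // without this, only 'tab' stays "directly used"
    }

    int main()
    {
      Node t3= { "t3_prelocked", nullptr };
      Node t2= { "t2", &t3 };
      Node t1= { "t1", &t2 };
      Node *head= &t1;
      Node *own_last= &t2;            // t1, t2 directly used; t3 prelocking-only

      move_to_head(head, &t2, own_last);

      assert(own_tables(head, own_last).size() == 2);   // still t2 and t1
      return 0;
    }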
Make the slave SQL thread always output to the error log the message "Slave
SQL thread exiting, replication stopped in ..." whenever it previously
output "Slave SQL thread initialized, starting replication ...".
Before this patch, it was somewhat inconsistent in which cases the message
would be output and in which not, depending on the exact time and cause of
the condition that caused the SQL thread to stop.
The fix for bug MDEV-5104 did not take into account that
for any call of setup_order() the size of ref_array must
be big enough. This patch fixes this problem.
Problem: In replication, if the slave has extra persistent columns, these
columns are not computed while applying the write-set from the master.
Solution: While applying row events from the master, we will generate values
for the extra persistent columns.
In sql/filesort.cc, when merge_buffers() is called:
- queue_remove(&queue, 0) is called
- queue_remove() has an assertion stating that the element to be removed must have index >= 1
- this causes the assertion to fail.
Fixed by removing the top element instead.
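The invariant behind that assertion can be shown with a small self-contained
sketch (plain C++, not the mysys QUEUE code, which differs in detail): the
element slots start at index 1, so the top element is removed by asking for
index 1, never for index 0.

    #include <cassert>
    #include <vector>

    struct ToyQueue
    {
      std::vector<int> root{0};                // root[0] unused: slots are 1-based

      void push(int v) { root.push_back(v); }  // heap ordering omitted for brevity
      size_t elements() const { return root.size() - 1; }
      int top() const { return root[1]; }

      void remove(size_t idx)
      {
        assert(idx >= 1 && idx <= elements()); // remove(0) trips this assertion
        root[idx]= root.back();
        root.pop_back();                       // re-heapify omitted for brevity
      }
      void remove_top() { remove(1); }         // "removing the top element"
    };

    int main()
    {
      ToyQueue q;
      q.push(7);
      q.push(3);
      q.remove_top();                          // OK
      // q.remove(0);                          // would fail the assertion
      assert(q.elements() == 1);
      return 0;
    }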
The patch b96c196f1c added a new call to
safe_charset_converter() without a corresponding fix_fields().
In the case of a sub-query the created Item remained in a non-fixed state.
The problem did not show up with literal derived expressions; only
subselects were affected. This patch adds the corresponding fix_fields()
call for the previously added safe_charset_converter().
The bug occurred when a subquery
- has a reference to the outside, to the grand-parent query or further up
- is converted to a semi-join (i.e. merged into its parent).
Then the reference to the outside had the form Item_ref(Item_field(...)).
- Conversion to a semi-join would call item->fix_after_pullout() for the
outside reference.
- Item_ref::fix_after_pullout() would call Item_field::fix_after_pullout()
- The Item_field would construct a new Name_resolution_context object
This process ignored the fact that the Item_field does not belong to
any of the subselects being flattened.
The result was a crash in the next call to Item_field::fix_fields(), where
we would try to use an invalid Name_resolution_context object.
Fixed by not creating a Name_resolution_context object if the Item_field's
context does not belong to the subselect(s) that were flattened.
This change is a backport from 10.0 to 5.5 for:
1. The full patch for:
MDEV-4841 Wrong character set of ADDTIME() and DATE_ADD()
9adb6e991e
2. A small fragment of:
MDEV-5298 Illegal mix of collations on timestamp
03f6778d61
which overrides Item_temporal_hybrid_func::cmp_type(),
and adds a new line into cache_temporal_4265.result.
Because thd->update_server_status() is used to measure the query time
for the slow log (not only to set protocol-level flags),
it needs to be called also when the server isn't going to send
anything to the client.
Role names with trailing whitespace are truncated in length as of
956e92d908 to fix MDEV-8609. The problem
is that the code that creates role mappings expects the string to be
null-terminated.
Add the null terminator to account for that as well. In the future
the rest of the code can be cleaned up to never assume C-style strings
but only LEX_STRINGs.
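For illustration only (plain C++, not the ACL code): a length-counted string
that is truncated by adjusting its length still looks untruncated to any
consumer that relies on a '\0' terminator, unless the terminator is written
as well.

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    // Toy stand-in for a LEX_STRING-style length-counted string.
    struct LexStr { char *str; std::size_t length; };

    int main()
    {
      char buf[]= "role1   ";            // role name with trailing whitespace
      LexStr name= { buf, sizeof(buf) - 1 };

      // Truncate the trailing whitespace by adjusting the length only.
      while (name.length && name.str[name.length - 1] == ' ')
        name.length--;

      assert(name.length == 5);              // length-aware code sees "role1"
      assert(std::strlen(name.str) == 8);    // C-string consumers still see "role1   "

      name.str[name.length]= '\0';           // the added terminator fixes the mismatch
      assert(std::strlen(name.str) == 5);
      return 0;
    }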
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
This reverts commit 035a5ac62a.
Two minor problems and one regression:
1. Caching the value in str_result. Other Item methods may use it,
destroying the cache. See, for example, Item::save_in_field, where
str_result is moved to use a local buffer (this failed main.grant).
2. Item_func_conv_charset::safe is now set too late: it is initialized
only in val_str() but checked before that; this failed many tests
in optimized builds.
To fix 1, use tmp_result instead of str_result; to fix 2, use
the else branch in the Item_func_conv_charset constructor to set
safe purely from charset properties.
But this introduces a regression: constant strings can no longer be
converted, say, from utf8 to latin1 (because 'safe' will be false).
This fails a few tests too. There is no way to fix it without reverting
the commit and converting constants, as before, in the constructor.
When JOIN::destroy() was called for a JOIN object that
- had join->tmp_join != NULL
- and also had join->table[0]->sort
the latter was not cleaned up.
This could cause a memory leak and/or asserts in subsequent queries.
Fixed by adding a cleanup call.
The problem was that null_value was not set to "false" on a well-formed row.
If an ill-formed row was followed by a well-formed row, null_value remained
"true" in the call of Item::send() for the well-formed row.
Be consistent and don't include the table name in the error message;
no other CREATE TABLE error does that.
(The crash happened because thd->lex->query_tables was NULL.)
Due to the collation used on the roles_mapping_hash, key comparison
would work in a case-insensitive manner. This is incorrect from the
roles mapping perspective. Make use of a case-sensitive collation for that hash,
the same one used for the acl_roles hash.
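A small self-contained illustration of the difference (plain C++ hash maps,
not the actual HASH/collation code): with case-insensitive key comparison,
role names differing only in case collapse into a single entry.

    #include <cassert>
    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <unordered_map>

    // Case-insensitive hash and equality, standing in for a case-insensitive collation.
    struct CiHash {
      std::size_t operator()(const std::string &s) const {
        std::size_t h= 0;
        for (unsigned char c : s) h= h * 131 + std::tolower(c);
        return h;
      }
    };
    struct CiEq {
      bool operator()(const std::string &a, const std::string &b) const {
        if (a.size() != b.size()) return false;
        for (std::size_t i= 0; i < a.size(); i++)
          if (std::tolower((unsigned char) a[i]) != std::tolower((unsigned char) b[i]))
            return false;
        return true;
      }
    };

    int main()
    {
      std::unordered_map<std::string, int, CiHash, CiEq> ci;  // like the old hash
      ci["admin"]= 1;
      ci["ADMIN"]= 2;                 // silently reuses the "admin" entry
      assert(ci.size() == 1);

      std::unordered_map<std::string, int> cs;                // case-sensitive keys
      cs["admin"]= 1;
      cs["ADMIN"]= 2;                 // two distinct role names, as intended
      assert(cs.size() == 2);
      return 0;
    }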
The function Item_func_isnull::update_used_tables() must
handle the case when the predicate is over a non-nullable
column in a special way.
This is actually a bug in MariaDB 5.3/5.5, but it's probably
hard to demonstrate that it can cause problems there.
Partially backporting MDEV-9874 from 10.2 to 10.0
READ_INFO::read_field() raised the ER_INVALID_CHARACTER_STRING error
when reading an escape character followed by a multi-byte character.
Raising well-formedness errors in READ_INFO::read_field() was wrong,
because the main goal of READ_INFO::read_field() is to *unescape* the
data which was presumably escaped using mysql_real_escape_string(),
using the same character set as the one specified in
"LOAD DATA INFILE ... CHARACTER SET ..." (or assumed by default).
During LOAD DATA, multi-byte characters are not always scanned as a single
entity! In case of escaped data, parts of a multi-byte character can be
scanned on different loop iterations. So the old code erroneously tested
well-formedness in the middle of a multi-byte character.
Moreover, the data after unescaping can go into a BLOB field, not a text field.
Well-formedness tests are meaningless in this case.
After this patch, well-formedness is only checked later, at
Field::store(str,length,cs) time. The loop that scans bytes only
makes sure to revert the changes made by mysql_real_escape_string().
Note, in some cases users can supply data which did not really go through
mysql_real_escape_string() and was escaped by some other means,
or was not escaped at all. The file reported in this MDEV contains
the string "\ä", which is an example of such improperly escaped data, as
- either there should be two backslashes: "\\ä"
- or there should be no backslashes at all: "ä"
mysql_real_escape_string() could not generate "\ä".
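For illustration, a minimal self-contained sketch of the unescaping-only
approach (plain C++, not the READ_INFO code, and covering only a few escape
sequences): the loop just reverses the backslash escaping; whether the
resulting bytes form valid characters is decided later, by Field::store().

    #include <cassert>
    #include <cstddef>
    #include <string>

    // Reverse backslash escaping of the kind mysql_real_escape_string() produces;
    // any other "\x" collapses to the literal byte x, with no validity checks.
    static std::string unescape(const std::string &in)
    {
      std::string out;
      for (std::size_t i= 0; i < in.size(); i++)
      {
        if (in[i] != '\\' || i + 1 == in.size())
        {
          out+= in[i];
          continue;
        }
        switch (char c= in[++i])
        {
        case '0': out+= '\0';   break;
        case 'n': out+= '\n';   break;
        case 't': out+= '\t';   break;
        case 'Z': out+= '\032'; break;
        default:  out+= c;      break;  // "\\" -> "\", "\<byte>" -> "<byte>"
        }
      }
      return out;
    }

    int main()
    {
      // Properly escaped "\ä" arrives in the file as "\\ä" and unescapes back.
      assert(unescape("\\\\\xC3\xA4") == "\\\xC3\xA4");
      // The improperly escaped "\ä" from the report just loses its backslash;
      // no error is raised here, and Field::store() judges the result later.
      assert(unescape("\\\xC3\xA4") == "\xC3\xA4");
      return 0;
    }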
The crash happened if my_error() was called for any reason during loading
(e.g. a bad multi-byte sequence or a bad GEOMETRY value was found).
The server sent both error and progress packets, so the client disconnected.
The server then crashed on an assertion about wrong packet order in debug builds.
The server also tried to read from a closed socket when calling
READ_INFO::skip_data_till_eof().
As the crash happened only with "mysql" running in interactive mode,
no tests are possible. The problem was not reproducible with
"mysqltest" or "mysql" in batch mode.
The SQL thread keeps track of the position in the current relay log from
which to read the next event. This position is not normally used, but a
certain interaction with the IO thread can cause the SQL thread to re-open
the relay log and seek to the stored position.
In parallel replication, there were a couple of places where the position
was not updated. This created a race where a re-open of the relay log could
seek to the wrong position and start re-reading and processing events
already handled once, causing various kinds of problems.
Fix this by moving the position update into a single place in
apply_event_and_update_pos(), which should ensure that the position is
always updated in the parallel replication case.
This problem was found from the testcase of MDEV-10863, but it is logically
a separate problem.
This has no functional changes, but it helps avoid merge problems from 10.0
to 10.1. In 10.0, code that checks for parallel replication uses
opt_slave_parallel_threads > 0, but this check needs to be
mi->using_parallel() in 10.1. By using the same check in 10.0 (with
unchanged semantics), merge problems to 10.1 are avoided.
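A minimal sketch of the idea (plain C++, not the real Master_info class; names
and members simplified): give 10.0 a helper spelled like the 10.1 check, so
call sites read identically in both branches.

    #include <cassert>

    static unsigned long opt_slave_parallel_threads= 0;  // server option (simplified)

    struct MasterInfoSketch
    {
      // 10.1 spells the check mi->using_parallel(); in 10.0 the same helper can
      // simply wrap the opt_slave_parallel_threads > 0 test used today.
      bool using_parallel() const { return opt_slave_parallel_threads > 0; }
    };

    int main()
    {
      MasterInfoSketch mi;
      assert(!mi.using_parallel());
      opt_slave_parallel_threads= 4;
      assert(mi.using_parallel());
      return 0;
    }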