In file sql/filesort.cc, when merge_buffers() is called:
- queue_remove(&queue, 0) is called
- queue_remove() has an assertion stating that the element to be removed
  must have index >= 1
- this causes the assertion to fail.
Fixed by removing the top element.
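A self-contained sketch of the pattern (a simplified 1-based queue with illustrative names, not the actual mysys QUEUE API): removal by index asserts that the index is at least 1, so the top element has to be removed through a dedicated top-removal path.

  #include <cassert>
  #include <cstddef>
  #include <vector>

  // Simplified 1-based queue, loosely modeled on the mysys QUEUE layout
  // (slot 0 is unused; live elements occupy slots 1..n).
  struct SimpleQueue {
    std::vector<int> slots{0};  // slots[0] is a placeholder

    void remove(std::size_t idx) {
      // This is the assertion that fired: removing "element 0" is not allowed.
      assert(idx >= 1 && idx < slots.size());
      slots.erase(slots.begin() + static_cast<std::ptrdiff_t>(idx));
    }

    // The fix in spirit: remove the top element via the path meant for it.
    void remove_top() {
      assert(slots.size() > 1);
      slots.erase(slots.begin() + 1);
    }
  };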
The patch b96c196f1c added a new call to
safe_charset_converter() without a corresponding fix_fields().
In the case of a subquery, the created Item remained in a non-fixed state.
The problem did not show up with literal derived expressions; only
subselects were affected. This patch adds the corresponding fix_fields()
call for the previously added safe_charset_converter().
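A minimal sketch of the pattern, with an illustrative MiniItem class and simplified signatures (the real safe_charset_converter() and fix_fields() take THD, charset and reference arguments): once the converter Item is created, it must also be fixed before it can be evaluated.

  // Illustrative stand-in for the Item hierarchy; signatures are simplified.
  struct MiniItem {
    bool fixed = false;
    virtual ~MiniItem() = default;
    virtual bool fix_fields() { fixed = true; return false; }   // false == success
    virtual MiniItem *safe_charset_converter() { return new MiniItem(); }
  };

  // Roughly the shape of the fix: creating the converter Item is not enough,
  // it must also be fixed.
  bool convert_and_fix(MiniItem *&item)
  {
    MiniItem *conv = item->safe_charset_converter();
    if (conv == nullptr)
      return true;                          // error
    if (!conv->fixed && conv->fix_fields())
      return true;                          // error: the new Item could not be fixed
    item = conv;                            // (ownership handling omitted in this sketch)
    return false;
  }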
The bug occurred when a subquery
- has a reference to the outside, i.e. to the grandparent query or further up
- is converted to a semi-join (i.e. merged into its parent).
In that case the reference to the outside had the form Item_ref(Item_field(...)).
- Conversion to semi-join would call item->fix_after_pullout() for the
  outside reference.
- Item_ref::fix_after_pullout() would call Item_field::fix_after_pullout().
- The Item_field would construct a new Name_resolution_context object.
This process ignored the fact that the Item_field does not belong to
any of the subselects being flattened.
The result was a crash in the next call to Item_field::fix_fields(), where
we would try to use an invalid Name_resolution_context object.
Fixed by not creating a Name_resolution_context object if the Item_field's
context does not belong to the subselect(s) that were flattened.
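A simplified sketch of the guard that the fix adds; the types below are illustrative stand-ins, not the real Item_field/Name_resolution_context classes:

  #include <unordered_set>

  // Illustrative stand-ins for SELECT_LEX and Name_resolution_context.
  struct SelectLex {};
  struct NameResolutionContext {
    SelectLex *select = nullptr;
  };

  struct MiniItemField {
    NameResolutionContext *context = nullptr;

    // Called when a subquery is merged (flattened) into its parent.
    void fix_after_pullout(const std::unordered_set<SelectLex *> &flattened,
                           SelectLex *new_parent) {
      // The guard the fix adds: if the field belongs to an outer query
      // (grandparent or further up), do not rebuild its context.
      if (flattened.count(context->select) == 0)
        return;
      // Otherwise the field really moved: point its context at the new parent.
      context->select = new_parent;
    }
  };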
This change is a backport from 10.0 to 5.5 for:
1. The full patch for:
MDEV-4841 Wrong character set of ADDTIME() and DATE_ADD()
9adb6e991e
2. A small fragment of:
MDEV-5298 Illegal mix of collations on timestamp
03f6778d61
which overrides Item_temporal_hybrid_func::cmp_type(),
and adds a new line into cache_temporal_4265.result.
Because thd->update_server_status() is used to measure the query time
for the slow log (not only to set protocol-level flags),
it needs to be called even when the server isn't going to send
anything to the client.
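A minimal sketch of the idea, using illustrative names (Session, finalize_query) rather than the actual server code: the status/timing update runs on both the send and the no-send paths, so the slow-log decision always sees the real query time.

  #include <chrono>
  #include <iostream>

  // Illustrative stand-ins for THD and the slow-log check; not the server's types.
  struct Session {
    std::chrono::steady_clock::time_point query_start;
    std::chrono::microseconds query_time{0};

    // Roughly what the status update does for the slow log:
    // record how long the statement took.
    void update_server_status() {
      query_time = std::chrono::duration_cast<std::chrono::microseconds>(
          std::chrono::steady_clock::now() - query_start);
    }

    void maybe_log_slow_query(std::chrono::microseconds threshold) {
      if (query_time > threshold)
        std::cout << "slow query: " << query_time.count() << "us\n";
    }
  };

  void finalize_query(Session &s, bool will_send_result) {
    // The fix: call this unconditionally, not only on the path that
    // sends a result set / OK packet to the client.
    s.update_server_status();
    s.maybe_log_slow_query(std::chrono::microseconds(1000000));
    if (will_send_result) {
      /* ... send result to client ... */
    }
  }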
Issue:
------
While using the LOAD statement to insert data into an
updateable view, the check to verify whether a column
is actually updatable is missing.
Solution for 5.5 and 5.6:
-------------------------
For a view whose column-list is specified in the LOAD
command, this check is not performed. This fix adds the
check.
This is a partial backport of Bug#21097485.
Solution for 5.7 and trunk:
---------------------------
For a view whose column-list is specified in the LOAD
command, this check is already performed. This fix adds the
same check when no column-list is specified.
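A hedged, self-contained sketch of the kind of check involved (the types and names are illustrative, not the server's): before loading rows, every target column of the view must map to an updatable base-table column.

  #include <stdexcept>
  #include <string>
  #include <vector>

  // Illustrative view column: updatable only if it maps directly to a base
  // table column (not to an expression such as a+b or a constant).
  struct ViewColumn {
    std::string name;
    bool maps_to_base_column;
  };

  // The kind of check the fix adds: reject LOAD into a view column that
  // cannot be updated.
  void check_load_target_columns(const std::vector<ViewColumn> &targets) {
    for (const ViewColumn &col : targets) {
      if (!col.maps_to_base_column)
        throw std::runtime_error("Column '" + col.name +
                                 "' is not updatable in this view");
    }
  }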
Role names with trailing whitespace are truncated in length as of
956e92d908 to fix MDEV-8609. The problem
is that the code that creates role mappings expects the string to be
null-terminated.
Add the null terminator to account for that as well. In the future
the rest of the code can be cleaned up to never assume C-style strings
but only LEX_STRINGs.
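A small sketch of the idea, with a simplified LEX_STRING defined locally for the example: after trimming the trailing whitespace, write a terminating '\0' so code that still treats the buffer as a C string keeps working.

  #include <cctype>
  #include <cstddef>

  // Simplified local copy of LEX_STRING for illustration.
  struct LexString {
    char *str;
    std::size_t length;
  };

  // Trim trailing whitespace and keep the buffer usable as a C string.
  // Assumes the buffer has room for the terminator (as in the real fix).
  void trim_role_name(LexString &name) {
    while (name.length > 0 &&
           std::isspace(static_cast<unsigned char>(name.str[name.length - 1])))
      name.length--;
    name.str[name.length] = '\0';  // the added null terminator
  }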
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
When JOIN::destroy() is called for a JOIN object that
- has join->tmp_join != NULL
- also has join->table[0]->sort
then the latter was not cleaned up.
This could cause a memory leak and/or asserts in subsequent queries.
Fixed by adding a cleanup call.
The problem was that null_value was not set to "false" on a well-formed row.
If an ill-formed row was followed by a well-formed row, null_value remained
"true" in the call to Item::send() for the well-formed row.
Description :
=============
When an MTR test run is started, it initializes the server and creates
the datadir under the '$MYSQL_TEST_DIR/var' location ('/tmp/var' or
'/dev/shm/var' if the --mem option is used) and then copies it to the
datadir location of the server(s).
If $parallel == 1, the datadir location of the server is
'$MYSQL_TEST_DIR/var/data'. If $parallel > 1, the datadir location of any
server is '$MYSQL_TEST_DIR/var/<thread_num>/data'.
This is the reason MTR searches for the initialized datadir in 2
locations ('$opt_vardir' and '$opt_vardir/..') from the current vardir
location.
But this can cause a few problems. If a directory with the name 'data'
already exists under '$MYSQL_TEST_DIR' and the MTR run is started
with parallel value 1, then
1. The copytree($install_db, '$opt_vardir/..') command will fail if the
user doesn't have access permission to the '$MYSQL_TEST_DIR/data'
directory.
2. Unnecessary contents from the '$MYSQL_TEST_DIR/data' directory will be
copied to the server datadir location and this might affect the server
startup.
Fix :
=====
Depending on the $parallel value, decide whether the path to the
initialized datadir is "$opt_vardir" (i.e. $parallel = 1) or
"$opt_vardir/.." (i.e. $parallel > 1).
Reviewed-by: Deepa Dixit <deepa.dixit@oracle.com>
Reviewed-by: Srikanth B R <srikanth.b.r@oracle.com>
RB: 14773
be consistent and don't include the table name in the error message;
no other CREATE TABLE error does that.
(the crash happened because thd->lex->query_tables was NULL)
Due to the collation used on the roles_mapping_hash, key comparison
would work in a case-insensitive manner. This is incorrect from the
roles mapping perspective. Make use of a case-sensitive collation for that hash,
the same one used for the acl_roles hash.
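A self-contained illustration of the difference, using std::unordered_map with a custom hash/equality rather than the server's HASH/collation machinery: with case-insensitive key comparison 'Role1' and 'role1' collide, while case-sensitive comparison keeps them distinct.

  #include <cctype>
  #include <cstddef>
  #include <string>
  #include <unordered_map>

  // Case-insensitive hashing/equality: "Role1" and "role1" map to one entry.
  struct CiHash {
    std::size_t operator()(std::string s) const {
      for (char &c : s)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
      return std::hash<std::string>{}(s);
    }
  };
  struct CiEqual {
    bool operator()(const std::string &a, const std::string &b) const {
      if (a.size() != b.size()) return false;
      for (std::size_t i = 0; i < a.size(); i++)
        if (std::tolower(static_cast<unsigned char>(a[i])) !=
            std::tolower(static_cast<unsigned char>(b[i])))
          return false;
      return true;
    }
  };

  int main() {
    std::unordered_map<std::string, int, CiHash, CiEqual> ci_map;  // wrong for role names
    std::unordered_map<std::string, int> cs_map;                   // case-sensitive, as intended
    ci_map["Role1"] = 1; ci_map["role1"] = 2;  // one entry: the 2nd overwrites the 1st
    cs_map["Role1"] = 1; cs_map["role1"] = 2;  // two distinct entries
    return ci_map.size() == 1 && cs_map.size() == 2 ? 0 : 1;
  }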
The function Item_func_isnull::update_used_tables() must
handle the case when the predicate is over a non-nullable
column in a special way.
This is actually a bug in MariaDB 5.3/5.5, but it's probably
hard to demonstrate that it can cause problems there.
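A sketch of the special case with illustrative types (not the server's Item classes): IS NULL over a column that can never be NULL does not depend on the table at all, so it can be treated as a constant predicate.

  #include <cstdint>

  using table_map = std::uint64_t;

  // Illustrative column descriptor and IS NULL predicate.
  struct Column {
    table_map owning_table;
    bool nullable;
  };

  struct IsNullPredicate {
    Column col;
    table_map used_tables = 0;
    bool is_constant = false;

    // The special case: IS NULL over a column that can never be NULL does not
    // really depend on the table; it is a constant (always FALSE) predicate.
    void update_used_tables() {
      if (!col.nullable) {
        used_tables = 0;
        is_constant = true;
      } else {
        used_tables = col.owning_table;
        is_constant = false;
      }
    }
  };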
Partially backporting MDEV-9874 from 10.2 to 10.0
READ_INFO::read_field() raised the ER_INVALID_CHARACTER_STRING error
when reading an escape character followed by a multi-byte character.
Raising well-formedness errors in READ_INFO::read_field() was wrong,
because the main goal of READ_INFO::read_field() is to *unescape* the
data which was presumably escaped using mysql_real_escape_string(),
using the same character set as the one specified in
"LOAD DATA INFILE ... CHARACTER SET ..." (or assumed by default).
During LOAD DATA, multi-byte characters are not always scanned as a single
entity! In the case of escaped data, parts of a multi-byte character can be
scanned on different loop iterations. So the old code erroneously tested
well-formedness in the middle of a multi-byte character.
Moreover, the data after unescaping can go into a BLOB field, not a text field.
Well-formedness tests are meaningless in this case.
After this patch, well-formedness is only checked later, at
Field::store(str, length, cs) time. The loop that scans bytes only
makes sure to revert the changes made by mysql_real_escape_string().
Note, in some cases users can supply data which did not really go through
mysql_real_escape_string() and was escaped by some other means,
or was not escaped at all. The file reported in this MDEV contains
the string "\ä", which is an example of such improperly escaped data, as
- either there should be two backslashes: "\\ä"
- or there should be no backslashes at all: "ä"
mysql_real_escape_string() could not generate "\ä".
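A simplified, stand-alone sketch of the unescaping idea (not the actual READ_INFO code, and only a subset of the escapes): the loop merely reverses mysql_real_escape_string()-style escaping byte by byte and never tries to validate multi-byte sequences; that validation is left to the later Field::store() step.

  #include <string>

  // Reverse common escape sequences byte by byte. No character-set validation
  // happens here: a backslash may be followed by the first byte of a multi-byte
  // character, and that is fine at this stage.
  std::string unescape(const std::string &in) {
    std::string out;
    for (std::string::size_type i = 0; i < in.size(); i++) {
      if (in[i] != '\\' || i + 1 == in.size()) {
        out += in[i];
        continue;
      }
      switch (in[++i]) {
        case '0': out += '\0'; break;
        case 'n': out += '\n'; break;
        case 'r': out += '\r'; break;
        case 't': out += '\t'; break;
        case 'Z': out += '\032'; break;
        default:  out += in[i]; break;  // "\\", "\'", "\"", or an unknown escape: keep the byte
      }
    }
    return out;
  }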
The flag TABLE_LIST::fill_me must be reset to false at the prepare
phase for any materialized derived table used in the executed query.
Otherwise, if the optimizer decides to generate a key for such a table,
it is generated only for the first execution of the query.
MDEV-10780 Server crashes in create_tmp_table
MDEV-11265 Access denied when CREATE VIEW v1 AS SELECT DEFAULT(column) FROM t1
Item_default_value and Item_insert_value erroneously derive from Item_field
but do not override some methods that apply only to true fields,
so in some cases the server code mixes Item_{default|insert}_value instances
with real table fields (i.e. true Item_field).
This patch overrides a few methods to avoid this.
TODO: we should eventually derive Item_default_value (and Item_insert_value)
directly from Item, as they don't really need the entire Item_field,
Item_ident and Item_result_field functionality.
Only the functionality related to the member "Field *field" is actually needed,
like val_xxx(), is_null(), get_geometry_type(), charset_for_protocol(), etc.
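A heavily simplified sketch of the class-hierarchy issue; the classes and the overridden method below are illustrative, not the actual Item API: code that asks "is this a real table column?" must get a different answer from the derived value items.

  #include <iostream>

  // Illustrative hierarchy: DefaultValueItem reuses FieldItem's machinery but
  // must not be treated as a direct reference to a table column.
  struct ItemBase {
    virtual ~ItemBase() = default;
    // Hypothetical query: does this item directly represent a table column?
    virtual bool is_real_table_field() const { return false; }
  };

  struct FieldItem : ItemBase {
    bool is_real_table_field() const override { return true; }
  };

  // DEFAULT(col) / VALUES(col): derives from FieldItem for convenience; a
  // missing override like this is what lets code confuse it with a real column.
  struct DefaultValueItem : FieldItem {
    bool is_real_table_field() const override { return false; }  // the kind of override the fix adds
  };

  int main() {
    FieldItem f;
    DefaultValueItem d;
    std::cout << f.is_real_table_field() << " " << d.is_real_table_field() << "\n";  // prints: 1 0
  }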
This occurred when the SQL thread (but not the IO thread) stops while
GTID and parallel replication are used with multiple domain ids in the
GTID position, and is restarted.
In this case, the SQL thread needs to start some way back in the relay log,
applying or skipping events within each replication domain as
appropriate.
The SQL thread starts at the beginning of an old relay log file, and
this position may be in the middle of an event group. The bug was that
such a partial event group could be re-applied, causing replication
corruption.
This patch fixes the issue, by making sure to skip any initial events
that were part of an earlier (already applied) event group.
Revert "BUG#23080148 - BACKPORT BUG 14653594 AND BUG 20683959 TO
LOAD DATA AT MASTER."
This reverts commit 1d31f5b3090d129382b50b95512f2f79305715a1.
The commit causes replication incompatibility between minor revisions
and, based on a discussion with Srinivasarao, the patch is reverted.
In the function create_key_parts_for_pseudo_indexes()
the key part structures of pseudo-indexes created for
BLOB fields were set incorrectly.
Also, the key parts for long fields must be 'truncated'
to the maximum length acceptable for key parts.
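A small illustrative sketch of the truncation rule (the constant and struct are placeholders, not the server's definitions): a pseudo-index key part over a long or BLOB column is capped at a maximum key-part length.

  #include <algorithm>
  #include <cstdint>

  // Placeholder limit; the real maximum depends on the storage engine/format.
  constexpr std::uint32_t kMaxKeyPartLength = 1024;

  struct PseudoKeyPart {
    std::uint32_t length;  // number of column prefix bytes used by the pseudo-index
  };

  // Cap the key part length for long (e.g. BLOB/TEXT) columns.
  PseudoKeyPart make_pseudo_key_part(std::uint32_t field_length) {
    return PseudoKeyPart{std::min(field_length, kMaxKeyPartLength)};
  }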
1. When a min/max value is provided, the null flag for it must be set to 0
in the bitmap Column_statistics::column_stat_nulls.
2. When the calculation of the selectivity of a range condition
over a column requires the min and max values for the column, we
have to check that these values are provided.
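A compact sketch of both points, in plain C++ with illustrative names rather than the server's statistics structures: clearing the "missing" bit marks the min/max value as present, and the selectivity code checks presence before using the values.

  #include <cstdint>
  #include <optional>

  // Illustrative per-column statistics: one "value is missing" bit per statistic.
  struct ColumnStats {
    enum StatBit : std::uint8_t { MIN_MISSING = 1 << 0, MAX_MISSING = 1 << 1 };
    std::uint8_t null_bits = MIN_MISSING | MAX_MISSING;
    double min_value = 0, max_value = 0;

    void set_min(double v) { min_value = v; null_bits &= ~MIN_MISSING; }  // point 1
    void set_max(double v) { max_value = v; null_bits &= ~MAX_MISSING; }

    // Point 2: only compute range selectivity when both values are available.
    std::optional<double> range_selectivity(double lo, double hi) const {
      if (null_bits & (MIN_MISSING | MAX_MISSING)) return std::nullopt;
      if (max_value <= min_value) return std::nullopt;
      double sel = (hi - lo) / (max_value - min_value);
      return sel < 0 ? 0 : (sel > 1 ? 1 : sel);
    }
  };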