extra2_read_len resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc.
Removed the identical implementation in unireg.h
(ref: bfed2c7d57)
commit '6de482a6fefac0c21daf33ed465644151cdf879f'
10.3 no longer errors in truncate_notembedded.test,
but per the comments, a non-crash is all that we are after.
Whenever possible, partitioning should use the full
partition plugin name, not the one-byte legacy code.
Normally, ha_partition can get the engine plugin from
table_share->default_part_plugin.
But in some cases, e.g. in DROP TABLE, the table isn't
opened, table_share is NULL, and ha_partition has to parse
the frm, much like dd_frm_type() does.
temporary_tables.cc, sql_table.cc:
When dropping a table, it must be deleted in the engine
first, then the frm file, because the frm can be the only true
source of metadata that the engine might need for DROP.
table.cc:
when opening a partitioned table, if the engine for
the partitions is not found, do not fall back to MyISAM.
not every index-using plan sets bits in table->quick_keys.
QUICK_ROR_INTERSECT_SELECT, for example, doesn't.
Use the fact that select->quick is set instead.
Also allow EXPLAIN to work.
- Server incorrectly downgraded the MDL after the prepare phase when
the table is empty. mdl_exclusive_after_prepare is set in the
prepare phase only, but the mdl_exclusive_after_prepare condition was
misplaced and checked before the prepare phase by
commit d270525dfd and it is now
changed to check after the prepare phase.
- main.innodb_mysql_sync test case was changed to avoid the locking
optimization when the table is empty.
The following could happen if InnoDB did some background work while
the test was running:
@@ -5,6 +5,8 @@
SELECT LOCK_MODE, LOCK_TYPE, TABLE_SCHEMA, TABLE_NAME FROM information_schema.metadata_lock_info;
LOCK_MODE LOCK_TYPE TABLE_SCHEMA TABLE_NAME
MDL_SHARED_HIGH_PRIO Table metadata lock test t1
+MDL_SHARED Table metadata lock mysql innodb_table_stats
+MDL_SHARED Table metadata lock mysql innodb_index_stat
The sys_var class has the deprecation_substitute member to mark
deprecated variables. When it is set, the server produces warnings when
these variables are used. However, plugins had no means to utilize
that functionality.
So, the PLUGIN_VAR_DEPRECATED flag is introduced; it sets the
deprecation_substitute to an empty string. A non-empty string could
make the warning more informative, but there is no nice way to
specify it, and it is not needed at the moment.
Added the ability to disable/enable (--disable_view_protocol/--enable_view_protocol) view-protocol in tests.
When the option "--disable_view_protocol" is used, util connections are closed.
Added a new test for checking view-protocol.
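A minimal usage sketch in a test file (hypothetical test content; the statement is only a placeholder for a query one does not want wrapped by the view protocol):

  --disable_view_protocol
  # queries in this section are run as-is, not through a temporary view
  SELECT 1;
  --enable_view_protocol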
The problem was that the "group_min_max optimization" does not work if
some aggregate functions, like COUNT(*), are used.
The function get_best_group_min_max() is using the join->sum_funcs
array to check which aggregate functions are used.
The bug was that aggregates in HAVING were not yet added to
join->sum_funcs at the time get_best_group_min_max() was called.
Fixed by populating join->sum_funcs already in prepare, which means that
all sum functions will be in join->sum_funcs when get_best_group_min_max() is called.
A benefit of this approach is that we can remove several calls to
make_sum_func_list() from the code and simplify the function.
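As an illustration (a hypothetical query shape, not the exact MDEV test case), an aggregate that appears only in HAVING must be visible in join->sum_funcs so that get_best_group_min_max() can reject the loose index scan:

  CREATE TABLE t1 (a INT, b INT, KEY(a,b));
  INSERT INTO t1 VALUES (1,1),(1,2),(2,1);
  # COUNT(*) appears only in HAVING, yet it rules out the
  # MIN/MAX-only group_min_max (loose index scan) plan
  SELECT a, MIN(b) FROM t1 GROUP BY a HAVING COUNT(*) > 1;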
I removed some wrong setting of 'sort_and_group'.
This variable is set when alloc_group_fields() is called, as part
of allocating the cache needed by end_send_group() and does not need
to be set by other functions.
One problematic thing was that Spider is using *join->sum_funcs to detect
at which stage the optimizer is and do internal calculations of aggregate
functions. Updating join->sum_funcs early caused Spider to fail when trying
to find min/max values in opt_sum_query().
Fixed by temporarily resetting sum_funcs during opt_sum_query().
Reviewer: Sergei Petrunia
The problem was that get_best_group_min_max() did not check if fields used
by the "group_min_max optimization" were used in subqueries.
Because of this, it did not detect that a key (b,a) was used in the WHERE
clause for the statement:
SELECT DISTINCT b FROM t1 WHERE EXISTS ( SELECT 1 FROM DUAL WHERE a > 1 ).
Fixed by also traversing the subqueries when checking if a field is used.
This disables the group_min_max optimization for the above query.
Reviewer: Sergei Petrunia
MENT-328 wrongly assumed that the backup failed because of warnings from
mariabackup about files that were not found. This is normal (and the error
message should be deleted).
randgen failed because mariabackup didn't retry BACKUP STAGE BLOCK DDL
if it failed with a deadlock.
To simplify things, I implemented the retry loop in the server as
this particular deadlock should be quickly resolved.
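For reference, the backup is driven through the BACKUP STAGE statements roughly in the order sketched below (an outline, not mariabackup's literal SQL); BLOCK_DDL is the stage that could fail with a deadlock and is now retried on the server side:

  BACKUP STAGE START;
  BACKUP STAGE FLUSH;
  BACKUP STAGE BLOCK_DDL;      # the stage that could hit the deadlock
  BACKUP STAGE BLOCK_COMMIT;
  BACKUP STAGE END;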
when it's run directly after main.mysql_json_mysql_upgrade
because mysqld--help-aria starts a second mysqld that reads the plugin
table, so it has to be flushed and closed at that time.
Item::save_str_in_field() passes &Item::str_value as a parameter
to val_str().
Item_func::make_empty_result() also fills and returns str_value.
As a result, in the reported scenario in
Item_func::val_str_from_val_str_ascii()
both "str" and "res" pointed to Item::str_value,
which made the DBUG_ASSERT inside String::copy()
(preventing copying to itself) crash:
if ((null_value= str->copy(res->ptr(), res->length(),
&my_charset_latin1, collation.collation,
&errors)))
Fix:
- Adding a String* parameter to make_empty_result()
- Passing the val_str() parameter to make_empty_result().
The Item_in_subselect::in_strategy keeps its value, and as the error
happens the condition isn't modified. That leads to wrong ::fix_fields
execution on the second PS run. Also, the select->table_list is merged
but not restored if an error happens, which causes hanging loops on
the third PS execution.
This bug may affect queries that use a grouping derived table with a
grouping list containing references to columns from different tables if
the optimizer decides to employ the split optimization for the derived
table. In some very specific cases it may affect queries with a grouping
derived table that refers to only one base table.
This bug was caused by an improper fix for the bug MDEV-25128. The fix
tried to get rid of the equality conditions pushed into the where clause
of the grouping derived table T to which the split optimization had been
applied. The fix erroneously assumed that only those pushed equalities
that were used for ref access of the tables referenced by T were needed.
In fact the function remove_const() that figures out what columns from the
group list can be removed if the split optimization is applied can use
other pushed equalities as well.
This patch actually provides a proper fix for MDEV-25128. Rather than
trying to remove invalid pushed equalities referencing the fields of SJM
tables with a look-up access, the patch attempts not to push such equalities.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The 10.5 version of the patch.
Removing DEFAULT from INFORMATION_SCHEMA columns.
DEFAULT in read-only tables is rather meaningless.
Upgrade should go smoothly.
Also fixes:
MDEV-20254 Problems with EMPTY_STRING_IS_NULL and I_S tables
Hybrid functions (IF, COALESCE, etc) did not preserve the JSON property
of their arguments. The same problem was repeatable for single-row subselects.
The problem happened because the method Item::is_json_type() was inconsistently
implemented across the Item hierarchy. For example, Item_hybrid_func
and Item_singlerow_subselect did not override is_json_type().
Solution:
- Removing Item::is_json_type()
- Implementing specific JSON type handlers:
Type_handler_string_json
Type_handler_varchar_json
Type_handler_tiny_blob_json
Type_handler_blob_json
Type_handler_medium_blob_json
Type_handler_long_blob_json
- Reusing the existing data type infrastructure to pass JSON
type handlers across all item types, including classes Item_hybrid_func
and Item_singlerow_subselect. Note, these two classes themselves do not
need any changes!
- Extending the data type infrastructure so data types can inherit
their properties (e.g. aggregation rules) from their base data types.
E.g. VARCHAR/JSON acts as VARCHAR, LONGTEXT/JSON acts as LONGTEXT
when mixed with a non-JSON data type. This is done by:
- adding virtual method Type_handler::type_handler_base()
- adding a helper class Type_handler_pair
- refactoring Type_handler_hybrid_field_type methods
aggregate_for_result(), aggregate_for_min_max(),
aggregate_for_num_op() to use Type_handler_pair.
This change also fixes:
MDEV-27361 Hybrid functions with JSON arguments do not send format metadata
Also, adding mtr tests for JSON replication. It was not covered before,
and the current patch changes the replication code slightly.
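As an illustration (a hypothetical example, not the MDEV test case), hybrid functions over JSON arguments should now keep the JSON property of the result:

  CREATE TABLE t1 (j JSON);
  INSERT INTO t1 VALUES (JSON_OBJECT('a', 1));
  # COALESCE/IF results are now still treated as JSON,
  # including the format metadata sent to the client
  SELECT COALESCE(j, '{}') AS c, IF(1, j, NULL) AS i FROM t1;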
The problem affected queries in form:
SELECT FROM (SELECT where Split Materialized is applicable) WHERE 1=0
The problem was caused by this:
- The select in derived table uses two-phase optimization (due to a
possible Split Materialized).
- The primary select has "Impossible where" and so it short-cuts its
optimization.
- The optimization for the SELECT in the derived table is never finished,
and the EXPLAIN data structure has a dangling pointer to select #2.
Fixed with this: make JOIN::optimize_stage2() invoke optimization of
derived tables when it is handling a degenerate JOIN with zero tables.
We will not execute the derived tables but we need their query plans
for [SHOW] EXPLAIN.
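A hypothetical query of the affected shape (illustration only):

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  CREATE TABLE t2 (a INT, KEY(a));
  # the derived table is a Split Materialized candidate,
  # while the outer WHERE is an "Impossible WHERE"
  EXPLAIN
  SELECT * FROM t2, (SELECT a, COUNT(*) AS cnt FROM t1 GROUP BY a) dt
  WHERE t2.a = dt.a AND 1 = 0;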
The issue was that calc_cond_selectivity_for_table() preferred ranges with
many key parts when deciding which selectivity to use.
Fixed by going through the ranges according to the number of rows in the range.
This ensures that selectivity from ranges with few rows will be preferred
over ranges with many rows for indexes that use the same columns.
A query of the form
SELECT DISTINCT expr_that_is_inferred_to_be_const LIMIT 0 OFFSET n
produces one row when it should produce none. The issue was in
JOIN_TAB::remove_duplicates() in the piece of logic that tried to
avoid duplicate removal for such cases but didn't account for possible
"LIMIT 0".
Fixed by making Select_limit_counters::set_limit() change OFFSET to 0
when LIMIT is 0.
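For example (hypothetical illustration), the following must return no rows:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2),(3);
  # DISTINCT over a constant expression: duplicate removal is skipped,
  # but LIMIT 0 must still yield an empty result whatever the OFFSET is
  SELECT DISTINCT 1+1 FROM t1 LIMIT 0 OFFSET 1;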
This bug could affect queries with a grouping derived table containing
equalities in the where clause of its specification if the optimizer
chose to apply split optimization to access the derived table. In such
cases wrong results could be returned from the queries.
When the optimizer considers the possibility of applying the split optimization
to a derived table, it injects equalities joining the derived table with
other tables into the where condition of the derived table. After the join
order for the execution using the split optimization has been chosen as the
cheapest, the injected equalities that are not used to access the derived
table are removed from the where condition of the derived table.
For this removal the optimizer looks through the conjuncts of the where
condition of the derived table, fetches the equalities and checks whether
they belong to the list of injected equalities.
As the injection was performed just by inserting the list into the list of
conjuncts of the top-level AND of the where condition, some extra
conjuncts from the where condition could be automatically attached to the
end of the list of injected equalities. If such an attached conjunct happened
to be an equality predicate, it was removed from the where condition of the
derived table and thus lost for checking at the execution phase.
The bug has been fixed by injecting a shallow copy of the list of the
pushed equalities rather than the list itself, leaving the latter intact.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
Problem:
At some point, we made stored routines fail at CREATE time
instead of execution time in case of this syntax:
IF unknown_variable
...
END IF
As a result, a trigger created before this change and containing an unknown
variable worked in a bad way after upgrade:
- It was displayed with an empty trigger name by SHOW CREATE TRIGGER
- It was displayed with an empty trigger name by INFORMATION_SCHEMA.TRIGGERS
- An attempt to DROP this trigger returned errors; nothing happened.
- DROP TABLE did not remove the .TRN file corresponding to this broken trigger.
Underlying code observations:
The old code assumed that the trigger name resides in the current lex:
if(thd->lex->spname)
m_trigger_name= &thd->lex->spname->m_name;
This is not always the case. Some SP statements (e.g. IF)
do the following in their beginning:
- create a separate local LEX
- set thd->lex to this new local LEX
- push the new local LEX to the stack in sp_head::m_lex
and the following at the end of the statement:
- pop the previous LEX from the stack sp_head::m_lex
- set thd->lex back to the popped value
So when the parse error happens inside e.g. IF statement, thd->lex->spname
is a NULL pointer, because thd->lex points to the local LEX (without SP name)
rather than the top level LEX (with SP name).
Fix:
- Adding a new method sp_head::find_spname_recursive()
which walks inside the LEX stack sp_head::m_lex from
the top (the newest, most local) to the bottom (the oldest),
and finds the one which contains a non-zero spname pointer.
- Using the new method inside
Deprecated_trigger_syntax_handler::handle_condition():
First it still tests thd->lex->spname (like before this change),
and uses it in case it is not empty.
Otherwise (if thd->lex->spname is empty), it calls
sp_head::find_spname_recursive() to find the LEX with a
non-empty spname inside the LEX stack of the current sphead.
Disable LATERAL DERIVED optimization for subqueries that have WITH ROLLUP.
This bug could affect queries with grouping derived tables / views / CTEs
with ROLLUP. The bug could manifest itself if the corresponding
materialized derived tables are subject to split optimization.
The current implementation of the split optimization produces rows
from the derived table in an arbitrary order. So these rows must be
accumulated in another temporary table and sorted according to the
used GROUP BY clause in order to be able to generate the additional
ROLLUP rows.
This patch prohibits the use of the split optimization for grouping derived
tables / views / CTEs with ROLLUP.
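A hypothetical query of the shape for which the split optimization is now disabled (illustration only):

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  CREATE TABLE t2 (a INT, KEY(a));
  # grouping derived table with ROLLUP: its rows must be buffered and
  # sorted to produce the ROLLUP rows, so splitting is not applicable
  SELECT *
  FROM t2, (SELECT a, SUM(b) AS s FROM t1 GROUP BY a WITH ROLLUP) dt
  WHERE t2.a = dt.a;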
The --skip-write-binary-log option added to mysql_tzinfo_to_sql in
MDEV-18778 was only effective if galera was enabled on the server.
This is because it tied together three concepts under one option:
1. binary logging
2. wsrep replication, and
3. using innodb as a transitional table type.
Change 1: a small change in the option's help text to reflect this.
To solve the performance problem with Aria tables, LOCK TABLES WRITE
is used to eliminate the need to fdatasync until the UNLOCK TABLES.
If galera isn't enabled, then we also want to use the LOCK TABLE WRITE
mechanism.
The START TRANSACTION added in MDEV-23440 needed to be moved to
before LOCK TABLES otherwise it would cancel their effect.
TRUNCATE TABLE statements also need to be before the LOCK TABLES.
When changing back from InnoDB to Aria, include the ORDER BY that
was originally there in 6aaccbcbf7, matching the final ALTER
TABLE in the timezonedir branch.
Running: mariadb-tzinfo-to-sql --skip-write-binlog /usr/share/zoneinfo
now generates 16 Aria_transaction_log_syncs, down from 7053.
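The resulting statement ordering for the non-Galera (Aria) case is roughly as sketched below (a sketch of the ordering described above, not the literal generated output; the table list and the tail are abbreviated):

  TRUNCATE TABLE time_zone;
  TRUNCATE TABLE time_zone_name;
  TRUNCATE TABLE time_zone_transition;
  TRUNCATE TABLE time_zone_transition_type;
  START TRANSACTION;
  LOCK TABLES time_zone WRITE,
              time_zone_name WRITE,
              time_zone_transition WRITE,
              time_zone_transition_type WRITE;
  # ... INSERT statements generated from the zoneinfo files,
  # then the locks are released and the transaction committed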
Diagnostics_area::set_error_status (interrupted ALTER TABLE under LOCK)
Analysis: KILL_QUERY is not ignored when local memory used exceeds maximum
session memory. Hence the query proceeds, OK is sent and we end up
reopening tables that are marked for reopen. During this, kill status is
eventually checked and assertion failure happens during trying to send error
message because OK has already been sent.
Fix: OK is already sent, so the statement has already executed. It is
too late to give an error, so ignore the kill.
If the optimizer decides to rewrite a NOT IN predicand of the form
outer_expr IN (SELECT inner_col FROM ... WHERE subquery_where)
into the EXISTS subquery
EXISTS (SELECT 1 FROM ... WHERE subquery_where AND
(outer_expr=inner_col OR inner_col IS NULL))
then the pushed equality predicate outer_expr=inner_col can be used for
ref[or_null] access if inner_col is a reference to an indexed column.
In this case, if there is a selective range condition over this column then
a Rowid filter may be employed, coupled with the ref[or_null] access. The
filter is 'pushed' into the engine, and in InnoDB it currently cannot be
used with index look-ups by primary key. The ref[or_null] access can be
used only when outer_expr is not NULL. Otherwise the original predicand
is evaluated to TRUE only if the result set returned by the query
SELECT 1 FROM ... WHERE subquery_where
is empty. When performing this evaluation the executor switches to the
table scan by primary key. Before this patch the pushed filter still
remained marked as active and the engine tried to apply the filter. This
was incorrect and in InnoDB this attempt to use the filter led to an
assertion failure.
This patch fixes the problem by disabling usage of the filter when
outer_expr is evaluated to NULL.
make_join_select() calls const_cond->val_int(). There are edge cases
where const_cond may have a not-yet optimized subquery.
(The subquery will have used_tables() covered by join->const_tables. It
will still have const_item()==false, so other parts of the optimizer
will not try to evaluate it. We should probably mark such subqueries
as constant but that is outside the scope of this MDEV)
The old code erroneously used default_charset_info to compare field names.
default_charset_info can point to any arbitrary collation,
including ucs2*, utf16*, utf32*, including those that do not
support strcasecmp().
my_charset_utf8mb4_unicode_ci, which is used in this scenario:
CREATE TABLE t1 ENGINE=InnoDB WITH SYSTEM VERSIONING AS SELECT 0;
does not support strcasecmp().
Fixing the code to use Lex_ident::streq(), which uses
system_charset_info instead of default_charset_info.
In the case when filesort does not use addon field packing (because of
too small potential savings) and uses fixed-width addon fields instead,
the field->pack() call can store fewer bytes than the field's maximum
possible length, e.g. in the case of VARCHAR().
The memory between the packed length and addonf->length (the maximum length)
stayed uninitialized, which was reported by Valgrind/MSAN.
The problem was introduced by f52bf92014 in 10.5,
which removed the tail initialization (probably unintentionally).
Restoring the bzero() in the fixed-length branch,
so in the case when pack() stores fewer bytes than addonf->length says,
the trailing bytes get initialized.
Note, before f52bf92014, the bzero()
was under HAVE_valgrind conditional compilation. Now it's being added
unconditionally:
- MSAN also reported the problem, so it's not only Valgrind specific.
- As Serg proposed, conditional initialization is bad - it can have
potential security problems as the non-initialized memory fragments
can store various pieces of essential information, e.g. passwords.
Repeated execution of a query containing the IN clause with string literals,
in an environment where the server variable in_predicate_conversion_threshold
is set, results in abnormal server termination in case the query is run
as a Prepared Statement and conversion of charsets for string values in the
query is required.
The reason for the abnormal server termination is that the instances of the
class Item_string created when transforming the IN clause into a subquery were
allocated on the runtime memory root, which is deallocated on finishing the
execution of the Prepared Statement. On the other hand, references to Items
placed on the deallocated memory root still exist in objects of the class
table_value_constr. Subsequent execution of the same prepared statement leads
to dereferencing of pointers to already deallocated memory, which could lead
to undefined behaviour.
To fix the issue, the values being pushed into a values list for the TVC are
created by cloning their original items. This way the cloned items are allocated
on the PS memroot and, as a consequence, no dangling pointers remain.
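A hypothetical repro sketch of the described scenario (the table definition and charset combination are illustrative, not the MDEV test case):

  CREATE TABLE t1 (c VARCHAR(10) CHARACTER SET utf8mb4);
  SET NAMES latin1;                             # literals need charset conversion
  SET in_predicate_conversion_threshold= 2;     # force the IN -> TVC rewrite
  PREPARE stmt FROM "SELECT * FROM t1 WHERE c IN ('a','b','c')";
  EXECUTE stmt;
  EXECUTE stmt;   # re-execution dereferenced the freed Item_string values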
... '/var/lib/mysql/aria_log_control' for exclusive use, error: 11
As such, don't lock under aria_readonly (which is set by opt_help in the
handler initialization).
Fixes: 8eba777c2b
Example build: ./BUILD/compile-pentium64-valgrind-max
Fixes:
- sp-no-valgrind failed if the binary was built for valgrind, as in this case
mem_root is allocated in very small chunks which the test cannot handle.
Fixed by testing for a valgrind build.
- truncate_notembedded failed in reap because of more memory used.
Fixed by allowing reap to fail too.
This enables optimizer_trace output for the next SQL command.
Identical to doing the following manually:
- Store value of @@optimizer_trace
- Set @optimizer_trace="enabled=on"
- Run query
- SELECT * from OPTIMIZER_TRACE
- Restore value of @@optimizer_trace
This is a great time saver when one wants to quickly check the optimizer
trace for a query in an mtr test.
A part of the test main.long_unique attempts to insert records
with two 60,000,001-byte columns. Let us move that test into
a separate file main.long_unique_big, declared as big test,
so that it can be skipped in environments with limited memory.
From 10.4.13, the `mariadb.sys` user was created to replace `root` definers.
- In commit 0253ea7f22, the definer of the
Add/DropGeometryColumn procedures was changed to `mariadb.sys` in
`scripts/maria_add_gis_sp.sql.in`.
However, maria_add_gis_sp.sql only applies to new databases created by the
installation script. Databases upgraded from old versions will miss this
change.
- In addition, according to commit
0d6d801e5886208b2632247d88da106a543e1032 (MDEV-23102), in some scenarios
when the root user is replaced, creating the `mariadb.sys` user is skipped.
This commit updates the definer from `root` to `mariadb.sys` during
upgrade. It only makes the change if the original definer is root.
We do not execute `maria_add_gis_sp.sql` in the upgrade script to
recreate the procedures because of the MDEV-23102 scenario, where the
`root` user is replaced and `mariadb.sys` is not created.
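The upgrade change is roughly a statement of the following shape (a hypothetical sketch of the intent, not the literal contents of the upgrade script):

  UPDATE mysql.proc
     SET definer= 'mariadb.sys@localhost'
   WHERE definer= 'root@localhost'
     AND name IN ('AddGeometryColumn', 'DropGeometryColumn');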
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Drop and add of the same key is considered a rename (see ALTER_RENAME_INDEX in
fill_alter_inplace_info()). But in this case the order of keys may be
changed, because mysql_prepare_alter_table() does not yet know about the
rename and treats it as two operations: drop and add.
In that case we disable the inplace algorithm for such engines as Memory,
MyISAM and Aria with the ALTER_INDEX_ORDER flag. These engines have no
specialized check_if_supported_inplace_alter() and default
handler::check_if_supported_inplace_alter() sees an unknown flag and
returns HA_ALTER_INPLACE_NOT_SUPPORTED.
ha_innobase::check_if_supported_inplace_alter() works differently and
inplace is not disabled (with the help of modified
INNOBASE_INPLACE_IGNORE). The add_drop_v_cols branch was also tweaked as it
wrongly failed with MSG_UNSUPPORTED_ALTER_ONLINE_ON_VIRTUAL_COLUMN
when it saw ALTER_INDEX_ORDER.
A no-op operation must still be a no-op regardless of the presence of
ALTER_INDEX_ORDER, so we tweak its condition as well.
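For example (hypothetical illustration):

  CREATE TABLE t1 (a INT, b INT, KEY k1(a), KEY k2(b)) ENGINE=MyISAM;
  # the dropped and the added key have the same definition, so
  # fill_alter_inplace_info() sees a rename of k1, while
  # mysql_prepare_alter_table() still processes a drop plus an add,
  # which may change the order of the keys
  ALTER TABLE t1 DROP KEY k1, ADD KEY k1_new(a);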
mysql_prepare_create_table() does my_qsort(sort_keys) on the key
info. This sorting is not deterministic: a table is created with one
order and inplace alter may overwrite the frm with another order. Since
inplace alter does nothing about key info for the MyISAM/Aria storage
engines, this results in a discrepancy between frm and storage engine key
definitions.
The fix avoids the sorting of keys when no new keys are added by ALTER
(and this is ok for MyISAM/Aria since they cannot add new keys inplace).
There is a case when the implicit primary key may change when removing
NOT NULL from a part of a unique key. In that case we update
modified_primary_key, which is then used to not skip key sorting.
According to is_candidate_key() there are no other cases when the primary
key may be changed implicitly.
Notes:
mi_keydef_write()/mi_keyseg_write() are used only in mi_create(). They
should be used in ha_inplace_alter_table() as well.
Aria corruption detection is unimplemented: maria_check_definition()
is never used!
MySQL 8.0 has this bug as well as of 8.0.26.
This breaks main.long_unique in 10.4. The new result is correct and
should be applied as it is just a different (original) order of keys.
A long UNIQUE HASH index silently creates a virtual column index, which should
be impossible for base columns featuring AUTO_INCREMENT.
Fix: add a relevant check; add a new vcol type for a prettier error message.
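A hypothetical illustration of the now-rejected shape (not the MDEV test case): a long UNIQUE HASH key whose base columns include an AUTO_INCREMENT column:

  CREATE TABLE t1 (
    id BIGINT AUTO_INCREMENT,
    b BLOB,
    KEY (id),
    UNIQUE KEY (b, id)   # becomes a long UNIQUE HASH over a hidden vcol
  );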
Also fixes:
MDEV-25399 Assertion `name.length == strlen(name.str)' failed in Item_func_sp::make_send_field
Also fixes a problem that in this scenario:
SET NAMES binary;
SELECT 'some not well-formed utf8 string';
the auto-generated column name copied the binary string value directly
to the Item name, without checking utf8 well-formedness.
After this change auto-generated column names work as follows:
- Zero bytes 0x00 are copied to the name using HEX notation
- In case of "SET NAMES binary", all bytes sequences that do not make
well-formed utf8 characters are copied to the name using HEX notation.
Assertion `!pk->has_virtual()' failed in dict_index_build_internal_clust
while creating a PRIMARY KEY longer than possible to store in the page.
This happened because the key was wrongly deduced to be long-UNIQUE-supported;
however, a PRIMARY KEY cannot be of that type. The main reason is that
only 8 bytes are used to store the hash, see HA_HASH_FIELD_LENGTH.
This is also why the HA_NOSAME flag is removed (and caused the assertion in
turn) in open_table_from_share:
if (key_info->algorithm == HA_KEY_ALG_LONG_HASH)
{
key_part_end++;
key_info->flags&= ~HA_NOSAME;
}
To make it unique, an additional check is done by the
check_duplicate_long_entries call from ha_write_row, and a similar one
from ha_update_row.
A long hash PRIMARY key is already forbidden, which is checked by the first
test in main.long_unique; however, is_hash_field_needed was wrongly deduced
to be true in mysql_prepare_create_table in this particular case.
FIX:
* Improve the check for the Key::PRIMARY type
* Simplify the is_hash_field_needed deduction for neater reading
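A hypothetical illustration of the kind of definition that hit the assertion (column sizes are made up; the exact limit depends on the row format):

  # the PRIMARY KEY is longer than InnoDB's index size limit; it must be
  # rejected with an error instead of silently becoming a long UNIQUE HASH
  CREATE TABLE t1 (
    a VARCHAR(1000), b VARCHAR(1000), c VARCHAR(1000), d VARCHAR(1000),
    PRIMARY KEY (a, b, c, d)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;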
MySQL-5.7 mysql.user tables have a last_password_changed field.
Because MariaDB before 10.4 remained oblivious to this, the act of creating
users or otherwise changing a user's row left the last_password_changed
field at 0.
Running a MariaDB-10.4 instance on this would work correctly, until
mysql_upgrade is run, when this 0 value immediately translates to a
password-expired state.
MySQL-5.7 relied on the password_expired enum to indicate password
expiry, so we aren't going to activate passwords that were expired in
MySQL-5.7.
Thanks Hans Borresen for the bug report and review of the fix.
Uninitialized ref_pointer_array[] because setup_fields() got an empty
fields list. mysql_multi_update() for some reason does that by
substituting the fields list with the empty total_list for the
mysql_select() call (looks like a wrong merge, since total_list is not
used anywhere else and is always empty). One fix would be to restore
the original fields list, but this fails the update_use_source.test
case:
--error ER_BAD_FIELD_ERROR
update v1 set t1c1=2 order by 1;
Actually not failing the above seems to be ok.
The other fix would be to keep resolve_in_select_list false (and that
keeps outer context from being resolved in
Item_ref::fix_fields()). This fix is more consistent with how SELECT
behaves:
--error ER_SUBQUERY_NO_1_ROW
select a from t1 where a= (select 2 from t1 having (a = 3));
So this patch implements this fix.
There are two fill_record() functions (lines 8343 and 8618). The first one
is used when there are some explicit values, the second one is used
for all implicit values. The first one does update_default_fields(), the
second one did not. Added an update_default_fields() call to the implicit
version of fill_record().
Changes on top of Sachin’s patch. Specifically:
1) Refined the parsing break condition to only change the parser’s
behavior for parsing strings in binary mode (behavior of \0 outside
of strings is unchanged).
2) Prefixed binary_zero_insert.test with ‘mysql_’ to more clearly
associate the purpose of the test.
3) As the input of the test contains binary zeros (0x5c00),
different text editors can visualize this sequence differently, and
Github would not display it at all. Therefore, the input itself was
consolidated into the test and created out of hex sequences to make
it easier to understand what is happening.
4) Extended test to validate that the rows which correspond to the
INSERTS with 0x5c00 have the correct binary zero data.
Reviewed By:
===========
Andrei Elkin <andrei.elkin@mariadb.com>
Problem:- Some binary data is inserted into the table using Jconnector. When
the binlog dump of the data is applied using the mysql client it gives a
syntax error.
Reason:-
After investigating, it turns out to be an issue of the mysql client not being
able to properly handle \\\0 <0 in binary>. In all binary files where the mysql
client fails to insert, these 2 bytes are common (0x5c00).
Solution:-
I have changed mysql.cc to account for the possibility that a binary string can
have \\\0 in it.
Normally we disable caching of routines in "SHOW CREATE".
Introduce an exception, if debug_dbug="+d,cache_sp_in_show_create".
lock_sync.test needs a way to populate the cache without side effects,
or else it runs into debug_sync timeouts.
So, this possibility to cache will remain only for very special tests.
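A minimal sketch of how a test can use this exception (hypothetical test content, assuming a debug build):

  SET debug_dbug="+d,cache_sp_in_show_create";
  # populates the SP cache for p1 without executing it
  SHOW CREATE PROCEDURE p1;
  SET debug_dbug="";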
The reason for this behavior is that SPs get cached, per connection.
The stored_program_cache is the size of this cache, which amounts to 256
routines by default. A compiled stored procedure can easily be several
megabytes in size. Thus calling SHOW CREATE PROCEDURE for all stored
procedures, like mysqldump does, can require a significant amount of memory.
Fixed by bypassing the cache for "SHOW CREATE". This should normally be
fine also performance-wise, as the cache is meant to be used for repeated
execution, not repeated SHOW CREATEs.
Added a test to verify that CREATE PROCEDURE + SHOW CREATE PROCEDURE do not
cache, i.e. the amount of allocated memory does not change.
Note, there is a change in existing behavior in an edge case:
If "SHOW CREATE PROCEDURE p1" is called from p1, after p1 was altered,
this will now return the altered code. The previous behavior relied on
caching and would return the old code; that was not necessarily correct.
instead of the original name of the column
When refactoring temporary table field creation, a mistake was made
when copying the column name for internal temporary tables.
For internal temporary tables we should use the original field name, not
the item name (= alias).