Set select_thread_in_use only when we're about to enter the polling
loop, not sooner, allowing early process aborts to exit cleanly: the
process won't be waiting for a polling loop that isn't yet polling.
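A minimal sketch of the intended ordering, with select_thread_in_use
as in the commit and everything else illustrative:
  #include <atomic>

  std::atomic<bool> select_thread_in_use{false};

  void select_thread()
  {
    /* ... setup that an early process abort may interrupt ... */
    /* Set the flag only right before polling begins, so an aborting
       process never waits on a loop that is not yet polling. */
    select_thread_in_use.store(true);
    while (select_thread_in_use.load())
    {
      /* poll for work; the flag is cleared elsewhere on shutdown */
    }
  }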
Problem:
========
- INSERT into a ROW_FORMAT=REDUNDANT table fails after an instant
DROP of a BLOB column. Instant DROP COLUMN only marks the column
as hidden, and a subsequent INSERT statement tries to insert a
NULL value for the dropped BLOB column, and the fixed length of
the BLOB type, 65535, is used. This leads to a "row size too
large" error.
Fix:
====
For a ROW_FORMAT=REDUNDANT table, if the non-fixed dropped column
can be NULL, set the length of the field type to 0.
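A sketch of the length rule (illustrative names, not the actual
InnoDB code):
  /* For a ROW_FORMAT=REDUNDANT table: a dropped, non-fixed-length,
     nullable column must not contribute the type's fixed maximum
     (65535 for BLOB) to the row size. */
  unsigned dropped_field_len(bool fixed_length, bool nullable,
                             unsigned type_fixed_len)
  {
    if (!fixed_length && nullable)
      return 0;
    return type_fixed_len;
  }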
Stop skipping const items when selecting, but skip them when storing
their results to the spider row, to avoid storing into mismatching
temporary table fields.
Skip auxiliary fields when SELECTing, and accordingly do not store
the (non-existent) results into the corresponding temporary table.
When there are BOTH auxiliary fields AND const items in the auxiliary
field items, do not use the spider GBH. This is a rare case, if it
occurs at all, and is not worth the added complexity to cover.
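As an illustration of the storing rule, a minimal sketch with
stand-in types (const_item() mirrors the server's predicate; the
rest is hypothetical, the real logic lives in the Spider group-by
handler):
  #include <cstddef>
  #include <vector>

  struct Item  { bool is_const; bool const_item() const { return is_const; } };
  struct Field {};                        /* stand-in temp table field */
  void store_result_into(Field*, Item*);  /* stand-in for the real store */

  /* Const items are still SELECTed, but skipped when storing, so the
     remaining results stay aligned with the temporary table fields. */
  void store_row(const std::vector<Item*> &select_items,
                 std::vector<Field*> &fields)
  {
    std::size_t f= 0;
    for (Item *item : select_items)
      if (!item->const_item())
        store_result_into(fields[f++], item);
  }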
Use the original item (item_ptr) in constructing GROUP BY and ORDER
BY, which also means using item->name instead of field->field_name as
aliases in constructing SELECT items. This fixes spurious regressions
caused by the above changes in some tests using ORDER BY, such as
mdev_24517.test. As a by-product, this also fixes MDEV-29546.
Therefore we update mdev_29008.test to include the MDEV-29546 case.
Remove dead code in Spider related to Spider's Oracle OCI support.
The code has been disabled for a long time and it is unlikely that
it will ever be enabled.
During Spider query construction of certain cast functions, Spider
locates the last occurrence of a keyword in the output of the
Item::print() function and appends from there to the query
constructed so far. For example, consider the following query
SELECT * FROM t2 ORDER BY CAST(c AS INET6);
It constructs the following query and executes it at the data
node (assuming the data node table is `test`.`t1`, aliased as t0).
select cast(t0.`c` as inet6) ``,t0.`c` `c` from `test`.`t1` t0 order by ``
When the construction has completed the initial part
select cast(t0.`c`
it then attempts to construct the " as inet6" part. To that end, it
calls print() on the Item_typecast_fbt corresponding to the cast item,
and obtains
cast(`test`.`t2`.`c` as inet6)
It then looks for " as ", and places the cursor there for appending:
cast(`test`.`t2`.`c` as inet6)
                    ^
In this patch, if the search fails, i.e. there's no " as ...", we
make sure that the cursor is not placed before the beginning of the
string (out of bounds).
We also relax the search from " as char" to " as " in the case of
CHAR_TYPECAST_FUNC, since there is more than one Item type with this
func type. For example, "AS INET6" is an Item_typecast_fbt which has
this func type.
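A self-contained sketch of the guarded search using std::string (the
real code works on Spider's own string buffers; the function name is
illustrative):
  #include <string>

  /* Append everything from the last " as " in print()'s output to
     the query; if there is no " as ", append nothing instead of
     placing the cursor before the start of the string. */
  void append_cast_tail(std::string &query, const std::string &printed)
  {
    std::string::size_type pos= printed.rfind(" as ");
    if (pos == std::string::npos)
      return;                     /* the guard added by this patch */
    query.append(printed, pos, std::string::npos);
  }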
When a DDL statement results in a local partitioned table with
partitions not covering all values in the table, an error is emitted.
However, when the table in question is a spider table, the issue does
not surface until some future statements (DELETE in the test examples
in this commit) are executed. This is consistent with the design of
spider which aims to minimise connections with the data node. The
resulting error is legitimate and should not result in an assertion
failure. Similarly, a partitioned spider table could have misplaced
rows, so we remove the other assertion as well.
- Document tmp_share, which are temporary spider shares with only one
link (no ha).
- Simplify spider_get_sys_tables_connect_info(), where link_idx is
always 0.
- InnoDB fails to set the index information or index number
for the spatial index error HA_ERR_NULL_IN_SPATIAL.
row_build_spatial_index_key(): Initialize the tmp_mbr array completely.
check_if_supported_inplace_alter(): Fix the spelling mistake of "alter".
Sometimes, in MariaDB Server 10.5 but apparently not in later branches,
the test would hang because con1 and con2 would be blocked in
debug_sync (for example, lock_wait_suspend_thread_enter and
row_ins_sec_index_entry_dup_locks_created), thereby preventing
the purge of transactions from completing.
To prevent an occasional DEBUG_SYNC induced hang in the test, we will
wait for everything to be purged, except the last 2 transactions.
This change should be null-merged to 10.6, because the test is not
failing in 10.6 or later major versions.
Field_blob::store() has special code for the GROUP_CONCAT temporary
table (to store blob values in Blob_mem_storage - this prevents them
from being freed/overwritten when the next row is read).
Field_geom and Field_blob_compressed inherit from Field_blob but they
have their own ::store() method without this special Blob_mem_storage
support.
Considering that non-grouping CONCAT() of such fields converts
them to plain BLOB, let's do the same for GROUP_CONCAT. To do it,
Item_func_group_concat::setup will signal that it's creating
a temporary table for GROUP_CONCAT, and the Field_blob::make_new_field()
override will create a base Field_blob when under GROUP_CONCAT.
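A condensed sketch of the mechanism with stand-in classes (the real
ones are Field_blob, Field_geom and Field_blob_compressed, and the
real signalling goes through the table being created):
  /* When the temp table is created for GROUP_CONCAT, make_new_field()
     returns a plain Field_blob even for subclasses, so the
     Blob_mem_storage handling in Field_blob::store() applies. */
  struct Field_blob
  {
    virtual ~Field_blob() {}
    virtual Field_blob *clone() const { return new Field_blob(*this); }
    Field_blob *make_new_field(bool for_group_concat) const
    {
      if (for_group_concat)
        return new Field_blob();    /* degrade to the base blob */
      return clone();
    }
  };

  struct Field_geom : Field_blob
  {
    Field_blob *clone() const override { return new Field_geom(*this); }
  };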
Hash index is a vcol-based wrapper (MDEV-371). row_end is added to the
unique index, so when row_end is updated the unique hash index must be
recalculated via vcol_update_fields(). DELETE did not update virtual
fields, so DELETE HISTORY was getting a wrong hash value.
The fix does update_virtual_fields() in vers_update_end(), so in every
case where row_end is updated, virtual fields are updated as well.
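A sketch of the fix with stand-in types (the real call sites are in
the server's versioning update path):
  struct Row { unsigned long long row_end; unsigned long long hash; };

  unsigned long long current_trx_id();  /* stand-in */
  void update_virtual_fields(Row&);     /* recomputes the vcol hash */

  void vers_update_end(Row &row)
  {
    row.row_end= current_trx_id();
    /* DELETE previously skipped this, so DELETE HISTORY hashed a
       stale row_end; now vcols follow every row_end change. */
    update_virtual_fields(row);
  }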
Make CREATE .. SELECT work consistently with replication.
Row-based replication does not execute CREATE .. SELECT but instead
CREATE TABLE. CREATE .. SELECT creates implicit system fields in an
unusual place: in between the declared fields and the select fields.
That was done because the select_field_pos logic requires select
fields to go last in create_list.
So, CREATE .. SELECT on the master and CREATE TABLE on the slave
create the system fields at different positions and replication gets
a field mismatch.
To fix this, we changed CREATE .. SELECT to create the implicit
system fields in the usual place at the end, and updated
select_field_pos to handle this case.
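The position bookkeeping, as an illustration only (not the actual
server code):
  #include <cstddef>

  /* Old CREATE .. SELECT:  declared | implicit system | select
     New CREATE .. SELECT:  declared | select | implicit system
     matching what CREATE TABLE produces on the slave; the position
     of the first select field now discounts the trailing system
     fields. */
  std::size_t select_field_pos(std::size_t create_list_elements,
                               std::size_t select_field_count,
                               std::size_t implicit_system_fields)
  {
    return create_list_elements - select_field_count
           - implicit_system_fields;
  }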
heap-buffer-overflow in _mi_put_key_in_record
The rec buffer size depends on vreclength like this:
length= MY_MAX(length, info->s->vreclength);
The problem is that the rec buffer is allocated before vreclength is
calculated. The fix reallocates the rec buffer if vreclength has
changed (a sketch follows the backtraces below).
1. Rec buffer allocated
f0 mi_alloc_rec_buff (...) at ../src/storage/myisam/mi_open.c:738
f1 0x00005f4928244516 in mi_open (...) at ../src/storage/myisam/mi_open.c:671
f2 0x00005f4928210b98 in ha_myisam::open (...)
at ../src/storage/myisam/ha_myisam.cc:847
f3 0x00005f49273aba41 in handler::ha_open (...) at ../src/sql/handler.cc:3105
f4 0x00005f4927995a65 in open_table_from_share (...)
at ../src/sql/table.cc:4320
f5 0x00005f492769f084 in open_table (...) at ../src/sql/sql_base.cc:2024
f6 0x00005f49276a3ea9 in open_and_process_table (...)
at ../src/sql/sql_base.cc:3819
f7 0x00005f49276a29b8 in open_tables (...) at ../src/sql/sql_base.cc:4303
f8 0x00005f49276a6f3f in open_and_lock_tables (...)
at ../src/sql/sql_base.cc:5250
f9 0x00005f49275162de in open_and_lock_tables (...)
at ../src/sql/sql_base.h:509
f10 0x00005f4927a30d7a in open_only_one_table (...)
at ../src/sql/sql_admin.cc:412
f11 0x00005f4927a2c0c2 in mysql_admin_table (...)
at ../src/sql/sql_admin.cc:603
f12 0x00005f4927a2fda8 in Sql_cmd_optimize_table::execute (...)
at ../src/sql/sql_admin.cc:1517
f13 0x00005f49278102e3 in mysql_execute_command (...)
at ../src/sql/sql_parse.cc:6180
f14 0x00005f49278012d7 in mysql_parse (...) at ../src/sql/sql_parse.cc:8236
2. vreclength calculated
f0 ha_myisam::setup_vcols_for_repair (...)
at ../src/storage/myisam/ha_myisam.cc:1002
f1 0x00005f49282138b4 in ha_myisam::optimize (...)
at ../src/storage/myisam/ha_myisam.cc:1250
f2 0x00005f49273b4961 in handler::ha_optimize (...)
at ../src/sql/handler.cc:4896
f3 0x00005f4927a2d254 in mysql_admin_table (...)
at ../src/sql/sql_admin.cc:875
f4 0x00005f4927a2fda8 in Sql_cmd_optimize_table::execute (...)
at ../src/sql/sql_admin.cc:1517
f5 0x00005f49278102e3 in mysql_execute_command (...)
at ../src/sql/sql_parse.cc:6180
f6 0x00005f49278012d7 in mysql_parse (...) at ../src/sql/sql_parse.cc:8236
FYI the backtraces were obtained with these gdb settings:
set print frame-info location
set print frame-arguments presence
set width 80
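Sketched with stand-in types (names modelled on, but not identical
to, the MyISAM ones), the reallocation amounts to:
  #include <cstddef>
  #include <cstdlib>

  struct Share { std::size_t vreclength; };
  struct Info  { Share *s; char *rec_buff; std::size_t rec_buff_size; };

  /* Reallocate the rec buffer when vreclength has grown since the
     buffer was first allocated during mi_open(). */
  int ensure_rec_buff(Info *info, std::size_t length)
  {
    std::size_t want= length > info->s->vreclength
                          ? length : info->s->vreclength; /* MY_MAX() */
    if (want <= info->rec_buff_size)
      return 0;
    char *p= static_cast<char*>(std::realloc(info->rec_buff, want));
    if (!p)
      return 1;                                           /* OOM */
    info->rec_buff= p;
    info->rec_buff_size= want;
    return 0;
  }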
There were unused variables. They were not conditional
on defines, so they were removed.
Added error handling in proc_object if there was no db,
as subsequent operations would have failed.
CMake rewriting the tests causes Mroonga to be un-buildable
in build environments where the source directory is read-only.
In the test results, the version wasn't particularly important.
Remove the version dependence of the tests.
When calculate_cond_selectivity_for_table() takes into account multi-
column selectivities from range access, it tries to take into account
that the selectivity for some columns may have already been counted.
For example, for range access on IDX1 using {kp1, kp2}, the
selectivity of the restrictions on "kp2" might have already been
taken into account to some extent.
So, the code tries to "discount" that using rec_per_key[] estimates.
This seems to be wrong and unreliable: the "discounting" may produce a
selectivity multiplier which hints that the overall selectivity of
range access on IDX1 was greater than 1.
Do a conservative fix: if we arrive at the conclusion that the
selectivity of range access on IDX1 is greater than 1.0, clip it down
to 1.0.
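In code form, the conservative fix amounts to clipping (illustrative):
  #include <algorithm>

  /* A selectivity is a probability; the rec_per_key "discounting"
     must never push the range-access estimate above 1.0. */
  double clipped_selectivity(double multiplied_selectivity)
  {
    return std::min(multiplied_selectivity, 1.0);
  }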
storage/connect/tabfmt.cpp:419:24: error: '%.3d' directive writing between 3 and 10 bytes into a region of size 5 [-Werror=format-overflow=]
419 | sprintf(buf, "COL%.3d", i+1);
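One common shape of the fix, sketched here (only buf and the format
string come from the diagnostic; the surrounding function is
illustrative): size the buffer for the worst case and bound the write.
  #include <cstdio>

  void format_col_name(int i)
  {
    char buf[16];   /* "COL" + sign + up to 10 digits + NUL fits */
    std::snprintf(buf, sizeof buf, "COL%.3d", i + 1);
    std::puts(buf);
  }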
row_purge_reset_trx_id(): Reserve large enough offsets for accommodating
the maximum width PRIMARY KEY followed by DB_TRX_ID,DB_ROLL_PTR.
Reviewed by: Thirunarayanan Balathandayuthapani
Don't allow changing the referencing key column from NULL to NOT NULL
when
1) the foreign key constraint type is ON UPDATE SET NULL,
2) the foreign key constraint type is ON DELETE SET NULL, or
3) the foreign key constraint type is ON UPDATE CASCADE and the
referenced column is declared as NULL.
Don't allow changing the referenced key column from NOT NULL to NULL
when the foreign key constraint type is ON UPDATE CASCADE and the
referencing key columns don't allow NULL values.
get_foreign_key_info(): InnoDB sends the information about
nullability of the foreign key fields and referenced key fields.
fk_check_column_changes(): Enforce the above rules for the COPY
algorithm.
innobase_check_foreign_drop_col(): Checks whether the dropped
column exists in an existing foreign key relation.
innobase_check_foreign_low(): Enforce the above rules for the
INPLACE algorithm.
dict_foreign_t::check_fk_constraint_valid(): This is used
by the CREATE TABLE statement to check nullability for foreign
key relations.
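The rules, encoded as an illustrative predicate pair (not the actual
server functions):
  enum fk_action { ON_UPDATE_SET_NULL, ON_DELETE_SET_NULL,
                   ON_UPDATE_CASCADE, FK_OTHER };

  /* May a referencing (child) column change from NULL to NOT NULL? */
  bool referencing_null_to_not_null_allowed(fk_action a,
                                            bool referenced_is_nullable)
  {
    if (a == ON_UPDATE_SET_NULL || a == ON_DELETE_SET_NULL)
      return false;                             /* rules 1 and 2 */
    if (a == ON_UPDATE_CASCADE && referenced_is_nullable)
      return false;                             /* rule 3 */
    return true;
  }

  /* May a referenced (parent) column change from NOT NULL to NULL? */
  bool referenced_not_null_to_null_allowed(fk_action a,
                                           bool referencing_is_nullable)
  {
    return !(a == ON_UPDATE_CASCADE && !referencing_is_nullable);
  }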
The method was declared to return an unsigned integer, but it is
really a boolean (and used as such by all callers).
A secondary change is the addition of "const" and "noexcept" to this
method.
In ha_mroonga.cpp, I also added "inline" to the two helper methods of
referenced_by_foreign_key(). This allows the compiler to flatten the
method.
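The shape of the declaration change, as a sketch:
  struct handler
  {
    /* was: uint referenced_by_foreign_key(); */
    bool referenced_by_foreign_key() const noexcept;
  };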
We have found that my_errno can be "passed" to the next command in
some cases. It is practically impossible to check/fix all cases of
my_errno in the server, plugins and engines, so we will reset it as
we reset other errors.
The test case will be fixed by the CSV engine fix, so it will be
added with it (see part2).