Actually, page_zip_verify_checksum() generally allows all-zeroes
checksums, because our CRC32 checksum is computed as something like
crc_1 ^ crc_2 ^ crc_3
Also, an all-zeroes page is considered correct.
As a side effect, fix nasty reinterpret_cast<> UB.
Also, since commit c0f47a4a58, innodb_checksum_algorithm=full_crc32
exists, which computes the CRC32 in one go (without bitwise arithmetic).
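A minimal sketch of the difference, using zlib's crc32(); the three byte
ranges are illustrative, not the real page layout:

  #include <zlib.h>      // crc32()
  #include <cstdint>
  #include <cstddef>

  // "Combined" style: XOR of per-range CRCs.  Because XOR can cancel bits,
  // a perfectly valid checksum of this form may come out as 0, which is why
  // an all-zeroes checksum field cannot be rejected outright.
  static uint32_t combined_checksum(const unsigned char *p1, size_t n1,
                                    const unsigned char *p2, size_t n2,
                                    const unsigned char *p3, size_t n3)
  {
    uint32_t crc_1= (uint32_t) crc32(0, p1, (uInt) n1);
    uint32_t crc_2= (uint32_t) crc32(0, p2, (uInt) n2);
    uint32_t crc_3= (uint32_t) crc32(0, p3, (uInt) n3);
    return crc_1 ^ crc_2 ^ crc_3;
  }

  // full_crc32 style: one CRC over the whole covered range, with no bitwise
  // combination of partial results.
  static uint32_t full_checksum(const unsigned char *page, size_t len)
  {
    return (uint32_t) crc32(0, page, (uInt) len);
  }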
It was:
implicit conversion from 'ha_rows' (aka 'unsigned long long') to 'double'
changes value from 18446744073709551615 to 18446744073709551616
Follow what JOIN::get_examined_rows() does for similar code.
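A standalone reproduction of the warning; the explicit cast shown is the
generic remedy and only an assumption about what the referenced code does:

  #include <cstdio>

  typedef unsigned long long ha_rows;        // as in MariaDB

  int main()
  {
    ha_rows rows= 18446744073709551615ULL;   // 2^64 - 1, i.e. ~(ha_rows) 0
    // 2^64 - 1 is not representable in a double, so the value rounds up to
    // 2^64 = 18446744073709551616 -- exactly what clang warns about when
    // the conversion is implicit.
    double implicit_d= rows;
    // An explicit cast states that the rounding is intended and silences
    // the warning.
    double explicit_d= (double) rows;
    printf("%.1f %.1f\n", implicit_d, explicit_d);
    return 0;
  }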
Re-enable main.mysql_client_test on all builders, because
at the moment we do not run any --big-test on buildbot
due to resource constraints.
A number of tests were declared big in
commit eeee1832d7
in an attempt to save resources on buildbot.
The default keyread_time() was optimized for blocks and is not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
Also fixed get_sweep_read_cost() for HEAP tables.
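A hedged sketch of the idea; the method signature is assumed from the 10.x
handler API and the constant is a made-up tuning factor, not the actual
patch:

  // Hypothetical override for an in-memory engine: there are no block
  // reads, so the key-read cost should scale with ranges and rows rather
  // than with disk blocks, as the default handler::keyread_time() assumes.
  double ha_heap::keyread_time(uint index, uint ranges, ha_rows rows)
  {
    return (double) ranges + (double) rows * 0.001;  // assumed constants
  }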
- Move testing of my_writer to inline functions to avoid calls
- Made more functions inline. Especially thd->trace_started()
  is now very optimized!
- Moved the Opt_trace_stmt class to opt_trace_context.h to get critical
  functions inline
- Added unlikely() to optimize for optimizer trace not being enabled
  (see the sketch after this list)
- Made THD::trace_started() inline
- Added 'if (trace_enabled())' around some potentially expensive code
  (not many found)
- Added asserts to ensure we don't make expensive optimizer trace calls
  if optimizer trace is not enabled
- Added length to Json_writer functions to speed up buffer writes
  when optimizer trace is enabled
- Changed LEX_CSTRING argument handling to not send the full struct to the
  writer function; on_add_str() functions now trust the length argument
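A standalone sketch of the guard pattern from the list above (the types and
the unlikely() macro are stand-ins, not the server's code):

  #include <cstdio>

  #define unlikely(x) __builtin_expect(!!(x), 0)   // gcc/clang branch hint

  struct Trace
  {
    bool started;
    bool trace_started() const { return started; }  // cheap inline test
    void add(const char *key, double val) { printf("%s: %g\n", key, val); }
  };

  // Test the cheap inline flag first, hint the branch as unlikely, and only
  // then pay for the expensive trace write.
  static void trace_rows(Trace *trace, double rows)
  {
    if (unlikely(trace->trace_started()))
      trace->add("rows", rows);
  }

  int main()
  {
    Trace t= { false };
    trace_rows(&t, 42);    // fast path: one predicted-false branch
    t.started= true;
    trace_rows(&t, 42);    // slow path: trace actually written
    return 0;
  }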
Stop masking the Data_free values, because innodb_file_per_table=1
is the default.
Also, do mask Update_time after updating tables, even though for
some reason it does not appear to matter.
Clean up install_layout to account for the multi-arch setup and remove
redundant defines in debian rules.
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
When connections go to the same node and a deadlock happens, BF abort
should not happen for the victim thread. Fixed by guarding
`wsrep_handle_SR_rollback()` so that it is called only for SR transactions.
Co-authored-by: Seppo Jaakola <seppo.jaakola@iki.fi>
Co-authored-by: Daniele Sciascia <daniele.sciascia@galeracluster.com>
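A hedged fragment of what such a guard can look like; the exact call site
and the wsrep API shape are assumptions, not the committed diff:

  // Only streaming-replication (SR) transactions need SR rollback handling;
  // a plain local victim of a same-node deadlock must be left alone.
  if (thd->wsrep_trx().is_streaming())
    wsrep_handle_SR_rollback(NULL, thd);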
Status variable Threads_connected can temporarily be bigger than max_connections+1
If SHOW STATUS LIKE "Threads_connected" comes after
ER_CON_COUNT_ERROR is sent to the client, but before the counter is
decremented, Threads_connected can differ from the expected value.
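A hedged standalone sketch of the window (the shape is illustrative, not
the server's connection code):

  #include <atomic>

  static std::atomic<unsigned> connection_count{0};  // what Threads_connected reports
  static const unsigned max_connections= 151;

  // The counter is bumped before the limit check and dropped only after the
  // error has been sent, so a concurrent SHOW STATUS landing in between can
  // observe max_connections + 2.
  void handle_new_connection()
  {
    if (++connection_count > max_connections + 1)
    {
      // ... send ER_CON_COUNT_ERROR to the client here ...
      --connection_count;   // ... and only then decrement
    }
  }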
ha_partition: Remove redundant 'virtual' keywords and add
missing 'override'.
FIXME: handler::table_type() is not declared virtual, yet ha_partition
and ha_sequence are seemingly trying to override it.
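A standalone illustration of why the FIXME matters: 'override' compiles
only against virtual functions, so adding it exposes that such methods
merely hide the base function (the names below are stand-ins):

  struct base_handler
  {
    const char *table_type() const { return "base"; }   // note: not virtual
    virtual ~base_handler() = default;
  };

  struct derived_handler : base_handler
  {
    // const char *table_type() const override;  // would not compile:
    // "only virtual member functions can be marked 'override'"
    const char *table_type() const { return "derived"; }  // hides, not overrides
  };

  // Through a base pointer the base version is still called:
  //   base_handler *h= new derived_handler;
  //   h->table_type();   // yields "base", not "derived"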
- The flag ALTER_STORED_COLUMN_TYPE was set while doing varchar extension
  for a partitioned table. If all partitions support
  can_be_converted_by_engine(), the flag should instead be
  ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE (sketched below).
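A hedged, self-contained sketch of that rule; the flag bit values and the
per-partition predicate are hypothetical, only the two flag names come from
the commit:

  #include <cstdint>

  typedef uint64_t alter_flags_t;
  static const alter_flags_t ALTER_STORED_COLUMN_TYPE=           1ULL << 0;  // assumed bit
  static const alter_flags_t ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE= 1ULL << 1;  // assumed bit

  // Downgrade the flag only when every partition's engine can convert the
  // column natively.
  alter_flags_t adjust_for_partitions(alter_flags_t flags,
                                      bool (*part_ok)(unsigned),  // hypothetical check
                                      unsigned n_parts)
  {
    bool all_ok= true;
    for (unsigned i= 0; all_ok && i < n_parts; i++)
      all_ok= part_ok(i);
    if (all_ok && (flags & ALTER_STORED_COLUMN_TYPE))
      flags= (flags & ~ALTER_STORED_COLUMN_TYPE) |
             ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE;
    return flags;
  }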
added cmake checks for the pam_ext.h and pam_appl.h headers
added a check for pam_syslog()
added a pam_syslog() fallback if it doesn't exist (see the sketch below)
all cmake checks are performed from inside the plugin
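A hedged sketch of such a fallback; the HAVE_PAM_SYSLOG guard name is an
assumption about what the cmake check defines:

  #ifndef HAVE_PAM_SYSLOG
  #include <security/pam_appl.h>
  #include <syslog.h>
  #include <stdarg.h>

  /* Minimal pam_syslog() replacement on top of vsyslog(); the pam handle is
     accepted for signature compatibility but unused here. */
  static void pam_syslog(const pam_handle_t *pamh, int priority,
                         const char *fmt, ...)
  {
    (void) pamh;
    va_list args;
    va_start(args, fmt);
    vsyslog(priority, fmt, args);
    va_end(args);
  }
  #endif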
join_cache_level=6+
The patch fixes two similar bugs in the commit 8eeb689e9f
that added multi_range_read support to partitions. The commit opened
a possibility to join a partitioned table using BKA+MRR. However, in some
cases this could lead to wrong results or even crashes.
This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the 'not exists'
  optimization was applied, or
- the joined table was the inner table of a semi-join and the first match
  optimization was applied
The bugs were in the code of the call-back functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another function.
Yet a wrong parameter was passed in this invocation.
The fix was suggested by Sergey Petrunia and it is apparently in line
with the original design.
The corresponding comprehensive test cases demonstrating the problems
caused by the bugs were constructed by me.
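A hedged sketch of the bug class (the struct and field names are
hypothetical; only the callback name and the range_seq_t/range_id_t types
come from the code): each callback is a thin wrapper that must forward the
inner sequence handle saved in the per-partition data, not the outer handle
it received:

  static bool partition_multi_range_key_skip_record(range_seq_t seq,
                                                    range_id_t range_info,
                                                    uchar *rowid)
  {
    Partition_mrr_data *pd= (Partition_mrr_data *) seq;  // hypothetical type
    // The fix amounts to passing the right arguments here: the stored inner
    // handle, rather than echoing the outer 'seq' straight through.
    return pd->inner_seq_if->skip_record(pd->inner_seq, range_info, rowid);
  }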
If an async replication slave thread conflicts with cluster replication,
then the async slave transaction should be BF aborted and, depending on the
state of the async slave transaction execution, potentially also replayed.
There were problems in this BF abort implementation, and the replaying was
not started.
This pull request contains fixes which make sure that if an async slave
thread is marked for abort and replay, it will fully carry out the rollback
and release all locks and resources before starting the replaying. After
replaying, the async slave transaction is treated as successful, so the
slave thread will continue as usual, handling the next replication event.
There is also a new mtr test, galera.galera_slave_replay, which stresses
both a certification failure for an async slave thread and a successful BF
abort followed by replaying.
* The `--defaults-file` option is shown in `--help --verbose` output only
  if it was applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`;
  previously it was treated as a directory with `my.cnf` appended
Part#2: cleanup:
In part 1 of the fix, the DS-MRR implementation would peek into
the JOIN_TAB to get the rowid filter from
table->reginfo.join_tab->rowid_filter.
This doesn't look good from code isolation standpoint (why should a
storage engine assume it is used through a JOIN_TAB?).
Fixed this by storing the 'un-pushed' rowid_filter in the DsMrr_impl
structure. The filter survives across multi_range_read_init() calls.
It is discarded when somebody calls index_end() or rnd_end() and cleans
up the DsMrr_impl.
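A hedged sketch of the shape (member and method names around the real
DsMrr_impl are assumptions): the un-pushed filter lives in the MRR
implementation itself, so the engine no longer reaches into JOIN_TAB:

  class Rowid_filter;                        // defined elsewhere

  class DsMrr_impl
  {
    Rowid_filter *rowid_filter;              // survives multi_range_read_init()
  public:
    DsMrr_impl() : rowid_filter(nullptr) {}
    void set_rowid_filter(Rowid_filter *rf) { rowid_filter= rf; }
    void dsmrr_close()                       // reached from index_end()/rnd_end()
    {
      rowid_filter= nullptr;                 // discard the filter on cleanup
    }
  };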