The SQL thread keeps track of the position in the current relay log from
which to read the next event. This position is not normally used, but a
certain interaction with the IO thread can cause the SQL thread to re-open
the relay log and seek to the stored position.
In parallel replication, there were a couple of places where the position
was not updated. This created a race where a re-open of the relay log could
seek to the wrong position and start re-reading and processing events
already handled once, causing various kinds of problems.
Fix this by moving the position update into a single place in
apply_event_and_update_pos(), which should ensure that the position is
always updated in the parallel replication case.
This problem was found from the testcase of MDEV-10863, but it is logically
a separate problem.
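A minimal sketch of the idea (hypothetical, heavily simplified types; the real
apply_event_and_update_pos() is much more involved): the relay-log read position
is advanced in exactly one place, so a later re-open of the relay log cannot seek
back to a position whose events were already processed.

    #include <cstdint>

    // Simplified stand-ins for the real event and relay-log-info structures.
    struct Event { uint64_t end_pos; };
    struct Relay_log_info { uint64_t event_relay_log_pos; };

    static int apply_event(Event *, Relay_log_info *) { return 0; }  // stub

    static int apply_event_and_update_pos(Event *ev, Relay_log_info *rli)
    {
      int err = apply_event(ev, rli);
      rli->event_relay_log_pos = ev->end_pos;   // the single point of update
      return err;
    }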
This has no functional changes, but it helps avoid merge problems from 10.0
to 10.1. In 10.0, code that checks for parallel replication uses
opt_slave_parallel_threads > 0, but this check needs to be
mi->using_parallel() in 10.1. By using the same check in 10.0 (with
unchanged semantics), merge problems to 10.1 are avoided.
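For illustration only, the kind of accessor the text refers to might look like the
sketch below (the member placement and names are simplified, not the server's):

    // Simplified stand-in; in the real server opt_slave_parallel_threads is a
    // global option, and 10.1's using_parallel() has its own semantics.
    struct Master_info_sketch
    {
      unsigned long opt_slave_parallel_threads;

      // In 10.0 this is just the old check behind the 10.1 name, so callers of
      // mi->using_parallel() merge cleanly between the two branches.
      bool using_parallel() const { return opt_slave_parallel_threads > 0; }
    };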
MDEV-10780 Server crashes in create_tmp_table
MDEV-11265 Access denied when CREATE VIEW v1 AS SELECT DEFAULT(column) FROM t1
Item_default_value and Item_insert_value erroneously derive from Item_field
but forgot to override some methods that apply only to true fields,
so the server code mixes Item_{default|insert}_value instances with real
table fields (i.e. true Item_field) in some cases.
Overriding a few methods to avoid this.
TODO: we should eventually derive Item_default_value (and Item_insert_value)
directly from Item, as they don't really need the entire Item_field,
Item_ident and Item_result_field functionality.
Only the functionality related to the member "Field *field" is actually needed,
such as val_xxx(), is_null(), get_geometry_type(), charset_for_protocol(), etc.
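As a toy illustration of the overriding fix described above (these are not the
server classes; the method name below is invented): a subclass that inherits from
a field-like item must override anything that only makes sense for a true table
field, otherwise generic code keeps treating the instance as a real column.

    struct Item_field_like
    {
      virtual ~Item_field_like() = default;
      // Generic code may use this to decide whether it deals with a real column.
      virtual bool is_real_table_field() const { return true; }
    };

    struct Item_default_value_like : Item_field_like
    {
      // Without such overrides, DEFAULT(column) items are indistinguishable
      // from true fields in the code paths behind MDEV-10780 / MDEV-11265.
      bool is_real_table_field() const override { return false; }
    };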
This occurred when the SQL thread (but not the IO thread) stops while
GTID and parallel replication are used with multiple domain ids in the
GTID position, and is restarted.
In this case, the SQL thread needs to start some way back in the relay log,
applying or skipping events within each replication domain as
appropriate.
The SQL thread starts at the beginning of an old relay log file, and
this position may be in the middle of an event group. The bug was that
such a partial event group could be re-applied, causing replication
corruption.
This patch fixes the issue, by making sure to skip any initial events
that were part of an earlier (already applied) event group.
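A schematic sketch of the skipping step (hypothetical types, not the server code):
events at the start of the old relay log are discarded until the first group
boundary, after which the normal per-domain apply/skip decisions resume.

    #include <vector>

    struct Event { bool starts_group; /* e.g. a GTID event */ };

    static void apply_or_skip_by_gtid(const Event &) {}  // stub for the normal path

    static void replay_from_old_relay_log(const std::vector<Event> &events)
    {
      bool in_partial_group = true;           // we may start mid-group after restart
      for (const Event &ev : events)
      {
        if (in_partial_group)
        {
          if (!ev.starts_group)
            continue;                         // tail of an already applied group
          in_partial_group = false;           // first boundary: stop skipping
        }
        apply_or_skip_by_gtid(ev);
      }
    }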
In the function create_key_parts_for_pseudo_indexes()
the key part structures of pseudo-indexes created for
BLOB fields were set incorrectly.
Also, the key parts for long fields must be 'truncated'
to the maximum length acceptable for key parts.
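For the truncation part, a minimal sketch (the limit used here is an assumed
value, not the server's constant, which depends on engine and version):

    #include <algorithm>
    #include <cstdint>

    static const uint32_t MAX_KEY_PART_LENGTH = 1024;   // assumed for illustration

    // Long fields get their pseudo-index key part length capped at the maximum
    // acceptable for a key part.
    static uint32_t pseudo_key_part_length(uint32_t field_length)
    {
      return std::min(field_length, MAX_KEY_PART_LENGTH);
    }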
1. When a min/max value is provided, the null flag for it must be set to 0
in the bitmap Column_statistics::column_stat_nulls.
2. When the calculation of the selectivity of a range condition
over a column requires the min and max values for the column, we
have to check that these values have actually been provided.
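A schematic sketch of point 2 (hypothetical structure and names): the [min, max]
based selectivity estimate is only computed when both endpoints were actually
collected; otherwise the caller falls back to a default estimate.

    // Stand-in for the column statistics the optimizer consults.
    struct Column_stats_sketch
    {
      bool   has_min;
      bool   has_max;
      double min_value;
      double max_value;
    };

    static bool can_use_min_max(const Column_stats_sketch &s)
    {
      return s.has_min && s.has_max;
    }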
'mysql.proc' doesn't exist.
mysql_rm_db() doesn't seem to expect the 'mysql' database
to be deleted. Checks for that have been added.
Also fixed the bug MDEV-11105 (Table named 'db'
has weird side effect).
The db.opt file is now removed separately.
cherry-pick from 5.7:
commit 6b24763
Author: Manish Kumar <manish.4.kumar@oracle.com>
Date: Tue Mar 27 13:10:42 2012 +0530
BUG#12977988 - ON STOP SLAVE: ERROR READING PACKET FROM SERVER: LOST CONNECTION
TO MYSQL SERVER
BUG#11761457 - ERROR 2013 + "ERROR READING RELAY LOG EVENT" ON STOP SLAVE
Code flow hit an incorrect branch while closing table instances before removal.
This branch expects the thread to hold an open table instance, whereas CREATE OR
REPLACE doesn't actually hold one.
Before CREATE OR REPLACE TABLE it was impossible to hit this condition in
LTM_PRELOCKED mode, thus the problem didn't expose itself during DROP TABLE
or DROP DATABASE.
Fixed by adjusting the condition to take into account LTM_PRELOCKED mode, which
can be set during CREATE OR REPLACE TABLE.
The following directives to ignore warnings were in the PerconaFT build in tokudb.
These generate errors when g++ ... -o xxx.so is used to compile a shared object.
As these don't actually hit any warnings, they have been removed.
* -Wno-ignored-attributes
* -Wno-pointer-bool-conversion
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
The crash is caused by the macro uint3korr() accessing memory (1 byte) past
the end of the allocated page. The macro is written such that it reads 4 bytes
instead of 3 and discards the value of the last byte.
However, it is not always guaranteed that all uint3korr accesses will be
valid (i.e. that the caller allocates an extra byte after the value).
In particular, the tree in Item_func_group_concat does not account for
any extra bytes that it would need for comparison of keys in some cases
(Field_newdate::cmp, Field_medium::cmp).
The fix changes uint3korr so that it does not access the extra byte.
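A sketch of the safer form: the value is assembled from exactly the three bytes
that belong to it (little-endian, as the original macro reads it), so no 4th byte
is ever touched.

    #include <cstdint>

    static inline uint32_t uint3korr_safe(const unsigned char *p)
    {
      return  (uint32_t) p[0]
            | ((uint32_t) p[1] << 8)
            | ((uint32_t) p[2] << 16);
    }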
- don't use stat() for the file size, it does not handle large sizes;
use GetFileSizeEx() instead
- don't use lseek(), it can't handle large files, use _lseeki64() instead.
- Also, switch off OS file buffering for innochecksum on Windows,
to avoid thrashing the file cache.
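A Windows-only sketch of the first two points (error handling trimmed; the helper
names are mine, not innochecksum's):

    #include <windows.h>
    #include <io.h>
    #include <cstdio>
    #include <cstdint>

    // 64-bit file size via the Win32 handle, instead of 32-bit-limited stat().
    static int64_t file_size_64(HANDLE h)
    {
      LARGE_INTEGER size;
      if (!GetFileSizeEx(h, &size))
        return -1;
      return size.QuadPart;
    }

    // 64-bit seek on a CRT file descriptor, instead of 32-bit-limited lseek().
    static int64_t seek_64(int fd, int64_t offset)
    {
      return _lseeki64(fd, offset, SEEK_SET);
    }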
Prior to this patch, the name of the user was read from the environment variable
USER, with a fallback to 'ODBC' if the environment variable is not set.
The name of the environment variable is incorrect on Windows (USERNAME usually
contains the current user's name, while USER is not set), which made the client
always determine the current user as 'ODBC'.
The fix is to use GetUserName() instead.
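A minimal sketch of the replacement (using the narrow-character GetUserNameA()
for brevity; the actual client code may use the wide variant):

    #include <windows.h>
    #include <lmcons.h>   // UNLEN
    #include <string>

    static std::string current_user_name()
    {
      char  buf[UNLEN + 1];
      DWORD len = sizeof(buf);
      if (GetUserNameA(buf, &len))
        return std::string(buf, len - 1);   // len includes the terminating NUL
      return "ODBC";                        // fall back as the old code did
    }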
From https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838914
Fixes CMake so that when building a 32-bit mips binary on a 64-bit
mips machine, the target is not set as 32-bit, which apparently
confused some tests in mroonga.
Null values are now tested using the result set's getObject() method.
modified: storage/connect/JdbcInterface.java
modified: storage/connect/jdbconn.cpp
modified: storage/connect/jdbconn.h