DOWNGRADED FROM 5.6.11 TO 5.6.10
The problem was that the new syntax was not accepted by the previous version.
Fixed by wrapping the new syntax in a version comment
(/*!50611, or /*!50531 in 5.5).
Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also by changing the output of SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
The ALGORITHM is always written into the .frm for KEY [sub]partitioned
tables, but SHOW CREATE TABLE only includes it when it is the non-default
ALGORITHM = 1.
Also notice that 5.5 writes /*!50531 instead of /*!50611, which
makes an upgrade from 5.5 (> 5.5.31) to 5.6 (< 5.6.11) fail!
If one downgrades a fixed version to an earlier release of the same major
version (5.5 or 5.6), bug 14521864 becomes visible again, but unless the
.frm is updated, it will work again after upgrading.
Also fixed so that the .frm version is not updated
if a single partition check passes.
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).
Also, ENUM/SET changed from latin1_ci-based hash calculation to
binary hash between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.
Also, since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on deleting a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, listing
the row's partitioning fields. This avoids the asserts in InnoDB and
also alerts the user that there is a misplaced row. A detailed error
message will be given, including an entry in the error log with the
table name, partition and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it
defaults to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding a scan through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before. This allows manually
upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION;
see the SQL sketch after this list.)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and the table uses KEY [sub]partitioning
and an affected column type,
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
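As referenced in point 4, here is a minimal SQL sketch of points 3 and 4,
reusing the table from the CHECK FOR UPGRADE example above (`test`.`t1`,
KEY-partitioned on column a with 12 partitions):
CHECK TABLE `test`.`t1` FOR UPGRADE;    -- suggests the ALTER below if the table is affected
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 12;
                                        -- only ALGORITHM added: .frm-only change, no rebuild
CHECK TABLE `test`.`t1` MEDIUM;         -- scan all rows, fail on the first misplaced row
REPAIR TABLE `test`.`t1`;               -- move misplaced rows into their correct partitions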
Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
is not set, it is not possible to know whether it contains rows placed
by 5.1 or by 5.5 hashing! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After
the transaction is started, new indexes are added to the
table. Now when we issue an UPDATE statement, the optimizer
chooses an index. When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports
the error code HA_ERR_TABLE_DEF_CHANGED, with a message
stating "insufficient history for index".
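A minimal two-session SQL sketch of the scenario (table, column and index
names are hypothetical):
-- session 1
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- session 2, after the snapshot has been taken
ALTER TABLE t1 ADD INDEX idx_a (a);
-- session 1: the optimizer may now choose the new index
UPDATE t1 SET a = a + 1;   -- InnoDB reports HA_ERR_TABLE_DEF_CHANGED
                           -- ("insufficient history for index")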
This error message is propagated up to the SQL layer, but
the my_error() API is never called. The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).
Hence the following check in Protocol::end_statement()
fails:
  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;
The fix is to backport the fix of bugs 14365043, 11761652
and 11746399.
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
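A small SQL illustration (hypothetical identifiers) of why proper quoting
matters for server-generated statements:
CREATE TABLE `t``1` (`my col` INT);     -- table name contains a backtick, column name a space
INSERT INTO `t``1` (`my col`) VALUES (1);
-- any statement the server generates and writes to the binary log must quote
-- these identifiers the same way, otherwise the slave cannot parse the
-- statement or applies it to the wrong object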
In fill_schema_table_by_open(): free item list before restoring active arena.
sql/sql_show.cc:
Replaced i_s_arena.free_items with DBUG_ASSERT(i_s_arena.free_list == NULL)
(there's nothing to free in that list)
Reason:
This is a regression caused by code refactoring done between
5.0 and 5.1.
Issue:
While executing "Show tables", lex->verbose was checked to avoid opening
FRM files to get the table type. In the case of "Show full tables",
lex->verbose is true to indicate that the table type is required. This
check was present in 5.0 but went missing in >= 5.5.
Fix:
Added the required check to avoid opening FRM files unnecessarily in the
case of "Show tables".
Analysis:
-------------------------------
According to the Manual
(http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
"Column, index, stored routine, and event names are not case sensitive on any
platform, nor are column aliases."
In other words, 'lower_case_table_names' does not affect the behaviour of
those identifiers.
On the other hand, trigger names are case sensitive on some platforms,
and case insensitive on others. 'lower_case_table_names' does not affect
the behaviour of trigger names either.
The bug was that SHOW statements performed a case-sensitive comparison
for stored procedure / stored function / event names.
Fix:
Modified the code so that the comparison is case insensitive for routines
and events in SHOW operations.
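A small SQL illustration (hypothetical routine name):
CREATE PROCEDURE testProc() SELECT 1;
SHOW PROCEDURE STATUS LIKE 'testproc';   -- must match: routine names are not case sensitive
-- the same rule applies to SHOW FUNCTION STATUS and SHOW EVENTS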
FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
The problem was that the metadata locking subsystem introduced
too much overhead for queries to I_S which were processed by
opening only .FRM or .TRG files and had to scan a lot of
tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
The same effect was not observed for similar queries which
performed a full-blown table open in order to fill the I_S table.
The problem stemmed from the fact that when the I_S
implementation opened only the .FRM or .TRG file for each table
processed, it didn't release the metadata lock it had acquired on
the table after finishing its processing. As a result, the list
of acquired metadata locks kept growing until the end of the
statement. Since acquisition of each new lock required a
search in the list of already acquired locks, performance
degraded.
The same effect is not observed when the I_S implementation
performs a full-blown table open for each table being
processed, as in that case the metadata lock on the
table is released right after the table is processed.
This fix addresses the problem by ensuring that the I_S
implementation releases the metadata lock after processing
the table, both when a full-blown table open is performed and
when only the .FRM or .TRG file is read.
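The kind of statement affected, for illustration:
SELECT COUNT(*) FROM INFORMATION_SCHEMA.TRIGGERS;
-- filled by reading only .TRG files; before the fix one metadata lock per
-- scanned table accumulated until the end of the statement, so acquiring each
-- new lock had to search an ever longer lock list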
mysql-test/r/information_schema.result:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
mysql-test/t/information_schema.test:
Added coverage for bug #12828477 - "MDL SUBSYSTEM CREATES BIG
OVERHEAD FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
sql/sql_show.cc:
Changed fill_schema_table_from_frm() to release the metadata lock
it has acquired after processing the .FRM or .TRG file for a
table.
Without this step the metadata locks acquired for each table
processed accumulate. When a lot of tables are
processed by an I_S query this results in a
transaction with too many metadata locks, and the
performance of acquiring each new lock degrades.
when selecting from I_S and views exist, in SP.
Symptoms: re-execution of a prepared statement (or a statement in a stored
routine) which read from one of the I_S tables and, in order to fill
this I_S table, had to open a view led to increasing memory consumption.
What happened in this situation was that during view opening for the
purpose of filling I_S, view-related structures (like its
LEX) were allocated on the persistent MEM_ROOT of the prepared statement
(or stored routine). Since this MEM_ROOT is not freed until prepared
statement deallocation (or expulsion of the stored routine from the cache)
and the code responsible for filling I_S is not able to reuse the results
of view opening from previous executions, this allocation ended up
hogging memory.
This patch solves the problem by ensuring that when a view is opened
for the purpose of I_S filling, all its structures are allocated on the
non-persistent runtime MEM_ROOT. This is achieved by activating a
temporary Query_arena bound to this MEM_ROOT.
Since this step makes it impossible to link view structures into the
LEX of our prepared statement (or stored routine statement), this
patch also changes the code filling the I_S table to install a proxy LEX
before trying to open a view or a table. Consequently, some code
which was responsible for backing up/restoring parts of the LEX when a
view/table was opened during filling of an I_S table became redundant
and was removed.
This patch doesn't contain test case for this bug as it is hard
to test memory hogging in our test suite.
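A minimal SQL sketch of the affected pattern (hypothetical view and
statement names):
CREATE VIEW v1 AS SELECT 1 AS c;
PREPARE ps FROM 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.VIEWS';
EXECUTE ps;
EXECUTE ps;   -- before the fix, each execution opened the view on the prepared
              -- statement's persistent MEM_ROOT, so memory grew with every EXECUTE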
In sql_class.cc, 'row_count', of type 'ha_rows', was used as last argument for
ER_TRUNCATED_WRONG_VALUE_FOR_FIELD which is
"Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld".
So 'ha_rows' was used as 'long'.
On SPARC32 Solaris builds, 'long' is 4 bytes and 'ha_rows' is 'longlong' i.e. 8 bytes.
So the printf-like code was reading only the first 4 bytes.
Because the CPU is big-endian, 1LL is 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01
so the first four bytes yield 0. So the warning message had "row 0" instead of
"row 1" in test outfile_loaddata.test:
-Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 1
+Warning 1366 Incorrect string value: '\xE1\xE2\xF7' for column 'b' at row 0
All error-messaging functions which internally invoke some printf-like function
are potential candidates for such mistakes.
One apparently easy way to catch such mistakes is to use
ATTRIBUTE_FORMAT (from my_attribute.h).
But this works only when call site has both:
a) the format as a string literal
b) the types of arguments.
So:
func(ER(ER_BLAH), 10);
will silently not be checked, because ER(ER_BLAH) is not known at
compile time (it is known at run-time, and depends on the chosen
language).
And
func("%s", a va_list argument);
has the same problem, as the *real* type of arguments is not
known at this site at compile time (it's known in some caller).
Moreover,
func(ER(ER_BLAH));
though possibly correct (if ER(ER_BLAH) has no '%' markers), will not
compile (gcc says "error: format not a string literal and no format
arguments").
Consequences:
1) ATTRIBUTE_FORMAT is here added only to functions which in practice
take "string literal" formats: "my_error_reporter" and "print_admin_msg".
2) it cannot be added to the other functions: my_error(),
push_warning_printf(), Table_check_intact::report_error(),
general_log_print().
To do a one-time check of functions listed in (2), the following
"static code analysis" has been done:
1) replace
my_error(ER_xxx, arguments for substitution in format)
with the equivalent
my_printf_error(ER_xxx,ER(ER_xxx), arguments for substitution in
format),
so that we have ER(ER_xxx) and the arguments *in the same call site*
2) add ATTRIBUTE_FORMAT to push_warning_printf(),
Table_check_intact::report_error(), general_log_print()
3) replace ER(xxx) with the hard-coded English text found in
errmsg.txt (like: ER(ER_UNKNOWN_ERROR) is replaced with
"Unknown error"), so that a call site has the format as string literal
4) this way, ATTRIBUTE_FORMAT can effectively do its job
5) compile, fix errors detected by ATTRIBUTE_FORMAT
6) revert steps 1-2-3.
The present patch has no compiler error when submitted again to the
static code analysis above.
It cannot catch all problems though: see Field::set_warning(), in
which a call to push_warning_printf() has a variable as the error code
(thus, not replaceable by a string literal); I checked set_warning() calls
by hand though.
See also WL 5883 for one proposal to avoid such bugs from appearing
again in the future.
The issues fixed in the patch are:
a) mismatch in types (like 'int' passed to '%ld')
b) more arguments passed than specified in the format.
This patch resolves mismatches by changing the type/number of arguments,
not by changing error messages of sql/share/errmsg.txt. The latter would be wrong,
per the following old rule: errmsg.txt must be as stable as possible; no insertions
or deletions of messages, no changes of type or number of printf-like format specifiers,
are allowed, as long as the change impacts a message already released in a GA version.
If this rule is not followed:
- Connectors, which use error message numbers, will be confused (by insertions/deletions
of messages)
- using errmsg.sys of MySQL 5.1.n with mysqld of MySQL 5.1.(n+1)
could produce wrong messages or crash; such usage can easily happen if
installing 5.1.(n+1) while /etc/my.cnf still has --language=/path/to/5.1.n/xxx;
or if copying mysqld from 5.1.(n+1) into a 5.1.n installation.
When fixing b), I have verified that the superfluous arguments were not used in the format
in the first 5.1 GA (5.1.30 'bteam@astra04-20081114162938-z8mctjp6st27uobm').
Had they been used, then passing them today, even if the message doesn't use them
anymore, would have been necessary, as explained above.
include/my_getopt.h:
this function pointer is used only with "string literal" formats, so we can add
ATTRIBUTE_FORMAT.
mysql-test/collections/default.experimental:
test should pass now
sql/derror.cc:
by having a format as string literal, ATTRIBUTE_FORMAT check becomes effective.
sql/events.cc:
Change justified by the following excerpt from sql/share/errmsg.txt:
ER_EVENT_SAME_NAME
eng "Same old and new event name"
ER_EVENT_SET_VAR_ERROR
eng "Error during starting/stopping of the scheduler. Error code %u"
sql/field.cc:
ER_TOO_BIG_SCALE 42000 S1009
eng "Too big scale %d specified for column '%-.192s'. Maximum is %lu."
ER_TOO_BIG_PRECISION 42000 S1009
eng "Too big precision %d specified for column '%-.192s'. Maximum is %lu."
ER_TOO_BIG_DISPLAYWIDTH 42000 S1009
eng "Display width out of range for column '%-.192s' (max = %lu)"
sql/ha_ndbcluster.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
(sizeof() returns size_t)
sql/ha_ndbcluster_binlog.cc:
Too many arguments for:
ER_GET_ERRMSG
eng "Got error %d '%-.100s' from %s"
Patch by Jonas Oreland.
sql/ha_partition.cc:
print_admin_msg() is used only with a literal as format, so ATTRIBUTE_FORMAT
works.
sql/handler.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
(sizeof() returns size_t)
sql/item_create.cc:
ER_TOO_BIG_SCALE 42000 S1009
eng "Too big scale %d specified for column '%-.192s'. Maximum is %lu."
ER_TOO_BIG_PRECISION 42000 S1009
eng "Too big precision %d specified for column '%-.192s'. Maximum is %lu."
'c_len' and 'c_dec' are char*, passed as %d !! We don't know their value
(as strtoul() failed), but they are likely big, so we use INT_MAX.
'len' is ulong.
sql/item_func.cc:
ER_WARN_DATA_OUT_OF_RANGE 22003
eng "Out of range value for column '%s' at row %ld"
ER_CANT_FIND_UDF
eng "Can't load function '%-.192s'"
sql/item_strfunc.cc:
ER_TOO_BIG_FOR_UNCOMPRESS
eng "Uncompressed data size too large; the maximum size is %d (probably, length of uncompressed data was corrupted)"
max_allowed_packet is ulong.
sql/mysql_priv.h:
sql_print_message_func is a function _pointer_.
sql/sp_head.cc:
ER_SP_RECURSION_LIMIT
eng "Recursive limit %d (as set by the max_sp_recursion_depth variable) was exceeded for routine %.192s"
max_sp_recursion_depth is ulong
sql/sql_acl.cc:
ER_PASSWORD_NO_MATCH 42000
eng "Can't find any matching row in the user table"
ER_CANT_CREATE_USER_WITH_GRANT 42000
eng "You are not allowed to create a user with GRANT"
sql/sql_base.cc:
ER_NOT_KEYFILE
eng "Incorrect key file for table '%-.200s'; try to repair it"
ER_TOO_MANY_TABLES
eng "Too many tables; MySQL can only use %d tables in a join"
MAX_TABLES is size_t.
sql/sql_binlog.cc:
ER_UNKNOWN_ERROR
eng "Unknown error"
sql/sql_class.cc:
ER_TRUNCATED_WRONG_VALUE_FOR_FIELD
eng "Incorrect %-.32s value: '%-.128s' for column '%.192s' at row %ld"
WARN_DATA_TRUNCATED 01000
eng "Data truncated for column '%s' at row %ld"
sql/sql_connect.cc:
ER_HANDSHAKE_ERROR 08S01
eng "Bad handshake"
ER_BAD_HOST_ERROR 08S01
eng "Can't get hostname for your address"
sql/sql_insert.cc:
ER_WRONG_VALUE_COUNT_ON_ROW 21S01
eng "Column count doesn't match value count at row %ld"
sql/sql_parse.cc:
ER_WARN_HOSTNAME_WONT_WORK
eng "MySQL is started in --skip-name-resolve mode; you must restart it without this switch for this grant to work"
ER_TOO_HIGH_LEVEL_OF_NESTING_FOR_SELECT
eng "Too high level of nesting for select"
ER_UNKNOWN_ERROR
eng "Unknown error"
sql/sql_partition.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
sql/sql_plugin.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
sql/sql_prepare.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
ER_UNKNOWN_STMT_HANDLER
eng "Unknown prepared statement handler (%.*s) given to %s"
length value (for '%.*s') must be 'int', per the doc of printf()
and the code of my_vsnprintf().
sql/sql_show.cc:
ER_OUTOFMEMORY HY001 S1001
eng "Out of memory; restart server and try again (needed %d bytes)"
sql/sql_table.cc:
ER_TOO_BIG_FIELDLENGTH 42000 S1009
eng "Column length too big for column '%-.192s' (max = %lu); use BLOB or TEXT instead"
sql/table.cc:
ER_NOT_FORM_FILE
eng "Incorrect information in file: '%-.200s'"
ER_COL_COUNT_DOESNT_MATCH_PLEASE_UPDATE
eng "Column count of mysql.%s is wrong. Expected %d, found %d. Created with MySQL %d, now running %d. Please use mysql_upgrade to fix this error."
table->s->mysql_version is ulong.
sql/unireg.cc:
ER_TOO_LONG_TABLE_COMMENT
eng "Comment for table '%-.64s' is too long (max = %lu)"
ER_TOO_LONG_FIELD_COMMENT
eng "Comment for field '%-.64s' is too long (max = %lu)"
ER_TOO_BIG_ROWSIZE 42000
eng "Row size too large. The maximum row size for the used table type, not counting BLOBs, is %ld. You have to change some columns to TEXT or BLOBs"
result set when SQLEXCEPTION is active.
The problem was in the hackish THD::no_warnings_for_error attribute.
When it was set, an error was not written to Warning_info -- only the
Diagnostics_area state was changed. That means the Diagnostics_area
might contain an error state which is not present in Warning_info.
The user-visible problem was that in some cases SHOW WARNINGS
returned an empty result set (i.e. there were no warnings) while
the previous SQL statement had failed. According to the MySQL
protocol, errors must be present in the warning list.
The main idea of this patch is to remove THD::no_warnings_for_error.
There were a few places where it was used:
- sql_admin.cc, handling of REPAIR TABLE USE_FRM.
- sql_show.cc, when calling fill_schema_table_from_frm().
- sql_show.cc, when calling fill_table().
The fix is to either use internal-error-handlers, or to use
temporary Warning_info storing warnings, which might be ignored.
This patch is needed to fix Bug 11763162 (55843).
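For illustration, one of the affected paths (REPAIR TABLE ... USE_FRM, with a
hypothetical table name; the exact statements in the original bug may differ):
REPAIR TABLE t1 USE_FRM;   -- suppose this statement fails
SHOW WARNINGS;             -- must list the error; with THD::no_warnings_for_error
                           -- set, the result set could previously come back empty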
on lctn2 systems
There was a local variable in get_all_tables() to store the
"original" value of the database name, as it can get lowercased
depending on the lower_case_table_names value.
get_all_tables() iterates over database names and for each
database iterates over the tables in it.
The "original" db name was assigned in the table names loop.
Thus the first table is ok, but the second and subsequent tables
get the lowercased name from processing the first table.
Fixed by moving the assignment of the original database name
from the inner (table name) to the outer (database name) loop.
Test suite added.
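Illustration (hypothetical mixed-case schema name, on a system with
lower_case_table_names=2):
SELECT TABLE_SCHEMA, TABLE_NAME
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'MixedCase';
-- before the fix, the first table of the schema was reported with the original
-- name but the second and subsequent tables with the lowercased one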
(aka BUG#11766883)
- fix review comments
- Rewrite last usage of handler::get_tablespace_name to use
table->s->tablespace directly
- Remove(revert) the addition of default implementation for
handler::get_tablespace_name
- Add comments describing the new TABLE_SHARE members default_storage_media
and tablespace
- Fix usage of incorrect mask for column_format bits, i.e. COLUMN_FORMAT_MASK
- Add new "format section" in the extra data segment with additional table and
column properties. This was originally introduced in 5.1.20-based MySQL Cluster
- Remove hardcoded STORAGE DISK for the table and instead
output the real storage format used. Keep both TABLESPACE
and STORAGE inside the same version guard.
- Implement a default version of handler::get_tablespace_name() since the
tablespace is now available in the share and it is unnecessary for each handler
to implement it (the function could actually be removed entirely now).
- Add test for combinations of TABLESPACE and STORAGE with CREATE TABLE
and ALTER TABLE
- Add test to show that 5.5 now can read a .frm file created by MySQL Cluster
7.0.22. Although it does not yet show the column level attributes, they are read.
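For illustration, a hedged SQL sketch of the new output (assumes an NDB
disk-data tablespace ts1 already exists; names are hypothetical):
CREATE TABLE t1 (a INT) TABLESPACE ts1 STORAGE DISK ENGINE=NDBCLUSTER;
SHOW CREATE TABLE t1;
-- TABLESPACE and STORAGE are printed inside the same version-comment guard,
-- and STORAGE now reflects the real storage format instead of a hardcoded
-- STORAGE DISK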
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
cmake/os/SunOS.cmake:
Remove TARGET_OS_SOLARIS
config.h.cmake:
Remove TARGET_OS_SOLARIS
Add PTHREAD_ONCE_INITIALIZER
configure.cmake:
Add function for testing whether we need { PTHREAD_ONCE_INIT } rather than PTHREAD_ONCE_INIT
include/my_pthread.h:
Use PTHREAD_ONCE_INITIALIZER if set by cmake.
include/mysql/psi/mysql_file.h:
Include my_global.h first, to get correct platform definitions.
mysys/ptr_cmp.c:
Hide the unused static functions in #ifdef's on solaris.
Use __sun (defined by both gcc and SunPro cc) rather than TARGET_OS_SOLARIS
sql/my_decimal.cc:
Include my_global.h first, to get correct platform definitions.
sql/mysqld.cc:
Fix signed/unsigned comparison warning.
sql/sql_audit.h:
Include my_global.h first, to get correct platform definitions.
sql/sql_plugin.h:
Include my_global.h first, to get correct platform definitions.
sql/sql_show.cc:
Fix: warning: cast from pointer to integer of different size
sql/sys_vars.h:
Use reinterpret_cast rather than c-style cast.
storage/perfschema/pfs_instr.cc:
Include my_global.h first, to get correct platform definitions.
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem where a nested materialized derived table is used
before being populated, leading to an incorrect result
We have several modes in which subquery evaluation should be disabled.
The reasons for disabling differ. The evaluation may be
useless, as in the case of 'CREATE VIEW'
or 'PREPARE stmt'; subquery evaluation must be disabled
if tables are not locked yet, as happens in bug#54475; or
too-early evaluation of subqueries can lead to a wrong result,
as happened in Bug#19077.
The main problem is that if subquery items are treated as const
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:
1. PREPARE stmt;
We use UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set
subquery becomes non-const and evaluation does not happen.
2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
process FRM files
We use LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields() and ::fix_length_and_dec()
implementations which use Item::val_...() calls for const items.
3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too-early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
Checking this field in appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 only fixes the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.
Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;
4. Derived tables with subquery where the subquery
is evaluated before table locking (Bug#54475, Bug#52157)
The suggested solution is the following:
-Introduce new field LEX::context_analysis_only with the following
possible flags:
#define CONTEXT_ANALYSIS_ONLY_PREPARE 1
#define CONTEXT_ANALYSIS_ONLY_VIEW 2
#define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clean these flags when we perform
context analysis operation
-Item_subselect::const_item() returns a
result depending on LEX::context_analysis_only.
If context_analysis_only is set then we return
FALSE, which means that the subquery is non-const.
As all subquery types are wrapped by Item_subselect,
this allows us to make a subquery non-const when
necessary.
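For illustration, statements where subqueries must not be evaluated during
context analysis (hypothetical names):
CREATE TABLE t1 (a INT);
PREPARE stmt FROM 'SELECT (SELECT MAX(a) FROM t1)';      -- mode 1: not evaluated at PREPARE time
CREATE VIEW v1 AS SELECT (SELECT MAX(a) FROM t1) AS m;   -- mode 2: not evaluated during CREATE VIEW
-- with LEX::context_analysis_only set, Item_subselect::const_item() returns
-- FALSE, so these subqueries are only executed when the statement actually runs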
mysql-test/r/derived.result:
test case
mysql-test/r/multi_update.result:
test case
mysql-test/r/view.result:
test case
mysql-test/suite/innodb/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb/t/innodb_multi_update.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_multi_update.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_multi_update.test:
test case
mysql-test/t/derived.test:
test case
mysql-test/t/multi_update.test:
test case
mysql-test/t/view.test:
test case
sql/item.cc:
--removed unnecessary code
sql/item_cmpfunc.cc:
--removed unnecessary checks
--THD::is_context_analysis_only() is replaced with LEX::is_ps_or_view_context_analysis()
sql/item_func.cc:
--refactored context analysis checks
sql/item_row.cc:
--removed unnecessary checks
sql/item_subselect.cc:
--removed unnecessary code
--added DBUG_ASSERT into Item_subselect::exec()
which asserts that subquery execution can not happen
if LEX::context_analysis_only is set, i.e. at context
analysis stage.
--Item_subselect::const_item()
Return FALSE if LEX::context_analysis_only is set.
It prevents subquery evaluation in ::fix_fields &
::fix_length_and_dec at context analysis stage.
sql/item_subselect.h:
--removed unnecessary code
sql/mysql_priv.h:
--Added new set of flags.
sql/sql_class.h:
--removed unnecessary code
sql/sql_derived.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_lex.cc:
--init LEX::context_analysis_only field
sql/sql_lex.h:
--New LEX::context_analysis_only field
sql/sql_parse.cc:
--removed unnecessary code
sql/sql_prepare.cc:
--removed unnecessary code
--added LEX::context_analysis_only initialization/cleanup
sql/sql_select.cc:
--refactored context analysis checks
sql/sql_show.cc:
--added LEX::context_analysis_only initialization/cleanup
sql/sql_view.cc:
--added LEX::context_analysis_only initialization/cleanup
metadata"
Improved error handling such that queries against Information_Schema.Tables won't
fail if a federated table can't make a remote connection.
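For illustration (hypothetical FEDERATED table whose remote server is
unreachable):
SELECT TABLE_NAME, TABLE_COMMENT
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_NAME = 'fed_t1';
SHOW WARNINGS;
-- the query no longer fails: the connection error is pushed as a warning and
-- its text is stored in the TABLE_COMMENT column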
mysql-test/r/lock_multi.result:
Updated with warnings that were previously masked.
mysql-test/r/mdl_sync.result:
Updated with warnings that were previously masked.
mysql-test/r/merge.result:
Updated with warnings that were previously masked.
mysql-test/r/show_check.result:
Updated with warnings that were previously masked.
mysql-test/r/view.result:
Updated with warnings that were previously masked.
mysql-test/suite/federated/federated_bug_35333.result:
New test results for bug#35333
mysql-test/suite/federated/federated_bug_35333.test:
New test for bug#35333
sql/sql_show.cc:
If get_schema_tables_record() encounters an error, push a warning,
set the TABLE COMMENT column with the error text, and clear the
error so that the operation can continue.
Improved error handling such that queries against Information_Schema.Tables won't
fail if a Federated table is unable to connect to remote host.
sql/sql_show.cc:
If handler::info() fails, save the error text in the TABLE_COMMENT column and clear the error.
metadata"
Improved error handling such that queries against Information_Schema.Tables won't
fail if a federated table can't make a remote connection.
mysql-test/r/merge.result:
Updated with warnings that were previously masked.
mysql-test/r/show_check.result:
Updated with warnings that were previously masked.
mysql-test/r/view.result:
Updated with warnings that were previously masked.
sql/sql_show.cc:
If get_schema_tables_record() encounters an error, push a warning,
set the TABLE COMMENT column with the error text, and clear the
error so that the operation can continue.
mysql-test/suite/sys_vars/t/shared_memory_base_name_basic.test:
The server shared memory name is located in the server's
temporary directory, not in the mysqltest one.
sql/sql_show.cc:
*/ ends a comment, add space to avoid problems.
Problem: Extended characters outside of the ASCII range were not displayed
properly in SHOW PROCESSLIST, because thd_info->query was always sent as
system_character_set (utf8). This was wrong, because the query buffer
is never converted to utf8 - it always has the client character set.
Fix: send the query buffer using the query character set.
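For illustration (hypothetical statement; any client character set other than
utf8 shows the effect):
-- connection 1
SET NAMES latin1;
SELECT 'non-ASCII literal: café';
-- connection 2, while the statement above is running
SHOW PROCESSLIST;   -- the Info column is now handled using the query's own
                    -- (client) character set, so the extended characters
                    -- display correctly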
@ sql/sql_class.cc
@ sql/sql_class.h
Introducing a new class CSET_STRING, a LEX_STRING with character set.
Adding set_query(&CSET_STRING)
Adding reset_query(), to use instead of set_query(0, NULL).
@ sql/event_data_objects.cc
Using reset_query()
@ sql/log_event.cc
Using reset_query()
Adding charset argument to set_query_and_id().
@ sql/slave.cc
Using reset_query().
@ sql/sp_head.cc
Changing backing up and restore code to use CSET_STRING.
@ sql/sql_audit.h
Using CSET_STRING.
In the "else" branch it's OK not to use
global_system_variables.character_set_client.
&my_charset_latin1, which is set in constructor, is fine
(verified with Sergey Vojtovich).
@ sql/sql_insert.cc
Using set_query() with proper character set: table_name is utf8.
@ sql/sql_parse.cc
Adding character set argument to set_query_and_id().
(This is the main point where thd->charset() is stored
into thd->query_string.cs, for use in "SHOW PROCESSLIST".)
Using reset_query().
@ sql/sql_prepare.cc
Storing client character set into thd->query_string.cs.
@ sql/sql_show.cc
Using CSET_STRING to fetch and send charset-aware query information
from threads.
@ storage/myisam/ha_myisam.cc
Using set_query() with proper character set: table_name is utf8.
@ mysql-test/r/show_check.result
@ mysql-test/t/show_check.test
Adding tests