The minimum value differs depending on the OS and mysqld build, so the test failed sporadically.
The check of this value has been changed from a comparison against concrete values to a check that the value lies within a range around the expected value.
which were determined by the server depending on the OS. The solution is to disable warnings in general.
The values were previously checked only on Linux and Windows. The check has now been changed to verify
ranges (rather than concrete values) around the expected (set) values.
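A minimal sketch of the kind of range check now used; the variable name and bounds are illustrative, not the ones from the affected tests:

  SELECT @@global.table_open_cache BETWEEN 64 AND 16384
         AS value_within_expected_range;
  -- the test only requires the value to fall in a range around the expected
  -- value instead of matching one exact, OS-dependent number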
The VARIABLE_VALUE field is reduced to 1024 characters
(affected I_S tables: GLOBAL_VARIABLES, SESSION_VARIABLES,
GLOBAL_STATUS, SESSION_STATUS).
The only variable whose value can be longer than 1024 characters is
init_connect; its value is truncated with a warning.
Additional fix:
Added a WHERE condition filter that speeds up queries whose
WHERE clause contains expressions using the VARIABLE_NAME
field.
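An illustrative query of the kind the added filter speeds up (the variable name is arbitrary):

  SELECT VARIABLE_NAME, VARIABLE_VALUE
    FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES
   WHERE VARIABLE_NAME = 'init_connect';
  -- the condition on VARIABLE_NAME can now be used to filter the rows
  -- instead of materializing every variable first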
Changed the 'charset' and 'collation' field lengths from 64 to MY_CS_NAME_SIZE (32)
in the tables:
SCHEMATA, TABLES, COLUMNS, CHARACTER_SETS,
COLLATIONS, COLLATION_CHARACTER_SET_APPLICABILITY
Problem 1: BUG#36625: rpl_redirect doesn't do anything useful. It tests an
obsolete feature that was never fully implemented.
Fix 1: Remove rpl_redirect.
Problem 2: rpl_innodb_bug28430 and rpl_flushlog_loop are disabled even though the
bugs for which they were disabled have been fixed.
Fix 2: Re-enable rpl_innodb_bug28430 and rpl_flushlog_loop.
The partitioning clause was printed as a single very long line, which is
hard for a human to interpret. This patch breaks the partitioning
syntax into one line for the partitioning type, and one line per
partition/subpartition.
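A rough sketch of the new layout (the table and partitions are illustrative; exact output may differ):

  CREATE TABLE t1 (a INT)
  /*!50100 PARTITION BY RANGE (a)
  (PARTITION p0 VALUES LESS THAN (10) ENGINE = MyISAM,
   PARTITION p1 VALUES LESS THAN (20) ENGINE = MyISAM,
   PARTITION p2 VALUES LESS THAN MAXVALUE ENGINE = MyISAM) */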
breaks auto increment
The auto_increment value was not initialized if
the first statement after opening a table was
an 'UPDATE'.
The solution is to check whether the value has been initialized and, if it
has not, initialize it before trying to increase it in the UPDATE.
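A minimal sketch of the affected scenario; the engine, table, and values are illustrative:

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, val INT);
  INSERT INTO t1 (val) VALUES (1);
  FLUSH TABLES;                        -- force the table to be reopened
  UPDATE t1 SET id = 10 WHERE id = 1;  -- first statement after reopening;
                                       -- the counter must be initialized
                                       -- before it is increased here
  INSERT INTO t1 (val) VALUES (2);     -- must not reuse an id at or below 10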
In certain situations, a scan of the table will return the error
code HA_ERR_RECORD_DELETED, and this error code is not
correctly caught in the Rows_log_event::find_row() function, which
causes an error to be returned for this case.
This patch fixes the problem by adding code that either ignores the
record and continues with the next one, in the event of a table
scan, or changes the error code to HA_ERR_KEY_NOT_FOUND, in the event
that a key lookup is attempted.
The code that reads the value of a system variable was extracting the value
at PREPARE stage and substituting it (as a constant) into the parse tree.
Note that this must be a reversible transformation, i.e. it must be reversed before
each re-execution.
Unfortunately this cannot be reliably done using the current code, because there are
other non-reversible source tree transformations that can interfere with this
reversible transformation.
Fixed by not resolving the value at PREPARE, but at EXECUTE (as the rest of the
functions operate). Added a cache of the value (so that it's constant throughout
the execution of the query). Note that the cache also caches NULL values.
Updated an obsolete related test suite (variables-big) and the code to test the
result type of system variables (as per bug 74).
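A minimal sketch of the resulting behavior; the variable name is illustrative:

  PREPARE stmt FROM 'SELECT @@global.max_join_size';
  SET GLOBAL max_join_size = 1000000;
  EXECUTE stmt;  -- the value is read at EXECUTE time, so this reflects the
                 -- SET above instead of the value seen at PREPARE time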
The failure was caused by executing a CREATE-SELECT statement that creates a
table in a database other than the current one. In row-based logging, the
CREATE statement was written to the binary log without the database, hence
creating the table in the wrong database, causing the following inserts to
fail since the table didn't exist in the given database.
Fixed the bug by adding a parameter to store_create_info() that will make
the function print the database name before the table name and used that
in the calls that write the CREATE statement to the binary log. The database
name is only printed if it is different from the currently selected database.
The output of SHOW CREATE TABLE has not changed and is still printed without
the database name.
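A minimal sketch of the failing scenario; database and table names are illustrative:

  CREATE DATABASE db1;
  CREATE DATABASE db2;
  USE db1;
  SET SESSION binlog_format = 'ROW';
  CREATE TABLE db2.t1 SELECT 1 AS a;
  -- before the fix, the logged CREATE statement omitted the database name,
  -- so the slave created t1 in its current database and the row events
  -- for db2.t1 then failed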
ha_statistic_increment for rpl_temporary
Problem: in some cases the master sends a special event to a reconnecting
slave to preserve the slave's temporary tables (see #17284), but those tables
still hold references to the "old" SQL slave thread and use them to access
the thread's data.
Fix: point the temporary tables' thread references to the actual SQL slave
thread in such cases.
Adds --general-log-file, --slow-query-log-file command-
line options to match system variables of the same names.
Deprecates the --log and --log-slow-queries command-line options
and the log and log_slow_queries system variables for v7.0; they
are superseded by general_log/general_log_file and
slow_query_log/slow_query_log_file, respectively.
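Illustrative use of the corresponding system variables (paths are arbitrary):

  SET GLOBAL general_log_file    = '/var/log/mysql/general.log';
  SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
  SET GLOBAL general_log    = ON;
  SET GLOBAL slow_query_log = ON;
  -- --general-log-file and --slow-query-log-file set the same file names
  -- from the command line at server startup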
case and then select
Problem was that the archive share was using a case-insensitive
character set when comparing table names.
Solution was to use a case-sensitive character set when the table
names are case sensitive.
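A minimal sketch of the scenario, assuming a file system where table names are case sensitive (lower_case_table_names = 0); names are illustrative:

  CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
  CREATE TABLE T1 (a INT) ENGINE=ARCHIVE;
  SELECT * FROM T1;  -- must resolve to T1's archive share, not t1's, now that
                     -- the share compares table names case sensitively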
The test failed originally -- it did not reset binlogging -- for the reason
identified by bug@15580.
However, it can never be run on the embedded platform for another reason:
the embedded server cannot KILL a query.
Comments were added to the test, particularly relating `reset master'
to the mentioned bug.
The Blackhole engine did not support row-based replication
since the delete_row(), update_row(), and the index and range
searching functions were not implemented.
This patch adds row-based replication support for the
Blackhole engine by implementing the two functions mentioned
above, and making the engine pretend that it has found the
correct row to delete or update when executed from the slave
SQL thread by implementing index and range searching functions.
It is necessary to only pretend this for the SQL thread, since
a SELECT executed on the Blackhole engine will otherwise never
return EOF, causing a livelock.
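An illustrative slave-side setup that this makes possible (names are arbitrary):

  -- on the slave, the replicated table uses the Blackhole engine
  CREATE TABLE t1 (id INT PRIMARY KEY, val INT) ENGINE=BLACKHOLE;
  -- with row-based replication, UPDATE and DELETE row events from the master
  -- now apply without error because the engine pretends to find the row when
  -- called from the slave SQL thread; a local SELECT still just returns an
  -- empty result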
is inconsistent
+ several improvements
Details:
- The subtest with assignment of floating point numbers to
DECIMAL parameters in functions and procedures now checks
that the final DECIMAL value is the same as if we assigned
the floating point numbers to columns, user variables etc.
= The impact of math libraries or truncation must be the same
(a sketch follows after this list).
- Remove storage engine variants of this test because the
stored procedure properties tested do not depend on
the storage engine.
Use the fastest storage engine (MEMORY) for any tables
needed.
- Reset the global sort_buffer_size to its startup value.
- Partially improved formatting.
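A rough sketch of the kind of consistency check described in the first item; the function, table, and literal are illustrative:

  CREATE FUNCTION f_dec(p DECIMAL(10,5)) RETURNS DECIMAL(10,5)
  DETERMINISTIC
    RETURN p;
  CREATE TABLE t_dec (c DECIMAL(10,5));
  INSERT INTO t_dec VALUES (0.123456789e0);
  SELECT f_dec(0.123456789e0), c FROM t_dec;
  -- the value that went through the DECIMAL parameter must match the value
  -- stored in the DECIMAL column (same truncation in both paths)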