WL#4203 Reorganize and fix the data dictionary tests of
testsuite funcs_1
because the goal of fixing
Bug#34532 Some funcs_1 tests do not clean up at end of testing
was partially missed.
Some minor additional modifications are for
WL#4304 Cleanup in funcs_1 tests
WHERE f1 < n ignored a row if f1 was an indexed integer column, f1 = TYPE_MAX,
and n = TYPE_MAX + 1. The latter value, when treated as TYPE, overflowed
(obviously). This case was not handled before; it is now, and the affected
tests were fixed accordingly.
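For illustration, a minimal sketch of the kind of query affected, assuming a
hypothetical table t1 with an indexed TINYINT UNSIGNED column f1
(TYPE_MAX = 255, n = 256):

    CREATE TABLE t1 (f1 TINYINT UNSIGNED, KEY (f1));
    INSERT INTO t1 VALUES (255);
    -- Before the fix, the constant 256 overflowed when treated as
    -- TINYINT UNSIGNED for the indexed range scan, so the row with
    -- f1 = 255 was missed; it is now returned.
    SELECT * FROM t1 WHERE f1 < 256;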
After the fix for st_relay_log_info::wait_for_pos(), which
handles the widely used select('master-bin.xxxx', pos) invoked by mysqltest,
there turned out to be four tests that either tried to synchronize while
the slave was stopped or used an incorrect synchronization method, such as
calling `sync_with_master' while the current connection was to the
master itself.
Fixed by correcting the current connection and/or by using the correct
synchronization macro where possible.
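A minimal sketch of the corrected pattern in mysqltest syntax (connection
names are assumptions):

    # Incorrect: calling `sync_with_master' while still connected to the master.
    # Correct: save the master position, switch to the slave connection, then sync ...
    connection master;
    save_master_pos;
    connection slave;
    sync_with_master;

    # ... or use the combined macro directly from the master connection:
    connection master;
    sync_slave_with_master;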
testsuite funcs_1
1. Fix the following bugs
Bug#30440 "datadict" tests (all engines) fail: Character sets depend on configuration
Solution: Test variants charset_collation_* adjusted to different builds
Bug#32603 "datadict" tests (all engines) fail in "community" tree: "PROFILING" table
Solution: Excluding "PROFILING" table from queries
Bug#33654 "slow log" is missing a line
Solution: Unify the content of the fields TABLES.TABLE_ROWS and
STATISTICS.CARDINALITY within result sets
Bug#34532 Some funcs_1 tests do not clean up at end of testing
Solution: DROP objects/reset global server variables modified during testing
+ let tests with a missing implementation end before the loading of tables
Bug#31421 funcs_1: ndb__datadict fails, discrepancy between scripts and expected results
Solution: Cut <engine>__datadict tests into smaller tests + generate new results.
Bug#33599 INFORMATION_SCHEMA.STATISTICS got a new column INDEX_COMMENT: tests fail (2)
Generation of new results during post merge fix
Bug#33600 CHARACTER_OCTET_LENGTH is now CHARACTER_MAXIMUM_LENGTH * 4
Generation of new results during post merge fix
Bug#33631 Platform-specific replace of CHARACTER_MAXIMUM_LENGTH broken by 4-byte encoding
Generation of new results during post merge fix
+ removal of the platform-specific replace routine (no longer needed)
2. Restructure the tests
- Test not more than one INFORMATION_SCHEMA view per test script
- Separate tests of I_S view layout+functionality from content related to the
  always existing databases "information_schema", "mysql" and "test"
- Avoid storage engine related variants of tests which are not sensitive to
  the storage engine at all.
3. Reimplement or add some subtests + cleanup
There is some probability that even the reviewed changeset
- does not fix all the bugs listed above, or
- contains new bugs which show up on platforms other than Linux or on one of
  the various build types
4. The changeset contains fixes resulting from
- one code review
- minor bugs within testing code found after code review (accepted by reviewer)
- problems found during tests with 5.0.56 in build environment
returns wrong results
Casting AVG() to DECIMAL led to incorrect results when the arguments
had a non-DECIMAL type, because in this case
Item_sum_avg::val_decimal() performed the division by the number of
arguments twice.
Fixed by changing Item_sum_avg::val_decimal() to not rely on
Item_sum_sum::val_decimal(), i.e. calculate sum and divide using
DECIMAL arithmetic for DECIMAL arguments, and utilize val_real() with
subsequent conversion to DECIMAL otherwise.
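A minimal sketch of the affected pattern, assuming a hypothetical table t1
with a non-DECIMAL (INT) column a:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2), (3);
    -- The argument is non-DECIMAL, so AVG() is computed via val_real();
    -- before the fix, casting the result to DECIMAL divided by the row
    -- count a second time, returning a too-small value instead of 2.0.
    SELECT CAST(AVG(a) AS DECIMAL(10,1)) FROM t1;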
MASTER_POS_WAIT() return values are different than expected when the server
is not a slave: it returns -1 instead of NULL.
Fixed by correcting st_relay_log_info::wait_for_pos() to return the proper
value when the rli info is not initialized.
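For example, on a server that is not configured as a slave (log file name and
position are placeholders):

    -- Expected to return NULL because the relay log info is not initialized;
    -- before the fix it returned -1.
    SELECT MASTER_POS_WAIT('master-bin.000001', 4, 1);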
It's impossible to determine which test inside mysql_client_test
failed if the log file is overwritten by mysqltest when dumping
the test case results. Redirect mysql_client_test output to a
separate file.
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
Added a new function, test_if_data_home_dir(), which checks that a
path does not contain the mysql data home directory.
Using the mysql data home directory in
DATA DIRECTORY and INDEX DIRECTORY is disallowed.
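A sketch of a statement that is now rejected, assuming the server's data home
directory is /var/lib/mysql (a placeholder path):

    -- Rejected after the fix: both directories point inside the data home.
    CREATE TABLE t1 (a INT) ENGINE = MyISAM
      DATA DIRECTORY = '/var/lib/mysql/somedir'
      INDEX DIRECTORY = '/var/lib/mysql/somedir';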
Assertion `0' failed
If a ROW item is part of an expression that also has
aggregate function calls (COUNT/SUM/AVG...), a
"splitting" with the Item::split_sum_func2 function
is applied to that ROW item.
The current implementation of Item::split_sum_func2
replaces this Item_row with a newly created
Item_aggregate_ref reference to it.
Then the row cache tries to work with the
Item_aggregate_ref object as if it were the Item_row object:
the row cache calls row-emulation methods such as cols and
element_index. Item_aggregate_ref (like its parent
Item_ref) inherits dummy implementations of those
methods from the hierarchy root Item, and calls to
them lead to failed assertions and wrong data
output.
Row-emulation virtual functions (cols, element_index, addr,
check_cols, null_inside and bring_value) of Item_ref have
been overloaded to forward calls to an underlying item
reference.
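A minimal sketch of the kind of expression described above (table and column
names are assumptions, not the original test case):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1);
    -- The ROW item contains an aggregate, so Item::split_sum_func2() wraps it
    -- in an Item_aggregate_ref; the row comparison then calls row-emulation
    -- methods (cols, element_index, ...) on that reference.
    SELECT ROW(1, MAX(a)) = ROW(1, 1) FROM t1;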
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was introduced
to avoid replication problems (e.g. replicating the statement with a
string argument would cause a parse failure on the slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and to use this converted value when persisting the query to the log.
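A sketch of the usage this enables (statement and variable names are
placeholders):

    PREPARE stmt FROM 'SELECT 1 LIMIT ?';
    SET @lim = '5';            -- non-integer argument, now converted to an integer
    EXECUTE stmt USING @lim;
    DEALLOCATE PREPARE stmt;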
NAME_CONST('whatever', -1) * MAX(whatever) bombed since -1 was
not seen as a constant, but as FUNCTION_UNARY_MINUS(constant),
while we were at the same time pretending it was a basic const
item. This confused the aggregate handlers in exciting ways.
We now make NAME_CONST() behave more consistently.
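A minimal sketch of the affected expression, assuming a hypothetical table t1
with an INT column a:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    -- The negative literal wrapped in NAME_CONST() is now treated as a basic
    -- constant, so mixing it with an aggregate no longer confuses the
    -- aggregate handlers.
    SELECT NAME_CONST('wrapper', -1) * MAX(a) FROM t1;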
The problem was a double free of the Unique member of Item_func_group_concat.
This was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
documentation
While the manual mentions FRAC_SECOND only for the TIMESTAMPADD()
function, it was also possible to use FRAC_SECOND with DATE_ADD(),
DATE_SUB() and +/- INTERVAL.
Fixed the parser to match the manual, i.e. using FRAC_SECOND for
anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a
syntax error.
Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD/
TIMESTAMPDIFF and marks FRAC_SECOND as deprecated.
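For example:

    -- Still accepted (FRAC_SECOND is valid here, though now deprecated):
    SELECT TIMESTAMPADD(FRAC_SECOND, 1, '2008-01-01 00:00:00');
    -- Preferred spelling:
    SELECT TIMESTAMPADD(MICROSECOND, 1, '2008-01-01 00:00:00');
    -- Now a syntax error (was previously accepted):
    SELECT DATE_ADD('2008-01-01', INTERVAL 1 FRAC_SECOND);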
log-slave-updates and circular replication
The slave SQL thread could execute one extra event when there were events
skipped by the slave I/O thread (e.g. events originated by the same server),
even though the UNTIL condition requested that it not do so.
This happened because we compared with the end position of the previously
executed event. That is fine when no events were skipped by the slave I/O
thread, as the end position of the previous event equals the start
position of the event to be executed. Otherwise that position equals the
start position of the skipped event.
This is fixed by:
- reading the event to be executed before checking if the until condition
is satisfied.
- comparing the start position of the event to be executed. Since we do
not have the start position available, we compute it by subtracting
event length from end position (which is available).
- if there are no events in the event queue that meet the UNTIL condition
  when the slave SQL thread starts, we stop immediately, as in this
  case we do not want to wait for the next event.
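A sketch of the affected usage (log file name and position are placeholders):

    START SLAVE UNTIL MASTER_LOG_FILE = 'master-bin.000001', MASTER_LOG_POS = 500;
    -- Before the fix, when preceding events had been skipped by the slave I/O
    -- thread, one event starting at or after the UNTIL position could still
    -- be executed before the SQL thread stopped.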
suite)
Under some circumstances a combination of aggregate functions and
GROUP BY in a SELECT query over a VIEW could lead to incorrect
calculation of the result type of the aggregate function. This in
turn could result in incorrect results, or assertion failures on debug
builds.
Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that
the argument's item is dereferenced before calling its type() method.
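A minimal sketch of the affected pattern (table, view and column names are
assumptions):

    CREATE TABLE t1 (a INT, b INT);
    INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
    CREATE VIEW v1 AS SELECT a, b FROM t1;
    -- The result type of MAX() over the view column could be calculated
    -- incorrectly before the fix.
    SELECT a, MAX(b) FROM v1 GROUP BY a;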
The problem is that CREATE VIEW statements inside prepared statements
weren't being expanded during the prepare phase, which led to objects
not being allocated in the appropriate memory arenas.
The solution is to perform the validation of CREATE VIEW statements
during the prepare phase of a prepared statement. The validation
during the prepare phase ensures that transformations of the parsed
tree will use the permanent arena of the prepared statement.
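A sketch of the affected usage (object names are placeholders):

    CREATE TABLE t1 (a INT);
    -- The CREATE VIEW statement is now validated at PREPARE time, so parse
    -- tree transformations use the statement's permanent memory arena.
    PREPARE stmt FROM 'CREATE VIEW v1 AS SELECT a FROM t1';
    EXECUTE stmt;
    DROP VIEW v1;
    DEALLOCATE PREPARE stmt;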
a table name.
The problem was that fill_defined_view_parts() did not return
an error if a table was going to be altered. That happened if
the table was already in the table cache. In that case,
open_table() returned a non-NULL value (a valid TABLE instance
from the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
(This is a backport of the original patch for 5.1)
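A sketch of the scenario, assuming a hypothetical table t1:

    CREATE TABLE t1 (a INT);
    SELECT * FROM t1;          -- puts t1 into the table cache
    -- Must fail with an error (t1 is a table, not a view) even though t1 is
    -- already present in the table cache.
    ALTER VIEW t1 AS SELECT 1;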