The REPAIR TABLE ... USE_FRM query silently corrupts data of tables
with an old .FRM file version.
The mysql_upgrade client program and the REPAIR TABLE query (without
the USE_FRM clause) can't prevent this problem because in the
common case they don't upgrade the .FRM file to a compatible structure.
1. Evaluation of the REPAIR TABLE ... USE_FRM query has been
modified to reject such tables with the message:
"Failed repairing incompatible .FRM file".
2. Evaluation of the REPAIR TABLE query (without the USE_FRM clause)
has been modified to upgrade .FRM files to the current version.
3. Evaluation of the CHECK TABLE ... FOR UPGRADE query has been modified
to return an error status when the .FRM file has an incompatible version.
4. The mysql_upgrade and mysqlcheck client programs call the CHECK TABLE
... FOR UPGRADE and REPAIR TABLE queries, so their behavior has
changed too: they now upgrade .FRM files with incompatible
version numbers.
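A sketch of the resulting behavior (the table name is illustrative):
  REPAIR TABLE t1 USE_FRM;    -- old .FRM: rejected with "Failed repairing incompatible .FRM file"
  CHECK TABLE t1 FOR UPGRADE; -- incompatible .FRM version: returns an error status
  REPAIR TABLE t1;            -- upgrades the .FRM file to the current version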
The problem was that mysql_create_view did not remove all comment characters
when writing to the binlog, resulting in a parse error of the statement on the slave side.
The solution was to use the recreated SELECT clause
and add a generated CHECK OPTION clause if needed.
or incorrect.
For better conformance with the standard, the truncation procedure for CHAR columns
has been changed to silently ignore truncation of trailing whitespace characters
(the note has been removed).
Finally, for columns with non-binary charsets:
1. CHAR(N) columns silently ignore trailing whitespace truncation;
2. VARCHAR and TEXT columns issue a Note about truncation.
BLOBs and other columns with BINARY charset are unaffected.
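An illustrative sketch of the distinction (table and values are made up):
  CREATE TABLE t1 (c CHAR(4), v VARCHAR(4));
  INSERT INTO t1 VALUES ('ab    ', 'ab    ');
  -- c: the trailing-space truncation is ignored silently
  -- v: a Note about truncation is issued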
The bug is a regression introduced by the patch for bug32798.
The code in Item_func_group_concat::clear() relied on the 'distinct'
variable to check whether 'unique_filter' was initialized. That, however,
is not always valid because Item_func_group_concat::setup() can take
shortcuts in some cases without initializing 'unique_filter'.
Fixed by checking the value of 'unique_filter' instead of 'distinct'
before dereferencing.
When a zero length is provided to the my_decimal_length_to_precision
function along with unsigned_flag set to false, it returns a negative value.
For queries that employ temporary tables, this may cause a failed assertion or
excessive memory consumption during temporary table creation.
Now the my_decimal_length_to_precision and my_decimal_precision_to_length
functions take unsigned_flag into account only if the length/precision
argument is non-zero.
impossible WHERE/HAVING clause
(subselect_single_select_engine::exec).
Allocation and initialization of the joined table list t1, t2, ... of
subqueries like:
NOT IN (SELECT ... FROM t1,t2,... WHERE 0)
is optimized out; however, the server tries to traverse this list.
Grouping or ordering of long values in non-indexed BLOB/TEXT columns
with GBK or BIG5 charsets crashes the server.
The MySQL server uses sorting (the filesort procedure) on a temporary
table to evaluate the GROUP BY clause when there is no suitable index.
That procedure takes into account only the first @max_sort_length bytes
(a system variable, usually 1024) of the TEXT/BLOB sort key string.
The my_strnxfrm_gbk and my_strnxfrm_big5 functions filled temporary keys
with data of the whole blob length instead of @max_sort_length bytes.
That buffer overrun has been fixed.
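A minimal repro along these lines (the table and data are illustrative):
  CREATE TABLE t1 (c TEXT CHARACTER SET gbk);
  INSERT INTO t1 VALUES (REPEAT('x', 10000));  -- longer than @@max_sort_length
  SELECT 1 FROM t1 GROUP BY c;                 -- overran the sort key buffer before the fix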
Make the maximum blob size 2**32-1, regardless of word size.
Fix a failure with a timestamp of size 2**31-1: the method of
rounding up to the nearest even number would overflow.
Mtr restarts servers because an option file exists for the slave.
The option file for the slave forces replication to skip the test.foo table.
But the test case does not involve the test.foo table at all,
so the option file can be removed and mtr will not restart servers for the test case.
Added --log-slave-updates because the test requires it.
The events based on LOAD DATA INFILE are masked by --replace_regex instead of restarting the slave.
Added waiting for start and stop of the slave after START|STOP SLAVE statements.
Bug#35335 funcs_1: Some tests fail within load_file during
pushbuild runs
Solution: 1. Move files with input data used in load_file,
load data etc.
from suite/funcs_1/<whatever>
to std_data
2. Run the funcs_1 testsuite with the server option
--secure-file-priv=<MYSQLTEST_VARDIR>
3. Outfiles have to be stored under MYSQLTEST_VARDIR
+ changes according to WL#4304 Cleanup in funcs_1 tests
- backport of fixes/improvements made in 5.1 to 5.0
The differences between the scripts in 5.0 and 5.1 cause
much additional and annoying work during any upmerge.
- replace error numbers with names
- improved comments
- improved formatting
- Unify storage engine names so that result files for
storage engine variants do not differ (some tests)
- remove a script that is no longer used (its tests are done in other scripts)
The problem was that the LOAD DATA code (sql_load.cc) didn't take into
account that there may be items representing references to other
columns. This is a usual case in views. The crash happened because
Item_direct_view_ref was cast to Item_user_var_as_out_param,
which is not a base class.
The fix is to
1) Handle references properly;
2) Ensure that an item is treated as a user variable only when
it is indeed a user variable;
3) Report an error if LOAD DATA is used to load data into
a non-updatable column.
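A hedged sketch of the kind of statement involved (names and file are made up):
  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT a, a + b AS s FROM t1;
  LOAD DATA INFILE 'data.txt' INTO TABLE v1;
  -- crashed before the fix; loading into the non-updatable column s is now an error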
Mixing aggregate functions and non-grouping columns is not allowed in
ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't thrown because
of an insufficient check.
In order to check more thoroughly, the new algorithm employs a list of outer
fields used in a sum function and a SELECT_LEX::full_group_by_flag.
Each non-outer field is checked to find out whether it's aggregated or not, and
the current select is marked accordingly.
All outer fields that are used under an aggregate function are added to the
Item_sum::outer_fields list and later checked by the Item_sum::check_sum_func
function.
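An illustrative query that the stricter check now rejects (assuming ONLY_FULL_GROUP_BY is set; tables are made up):
  SELECT t1.a, (SELECT COUNT(t1.b) FROM t2) FROM t1;
  -- t1.b is an outer field aggregated in the outer, ungrouped select,
  -- while t1.a is neither grouped nor aggregated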
A view defined as SELECT ... FROM DUAL WHERE ... has
valid syntax, but use of such a view in SELECT or
SHOW CREATE VIEW syntax caused an unexpected syntax error.
The server omits the FROM DUAL clause when storing the view body
string in a .frm file for further evaluation.
However, the syntax of a SELECT-without-FROM query is more
restrictive than the SELECT ... FROM DUAL syntax and doesn't
allow the WHERE clause.
NOTE: this syntax difference is not documented.
The view registration procedure has been modified to
preserve the original structure of the view's body.
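For illustration (the view name is made up):
  CREATE VIEW v1 AS SELECT 1 FROM DUAL WHERE 1;  -- valid syntax
  SELECT * FROM v1;        -- failed with a syntax error before the fix
  SHOW CREATE VIEW v1;     -- likewise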
localhost/default port
When creating a federated table that points to an unspecified host or
localhost on an unspecified port (or port 0), a small memory leak occurs.
This happens because we make a copy of the unix socket path, which is
never freed.
With this fix we do not make a copy of the unix socket path; instead
share->socket points to the MYSQL_UNIX_ADDR constant directly.
This fix is covered by a test case for BUG34788.
Affects 5.0 only.
correctly - crashes server !
Creating a federated table with a connect string containing an empty
(zero-length) host name, where the port evaluates to 0 (the port is
incorrect, omitted or 0), crashes the server.
This happens because federated calls strcmp() with a NULL pointer.
Fixed by avoiding the strcmp() call if the hostname is NULL.
Binlogging of an insert into an autoincrement blackhole table ignored
an explicitly set insert_id.
Fixed by refining the blackhole engine's insert method to call
update_auto_increment(), which prepares binlogging of the insert query
with the preceding SET INSERT_ID.
Note that, as the engine does not store any actual data, one has to explicitly
provide the server with the value of the autoincrement column via
SET INSERT_ID. Otherwise binlogging will happen with the default
SET INSERT_ID=1.
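A sketch of the intended usage (the table name is illustrative):
  CREATE TABLE b1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=BLACKHOLE;
  SET insert_id = 5;
  INSERT INTO b1 VALUES (NULL);  -- the binlog now carries SET INSERT_ID=5 before the insert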
When heap I_S tables are swapped out to disk, this is done after plan refinement.
Thus, READ_RECORD::file will still point to the (deleted) heap handler at the start
of execution. This causes a segmentation fault if join buffering is used and the
query is a star query where the result is found to be empty before accessing
some table. In this case that table has not been initialized (i.e. had its
READ_RECORD re-initialized) before the cleanup routine tries to close the handler.
Fixed by updating READ_RECORD::file when changing handler.
Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. a similar lack of warnings for the TO syntax;
3. incompatible behaviour between the two: the latter blanked out index
entries regardless of the presence or absence of the actual file
corresponding to an index record; the former gave up at the first mismatch.
Fixed by deploying the warning generation and synchronizing the logic of
purge_logs() and purge_logs_before_date().
my_stat() is called in either of the two branches of purge_logs() (responsible
for the TO syntax of PURGE), similarly to how it behaves for the BEFORE syntax.
If there is no actual binlog file, my_stat returns NULL and my_delete is
not invoked.
A critical error is reported to the user if info about a file from the index
could not be retrieved, or the file could not be deleted, with a system error
code other than ENOENT.
Queries like:
SELECT ROW(1, 2) IN (SELECT t1.a, 2)
FROM t1 GROUP BY t1.a
or
SELECT ROW(1, 2) IN (SELECT t1.a, 2 FROM t2)
FROM t1 GROUP BY t1.a
lead to an assertion failure in the
Item_in_subselect::row_value_transformer method in debug
builds, or to an unexpected error message in release builds:
ERROR 1247 (42S22): Reference '<list ref>' not supported (forward
reference in item list)
The unexpected error message and the assertion failure have been
eliminated.
When there are no underlying tables specified for a merge table,
SHOW CREATE TABLE outputs a statement that cannot be executed. The
same is true for mysqldump (it generates dumps that cannot be
executed).
This happens because the SQL parser does not accept an empty UNION() clause.
This patch changes the following:
- it is now possible to execute a CREATE/ALTER statement with
an empty UNION() clause.
- the same as above, but still worth noting: it is now possible to
remove the underlying tables mapping using ALTER TABLE ... UNION=().
- SHOW CREATE TABLE does not output the UNION() clause if there are
no underlying tables specified for a merge table. This makes
the mysqldump output slightly smaller.
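For illustration (the table name is made up):
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=();  -- now accepted
  ALTER TABLE m1 UNION=();                        -- clears the underlying table mapping
  SHOW CREATE TABLE m1;                           -- no UNION() clause in the output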
using a trig in SP
For all 5.0 versions and 5.1 versions before 5.1.12, when a stored routine or
trigger caused an INSERT into an AUTO_INCREMENT column, the
generated AUTO_INCREMENT value was not written into the
binary log. This means that if a statement did not generate an
AUTO_INCREMENT value itself, there was no Intvar event (SET
INSERT_ID) associated with it even if one of the stored routines
or triggers it invoked generated such a value. Meanwhile, when
executing a stored routine or trigger, the INSERT_ID value was
ignored even if one was available, set
by a SET INSERT_ID statement.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value will be used if
available when executing the stored routine or trigger.
Prior to the fix of this bug in MySQL 5.0 and in MySQL versions before
5.1.12 (referred to as the buggy versions below), when a
statement that generated an AUTO_INCREMENT value by the top
statement was executed in the body of an SP, all statements in the
SP after this statement were treated as if they had generated
AUTO_INCREMENT by the top statement. When a statement did not
generate an AUTO_INCREMENT value by the top statement but by a
function/trigger called by it, an erroneous Intvar event was
associated with the statement. This erroneous INSERT_ID value
didn't cause problems when replicating between masters and
slaves of 5.0.x or before 5.1.12, because the erroneous INSERT_ID
value was not used when executing functions/triggers. But when
replicating from buggy versions to 5.1.12 or newer, which
use the INSERT_ID value in functions/triggers, the erroneous
value is used, which can cause a duplicate entry error and
stop the slave.
The patch for 5.0 fixes it to not generate the erroneous Intvar
event; another patch for 5.1 fixes it to ignore the SET INSERT_ID
value when executing functions/triggers if replicating from
a master of a buggy version.
When concurrent inserts were disabled, statements after an INSERT
were not put into the query cache. This happened because we do not
save the current data file length at statement start when
concurrent inserts are disabled, but we checked the always-zero
local length against the real file length anyway.
Fixed by doing the check only if concurrent inserts are not disabled.
In cases when TRUNCATE was executed by invoking mysql_delete() rather
than by table recreation (for example, when TRUNCATE was issued on an
InnoDB table which is referenced by a foreign key), triggers were invoked.
In debug builds this also led to a crash because of an assertion which
assumes that some preliminary actions take place before trigger
invocation, which doesn't happen in the case of TRUNCATE.
The fix is to not execute triggers in mysql_delete() when this
function is used by TRUNCATE.
WL#4203 Reorganize and fix the data dictionary tests of
testsuite funcs_1
because the goal to fix
Bug#34532 Some funcs_1 tests do not clean up at end of testing
was partially missed.
Some minor additional modifications are for
WL#4304 Cleanup in funcs_1 tests
WHERE f1 < n ignored a row if f1 was an indexed integer column and
f1 = TYPE_MAX and n = TYPE_MAX+1. The latter value, when treated
as TYPE, (obviously) overflowed. This was not handled; it is now.
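A sketch of the failing case (types and values are illustrative):
  CREATE TABLE t1 (f1 TINYINT, KEY(f1));
  INSERT INTO t1 VALUES (127);       -- TYPE_MAX for signed TINYINT
  SELECT * FROM t1 WHERE f1 < 128;   -- missed the row before the fix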
Fixing affected tests. After the fix for st_relay_log_info::wait_for_pos(), which
handles the widely used select('master-bin.xxxx',pos) invoked by mysqltest,
there appeared to be four tests that either tried synchronizing when
the slave was stopped or used an incorrect synchronization method, such as
calling `sync_with_master' from the current connection while connected to the
master itself.
Fixed by correcting the current connection and/or using the correct
synchronization macro when possible.
testsuite funcs_1
1. Fix the following bugs
Bug#30440 "datadict" tests (all engines) fail: Character sets depend on configuration
Solution: Test variants charset_collation_* adjusted to different builds
Bug#32603 "datadict" tests (all engines) fail in "community" tree: "PROFILING" table
Solution: Excluding "PROFILING" table from queries
Bug#33654 "slow log" is missing a line
Solution: Unify the content of the fields TABLES.TABLE_ROWS and
STATISTICS.CARDINALITY within result sets
Bug#34532 Some funcs_1 tests do not clean up at end of testing
Solution: DROP objects/reset global server variables modified during testing
+ let tests missing implementation end before loading of tables
Bug#31421 funcs_1: ndb__datadict fails, discrepancy between scripts and expected results
Solution: Cut <engine>__datadict tests into smaller tests + generate new results.
Bug#33599 INFORMATION_SCHEMA.STATISTICS got a new column INDEX_COMMENT: tests fail (2)
Generation of new results during post merge fix
Bug#33600 CHARACTER_OCTET_LENGTH is now CHARACTER_MAXIMUM_LENGTH * 4
Generation of new results during post merge fix
Bug#33631 Platform-specific replace of CHARACTER_MAXIMUM_LENGTH broken by 4-byte encoding
Generation of new results during post merge fix
+ removal of platform-specific replace routine (no more needed)
2. Restructure the tests
- Test not more than one INFORMATION_SCHEMA view per testscript
- Separate tests of I_S view layout+functionality from content related to the
all time existing databases "information_schema", "mysql" and "test"
- Avoid storage engine related variants of tests which are not sensitive to
storage engines at all.
3. Reimplement or add some subtests + cleanup
There is some probability that even the reviewed changeset
- does not fix all bugs from above or
- contains new bugs which show up on some platforms other than Linux or on one of
the various build types
4. The changeset contains fixes according to
- one code review
- minor bugs within testing code found after code review (accepted by reviewer)
- problems found during tests with 5.0.56 in build environment
returns wrong results
Casting AVG() to DECIMAL led to incorrect results when the arguments
had a non-DECIMAL type, because in this case
Item_sum_avg::val_decimal() performed the division by the number of
arguments twice.
Fixed by changing Item_sum_avg::val_decimal() to not rely on
Item_sum_sum::val_decimal(), i.e. calculate the sum and divide using
DECIMAL arithmetic for DECIMAL arguments, and utilize val_real() with
subsequent conversion to DECIMAL otherwise.
MASTER_POS_WAIT's return value differed from what is expected when the server is not a slave:
it returned -1 instead of NULL.
Fixed by correcting st_relay_log_info::wait_for_pos() to return the proper
value when the rli info is not initialized.
It's impossible to determine which test inside mysql_client_test
failed if the log file is overwritten by mysqltest when dumping
the test case results. Redirect mysql_client_test output to a
separate file.
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
Added a new function test_if_data_home_dir(), which checks that a
path does not contain the mysql data home directory.
Use of the mysql data home directory in
DATA DIRECTORY & INDEX DIRECTORY is disallowed.
Assertion `0' failed
If a ROW item is part of an expression that also has
aggregate function calls (COUNT/SUM/AVG...), a
"splitting" with the Item::split_sum_func2 function
is applied to that ROW item.
The current implementation of Item::split_sum_func2
replaces this Item_row with a newly created
Item_aggregate_ref reference to it.
Then the row cache tries to work with the
Item_aggregate_ref object as with an Item_row object:
the row cache calls row-emulation methods such as cols and
element_index. Item_aggregate_ref (like its parent
Item_ref) inherits dummy implementations of those
methods from the hierarchy root Item, and calling
them leads to failed assertions and wrong data
output.
The row-emulation virtual functions (cols, element_index, addr,
check_cols, null_inside and bring_value) of Item_ref have
been overridden to forward calls to the underlying item
reference.
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was introduced
to avoid replication problems (e.g. replicating the statement with a
string argument would cause a parse failure on the slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and use this converted value when persisting the query to the log.
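For illustration (statement and variable are made up):
  PREPARE stmt FROM 'SELECT 1 LIMIT ?';
  SET @lim = '10';           -- a string value
  EXECUTE stmt USING @lim;   -- failed before; the value is now converted to an integer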
NAME_CONST('whatever', -1) * MAX(whatever) bombed since -1 was
not seen as a constant, but as FUNCTION_UNARY_MINUS(constant),
while we were at the same time pretending it was a basic const
item. This confused the aggregate handlers in exciting ways.
We now make NAME_CONST() behave more consistently.
This was a double free of the Unique member of Item_func_group_concat.
It was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
documentation
While the manual mentions FRAC_SECOND only for the TIMESTAMPADD()
function, it was also possible to use FRAC_SECOND with DATE_ADD(),
DATE_SUB() and +/- INTERVAL.
Fixed the parser to match the manual, i.e. using FRAC_SECOND for
anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a
syntax error.
Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD/
TIMESTAMPDIFF and marks FRAC_SECOND as deprecated.
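For illustration:
  SELECT DATE_ADD('2008-01-01', INTERVAL 1 FRAC_SECOND);   -- now a syntax error
  SELECT TIMESTAMPADD(MICROSECOND, 1, '2008-01-01');       -- now allowed
  SELECT TIMESTAMPADD(FRAC_SECOND, 1, '2008-01-01');       -- still accepted, but deprecated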
log-slave-updates and circul repl
The slave SQL thread may execute one extra event when there are events
skipped by the slave I/O thread (e.g. originated by the same server),
whereas it was requested not to do so by the UNTIL condition.
This happens because we compare with the end position of the previously
executed event. This is fine when there are no events skipped by the slave
I/O thread, as the end position of the previous event equals the start
position of the event to be executed. Otherwise this position equals the
start position of the skipped event.
This is fixed by:
- reading the event to be executed before checking if the until condition
is satisfied;
- comparing the start position of the event to be executed. Since we do
not have the start position available, we compute it by subtracting the
event length from the end position (which is available);
- if there are no events in the event queue at slave sql starting
time that meet the until condition, stopping immediately, as in this
case we do not want to wait for the next event.
suite)
Under some circumstances a combination of aggregate functions and
GROUP BY in a SELECT query over a VIEW could lead to incorrect
calculation of the result type of the aggregate function. This in
turn could result in incorrect results, or assertion failures on debug
builds.
Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that
the argument's item is dereferenced before calling its type() method.
The problem is that CREATE VIEW statements inside prepared statements
weren't being expanded during the prepare phase, which led to objects
not being allocated in the appropriate memory arenas.
The solution is to perform the validation of CREATE VIEW statements
during the prepare phase of a prepared statement. The validation
during the prepare phase assures that transformations of the parsed
tree will use the permanent arena of the prepared statement.
a table name.
The problem was that fill_defined_view_parts() did not return
an error if a table was going to be altered. That happened if
the table was already in the table cache. In that case,
open_table() returned a non-NULL value (a valid TABLE instance from
the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
(This is a backport of the original patch for 5.1)
The test case for bug#31048 checks that there is no crash on stack
overrun. But due to different stack sizes on different platforms it failed
on some of them.
The new test case checks that a query with at least 4 levels of subquery
nesting works without a stack overrun, and that other levels of nesting
don't cause a crash.
and ps-protocol
Finding a routine should be a transparent operation as
far as the binary log is concerned.
But it was influencing the binary log because of the TIMESTAMP
column in the proc table.
Fixed by preserving and restoring the time_zone usage flag when
searching for a stored routine in the proc table.
breaks replication
NAME_CONST() didn't replicate the constant's character set and collation
correctly.
With this fix, NAME_CONST() inherits its collation from the value argument.
The problem is not about intervals and doesn't actually cause a 'full table scan'.
We have an optimization for DISTINCT: when we have
'DISTINCT field_from_first_join_table', we don't need to read all the
rows from the JOIN-ed table if we find one conforming row.
It stopped working in 5.0, as we return NESTED_LOOP_OK if we come upon
that case in evaluate_join_record(), and that doesn't break the
record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
when executed in version 5
Zero fill is a field attribute only, so we can't always
propagate constants for zerofill fields: the values and
expression results don't have that flag.
Fixed by converting the const value to a string and
using that in const propagation when the context allows it.
Const propagation is disabled for fields with the ZEROFILL flag in
all the other cases.
from storage engine
Federated may crash a server, return a wrong result set, or return an
"ERROR 1030 (HY000): Got error 1430 from storage engine" message
when the local (engine=federated) table has a key over a nullable
column.
The problem was a wrong implementation of the function that creates
the WHERE clause for the remote query from a key.
for wildcard values.
The server ignored the escape character before wildcards during
the calculation of priority values for sorting a privilege
list. (Actually the server counted an escape character as an
ordinary wildcard like % or _.) I.e. a table name template
with a wildcard character like 'tbl_1' had a higher priority in
a privilege list than a concrete table name without wildcards
like 'tbl\_1', and some privileges of 'tbl\_1' were hidden
by privileges for 'tbl_1'.
The get_sort function has been modified to ignore escaped
wildcards as usual.
type conversion.
Instead of copying the whole character string from a temporary
buffer, the server copied a short-lived pointer to that string
into a long-lived structure. That has been fixed.
and
bug#33932 assertion at handle_slave_sql if init_slave_thread() fails
The asserts were caused by:
bug33931: thd being deleted at the time the err: code executes, plus
a missed initialization;
bug33932: missed initialization of the slave_is_running member.
Fixed by relocating the mi members' initialization and removing the `delete thd'.
It is safe to do so, as the deletion happens later, explicitly, in the caller of
init_slave_thread().
Todo: when merging, the test is better moved into suite/bugs for 5.x (when x>0).
or trigger crashes server
Under some circumstances a combination of VIEWs, subselects with outer
references and PS/SP/triggers could lead to use of uninitialized memory
and server crash as a result.
Fixed by changing the code in Item_field::fix_fields() so that in cases
when the field is a VIEW reference, we first check whether the field
is also an outer reference, and mark it appropriately before returning.
There was no instruction in the test that enforces that the slave successfully
connects to the master.
The way the test was written allowed the slave to be late for the rendezvous,
so that queries sent to the master around connect time failed and were
error-logged, to be seen in the Warnings section of pushbuild.
Fixed by adding a synchronization primitive to the test.
No test case is possible; observe the error logs on pushbuild.
Todo: revise the need for rpl_report.pl's rules due to failing execution of
queries from get_master_version_and_clock().
Any test should use a synchronization primitive like the current fix
does and not let the slave miss a successful connect.
The unsignedness of large integer user variables was not being
properly preserved when they were fed to prepared statements. This was
happening because the unsigned flag wasn't being updated when
the user variable was converted to a parameter.
The solution is to copy the unsigned flag when converting the
user variable to a parameter and take the unsigned flag into
account when converting the integer to a string.
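For illustration (values are made up):
  SET @v = 18446744073709551615;   -- maximum unsigned BIGINT
  PREPARE s FROM 'SELECT ?';
  EXECUTE s USING @v;              -- previously lost the unsigned flag and could print a negative value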
An out-of-memory error was thrown when the sort buffer size was too small,
which confused users.
Now filesort throws an error message about the sort buffer being too small.
and my_innodb_commit_concurrency global variables.
The type of the my_innodb_autoextend_increment and
my_innodb_commit_concurrency variables has been changed to
GET_ULONG.
The server handled truncation for assignment of too-long values
into CHAR/VARCHAR/TEXT columns in different ways when the
truncated characters were spaces:
1. CHAR(N) columns silently ignored end-space truncation;
2. TEXT columns posted a truncation warning/error in the
non-strict/strict mode;
3. VARCHAR columns always posted a truncation note in
any mode.
Space truncation processing has been synchronised across
CHAR/VARCHAR/TEXT columns: the current behavior of VARCHAR
columns has been adopted as the standard.
Binary-encoded string/BLOB columns are not affected.
does not use trans tables
There were two issues.
A ROLLBACK statement was recorded in the binlog even though a multi-update
had not modified any non-transactional table.
The reason for this artifact was a false initial value of multi_update::transactional_tables.
Yet another artifact, also explained on the bug page, is that
`ha_autocommit_or_rollback' works differently depending on whether
a transactional engine has been compiled in.
Fixed by setting multi_update::transactional_tables to zero at initialization
time. A multi-update on a non-transactional table won't cause a ROLLBACK in the
binlog with either compilation option.
The second artifact comprises a self-standing issue (to be reported
separately).
Problem: some collation handlers called an incorrect version
of my_like_range_xxx(), which led to a wrong min_str and max_str,
so the LIKE range optimizer threw away good records.
Fix: changed the wrong handlers to call the proper version of
my_like_range_xxx().
When issuing a column-level grant on a table which requires pre-locking, the
server crashed.
The reason behind the crash was that the data structures used by the lock API
weren't properly reinitialized in the case of a column-level grant.
on table creates
The problem was an incompatible syntax for key definitions in CREATE
TABLE.
5.0 supports only the following syntax for key definition (see "CREATE
TABLE syntax" in the manual):
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
While the 5.1 parser supports the above syntax, the "preferred" syntax was
changed to:
{INDEX|KEY} [index_name] (index_col_name,...) [index_type]
The above syntax is used in 5.1 for the SHOW CREATE TABLE output, which
led to dumps generated by 5.1 being incompatible with 5.0.
Fixed by changing the parser in 5.0 to support both 5.0 and 5.1 syntax
for key definition.
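For illustration (hypothetical tables):
  CREATE TABLE t1 (a INT, KEY k USING BTREE (a));   -- 5.0 syntax
  CREATE TABLE t2 (a INT, KEY k (a) USING BTREE);   -- 5.1 preferred syntax, now accepted by 5.0 as well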
Simple subselects are pulled up into outer selects. This operation replaces
the pulled subselect with the first item from the select list of the subselect.
If an alias is defined for a subselect, it is inherited by the replacement item.
As this is done after the fix_fields phase, this alias isn't shown if the
replacement item is a stored function. This happens because the Item_func_sp::make_field
function makes the send field from its result_field and ignores the defined alias.
Now, when an alias is defined, the Item_func_sp::make_field function sets it on
the returned field.
Two disjuncts containing equalities of the form key=const1 and key=const2 can
be merged into one if const1 is equal to const2. To check this, the common
collation of the constants was used rather than the collation of the field key.
For example, when the default collation of the constants was case insensitive
while the collation of the field was case sensitive, two OR-ed equality
predicates key='b' and key='B' were incorrectly merged into one f='b'. As a
result, ref access was used instead of range access and wrong result sets were
returned in many cases.
Fixed the problem by comparing the constants in the OR-ed predicate using the
collation of the key field.
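An illustrative sketch (table and collation are made up):
  CREATE TABLE t1 (f VARCHAR(10) COLLATE latin1_bin, KEY(f));
  SELECT * FROM t1 WHERE f='b' OR f='B';
  -- with the case-insensitive constant collation, the two predicates were
  -- merged into f='b' and rows containing 'B' were lost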
The problem was that CREATE/RENAME/DROP USER statements were logged regardless
of errors, even if no data had been changed.
After this patch, CREATE/RENAME/DROP USER statements are not written to the
binlog if the statement makes no changes; if the statement does make any
changes, it is logged with the possible error code.
This patch is based on the patch for BUG#29749, which is not pushed.
Bug 33983 (Stored Procedures: wrong end <label> syntax is accepted)
The server used to crash when REPEAT or another control instruction
was used in conjunction with labels and a LEAVE instruction.
The crash was caused by a missing "pop" of handlers or cursors in the
code representing the stored program. When executing the code in a loop,
this missing "pop" would result in a stack overflow, corrupting memory.
Code generation has been fixed to produce the missing h_pop/c_pop
instructions.
Also, the logic checking that labels at the beginning and the end of a
statement are matched was incorrect, causing Bug 33983.
End labels, when used, must match the label used at the beginning of a block.
Bug#25347: mysqlcheck -A -r doesn't repair table marked as crashed
mysqlcheck tests the nullness of the engine type to know whether a
"table" is a view or not. That also falsely catches tables that
are severely damaged.
Instead, use SHOW FULL TABLES to test whether a "table" is a view
or not.
(Don't add a new function. Instead, get the original data a smarter way.)
Make it safe for use against databases from before views appeared.
Add a new variable m_highest_seen for when only peeking at the auto_increment
NEXTID and not retrieving it to the cache. Add a new method to check the
tupleId before calling the data node.
ndb_restore.result, ndb_restore.test:
Changed the test to use information_schema to check auto_increment.
DictCache.cpp, Ndb.cpp:
When setting the auto_increment value we'll also read up the new value; this is
useful if we use the table for the first time in this MySQL Server and haven't
yet seen the NEXTID value. The kernel will avoid updating since it already has
the value, but will also read up the NEXTID value to ensure we don't need to do
this any more times.
ndb_auto_increment.result:
Updated the result file since it was incorrect.
The problem occurred when one had a subquery that had an equality X=Y where
Y referred to a named select list expression from the parent select. MySQL
crashed when trying to use the X=Y equality for ref-based access.
Fixed by allowing non-Item_field items in the described case.
The ROUND(X, D) function would change the Item::decimals field during
execution to achieve the effect of a dynamic number of decimal digits.
This caused a series of bugs:
Bug #30617: Round() function not working under some circumstances in InnoDB
Bug #33402: ROUND with decimal and non-constant cannot round to 0 decimal places
Bug #30889: filesort and order by with float/numeric crashes server
Fixed by never changing the number of shown digits for DECIMAL when
used with a nonconstant number of decimal digits.
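A sketch of the affected pattern (table and values are made up):
  CREATE TABLE t1 (d INT);
  INSERT INTO t1 VALUES (0), (2);
  SELECT ROUND(1.555, d) FROM t1;  -- the displayed DECIMAL scale no longer changes mid-execution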
The name resolution for correlated subqueries and HAVING clauses
failed to distinguish which of the two was being performed when there
was a reference to an outer aliased field.
Fixed by adding a condition indicating that HAVING clause name resolution
is being performed.
value when inserting into a view.
The mysql_prepare_insert function checks that all fields of the target table
that are directly or indirectly (through a view) specified in the INSERT
statement have a default value. This check can be skipped if the INSERT
statement doesn't mention any insert fields. In the case of a view, this
allowed fields that aren't mentioned in the view to bypass the check.
Now the fields of the target table are always checked to have a default value
when the insert goes through a view.
When resolving references we need to take into consideration
the view "fields" and allow qualified access to them.
Fixed by extending the reference resolution to process view
fields correctly.
server crash.
The filesort implementation has an optimization for subquery execution which
consists of reusing previously allocated buffers. In particular, the call to
the read_buffpek_from_file function might be skipped when a big enough buffer
for buffer descriptors (buffpeks) is already allocated. Besides allocating
memory for buffpeks, this function fills the allocated buffer with data read
from disk. Skipping it might lead to using arbitrary memory as fields' data
and finally to a crash.
Now the read_buffpek_from_file function is always called. It allocates a
new buffer only when necessary, but always fills it with correct data.
to be compiled in
The problem was that on a statically built server, an attempt to create
a UDF resulted in a different but reasonable error ("Can't open shared
library" instead of "UDFs are unavailable with the --skip-grant-tables
option"), which caused a failure of the test case for bug #32020.
Fixed by moving the test case for bug #32020 from skip_grants.test to a
separate test to ensure that it is only run when the server is built
with support for dynamically loaded libraries.
read_buffer_size set on master
BUG#33413 show binlog events fails if binlog has event size of close
to max_allowed_packet
The size of the Append_block replication event was determined solely by
read_buffer_size, whereas the rest of the replication code deals with
max_allowed_packet.
When the former parameter was set larger than the latter, there were
two artifacts: the master could not read events from the binlog, and
SHOW BINLOG EVENTS did not show them.
Fixed by
- fragmenting the used io-cached buffer into pieces, each smaller
than max_allowed_packet (bug#30435);
- incrementing the show-binlog-events handling thread's max_allowed_packet
by the maximum estimated replication header size.
Now every transaction (including autocommit transactions) starts with
a BEGIN and ends with a COMMIT/ROLLBACK in the binlog.
Added a test case, and updated lots of test case result files.
w/ Field_date instead of Field_newdate
Field_date was still used in temp table creation.
Fixed by using Field_newdate consistently throughout the server,
except when reading tables defined with an older MySQL version.
No test suite is possible because both Field_date and Field_newdate
return the same values in all the metadata calls.
When the server-id is set dynamically, the server_id member of the current
thread was not updated.
Now the server_id member of the current thread is updated after the global
variable value is changed.
Complementary patch, since LOAD DATA INFILE was not covered by
the previous patch.
This patch adds a check so that the slave skip counter is not
decreased to zero on seeing a BEGIN_LOAD_QUERY_EVENT,
APPEND_BLOCK_EVENT, or CREATE_FILE_EVENT, since these cannot
end a group. The group is terminated by an EXECUTE_LOAD_QUERY_EVENT
or DELETE_FILE_EVENT.
at page 1024 with ucs2_bin
Inserting strings with a common prefix into a table with
character set UCS2 corrupted the table.
An efficient search method was used which compares end space
with an ASCII blank. This doesn't work for character sets like UCS2,
which do not encode a blank like ASCII does.
Use the less efficient search method _mi_seq_search()
for charsets with mbminlen > 1.
The checks in the test for bug #12480 were too wide and
made the test depend on the procedures and triggers
present in the server.
Corrected the test to check only for the procedure and
trigger it creates.
The reason for this bug is that when mysqlbinlog dumps a query, the query is
written to the output with a delimiter appended right after it. If the query
string ends with a '--' comment, the delimiter is considered part of the
comment; if there are any statements after this query, this causes a syntax
error.
Fixed by starting a new line before appending the delimiter after a query string.
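For illustration, assuming mysqlbinlog's usual /*!*/; delimiter (the statement is made up):
  -- before the fix: the delimiter was swallowed by the trailing comment
  INSERT INTO t1 VALUES (1) -- some comment/*!*/;
  -- after the fix: the delimiter starts on a new line
  INSERT INTO t1 VALUES (1) -- some comment
  /*!*/;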
In a union without braces, the ORDER BY at the end applies to the
overall union. It therefore should not interfere with the individual
SELECT parts of the union.
Fixed by changing our parser rules appropriately.
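For illustration (tables are made up):
  SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY a;
  -- the ORDER BY sorts the result of the whole union,
  -- not the second SELECT alone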