When EXPLAIN EXTENDED tries to print column names, it checks whether the
referenced table is CONST (in which case, the column's value rather than
its name will be printed). If no proper table is referenced (e.g. because
a derived table was used that has since gone out of scope), this fails
spectacularly.
This ports an equivalent of the fix for Bug 43354.
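For illustration only (not the exact repro from the bug report), a statement of
this general shape reaches the affected code path, since EXPLAIN EXTENDED has to
print column references that belong to a derived table:

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
-- EXPLAIN EXTENDED prints the rewritten query as a warning; printing the
-- column reference d.a requires checking whether its table is CONST
EXPLAIN EXTENDED SELECT d.a FROM (SELECT a FROM t1) AS d;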
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
The code now distinguishes between IGNORE and ERROR_FOR_NULL and
saves/restores the check-field options.
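A minimal sketch of the kind of statement involved (table and trigger names are
made up for illustration):

CREATE TABLE t1 (a INT NOT NULL, b INT);
CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET NEW.b = 1;
INSERT INTO t1 VALUES (1, 0);
-- With IGNORE, assigning NULL to a NOT NULL column should be converted to the
-- implicit default with a warning, and the presence of the trigger should not
-- change that behaviour.
UPDATE IGNORE t1 SET a = NULL;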
Table corruption happened during table reading in the ha_tina::find_current_row()
function: the Field::store() method returns an error (true) if the stored value is 0.
The fix:
added a special case for the ENUM type that correctly processes a 0 value.
Additional fix:
INSERT...(DEFAULT) and INSERT...() now behave the same for the ENUM type.
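A minimal sketch of the additional fix, assuming a CSV table with an ENUM column
(names are illustrative):

CREATE TABLE t1 (e ENUM('a','b') NOT NULL) ENGINE=CSV;
-- After the fix, both insert forms behave the same for the ENUM column:
INSERT INTO t1 VALUES (DEFAULT);
INSERT INTO t1 () VALUES ();
SELECT * FROM t1;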
Some logic would always group test cases by suite.
Disable this when using --noreorder.
Also fix getting the array from collect_one_suite() in this case.
Amended according to a previous review comment.
The problem is that during temporary table creation uneven bits
are not taken into account for hidden fields. This leads to an incorrect
calculation and allocation of the null-byte size for the table record, and
if the grouped value is NULL we set the wrong bit for this value (see end_update()).
Fixed by adding a separate calculation of uneven bits for hidden fields.
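For illustration only (not necessarily the original test case), a query of the
affected shape groups by a BIT column that is not in the select list, so it ends
up as a hidden field in the temporary table, and the grouped value can be NULL:

CREATE TABLE t1 (a INT, b BIT(1)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1, NULL), (2, NULL), (3, b'1');
-- b appears only in GROUP BY, so the temporary table keeps it as a hidden
-- field; its uneven bits must be counted when sizing the record's null bytes
SELECT COUNT(a) FROM t1 GROUP BY b;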
This bug is just one facet of stored routines not being able to
detect changes in meta-data (WL#4179). This particular problem
can be triggered within a single session due to the improper
management of the pre-locking list if the view is expanded after
the pre-locking list is calculated.
Since the overall solution for the meta-data detection issue is
planned for a later release, for now a workaround is used to
fix this particular aspect that only involves a single session.
The workaround is to flush the thread-local stored routine cache
every time a view is created or modified, causing locally cached
routines to be re-evaluated upon invocation.
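A sketch of the single-session scenario this addresses (object names are
illustrative):

CREATE TABLE t1 (a INT);
CREATE VIEW v1 AS SELECT a FROM t1;
CREATE PROCEDURE p1() SELECT * FROM v1;
CALL p1();                                  -- caches p1 and its pre-locking list
ALTER VIEW v1 AS SELECT a + 1 AS a FROM t1;
-- with the workaround, ALTER VIEW flushes the thread-local sp cache, so the
-- next invocation re-evaluates p1 against the new view definition
CALL p1();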
The problem becomes apparent only if HAVE_purify is undefined.
It is related to the part of the code in the open_table_from_share() function
where we initialize the record buffer only if HAVE_purify is enabled.
So with HAVE_purify=OFF the record buffer is not initialized
at the open-table stage.
Next we read a key, find a NULL value and update the appropriate null bit,
but do not update the record buffer. After that the record is stored
in the join cache (store_record_in_cache()). For CHAR fields we
strip trailing spaces, and in our case this procedure uses the
uninitialized record buffer.
The fix is to skip the space-stripping procedure for NULL values
of CHAR fields (partially based on the 6.0 JOIN_CACHE implementation).
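For illustration, a hedged sketch of the kind of plan involved: a join that uses
the join buffer for the inner table, with a nullable CHAR column holding NULL,
so store_record_in_cache() sees a NULL CHAR field:

CREATE TABLE t1 (c CHAR(10)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (NULL), ('x');
CREATE TABLE t2 (d INT) ENGINE=MyISAM;
INSERT INTO t2 VALUES (1), (2);
-- no usable index, so t1 goes through the join cache; the NULL value of t1.c
-- must not have "trailing spaces" stripped from an uninitialized buffer
SELECT * FROM t2, t1;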
removed in MySQL 6.0
CREATE TABLE... TYPE= returns the warning "The syntax
'TYPE=storage_engine' is deprecated and will be removed in
MySQL 6.0. Please use 'ENGINE=storage_engine' instead"
This syntax has been deprecated since version 5.4.4, so
the message has been changed accordingly.
In addition, the deprecation macro was changed to reflect
the ServerPT decision not to include a version number in the
warning message.
A number of test result files have been changed as a
consequence of the change in the deprecation macro.
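For reference, the deprecated form and its replacement:

CREATE TABLE t1 (a INT) TYPE=MyISAM;    -- deprecated syntax, triggers the warning
CREATE TABLE t2 (a INT) ENGINE=MyISAM;  -- recommended syntax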
Queries optimized with GROUP_MIN_MAX didn't clean up the KEYREAD
optimization properly. As a result, subsequent queries may
return incomplete rows (fields are initialized to default
values).
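A hedged sketch (not the original test case) of the sequence involved: a query
eligible for the GROUP_MIN_MAX (loose index scan) optimization followed by a
query reading full rows from the same table:

CREATE TABLE t1 (a INT, b INT, c INT, KEY (a, b));
INSERT INTO t1 VALUES (1, 1, 10), (1, 2, 20), (2, 1, 30);
SELECT a, MIN(b) FROM t1 GROUP BY a;  -- can use GROUP_MIN_MAX with key read
-- if KEYREAD were left enabled on the handler, a later full-row read could
-- return rows with only the indexed columns filled in
SELECT * FROM t1;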
Grouping by a subquery in a query with a distinct aggregate
function led to a wrong result (wrong and unordered
grouping values).
There are two related problems:
1) A query like this:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa
returned a wrong result because the outer reference "t1.a"
in the subquery was substituted with an Item_ref item.
The Item_ref item obtains data from the result_field object,
which is refreshed once at the end of each group. This data
is not applicable to filesort, since filesort() doesn't care
about groups (and doesn't update result_field objects with
copy_fields() and so on). That data is also not applicable
to the group separation algorithm: end_send_group() checks every
record with test_if_group_changed(), which evaluates Item_ref
items, but those Item_ref-s are refreshed only after the end
of the group. That is a vicious circle, and the grouped column
values in the output are shifted.
Fix: if
a) we are grouping by a subquery and
b) that subquery has outer references to the FROM list
of the grouping query,
then we substitute these outer references with
Item_direct_ref-like references under aggregate
functions: Item_direct_ref obtains data directly
from the current record.
2) A query with a non-trivial grouping expression like:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes
references to top-level aliases in the SELECT list with Item_copy
caching items. Item_copy items have the same refreshing policy
as Item_ref items, so the whole grouping expression with
Item_copy inside returns a wrong result in filesort() and
end_send_group().
Fix: include the aliased items in the GROUP BY item tree instead
of Item_ref references to them.
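Putting the two statements quoted above into a self-contained script (the table
layout is assumed; only columns a and b are needed):

CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1);
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa;
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c FROM t1 GROUP BY aa + 0;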
logging is disabled
The server would hit an assertion because of a DBUG violation.
There was a missing DBUG_RETURN; a plain return
was used instead.
This patch replaces the return with DBUG_RETURN.
into slow log
While processing a statement, down the mysql_parse execution
stack, thd->enable_slow_log can be assigned
opt_log_slow_admin_statements, depending on whether one is executing
administrative statements, such as ALTER TABLE, OPTIMIZE,
ANALYZE, etc., or not. This can have an impact on slow logging for
statements that are executed after an administrative statement
execution is completed.
When executing statements directly from the user this is fine,
because thd->enable_slow_log is reset right at the beginning
of the dispatch_command function, i.e., every time a new statement
is set to execute.
On the other hand, for the slave SQL thread (sql_thd) the story is a
bit different. When in SBR, the sql_thd applies statements by
calling mysql_parse. Right after, it calls the log_slow_statement
function to log them if they take too long. Calling mysql_parse
directly is fine, but it also means that the dispatch_command function
is bypassed. As a consequence, thd->enable_slow_log does not get
a chance to be reset before the next statement to be executed by
the sql_thd. If the statement just executed by the sql_thd was an
administrative statement and logging of admin statements was
disabled, this means that sql_thd->enable_slow_log will be set to
0 (disabled) from that moment on. End result: the sql_thd stops
logging slow statements.
We fix this by resetting the value of sql_thd->enable_slow_log to
the value of opt_log_slow_slave_statements right after
log_slow_statement is called by the sql_thd.
To 5.x Release
Notes
=====
This is a backport of BUG#23300 into 5.1 GA.
Original cset revid (in betony):
luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h
Description
===========
When using replication, the slave will not write to the slow query
log any queries replicated from the master, even if the
option "--log-slow-slave-statements" is set and these take more
than "long_query_time" to execute.
In order to log slow queries from the replication thread one needs to
set --log-slow-slave-statements, so that the SQL thread is
initialized with the correct switch. Although setting this flag
correctly configures the slave thread option to log slow queries,
there is an issue with the condition that is used to check
whether to log the slow query or not. When replaying binlog
events the statement contains the SET TIMESTAMP clause, which
forces the slow-logging condition check to fail. Consequently, the
slow query logging does not take place.
This patch addresses this issue by removing the second condition
from log_slow_statement, as it prevents slow queries from being
logged and seems to be deprecated.
When using MyISAM tables and processing concurrent DML
statements, the server may send an OK back to the client
before actually finishing the transaction commit procedure. This
has been reported before in BUG#37521 and BUG#29334.
This particular test case is affected because it performs the
following sequence:
connect (conn2, ...)
connection conn2;
LOAD DATA CONCURRENT ...
disconnect (conn2, ...)
connection master;
sync_slave_with_master
diff_tables
At this point diff_tables may report a difference in the table
contents (the master seems to be missing the conn2 rows).
To work around this MyISAM concurrent DML issue and
make this test case deterministic, we wait on conn2 until the
inserted rows show up in the table. After this the test case
proceeds as it normally would before this patch.
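One way to express such a wait in the test file is the standard wait_condition
helper; this is only a sketch, and the table name and expected row count are
placeholders:

connection conn2;
# placeholder table/count: wait until the rows loaded by conn2 are visible
let $wait_condition= SELECT COUNT(*) = 2 FROM t1;
--source include/wait_condition.inc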
write()/read()
Sometimes stopping/restarting the master or the slave can cause
a network error, which can cause the 'invalid file descriptor
-1 in syscall write()/read()' warnings. All involved test
cases except rpl_slave_load_remove_tmpfile belong to this
kind of network error, so the warnings are expected.
The 'rpl_slave_load_remove_tmpfile' case belongs to a file error,
but it is deliberately testing that file error, as in the following code:
DBUG_EXECUTE_IF("remove_slave_load_file_before_write",
my_close(fd,MYF(0)); fd= -1; my_delete(fname, MYF(0)););
So it's expected too.
To fix the problem, add the valgrind warnings to the global
suppression list to suppress them.
The test case rpl_binlog_corruption fails on Windows because, when
adding a line to the binary log index file, the line gets terminated
with CR+LF (which, btw, is the normal case on Windows, but not on
Unixes, which use LF). This causes a mismatch between the relay log names,
causing mysqld to report that it cannot find the log file.
We fix this by creating the instrumented index file through
mysql, i.e., using SELECT ... INTO DUMPFILE ..., as opposed to
ultimately relying on OS commands like: -- echo "..." >
index. These changes go into the following file and make the procedure
platform independent:
include/setup_fake_relay_log.inc
Side note: when using SELECT ... INTO DUMPFILE ..., one needs to
check whether mysqld is running with secure_file_priv. If it is, we do
it in two steps: 1. create the file in the allowed location;
2. move it to the datadir. If it is not, then we just create the
file directly in the datadir (so the previous step 2 is not needed).
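A hedged sketch of the idea (the file name, its contents and the path are
placeholders; DUMPFILE writes the value verbatim, with no escaping and no
platform-dependent line terminator):

-- when secure_file_priv is not set, write directly into the datadir
SELECT 'fake-relay-bin.000001\n' INTO DUMPFILE '/path/to/datadir/fake_relay_log.index';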
Performing a fulltext prefix search (a word with the truncation
operator) may cause an endless loop. The ft_min_word_len value
doesn't actually matter.
The problem was introduced along with the "smarter index merge"
optimization.
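The kind of query involved is a boolean-mode search using the truncation
operator; the table below is just an illustration:

CREATE TABLE t1 (a VARCHAR(255), FULLTEXT KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES ('abcdef'), ('abcxyz');
-- prefix (truncation) search: matches words starting with 'abc'
SELECT * FROM t1 WHERE MATCH(a) AGAINST ('abc*' IN BOOLEAN MODE);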
There were two problems:
The first was the symptom, caused by bad error handling in
ha_partition. It did not handle print_error etc. when
it had no partitions (when used as a dummy handler).
The second was the real problem: when dropping tables
it reused the table type (storage engine) from when the lock
was asked for, not the table type that it had when gaining
the exclusive name lock. So it tried to delete tables
from the wrong storage engines.
The solution for the first problem was to accept some handler
calls to the partitioning handler even if it was not set up
with any partitions, and also, where possible, to fall back
to the base handler's default functions.
The solution for the second problem was to remove the optimization
that reused the definition from the cache and instead always check
the frm-file while holding the LOCK_open mutex
(updated with a fix for a debug print crash and better
comments as required by the reviewer, and removed the optimization
to avoid reading the frm-file).
REVOKE/GRANT; ALTER EVENT.
The following statements support CURRENT_USER() where a user is needed:
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
However, when these statements are binlogged, CURRENT_USER() is just written
as 'CURRENT_USER()'; it is not expanded to the real user name. When the slave
executes the log event, 'CURRENT_USER()' is expanded to the user of the slave
SQL thread, but the SQL thread's user name is always NULL. This breaks replication.
After this patch, all of the above statements are rewritten when they are binlogged:
CURRENT_USER() is expanded to the real user's name and host.
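For example (the user and host below are made up), a statement like the first
line is now written to the binary log in an expanded form similar to the comment:

GRANT SELECT ON db1.* TO CURRENT_USER();
-- binlogged after this patch as something equivalent to:
--   GRANT SELECT ON db1.* TO 'some_user'@'some_host'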
Fixed 2 problems:
1. test_if_order_by_key() was continuing on the primary key
as if it had a primary key suffix (as the secondary keys do).
This led to crashes in ORDER BY <pk>,<pk>.
Fixed by not treating the primary key as a secondary one
and not depending on it being clustered with a primary key.
2. The cost calculation was trying to read the records
per key when operating on ORDER BYs that order on all of the
secondary key + some of the primary key.
This led to crashes because of out-of-bounds array access.
Fixed by assuming we'll find 1 record per key in such cases.
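A hedged sketch of the two ORDER BY shapes described above (assuming an InnoDB
table, where secondary keys carry an implicit primary key suffix):

CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, b INT, KEY k_ab (a, b)) ENGINE=InnoDB;
-- 1) ordering on the primary key repeated (ORDER BY <pk>,<pk>)
SELECT * FROM t1 ORDER BY pk, pk;
-- 2) ordering on all of a secondary key plus part of the primary key
SELECT * FROM t1 ORDER BY a, b, pk;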