There are two problems:
1. There is a missing check for the 'year' parameter (year cannot be greater than 9999) in the
makedate() function. Fix: added a check that year cannot be greater than 9999.
2. There is a missing check for the zero date in the from_days() function.
Fix: added a zero date check to the Item_func_from_days::get_date()
function.
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql/item_timefunc.cc:
--added a check that year cannot be greater than 9999 for the makedate() function
--added a zero date check to the Item_func_from_days::get_date() function
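For the makedate()/from_days() checks above, a minimal SQL sketch (illustrative only; exact results depend on server version and SQL mode):
  -- Year above 9999: MAKEDATE() now returns NULL instead of an out-of-range date.
  SELECT MAKEDATE(10000, 1);
  -- Zero date from FROM_DAYS(0): callers that reject zero dates now get NULL
  -- instead of operating on a bogus value.
  SELECT WEEKDAY(FROM_DAYS(0));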
FREED IN FLUSH_READ_LOCK (VALGRIND WARNING).
The problem was that under some circumstances the memory allocated
for Query_tables_list::sroutines was not freed properly.
The cause of this problem was the absence of
LEX::restore_backup_query_tables_list() call in one of the branches
in mysql_table_grant() function.
This assert could be triggered during two-phase commit if the binary
log was used as the transaction coordinator log. The triggered assert
checks that the same number of transaction IDs are processed in
the prepare and commit phases.
The reason it was triggered was that the transaction consisted
of an INSERT/UPDATE IGNORE that had an ignorable error. Since it
had an error, no row log events were made and therefore
prepared_xids was 0. However, since it was an IGNORE statement,
the statement started a read/write statement transaction, committed
it and completed successfully.
This patch fixes the problem by adjusting the assert to take
this possibility into account.
Test case added to binlog.binlog_innodb_row.test.
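A sketch of the statement pattern that hit the assert (assuming an InnoDB table with a unique key and the binary log enabled; names are illustrative):
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  -- The duplicate key is an ignorable error: no row events are logged, so
  -- prepared_xids stays 0, yet the statement transaction is committed.
  INSERT IGNORE INTO t1 VALUES (1);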
If LOAD DATA INFILE featured a SET clause, the name=value pairs
would be regenerated using item::print. Unfortunately, that code
is mostly optimized for EXPLAIN EXTENDED output and such, and cannot
be relied on to return valid SQL.
We now save each value's original, user-supplied form and use
that to create LOAD DATA INFILE statements for statement-based
replication.
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
minor change in syntactic sugar
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
add test case
mysql-test/suite/rpl/t/rpl_loaddatalocal.test:
add test case
sql/sql_load.cc:
Do not try to item::print values in LOAD DATA INFILE's
SET clause; they might not even be valid SQL at this
point. Use our saved version instead.
sql/sql_yacc.yy:
If LOAD DATA INFILE has SET name=val clauses, tag the
individual val-parts with the user's version so we can
later replicate that, rather than the smashed pieces
we'd get from item::print once the optimizer's through
with our poor values.
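For reference, the affected statement shape looks roughly like this (file, table and expression are illustrative):
  CREATE TABLE t1 (a INT, b VARCHAR(32));
  -- The SET expression below is what used to be regenerated via item::print
  -- when the statement was rewritten for the binary log.
  LOAD DATA INFILE 'data.txt' INTO TABLE t1 (a, @v)
    SET b = CONCAT('x-', @v);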
and innodb
The 5.5 version of the patch.
The server doesn't restrict the data that can be inserted into integer columns
with explicitly specified length that's smaller than what the type can handle,
e.g. 1234 can be inserted into an INT(2) column just fine.
Thus, when calculating the maximum width of expressions involving such
restricted integer columns we need to use the implicit maximum width of
the field instead of the explicitly specified one.
Fixed the server to use the implicit maximum in such cases and made sure
the implicit maximum is adjusted the same way as the explicit one wrt
signedness.
Fixed several test case results (ctype_*.result, metadata.result and
type_ranges.result) to reflect the extended column widths.
Added a regression test case in distinct.test.
Note: this is the behavior-preserving fix that makes 5.5 behave as 5.1 and
earlier. In the mysql trunk we'll add an insert-time check for the explicit
maximum size.
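An illustration of the preserved behavior (names are arbitrary):
  CREATE TABLE t1 (a INT(2));
  -- The display width of 2 does not restrict the stored value.
  INSERT INTO t1 VALUES (1234);
  -- Metadata for expressions over the column must therefore be based on the
  -- implicit maximum width of INT, not on the declared width of 2.
  SELECT DISTINCT a FROM t1;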
Attempts to assign a value to a table column from a trigger by using
the NEW.column_name pseudo-variable might result in garbled data.
That happened when:
- the column had a BLOB-based type (e.g. TEXT)
and
- the value being assigned was retrieved from a stored routine variable
of the same type.
The problem was that BLOB values were not copied correctly in this
case. Instead of doing a copy of the real value, the value's representation
in the record buffer was copied. This representation is essentially a
pointer to a buffer associated with the virtual table for routine
variables where the real value is stored. Since this buffer was
freed once the trigger was left, or could have changed its contents when a
new value was assigned to the corresponding routine variable, such shallow
copying resulted in garbled data in the NEW.column_name column.
It worked in 5.1 due to a subtle bug in create_virtual_tmp_table():
- in 5.1 create_virtual_tmp_table() returned a table which
had db_low_byte_first == false.
- in 5.5 and up create_virtual_tmp_table() returns a table which
has db_low_byte_first == true.
Actually, db_low_byte_first is false only for the ISAM storage engine,
which was deprecated and removed in 5.0.
Having db_low_byte_first == false led to getting false in the
complex condition for the 2nd "if" in field_conv(), which in turn
led to copy-blob behavior as a fall-back strategy:
- to->table->s->db_low_byte_first was true (correct value)
- from->table->s->db_low_byte_first was false (incorrect value)
In 5.5 and up that condition is true, which means blob-values are
not copied.
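A sketch of the affected pattern (names and values are illustrative):
  CREATE TABLE t1 (a TEXT);
  DELIMITER |
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
  BEGIN
    DECLARE v TEXT;
    SET v = REPEAT('x', 1024);
    -- Before the fix, NEW.a could end up pointing into the buffer behind v
    -- instead of holding its own copy of the value.
    SET NEW.a = v;
  END|
  DELIMITER ;
  INSERT INTO t1 VALUES (NULL);
  SELECT LENGTH(a) FROM t1;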
(SUBSTRING inside a stored function works too slow).
The user-visible problem was that the server started to consume memory when a
stored routine of some sort was executed repeatedly. The memory was freed
only after the corresponding connection was closed.
Technically, the problem was that the memory needed for temporary string
conversions was allocated on the connection ("persistent") memory root
instead of the statement one.
The root cause of this problem was the incorrect patch for Bug 55744.
That patch wrongly fixed a crash in prepared-statement-mode introduced by
another patch. The patch for Bug 55744 used the wrong condition to check
whether prepared-statement mode is active (i.e. whether the connection-scoped
or statement-scoped memory root should be used). The thing is that for
prepared statements such conversions should be done in the connection
memory root, so that the transformations of the item tree are correctly
remembered in the PREPARE phase.
The fix is to use the proper condition to detect prepared-statement mode and
use the proper memory root.
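A sketch of the user-visible symptom (the routine body is illustrative):
  DELIMITER |
  CREATE FUNCTION f1(s TEXT) RETURNS TEXT DETERMINISTIC
  BEGIN
    -- Before the fix, temporary string conversions done while evaluating
    -- this were allocated on the connection memory root.
    RETURN SUBSTRING(s, 1, 10);
  END|
  DELIMITER ;
  -- Repeated calls on the same connection kept growing memory that was
  -- released only when the connection was closed.
  SELECT f1(REPEAT('a', 1000));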
(SUBSTRING inside a stored function works too slow).
Background:
- The THD class derives from Query_arena, and thus inherits the 'state'
attribute and related operations (is_stmt_prepare() & co).
- Although these operations are available in THD, they must not
be used. THD has its own attribute to point to the active
Query_arena -- stmt_arena.
- So, instead of using thd->is_stmt_prepare(),
thd->stmt_arena->is_stmt_prepare() must be used. This was the root
cause of Bug 60025.
This patch enforces the proper way of calling those operations.
is_stmt_prepare() & co are declared as private operations
in THD (thus, they are hidden from being called on THD instance).
The patch tries to minimize changes in 5.5.
Manual merge from mysql-5.1 into mysql-5.5.
Conflicts
=========
Text conflict in mysql-test/suite/rpl/t/rpl_row_until.test
Text conflict in sql/handler.h
Text conflict in storage/archive/ha_archive.cc
The problem was that a wrong structure of mysql.event was not detected and
the server continued to use the wrongly-structured data.
The fix is to check the structure of mysql.event after opening it, before
any use. That makes operations with events more strict -- some operations
that might have worked before now throw errors. That seems to be Ok.
Another side-effect of the patch is that if mysql.event is corrupted,
unrelated DROP DATABASE statements issue an SQL warning about the inability
to open the mysql.event table.
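A hedged illustration of the stricter check (deliberately breaking mysql.event; the exact error and warning texts depend on the server version):
  -- Simulate a wrongly-structured mysql.event table.
  ALTER TABLE mysql.event DROP COLUMN comment;
  -- Event operations now fail with an error instead of using the broken table.
  CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY DO SELECT 1;
  CREATE DATABASE d1;
  -- An unrelated DROP DATABASE now issues a warning about being unable
  -- to open mysql.event.
  DROP DATABASE d1;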
Partitions can have different ref_length (position data length).
Removed DBUG_ASSERT which crashed debug builds when using
MAX_ROWS on some partitions.
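A sketch of the kind of table definition involved (partitioning layout is illustrative):
  -- MAX_ROWS on only some partitions can give them different ref_length
  -- (position data length) values; debug builds hit the removed assert.
  CREATE TABLE t1 (a INT)
  ENGINE=MyISAM
  PARTITION BY RANGE (a) (
    PARTITION p0 VALUES LESS THAN (100) MAX_ROWS = 1000000000,
    PARTITION p1 VALUES LESS THAN MAXVALUE
  );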
The calc_daynr() function returns a negative result
if a malformed date with zero year and month is used.
An attempt to calculate the week day on the negative value
leads to a crash. The fix is to return NULL for the
'W', 'a', 'w' specifiers if zero year and month is used.
Additional fix for calc_daynr():
--added an assertion that the result cannot be negative
--return 0 if zero year and month is used
mysql-test/r/func_time.result:
test case
mysql-test/t/func_time.test:
test case
sql-common/my_time.c:
--added an assertion that the result cannot be negative
--return 0 if zero year and month is used
sql/item_timefunc.cc:
return NULL for the 'W', 'a', 'w' specifiers
if zero year and month is used.
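Illustrative statement (result after the fix; warnings depend on the SQL mode):
  -- Zero year and month: weekday-based specifiers now yield NULL instead of
  -- computing the weekday from a negative day number.
  SELECT DATE_FORMAT('0000-00-01', '%W %a %w');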
Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest is the
HAVING part. Extraction of the HAVING part does not take into account
the fact that some of the conditions might be non-const but
have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by the
make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
It allows extracting elements which belong to the sorted
table and, in addition, elements which are independent
subqueries.
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
The fix is to use (table_map) 0 instead of used_tables as the
third argument of the make_cond_for_table() function.
It allows extracting elements which belong to the sorted
table and, in addition, elements which are independent
subqueries.
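A sketch of the affected query shape (tables and data are illustrative):
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (2, 20);
  -- The subquery is non-const but has used_tables == 0; before the fix it
  -- could be cut off when the HAVING condition was split for sorting.
  SELECT a, SUM(b) FROM t1
    GROUP BY a
    HAVING SUM(b) > 5 AND (SELECT MAX(b) FROM t1) > 0
    ORDER BY a;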
Update for the previous patch according to reviewers' comments.
Updated the constructors for ha_partitions to use the common
init_handler_variables functions
Added use of defines for size and offset to get better readability for the code that reads
and writes the .par file. Also refactored the get_from_handler_file function.
FAILS ON SOLARIS
This assertion was triggered if gethostbyaddr_r could not do a
reverse lookup on an IP address. The reason was a missing
DBUG_RETURN macro. The problem affected only debug versions of
the server.
This patch fixes the problem by replacing return with DBUG_RETURN.
No test case added.