that was already analyzed by Oracle: EXPLAIN can return 3 or 4 in "rows"; using replace_column to work around this.
mysql-test/include/index_merge2.inc:
replace "rows" column of some EXPLAINs by "#", if told so
mysql-test/r/index_merge_innodb.result:
result update
mysql-test/t/index_merge_innodb.test:
tell index_merge2.inc to accept random "rows" values in some EXPLAINs;
we don't do this in index_merge_myisam.test, which has no randomness here.
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion, since decimal values can
have a higher precision than the one allowed for a table column.
The assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is
executed when deducing a DECIMAL column from a decimal value of
higher precision. If the integer part is equal to or bigger than
the maximum precision for the DECIMAL type (65), the integer
part is truncated to fit and the fractional part becomes zero.
Otherwise, the fractional part is truncated to fit into the
space left after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
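The truncation rule can be illustrated with a minimal, self-contained
C++ sketch. The function and constants below are invented for this note
and are not the server code; the limits simply mirror DECIMAL's maximum
precision of 65 digits and maximum scale of 30 digits mentioned above.

  #include <algorithm>
  #include <cstdio>

  // Illustration only: documented DECIMAL limits, not server constants.
  static const int DECIMAL_MAX_PRECISION= 65;
  static const int DECIMAL_MAX_SCALE= 30;

  // Given the integer and fractional digit counts of a decimal value,
  // derive the precision/scale of the DECIMAL column that should hold it.
  void fit_decimal(int int_digits, int frac_digits,
                   int *precision, int *scale)
  {
    if (int_digits >= DECIMAL_MAX_PRECISION)
    {
      // The integer part alone fills the type: truncate it to the
      // maximum precision and drop the fractional part entirely.
      *precision= DECIMAL_MAX_PRECISION;
      *scale= 0;
      return;
    }
    // Otherwise the fractional part gets whatever room is left, capped
    // at the maximum scale.
    *scale= std::min(std::min(frac_digits, DECIMAL_MAX_SCALE),
                     DECIMAL_MAX_PRECISION - int_digits);
    *precision= int_digits + *scale;
  }

  int main()
  {
    int p, s;
    fit_decimal(5, 40, &p, &s);             // a value with scale > 30
    std::printf("DECIMAL(%d,%d)\n", p, s);  // prints DECIMAL(35,30)
    return 0;
  }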
mysql-test/r/type_newdecimal.result:
Add test case result for Bug#45261. Also, update test case to
reflect that an additive operation increases the precision of
the resulting type by 1.
mysql-test/t/type_newdecimal.test:
Add test case for Bug#45261
sql/field.cc:
Added DBUG_ASSERT to ensure object's invariant is maintained.
Implement method to create a field to hold a decimal value
from an item.
sql/field.h:
Explain member variable. Add method to create a new decimal field.
sql/item.cc:
The precision should only be capped when storing the value
in a table. Also, this makes it impossible to calculate the
integer part if Item::decimals (the scale) is larger than the
precision.
sql/item.h:
Simplify calculation of integer part.
sql/item_cmpfunc.cc:
Do not limit the precision. It will be capped later.
sql/item_func.cc:
Use new method for allocating a new decimal field.
Add a specialized method for retrieving the precision
of a user variable item.
sql/item_func.h:
Add method to return the precision of a user variable.
sql/item_sum.cc:
Use new method for allocating a new decimal field.
sql/my_decimal.h:
The integer part could be improperly calculated for a decimal
with 31 digits in the fractional part.
sql/sql_select.cc:
Use the new method, which truncates the integer or fractional
parts as needed.
- Add conditionals for bundled zlib and innodb plugin.
- Apply patch from bug#46834 to install the test suite in RPMs.
- Add plugins to RPMs. Disable example plugins.
INSERT ... SELECT ...
The problem was that when bulk insert is used on an empty
table/partition, it disables the indexes for better
performance, but in this specific case it also tries
to read from that partition using an index, which is
not possible since it has been disabled.
The solution was to allow index reads on disabled indexes
if there are no records.
Also reverted the patch for bug#38005, since that was a workaround
in the partitioning engine instead of a fix in myisam.
mysql-test/r/partition.result:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
updated result file
mysql-test/t/partition.test:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
Added testcase
sql/ha_partition.cc:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
reverted the patch for bug#38005, since that was a workaround
for this problem, not needed after fixing it in myisam.
storage/myisam/mi_search.c:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
Return HA_ERR_END_OF_FILE instead of HA_ERR_WRONG_INDEX
when there are no rows.
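As an illustration of the kind of check involved, here is a hypothetical,
self-contained C++ sketch; the structures and function are invented and
are not the actual mi_search.c code, and the error numbers are hard-coded
only to keep the sketch standalone (124 is the value the user saw here).

  #include <cstdio>

  // Invented error codes and structures, for illustration only.
  static const int HA_ERR_WRONG_INDEX= 124;  // the "error 124" from the bug
  static const int HA_ERR_END_OF_FILE= 137;  // "no (more) rows"
  static const long NO_ROOT_PAGE= -1;

  struct KeyInfo
  {
    long root;  // root page of the index tree; NO_ROOT_PAGE if empty
  };

  // Searching an index that has no entries should report "end of file"
  // (the caller then just sees an empty result) instead of a wrong-index
  // error that bubbles up as error 124 on INSERT ... SELECT.
  int search_key(const KeyInfo &key)
  {
    if (key.root == NO_ROOT_PAGE)
      return HA_ERR_END_OF_FILE;  // no rows is not an error condition
    // ... normal B-tree descent would go here ...
    return 0;
  }

  int main()
  {
    KeyInfo empty_index= { NO_ROOT_PAGE };
    std::printf("%d\n", search_key(empty_index));  // prints 137
    return 0;
  }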
(temporary) TABLE, crash
Problem: if one has an open "HANDLER t1", a subsequent "TRUNCATE t1"
doesn't close the handler and leaves the handler table hash in an
inconsistent state, which may lead to a server crash.
Fix: TRUNCATE should implicitly close all open handlers.
Doc. request: the fact should be described in the manual accordingly.
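The idea can be sketched with an invented in-memory registry of open
handlers; this is not the server's actual data structure, just a
self-contained C++ illustration of why TRUNCATE must drop the entry.

  #include <cstdio>
  #include <string>
  #include <unordered_map>

  // Invented stand-in for the per-connection hash of open HANDLERs,
  // keyed by table name.
  struct Session
  {
    std::unordered_map<std::string, int /* handler state */> open_handlers;

    void handler_open(const std::string &table) { open_handlers[table]= 1; }

    // TRUNCATE re-creates the table, so any open HANDLER on it must be
    // closed first; otherwise the hash keeps pointing at a dropped table
    // and a later HANDLER ... READ can crash the server.
    void truncate(const std::string &table)
    {
      open_handlers.erase(table);  // implicitly close the open handler
      // ... the actual truncation (drop + re-create) would follow ...
    }
  };

  int main()
  {
    Session s;
    s.handler_open("t1");
    s.truncate("t1");
    std::printf("open handlers on t1: %zu\n",
                s.open_handlers.count("t1"));  // prints 0
    return 0;
  }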
mysql-test/r/handler_myisam.result:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- test result.
mysql-test/t/handler_myisam.test:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- test case.
sql/sql_delete.cc:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- remove all truncated tables from the HANDLER's hash.
This is a partial correction to the original fix for bug#37098,
"Get rid of 'Installed (but unpackaged)' files in the RPM build",
which used a wrong variable.
man/Makefile.am:
Correction to the original fix:
The variable to use is "$(mandir)"; "$(manlibdir)" was wrong.
view manipulations
The bespoke flag was not properly reset after the last call to
fill_record. Fixed by resetting it in the caller, mysql_update.
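The pattern behind the fix can be sketched as follows; the flag and
function names are assumptions made for this illustration, not the exact
server code. A per-row flag set by fill_record must be cleared by the
caller once the last row has been processed, otherwise it leaks into a
later statement on the same table.

  #include <cstdio>

  // Invented stand-in for the table object and the flag in question.
  struct Table
  {
    bool auto_increment_field_not_null;  // set while filling a row
  };

  // fill_record() sets the flag when an explicit value is supplied for
  // the auto-increment column of the current row.
  void fill_record(Table *t, bool explicit_autoinc_value)
  {
    if (explicit_autoinc_value)
      t->auto_increment_field_not_null= true;
    // ... copy the remaining field values ...
  }

  // The caller (think mysql_update) must reset the flag after the last
  // call to fill_record(), so the next statement starts clean.
  void update_rows(Table *t)
  {
    for (int row= 0; row < 3; ++row)
      fill_record(t, /* explicit_autoinc_value= */ row == 0);
    t->auto_increment_field_not_null= false;  // the reset added by the fix
  }

  int main()
  {
    Table t= { false };
    update_rows(&t);
    std::printf("%d\n", t.auto_increment_field_not_null);  // prints 0
    return 0;
  }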
mysql-test/r/auto_increment.result:
Bug#46616: Test result.
mysql-test/t/auto_increment.test:
Bug#46616: Test case.
sql/sql_update.cc:
Bug#46616: Fix.
If the SQL Thread fails to execute an event due to a temporary error (e.g.
ER_LOCK_DEADLOCK) and the option "--slave_transaction_retries" is set, the SQL
Thread should not be aborted and the transaction should be restarted from the
beginning and re-executed.
Unfortunately, a wrong interpretation of THD::is_fatal_error was preventing
this behavior. In a nutshell, "this variable is set to TRUE if an execution of
a compound statement cannot continue. In particular, it is used to disable
access to the CONTINUE or EXIT handlers of stored routines." So even temporary
errors may have this variable set.
To fix the bug, we have done the following:
DBUG_ENTER("has_temporary_error");
- if (thd->is_fatal_error)
- DBUG_RETURN(0);
-
DBUG_EXECUTE_IF("all_errors_are_temporary_errors",
if (thd->main_da.is_error())
{
view that has Group By
Table access rights checking function check_grant() assumed
that no view is opened when it's called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking whether a view is already materialized and, if
it is, checking table level grants using the original table name
(not the name of the materialized temp table).
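A hypothetical, self-contained C++ sketch of that name selection; the
types and names are invented for illustration and are not the server's
grant-checking code.

  #include <cstdio>
  #include <set>
  #include <string>

  // Invented stand-in: a table reference that may already point at an
  // internal temporary table created by materializing a view.
  struct TableRef
  {
    std::string table_name;     // e.g. "#sql_tmp_1" once materialized
    std::string original_name;  // the view name as written in the query
    bool materialized;
  };

  // Grants are recorded against names the user actually knows about.
  static const std::set<std::string> table_level_grants= { "v_inner" };

  // The grant lookup must use the original name when the reference has
  // been replaced by a materialized temp table; otherwise it misses the
  // table level grant and access is wrongly denied.
  bool check_grant(const TableRef &ref)
  {
    const std::string &name= ref.materialized ? ref.original_name
                                              : ref.table_name;
    return table_level_grants.count(name) != 0;
  }

  int main()
  {
    TableRef inner_view= { "#sql_tmp_1", "v_inner", true };
    std::printf("%s\n", check_grant(inner_view) ? "granted" : "denied");
    return 0;
  }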