Make the maximum blob size 2**32-1, regardless of word size.
Fix the failure of TIMESTAMP with a size of 2**31-1: the method of
rounding up to the nearest even number would overflow.
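A minimal sketch of the arithmetic at fault, under the assumption that
the length was held in a signed 32-bit value: rounding 2**31-1 up to the
nearest even number wraps around, while an unsigned 32-bit length has
room up to 2**32-1.

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t len = 2147483647;  /* 2**31-1, the failing size */
      /* Round up to the nearest even number; the +1 wraps a signed
         32-bit value (shown via unsigned math so the illustration
         itself stays well defined). Typically yields INT32_MIN. */
      int32_t rounded_bad = (int32_t) (((uint32_t) len + 1u) & ~1u);
      /* With an unsigned 32-bit length there is no overflow here. */
      uint32_t rounded_ok = ((uint32_t) len + 1u) & ~1u;
      printf("%d vs %u\n", rounded_bad, rounded_ok);
      return 0;
    }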
Correct build problems with the embedded server on Windows.
Makefile.am:
Install .sym or mysqld-debug if it exists
query_cache_debug.test, query_cache_debug.result:
Set a more reasonable query cache size (bug#35749)
CMakeLists.txt:
Added missing stacktrace.c
Based on a contributed patch from Martin Friebe, CLA from 2007-02-24.
The parser lacked support for field sizes beyond what fits in a signed
long, when the limit should be 2**32-1.
We now correct that limitation, and also make the error handling
consistent for casts.
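A hedged sketch of the kind of range check involved, with hypothetical
names (the real parser code differs): parse the length as an unsigned
64-bit value and reject anything above 2**32-1 with the same error that
casts use.

    #include <cstdlib>
    #include <cstdint>
    #include <cerrno>

    /* Hypothetical helper: accept field sizes up to 2**32-1 instead of
       stopping at the signed long limit, and report one consistent
       out-of-range error for fields and casts alike. */
    static bool parse_field_length(const char *tok, uint32_t *out) {
      errno = 0;
      char *end = nullptr;
      unsigned long long v = strtoull(tok, &end, 10);
      if (errno != 0 || end == tok || *end != '\0' || v > 0xFFFFFFFFull)
        return false;  /* same error path as for casts */
      *out = (uint32_t) v;
      return true;
    }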
---
Fix minor complaints of Marc Alff, for patch against B-g#15776.
---
Merge zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my50-bug15776
into zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my51-bug15776
---
Merge zippy.cornsilk.net:/home/cmiller/work/mysql/bug15776/my51-bug15776
into zippy.cornsilk.net:/home/cmiller/work/mysql/mysql-5.1-build
---
testing
If a binlog file is manually replaced with a directory of the same name,
the internal purging did not handle the error of deleting the file, so
that eventually a post-execution guard fired an assert.
Fixed by reusing a snippet of code from the fix for bug#18199 to
tolerate absence of the file, but no other error, when attempting to
delete it (see the sketch below).
The same applies to the index file deletion.
The cset carries pieces of manual merging.
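A minimal POSIX sketch of the tolerant delete, under the assumption that
only a missing file is acceptable; a name that now refers to a directory
fails with a real error (EISDIR or EPERM) and is propagated instead of
tripping the later assert.

    #include <cerrno>
    #include <unistd.h>

    /* Hypothetical helper, not the server's actual function. */
    static int purge_delete_file(const char *name) {
      if (unlink(name) == 0 || errno == ENOENT)
        return 0;   /* gone already: tolerated, as in the bug#18199 fix */
      return errno; /* any other error (e.g. EISDIR) is reported */
    }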
The problem was that the LOAD DATA code (sql_load.cc) didn't take into
account that there may be items representing references to other
columns, which is a usual case in views. The crash happened because an
Item_direct_view_ref was cast to Item_user_var_as_out_param,
which is not a base class.
The fix is to
1) Handle references properly;
2) Ensure that an item is treated as a user variable only when
it is indeed a user variable;
3) Report an error if LOAD DATA is used to load data into a
non-updatable column.
A sketch of the resulting checks follows.
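A heavily simplified, hypothetical sketch of the pattern (the item class
names exist in the server, but the structure below is invented for
illustration): follow references down to the real item and check its
dynamic type before any downcast.

    /* Simplified stand-ins for the server's Item hierarchy. */
    struct Item {
      enum Type { FIELD_ITEM, REF_ITEM, USER_VAR_ITEM };
      virtual Type type() const = 0;
      virtual Item *real_item() { return this; }  /* refs override this */
      virtual ~Item() {}
    };

    /* Returns false when LOAD DATA must report "not updatable". */
    static bool check_load_target(Item *item) {
      Item *real = item->real_item();          /* 1) resolve references */
      if (real->type() == Item::USER_VAR_ITEM) /* 2) type-check first,  */
        return true;                           /*    then treat as @var */
      if (real->type() == Item::FIELD_ITEM)    /*    an updatable column */
        return true;
      return false;                            /* 3) otherwise: error   */
    }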
after a few delete statements
Problem: changing a file's size might require that the file be
unmapped beforehand.
Fix: unmap the file before changing its size.
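A minimal POSIX sketch of the order of operations, assuming an mmap()ed
file (the affected platform-specific code may differ): unmap, resize,
then map again.

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>

    /* Hypothetical helper: returns MAP_FAILED on error. */
    static void *remap_resized(int fd, void *map, size_t old_size,
                               size_t new_size) {
      if (map != nullptr)
        munmap(map, old_size);              /* unmap first ...          */
      if (ftruncate(fd, (off_t) new_size))  /* ... then change the size */
        return MAP_FAILED;
      return mmap(nullptr, new_size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);       /* ... and map the new size */
    }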
The bug allowed multiple executing transactions working with
non-transactional tables to interfere with each other by interleaving
the events of different transactions.
The bug is fixed by writing non-transactional events to the transaction
cache and flushing the cache to the binary log at statement commit. To
mimic the behavior of normal statement-based replication, we flush the
transaction cache in row-based mode when there are no committed
statements in the transaction cache, which means we are committing the
first one. It will then be written to the binary log as a
"mini-transaction" with just the rows for the statement.
Note that the changes here do not take effect when building the server
with HAVE_TRANSACTIONS set to false, but it is not clear whether that
was possible before this patch either.
For row-based logging, we also have that with AUTOCOMMIT=1 the code now
always generates a BEGIN/COMMIT pair for single statements, or a
BEGIN/ROLLBACK pair in the case of non-transactional changes in a
statement that was rolled back. Note that in the case where changes to a
non-transactional table cause a rollback due to an error, the statement
will now be logged with a BEGIN/ROLLBACK pair, even though some changes
have been committed to the non-transactional table.
The code for executing indexed ORDER BY was not setting all the
internal fields correctly when choosing to execute ORDER BY over an
index.
Fixed by changing the access method to one that will use the quick
indexed access, if one is selected, when indexed ORDER BY is chosen.
Mixing aggregate functions and non-grouping columns is not allowed in
the ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't
thrown because the check was insufficient.
In order to check more thoroughly, the new algorithm employs a list of
outer fields used in a sum function and a SELECT_LEX::full_group_by_flag.
Each non-outer field is checked to find out whether it is aggregated or
not, and the current select is marked accordingly.
All outer fields that are used under an aggregate function are added to
the Item_sum::outer_fields list and later checked by the
Item_sum::check_sum_func function.
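A heavily simplified, hypothetical sketch of the core rule (the real
implementation works on Item trees and SELECT_LEX, not on a flat list):
a select mixing aggregated and non-aggregated non-outer fields violates
ONLY_FULL_GROUP_BY, while outer fields under an aggregate are deferred
to the later check.

    #include <vector>

    struct FieldUse {
      bool is_outer;         /* belongs to an outer select          */
      bool under_aggregate;  /* referenced inside a sum function    */
    };

    static bool violates_only_full_group_by(
        const std::vector<FieldUse> &uses) {
      bool aggregated = false, non_aggregated = false;
      std::vector<const FieldUse *> outer_fields;  /* checked later, cf.
                                             Item_sum::check_sum_func */
      for (const FieldUse &f : uses) {
        if (f.is_outer) {
          if (f.under_aggregate)
            outer_fields.push_back(&f);  /* cf. Item_sum::outer_fields */
          continue;
        }
        if (f.under_aggregate)
          aggregated = true;         /* mark the current select ... */
        else
          non_aggregated = true;     /* ... accordingly             */
      }
      return aggregated && non_aggregated;  /* mixing is the error */
    }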
Fixes:
- Bug #34920: auto_increment resets to 1 on foreign key creation
We need to use/inherit the passed-in autoinc counter for ALTER TABLE
statements too.
A view defined as SELECT ... FROM DUAL WHERE ... has valid syntax, but
using such a view in a SELECT or in SHOW CREATE VIEW causes an
unexpected syntax error.
The server omits the FROM DUAL clause when storing the view body string
in a .frm file for further evaluation.
However, the syntax of a SELECT-without-FROM query is more restrictive
than the SELECT ... FROM DUAL syntax, and doesn't allow the WHERE
clause.
NOTE: this syntax difference is not documented.
The view registration procedure has been modified to preserve the
original structure of the view's body.
The bug is a regression introduced in 5.1 by the patch for bug28404.
Under some circumstances test_if_skip_sort_order() could leave some
data structures in an inconsistent state, so that some parts of the
code could assume that the execution strategy selected for GROUP
BY/DISTINCT was a loose index scan (e.g.
JOIN_TAB::is_using_loose_index_scan()), while the strategy actually
chosen was an ordered index scan, which led to wrong data being
returned.
Fixed test_if_skip_sort_order() so that when changing the type for a
join table, select->quick is reset not only for EXPLAIN but for the
actual join execution as well, so as not to confuse code that depends
on its value to determine the chosen GROUP BY/DISTINCT strategy.
When trying to get the requested amount of memory for the key buffer,
an out-of-memory error could be signaled if one of the tentative
allocations failed. Later the server would crash (debug assert) when
trying to send an OK packet with an error set.
The solution is to signal the error only if all tentative allocations
for the key buffer fail.
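A hypothetical sketch of the allocation strategy described above,
assuming the key buffer may fall back to progressively smaller sizes:
a failed attempt signals nothing unless every attempt fails.

    #include <cstdlib>

    /* Hypothetical helper: *got receives the size actually obtained;
       minimum must be >= 1. */
    static void *alloc_key_buffer(size_t requested, size_t minimum,
                                  size_t *got) {
      for (size_t size = requested; size >= minimum; size /= 2) {
        if (void *buf = malloc(size)) {
          *got = size;  /* success: earlier failures raise no error */
          return buf;
        }
      }
      *got = 0;
      return nullptr;   /* only now is out-of-memory signaled */
    }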
for the YEAR data type.
The problem was that, for some unknown reason, 0 was not allowed as a
default value for the YEAR data type. That restriction was coded before
BK. However, the manual does not say a word about such a limitation,
and it also looks inconsistent with the other data types.
The fix is to allow 0 as a default value.
Fixed the parser to reject SQLSTATE '00000', since '00000' is the
successful-completion condition and cannot be caught by an exception
handler in SQL.
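A tiny hypothetical sketch of the check, assuming the SQLSTATE literal
arrives as a NUL-terminated five-character string:

    #include <cstring>

    /* '00000' means successful completion and can never be caught,
       so a handler declared for it is rejected at parse time. */
    static bool valid_handler_sqlstate(const char *s) {
      return strlen(s) == 5 && strcmp(s, "00000") != 0;
    }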
correctly - crashes server!
Creating a federated table with a connect string containing an empty
(zero-length) host name, while the port evaluates to 0 (the port is
incorrect, omitted, or 0), crashes the server.
This happens because federated calls strcmp() with a NULL pointer.
Fixed by avoiding the strcmp() call if the hostname is set to NULL.
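A minimal sketch of the guard, with a hypothetical helper name: never
hand a NULL pointer to strcmp(), whose behavior is undefined in that
case.

    #include <cstring>

    /* Hypothetical helper: two missing hostnames compare equal. */
    static bool hostname_equal(const char *a, const char *b) {
      if (a == nullptr || b == nullptr)
        return a == b;          /* avoids strcmp(NULL, ...) */
      return strcmp(a, b) == 0;
    }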
Binlogging of an insert into an auto-increment BLACKHOLE table ignored
an explicit SET INSERT_ID.
Fixed by refining the blackhole engine's insert method to call
update_auto_increment(), which prepares binlogging of the insert query
with the preceding SET INSERT_ID.
Note that, as the engine does not store any actual data, one has to
explicitly provide the server with the value of the auto-increment
column via SET INSERT_ID; otherwise binlogging will happen with the
default SET INSERT_ID=1.
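A simplified, hypothetical sketch of the refined insert path
(update_auto_increment() is a real handler method; the surrounding
structure below is invented): the auto-increment bookkeeping runs even
though the row itself is discarded.

    /* Minimal stand-in for the handler interface. */
    struct handler {
      virtual bool has_auto_increment() const = 0;  /* assumed helper   */
      virtual int update_auto_increment() = 0;      /* binlog bookkeeping */
      virtual ~handler() {}
    };

    static int blackhole_write_row(handler *h) {
      if (h->has_auto_increment()) {
        if (int error = h->update_auto_increment())
          return error;   /* picks up an explicit SET INSERT_ID */
      }
      return 0;           /* the row data itself is thrown away */
    }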
Heap I_S tables are swapped out to disk after plan refinement. Thus,
READ_RECORD::file will still point to the (deleted) heap handler at the
start of execution. This causes a segmentation fault if join buffering
is used and the query is a star query whose result is found to be empty
before some table is accessed. In that case the table has not been
initialized (i.e. has not had its READ_RECORD re-initialized) before
the cleanup routine tries to close the handler.
Fixed by updating READ_RECORD::file when changing the handler.
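A tiny hypothetical sketch of the fix's shape (READ_RECORD and its file
member are real; the function below is invented): whenever the table's
handler is replaced, refresh the cached pointer so cleanup cannot touch
the freed heap handler.

    struct handler;                       /* opaque storage-engine handle */
    struct READ_RECORD { handler *file; };

    /* Hypothetical hook called when the heap table is swapped to disk. */
    static void on_handler_change(READ_RECORD *info, handler *new_handler) {
      info->file = new_handler;  /* was left dangling on the old one */
    }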