Binlogging of an INSERT into an auto-increment BLACKHOLE table ignored
an explicit SET INSERT_ID.
Fixed by refining the blackhole engine's insert method to call
update_auto_increment(), which prepares binlogging of the INSERT query
with the preceding SET INSERT_ID.
Note that as the engine does not store any actual data, one has to
explicitly provide the server with the value of the auto-increment
column via SET INSERT_ID. Otherwise binlogging will happen with the
default SET INSERT_ID=1.
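A minimal sketch of the intended usage (table and value are hypothetical):

  CREATE TABLE bh (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=BLACKHOLE;
  SET INSERT_ID = 100;          -- supply the value the engine cannot generate
  INSERT INTO bh VALUES (NULL); -- binlogged together with SET INSERT_ID=100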
Bug #18453 Warning/error message if there is a mismatch between ...
There were three problems:
1. the reported lack of warnings for the BEFORE syntax of PURGE;
2. the similar lack of warnings for the TO syntax;
3. incompatible behaviour between the two, in that the latter blanked
   out index records regardless of whether the actual file they refer
   to was present, while the former gave up at the first mismatch.
Fixed by adding generation of the warnings and by synchronizing the
logic of purge_logs() and purge_logs_before_date().
my_stat() is now called in both branches of purge_logs() (responsible
for the TO syntax of PURGE), matching what the BEFORE syntax already
did. If there is no actual binlog file, my_stat() returns NULL and
my_delete() is not invoked.
A critical error is reported to the user if information about a file
listed in the index could not be retrieved, or the file could not be
deleted, with a system error code other than ENOENT.
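Both syntaxes of the statement are affected (log name and date below
are hypothetical):

  PURGE MASTER LOGS TO 'mysql-bin.000042';
  PURGE MASTER LOGS BEFORE '2008-01-01 00:00:00';
  -- Both now warn about index entries whose binlog file is missing,
  -- instead of silently skipping or giving up at the first mismatch.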
using a trig in SP
For all 5.0 versions, and 5.1 versions up to 5.1.12 exclusive, when a
stored routine or trigger caused an INSERT into an AUTO_INCREMENT
column, the generated AUTO_INCREMENT value was not written into the
binary log. This means that if a statement did not generate an
AUTO_INCREMENT value itself, there was no Intvar event (SET
INSERT_ID) associated with it, even if a stored routine or trigger it
invoked generated such a value. Meanwhile, when executing a stored
routine or trigger, the INSERT_ID value was ignored even if one was
available from a SET INSERT_ID statement.
Starting from MySQL 5.1.12, the generated AUTO_INCREMENT value is
written into the binary log, and the value will be used if
available when executing the stored routine or trigger.
Prior to the fix of this bug in MySQL 5.0 and in MySQL versions
before 5.1.12 (referred to as the buggy versions below), when a
statement that generated an AUTO_INCREMENT value at the top level was
executed in the body of an SP, all statements in the SP after it were
treated as if they too had generated an AUTO_INCREMENT value at the
top level. When a statement generated an AUTO_INCREMENT value not at
the top level but in a function/trigger called by it, an erroneous
Intvar event was associated with the statement. This erroneous
INSERT_ID value did not cause problems when replicating between
masters and slaves running 5.0.x or versions before 5.1.12, because
the erroneous INSERT_ID value was not used when executing
functions/triggers. But when replicating from the buggy versions to
5.1.12 or newer, which do use the INSERT_ID value in
functions/triggers, the erroneous value is used, causing duplicate
entry errors that stop the slave.
The patch for 5.0 fixes it to not generate the erroneous Intvar
event; another patch for 5.1 fixes it to ignore the SET INSERT_ID
value when executing functions/triggers if the slave is replicating
from a master running one of the buggy versions.
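A sketch of the kind of statement involved (all names hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT AUTO_INCREMENT PRIMARY KEY);
  CREATE TRIGGER tr AFTER INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (NULL);
  -- The INSERT below generates an AUTO_INCREMENT value only inside the
  -- trigger, not at the top level, so in the buggy versions it could
  -- get an erroneous Intvar (SET INSERT_ID) event attached to it.
  INSERT INTO t1 VALUES (1);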
Problem: if the slave I/O thread is attempting to connect,
STOP SLAVE waits for the attempt to finish, which may take a long
time.
Fix: don't wait; stop the slave immediately.
MASTER_POS_WAIT() return values are different than expected when the
server is not a slave: it returns -1 instead of NULL.
Fixed by correcting st_relay_log_info::wait_for_pos() to return the
proper value in the case where the rli info is not initialized.
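For illustration (the binlog name is hypothetical):

  -- On a server that is not configured as a slave:
  SELECT MASTER_POS_WAIT('mysql-bin.000001', 4);
  -- Expected result: NULL; the bug made this return -1.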
sporadically
Under some circumstances, mysql_insert_id() could return a wrong
value after SELECT ... INSERT. This could happen when the last
SELECT ... INSERT did not involve an AUTO_INCREMENT column, but the
value of mysql_insert_id() had been changed by some previous
statements.
Fixed by checking the value of thd->insert_id_used in
select_insert::send_eof() and returning 0 for mysql_insert_id() if it
is not set.
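A sketch of the statement sequence involved (table names are
hypothetical; select_insert handles INSERT ... SELECT):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (NULL);     -- sets a non-zero insert id
  INSERT INTO t2 SELECT a FROM t1;  -- no AUTO_INCREMENT column involved
  -- mysql_insert_id() should now report 0, not the stale value from
  -- the earlier statement.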
- Apply Eric Bergen's patch: in join_read_always_key(), move ha_index_init() call
to before the late NULLs filtering code.
- Backport function comments from 6.0.
and Item_direct_ref constructor calls.
The order of the ref->field_name and ref->table_name arguments in
the Item_ref and Item_direct_ref constructor calls in the
fix_inner_refs function is inverted.
Added a new function, test_if_data_home_dir(), which checks that a
path does not contain the MySQL data home directory.
Use of the MySQL data home directory in
DATA DIRECTORY & INDEX DIRECTORY is disallowed.
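For example (assuming, hypothetically, a data directory of
/var/lib/mysql):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY  = '/var/lib/mysql/test'
    INDEX DIRECTORY = '/var/lib/mysql/test';
  -- Now rejected, because the paths point inside the data home directory.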
Assertion `0' failed
If a ROW item is part of an expression that also has aggregate
function calls (COUNT/SUM/AVG...), a "splitting" with the
Item::split_sum_func2 function is applied to that ROW item.
The current implementation of Item::split_sum_func2 replaces this
Item_row with a newly created Item_aggregate_ref reference to it.
Then the row cache tries to work with the Item_aggregate_ref object
as with an Item_row object: the row cache calls row-emulation methods
such as cols and element_index. Item_aggregate_ref (like its parent
Item_ref) inherits dummy implementations of those methods from the
hierarchy root Item, and calls to them lead to failed assertions and
wrong data output.
The row-emulation virtual functions (cols, element_index, addr,
check_cols, null_inside and bring_value) of Item_ref have been
overridden to forward calls to the underlying referenced item.
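A sketch of the kind of query shape described (not the exact reported
test case; names are hypothetical):

  -- A ROW item inside an expression that also contains an aggregate:
  SELECT ROW(1, 2) = ROW(1, MIN(a)) FROM t1;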
The problem is that passing anything other than an integer to a LIMIT
clause in a prepared statement would fail. This limitation was
introduced to avoid replication problems (e.g. replicating the
statement with a string argument would cause a parse failure on the
slave).
The solution is to convert arguments to the LIMIT clause to an integer
value and use this converted value when persisting the query to the
log.
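For example:

  PREPARE stmt FROM 'SELECT 1 LIMIT ?';
  SET @a = '1';           -- a string, not an integer
  EXECUTE stmt USING @a;  -- the argument is now converted to an integer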
NAME_CONST('whatever', -1) * MAX(whatever) bombed since -1 was
not seen as a constant, but as FUNCTION_UNARY_MINUS(constant),
while we were at the same time pretending it was a basic constant
item. This confused the aggregate handlers in exciting ways.
We now make NAME_CONST() behave more consistently.
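The failing shape (column and table names are hypothetical):

  SELECT NAME_CONST('myname', -1) * MAX(a) FROM t1;
  -- -1 is parsed as unary minus applied to the constant 1, yet
  -- NAME_CONST() claimed to be a basic constant item.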
Was a double free of the Unique member of Item_func_group_concat.
This was not causing a crash because Unique is a descendant of
Sql_alloc.
Fixed to free the Unique only if it was allocated for the instance
of Item_func_group_concat it was referenced from.
There was no way to return an error from the client library
if no MYSQL connection was established.
Added variables to store that kind of error and made functions
like mysql_error(NULL) return them.
documentation
While the manual mentions FRAC_SECOND only for the TIMESTAMPADD()
function, it was also possible to use FRAC_SECOND with DATE_ADD(),
DATE_SUB() and +/- INTERVAL.
Fixed the parser to match the manual, i.e. using FRAC_SECOND for
anything other than TIMESTAMPADD()/TIMESTAMPDIFF() now produces a
syntax error.
Additionally, the patch allows MICROSECOND to be used in TIMESTAMPADD/
TIMESTAMPDIFF and marks FRAC_SECOND as deprecated.
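For example:

  SELECT TIMESTAMPADD(FRAC_SECOND, 1, NOW());     -- accepted, deprecated
  SELECT TIMESTAMPADD(MICROSECOND, 1, NOW());     -- now also accepted
  SELECT DATE_ADD(NOW(), INTERVAL 1 FRAC_SECOND); -- now a syntax error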
suite)
Under some circumstances a combination of aggregate functions and
GROUP BY in a SELECT query over a VIEW could lead to incorrect
calculation of the result type of the aggregate function. This in
turn could result in incorrect results, or assertion failures on debug
builds.
Fixed by changing the logic in Item_sum_hybrid::fix_fields() so that
the argument's item is dereferenced before calling its type() method.
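A sketch of the query shape (not the exact reported test case; names
are hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  SELECT MAX(a) FROM v1 GROUP BY a;  -- result type of MAX is now correct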
The problem is that CREATE VIEW statements inside prepared statements
were not being expanded during the prepare phase, which led to objects
not being allocated in the appropriate memory arenas.
The solution is to perform the validation of CREATE VIEW statements
during the prepare phase of a prepared statement. The validation
during the prepare phase ensures that transformations of the parsed
tree will use the permanent arena of the prepared statement.
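For example:

  PREPARE stmt FROM 'CREATE VIEW v1 AS SELECT 1';
  EXECUTE stmt;
  DROP VIEW v1;
  EXECUTE stmt;  -- re-execution relies on objects in the permanent arena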
a table name.
The problem was that fill_defined_view_parts() did not return
an error when the object being altered was a table rather than a
view. That happened if the table was already in the table cache:
in that case, open_table() returned a non-NULL value (a valid
TABLE instance from the cache).
The fix is to ensure that an error is thrown even if the table
is in the cache.
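A sketch of the scenario (names are hypothetical):

  CREATE TABLE t1 (a INT);
  SELECT * FROM t1;               -- puts t1 into the table cache
  ALTER VIEW t1 AS SELECT 2 AS a; -- must fail: t1 is a table, not a view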
(This is a backport of the original patch for 5.1)
Executing a prepared statement associated with a materialized
cursor yields to the client a metadata packet with wrong table
and database names. The problem was occurring because the server
was sending the name of the temporary table used by the cursor
instead of the name of the original table. The same problem
occurred when selecting from views, in which case the table name was
being sent instead of the name of the view.
The solution is to fill the item list from the temporary table while
preserving the table and database names of the original fields. This
is achieved by tweaking Select_materialize to accept a pointer to
the Materialized_cursor class, which contains the item list to be
filled.
and ps-protocol
Finding a routine should be a transparent operation as
far as the binary log is concerned.
But it was influencing the binary log because of the TIMESTAMP
column in the proc table.
Fixed by preserving and restoring the time_zone usage flag when
searching for a stored routine in the proc table.
- Replace per-thread signal() calls with SetUnhandledExceptionFilter().
  The only remaining signal() is for SIGABRT (the default abort()
  handler in VS2005 is broken, i.e. it removes the user exception filter).
- Remove MessageBox() calls from error handling code.
- Windows port for print_stacktrace() and write_core().
- Cleanup: removed some unused functions.
The problem is not about intervals and doesn't actually cause a full
table scan.
We have an optimization for DISTINCT: with
'DISTINCT field_from_first_join_table' we don't need to read all the
rows from the joined table once one conforming row has been found.
It stopped working in 5.0 because we return NESTED_LOOP_OK when we
come upon that case in evaluate_join_record(), and that doesn't break
the record-reading loop in sub_select().
Fixed by returning NESTED_LOOP_NO_MORE_ROWS in this case.
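A sketch of the affected query shape (names are hypothetical):

  SELECT DISTINCT t1.a FROM t1, t2 WHERE t1.a = t2.a;
  -- Once one matching t2 row is found for a t1 row, the remaining
  -- t2 rows need not be read.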