Fixed compile-pentium64 scripts
Fixed wrong estimate of update_with_key_prefix in sql-bench
Merge bk-internal.mysql.com:/home/bk/mysql-5.1 into mysql.com:/home/my/mysql-5.1
Fixed unsafe define of uint4korr()
Fixed that --extern works with mysql-test-run.pl
Small trivial cleanups
This also fixes a bug in counting the number of rows that are updated when we have many simultaneous queries
Move all connection handling and the command execution main loop from sql_parse.cc to sql_connection.cc
Split handle_one_connection() into reusable sub functions.
Split create_new_thread() into reusable sub functions.
Added thread_scheduler; Preliminary interface code for future thread_handling code.
Use 'my_thread_id' for internal thread ids
Make thr_alarm_kill() depend on thread_id instead of thread
Make thr_abort_locks_for_thread() depend on thread_id instead of thread
In store_globals(), set my_thread_var->id to be thd->thread_id.
Use my_thread_var->id as basis for my_thread_name()
The above changes make the coupling between THD and threads looser.
Added a lot of DBUG_PRINT() and DBUG_ASSERT() calls
Fixed compiler warnings
Fixed core dumps when running with --debug
Removed setting of signal masks (was never used)
Made event code call pthread_exit() (portability fix)
Fixed that event code doesn't call DBUG_xxx functions before my_thread_init() is called.
Made handling of thread_id and thd->variables.pseudo_thread_id uniform.
Removed one common 'not freed memory' warning from mysqltest
Fixed a couple of use-of-uninitialized-value warnings (unlikely cases)
Suppress compiler warnings from bdb and (for the moment) warnings from ndb
it doesn't select.
This bug was fixed along with bug #16861: User defined variable can
have a wrong value if a tmp table was used.
The fix there consisted of Item_func_set_user_var overriding the method
Item::save_in_field. Consider the query from the test case:
INSERT INTO foo( bar, baz )
SELECT
  bar,
  @newBaz := 1 + baz
FROM
  foo
WHERE
  quux <= 0.1;
Here the assignment expression '@newBaz := 1 + baz' is represented by an
Item_func_set_user_var. Its member method save_in_field, which writes the
value of this assignment into the result field, writes the val_xxx() value,
which is not updated at this point. In the fix introduced by the patch,
the save_in_field method reads the actual variable value instead.
See also comment for
ChangeSet@1.2368.1.3, 2007-01-09 23:24:56+03:00, evgen@moonbone.local +4 -0
and comment for
Item_func_set_user_var::save_in_field (item_func.cc)
created for sorting.
Any outer reference in a subquery was represented by an Item_field object.
If the outer select employs a temporary table all such fields should be
replaced with fields from that temporary table in order to point to the
actual data. This replacement wasn't done and that resulted in a wrong
subquery evaluation and a wrong result of the whole query.
Now any outer field is represented by two objects: an Item_field placed
in the outer select and an Item_outer_ref in the subquery. The Item_field
object is processed as a normal field and the reference to it is saved in
the ref_pointer_array. Thus the Item_outer_ref always references the
correct field. The original field is replaced with a reference in the
Item_field::fix_outer_field() function.
A new function, fix_inner_refs(), is added to fix fields referenced from
inner selects and to fix references (Item_ref objects) to these fields.
The new Item_outer_ref class is a descendant of the Item_direct_ref class.
It additionally stores a reference to the original field and is designed
to behave more like a field.
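A minimal sketch of a query shape where this matters; the tables t1, t2
and their columns are hypothetical:

SELECT t1.a,
       (SELECT MAX(t2.b) FROM t2 WHERE t2.a = t1.a) AS m  -- t1.a is the outer reference
FROM t1
ORDER BY t1.a;

When the outer select needs a temporary table (for example for sorting or
grouping), the outer reference t1.a inside the subquery has to be resolved
through the ref_pointer_array so that it ends up pointing at the field of
that temporary table rather than at the original one.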
- The starting time of a query sent during bootstrapping wasn't
initialized, so it defaulted to 0. This value was later used by the
NOW item and was translated to 1970-01-01 11:00:00.
- Marking the time with thd->set_time() before the call to
mysql_parse() resolves this issue.
- set_time() was refactored to be part of the thd->init_for_queries()
process.
Several problems fixed:
1. There was a "catch-all" context initialization in setup_tables()
that was causing the table that we insert into to be visible in the
SELECT part of an INSERT .. SELECT .. statement with no tables in
its FROM clause. This catch-all was there to make sure that contexts
left uninitialized in various parts of the code would not stay
uninitialized.
Fixed by removing the "catch-all" statement and initializing the
context in the parser.
2. Incomplete name resolution context when resolving the right-hand
values in the ON DUPLICATE KEY UPDATE ... part of an INSERT ... SELECT ...
caused columns from NATURAL JOIN/JOIN USING table references in the
FROM clause of the select to be unavailable.
Fixed by establishing a proper name resolution context.
3. When setting up the special name resolution context for problem 2
there was no check for cases where an aggregate function without a
GROUP BY effectively makes the column from the SELECT part of an
INSERT ... SELECT unavailable for ON DUPLICATE KEY UPDATE.
Fixed by checking for that condition when setting up the name
resolution context. (An example of the affected query shapes is
sketched below.)
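A minimal sketch of the query shapes involved; the tables t1, t2, t3 and
their columns are hypothetical:

-- problem 1: INSERT .. SELECT with no tables in the SELECT's FROM clause
INSERT INTO t1 (a) SELECT 1;

-- problems 2 and 3: ON DUPLICATE KEY UPDATE referring to columns from the
-- SELECT part, here with a NATURAL JOIN and an aggregate without GROUP BY
INSERT INTO t1 (a, b)
  SELECT t2.x, COUNT(*) FROM t2 NATURAL JOIN t3
  ON DUPLICATE KEY UPDATE b = t2.y + 1;

In the first statement the table t1 being inserted into must not be
visible to the SELECT part. In the second, the right-hand side of the
ON DUPLICATE KEY UPDATE assignment needs a proper name resolution context
to see the joined tables, and the aggregate without GROUP BY can make the
SELECT-part columns unavailable there, which now has to be detected.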
im_daemon_life_cycle fails randomly.
1. Move IM-angel functionality into a separate file, create Angel class.
2. Be more verbose;
3. Fix typo in FLUSH INSTANCES implementation;
4. Polishing.
UPDATE contains wrong data if the SELECT employs a temporary table.
If the UPDATE values of the INSERT .. SELECT .. ON DUPLICATE KEY UPDATE
statement contain fields from the SELECT part and the select employs a
temporary table then those fields will contain wrong values because they
aren't corrected to get data from the temporary table.
The solution is to add these fields to the select's all_fields list,
to store pointers to those fields in the select's ref_pointer_array and
to access them via Item_ref objects.
The substitution with Item_ref objects is done in the new function
Item_field::update_value_transformer(). It is called through the
item->transform() mechanism at the end of the select_insert::prepare()
function.
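A minimal sketch of a statement shape affected here; the tables t1, t2
and their columns are hypothetical:

INSERT INTO t1 (a, b)
  SELECT t2.a, MAX(t2.b) FROM t2 GROUP BY t2.a
  ON DUPLICATE KEY UPDATE b = t2.b + 1;

The GROUP BY can make the SELECT go through a temporary table; the field
t2.b used in the UPDATE part then has to be accessed via an Item_ref
through the select's ref_pointer_array so that it reads the data from
that temporary table.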
When checking if an IN predicate can be evaluated using a key,
the optimizer makes sure that all the arguments of IN are of
the same result type. To ensure this it checks whether
Item_func_in::array is filled in.
However, Item_func_in::array is set only if the types are
the same AND all the arguments are compile-time constants.
Fixed by introducing the Item_func_in::arg_types_compatible
flag to allow correct checking of the desired condition.
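A hedged illustration; t1 is a hypothetical table with an index on
column a, and b is another column of t1:

SELECT * FROM t1 WHERE a IN (1, 2, 3);      -- same type, all constants: array is built
SELECT * FROM t1 WHERE a IN (1, 2, 1 + b);  -- same result type, not all constants

In the second query Item_func_in::array is not filled in even though the
argument types are compatible, so checking the array alone is not a
reliable type-compatibility test; the arg_types_compatible flag records
that information directly.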
1)
BUG#25507 "multi-row insert delayed + auto increment causes
duplicate key entries on slave" (two concurrrent connections doing
multi-row INSERT DELAYED to insert into an auto_increment column,
caused replication slave to stop with "duplicate key error" (and
binlog was wrong), and BUG#26116 "If multi-row INSERT
DELAYED has errors, statement-based binlogging breaks" (the binlog
was not accounting for all rows inserted, or slave could stop).
The fix is that: in statement-based binlogging, a multi-row INSERT
DELAYED is silently converted to a non-delayed INSERT.
This is supposed to not affect many 5.1 users, as in 5.1 the default
binlog format is "mixed", which does not have the bug (the bug occurs
only with binlog_format=STATEMENT).
We should document how the system delayed_insert thread decides its
binlog format (which is not modified by this patch):
this decision is taken when the thread is created
and holds until it is terminated (is not affected by any later change
via SET GLOBAL BINLOG_FORMAT). It is also not affected by the binlog
format of the connection which issues INSERT DELAYED (this binlog
format does not affect how the row will be binlogged).
If one wants to change the binlog format of the server with SET
GLOBAL BINLOG_FORMAT, one should do FLUSH TABLES to be sure all
delayed_insert threads terminate and thus new threads are created,
taking the new format into account (see the sketch at the end of
this entry).
2)
BUG#24432
"INSERT... ON DUPLICATE KEY UPDATE skips auto_increment values".
When an INSERT ON DUPLICATE KEY UPDATE on an autoincrement column
inserted some autogenerated values and also updated some rows,
some autogenerated values were not used
(for example, even if 10 was the largest autoinc value in the table
at the start of the statement, 12 could be the first autogenerated
value inserted by the statement, instead of 11). One autogenerated
value was lost per updated row. This led to exhausting the range of
the autoincrement column faster.
Bug introduced by fix of BUG#20188; present since 5.0.24 and 5.1.12.
This bug breaks replication from a pre-5.0.24/pre-5.1.12 master.
But the present bugfix, as it makes INSERT ON DUP KEY UPDATE
behave like pre-5.0.24/pre-5.1.12, breaks replication from a
[5.0.24,5.0.34]/[5.1.12,5.1.15]
master to a fixed (5.0.36/5.1.16) slave! To warn users against this when
they upgrade their slave, as agreed with the support team, we add
code for a fixed slave to detect that it is connected to a buggy
master in a situation (INSERT ON DUP KEY UPDATE into autoinc column)
likely to break replication, in which case it cannot replicate and
so stops and prints a message to the slave's error log and to SHOW
SLAVE STATUS.
For 5.0.36->[5.0.24,5.0.34] replication or 5.1.16->[5.1.12,5.1.15]
replication we cannot warn as master
does not know the slave's version (but we always recommended to users
to have slave at least as new as master).
As agreed with support, I have asked for an alert to be put into
the MySQL Network Monitoring and Advisory Service.
3) note that I'll re-enable rpl_insert_id as soon as 5.1-rpl gets
the changes from the main 5.1.
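As a sketch of the sequence recommended above for changing the binlog
format (the chosen format value is only an example):

SET GLOBAL binlog_format = 'MIXED';
FLUSH TABLES;

FLUSH TABLES makes the currently running delayed_insert threads
terminate, so any delayed_insert thread created afterwards picks up the
new binlog format.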
duplicate key entries on slave" (two concurrent connections doing
multi-row INSERT DELAYED to insert into an auto_increment column,
caused replication slave to stop with "duplicate key error" (and
binlog was wrong)), and BUG#26116 "If multi-row INSERT
DELAYED has errors, statement-based binlogging breaks" (the binlog
was not accounting for all rows inserted, or slave could stop).
The fix is that, if (statement-based) binlogging is on, a multi-row
INSERT DELAYED is silently converted to a non-delayed INSERT.
Note: it is not possible to test BUG#25507 in 5.0 (requires mysqlslap),
so it is tested only in the changeset for 5.1. However, BUG#26116
is tested here, and the fix for BUG#25507 is the same code change.
were evaluated.
According to the new rules for string comparison, partial indexes on TEXT
columns can be used in the same cases where partial indexes on VARCHAR
columns can be used.
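A small sketch of what this enables, using a hypothetical table with a
prefix index on a TEXT column:

CREATE TABLE t (c TEXT, KEY c_prefix (c(10)));
SELECT * FROM t WHERE c = 'abc';

Under the new comparison rules the prefix index c_prefix can be
considered for such a query in the same situations where a prefix index
on a VARCHAR column could be used.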