by binlogging some SET ONE_SHOT CHARACTER_SET... statements, which will be enough
until we have something more compact and more complete in 5.0. With the present patch,
replication will work OK between a 4.1.3 master and its slaves, as long as:
- master and slave have the same GLOBAL.COLLATION_SERVER
- COLLATION_DATABASE and CHARACTER_SET_DATABASE are not used
- the application does not rely on the fact that a table is created with the charset of the USEd database (BUG#2326).
All of these conditions are not too hard to fulfill.
ONE_SHOT is reserved for the internal use of mysqlbinlog|mysql and works only for charset
variables, so we give an error if it is used for non-charset variables.
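For illustration, the statements mysqlbinlog emits (and that get piped into mysql) look
roughly like the following sketch; the collation ids and the table are made up, and the
exact variable list depends on the server:

  SET ONE_SHOT CHARACTER_SET_CLIENT=8,COLLATION_CONNECTION=8,COLLATION_DATABASE=8,COLLATION_SERVER=8;
  INSERT INTO t1 VALUES ('abc');
  -- ONE_SHOT affects only the next statement; afterwards the old session
  -- values are back in effect.
  -- Since ONE_SHOT works only for charset variables, this gives an error:
  SET ONE_SHOT MAX_JOIN_SIZE=10;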
Fix for BUG#3875 "mysqlbinlog produces wrong output if query uses
variables containing quotes" and BUG#3943 "Queries with non-ASCII literals are not replicated
properly after SET NAMES".
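A minimal illustration of the statements affected (table, data and variable names are made up):

  SET NAMES latin1;
  -- BUG#3943: a non-ASCII literal must reach the slave together with the
  -- character set context needed to interpret it correctly:
  INSERT INTO t1 VALUES ('müller');
  -- BUG#3875: a user variable whose value contains quotes must be printed
  -- by mysqlbinlog in a form that can be re-executed safely:
  SET @v = 'it''s';
  INSERT INTO t1 VALUES (@v);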
Detecting that master and slave have different global character sets or server IDs.
After Monty's review:
- Item_param was rewritten.
- It turns out that we can't convert string data to the connection character set
on the fly, because the data first has to be written to the binary log.
To support efficient conversion we first need to rewrite the prepared statement
binlogging code.
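For context, the problematic case is a string placeholder whose value arrives in the
client character set; a sketch using the SQL prepared-statement interface (names are made up):

  SET NAMES latin1;
  PREPARE ins FROM 'INSERT INTO t1 VALUES (?)';
  SET @s = 'tëst';
  -- The parameter value must still be available in its original form when the
  -- statement is written to the binary log, so it cannot simply be converted
  -- to the connection character set on the fly.
  EXECUTE ins USING @s;
  DEALLOCATE PREPARE ins;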
Moved the prepared statement name to the Statement class; Statement_map now handles name-to-statement resolution.
Both named and unnamed statements are now executed in one function (sql_prepare.cc:execute_stmt).
Fixed a problem: a malformed sequence of commands from the client could cause the server to use previously deleted objects.
Some code cleanup and small fixes
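At the SQL level, the name-to-statement mapping that Statement_map maintains corresponds
to the named-statement commands (a sketch):

  PREPARE stmt1 FROM 'SELECT ? + ?';
  EXECUTE stmt1 USING @a, @b;
  -- PREPARE with an already-used name replaces the previous statement.
  PREPARE stmt1 FROM 'SELECT ? * ?';
  DEALLOCATE PREPARE stmt1;

Statements prepared through the binary protocol are unnamed from the SQL point of view,
but both kinds now go through sql_prepare.cc:execute_stmt.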
init the binlog_cache (THD::transaction.trans_log).
I have checked all places where trans_log is used; since it may now not be
initialized in some cases, we have to be cautious
(will forward this commit mail to Heikki).
Fixed output from mysqlbinlog when using --skip-comments
Fixed warnings from valgrind
Fixed ref_length when used with HEAP tables
More efficient need_conversion()
Fixed error handling in UPDATE with non-updatable tables
Fixed bug in NULL handling in CAST to SIGNED/UNSIGNED
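The NULL handling fix concerns expressions like these; the expected results are in the comments:

  SELECT CAST(NULL AS SIGNED);    -- should return NULL
  SELECT CAST(NULL AS UNSIGNED);  -- should return NULL
  SELECT CAST('18' AS UNSIGNED);  -- non-NULL case for comparison: 18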
(Fixed bug #2526 "--init-file crashes MySQL if it contains a large select".)
Such checking usually happens in send_ok and send_eof, but in this case a large
result causes interim flushing.
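To reproduce, the init file only needs a SELECT whose result set is large enough to be
flushed in chunks before send_ok/send_eof are reached; paths and the table are made up:

  # my.cnf
  [mysqld]
  init-file=/path/to/init.sql

  -- /path/to/init.sql
  SELECT * FROM test.big_table;   -- any sufficiently large result set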
The server could misbehave when SELECT ... INTO OUTFILE or DUMPFILE tried to
open a file that already existed. The problem was that end_io_cache()
was called even if init_io_cache() had not been. This affected both
OUTFILE and DUMPFILE (both fixed). Sometimes a wrongly aligned pointer was freed,
sometimes mysqld core dumped.
Another problem was that select_dump::send_error removed the dump file,
even if it had been created by an earlier run, or by some other program,
whenever the file permissions permitted it. Fixed so that the file is
only deleted if an error occurred and the file was created by mysqld
just a moment ago, in that same thread.
select_export, on the other hand, did not handle the corresponding garbage
file at all. Both cases are fixed.
After these fixes, a big part of the select_export::prepare and select_dump::prepare
code became identical. Merged the code into a new function, create_file(),
which is now called by both.
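A sketch of the cases involved (file names are made up):

  -- The target file already exists: the statement must fail with an error,
  -- and the pre-existing file must not be removed afterwards.
  SELECT * FROM t1 INTO OUTFILE '/tmp/already_there.txt';

  -- The same rule applies to DUMPFILE (single row, no escaping):
  SELECT blob_col FROM t1 WHERE id = 1 INTO DUMPFILE '/tmp/already_there.blob';

Only a file that mysqld itself created moments earlier in the same thread may be
cleaned up when the statement later fails partway through.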
Regards,
Jani
broken with recent changes' (attempt 2).
Adding Statement_core is better because:
- set_statement() code is shorter, and you don't need to modify it when adding
new members to Statement_core
- it is a bit faster (there is no virtual call, and free_root() is not called twice)
Do this short patch instead, in the hope that set_statement() will sooner or
later be removed entirely.
Fixed compiler warnings (a lot of hidden (shadowed) variables detected by the Forte compiler)
Added a lot of 'version_xxx' strings to SHOW VARIABLES (see the example below)
Prevent copying of TMP_TABLE_PARAM (this caused a core dump bug on Solaris)
Fixed problem with printing subselects to the debug log
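The version_xxx additions show up like this (the exact set of rows depends on the build):

  SHOW VARIABLES LIKE 'version%';
  -- among the rows returned: version, version_comment,
  -- version_compile_machine, version_compile_os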