Use compiler-provided atomic builtins as a 'backend' for
MySQL's atomic primitives. The builtins are available on
a handful of platforms and compilers.
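For illustration, a minimal sketch of what such a backend can look like
with the GCC __sync builtins; the wrapper names are illustrative, not
MySQL's my_atomic API:

#include <stdint.h>

static inline int32_t atomic_add32(volatile int32_t *ptr, int32_t v)
{
  /* __sync_fetch_and_add returns the value *before* the addition */
  return __sync_fetch_and_add(ptr, v);
}

static inline int32_t atomic_load32(volatile int32_t *ptr)
{
  /* adding zero acts as an atomic read with a full memory barrier */
  return __sync_fetch_and_add(ptr, 0);
}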
read_buffer_size set on master
BUG#33413 show binlog events fails if binlog has event size of close
to max_allowed_packet
The size of the Append_block replication event was determined solely by
read_buffer_size, whereas the rest of the replication code deals with
max_allowed_packet.
When the former parameter was set larger than the latter, there were
two artifacts: the master could not read events from the binlog, and
SHOW BINLOG EVENTS did not show them.
Fixed by
- fragmenting the used io-cache buffer into pieces, each smaller than
  max_allowed_packet (bug#30435), as sketched below
- incrementing the max_allowed_packet of the thread handling
  SHOW BINLOG EVENTS by the maximum estimated replication header size
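A sketch of the fragmentation idea; the function and callback names are
invented for illustration (the real code works on the IO_CACHE buffer):

#include <stddef.h>

/* send 'len' bytes in pieces, each strictly smaller than
   max_allowed_packet; send_piece() returns true on error */
static bool send_fragmented(const unsigned char *buf, size_t len,
                            size_t max_allowed_packet,
                            bool (*send_piece)(const unsigned char *, size_t))
{
  size_t max_piece= max_allowed_packet - 1;
  for (size_t sent= 0; sent < len; )
  {
    size_t piece= len - sent < max_piece ? len - sent : max_piece;
    if (send_piece(buf + sent, piece))
      return true;                      /* error */
    sent+= piece;
  }
  return false;                         /* success */
}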
cause ROLLBACK of statement", part 1. Review fixes.
Do not send OK/EOF packets to the client until we have reached the end
of the current statement.
This is a consolidation, to keep the functionality that is shared by all
SQL statements in one place in the server.
Currently this functionality includes:
- close_thread_tables()
- log_slow_statement().
After this patch and the subsequent patch for Bug#12713, it shall also include:
- ha_autocommit_or_rollback()
- net_end_statement()
- query_cache_end_of_result().
In the future it may also include:
- mysql_reset_thd_for_next_command().
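A sketch of the consolidated hook; the forward declarations stand in for
the real server headers, and the function name here is hypothetical:

class THD;
void close_thread_tables(THD *thd);
void log_slow_statement(THD *thd);

void end_statement(THD *thd)
{
  /* functionality shared by all SQL statements, in one place */
  close_thread_tables(thd);
  log_slow_statement(thd);
  /* after the Bug#12713 patch, also:
     ha_autocommit_or_rollback(), net_end_statement(),
     query_cache_end_of_result() */
}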
without PK
Bug#31609 Not all RBR slave errors reported as errors
bug#32468 delete rows event on a table with foreign key constraint fails
The first two bugs comprise idempotency issues.
First, no error code was reported under the conditions of the bug
description, although the slave SQL thread halted.
Second, execution differed depending on whether the table had a
primary key.
Third, there was no way to instruct the slave whether to ignore an error
and skip to the following event or to halt.
Fourth, there are handler errors which might happen due to idempotent
applying of the binlog, but those were not included in the "idempotent"
error list.
All the named issues are addressed.
For the 3rd, there is a new global system variable, changeable at run
time, which controls the slave SQL thread's behaviour.
The new variable allows further extensions to mimic the sql_mode
session/global variable.
To address the 4th, the new bug#32468 had to be fixed as it stood in
the way.
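An illustrative sketch of the decision the slave SQL thread can make
under the new variable; the variable, helper names and error codes are
assumptions, not the server's own definitions:

enum enum_slave_exec_mode { SLAVE_EXEC_MODE_STRICT,
                            SLAVE_EXEC_MODE_IDEMPOTENT };

static enum_slave_exec_mode slave_exec_mode= SLAVE_EXEC_MODE_STRICT;

/* handler errors that idempotent application of the binlog may cause */
static bool is_idempotent_error(int err)
{
  switch (err)
  {
  case 120:   /* stand-in for HA_ERR_KEY_NOT_FOUND */
  case 121:   /* stand-in for HA_ERR_FOUND_DUPP_KEY */
    return true;
  default:
    return false;
  }
}

/* ignore the error and skip to the following event, or halt? */
static bool slave_should_skip(int err)
{
  return slave_exec_mode == SLAVE_EXEC_MODE_IDEMPOTENT &&
         is_idempotent_error(err);
}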
The patch for Bug 26379 (Combination of FLUSH TABLE and
REPAIR TABLE corrupts a MERGE table) fixed this bug too.
However, it revealed a new bug that crashed the server:
flushing a MERGE table at the moment when it is between the open
and the attach of its children.
The flushing thread wants to abort locks on the flushed table.
It calls ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
on the TABLE object of the other thread.
Changed ha_myisammrg::lock_count() and ha_myisammrg::store_lock()
to accept non-attached children. ha_myisammrg::lock_count() returns
the number of MyISAM tables in the MERGE table so that the memory
allocation done by get_lock_data() is done correctly, even if the
children become attached before ha_myisammrg::store_lock() is
called. ha_myisammrg::store_lock() will not return any lock if the
children are not attached.
This is, however, a change in the handler interface: lock_count()
can now return a higher number of locks than store_lock() stores.
This is safer than the reverse implementation would be.
get_lock_data() in the SQL layer is adjusted accordingly. It sets
MYSQL_LOCK::lock_count based on the number of locks returned by
the handler::store_lock() calls, not based on the numbers returned
by the handler::lock_count() calls. The latter are only used for
allocation of memory now.
No test case. The test suite cannot reliably run FLUSH between
lock_count() and store_lock() of another thread. The bug report
contains a program that can repeat the problem with some
probability.
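A sketch of the adjusted contract with simplified types: lock_count()
may over-estimate, and the count that matters is how many locks
store_lock() actually stored:

struct THR_LOCK_DATA;
struct MYSQL_LOCK { unsigned lock_count; THR_LOCK_DATA **locks; };

struct handler
{
  virtual unsigned lock_count() const= 0;                /* upper bound  */
  virtual THR_LOCK_DATA **store_lock(THR_LOCK_DATA **to)= 0;
  virtual ~handler() {}
};

MYSQL_LOCK *get_lock_data(handler **handlers, unsigned n)
{
  unsigned upper= 0;
  for (unsigned i= 0; i < n; i++)
    upper+= handlers[i]->lock_count();                   /* alloc only   */

  MYSQL_LOCK *sql_lock= new MYSQL_LOCK;
  sql_lock->locks= new THR_LOCK_DATA *[upper];

  THR_LOCK_DATA **to= sql_lock->locks;
  for (unsigned i= 0; i < n; i++)
    to= handlers[i]->store_lock(to);                     /* may be fewer */

  /* count what was actually stored, not what was promised */
  sql_lock->lock_count= (unsigned)(to - sql_lock->locks);
  return sql_lock;
}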
Default values of variables were not subject to the upper/lower bounds
and step, while setting variables was. Bounds and step are now also
applied to defaults; defaults are corrected quietly, while values
given by the user are corrected with a correction warning thrown
as needed. Lastly, very large values could wrap around, starting
from 0 again. They are now bounded at the maximum value for the
respective data type if no lower maximum is specified in the
variable's definition.
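A minimal sketch of the correction rules described above; the function
name and signature are illustrative:

#include <stdio.h>

static unsigned long fix_value(unsigned long val,
                               unsigned long min_val, unsigned long max_val,
                               unsigned long step, bool from_user)
{
  unsigned long fixed= val;
  if (fixed < min_val) fixed= min_val;
  if (fixed > max_val) fixed= max_val;      /* no wrap-around past max */
  if (step > 1) fixed-= fixed % step;       /* round down to the step  */
  if (from_user && fixed != val)            /* defaults: corrected quietly */
    fprintf(stderr, "warning: value adjusted to %lu\n", fixed);
  return fixed;
}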
There's currently no way of knowing the determinism of a UDF,
and the optimizer and the sequence() UDFs were making wrong
assumptions about what the is_const member means.
Also, there was no implementation of update_used_tables(),
causing the optimizer to overwrite the information returned by
the <udf>_init function.
Fixed by aligning the assumptions about the semantics of
is_const and providing an implementation of update_used_tables().
Added a TODO item for the UDF API change needed to make a better
implementation.
Remove the mysql_odbc_escape_string() function. The function
has multi-byte character escaping issues, doesn't honor the
NO_BACKSLASH_ESCAPES mode, and is no longer used by
Connector/ODBC as of 3.51.17.
failing 'INSTALL PLUGIN' statement doesn't work in embedded server
as we disable library loading there.
Fixed by enabling the loading of libraries (#define HAVE_DLOPEN),
which also makes UDFs work in the embedded server.
corrupts a MERGE table
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Bug 25038 - Waiting TRUNCATE
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Bug 19627 - temporary merge table locking
Bug 27660 - Falcon: merge table possible
Bug 30273 - merge tables: Can't lock file (errno: 155)
The problems were:
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE
corrupts a MERGE table
1. A thread trying to lock a MERGE table performs busy waiting while
REPAIR TABLE or a similar table administration task is ongoing on
one or more of its MyISAM tables.
2. A thread trying to lock a MERGE table performs busy waiting until all
threads that did REPAIR TABLE or similar table administration tasks
on one or more of its MyISAM tables in LOCK TABLES segments do UNLOCK
TABLES. The difference from problem #1 is that the busy waiting
takes place *after* the administration task. It is terminated by
UNLOCK TABLES only.
3. Two FLUSH TABLES within a LOCK TABLES segment can invalidate the
lock. This does *not* require a MERGE table. The first FLUSH TABLES
can be replaced by any statement that requires other threads to
reopen the table. In 5.0 and 5.1 a single FLUSH TABLES can provoke
the problem.
Bug 26867 - LOCK TABLES + REPAIR + merge table result in
memory/cpu hogging
Trying DML on a MERGE table which has a child locked and
repaired by another thread caused an infinite loop in the server.
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Locking a MERGE table and its children in parent-child order
and flushing the child deadlocked the server.
Bug 25038 - Waiting TRUNCATE
Truncating a MERGE child, while the MERGE table was in use,
let the truncate fail instead of waiting for the table to
become free.
Bug 25700 - merge base tables get corrupted by
optimize/analyze/repair table
Repairing a child of an open MERGE table corrupted the child.
It was necessary to FLUSH the child first.
Bug 30275 - Merge tables: flush tables or unlock tables
causes server to crash
Flushing and optimizing locked MERGE children crashed the server.
Bug 19627 - temporary merge table locking
Use of a temporary MERGE table with non-temporary children
could corrupt the children.
Temporary tables are never locked. So we now prohibit
non-temporary children of a temporary MERGE table.
Bug 27660 - Falcon: merge table possible
It was possible to create a MERGE table with non-MyISAM children.
Bug 30273 - merge tables: Can't lock file (errno: 155)
This was a Windows-only bug. Table administration statements
sometimes failed with "Can't lock file (errno: 155)".
These bugs are fixed by a new implementation of MERGE table open.
When opening a MERGE table in open_tables() we do now add the
child tables to the list of tables to be opened by open_tables()
(the "query_list"). The children are not opened in the handler at
this stage.
After opening the parent, open_tables() opens each child from the
now extended query_list. When the last child is opened, we remove
the children from the query_list again and attach the children to
the parent. This behaves similarly to the old open. However, it does
not open the MyISAM tables directly, but grabs them from the already
open children.
When closing a MERGE table in close_thread_table() we detach the
children only. Closing of the children is done implicitly because
they are in thd->open_tables.
For more detail see the comment at the top of ha_myisammrg.cc.
Changed from open_ltable() to open_and_lock_tables() in all places
that can be relevant for MERGE tables. The latter can handle tables
added to the list on the fly. Where open_ltable() was used in a loop
over a list of tables, the list must now be temporarily terminated
after every table for open_and_lock_tables().
table_list->required_type is set to FRMTYPE_TABLE to avoid opening
special tables. Handling of derived tables is suppressed.
These details are handled by the new function
open_n_lock_single_table(), which has nearly the same signature as
open_ltable() and can replace it in most cases.
In reopen_tables() some of the tables open by a thread can be
closed and reopened. When a MERGE child is affected, the parent
must be closed and reopened too. Closing of the parent is forced
before the first child is closed. Reopen happens in the order of
thd->open_tables. MERGE parents do not attach their children
automatically at open. This is done after all tables are reopened.
So all children are open when attaching them.
Special lock handling like mysql_lock_abort() or mysql_lock_remove()
needs to be suppressed for MERGE children or forwarded to the parent.
This depends on the situation. In loops over all open tables one
suppresses child lock handling. When a single table is touched,
forwarding is done.
Behavioral changes:
===================
This patch changes the behavior of temporary MERGE tables.
Temporary MERGE must have temporary children.
The old behavior was wrong. A temporary table is not locked. Hence
even non-temporary children were not locked. See
Bug 19627 - temporary merge table locking.
You cannot change the union list of a non-temporary MERGE table
when LOCK TABLES is in effect. The following does *not* work:
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ...;
LOCK TABLES t1 WRITE, t2 WRITE, m1 WRITE;
ALTER TABLE m1 ... UNION=(t1,t2) ...;
However, you can do this with a temporary MERGE table.
You cannot create a MERGE table with CREATE ... SELECT, neither
as a temporary MERGE table, nor as a non-temporary MERGE table.
CREATE TABLE m1 ... ENGINE=MRG_MYISAM ... SELECT ...;
Gives error message: table is not BASE TABLE.
It's not an InnoDB-specific bug.
The error is in the QUEUE code, in the way we handle queue->max_at_top.
It's either '0' or '-2', and we apply the '^' operation to get the proper
direction. However, the queue->compare() function can sometimes return
'-2' as a result of a comparison. So we get
queue->compare() ^ queue->max_at_top == 0 (when max_at_top is -2)
and the _downheap() function code goes the wrong way here:
...
if (next_index < elements &&
    (queue->compare(queue->first_cmp_arg,
                    queue->root[next_index]+offset_to_key,
                    queue->root[next_index+1]+offset_to_key) ^
     queue->max_at_top) > 0)
  next_index++;
...
Fixed by changing max_at_top to be either 1 or -1, and using
'* max_at_top' to get the proper direction.
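A small demonstration of why XOR breaks and multiplication works:

#include <stdio.h>

int main()
{
  int cmp= -2;                     /* a legal queue->compare() result   */
  printf("%d\n", (cmp ^ -2) > 0);  /* 0: direction lost (old XOR code)  */
  printf("%d\n", (cmp * -1) > 0);  /* 1: direction kept (new code)      */
  return 0;
}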
command and reported to a client.
The fact that a timestamp field will be set to NOW on UPDATE wasn't shown
by the SHOW command or reported to clients through connectors. This led to
problems in the ODBC connector and might lead to user confusion.
A new field flag called ON_UPDATE_NOW_FLAG is added.
Constructors of Field_timestamp set it when a field should be set to NOW
on UPDATE.
The get_schema_column_record function now reports whether a timestamp field
will be set to NOW on UPDATE.
The "mysql client in mysqld"(which is used by
replication and federated) should use alarms instead of setting
socket timeout value if the rest of the server uses alarm. By
always calling 'my_net_set_write_timeout'
or 'net_set_read_timeout' when changing the timeout value(s), the
selection whether to use alarms or timeouts will be handled by
ifdef's in those two functions.
This is a minimal backport of the patch for BUG#26664, which was
pushed to 5.0 and up.
Affects 4.1 only.
A user could not override system-wide settings in their ~/.my.cnf,
because the DEFAULT_SYSCONFDIR was being searched last. Also, in
some configurations (especially when the --sysconfdir compile-time
option is set to /etc or /etc/mysql), the system-wide my.cnf file
was read multiple times, causing confusion and potential problems.
Rearrange default directories to conform to the manual and logic.
Move --sysconfdir=<path> (DEFAULT_SYSCONFDIR) from the last default
directory to the middle of the list. $HOME/.my.cnf should be last,
so the user is able to override the system-wide settings.
Change init_default_directories() to remove duplicates from the
list.
No functionality added or changed.
This is a pre-requisite for the fix for Bug#12713 Error in a stored
function called from a SELECT doesn't cause ROLLBACK of statement.
Address post-review comments.
file .\ha_innodb.
Problem: if a partial unique key is followed by a non-partial one,
we declare the second one as the primary key.
Fix: sort non-partial unique keys before partial ones, as sketched below.
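A sketch of the sorting rule with simplified structures; the real code
works on the key definitions read from the frm:

#include <algorithm>
#include <vector>

struct KeyInfo { bool is_unique; bool has_partial_part; };

/* among unique keys, non-partial ones sort first, so a full-column
   unique key is the one promoted to primary key */
static bool key_before(const KeyInfo &a, const KeyInfo &b)
{
  if (a.is_unique != b.is_unique)
    return a.is_unique;
  return !a.has_partial_part && b.has_partial_part;
}

void sort_keys(std::vector<KeyInfo> &keys)
{
  std::stable_sort(keys.begin(), keys.end(), key_before);
}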
ucs2 doesn't provide the ctype array required by fulltext. The crash
happens because fulltext attempts to use the uninitialized ctype
array.
Fixed by converting ucs2 fields to a compatible utf8 analogue.
When replicating an update pair (before image, after image) under row-based
replication, and the before image is not found on the slave, the after image
was not discarded, and was hence read as the before image for the next row.
Eventually, this led to an after image being read outside the block of rows
in the event, causing an assertion to fire.
This patch fixes this by reading the after image in the event that the row
was not found on the slave, adds some extra debug assertions to catch future
errors earlier, and also adds a few non-debug checks to prevent reading
outside the block of the event.
CPUs / Intel's ICC compile
The bug is a combination of two problems:
1. IA64/ICC MySQL binaries use glibc's qsort(), not the one in mysys.
2. The order relation implemented by join_tab_cmp() is not transitive,
i.e. it is possible to choose such a, b and c that (a < b) && (b < c)
but (c < a). This implies that result of a sort using the relation
implemented by join_tab_cmp() depends on the order in which
elements are compared, i.e. the result is implementation-specific. Since
choose_plan() uses qsort() to pre-sort the
join tables using join_tab_cmp() as a compare function, the results of
the sorting may vary depending on qsort() implementation.
It is neither possible nor important to implement a better ordering
algorithm in join_tab_cmp(). Therefore the only way to fix it is to force
our own qsort() to be used by renaming it to my_qsort(), so we don't
depend on the linker to decide that.
This patch also "fixes" bug #20530: qsort redefinition violates the
standard.
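A small illustration of the hazard: a rock-paper-scissors ordering is
not transitive, so qsort() results depend on the library's algorithm.
The fix is not a smarter comparator but pinning the implementation
(my_qsort) so every platform sorts the same way:

#include <stdlib.h>

static int rps_cmp(const void *a, const void *b)
{
  /* 0 beats 1, 1 beats 2, 2 beats 0: (a<b) && (b<c) but (c<a) */
  int x= *(const int *)a, y= *(const int *)b;
  if (x == y) return 0;
  return ((x + 1) % 3 == y) ? -1 : 1;
}

int main()
{
  int v[]= { 0, 1, 2 };
  qsort(v, 3, sizeof(int), rps_cmp);  /* final order is libc-specific */
  return 0;
}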
(compiler issue ?)
Problem:
Improper compile-time flags on AIX prevented use of files > 2 GB. This
resulted in Max_data_length being truncated to 2 GB by MyISAM code.
Solution:
Reverted large-file changes from the fix for bug10776. We need to define
_LARGE_FILES on AIX to have support for files > 2 GB.
Since _LARGE_FILE_API is incompatible with _LARGE_FILES and may be
automatically defined by including standards.h, we also need a
workaround to avoid this conflict.
- Reserve namespace and a place in the frm for the TABLE_CHECKSUM and PAGE_CHECKSUM create options
- Added syncing of directory when creating .frm files
- Portability fixes
- Added missing cast that could cause bugs
- Code cleanups
- Made some bit functions inline
- Moved things out of myisam.h to my_handler.h to make them more accessible
- Renamed some myisam variables and defines to make them more globally usable (as they are used outside of MyISAM)
- Fixed bugs in error conditions
- Use compile-time asserts instead of run-time asserts
- Fixed indentation
- HA_EXTRA_PREPARE_FOR_DELETE -> HA_EXTRA_PREPARE_FOR_DROP, as the old name was wrong
(added a define for the old value to ensure we don't break any old code)
- Added HA_EXTRA_PREPARE_FOR_RENAME as a signal for rename (before we used a DROP signal, which is wrong)
- Initialize error messages early to get better errors when mysqld or an engine fails to start
- Fix windows bug that query_performance_frequency was not initialized if registry code failed
- thread_stack -> my_thread_stack_size
"Disabled plugin is provoking Valgrind error"
If there are any auto-alloced string plug-in options, memory is
allocated during the call to handle_options(). We must free this
memory if we are not installing the plug-in.
It's not possible to use WaitForSingleObject to wait
on a CRITICAL_SECTION; instead, use the TryEnterCriticalSection function:
- if the "mutex" was already taken => return EBUSY
- if the "mutex" was acquired => return 0
The parser uses ulonglong to store the LIMIT number. This number
is then stored into a variable of type ha_rows. ha_rows is either
4 or 8 bytes depending on the BIG_TABLES define from config.h.
So an overflow may occur (and LIMIT becomes zero) while storing a
ulonglong value in ha_rows.
Fixed by:
1. Using the maximum possible value for ha_rows on overflow (see the
   sketch below)
2. Defining BIG_TABLES for the Windows builds (to match the others)
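A sketch of the overflow guard; HA_POS_ERROR is the maximum ha_rows
value, and the narrow typedef stands in for a build without BIG_TABLES:

typedef unsigned long long ulonglong;
typedef unsigned int ha_rows;          /* 4 bytes without BIG_TABLES */

#define HA_POS_ERROR (~(ha_rows) 0)    /* maximum possible ha_rows   */

static ha_rows limit_from_ulonglong(ulonglong val)
{
  /* saturate at the maximum instead of silently wrapping to 0 */
  return val > (ulonglong) HA_POS_ERROR ? HA_POS_ERROR : (ha_rows) val;
}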
Make sure that if the builder configured a non-standard (!= 3306)
default TCP port, that value actually gets used throughout. If they
didn't configure a value, assume "use a sensible default", which
will be read from /etc/services or, failing that, from the factory
default. That makes the order of preference:
- command-line option
- my.cnf, where applicable
- $MYSQL_TCP_PORT environment variable
- /etc/services (unless configured --with-tcp-port)
- default port (--with-tcp-port=... or factory default)
The patch limits read_buffer_size and read_rnd_buffer_size by 2 GB on all platforms for the following reasons:
- I/O code in mysys, code in mf_iocache.c and in some storage engines do not currently work with sizes > 2 GB for those buffers
- even if the above had been fixed, Windows POSIX read() and write() calls are not 2GB-safe, so setting those buffers to sizes > 2 GB would not work correctly on 64-bit Windows.
c++config.h now has the following code:
// For example, <windows.h> is known to #define min and max as macros...
#undef min
#undef max
So, our defines in my_global.h are undefined when <new> header
is included.
Move definitions of min()/max() to the end of my_global.h.
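The essence of the fix, sketched; the macro bodies are the usual
ternary definitions, shown for illustration:

/* my_global.h, after all system headers have been included: */
#include <new>                 /* c++config.h may #undef min and max */

#ifndef min
#define min(a, b) ((a) < (b) ? (a) : (b))
#endif
#ifndef max
#define max(a, b) ((a) > (b) ? (a) : (b))
#endif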
When locking a "fast" mutex, a static variable cpu_count
was used as a flag to initialize itself on the first usage
by calling sysconf() and setting a non-zero value.
This is not thread and optimization safe on some
platforms, which is why the global initialization needs
to be done once, in a designated function.
This will also speed up the usage (by a small bit)
because it won't have to check whether it's initialized on
every call.
Fixed by moving the fast mutex initialization out of
my_pthread_fastmutex_lock() to fastmutex_global_init()
and calling it from my_init().
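A sketch of the shape of the fix; the spin-decision helper is
illustrative, not the real lock path:

#include <unistd.h>

static long cpu_count= 0;

/* called once from my_init(), before any threads are started */
void fastmutex_global_init(void)
{
#ifdef _SC_NPROCESSORS_ONLN
  cpu_count= sysconf(_SC_NPROCESSORS_ONLN);
#endif
}

int fastmutex_may_spin(void)
{
  /* no lazy, racy initialization check on every lock any more */
  return cpu_count > 1;
}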
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db
I had to do the following renames, as the polymorphism used didn't work with the Forte compiler on 64-bit systems:
index_read() -> index_read_map()
index_read_idx() -> index_read_idx_map()
index_read_last() -> index_read_last_map()
(Regression, caused by a patch for bug 22646.)
Problem: when the result type of date_format() was changed from
a binary string to a character string, mixing date_format()
with an ascii column in CONCAT() stopped working.
Fix:
- adding "repertoire" flag into DTCollation class,
to mark items which can return only pure ASCII strings.
- allow character set conversion from pure ASCII to other character sets.
Fixed compiler warnings, errors and link errors
Fixed new bug on Solaris with gethrtime()
Added --debug-check option to all mysql clients to print errors and memory leaks
Added --debug-info to all clients. This now works as --debug-check but also prints memory and cpu usage
mysqld hasn't been built on AIX with ndb-everything in quite a while.
This allowed a variety of changes to be added that broke the AIX build
for both the GNU and IBM compilers (but the IBM suite in particular).
This changeset lets the build complete on AIX 5.2 for users of both the
GNU and the IBM suites. All good?
to "my_config.h". Not to pollute the top directory, and to get more control
over what is included. Made the include path for "libedit" pick up its own
"config.h" first.
--long-query-time is now given in seconds with microseconds as decimals
--min_examined_row_limit added for slow query log
long_query_time user variable is now double with 6 decimals
Added functions to get time in microseconds
Added faster time() functions for systems that have gethrtime() (Solaris)
We now do fewer time() calls.
Added field->in_read_set() and field->in_write_set() for easier field manipulation by handlers
set_var.cc and my_getopt() can now handle DOUBLE variables.
All time() calls changed to my_time()
my_time() now retries if the time() call fails.
Added debug function for stopping in mysql_admin_table() when tables are locked
Some trivial function and struct variable renames to avoid merge errors.
Fixed compiler warnings
Initialization of some time variables on Windows moved to my_init()
Reorder structure elements to make structures smaller and faster on 64 bit systems
This is a first step in cleaning up the client include files (but should be enough to allow us to do future fixes without breaking the library)
This change is part of WL#2872, Make client library extensible.
Now we don't take any mutexes when creating or dropping internal HEAP tables during SELECT.
Change buffer sizes to size_t to make keycache 64 bit safe on platforms where sizeof(ulong) != sizeof(size_t)
Actually, this test case will fail generally on all testing platforms.
The bugs come from an inconsistent bitmap between the rpl master and slave.
In log_event.cc, the n_bits of m_cols and m_cols_ai are initialized with
m_width rounded up to a multiple of eight; in fact, their n_bits should be
equal to m_width.
A wrong n_bits causes bitmap_bits_set() to return an incorrect value in
unpack_row() in rpl_record.cc; then an assertion in unpack_row() fails and
crashes the SQL thread:
DBUG_ASSERT(null_ptr == row_data + master_null_byte_count);
Meanwhile, because binlog_prepare_pending_rows_event() changed with the
correct m_cols, the results of some specific test cases must be updated:
binlog_multi_engine.test
ndb_binlog_multi.test
rpl_ndb_dd_partitions.test
rpl_ndb_log.test
rpl_truncate_7ndb.test
rpl_truncate_7ndb_2.test
In addition, to ensure that row replication is correct between master and
slave after the patch, two 'select * from t1' statements are added in
extra/rpl_tests/rpl_log.test. Some test cases include rpl_log.test;
therefore, their results must be updated likewise:
rpl_stm_log.test
rpl_row_log.test
rpl_ndb_log.test
rpl_row_log_innodb.test
In total, the results of nine test cases are updated.
By default, MyISAM overwrites the .MYD and .MYI files if no
DATA DIRECTORY option is used. This can lead to two tables
using the same .MYD and .MYI files (which then can't be dropped).
To prevent CREATE TABLE from overwriting a file, a new option
is introduced: keep_files_on_create.
When this is on, CREATE TABLE throws an error if either
the .MYD or the .MYI file exists for a MyISAM table.
The option is off by default (resulting in compatible behavior).
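A sketch of the check, with invented helper names:

#include <stdio.h>
#include <string>

static bool file_exists(const std::string &path)
{
  if (FILE *f= fopen(path.c_str(), "rb"))
  {
    fclose(f);
    return true;
  }
  return false;
}

/* refuse CREATE TABLE if the data or index file already exists */
static bool myisam_create_allowed(const std::string &table_path,
                                  bool keep_files_on_create)
{
  if (!keep_files_on_create)
    return true;                     /* old, overwriting behavior */
  return !file_exists(table_path + ".MYD") &&
         !file_exists(table_path + ".MYI");
}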
causing update of a different column
Post-pushbuild fix.
bitmap_set_bit() is an inline function in DEBUG builds and
a macro in non-DEBUG builds. The latter evaluates its 'bit'
argument twice. So one must not use increment/decrement operators
on this argument.
Moved the increment of the pointer out of the bitmap_set_bit() call.
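Why the increment had to move out of the call, shown with a simplified
stand-in for the real macro:

#include <stdint.h>

#define bitmap_set_bit(map, bit) \
  ((map)[(bit) / 8] |= (uint8_t) (1U << ((bit) & 7)))

void example(uint8_t *map, unsigned bit)
{
  /* WRONG in non-DEBUG builds: 'bit++' would run twice per expansion:
     bitmap_set_bit(map, bit++);  */

  bitmap_set_bit(map, bit);   /* correct: advance the argument ... */
  bit++;                      /* ... outside the macro call        */
}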
Add more accessors to MySQL internals in mysql/plugin.h, for storage
engine plugins.
Add some accessors specific to the InnoDB storage engine, to allow
InnoDB to be compiled as a plugin (without MYSQL_SERVER). InnoDB
has additional requirements, due to its foreign key support, etc.
leads to the table corruption
A new Field::store() method is implemented to explicitly set
thd->count_cuted_fields before storing the value, instead of (incorrectly)
setting it in the CSV storage engine.
The thread row counter is now properly incremented during check and repair
in the CSV engine.
"Federated INSERT failures"
Federated does not correctly handle "INSERT...ON DUPLICATE KEY UPDATE".
However, implementing such support is not reasonably possible without
increasing the complexity of the storage engine: it would require checking
that constraints on the remote server match the local server and parsing
error messages.
This patch causes 'ON DUPLICATE KEY' to fail with an ER_DUP_KEY message
if a conflict occurs, rather than failing silently.
- BUG#11986: Stored routines and triggers can fail if the code
has a non-ascii symbol
- BUG#16291: mysqldump corrupts string-constants with non-ascii-chars
- BUG#19443: INFORMATION_SCHEMA does not support charsets properly
- BUG#21249: Character set of SP-var can be ignored
- BUG#25212: Character set of string constant is ignored (stored routines)
- BUG#25221: Character set of string constant is ignored (triggers)
There were a few general problems that caused these bugs:
1. Character set information of the original (definition) query for views,
triggers, stored routines and events was lost.
2. mysqldump output the query in the client character set, which can be
inappropriate for encoding the definition query.
3. INFORMATION_SCHEMA used strings with mixed encodings to display object
definitions.
1. No query-definition-character set.
In order to compile a query into execution code, some extra data (such as
environment variables or the database character set) is used. The problem
here was that this context was not preserved. So, on the next load it could
differ from the original one, and thus the result would be different.
The context contains the following data:
- client character set;
- connection collation (character set and collation);
- collation of the owner database;
The fix is to store this context and use it each time we parse (compile)
and execute the object (stored routine, trigger, ...).
2. Wrong mysqldump-output.
The original query can contain several encodings (by means of character set
introducers). The problem here was that we tried to convert the original
query to the mysqldump-client character set.
Moreover, we stored queries in different character sets for different
objects (views, for one, used UTF8; triggers used the original character
set).
The solution is
- to store definition queries in the original character set;
- to change SHOW CREATE statement to output definition query in the
binary character set (i.e. without any conversion);
- introduce SHOW CREATE TRIGGER statement;
- to dump special statements to switch the context to the original one
before dumping and restore it afterwards.
Note, in order to preserve the database collation at the creation time,
additional ALTER DATABASE might be used (to temporary switch the database
collation back to the original value). In this case, ALTER DATABASE
privilege will be required. This is a backward-incompatible change.
3. INFORMATION_SCHEMA showed non-UTF8 strings.
The fix is to generate a UTF8 query during parsing, store it in the object
and show it in INFORMATION_SCHEMA.
Basically, the idea is to create a copy of the original query and convert
it to UTF8. Character set introducers are removed and all text literals are
converted to UTF8.
This UTF8 query is intended to provide user-readable output. It must not be
used to recreate the object. Specialized SHOW CREATE statements should be
used for this.
The reason for this limitation is the following: the original query can
contain symbols from several character sets (by means of character set
introducers).
Example:
- original query:
CREATE VIEW v1 AS SELECT _cp1251 'Hello' AS c1;
- UTF8 query (for INFORMATION_SCHEMA):
CREATE VIEW v1 AS SELECT 'Hello' AS c1;
Sometimes the number of really updated rows (with changed
column values) cannot be determined at the server level
alone (e.g. if the storage engine does not return enough
column values to verify that). So the only dependable way
in such cases is to let the storage engine return that
information if possible.
Fixed the bug at the server level by providing a way for the
storage engine to return information about whether it
actually updated the row or the old and the new column
values are the same. It can do that by returning
HA_ERR_RECORD_IS_THE_SAME in ha_update_row().
Note that each storage engine may choose not to try to
return this status code, so this behaviour remains
storage engine specific.
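A sketch of the server-side use of the new status code, with simplified
stand-in types; the error value 169 is illustrative here, not
authoritative:

static const int HA_ERR_RECORD_IS_THE_SAME= 169;   /* illustrative value */

struct handler_stub
{
  /* a real engine compares the old and new row images here */
  virtual int ha_update_row(const unsigned char *old_rec,
                            unsigned char *new_rec)= 0;
  virtual ~handler_stub() {}
};

static int update_one_row(handler_stub *h, const unsigned char *old_rec,
                          unsigned char *new_rec, unsigned long *updated)
{
  int error= h->ha_update_row(old_rec, new_rec);
  if (error == HA_ERR_RECORD_IS_THE_SAME)
    return 0;          /* not an error; just don't count the row */
  if (error == 0)
    (*updated)++;      /* the column values really changed */
  return error;
}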
MySQL uses _beginthread()/_endthread() instead of
_beginthreadex()/_endthreadex() to create/end its threads
on Windows.
According to MSDN, _endthread() does close the thread handle,
so there's no need for the handle to be closed explicitly.
Besides, WaitForSingleObject(, INFINITE) != WAIT_OBJECT_0 is
true for all practical cases, as the other two possible return
codes (according to MSDN) cannot happen; in that case the
CloseHandle() was actually dead code.
Fixed by removing the CloseHandle() call. No test case added
because it's not possible to test for the absence of dead code.
Problem: mixing long and long long types in a comparison may lead to wrong results on some platforms.
Fix: prefer [unsigned] long long for [u]longlong, as it's used unconditionally in many places.
The patch contains the following changes:
- Introduce auxiliary functions for convenient work with character sets:
- resolve_charset();
- resolve_collation();
- get_default_db_collation();
- Introduce lex_string_set();
- Refactor Table_trigger_list::process_triggers() &
sp_head::execute_trigger() to be consistent with other code;
- Move reusable code from add_table_for_trigger() into
build_trn_path(), check_trn_exists() and load_table_name_for_trigger()
to be used in the following patch.
- Rename triggers_file_ext and trigname_file_ext into TRN_EXT and
TRG_EXT respectively.
and is not described in the manual
- Adding missing initialization for utf8 collations
- Minor code clean-ups: renaming variables,
moving code into a new separate function.
- Adding test, to check that both ucs2 and utf8 user
defined collations work (ucs2_test_ci and utf8_test_ci)
- Adding Vietnamese collation as a complex user defined
collation example.
The value of the "low-priority-updates" option and the LOW PRIORITY
prefix was taken into account at parse time.
This caused triggers (among others) to ignore this flag (if
supplied for the DML statement).
Moved the reading of the LOW_PRIORITY flag to run time.
Fixed an inconsistency in handling
SET GLOBAL LOW_PRIORITY_UPDATES: now it is in effect for
delayed INSERTs.
Tested by checking the effect of the LOW_PRIORITY flag via a
trigger.
Setting a key_cache_block_size which is not a power of 2
could corrupt MyISAM tables.
A couple of computations in the key cache code use bit
operations which only work if key_cache_block_size
is a power of 2.
Replaced the bit operations with arithmetic operations
to make the key cache able to handle block sizes that are
not a power of 2.
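The essence of the change, sketched: the bit operations noted in the
comments are only equivalent to the arithmetic forms when block_size is
a power of 2:

#include <assert.h>

static unsigned block_offset(unsigned pos, unsigned block_size)
{
  /* was: pos & (block_size - 1) -- power-of-2 sizes only */
  return pos % block_size;
}

static unsigned block_start(unsigned pos, unsigned block_size)
{
  /* was: pos & ~(block_size - 1) -- power-of-2 sizes only */
  return pos - pos % block_size;
}

int main()
{
  assert(block_offset(2500, 1000) == 500);  /* works for non-power-of-2 */
  assert(block_start(2500, 1000) == 2000);
  return 0;
}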
When storing a large number to a FLOAT or DOUBLE field with fixed length, it could be incorrectly truncated if the field's length was greater than 31.
This patch also does some code cleanups to be able to reuse code which is common between Field_float::store() and Field_double::store().
Don't try to determine the stack direction at configure time
compiler_flag.m4:
Use AC_TRY_COMPILE and AC_TRY_LINK instead of AC_TRY_RUN where possible
misc.m4, configure.in:
Use fourth argument to AC_TRY_RUN, to be used in cross compilation
Don't try to determine the stack direction at configure time