When we load the slave state from the mysql.gtid_slave_pos table at server start, we
need to load all the rows into the in-memory hash, not just the most recent
one in each replication domain. Otherwise we accumulate cruft in the form of
old rows each time the server restarts.
In record_gtid(), too many rows were deleted from the slave position
hash - we need to always hold on to the most recent committed row,
so we have a valid slave position at all times.
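As an illustration of the intended behaviour, a minimal C++ sketch; the
GtidEntry/load_row/record_gtid names and the containers here are
simplifications, not the server's actual rpl_gtid structures:
  #include <cstdint>
  #include <map>
  #include <vector>

  struct GtidEntry { uint64_t sub_id; uint32_t server_id; uint64_t seq_no; };

  // In-memory slave position: all rows from mysql.gtid_slave_pos,
  // grouped per replication domain.
  std::map<uint32_t, std::vector<GtidEntry>> slave_pos;

  // At server start: load *every* row, not just the newest per domain,
  // so stale rows are known and can be purged later instead of piling up.
  void load_row(uint32_t domain_id, const GtidEntry &e)
  {
    slave_pos[domain_id].push_back(e);
  }

  // When recording a newly committed GTID: old rows may go, but the most
  // recent committed row must always survive, so a valid slave position
  // exists at all times.
  void record_gtid(uint32_t domain_id, const GtidEntry &latest)
  {
    auto &rows = slave_pos[domain_id];
    rows.clear();
    rows.push_back(latest);
  }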
Also, fix a bug in STR_TO_DATE(): it erroneously returned an error
in strict mode for dates like '0000-01-01'
(zero year, but non-zero month and day).
According to the manual:
- NO_ZERO_DATE disallows 0000-00-00 (all date parts are zeros)
- NO_ZERO_IN_DATE disallows zero month (YYYY-00-DD) or day (YYYY-MM-00).
0000-01-01 is a valid date, even in strict mode.
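For reference, a small self-contained C++ sketch of these rules (the
rejected() helper is hypothetical, not the server's check_date() code):
  #include <cstdio>

  // NO_ZERO_DATE rejects 0000-00-00; NO_ZERO_IN_DATE rejects a zero month
  // or a zero day. A zero year alone is not an error.
  bool rejected(unsigned year, unsigned month, unsigned day,
                bool no_zero_date, bool no_zero_in_date)
  {
    if (year == 0 && month == 0 && day == 0)
      return no_zero_date;               // 0000-00-00
    if (month == 0 || day == 0)
      return no_zero_in_date;            // YYYY-00-DD or YYYY-MM-00
    return false;                        // 0000-01-01 passes both checks
  }

  int main()
  {
    // Strict mode enables both flags; '0000-01-01' must still be accepted.
    std::printf("0000-00-00 -> %s\n", rejected(0, 0, 0, true, true) ? "error" : "ok");
    std::printf("2013-00-15 -> %s\n", rejected(2013, 0, 15, true, true) ? "error" : "ok");
    std::printf("0000-01-01 -> %s\n", rejected(0, 1, 1, true, true) ? "error" : "ok");
    return 0;
  }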
modified:
mysql-test/r/func_time.result
mysql-test/r/strict.result
mysql-test/t/func_time.test
mysql-test/t/strict.test
sql/item_timefunc.cc
sql/sql_time.h
pending merges:
Alexander Barkov 2013-06-17 MDEV-4635 Crash in UNIX_TIMESTAMP(STR_TO_DAT...
- temporary tables now work
- mysql-system_tables updated to not use temporary tables
- PASSWORD() function fixed
- Support for STATS_AUTO_RECALC, STATS_PERSISTENT and STATS_SAMPLE_PAGES table options
Remove special handling for ~ in the middle of the path in cleanup_dir_name() - according to Monty, this was ancient code that tried to emulate Emacs behavior
See also MySQL Bug #39750 and similar ones.
Fix my_delete() on Windows to safely remove files, including files that are
opened by other threads in the same process, by antivirus software, or by
backup applications. If the file to be deleted is also opened by another
thread, the file is renamed to a unique name prior to deletion - this makes
it possible to create a file with the same name right after the deletion.
With this patch my_delete_allow_opened() becomes obsolete and is replaced with my_delete().
This patch is a rework of the patch http://lists.mysql.com/commits/59327 for MySQL bug#39750.
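A Windows-only sketch of the rename-before-delete idea (the safe_delete()
name is illustrative; the real implementation lives in mysys and handles
errors and retries more carefully):
  #include <windows.h>
  #include <string>

  bool safe_delete(const std::string &path)
  {
    // Rename to a unique temporary name first: even if another thread, an
    // antivirus or a backup tool still holds a handle, the original name
    // becomes free immediately, so a new file with the same name can be
    // created right after this call returns.
    std::string tmp = path + ".deleted." +
                      std::to_string(GetCurrentProcessId()) + "." +
                      std::to_string(GetCurrentThreadId());
    const char *victim = MoveFileA(path.c_str(), tmp.c_str()) ? tmp.c_str()
                                                              : path.c_str();
    // Delete the (possibly renamed) file; for handles opened with
    // FILE_SHARE_DELETE the actual removal happens once the last handle
    // is closed.
    return DeleteFileA(victim) != 0;
  }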
Implement discovery of table non-existence, and related changes
(see the sketch after this list):
1. Split GTS_FORCE_DISCOVERY (which meant two different things in
two different functions) into GTS_FORCE_DISCOVERY and GTS_USE_DISCOVERY.
2. Move GTS_FORCE_DISCOVERY implementation into open_table_def().
3. In recover_from_failed_open() clear old errors *before* discovery,
not after successful discovery. The final error should come
from the discovery.
4. On forced discovery, delete the table's .frm first. Discovery will write
a new one, if desired.
5. If the .frm file exists, but the table does not exist in the engine, force
rediscovery if the engine supports it.
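A control-flow sketch of points 2, 4 and 5; only the GTS_* flag names come
from the change itself, everything else (the stubs and the _sketch function)
is illustrative:
  #include <cstdio>

  enum { GTS_USE_DISCOVERY = 1, GTS_FORCE_DISCOVERY = 2 };

  // Stubs standing in for server internals.
  bool frm_exists(const char *)                { return true; }
  bool engine_has_table(const char *)          { return false; }
  bool engine_supports_discovery(const char *) { return true; }
  void delete_frm(const char *name)  { std::printf("delete %s.frm\n", name); }
  bool discover(const char *name)    { std::printf("discover %s\n", name); return true; }

  bool open_table_def_sketch(const char *name, int flags)
  {
    if (flags & GTS_FORCE_DISCOVERY)
    {
      // Point 4: drop the old .frm first; discovery writes a new one
      // if the engine wants it.
      delete_frm(name);
      return discover(name);
    }
    if (frm_exists(name) && !engine_has_table(name) &&
        engine_supports_discovery(name))
      return discover(name);   // point 5: .frm without engine data -> rediscover
    return frm_exists(name);
  }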
1. The default db type for partitions was stored as a 1-byte DB_TYPE code,
which doesn't work for dynamically generated codes.
2. The storage engine plugin for the default db type wasn't locked at all,
which could trivially crash for dynamic plugins.
Now the storage engine name is stored in the extra2 section,
and the plugin is correctly locked.
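For illustration only (nothing below reflects the actual frm/extra2 format):
why a runtime typecode is not a stable on-disk identifier for a dynamically
loaded engine, while its name is:
  #include <cstdint>
  #include <map>
  #include <string>

  // Typecodes for dynamic engines are handed out at plugin load time, so
  // their values depend on load order (and on my.cnf) and can change
  // between restarts; they may also not fit in a single byte.
  std::map<std::string, uint32_t> runtime_typecode;

  uint32_t register_engine(const std::string &name)
  {
    static uint32_t next_code = 42;       // arbitrary start, varies per run
    auto it = runtime_typecode.find(name);
    if (it == runtime_typecode.end())
      it = runtime_typecode.emplace(name, next_code++).first;
    return it->second;
  }

  // Persisting the engine *name* stays valid across restarts; the typecode
  // (and the plugin reference that must be locked) is resolved after load.
  struct persisted_default_engine { std::string engine_name; };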
1. DROP DATABASE should use ha_discover_table_names(), not look at .frm files.
2. filename_to_tablename() also encodes temp file names #sql- -> #mysql50##sql
3. no special treatment for #sql- files, no TABLE_LIST::internal_tmp_table
4. also discover table file names that start with #
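A sketch of the encoding in point 2 (the helper below is illustrative, not
the server's filename_to_tablename(), which also decodes other escapes):
  #include <string>

  // Temporary-table files on disk are named #sql-... ; when such a file
  // name is mapped back to a table name it gets the mysql50 prefix:
  // "#sql-..." -> "#mysql50##sql-..."
  std::string filename_to_tablename_sketch(const std::string &fname)
  {
    if (fname.compare(0, 5, "#sql-") == 0)
      return "#mysql50#" + fname;
    return fname;
  }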
The non-blocking client can currently be built on Windows, with GCC on i386
and x64, or on any OS with the ucontext.h header. Prior to this patch, the
build failed if none of these conditions was true.
Fix to avoid compiler errors in that case - the non-blocking API would not be
useful on such platforms, but otherwise everything will work as before.
Partitioning didn't store the name of the default storage engine for partitions
in the frm file - it only stored the typecode. Typecodes aren't stable and
might vary depending on the order in which storage engines are loaded (this
can change even from my.cnf, without recompilation).
As a temporary workaround for 5.5, we hard-code Aria's typecode, to make sure it
never changes.
modified:
storage/connect/tabmul.cpp
- Fix regression error on field types: the BINARY_FLAG that is set for DATE
columns cannot be eliminated. Update result for the dbf.test case.
modified:
storage/connect/ha_connect.cc
storage/connect/mysql-test/connect/r/dbf.result
SHOW PROCESSLIST might see a thread that started executing a query *after*
SHOW PROCESSLIST itself has started. Don't show a negative or a huge
wrapped-around query execution time in that case.
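A minimal sketch of the guard (the function and parameter names are
illustrative, not the server's code):
  #include <cstdint>

  // now and query_start are timestamps in the same unit (e.g. microseconds).
  // A thread that began its query after SHOW PROCESSLIST took its own start
  // time would otherwise yield a negative difference, which shows up as a
  // huge value once cast to an unsigned type.
  uint64_t query_time_for_processlist(uint64_t now, uint64_t query_start)
  {
    return now > query_start ? now - query_start : 0;
  }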