Our RPL_VERSION_HACK prefix caused MySQL clients to always report the major
and minor version as 5.5, even if a specific fake version was passed via
my.cnf or command-line parameters. When a specific version is requested,
don't employ the RPL_VERSION_HACK prefix within the server handshake
packet.
Also include fixes by Vladislav Vaintroub to the
aws_key_management plugin: the AWS C++ SDK specifically depends on
OPENSSL_LIBRARIES, not on the generic SSL_LIBRARIES (which may be YaSSL).
* based on RANGE pruning by COLUMNS (sys_trx_end) condition
* removed DEFAULT; AS OF NOW is always last; current VERSIONING as last non-empty (or first empty)
* Min/Max stats in TABLE_SHARE
* ALTER TABLE ADD PARTITION adds before the AS OF NOW partition
* one `AS OF NOW`, multiple `VERSIONING` partitions;
* rotation of `VERSIONING` partitions by record count or time period;
* rotation is multi-threaded;
* conventional subpartitions as bottom level for versioned partitions;
* `DEFAULT` keyword selects first `VERSIONING` partition;
* ALTER TABLE ADD/DROP partition;
* REBUILD PARTITION basic operation.
because mysql->net.thd was reset to NULL in mysql_real_connect()
and thd_increment_bytes_received() didn't do anything.
Fix:
* set mysql->net.thd to current_thd instead.
* remove the test for non-null THD from the very frequently used
function thd_increment_bytes_received().
Intermediate commit.
Implement status variables to aid the DBA in determining the need
and/or effectiveness of the per-engine mysql.gtid_slave_pos feature:
transactions_multi_engine
Number of transactions that changed data in multiple (transactional)
storage engines.
rpl_transactions_multi_engine
Number of replicated transactions that involved changes in multiple
(transactional) storage engines, before considering the update of the
mysql.gtid_slave_posXXX table.
transactions_gtid_foreign_engine
Number of replicated transactions where the update of the
mysql.gtid_slave_posXXX table had to choose a storage engine that did not
otherwise participate in the transaction.
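For example, a DBA could inspect these counters with SHOW STATUS (a minimal
sketch; the variable names are the ones listed above):
  SHOW GLOBAL STATUS LIKE 'transactions_multi_engine';
  SHOW GLOBAL STATUS LIKE 'rpl_transactions_multi_engine';
  SHOW GLOBAL STATUS LIKE 'transactions_gtid_foreign_engine';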
Benefits of this patch:
- Removed a lot of calls to strlen(), especially for field_string
- Strings generated by parser are now const strings, less chance of
accidentally changing a string
- Removed a lot of calls with LEX_STRING as parameter (changed to pointer)
- More uniform code
- Item::name_length was not kept up to date. Now fixed
- Several bugs found and fixed (Access to null pointers,
access of freed memory, wrong arguments to printf like functions)
- Removed a lot of casts from (const char*) to (char*)
Changes:
- This caused some ABI changes
- lex_string_set now uses LEX_CSTRING
- Some functions now take const char* instead of char*
- Create_field::change and after changed to LEX_CSTRING
- handler::connect_string, comment and engine_name() changed to LEX_CSTRING
- Checked printf() related calls to find bugs. Found and fixed several
errors in old code.
- A lot of changes from LEX_STRING to LEX_CSTRING, especially related to
parsing and events.
- Some changes from LEX_STRING and LEX_STRING & to LEX_CSTRING*
- Some changes from char* to const char*
- Added printf argument checking for my_snprintf()
- Introduced null_clex_str, star_clex_string, temp_lex_str to simplify
code
- Added item_empty_name and item_used_name to be able to distinguish between
items that were given an empty name and items that were not given a name.
This is used in sql_yacc.yy to know when to give an item a name.
- select table_name."*" is no longer the same as table_name.*
- removed the unused function Item::rename()
- Added comparison of item->name_length before some calls to
my_strcasecmp() to speed up comparisons
- Moved Item_sp_variable::make_field() from item.h to item.cc
- Some minimal code changes to avoid copying to const char *
- Fixed wrong error message in wsrep_mysql_parse()
- Fixed wrong code in find_field_in_natural_join() where real_item() was
set when it shouldn't have been
- ER_ERROR_ON_RENAME was used with extra arguments.
- Removed some (wrong) ER_OUTOFMEMORY, as alloc_root already
gives the error.
TODO:
- Check possible unsafe casts in plugin/auth_examples/qa_auth_interface.c
- Change code to not modify LEX_CSTRING for database name
(as part of lower_case_table_names)
Intermediate commit.
Implement a --gtid-pos-auto-engines system variable. The variable is a list
of engines for which mysql.gtid_slave_pos_ENGINE should be auto-created if
needed.
This commit only implements the option variable. It is not yet used to
actually auto-create any tables, nor is there a corresponding command-line
version of the option yet.
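As a sketch of how the variable could eventually be used once auto-creation is
implemented (the engine names here are purely illustrative, and it is assumed
the variable is settable with SET GLOBAL):
  SET GLOBAL gtid_pos_auto_engines = 'innodb,tokudb';
  SHOW GLOBAL VARIABLES LIKE 'gtid_pos_auto_engines';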
Working features:
CREATE OR REPLACE [TEMPORARY] SEQUENCE [IF NOT EXISTS] name
[ INCREMENT [ BY | = ] increment ]
[ MINVALUE [=] minvalue | NO MINVALUE ]
[ MAXVALUE [=] maxvalue | NO MAXVALUE ]
[ START [ WITH | = ] start ] [ CACHE [=] cache ] [ [ NO ] CYCLE ]
ENGINE=xxx COMMENT=".."
SELECT NEXT VALUE FOR sequence_name;
SELECT NEXTVAL(sequence_name);
SELECT PREVIOUS VALUE FOR sequence_name;
SELECT LASTVAL(sequence_name);
SHOW CREATE SEQUENCE sequence_name;
SHOW CREATE TABLE sequence_name;
CREATE TABLE sequence-structure ... SEQUENCE=1
ALTER TABLE sequence RENAME TO sequence2;
RENAME TABLE sequence TO sequence2;
DROP [TEMPORARY] SEQUENCE [IF EXISTS] sequence_names
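A short usage sketch combining the statements above (object names and values
are illustrative; the results in the comments assume the behaviour works as
listed):
  CREATE SEQUENCE s1 INCREMENT BY 10 START WITH 100 CACHE 5 ENGINE=InnoDB;
  SELECT NEXT VALUE FOR s1;    -- 100
  SELECT NEXTVAL(s1);          -- 110
  SELECT LASTVAL(s1);          -- 110
  SHOW CREATE SEQUENCE s1;
  DROP SEQUENCE IF EXISTS s1;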
Missing features:
- SETVAL(value,sequence_name), to be used with replication.
- Check replication, including checking that sequence tables are marked
not transactional.
- Check that a commit happens for NEXT VALUE that changes table data (may
already work)
- ALTER SEQUENCE. ANSI SQL version of setval.
- Share identical sequence entries to not add things twice to table list.
- testing insert/delete/update/truncate/load data
- Run and fix Alibaba sequence tests (part of mysql-test/suite/sql_sequence)
- Write documentation for NEXT VALUE / PREVIOUS_VALUE
- NEXTVAL in DEFAULT
- Ensure that NEXTVAL in DEFAULT uses database from base table
- Two NEXTVAL for same row should give same answer.
- Oracle syntax sequence_table.nextval, without any FOR or FROM.
- Sequence tables are treated as 'not read constant tables' by SELECT; it would
be better if we had a separate list for sequence tables so that
SELECT doesn't know about them, except when referred to in FROM.
Other things done:
- Improved output for safemalloc backtrace
- frm_type_enum changed to Table_type
- Removed lex->is_view and replaced with lex->table_type. This allows
us to more easily check if an item is a view, sequence or table.
- Added table flag HA_CAN_TABLES_WITHOUT_ROLLBACK, needed for handlers
that want to support sequences
- Added handler calls:
- engine_name(), to simplify getting engine name for partition and sequences
- update_first_row(), to be able to do efficient sequence implementations.
- Made binlog_log_row() global to be able to call it from ha_sequence.cc
- Added handler variable: row_already_logged, to be able to flag that the
changed row is already logged to the replication log.
- Added CF_DB_CHANGE and CF_SCHEMA_CHANGE flags to simplify
deny_updates_if_read_only_option()
- Added sp_add_cfetch() to avoid new conflicts in sql_yacc.yy
- Moved code for add_table_options() out from sql_show.cc::show_create_table()
- Added String::append_longlong() and used it in sql_show.cc to simplify code.
- Added extra option to dd_frm_type() and ha_table_exists to indicate if
the table is a sequence. Needed by DROP SEQUENCE to not drop a table.
Define my_thread_id as an unsigned type, to avoid mismatch with
ulonglong. Change some parameters to this type.
Use size_t in a few more places.
Declare many flag constants as unsigned to avoid sign mismatch
when shifting bits or applying the unary ~ operator.
When applying the unary ~ operator to enum constants, explicitly
cast the result to an unsigned type, because enum constants can
be treated as signed.
In InnoDB, change the source code line number parameters from
ulint to unsigned type. Also, make some InnoDB functions return
a narrower type (unsigned or uint32_t instead of ulint;
bool instead of ibool).
Applied revision 7f38a07, which was lost in a merge:
MDEV-10043 - main.events_restart fails sporadically in buildbot (crashes upon
shutdown)
There was a race condition between the shutdown thread and event worker threads.
The shutdown thread waits for thread_count to become 0 in close_connections(). It
may happen that an event worker thread was started but had not yet incremented
thread_count by that time. In this case the shutdown thread may fail to wait for this
worker thread and continue deinitialization. The worker thread in turn may continue
execution and crash on deinitialized data.
Fixed by incrementing thread_count before the thread is actually created, as is
done for connection threads.
Also let the event scheduler not increment/decrement the running threads counter,
for symmetry with other "service" threads.
==== Description ====
Flashback can rollback the instances/databases/tables to an old snapshot.
It is implemented at the server level using full-image-format binary logs (--binlog-row-image=FULL), so it supports all engines.
Currently, it is a feature of the mysqlbinlog tool (the --flashback argument).
Because the flashback binlog events are stored in memory, you should check that there is enough memory on your machine.
==== New Arguments to mysqlbinlog ====
--flashback (-B)
It makes mysqlbinlog work in flashback mode.
==== New Arguments to mysqld ====
--flashback
Set up the server to use flashback. This enables the binary log in row mode
and enables the extra logging of DDLs needed by the flashback feature.
==== Example ====
I have a table "t" in database "test", we can compare the output with "--flashback" and without.
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" > /tmp/1.sql
#client/mysqlbinlog /data/mysqldata_10.0/binlog/mysql-bin.000001 -vv -d test -T t --start-datetime="2013-03-27 14:54:00" -B > /tmp/2.sql
Then, by importing the output flashback file (/tmp/2.sql), you can flash back your database/table to the specified time (--start-datetime).
And if you know the exact position, "--start-position" also works: mysqlbinlog will output the flashback log that flashes back to the "--start-position" position.
==== Implementation ====
1. As we know, if binlog_format is ROW (binlog-row-image=FULL in 10.1 and later), all column values are stored in the row event, so we can get the data as it was before the mis-operation.
2. Then do the following:
2.1 Change the event type: INSERT->DELETE, DELETE->INSERT.
For example:
INSERT INTO t VALUES (...) ---> DELETE FROM t WHERE ...
DELETE FROM t ... ---> INSERT INTO t VALUES (...)
2.2 For an Update_Event, swap the SET part and the WHERE part.
For example:
UPDATE t SET cols1 = vals1 WHERE cols2 = vals2
--->
UPDATE t SET cols2 = vals2 WHERE cols1 = vals1
2.3 For a multi-row event, reverse the row sequence, from the last row to the first row.
For example:
DELETE FROM t WHERE id=1; DELETE FROM t WHERE id=2; ...; DELETE FROM t WHERE id=n;
--->
DELETE FROM t WHERE id=n; ...; DELETE FROM t WHERE id=2; DELETE FROM t WHERE id=1;
2.4 Output the events in reverse order, from the last one back to the first one where the mis-operation happened.
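As an illustration of the reversed output order (a sketch only, not actual
mysqlbinlog output; table and values are made up):
  -- events as they appear in the binary log (oldest first):
  INSERT INTO t VALUES (1);
  UPDATE t SET c1=2 WHERE c1=1;
  DELETE FROM t WHERE c1=2;
  -- flashback output (the newest change is reverted first):
  INSERT INTO t VALUES (2);
  UPDATE t SET c1=1 WHERE c1=2;
  DELETE FROM t WHERE c1=1;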
otherwise we'd need to store sql_mode *per vcol*
(consider CREATE INDEX...), and how would SHOW CREATE TABLE
support that?
Additionally, get rid of vcol::expr_str, just to make sure
the string is always generated and never leaked in the
original form.
Add some new event types for compressed events; they are:
QUERY_COMPRESSED_EVENT,
WRITE_ROWS_COMPRESSED_EVENT_V1,
UPDATE_ROWS_COMPRESSED_EVENT_V1,
DELETE_ROWS_COMPRESSED_EVENT_V1,
WRITE_ROWS_COMPRESSED_EVENT,
UPDATE_ROWS_COMPRESSED_EVENT,
DELETE_ROWS_COMPRESSED_EVENT.
These events inherit from the corresponding uncompressed events. One of their constructors and the write
function have been overridden to handle compressing and uncompressing; everything else is exactly the same.
On the slave, the IO thread will uncompress and convert the events when receiving them from the master,
so the SQL and worker threads can stay unchanged.
Currently zlib is used as the compression algorithm. Other algorithms may be supported in the future.
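A minimal sketch of turning the compression on (this assumes the
log_bin_compress and log_bin_compress_min_len server options that accompany
this feature):
  SET GLOBAL log_bin_compress = ON;
  SET GLOBAL log_bin_compress_min_len = 256;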
When a deadlock kill is detected inside the storage engine, the kill
is not done immediately, to avoid calling back into the storage engine
kill_query method with various lock subsystem mutexes held. Instead the
kill is queued and done later by a slave background thread.
This patch is in preparation for fixing TokuDB optimistic parallel
replication, as well as for removing locking hacks in InnoDB/XtraDB in
10.2.
Signed-off-by: Kristian Nielsen <knielsen at knielsen-hq.org>
This makes it easier to set up a master, as one only has to set --log-bin.
Before this patch, if one set up the master with just --log-bin, slaves
could not connect until server_id was set on the master, which could be
both confusing and hard to do.
- Fixed typos
- Added --core-on-failure to mysql-test-run
- More DBUG_PRINT in viosocket.c
- Don't forget CLIENT_REMEMBER_OPTIONS for compressed slave protocol
- Removed unused stage variables
Since wsrep_sync_wait & wsrep_causal_reads variables are related,
they are always kept in sync whenever one of them changes.
The same is attempted on server start, where wsrep_sync_wait gets updated
based on the value of wsrep_causal_reads. But since wsrep_causal_reads
is OFF by default, wsrep_sync_wait's value gets modified and loses
its WSREP_SYNC_WAIT_BEFORE_READ bit.
Fixed by syncing wsrep_sync_wait & wsrep_causal_reads values
individually on server start in mysqld_get_one_option() based
on command line arguments used.
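For example, after the fix one would expect the two variables to stay
consistent (a sketch, assuming a Galera-enabled server):
  SET GLOBAL wsrep_causal_reads = ON;
  SHOW GLOBAL VARIABLES LIKE 'wsrep_sync_wait';    -- bit 1 (READ) should now be set
  SET GLOBAL wsrep_sync_wait = 0;
  SHOW GLOBAL VARIABLES LIKE 'wsrep_causal_reads'; -- should now report OFF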
- Change some static variables to dynamic ones to ensure that we don't do any memory
allocations before the server starts or stops
- Print more memory information on SIGHUP. Fixed output.
- Write out if memory was lost if run with --debug-at-exit
- Fixed wrong #ifdef in sql_cache.cc
filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115)
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter.
- unique declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed.
- filesort_free_buffers() and free_io_cache() calls are removed
- Remove one malloc() when using get_addon_fields()
Other things:
- Added --debug-assert-on-not-freed-memory option to make it easier to
debug some not-freed-memory issues.
* only copy args[0] to args[2] after fix_fields (when all item
substitutions have already happened)
* change QT_ITEM_FUNC_NULLIF_TO_CASE (which allows printing NULLIF
as CASE) to QT_ITEM_ORIGINAL_FUNC_NULLIF (which prohibits it),
so that NULLIF-to-CASE is allowed by default and only disabled
explicitly for SHOW VIEW|FUNCTION|PROCEDURE and mysql_make_view.
By default it is allowed (in particular in error messages and
debug output, which can happen any time before or after the optimizer).
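A sketch of the user-visible effect (table and view names are illustrative):
  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT NULLIF(a, b) AS n FROM t1;
  SHOW CREATE VIEW v1;   -- expected to print nullif(`a`,`b`), not the CASE rewrite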
Create a CONNECT object on client connect and pass it to the worker thread, which creates the THD.
Split LOCK_thread_count into different mutexes
Added LOCK_thread_start to synchronize threads
Moved most usage of LOCK_thread_count to dedicated functions
Use next_thread_id() instead of thread_id++
Other things:
- Thread id now starts from 1 instead of 2
- Added cast for thread_id as thread id is now of type my_thread_id
- Made THD->host const (To ensure it's not changed)
- Removed some DBUG_PRINT() about entering/exiting mutexes, as these were already logged by the mutex code
- Fixed that aborted_connects and connection_errors_internal are counted in all cases
- Don't take locks for current_linfo when we set it (not needed as it was 0 before)
GENERATED BY THE EXP() FUNCTION
When generating the error message for numeric overflow, pass a flag to
Item::print() that prevents it from expanding constant expressions and
parameters to the values they evaluate to.
For consistency, also pass the flag to Item::print() when
Item_func_spatial_collection::fix_length_and_dec() generates an error
message. It doesn't make any difference at the moment, since constant
expressions haven't been evaluated yet when this function is called.
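For example, after this change an overflow raised while evaluating a constant
expression is expected to name the expression itself rather than the value it
evaluates to (the exact message text below is an assumption):
  SELECT EXP(1000);
  -- ERROR 1690 (22003): DOUBLE value is out of range in 'exp(1000)'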