Commit graph

627 commits

Author SHA1 Message Date
Brandon Nesterenko
81402c1318 MDEV-25222: mysqlbinlog --base64-output wrong option default drops BINLOG from output
Problem:
=======
The ALWAYS option of the mariadb-binlog --base64-output flag
formats its output incorrectly. This option is deprecated, and
MySQL 8.0 has removed it entirely.

Solution:
========
Adhere to MySQL and remove this option from MariaDB.

Behavioral Changes:
==================
Use Case: ./mariadb-binlog --base64-output
 Previous Behavior: Sets base64-output mode to always
 New Behavior: Error message indicating incomplete argument

Use Case: ./mariadb-binlog --base64-output=always
 Previous Behavior: Sets base64-output mode to always
 New Behavior: Error message indicating invalid argument value

Reviewed By:
==========
Andrei Elkin: <andrei.elkin@mariadb.com>
2021-05-17 14:38:46 -06:00
Nikita Malyavin
3f55c56951 Merge branch bb-10.4-release into bb-10.5-release 2021-05-05 23:57:11 +03:00
Nikita Malyavin
509e4990af Merge branch bb-10.3-release into bb-10.4-release 2021-05-05 23:03:01 +03:00
Nikita Malyavin
a8a925dd22 Merge branch bb-10.2-release into bb-10.3-release 2021-05-04 14:49:31 +03:00
Sujatha
abe6eb10a6 MDEV-16146: MariaDB slave stops with following errors.
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data:
heartbeat is not compatible with local info;the event's data: log_file_name
mysql-bin.000009 log_pos 1054262041, Error_code: 1623

Analysis:
=========
In a replication setup, when the master server doesn't have any events to send
to the slave it sends a 'Heartbeat_log_event'. This event carries the current
binary log file name and offset. The offset value is stored within 4 bytes of
the event header. When the size of the binary log exceeds UINT32_MAX, the
log_pos value no longer fits in 4 bytes of memory; it overflows and hence the
slave stops with an error.

Fix:
===
Since we cannot extend the common_header of the Log_event class, a
Log_event::log_pos value greater than 4GB is transported in a Heartbeat
event's sub-header. Log_event::log_pos is set to zero in that case to
indicate that the 8-byte sub-header is allocated in the event.
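
A minimal sketch of the encoding rule described above (function and buffer
names are illustrative, not the server's code):

  #include <cstdint>
  #include <cstring>

  // If the real offset no longer fits in the 4-byte header field, put 0 there
  // as the "look in the sub-header" marker and ship the full value separately.
  void encode_heartbeat_log_pos(unsigned char *header_pos,   /* 4 bytes */
                                unsigned char *subheader,    /* 8 bytes */
                                uint64_t real_log_pos,
                                bool *uses_subheader)
  {
    if (real_log_pos > UINT32_MAX)
    {
      uint32_t zero= 0;
      memcpy(header_pos, &zero, 4);          // log_pos=0 signals the sub-header
      memcpy(subheader, &real_log_pos, 8);   // full 8-byte offset travels here
      *uses_subheader= true;
    }
    else
    {
      uint32_t pos32= (uint32_t) real_log_pos;
      memcpy(header_pos, &pos32, 4);         // fits: classic 4-byte encoding
      *uses_subheader= false;
    }
  }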

In the case of cross-version replication the following behaviour is expected:

OLD - Server without fix
NEW - Server with fix

OLD<->NEW : works bidirectionally as long as the binlog offset is
            (normally) within 4GB.

When log_pos > UINT32_MAX
OLD->NEW  : The 'log_pos' is bound to overflow and the NEW slave may report
            an invalid event/incompatible heartbeat event error.
NEW->OLD  : Since the patched server sets log_pos=0 on overflow, the OLD slave
            will report an invalid event error.
2021-04-30 20:34:31 +05:30
Monty
3c4b8440eb Trivial fixups, no code changes
- Indentation changes
- Fixed a wrong name used in DBUG_ENTER
- Added some code comments
2020-10-21 03:09:29 +03:00
Marko Mäkelä
5ff7e68c7e Merge 10.4 into 10.5 2020-09-04 18:44:44 +03:00
Marko Mäkelä
c9cf6b13f6 Merge 10.3 into 10.4 2020-09-03 15:53:38 +03:00
Andrei Elkin
caa35f8e25 MDEV-16372 ER_BASE64_DECODE_ERROR upon replaying binary log via mysqlbinlog --verbose
(This commit is for 10.3 and upper branches)

In the case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the verbose option:
      BINLOG '
        base64 encoded data for A
        ### verbose section for A
        base64 encoded data for B
        ### verbose section for B
      '/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of such malformed BINLOG statements could be found in binlog_row_annotate.result,
which gets corrected with the patch.

The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that time.
The correctly produced output for the above case is now the following:
      BINLOG '
        base64 encoded data for A
        base64 encoded data for B
      '/*!*/;
        ### verbose section for A
        ### verbose section for B
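
The buffering idea, as a toy sketch (std::string stands in for the auxiliary
IO_CACHE; the struct and function names are made up for the illustration):

  #include <iostream>
  #include <string>
  #include <vector>

  struct RowsEvent { std::string base64; std::string verbose; bool stmt_end; };

  void print_rows_group(const std::vector<RowsEvent> &events)
  {
    std::string body, deferred_verbose;
    for (const RowsEvent &ev : events)
    {
      body+= ev.base64 + "\n";               // base64 stays inside BINLOG '...'
      deferred_verbose+= ev.verbose + "\n";  // verbose comments are held back
      if (ev.stmt_end)                       // terminal STMT_END event reached
      {
        std::cout << "BINLOG '\n" << body << "'/*!*/;\n";
        std::cout << deferred_verbose;       // emptied only after the body
        body.clear();
        deferred_verbose.clear();
      }
    }
  }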

Thanks to Alexey Midenkov for recognizing the problem and attempting to tackle
it, to Venkatesh Duggirala who produced an upstream patch whose idea is
exploited here, and to MDEV-23077 reporter LukeXwang who also contributed a
piece of a patch aiming at this issue.
2020-08-31 18:38:57 +03:00
Andrei Elkin
6112a0f93d MDEV-16372 ER_BASE64_DECODE_ERROR upon replaying binary log via mysqlbinlog --verbose
(This commit is exclusively for 10.2 branch. Do not merge it to 10.3)

In the case of a pattern of a non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events
with their pseudo-SQL representation produced by the verbose option:
      BINLOG '
        base64 encoded data for A
        ### verbose section for A
        base64 encoded data for B
        ### verbose section for B
      '/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of such malformed BINLOG statements could be found in binlog_row_annotate.result,
which gets corrected with the patch.

The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that time.
The correctly produced output for the above case is now the following:
      BINLOG '
        base64 encoded data for A
        base64 encoded data for B
      '/*!*/;
        ### verbose section for A
        ### verbose section for B

Thanks to Alexey Midenkov for recognizing the problem and attempting to tackle
it, to Venkatesh Duggirala who produced an upstream patch whose idea is
exploited here, and to MDEV-23077 reporter LukeXwang who also contributed a
piece of a patch aiming at this issue.
2020-08-31 18:37:44 +03:00
Oleksandr Byelkin
48b5777ebd Merge branch '10.4' into 10.5 2020-08-04 17:24:15 +02:00
Oleksandr Byelkin
57325e4706 Merge branch '10.3' into 10.4 2020-08-03 14:44:06 +02:00
Oleksandr Byelkin
c32f71af7e Merge branch '10.2' into 10.3 2020-08-03 13:41:29 +02:00
Oleksandr Byelkin
ef7cb0a0b5 Merge branch '10.1' into 10.2 2020-08-02 11:05:29 +02:00
Ian Gilfillan
d2982331a6 Code comment spellfixes 2020-07-22 23:18:12 +02:00
Monty
60f08dd555 MDEV-22925 ALTER TABLE s3_table ENGINE=Aria can cause failure on slave
When converting a table (test.s3_table) from S3 to another engine, the
following will be logged to the binary log:

DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
INSERT rows to test.t1 in binary-row-log-format

The bug is that the above statements are logged one by one to the binary
log. This means that a fast slave, configured to use the same S3 storage
as the master, would be able to execute the DROP and CREATE from the
binary log before the master has finished the ALTER TABLE.
In this case the slave would ignore the DROP (as it's on an S3 table) but
it will stop on the CREATE of the local table, as the table still exists in
S3. The REPLACE part will be ignored by the slave as it can't touch the
S3 table.

The fix is to ensure that all the above statements are written to the binary
log AFTER the table has been deleted from S3.
2020-06-19 12:03:13 +03:00
Monty
120b73a069 Speed up writing to encrypted binlogs
MDEV-21604

Added "virtual" low level write function encrypt_or_write that is set
to point to either normal or encrypted write functions.

This patch also fixes a possible memory leak if writing to binary log fails.
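
A rough illustration of the dispatch (the class layout and method names here
are invented; only the encrypt_or_write idea comes from the patch):

  #include <cstddef>

  class Binlog_file
  {
    typedef bool (Binlog_file::*write_func)(const unsigned char*, size_t);
    write_func encrypt_or_write;             // chosen once at open time

    bool write_plain(const unsigned char *buf, size_t len)
    { (void) buf; (void) len; return true; } // stands in for a plain file write
    bool write_encrypted(const unsigned char *buf, size_t len)
    { (void) buf; (void) len; return true; } // stands in for encrypt-then-write

  public:
    void init(bool encrypt)
    {
      encrypt_or_write= encrypt ? &Binlog_file::write_encrypted
                                : &Binlog_file::write_plain;
    }
    bool write(const unsigned char *buf, size_t len)
    { return (this->*encrypt_or_write)(buf, len); }  // no per-call if()
  };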
2020-03-24 21:00:03 +02:00
Monty
91ab42a823 Clean up and speed up interfaces for binary row logging
MDEV-21605 Clean up and speed up interfaces for binary row logging
MDEV-21617 Bug fix for previous version of this code

The intention is to have as few 'if's as possible in ha_write() and
related functions. This is done by pre-calculating, once per statement, the
row_logging state for all tables.

Benefits are simpler and faster code both when binary logging is disabled
and when it's enabled.
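
The gist of the pre-calculation, as a simplified sketch (the real decision in
THD::decide_logging_format()/THD::binlog_prepare_for_row_logging() involves
many more conditions than shown here):

  #include <vector>

  struct handler { bool row_logging; bool is_temporary_or_system; };

  // Once per statement: decide row logging for every table it touches.
  void prepare_for_row_logging(std::vector<handler*> &tables, bool row_format)
  {
    for (handler *h : tables)
      h->row_logging= row_format && !h->is_temporary_or_system;
  }

  // Hot path (ha_write_row() and friends): just test the cached flag.
  inline bool need_row_log(const handler *h) { return h->row_logging; }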

Changes:
- Added handler->row_logging to make it easy to check if a table should be
  row logged. This also made it easier to disable row logging for system,
  internal and temporary tables.
- The tables' row_logging capabilities are checked once per statement that
  updates tables, in THD::binlog_prepare_for_row_logging(), which
  is called when needed from THD::decide_logging_format().
- Removed most usage of tmp_disable_binlog(), reenable_binlog() and
  temporary saving and setting of thd->variables.option_bits.
- Moved checks that can't change during a statement from
  check_table_binlog_row_based() to check_table_binlog_row_based_internal()
- Removed flag row_already_logged (used by sequence engine)
- Moved binlog_log_row() to be a handler:: method
- Moved write_locked_table_maps() to THD::binlog_write_table_maps() as
  most other related binlog functions are in THD.
- Removed binlog_write_table_map() and binlog_log_row_internal() as
  they are now obsolete as 'has_transactions()' is pre-calculated in
  prepare_for_row_logging().
- Remove 'is_transactional' argument from binlog_write_table_map() as this
  can now be read from handler.
- Changed order of 'if's in handler::external_lock() and wsrep_mysqld.h
  to first evaluate fast and likely cases before more complex ones.
- Added error checking in ha_write_row() and related functions if
  binlog_log_row() failed.
- Don't clear check_table_binlog_row_based_result in
  clear_cached_table_binlog_row_based_flag() as it's not needed.
- THD::clear_binlog_table_maps() has been replaced with
  THD::reset_binlog_for_next_statement()
- Added 'MYSQL_OPEN_IGNORE_LOGGING_FORMAT' flag to open_and_lock_tables()
  to avoid calculating of binary log format for internal opens. This flag
  is also used to avoid reading statistics tables for internal tables.
- Added OPTION_BINLOG_LOG_OFF as a simple way to turn off binlogging temporarily
  for CREATE (instead of using THD::sql_log_bin_off).
- Removed flag THD::sql_log_bin_off (not needed anymore)
- Speed up THD::decide_logging_format() by remembering if blackhole engine
  is used and avoid a loop over all tables if it's not used
  (the common case).
- THD::decide_logging_format() is not called anymore if no tables are used
  for the statement. This will speed up pure stored procedure code with
  about 5%+ according to some simple tests.
- We now get annotated events on slave if a CREATE ... SELECT statement
  is transformed on the slave from statement to row logging.
- In the original code, the master could come into a state where row
  logging was enforced for all future events even if statement-based logging
  could be used. This is now partly fixed.

Other changes:
- Ensure that all tables used by a statement have query_id set.
- Had to restore the row_logging flag for unused tables in
  THD::binlog_write_table_maps (not a normal scenario)
- Removed injector::transaction::use_table(server_id_type sid, table tbl)
  as it's not used.
- Cleaned up set_slave_thread_options()
- Some more DBUG_ENTER/DBUG_RETURN, code comments and minor indentation
  changes.
- Ensure we only call THD::decide_logging_format_low() once in
  mysql_insert() (inefficiency).
- Don't annotate INSERT DELAYED
- Removed zeroing pos_in_table_list in THD::open_temporary_table() as it's
  already 0
2020-03-24 21:00:03 +02:00
Monty
6a9e24d046 Added support for replication for S3
MDEV-19964 S3 replication support

Added new configure options:
s3_slave_ignore_updates
"If the slave has shares same S3 storage as the master"

s3_replicate_alter_as_create_select
"When converting S3 table to local table, log all rows in binary log"

This allows one to configure slaves to have the S3 storage shared with or
independent from the master.

Other change:
Added a new session variable '@@sql_if_exists' to force IF EXISTS semantics for DDLs.
2020-03-24 21:00:02 +02:00
Andrei Elkin
c8ae357341 MDEV-742 XA PREPAREd transaction survive disconnect/server restart
Lifted the long-standing limitation of rolling back an XA transaction at its
connection's close even if the XA transaction is prepared.

A prepared XA transaction is now made to survive connection close or server
restart.
The patch consists of

    - binary logging extension to write the prepared XA part of the
      transaction, signified with its XID, in a new XA_prepare_log_event.
      The conclusion part - with the Commit or Rollback decision - is logged
      separately as a Query_log_event.
      That is, in the binlog the XA consists of two separate groups of
      events.

      That makes the whole XA possibly interleave in the binlog with
      other XAs or regular transactions, but with no harm to
      replication and data consistency.

      Gtid_log_event receives two more flags to identify which of the
      two XA phases of the transaction it represents. With either flag
      set, XID info is also added to the event.

      When binlog is ON on the server XID::formatID is
      constrained to 4 bytes.

    - engines are made aware of the server policy to keep user-prepared
      XAs, so they (InnoDB, RocksDB) don't roll them back anymore in their
      disconnect methods.

    - the slave applier is refined to cope with two-phase-logged XAs,
      including parallel modes of execution.

This patch does not address crash-safe logging of the new events which
is being addressed by MDEV-21469.

CORNER CASES: read-only, pure myisam, binlog-*, @@skip_log_bin, etc

Are addressed along the following policies.
1. The read-only at reconnect marks XID to fail for future
   completion with ER_XA_RBROLLBACK.

2. A binlog-* filtered XA that changes engine data is regarded as
   loggable even when nothing got cached for the binlog.  An empty
   XA-prepare group is recorded. The consequent Commit-or-Rollback
   succeeds in the engine(s) and is also recorded into the binlog.

3. The same applies to the non-transactional engine XA.

4. @@skip_log_bin=OFF does not record anything at XA-prepare
   (obviously), but the completion event is recorded into binlog to
   admit inconsistency with slave.

The following actions are taken by the patch.

At XA-prepare:
   when empty binlog cache - don't do anything to binlog if RO,
   otherwise write empty XA_prepare (assert(binlog-filter case)).

At Disconnect:
   when Prepared && RO (=> no binlogging was done)
     set Xid_cache_element::error := ER_XA_RBROLLBACK
     *keep* XID in the cache, and rollback the transaction.

At XA-"complete":
   Discover the error, if any don't binlog the "complete",
   return the error to the user.

Kudos
-----
Alexey Botchkov took to drive this work initially.
Sergei Golubchik, Sergei Petrunja, Marko Mäkelä provided a number of
good recommendations.
Sergei Voitovich made a magnificent review and improvements to the code.
They all deserve a bunch of thanks for making this work done!
2020-03-14 22:45:48 +02:00
Sergei Golubchik
7c58e97bf6 perfschema memory related instrumentation changes 2020-03-10 19:24:22 +01:00
Oleksandr Byelkin
980108ceeb MDEV-21833 Make slave_run_triggers_for_rbr enforce triggers to run on slave, even when there are triggers on the master
A slightly changed patch from Anders Karlsson, with examples added.

New parameter value "ENFORCE" added to slave-run-triggers-for-rbr.
2020-03-09 22:16:43 +02:00
Marko Mäkelä
a983b24407 Merge 10.4 into 10.5 2020-01-28 14:17:09 +02:00
Oleksandr Byelkin
bfc24bb2ec Merge branch '10.3' into 10.4 2020-01-24 14:50:23 +01:00
Oleksandr Byelkin
ceda5f724f Merge branch '10.2' into 10.3 2020-01-24 14:16:20 +01:00
Oleksandr Byelkin
f2ccfcaca1 Merge branch '10.1' into 10.2 2020-01-24 13:46:49 +01:00
Sujatha
599a06098b MDEV-21490: binlog tests fail with valgrind: Conditional jump or move depends on uninitialised value in sql_ex_info::init
Problem:
=======
P1) Conditional jump or move depends on uninitialised value(s)
    sql_ex_info::init(char const*, char const*, bool) (log_event.cc:3083)

Code: None of the following variables are initialized.
----
  return ((cached_new_format != -1) ? cached_new_format :
    (cached_new_format=(field_term_len > 1 || enclosed_len > 1 ||
    line_term_len > 1 || line_start_len > 1 || escaped_len > 1)));

P2) Conditional jump or move depends on uninitialised value(s)
    Rows_log_event::Rows_log_event(char const*, unsigned
      int, Format_description_log_event const*) (log_event.cc:9571)

Code: An uninitialized value is reported for the 'var_header_len' variable.
----
  if (var_header_len < 2 || event_len < static_cast<unsigned
      int>(var_header_len + (post_start - buf)))

P3) Conditional jump or move depends on uninitialised value(s)
    Table_map_log_event::pack_info(Protocol*) (log_event.cc:11553)

Code: 'm_table_id' is uninitialized.
----
  void Table_map_log_event::pack_info(Protocol *protocol)
  ...
  size_t bytes= my_snprintf(buf, sizeof(buf), "table_id: %lu (%s.%s)",
                              m_table_id, m_dbnam, m_tblnam);

Fix:
===
P1 - Fix)
Initialize the cached_new_format, field_term_len, enclosed_len, line_term_len,
line_start_len and escaped_len members in the default constructor.
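
A minimal illustration of this fix; the member types are guessed for the
sketch, the point is only that every member gets a defined value:

  struct sql_ex_info_sketch
  {
    int cached_new_format;
    unsigned char field_term_len, enclosed_len, line_term_len,
                  line_start_len, escaped_len;

    sql_ex_info_sketch()
      : cached_new_format(-1),        // "not computed yet" sentinel
        field_term_len(0), enclosed_len(0), line_term_len(0),
        line_start_len(0), escaped_len(0)
    {}
  };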

P2 - Fix)
"var_header_len" is initialized by reading the event buffer. In case of an
invalid event the buffer will contain invalid data. Hence added a check to
validate the event data. If event_len is smaller than valid header length
return immediately.

P3 - Fix)
'm_table_id' within Table_map_log_event is initialized by reading data from
the event buffer. Use 'VALIDATE_BYTES_READ' macro to validate the current
state of the buffer. If it is invalid return immediately.
2020-01-24 13:35:03 +05:30
Marko Mäkelä
28c89b7151 Merge 10.4 into 10.5 2019-12-16 07:47:17 +02:00
Oleksandr Byelkin
a15234bf4b Merge branch '10.3' into 10.4 2019-12-09 15:09:41 +01:00
Oleksandr Byelkin
008ee867a4 Merge branch '10.2' into 10.3 2019-12-04 17:46:28 +01:00
Oleksandr Byelkin
f8b5e147da Merge branch '10.1' into 10.2 2019-12-03 14:45:06 +01:00
Faustin Lammler
2df2238cb8 Lintian complains on spelling error
The lintian check complains on spelling error:
https://salsa.debian.org/mariadb-team/mariadb-10.3/-/jobs/95739
2019-12-02 12:41:13 +02:00
seppo
5c68343db7 MDEV-18497 CTAS async replication from mariadb master crashes galera nodes (#1410)
This PR contains an mtr test for reproducing a failure when replicating a create table as select statement (CTAS) through asynchronous mariadb replication to a mariadb galera cluster.
The problem happens when the CTAS replication contains a create table statement followed by row events for populating the table. In such a situation, the galera node operating as a mariadb replication slave will first replicate only the create table part into the cluster, and then perform another replication containing both the create table and the row events. This leads all other nodes to fail with a duplicate table create attempt, and to crash due to this failure.

The PR also contains a fix, which identifies the situation when a CTAS has been replicated, and scans further in the async replication stream to see if row events follow. The slave node will replicate either a single TOI, in case the CTAS table is empty, or, if the CTAS table contains rows, a single bundled write set with the create table and the row events to the galera cluster.

This fix should keep the master server's GTIDs for CTAS replication in sync with the GTIDs in the galera cluster.
2019-11-18 15:18:00 +02:00
Sachin
967c14c04e MDEV-20477 Merge binlog extended metadata support from the upstream
Cherry-picked the commits from MySQL, with some changes.
WL#4618 RBR: extended table metadata in the binary log

This patch extends Table Map Event. It appends some new fields for
more metadata. The new metadata includes:
- Signedness of Numeric Columns
- Character Set of Character Columns and Binary Columns
- Column Name
- String Value of SET Columns
- String Value of ENUM Columns
- Primary Key
- Character Set of SET Columns and ENUM Columns
- Geometry Type

Some of them are optional; the patch introduces a GLOBAL system
variable, binlog_row_metadata, to control them.
- Scope:   GLOBAL
- Dynamic: Yes
- Type:    ENUM
- Values:  {NO_LOG, MINIMAL, FULL}
- Default: NO_LOG
  Only Signedness, character set and geometry type are logged if it is MINIMAL.
  Otherwise all of them are logged.

Also added binlog_type_info() to Field, so that we can extract the
relevant binlog info from a field.
2019-09-11 15:09:35 +05:30
Oleksandr Byelkin
c07325f932 Merge branch '10.3' into 10.4 2019-05-19 20:55:37 +02:00
Marko Mäkelä
be85d3e61b Merge 10.2 into 10.3 2019-05-14 17:18:46 +03:00
Marko Mäkelä
26a14ee130 Merge 10.1 into 10.2 2019-05-13 17:54:04 +03:00
Vicențiu Ciorbaru
cb248f8806 Merge branch '5.5' into 10.1 2019-05-11 22:19:05 +03:00
Vicențiu Ciorbaru
5543b75550 Update FSF Address
* Update wrong zip-code
2019-05-11 21:29:06 +03:00
Oleksandr Byelkin
93ac7ae70f Merge branch '10.3' into 10.4 2019-02-21 14:40:52 +01:00
Oleksandr Byelkin
65c5ef9b49 dirty merge 2019-02-07 13:59:31 +01:00
Marko Mäkelä
081fd8bfa2 Merge 10.1 into 10.2 2019-02-02 11:40:02 +02:00
Andrei Elkin
5d48ea7d07 MDEV-10963 Fragmented BINLOG query
The problem was originally stated in
  http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event is about 4/3 times that of its
vanilla byte representation.
When a binlogged event's size is about 1GB, mysqlbinlog generates
a BINLOG query that can't be sent out due to its size.

It is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event is over 1GB in size.
In such a case mysqlbinlog puts out

    SET @binlog_fragment_0='base64-encoded-fragment_0';
    SET @binlog_fragment_1='base64-encoded-fragment_1';
    BINLOG @binlog_fragment_0, @binlog_fragment_1;

to represent a big BINLOG.
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1} = NULL were assigned.

Notice that 2 fragments are enough, though the client and server may still
need to tweak their @@max_allowed_packet to accommodate the fragment
size (which they would have to do anyway with a greater number of
fragments, should that be desired).
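
A toy version of the wrapping logic (the real code fragments an IO_CACHE via
copy_cache_to_file_wrapped(); the function name and the in-memory string here
are simplifications):

  #include <iostream>
  #include <string>

  void print_binlog_query(const std::string &base64, std::string::size_type limit)
  {
    if (base64.size() <= limit)
    {
      std::cout << "BINLOG '\n" << base64 << "\n'/*!*/;\n";
      return;
    }
    std::string::size_type half= base64.size() / 2;   // (approximate) halves
    std::cout << "SET @binlog_fragment_0='" << base64.substr(0, half) << "';\n";
    std::cout << "SET @binlog_fragment_1='" << base64.substr(half) << "';\n";
    std::cout << "BINLOG @binlog_fragment_0, @binlog_fragment_1;\n";
  }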

On the lower level the following changes are made:

Log_event::print_base64()
  still calls the encoder and stores the encoded data into a cache, but
  now *without* doing any formatting. The latter is left for the time
  when the cache is copied to an output file (e.g. mysqlbinlog output).
  The no-formatting behavior is also reflected by the change in the meaning
  of the last argument, which specifies whether to cache the encoded data.

Rows_log_event::print_helper()
  is made to invoke a specialized fragmenting cache-to-file copying function,
  which is

copy_cache_to_file_wrapped()
  that takes care of the fragmenting and also optionally wraps the encoded
  strings (fragments) into SQL stanzas.

my_b_copy_to_file()
  is refactored into my_b_copy_all_to_file(). The former function
  is generalized to accept a limit argument to constrain the copying, and it
  no longer reinitializes the cache into reading mode.
  The limit has no effect on a fully read cache.
2019-01-24 20:44:50 +02:00
Alexander Barkov
269da4bf19 MDEV-5377 Row-based replication of MariaDB temporal data types with FSP>0 into a different column type 2018-12-04 15:44:14 +04:00
Monty
30ebc3ee9e Add likely/unlikely to speed up execution
Added to:
- if (error)
- Lex
- sql_yacc.yy and sql_yacc_ora.yy
- In header files to alloc() calls
- Added thd argument to thd_net_is_killed()
2018-05-07 00:07:32 +03:00
Alexander Barkov
9472f0f4a4 Adding "const" qualifier into a few methods and parameters in the LOAD code
Methods:
- Item_user_var_as_out_param::print_for_load()
- sql_exchange::escaped_given(void)

Parameters:
- sql_exchange in write_execute_load_query_log_event()
- sql_exchange in mysql_load()
- sql_exchange in Load_log_event::Load_log_event()

Also, removing cast to "char*" in a few places in
Load_log_event::Load_log_event()
2018-03-15 23:29:48 +04:00
Vladislav Vaintroub
6c279ad6a7 MDEV-15091 : Windows, 64bit: reenable and fix warning C4267 (conversion from 'size_t' to 'type', possible loss of data)
Handle string lengths as size_t, consistently (almost always :)).
Change function prototypes to accept size_t where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
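
An illustration of the pattern (function names invented for the example): the
length stays size_t end to end, so the 64-bit Windows build no longer warns
about narrowing conversions:

  #include <cstddef>
  #include <cstring>

  // before: void append_name(char *to, const char *name, uint len);
  void append_name(char *to, const char *name, size_t len)
  {
    memcpy(to, name, len);    // no size_t -> uint truncation anywhere
    to[len]= '\0';
  }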

This fix excludes rocksdb, spider, sphinx and connect for now.
2018-02-06 12:55:58 +00:00
Monty
f55dc7f733 Change C_STRING_WITH_LEN to STRING_WITH_LEN
This preserves const str for constant strings

Other things
- A few variables where changed from LEX_STRING to LEX_CSTRING
- Incident_log_event::Incident_log_event and record_incident were
  changed to take LEX_CSTRING* as an argument instead of LEX_STRING
2018-01-30 21:33:56 +02:00
Monty
a7e352b54d Changed database, tablename and alias to be LEX_CSTRING
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db

Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
  for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because process list_db wasn't always
  correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
  handling from temporary space
- Ensure we store the length after my_casedn_str() of table/db names
- Removed not used version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
  NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
2018-01-30 21:33:55 +02:00
Aleksey Midenkov
c59c1a0736 System Versioning 1.0 pre8
Merge branch '10.3' into trunk
2018-01-10 12:36:55 +03:00