1. Passes wsrep_sst_auth_value to SST scripts via the WSREP_SST_OPT_AUTH environment variable, so it never appears on the command line.
2. In the mysqldump and xtrabackup* SST scripts, which rely on MySQL authentication, the SST script sets the MYSQL_PWD environment variable instead of passing the password on the command line, so the password also never appears on the mysqldump/innobackupex command line.
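As an illustration of the pattern (not the actual SST script code; run_backup_tool is a hypothetical helper), a C++ sketch of handing a secret to a child process through the environment instead of argv:

    #include <cstdlib>
    #include <unistd.h>

    int run_backup_tool(const char *password)   // hypothetical helper
    {
        // Visible to the child process, but never on its command line,
        // so it cannot be read from `ps` output.
        setenv("MYSQL_PWD", password, 1 /* overwrite */);
        // argv carries no secret; only the program name and options.
        return execlp("mysqldump", "mysqldump", "--all-databases",
                      (char *) nullptr);
    }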
When wsrep is enabled, for any update on InnoDB tables, the
corresponding keys are appended to galera's transaction writeset
(wsrep_append_keys()). However, for LOAD DATA this was skipped
if binary logging was disabled or was not ROW-based.
As a result, while the updates from LOAD DATA on non-partitioned
tables replicated fine (wsrep implicitly enables binary logging
if it is not already enabled), the same did not work on partitioned
tables, since for partitioned tables binary logging is disabled
temporarily (ha_partition::write_row()).
Fixed by removing the unwanted conditions from the check.
Also backported some changes from 10.0-galera to make sure
wsrep_load_data_splitting affects LOAD DATA commands only.
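A hedged sketch of the shape of the change, with stand-in types and simplified signatures rather than the exact MariaDB source:

    struct THD {};                                // stand-in for the server THD
    static bool wsrep_on(THD *) { return true; }  // stand-in
    static void wsrep_append_keys(THD *) {}       // simplified stand-in

    void load_data_append_keys(THD *thd)
    {
        // Before the fix the call was also guarded by the binlog state
        // (binlog open and ROW format), which skipped partitioned tables
        // because ha_partition::write_row() disables binlogging temporarily.
        if (wsrep_on(thd))                        // binlog-state checks removed
            wsrep_append_keys(thd);
    }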
In a Galera cluster, when MyISAM replication is enabled
(wsrep_replicate_myisam=ON), DML statements are replicated
in open_tables(). However, in the case of prepared statements,
for an INSERT, open_tables() gets invoked twice: once for
COM_STMT_PREPARE (to validate and prepare the INSERT) and later
for COM_STMT_EXECUTE. As a result, the command got replicated
twice. The same happened for REPLACE, UPDATE and DELETE commands.
Fixed by adding a check to not replicate during the 'prepare'
phase. Also changed the order of the conditions to make the
check more efficient. Lastly, in order to support
wsrep_dirty_reads, made changes to allow COM_STMT_XXX commands
to continue past the initial check even when wsrep is not ready.
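A minimal sketch of the added guard, using a simplified THD; in the server the prepare-phase test is roughly thd->stmt_arena->is_stmt_prepare():

    struct THD { bool stmt_prepare_phase; };                 // stand-in
    static bool wsrep_replicate_dml(THD *) { return true; }  // stand-in

    bool maybe_replicate_myisam(THD *thd)
    {
        // open_tables() also runs for COM_STMT_PREPARE; replicating
        // there would duplicate the write once COM_STMT_EXECUTE runs.
        if (thd->stmt_prepare_phase)
            return false;               // cheap check first, then replicate
        return wsrep_replicate_dml(thd);
    }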
RENAME TABLE, unlike other DDLs, was getting replicated before
the access check was performed. As a result, the command could
get replicated, and thus executed on other nodes, even if it
failed on the originating node due to permission issues. Fixed by
moving the logic so that user privileges are checked before the
command is replicated.
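A hedged sketch of the corrected ordering; check_rename_privileges and replicate_ddl are illustrative stand-ins, not the real function names:

    struct THD {};                                                // stand-in
    static bool check_rename_privileges(THD *) { return false; } // hypothetical
    static bool replicate_ddl(THD *) { return false; }           // stands in for
                                                                  // wsrep TO isolation
    bool rename_tables_sketch(THD *thd)
    {
        if (check_rename_privileges(thd))  // fail locally first ...
            return true;                   // ... nothing reaches other nodes
        if (replicate_ddl(thd))            // replicate only after the ACL check
            return true;
        // ... perform the actual rename ...
        return false;
    }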
THD::save_prep_leaf_list was set to true by multi-table UPDATE
statements with mergeable selects and was never reset.
Make every statement reset it at start.
Correct fix for this bug.
The problem was that Item_func_group_concat() was calling
setup_order(), passing args as the second argument,
ref_pointer_array. ref_pointer_array must have free space at
the end, because setup_order() can append elements to it.
In this particular case args[] elements were overwritten when
setup_order() pushed new elements into ref_pointer_array.
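A minimal sketch of the invariant, with a hypothetical allocator name: the array handed to setup_order() must be sized with slack, unlike args[]:

    #include <cstddef>
    struct Item;

    // Reserve room for the elements setup_order() may push, instead of
    // reusing args[] (which has no free space at the end and gets its
    // tail overwritten).
    Item **alloc_ref_pointer_array(std::size_t select_items,
                                   std::size_t order_items)
    {
        return new Item *[select_items + order_items]();
    }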
Alternative fix that doesn't cause a view.test crash in --ps:
Remember when Item_ref was fixed right in the constructor
and did not have a full Item_ref::fix_fields() call. Later,
in PS/SP, after Item_ref::cleanup, we use this knowledge
to avoid doing a full fix_fields() for items that were never
supposed to be fix_fields()'ed.
Simplify the test case.
GROUP_CONCAT() with an ORDER BY column position may crash the server on PS re-execution.
The problem was that the arguments array of GROUP_CONCAT() was adjusted to point to
temporary elements (resolved ORDER BY fields) during the first execution.
This patch expands rev. 08763096cb to restore the original arguments array as well.
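A hedged sketch of the save/restore pattern; the struct and member names are illustrative, not the real Item_func_group_concat layout:

    #include <cstring>
    struct Item;

    struct group_concat_sketch          // illustrative, not the real class
    {
        Item **args;                    // redirected to resolved ORDER BY
                                        // elements during execution
        Item **orig_args;               // snapshot taken at fix time
        unsigned arg_count;

        void save_original_args()       // before the redirection happens
        { std::memcpy(orig_args, args, arg_count * sizeof(Item *)); }

        void cleanup()                  // between PS executions: restore,
        { std::memcpy(args, orig_args,  // so re-execution resolves afresh
                      arg_count * sizeof(Item *)); }
    };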
There are several different ways to incorrectly define a
foreign key constraint. In many cases, the error messages
produced by earlier MariaDB versions were not very clear
or helpful. This patch improves the warning messages
produced during foreign key parsing.
The old code used pthread_setspecific() to store temporary data used by the thread.
This is not safe when used with the thread pool, as the OS thread may change during a transaction.
The fix is to save the data in the THD, which is guaranteed to be properly freed.
I also fixed the code so that we don't do a malloc() for every transaction.
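A minimal sketch of the before/after storage choice, using a stand-in THD type:

    struct THD_sketch                   // stand-in for the server THD
    {
        void *engine_data = nullptr;    // lives and dies with the connection
    };

    // Old: pthread_getspecific(key) keyed the data by OS thread; with a
    // thread pool the same transaction can resume on a different thread,
    // so the lookup returns stale or foreign data.
    // New: read it from the THD that owns the statement, independent of
    // which pool thread happens to execute it.
    void *get_engine_data(THD_sketch *thd)
    {
        return thd->engine_data;
    }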
field.cc
- Fixed warning about overlapping memory copy (backport from 10.0)
Item_subselect.cc
- Fixed core dump in main.view
- Problem was that thd->lex->current_select->master_unit()->item was not set, which caused a crash in mark_as_dependent
sql/mysqld.cc
- Got an error on shutdown as we were freeing a mutex before all THD objects were freed (~THD uses some mutexes). Fixed by freeing THD objects inside the mutex during shutdown.
sql/log.cc
- log_space_lock and LOCK_log were locked in an inconsistent order. Fixed by not holding log_space_lock around purge_logs.
sql/slave.cc
- Removed an unnecessary log_space_lock
- Moved cond_broadcast inside the lock to ensure we don't miss the signal
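A self-contained C++ illustration of the signalling rule behind the slave.cc change (standard-library primitives here, not the server's mysql_mutex_t/mysql_cond_t):

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cond;
    bool work_ready = false;

    void producer()
    {
        std::lock_guard<std::mutex> guard(m);
        work_ready = true;
        cond.notify_all();   // flag update and broadcast both under the
    }                        // mutex: a waiter checking its predicate
                             // under the same mutex cannot miss the wake-up
    void consumer()
    {
        std::unique_lock<std::mutex> guard(m);
        cond.wait(guard, [] { return work_ready; });
    }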
Analysis: In the check_trx_exists function InnoDB allocates
a new trx if no trx is found in the thd, but this newly
allocated trx is not registered with the thd. This is unsafe,
because nothing prevents the InnoDB plugin from being uninstalled
while there's an active transaction. This can cause crashes, hangs
and other odd behavior. It may also corrupt the stack, as
function pointers are not available after dlclose.
Fix: Use thd_set_ha_data() when manipulating per-connection
handler data. It does the appropriate plugin locking.
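A hedged sketch of the fix's shape; the stubs below stand in for the real plugin-service functions, whose actual versions also take a plugin reference:

    struct THD {}; struct handlerton {}; struct trx_t {};   // stand-ins

    static void *ha_data_slot;                               // stub storage
    static void *thd_get_ha_data(const THD *, const handlerton *)
    { return ha_data_slot; }
    static void thd_set_ha_data(THD *, const handlerton *, void *p)
    { ha_data_slot = p; }     // the real one also locks the plugin

    trx_t *check_trx_exists_sketch(THD *thd, handlerton *hton)
    {
        trx_t *trx = static_cast<trx_t *>(thd_get_ha_data(thd, hton));
        if (!trx)
        {
            trx = new trx_t();
            // Registers the trx with the thd AND pins the plugin, so
            // InnoDB can't be dlclose()d while the transaction is active.
            thd_set_ha_data(thd, hton, trx);
        }
        return trx;
    }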
In Galera, like other DDLs, CREATE/ALTER VIEW commands are recreated
and replicated during parsing. The ALGORITHM clause is internally set
to VIEW_ALGORITHM_INHERIT if it is not explicitly specified by the user.
But since it is not a valid type to be used in a command, it leads to an
assertion failure. The solution is to not include the ALGORITHM clause
in the command if it is not explicitly specified (or is INHERIT).
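A minimal sketch of the idea, with an illustrative function name: emit the clause only for the user-visible algorithms:

    #include <string>

    enum view_algorithm { VIEW_ALGORITHM_INHERIT, VIEW_ALGORITHM_MERGE,
                          VIEW_ALGORITHM_TMPTABLE };

    void append_algorithm(view_algorithm alg, std::string *buf)
    {
        switch (alg)
        {
        case VIEW_ALGORITHM_INHERIT:
            break;                            // internal default: omit clause
        case VIEW_ALGORITHM_MERGE:
            buf->append("ALGORITHM=MERGE ");
            break;
        case VIEW_ALGORITHM_TMPTABLE:
            buf->append("ALGORITHM=TEMPTABLE ");
            break;
        }
    }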
The fix is that if the slave has a different integer size than
the master, then the slave will assume the master has the same
signed/unsigned modifier as the slave.
This means that one can safely change, on the slave, an int to a bigint
or an unsigned int to an unsigned bigint. Changing an unsigned int to a
signed bigint will cause replication failures when the high bit of the
unsigned int is set.
We can't give an error if the signedness is different on the master and slave,
as the binary log doesn't contain the signedness of the column on the master.
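A runnable illustration of why the high bit matters: the binlog carries only the raw 4 bytes, and the slave's declared signedness decides how they widen to 64 bits:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        uint32_t stored = 0x80000000u;           // 2147483648 on the master

        int64_t as_signed   = (int32_t) stored;  // slave treats it as signed
        int64_t as_unsigned = stored;            // slave treats it as unsigned

        // Prints -2147483648 vs 2147483648: same bytes, different widening.
        printf("%lld vs %lld\n", (long long) as_signed,
               (long long) as_unsigned);
        return 0;
    }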
- Removed calls to current_thd
- More DBUG_PRINT
- Code style changes
- Made some local functions static
Ensured that calls to print_keyuse are protected by a mutex so that all lines end up in the same debug packet.
SELECT ... WHERE XX IN (SELECT YY)
was transformed to something like:
SELECT ... WHERE EXISTS (SELECT ... HAVING XX=YY)
The bug was that for normal execution XX was fixed in the original outer SELECT context, while in PS execution it was fixed in the subquery context, and this confused the optimizer.
Fixed by ensuring that XX is always fixed in the outer context.
This is MDEV-7601, including its sub-tasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581 and MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for re-execution, and this caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
Post-push fix. The function cmp_dtuple_rec() was used without a prototype
in the file row0purge.c. Added the include file rem0cmp.h to row0purge.c
to resolve this issue.
approved by Krunal over IM.
* Wait for aborted thd (victim) to release MDL locks
* Skip aborting an already aborted thd
* Defer setting OK status in case of CTAS
* Minor cosmetic changes
* Added a test case
Problem:
If we add a referential integrity constraint with a duplicate
name, an error occurs. The foreign key object will not have
been added to the dictionary cache. In the error path, there
is an attempt to remove this foreign key object. Since the
object is not there, the search returns a NULL result.
Dereferencing this null pointer results in the crash.
Solution:
If the search for the foreign key object fails, then don't
attempt to access it.
rb#9309 approved by Marko.
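A minimal sketch of the error-path guard, with illustrative names rather than the real InnoDB dictionary functions:

    struct dict_foreign_t {};                                   // stand-in
    static dict_foreign_t *cache_lookup(const char *)           // may fail
    { return nullptr; }
    static void cache_remove(dict_foreign_t *) {}

    void error_path_cleanup(const char *name)
    {
        dict_foreign_t *foreign = cache_lookup(name);
        if (foreign == nullptr)     // duplicate-name error: the object was
            return;                 // never added, so there is nothing to remove
        cache_remove(foreign);
    }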
Description: The newest RHEL/CentOS/SL 6.6 openssl package
(1.0.1e-30.el6_6.9; published around 6/4/2015) contains a fix for
LogJam. RedHat's fix for this was to limit the use
of any SSL DH key sizes to a minimum of 768 bits. This breaks any
DHE SSL ciphers for MySQL clients as soon as you install the
openssl update, since in vio/viosslfactories.c the default
DHPARAM is a 512-bit one. This cannot be changed in
configuration or at runtime; it needs a recompile. Because of this,
a client connecting with --ssl-cipher=DHE-RSA-AES256-SHA is not
able to connect to the server.
Analysis: OpenSSL has changed the Diffie-Hellman key size from 512 to
1024 bits for security reasons (please see the details at
http://openssl.org/news/secadv_20150611.txt). Because of this, clients
using a DHE cipher fail to connect to the server. This change took
place from openssl-1.0.1n onwards.
Fix: A similar bug fix has already been pushed to mysql-5.7 under bug#18367167.
Hence we backported the same fix to mysql-5.5 and mysql-5.6.
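A hedged sketch of the concept using OpenSSL's pre-3.0 DH API (the actual fix embeds fixed 2048-bit parameters in vio/viosslfactories.c rather than generating them at runtime, which is slow):

    #include <openssl/dh.h>
    #include <openssl/ssl.h>

    bool set_strong_dh(SSL_CTX *ctx)
    {
        DH *dh = DH_new();
        if (!dh)
            return false;
        // 512-bit parameters are rejected by LogJam-hardened OpenSSL
        // builds; 2048 bits satisfies the new minimum with margin.
        if (!DH_generate_parameters_ex(dh, 2048, DH_GENERATOR_2, nullptr) ||
            SSL_CTX_set_tmp_dh(ctx, dh) != 1)   // ctx copies the parameters
        {
            DH_free(dh);
            return false;
        }
        DH_free(dh);
        return true;
    }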