1. The cleartext password client plugin is disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN that,
when set to a value starting with '1', 'Y' or 'y', enables the cleartext
plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is non-zero
the cleartext plugin is enabled for this connection only (see the sketch
after this list).
4. Added an enable-cleartext-plugin config file option that takes a numeric
argument. If the argument's numeric value is non-zero the cleartext plugin
is enabled for the connection.
5. Added a boolean command line option "--enable_cleartext_plugin" to
mysql, mysqlslap and mysqladmin. When specified it calls mysql_options()
with the effect of #3.
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified it enables the cleartext plugin for that connection.
7. Added test cases and updated existing ones that need the cleartext
plugin.
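A minimal sketch of item 3 above, assuming the MySQL C API headers of this release; host, user and password values are placeholders.

```c
#include <mysql.h>
#include <stdio.h>

int main(void)
{
  MYSQL *conn = mysql_init(NULL);
  my_bool enable_cleartext = 1;   /* non-zero enables the plugin */

  /* Enable the cleartext plugin for this connection only (item 3). */
  mysql_options(conn, MYSQL_ENABLE_CLEARTEXT_PLUGIN, &enable_cleartext);

  if (!mysql_real_connect(conn, "localhost", "user", "password",
                          NULL, 0, NULL, 0))
    fprintf(stderr, "connect failed: %s\n", mysql_error(conn));

  mysql_close(conn);
  return 0;
}
```

Setting LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN=1 in the environment (item 2) has the same effect for all connections of the process.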
The problem is that mysqldump lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus it needs to create "stand-in" MyISAM tables for each view while
dumping the tables; it later drops them and replaces them with the actual
view definitions.
But since views can have many more columns than an actual table, creating
these stand-in tables may be problematic.
There's no way to portably find out how many columns a MyISAM table
can have: it's a complicated formula depending on internal server constants.
Thus we can't have a reliable error check without repeating the logic and
the formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables mysqldump
makes to satisfy view dependencies from the original type to smallint
to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if a view has more than 1000 columns
(see the sketch after this list).
3. Added a test case.
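A minimal illustration of item 2 above; the 1000-column threshold comes from the text, but the function name and message wording are hypothetical, not the actual mysqldump code.

```c
#include <stdio.h>

/* Sketch: warn on stderr when a view has more columns than a MyISAM
   stand-in table can reliably hold (threshold taken from the text above). */
static void warn_if_too_many_columns(const char *view_name,
                                     unsigned long columns)
{
  if (columns > 1000)
    fprintf(stderr,
            "-- Warning: view %s has %lu columns. Replaying the dump may "
            "fail if the stand-in table exceeds the server's row-size "
            "limits.\n",
            view_name, columns);
}

int main(void)
{
  warn_if_too_many_columns("v_wide", 1200);   /* example call */
  return 0;
}
```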
This is a follow-up patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in MySQL trunk and MySQL 5.5 because in the release
build mysqlbinlog was not debug compiled whereas mysqld was.
Since the have_debug.inc script checks only whether mysqld is debug
compiled, the test was not being skipped on release builds.
We resolve this problem by creating a new inc file,
mysqlbinlog_have_debug.inc, which checks exclusively whether mysqlbinlog
is debug compiled. If it is not, the test is skipped.
mysql-test/include/mysqlbinlog_have_debug.inc:
New inc file to check whether mysqlbinlog is debug compiled.
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
The introduction of a cost-based decision between filesort and index scan
for UPDATE statements changed the detection of whether the index used to
scan the table is itself being updated. The new design missed the case of
index merge, where there is no single index to check. That worked until a
recent change in InnoDB, after which the server went into infinite
recursion if an update of the used index wasn't properly detected.
The fix ports the 'used key being updated' detection code from 5.1
(a conceptual sketch follows this entry).
Patch done by Evgeny Potemkin <evgeny.potemkin@oracle.com>
and transferred into the 5.5.25a release build by Joerg Bruehe.
This changeset is the difference between MySQL 5.5.25 and 5.5.25a.
VERSION:
Version number change.
sql/sql_update.cc:
Bug#65745: UPDATE ON INNODB TABLE ENTERS RECURSION
The check for used key being updated is extended to cover the case when
index merge is used.
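A conceptual sketch of the extended check, under assumed types and names; this is not the actual sql_update.cc code, only the shape of the idea: with index merge, every participating index must be checked against the columns the UPDATE writes.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical types: an index is a list of column numbers, and
   updated_columns[i] is true when column i is written by the UPDATE. */
struct key_def
{
  const unsigned *columns;
  size_t n_columns;
};

static bool key_uses_updated_column(const struct key_def *key,
                                    const bool *updated_columns)
{
  for (size_t i = 0; i < key->n_columns; i++)
    if (updated_columns[key->columns[i]])
      return true;
  return false;
}

/* With index merge there is no single "used index": if any index taking
   part in the merge is being updated, a safe update strategy must be
   chosen instead of updating rows while scanning that index. */
bool merged_keys_are_updated(const struct key_def *merge_keys, size_t n_keys,
                             const bool *updated_columns)
{
  for (size_t i = 0; i < n_keys; i++)
    if (key_uses_updated_column(&merge_keys[i], updated_columns))
      return true;
  return false;
}
```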
Several fixes:
* sql-common/client.c
Added a validity check of the field metadata packet sent
by the server.
Now libmysql checks whether the length of the data sent by
the server matches what's expected by the protocol before
using the data (see the sketch after this list).
* client/mysqltest.cc
Fixed the error handling code in mysqltest to avoid sending
new commands when reading the result set failed (and
there is unread data in the pipe).
* sql_common.h + libmysql/libmysql.c + sql-common/client.c
unpack_fields() now generates a proper error when it fails.
Added a new argument to this function to support the error
generation.
* sql/protocol.cc
Added a debug trigger to cause the server to send a NULL
instead of the packet expected by the client, for testing
purposes.
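A minimal sketch of the kind of bounds check described for sql-common/client.c, under assumed names; it is not the libmysql code, and it reads a 1-byte length prefix only for simplicity.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Before consuming a length-prefixed string from a metadata packet, verify
   that the advertised length actually fits in what the server sent. */
bool read_prefixed_string(const uint8_t **pos, const uint8_t *end,
                          const uint8_t **str, size_t *len)
{
  if (*pos >= end)
    return false;                    /* nothing left to read */

  size_t length = **pos;             /* 1-byte length prefix (simplified) */
  (*pos)++;

  if ((size_t)(end - *pos) < length)
    return false;                    /* length exceeds the packet: reject */

  *str = *pos;
  *len = length;
  *pos += length;
  return true;
}
```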
The introduction of a cost-based decision between filesort and index scan
for UPDATE statements changed the detection of whether the index used to
scan the table is itself being updated. The new design missed the case of
index merge, where there is no single index to check. That worked until a
recent change in InnoDB, after which the server went into infinite
recursion if an update of the used index wasn't properly detected.
The fix ports the 'used key being updated' detection code from 5.1.
sql/sql_update.cc:
Bug#14248833: UPDATE ON INNODB TABLE ENTERS RECURSION
The check for used key being updated is extended to cover the case when
index merge is used.
- Better error messages
This fixes things so that one can again run the test system with many threads without having to increase fs.aio-max-nr (a conceptual sketch follows the file notes below).
mysql-test/include/mtr_check.sql:
Ignore the INNODB_USE_NATIVE_AIO variable (may change during execution)
mysql-test/mysql-test-run.pl:
Ignore warnings for failure to setup AIO
storage/innobase/os/os0file.c:
Continue without AIO even if we can't allocate resources for AIO
storage/xtradb/os/os0file.c:
Continue without AIO even if we can't allocate resources for AIO
storage/xtradb/srv/srv0start.c:
Give an error message (instead of a core dump) if AIO can't be initialized
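A conceptual sketch of the fallback described above, with hypothetical function names; the real change lives in the os0file.c and srv0start.c files listed.

```c
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for native AIO setup (e.g. io_setup()) failing because
   fs.aio-max-nr is exhausted. */
static bool try_init_native_aio(void)
{
  return false;
}

static bool srv_use_native_aio = true;

static void init_io_subsystem(void)
{
  if (srv_use_native_aio && !try_init_native_aio())
  {
    fprintf(stderr,
            "InnoDB: Warning: could not allocate native AIO resources; "
            "falling back to simulated AIO. Consider raising "
            "fs.aio-max-nr.\n");
    srv_use_native_aio = false;   /* continue without native AIO */
  }
}

int main(void)
{
  init_io_subsystem();
  return 0;
}
```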
TABLE_LIST::check_single_table is made aware of the fact that if a table is
attached to a merged view it can now be an (unopened) temporary table
(in 5.2 it was always a leaf table, or none in the case of several tables).
Fix for a multiple definition of 'THD::clear_error()' in (at least)
libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o).
Patch provided by Ramil Kalimullin.
Keep track of how many pending XIDs (transactions that are prepared in
storage engine and written into binlog, but not yet durably committed
on disk in the engine) there are in each binlog.
When the count of one binlog drops to zero, write a new binlog checkpoint
event, telling which is the oldest binlog with pending XIDs.
When doing XA recovery after a crash, check the last binlog checkpoint
event, and scan all binlog files from that point onwards for XIDs that
must be committed if found in prepared state inside engine.
Remove the code in binlog rotation that waits for all prepared XIDs to
be committed before writing a new binlog file (this is no longer necessary
when recovery can scan multiple binlog files).
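A conceptual sketch of the bookkeeping described above, assuming simplified types and names; it is not the MariaDB implementation.

```c
#include <stdio.h>

/* Each binlog file tracks how many XIDs are prepared in the engine and
   written to that binlog but not yet durably committed.  When the count
   drops to zero, a binlog checkpoint event can be written naming the
   oldest binlog that still has pending XIDs. */
struct binlog_file
{
  const char *name;
  unsigned long pending_xids;
};

static void note_xid_written(struct binlog_file *binlog)
{
  binlog->pending_xids++;
}

static void note_xid_committed(struct binlog_file *binlog)
{
  if (--binlog->pending_xids == 0)
    printf("would write binlog checkpoint event: %s fully committed\n",
           binlog->name);
}

int main(void)
{
  struct binlog_file b = { "master-bin.000042", 0 };
  note_xid_written(&b);
  note_xid_committed(&b);   /* count reaches zero -> checkpoint */
  return 0;
}
```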
Add a function to replace an arbitrary event with a dummy event.
Add code which uses this to fix the bug that enabling row_annotate events
on the master breaks slaves which do not request such events.
Add code so that slaves set a variable @mariadb_slave_capability to inform
the master in a robust way about which events they can, and cannot, handle.
Add tests.
the new file is fully synced to disk and binlog index. This fixes a window
where a crash would leave the next server restart unable to detect that a
crash occurred, causing recovery to fail.
We set the correct cmp_context during preparation to avoid it being changed
later by Item_field::equal_fields_propagator.
(See MySQL bugs #57135 and #57692, noted during merging.)
The semisync code does a fast-but-unsafe check for enabled or not without a
lock, followed by a slow-but-safe check under the lock. However, if the slow
check failed, the code still referenced invalid data (in an assert()
expression), causing a crash.
Fixed by not running the incorrect assert when semisync is disabled.
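A minimal sketch of the check-then-lock pattern and the fix described above, with hypothetical names; it is not the actual semisync plugin code.

```c
#include <pthread.h>
#include <assert.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool semisync_enabled = false;
static int active_transactions = 0;   /* only meaningful when enabled */

void report_commit(void)
{
  if (!semisync_enabled)          /* fast, unlocked check: may be stale */
    return;

  pthread_mutex_lock(&lock);
  if (semisync_enabled)           /* slow, safe check under the lock */
  {
    /* Only assert on state that is valid while semisync is enabled;
       asserting here unconditionally was the source of the crash. */
    assert(active_transactions >= 0);
    active_transactions++;
    /* ... do the real semisync work ... */
  }
  pthread_mutex_unlock(&lock);
}
```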