(make relies on GNU extensions). The patch was partially
backported from 6.0.
Original comment:
bug#30708: make relies on GNU extensions. Now that we no longer use
BitKeeper we can safely remove the SCCS handling with no loss of
functionality.
Bug #50087 Interval arithmetic for Event_queue_element is not portable.
Subtraction of two unsigned months yielded a (very large) positive value.
Conversion of this to a signed value was not necessarily well defined.
Solution: do the subtraction on signed values.
Analysis showed that when accessing the I_S table
ROUTINES we performed unnecessary allocations
with the get_field() function for every processed row, which
in turn caused significant memory growth.
The fix is to avoid the use of get_field().
mode
Post-push fix after backporting the patch to 5.1-bugteam:
1 - changed the names of some variables to be equivalent to pe.
2 - fixed the patch to mark a statement as unsafe when both a
self-logging engine and a regular engine are accessed and one of them
is updated.
multiquery packet).
Background:
- a query can contain multiple SQL statements;
- the server frees resources allocated to process a query when the
whole query is handled. In other words, resources allocated to process
one SQL statement from a multi-statement query are freed when all SQL
statements are handled.
The problem was that the parser allocated a buffer of the size of the whole
query for each SQL statement in a multi-statement query. Thus, if a query
had many SQL statements (so, the query was long), but each SQL statement
was short, the parser tried to allocate a huge amount of memory (number of
small SQL statements * length of the whole query).
The memory was allocated for a so-called "cpp buffer", which is intended to
store a pre-processed SQL statement -- SQL text without version-specific
comments.
The fix is to allocate memory for the "cpp buffer" once for all SQL
statements (once for a query).
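As an illustration (a hypothetical query, not taken from the original report), a packet of this shape would trigger the quadratic allocation:
  -- One long multi-statement packet of many short statements
  -- (sent with CLIENT_MULTI_STATEMENTS enabled); before the fix,
  -- each of the N statements allocated a cpp buffer sized for the
  -- whole packet, i.e. roughly N * total_query_length in total.
  SELECT 1; SELECT 2; SELECT 3; /* ... thousands more ... */ SELECT 9999;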
ha_myisam::index_first(uchar*)") at assert.c:81
Single-table DELETE crash/assertion similar to single-table
UPDATE bug 14272.
Same resolution as for bug 14272:
don't run an index scan when we should use quick select.
Running an index scan instead could cause failures because there are table
handlers (like federated) that support quick select scanning but do not
support index scanning.
for ALTER TABLE, LOAD DATA).
ROW_COUNT is now assigned according to the following rules:
- In my_ok():
- for DML statements: to the number of affected rows;
- for DDL statements: to 0.
- In my_eof(): to -1 to indicate that there was a result set.
We derive these semantics from the JDBC specification, where int
java.sql.Statement.getUpdateCount() is defined to (sic) "return the
current result as an update count; if the result is a ResultSet
object or there are no more results, -1 is returned".
- In my_error(): to -1 to be compatible with the MySQL C API and
MySQL ODBC driver.
- For SIGNAL statements: to 0 per WL#2110 specification. Zero is used
since that's the "default" value of ROW_COUNT in the diagnostics area.
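A sketch of the resulting behaviour (illustrative statements, assuming a populated table t1):
  UPDATE t1 SET a = a + 1 WHERE a > 0;
  SELECT ROW_COUNT();  -- number of affected rows (my_ok, DML)
  CREATE TABLE t2 (b INT);
  SELECT ROW_COUNT();  -- 0 (my_ok, DDL)
  SELECT * FROM t1;
  SELECT ROW_COUNT();  -- -1 (my_eof, a result set was returned)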
NULL from outer join query
Problem: when optimising MIN/MAX() queries without a GROUP BY clause
by replacing the aggregate expression with a constant, we may set it
to NULL, disregarding the fact that there may be outer joins involved.
Fix: don't replace MIN/MAX() with NULL if there are outer joins.
Note: the fix itself is just
- if (!count)
+ if (!count && !outer_tables)
set to NULL
The rest of the patch eliminates repeated code to improve speed
and for easy maintenance of the code.
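A minimal sketch of the affected query shape (hypothetical tables):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  -- t2 is empty, but the LEFT JOIN still preserves every row of t1,
  -- so MIN(t1.a) must be 1; before the fix the optimizer could
  -- replace it with NULL because one of the tables had no rows.
  SELECT MIN(t1.a) FROM t1 LEFT JOIN t2 ON t1.a = t2.b;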
- Update/fix file layouts for each package type, add new types for
native package formats including deb, rpm and svr4.
- Build all plugins, including debug versions
- Update compiler flags to match current release
- Add missing @VAR@ expansions
- Install correct mysqlclient library symlinks
- Fix icc/ia64 builds
- Fix install of libmysqld-debug
- Don't include mysql_embedded
- Remove unpackaged manual pages to avoid missing files warnings
- Don't install mtr's test suite
update statements
Only SELECT statements report any examined rows in the slow
log. Slow UPDATE, DELETE and INSERT statements report 0 rows
examined, unless the statement has a condition including a
SELECT substatement.
This patch adds counting of examined rows for the UPDATE and
DELETE statements. An INSERT ... VALUES statement will still
not report any rows as examined.
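For example (illustrative statements), after this patch slow instances of the first two statements report their examined rows, while the third still reports none:
  UPDATE t1 SET b = b + 1 WHERE a < 100;   -- now counts examined rows
  DELETE FROM t1 WHERE a > 1000;           -- now counts examined rows
  INSERT INTO t1 VALUES (1, 2);            -- still reports 0 rows examined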
MySQL handles the join syntax "JOIN ... USING( field1,
... )" and natural joins by building the same parse tree as
a corresponding join with an "ON t1.field1 = t2.field1 ..."
expression would produce. This parse tree was not cleaned up
properly in the following scenario. If a thread tries to
lock some tables and finds that the tables were dropped and
re-created while waiting for the lock, it cleans up column
references in the statement by means of a per-statement free
list. But if the statement was part of a stored procedure,
column references on the stored procedure's free list
weren't cleaned up and thus contained pointers to freed
objects.
Fixed by adding a call to clean up the current prepared
statement's free list.
This is a backport from MySQL 5.1
remember range endpoints
The Loose Index Scan optimization keeps track of a sequence
of intervals. For the current interval it maintains the
current interval's endpoints. But the maximum endpoint was
not stored in the SQL layer; rather, it relied on the
storage engine to retain this value in-between reads. By
coincidence this holds for MyISAM and InnoDB. Not for the
partitioning engine, however.
Fixed by making the key values iterator
(QUICK_RANGE_SELECT) keep track of the current maximum endpoint.
This is also more efficient as we save a call through the
handler API in case of open-ended intervals.
The code to calculate endpoints was extracted into
separate methods in QUICK_RANGE_SELECT, and it was possible to
get rid of some code duplication as part of the fix.
Conflicts:
Text conflict in mysql-test/r/grant.result
Text conflict in mysql-test/t/grant.test
Text conflict in mysys/mf_loadpath.c
Text conflict in sql/slave.cc
Text conflict in sql/sql_priv.h
The MYSQL_BIN_LOG m_table_map_version member and its associated
functions were not used in the logic of binlogging and replication;
this patch removes all related code.
When using a non-transactional table (t1) on the master
and with autocommit disabled, no COMMIT is recorded
in the binary log ending the statement. Therefore, if
the slave has t1 in a transactional engine, then it will
be as if a transaction is started but never ends. This is
actually BUG#29288 all over again.
We fix this by cherry-picking the cset for BUG#29288 which
was pushed to a later MySQL version. The revision picked
was: mats@sun.com-20090923094343-bnheplq8n95opjay .
Additionally, a test case for covering the scenario depicted
in the bug report is included in this cset.
Conflicts:
Text conflict in mysql-test/r/explain.result
Text conflict in mysql-test/t/explain.test
Text conflict in sql/net_serv.cc
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_priv.h
truncates text/blob to 766 chars
mysqldump and SELECT ... INTO OUTFILE truncated long BLOB/TEXT
values to a size of 766 bytes (MAX_FIELD_WIDTH or 255 * 3 + 1).
The select_export::send_data method has been modified to
reallocate a conversion buffer for long field data.
greedy_search optimizer_search_depth=0
The algorithm inside restore_prev_nj_state failed to
properly update the counters within the NESTED_JOIN
tree. The counter was decremented each time a table in the
node was removed from the QEP; the correct thing to do is
to decrement it only when the last table in the child node
is removed from the plan. This led to node counters
getting negative values, so the plan appeared
impossible. An assertion caught this.
Fixed by not recursing up the tree unless the last table in
the join nest node is removed from the plan.
The bug happened under the following conditions:
- there was a user variable of type REAL, containing a NULL value;
- there was a table with a NOT NULL column of any type but REAL, having
a default value (or auto-increment);
- a row was inserted into the table with the user variable as value.
A warning was emitted here.
The problem was that handling of NULL values of REAL type was not properly
implemented: it didn't expect that a REAL NULL value could be assigned to
another data type.
Basically, the problem was that set_field_to_null() was used instead of
set_field_to_null_with_conversions().
The fix is to use the right function, or more generally, to allow conversion of
REAL NULL values to other data types.
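A minimal sketch of the scenario (hypothetical statements; the key point is a REAL-typed user variable holding NULL):
  CREATE TABLE t1 (a INT NOT NULL DEFAULT 1);
  SET @v = IF(0, 0.0e0, NULL);  -- @v has REAL type and holds NULL
  -- Inserting the REAL NULL into the non-REAL NOT NULL column must go
  -- through set_field_to_null_with_conversions(), i.e. the same
  -- default-value handling as any other NULL value.
  INSERT INTO t1 VALUES (@v);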
Problem:
item->name was NULL for Item_user_var_as_out_param
which made strcmp(something, item->name) crash in the LOAD XML code.
Fix:
- item_func.h: Adding set_name() in the constructor for Item_user_var_as_out_param
- sql_load.cc: Changing the condition in write_execute_load_query_log_event() which
distinguished between Item_user_var_as_out_param and Item_field
from
if (item->name == NULL)
to
if (item->type() == Item::FIELD_ITEM)
- loadxml.result, loadxml.test: adding tests
Problem: after the introduction of "WL#2649 Number-to-string conversions",
this query:
SET NAMES cp850; -- Or any other non-latin1 ASCII-based character set
SELECT * FROM t1
WHERE datetime_column='2010-01-01 00:00:00'
started to add extra character set conversion:
SELECT * FROM t1
WHERE CONVERT(datetime_column USING cp850)='2010-01-01 00:00:00';
so the index on the DATETIME column was not used anymore.
Fix:
avoid conversion of NUMERIC/DATETIME items
(i.e. those with derivation DERIVATION_NUMERIC).
Bug#53417 my_getwd() makes assumptions on the buffer sizes which do not always hold true
The mysys library contains many functions for rewriting file paths. Most of these
functions make implicit assumptions on the sizes of the buffers they write to. If a path
is put through my_realpath() it will propagate to my_getwd(), which assumes that the
buffer holding the path name is larger than 2. This is not true in all cases.
In the special case where a VARBIN_ITEM is passed as argument to the LOAD_FILE function
this can lead to a crash.
This patch fixes the issue by introducing more safeguards against buffer overruns.
This is the 5.1 merge and extension of the fix.
The server was happily accepting paths in table names in all places a table
name is accepted (e.g. in a SELECT). This allowed all users that have some
privilege over some database to read all tables in all databases in all
MySQL server instances that the server's file system has access to.
Fixed by:
1. making sure no path elements are allowed in a quoted table name when
constructing the path (note that path symbols are still valid in table names
when they're properly escaped by the server).
2. checking the #mysql50# prefixed names the same way they're checked for
path elements in mysql-5.0.
When issuing a 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER' statement, the previous
position along with the new position is dumped into the error log. Namely,
the following information is printed out: skip_counter, group_relay_log_name
and group_relay_log_pos.
When issuing a 'CHANGE MASTER TO' statement, key elements of the previous
state, namely the host, port, the master_log_file and the master_log_pos
are dumped into the error log.
Iterative patch improvement. The previously committed patch
caused wrong results on Windows. The previous patch also
broke secure_file_priv for symlinks since not all file
paths which must be compared against this variable are
normalized using the same norm.
The server variable opt_secure_file_priv wasn't
normalized properly and caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to do different interpretations of the
--secure-file-priv option.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
The server was not checking the table name supplied to COM_FIELD_LIST
for validity and compliance with acceptable table name standards.
Fixed by checking the table name for compliance similar to how it's
normally checked by the parser and returning an error message if
it's not compliant.
WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as top
level SELECTs.
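One shape of query in this family (hypothetical tables; t2 left empty on purpose):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- With t2 empty, the WHERE clause is impossible after reading const
  -- tables; EXPLAIN must stop there rather than try to evaluate the
  -- subquery predicate.
  EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT b FROM t2 WHERE b = 1);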
Conflicts:
Text conflict in configure.in
Text conflict in dbug/dbug.c
Text conflict in mysql-test/r/ps.result
Text conflict in mysql-test/t/ps.test
Text conflict in sql/CMakeLists.txt
Text conflict in sql/ha_ndbcluster.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_table.cc
The server could be tricked to read packets indefinitely if it
received a packet larger than the maximum size of one packet.
This problem is aggravated by the fact that it can be triggered
before authentication.
The solution is to not skip big packets for non-authenticated
sessions. If a big packet is sent before a session is
authenticated, an error is returned and the connection is closed.
Problem: "COM_FIELD_LIST is an old command of the MySQL server, before there was real move to only
SQL. Seems that the data sent to COM_FIELD_LIST( mysql_list_fields() function) is not
checked for sanity. By sending long data for the table a buffer is overflown, which can
be used deliberately to include code that harms".
Fix: check incoming data length.
during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated it has two sets of fields: fields required for
checks of conditions and fields to be updated. A storage engine is allowed
not to retrieve columns marked for update. Due to this fact records can't
be compared to see whether the data has been changed or not, which makes the
server always update records regardless of data change.
Now, when an auto-updatable timestamp field is present and the server sees that
a table handler isn't going to retrieve write-only fields, all such
fields are marked as to be read to force the handler to retrieve them.
Clarified error messages related to unsafe statements:
- avoid the internal technical term "row injection"
- use 'binary log' instead of 'binlog'
- avoid the word 'unsafeness'
Fix for bug #46947 "Embedded SELECT without FOR UPDATE is
causing a lock", with after-review fixes.
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced when fixing
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. With
statement/mixed binary logging, this type of lock was
converted to a TL_READ_NO_INSERT lock at
open_tables() time and caused the InnoDB engine to acquire
shared locks on reads from these tables. Although in some
cases such behavior was correct (e.g. for subqueries in
DELETE), in the case of SELECT it caused unnecessary locking.
This patch tries to solve this problem by rethinking our
approach to how we handle locking for SELECT and subqueries.
Now we always set the TL_READ_DEFAULT lock type for all cases
when we read data. At open_tables() time this lock
is interpreted as TL_READ_NO_INSERT or TL_READ depending
on whether this statement as a whole, or the call to the function
which uses the particular table, should be written to the
binary log or not (if yes, then the statement should be properly
serialized with concurrent statements and a stronger lock
should be acquired).
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT .. IN SHARE MODE.
In 4.1 the server would use a snapshot InnoDB read for
subqueries in SELECT FOR UPDATE and SELECT .. IN SHARE MODE
statements, regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. locking read),
he/she could request so explicitly by providing FOR UPDATE/IN SHARE MODE
clause for each individual subquery.
One of the patches for 5.0 broke this behaviour (which was not documented
or tested), and started to use locking reads for all subqueries in SELECT ...
FOR UPDATE/IN SHARE MODE. This patch restores the 4.1 behaviour.
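A sketch of the restored behaviour as described above (hypothetical tables):
  -- The subquery below uses a non-locking (snapshot) read; only rows
  -- of t1 are locked by the outer SELECT:
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2) FOR UPDATE;
  -- To lock rows of t2 as well, the clause must be requested
  -- explicitly inside the subquery:
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2 FOR UPDATE) FOR UPDATE;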
Stored routine DDL statements use statement-based replication
regardless of the current binlog format. The problem here was
that if a DDL statement failed during metadata lock acquisition
or opening of mysql.proc, the binlog format would not be reset
before returning. As a result, the following DDL or DML statements were
binlogged with the wrong binlog format, which caused the slave
to stop.
The problem is resolved by grabbing the exclusive MDL lock first,
instead of clearing the current binlog format afterwards, so that the
binlog format is not affected when the lock acquisition returns with
an error. The same approach is taken when opening the proc table for update.
The problem is that the message resource (message.rc) is compiled as part of the
static library sql.lib rather than with the executable mysqld.exe. Resource files
do not work in static libraries.
The fix is to add message.rc to the mysqld.exe source file list.
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
Statements with CONNECTION_ID() were forced to be kept in the transactional
cache, and as a consequence non-transactional changes that were supposed to
be flushed ahead of the transaction were kept in the transactional cache.
This happened because after BUG#51894 any statement whose thd's
thread_specific_used was set was kept in the transactional cache. The idea
was to keep changes on temporary tables in the transactional cache. However,
the thread_specific_used flag was set not only for statements that accessed
temporary tables but also when CONNECTION_ID() was used.
To fix the problem, we created a new variable to keep track of updates
to temporary tables.
Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric field(s) with a non-default display width modifier specified.
Fix: compare the numeric fields' storage lengths when we decide whether
they can be considered 'equal' for table alteration purposes.
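For instance (a sketch; the display width 5 is arbitrary):
  CREATE TABLE t1 (a INT(5));
  -- INT(5) and the default INT(11) have the same storage length, so
  -- after the fix adding the index no longer forces a table copy.
  ALTER TABLE t1 ADD INDEX (a);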
The bug was a side effect of WL#5030 (fix header files) and
WL#5161 (CMake).
The problem was that CMake-generated config.h (and my_config.h
as a copy of it) had a header guard. GNU autotools-generated
[my_]config.h did not. During WL#5030 the order of header files
was changed, so the following started to happen (using GNU autotools,
in embedded server):
- my_config.h included, defining HAVE_OPENSSL
- my_global.h included, un-defining HAVE_OPENSSL
- zlib.h included, including config.h,
defining HAVE_OPENSSL again.
The fix is to check HAVE_OPENSSL in conjunction with EMBEDDED_LIBRARY.
A more general fix would be to define a macro as HAVE_OPENSSL && !EMBEDDED_LIBRARY
and use it instead of HAVE_OPENSSL.
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed, and result in
inconsistent error messages (reporting a duplicate plugin or that the
plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
partition if multiple columns used
Problem was that range scanning through the sorted array of
the column list values did not use a correct index calculation.
Fixed by also taking the number of columns into account in the calculation.
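A sketch of the affected setup (hypothetical table; two partitioning columns, so the sorted value array must be stepped two entries at a time):
  CREATE TABLE t1 (a INT, b INT)
  PARTITION BY RANGE COLUMNS (a, b)
  (PARTITION p0 VALUES LESS THAN (10, 10),
   PARTITION p1 VALUES LESS THAN (20, 20),
   PARTITION p2 VALUES LESS THAN (MAXVALUE, MAXVALUE));
  -- Pruning for a range scan like this one previously indexed the
  -- sorted column-list values without accounting for the column count:
  SELECT * FROM t1 WHERE a = 10 AND b < 5;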
The bug was a side effect of WL#5030 (fix header files) and
WL#5161 (CMake).
The problem was that CMake-generated config.h (and my_config.h
as a copy of it) had a header guard. GNU autotools-generated
[my_]config.h did not. During WL#5030 the order of header files
was changed, so the following started to happen (using GNU autotools,
in embedded server):
- my_config.h included, defining HAVE_OPENSSL
- my_global.h included, un-defining HAVE_OPENSSL
- zlib.h included, including config.h,
defining HAVE_OPENSSL again.
The fix is to change the order of header files, moving zlib.h
to the top of the header list. A more proper fix would be to wrap
the unguarded auto-generated [my_]config.h in a guarded non-generated
header file.
of sync
In RBR, sometimes table->s->last_null_bit_pos can be zero. This
has an impact on the slave when it compares records fetched from the
storage engine against records in the binary log event. If
last_null_bit_pos is zero, the slave, while comparing in the
log_event.cc:record_compare function, would set all bits in the last
null byte to 1 (assuming all 8 were unused). Hence it would lose the
ability to distinguish records that were similar in contents except
for the fact that some field was null in one record, but not in the
other. Ultimately this would cause wrong matches, and in the specific
case depicted in the bug report the same record would be updated
twice, resulting in a lost update.
Additionally, in the record_compare function the slave was setting the
X bit unconditionally. There are cases in which the X bit does not exist
in the record header. This could also lead to wrong matches between
records.
We fix both by conditionally resetting the bits: (i) unused null_bits
are set if last_null_bit_pos > 0; (ii) X bit is set if
HA_OPTION_PACK_RECORD is in use.
transaction
BUG#52616 Temp table prevents switch binlog format from STATEMENT to ROW
Before WL#2687 and BUG#46364, every non-transactional change that happened
after a transactional change was written to the trx-cache and flushed upon
committing the transaction. WL#2687 and BUG#46364 changed this behavior and
non-transactional changes are now written to the binary log upon committing
the statement.
A binary log event is identified as transactional or non-transactional through
a flag in the Log_event which is set based on the underlying storage
engine the change stems from. In the current bug, this flag was not being
set properly when the DROP TEMPORARY TABLE was executed.
However, while fixing this bug we figured out that changes to temporary tables
should always be written to the trx-cache if there is an ongoing transaction.
Otherwise, binlog events in the reverse order would be produced.
Regarding concurrency, keeping changes to temporary tables in the trx-cache is
also safe as temporary tables are only visible to the owner connection.
In this patch, we classify the following statements as unsafe:
1 - INSERT INTO t_myisam SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_myisam
3 - CREATE TEMPORARY TABLE t_myisam_temp SELECT * FROM t_myisam
On the other hand, the following statements are classified as safe:
1 - INSERT INTO t_innodb SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_innodb
The patch also guarantees that transactions that have a DROP TEMPORARY are
always written to the binary log regardless of the mode and the outcome:
commit or rollback. In particular, the DROP TEMPORARY is extended with the
IF EXISTS clause when the current statement logging format is set to row.
Finally, the patch allows switching from STATEMENT to MIXED/ROW when there
are temporary tables, but the contrary is not possible.
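A sketch of the allowed and rejected switches (assuming the session starts in STATEMENT format):
  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=MyISAM;
  SET SESSION binlog_format = ROW;        -- allowed with open temp tables
  SET SESSION binlog_format = STATEMENT;  -- rejected while tmp1 exists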
The server variable opt_secure_file_priv wasn't
normalized properly and caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to do different interpretations of the
--secure-file-priv option.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
Potential deadlock situation involving LOCK_plugin,
LOCK_global_system_variables and LOCK_status.
This patch backports the fix from next-mr, unlocking
LOCK_plugin before calling plugin->init() and
add_status_vars().
Arg_comparator initializes the 'comparators' array in case of
ROW comparison and does not free this array on destruction.
This leads to memory leaks.
The fix:
- added the Arg_comparator::cleanup() method, which frees the
'comparators' array;
- added the Item_bool_func2::cleanup() method, which calls
Arg_comparator::cleanup().
This assertion could be triggered during execution of OPTIMIZE TABLE for
InnoDB tables. As part of optimize for InnoDB tables, the table is recreated
and then opened again. If the reopen failed for any reason, the assertion
would be triggered. This could for example be caused by a concurrent DROP
TABLE executed by a different connection. The reason for the assertion was
that any failures during reopening were ignored.
This patch fixes the problem by making sure that the result of reopening the
table is checked and that any error messages are sent to the client.
Test case added to innodb_mysql_sync.test.
e.g. MYSQL_AUDIT_GENERAL_ERROR
General audit API (MYSQL_AUDIT_GENERAL_CLASS) didn't expose event
subclass to plugins.
This patch exposes event subclass to plugins via
struct mysql_event_general::event_subclass.
This change is not compatible with existing general audit plugins.
Audit interface major version has been incremented.
to cleanup open connections
It was possible to UNINSTALL a storage engine plugin while a binding
between a THD object and the storage engine was still active (e.g. in
the middle of a transaction).
To avoid unclean deactivation (uninstall) of a storage engine plugin
in the middle of a transaction, an additional storage engine plugin
lock is acquired by thd_set_ha_data().
If ha_data is not null and the storage engine plugin was not locked
by thd_set_ha_data() in this connection before, the storage engine
plugin gets locked.
If ha_data is null and the storage engine plugin was locked by
thd_set_ha_data() in this connection before, the storage engine
plugin lock gets released.
If handlerton::close_connection() didn't reset ha_data, the server does
it immediately after calling handlerton::close_connection().
Note that this is just a framework fix; storage engines may switch
from thd_ha_data() to thd_set_ha_data() as they see fit.
for write by another connection
The problem was that if a table was locked in one connection by
LOCK TABLES ... WRITE, REPAIR TABLE or OPTIMIZE TABLE, SHOW CREATE
TABLE from another connection would be blocked. As SHOW CREATE TABLE
only reads metadata about the table, such blocking is not needed.
The problem was that when SHOW CREATE TABLE tried to get a metadata
lock on the table in order to open it, it used the wrong type of
metadata lock request. It used MDL_SHARED_READ which is used when
the intent is to read both table metadata and table data. Instead
it should have used MDL_SHARED_HIGH_PRIO which signifies an intent
to only read metadata.
This patch fixes the problem by making sure SHOW CREATE TABLE uses
the MDL_SHARED_HIGH_PRIO metadata lock request type when trying to
open the table. The patch also fixes a similar problem with the
mysql_list_fields API call.
Test case added to show_check.test.
during rqg_mdl_deadlock test
The problem was that if two connection threads simultaneously tried
to execute "SET GLOBAL EVENT_SCHEDULER = OFF", one of them could
hang waiting for the scheduler to stop.
The first connection thread would kill the event scheduler thread
and then start waiting for it to exit. The second connection thread
would then find the event scheduler thread in the process of exiting
and also wait for it to exit. However, since the event scheduler
thread used signal to wake only one waiting thread, the other connection
thread would be left waiting.
This bug was a regression introduced by the fix for Bug#51160.
Before #51160 it was not possible for two connection threads to
try to stop the event scheduler thread simultaneously.
This patch fixes the problem by making sure the event scheduler
thread uses broadcast to notify all waiters that it is exiting.
No test case added as this would require adding debug sync points
to parts of the code where sync points are currently not used.
The patch has been tested with the non-deterministic test case
from the bug description as well as using the RQG.
Allow stored procedure variables in LIMIT clause.
Only allow variables of INTEGER types.
Handle negative values by means of an implicit cast to UNSIGNED
(similarly to prepared statement placeholders).
Add tests.
Make sure replication works by not doing NAME_CONST substitution
for variables in LIMIT clause.
Add replication tests.
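A minimal sketch (hypothetical procedure and table; DELIMITER handling omitted):
  CREATE PROCEDURE p1(p_limit INT)
  BEGIN
    -- An INTEGER-typed parameter or local variable is now accepted in
    -- LIMIT; a negative value is implicitly cast to UNSIGNED, as with
    -- a prepared-statement placeholder.
    SELECT * FROM t1 LIMIT p_limit;
  END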
ChangeSet@1.2703, 2007-12-07 09:35:28-05:00, cmiller@zippy.cornsilk.net +40 -0
Bug#13174: SHA2 function
Patch contributed by Bill Karwin, paper unnumbered CLA in Seattle
Implement SHA2 functions.
Chad added code to make it work with YaSSL. Also, he removed the
(probable) bug of embedded server never using SSL-dependent
functions. (libmysqld/Makefile.am didn't read ANY autoconf defs.)
Function specification:
SHA2( string cleartext, integer hash_length )
-> string hash, or NULL
where hash_length is one of 224, 256, 384, or 512. If either is
NULL or a length is unsupported, then the result is NULL. The
resulting string is always the length of the hash_length parameter
or is NULL.
Include the canonical hash examples from the NIST in the test
results.
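Usage sketch (the first result is the NIST SHA-256 test vector for "abc"):
  SELECT SHA2('abc', 256);
  -- ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
  SELECT SHA2('abc', 300);   -- NULL: unsupported hash length
  SELECT SHA2(NULL, 256);    -- NULL input gives NULL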
---
Polish and address concerns of reviewers.
Problem: Segmentation fault in add_group_and_distinct_keys() when accessing
field of what is assumed to be an Item_field object.
Cause: In case of views, the item added to list by is_indexed_agg_distinct()
was not of type Item_field, but Item_ref.
Resolution: Add the real Item_field object, the one referred to by
Item_ref object, to the list, instead.
when trying to build innodb as plugin.
The reason for the error is a mismatch in the mysql_tmpdir_list
declaration between mysqld.h and its usage in ha_innodb.cc.
Add the missing MYSQL_PLUGIN_IMPORT to mysql_tmpdir_list
(variables exported by the server and used by a plugin need it).
Tree cleanup after the last major merges in mysql-trunk:
The files sql/lex_hash.h and sql/sql_yacc.h are automatically
generated, and should not be checked into the configuration management system.
These files are now removed.
No changes are required for .bzrignore, which already listed these files
(and similar files in libmysqld/).
The file storage/perfschema/unittest/pfs_timer-t.cc did not build
after the header file refactoring affecting mysql_priv.h.
The file now builds properly using sql_priv.h.
on LOAD DATA
Two problems:
1. LOAD DATA was not checking for SQL errors and was sending an OK
packet even when there were errors reported already. Fixed to check for
SQL errors in addition to the error conditions already detected.
2. There was an over-ambitious assert() on the server to check if the
protocol is always followed by the client. This could cause crashes on
debug servers when clients did not complete the protocol exchange for some
reason (e.g. the --send command in mysqltest). Fixed by keeping the assert
only on client side, since the server always completes the protocol
exchange.
Adding my_global.h first in all files using
NO_EMBEDDED_ACCESS_CHECKS.
Correcting a merge problem resulting from a changed definition
of check_some_access compared to the original patches.
We should disable const subselect item evaluation because
subselect transformation does not happen in view_prepare_mode
and thus val_...() methods can not be called.
The problem is that we cannot use make_cond_for_table().
This function relies on the used_tables() condition,
which is not set properly for subqueries.
As a result the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The remove_eq_conds()
algorithm relies on the const_item() value, which allows
it to handle subqueries in the right way.
Conflicts:
Text conflict in mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result
Text conflict in sql/log.cc
Text conflict in sql/set_var.cc
Text conflict in sql/sql_class.cc
Procedure, while DECIMAL works
Selecting the CONCAT(...<SP variable>...) result into
a user variable may return wrong data.
Item_func_concat::val_str contains a number of memory
allocation-saving tricks. One of them concatenates
strings in place, inserting the value of one string
at the beginning of the other string. However,
this trick didn't account for strings that point
to the same data buffer: this is possible when
a CONCAT() parameter is a stored procedure variable -
Item_sp_variable::val_str() uses the intermediate
Item_sp_variable::str_value field, where it may
store a reference to an external buffer.
The Item_func_concat::val_str function has been
modified to take into account val_str functions
(such as Item_sp_variable::val_str) that return
a pointer to an internal Item member variable
that may reference an externally provided buffer.
Conflicts:
Text conflict in mysql-test/r/func_str.result
Text conflict in mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_32.result
Text conflict in mysql-test/suite/sys_vars/r/myisam_sort_buffer_size_basic_64.result
Text conflict in mysql-test/t/func_str.test
Text conflict in sql/mysqld.cc
Text conflict in sql/protocol.cc
Text conflict in storage/myisam/mi_open.c
on index
The 'my_decimal' class has two members which can be used to access the
value. The member variable buf (inherited from the parent class decimal_t)
is set to the member variable buffer so that both point to the same value.
Item_copy_decimal::copy() uses memcpy to clone 'my_decimal'. The member
buffer is declared as an array and memcpy results in copying the values
of the array, but the inherited member buf, which should be pointing at
the beginning of the array 'buffer', keeps pointing at the beginning of
'buffer' in the original object (the one being cloned). Further updates of
'my_decimal' update only the inherited member 'buf' but leave
'buffer' unchanged.
Later, when the new object (which now holds an inconsistent value) is cloned
again using the proper cloning function 'my_decimal2decimal', the buf pointer
is fixed, resulting in loss of the current value.
Using my_decimal2decimal instead of memcpy in Item_copy_decimal::copy()
fixed this problem.
The problem was that a syntactically invalid trigger could cause
the server to crash when trying to list triggers. The crash would
happen due to a mishap in the backup/restore procedure that should
protect parser items which are not associated with the trigger. The
backup/restore is used to isolate the parse tree (and context) of
a statement from the load (and parsing) of a trigger. In this case,
an error during the parsing of a trigger could cause an improper
backup/restore sequence.
The solution is to properly restore the original statement context
before the parser is exited due to syntax errors in the trigger body.
This patch:
- Moves all definitions from the mysql_priv.h file into
header files for the component where the variable is
defined
- Creates header files if the component lacks one
- Eliminates all include directives from mysql_priv.h
- Eliminates all circular include cycles
- Rename time.cc to sql_time.cc
- Rename mysql_priv.h to sql_priv.h
BUG#46364 introduced the flag binlog_direct_non_transactional_updates which,
when "ON", makes N-changes be written to the binary log upon committing
the statement. On the other hand, when "OFF" the option was supposed
to mimic the behavior in 5.1. However, the implementation did not mimic
the behavior correctly, and the following bugs popped up:
Case #1: N-changes executed within a transaction would go into
the S-cache. When later in the same transaction a
T-change occurs, N-changes following it were written
to the T-cache instead of the S-cache. In some cases,
this raises problems. For example, a
Table_map_log_event being written initially into the
S-cache, together with the initial N-changes, would be
absent from the T-cache. This would log N-changes
orphaned from a Table_map_log_event (thence discarded
at the slave). (MIXED and ROW)
Case #2: When rolling back a transaction, the N-changes that
might be in the T-cache were disregarded and
truncated along with the T-changes. (MIXED and ROW)
Case #3: When a MIXED statement (TN) is ahead of any other
T-changes in the transaction and it fails, it is kept
in the T-cache until the transaction ends. This is
not the case in 5.1 or Betony (5.5.2). In these, the
failed TN statement would be written to the binlog at
the same instant it had failed and not deferred until
transaction end. (SBR)
To fix these problems, we have decided to do what follows:
For Cases #1 and #2, we circumvent them:
1. by not letting binlog_direct_non_transactional_updates
affect MIXED and RBR. These modes will keep the behavior
provided by WL#2687. Although this will make Celosia
behave differently from 5.1, execution will always be
safe under such modes in the sense that slaves will never
go out of sync. In 5.1, using either MIXED or ROW while
mixing N-statements and T-statements was not safe.
For Case #3, we don't actually fix it. We:
1. keep it and make all MIXED statements, whether they end
up failing or not and whether they are up front in the
transaction or after some transactional change, always
be stored in the T-cache. This means a statement is written
to the binary log on transaction commit/rollback only.
2. We make the warning message even more specific about the
MIXED statement and SBR.
Problem: EXPLAIN EXTENDED was trying to resolve references to
freed temporary table fields for GROUP_CONCAT()'s ORDER BY arguments.
Fix: use the stored original GROUP_CONCAT() arguments in such a case.
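The affected query shape looks like this (hypothetical table):
  CREATE TABLE t1 (a INT, b INT);
  -- When printing the reconstructed query, EXPLAIN EXTENDED must use
  -- the original ORDER BY argument (b) rather than a reference into a
  -- temporary table that has already been freed.
  EXPLAIN EXTENDED SELECT GROUP_CONCAT(a ORDER BY b) FROM t1 GROUP BY a;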
into partitioned MyISAM table
Problem was that the ha_data structure was introduced in 5.1
and used only for partitioning at first, but with the intention
of being of use for other engines as well; when used by another
engine it would clash if the table was also partitioned.
Solution is to move the partitioning-specific data to a separate
structure, with its own mutex (which is used for auto_increment).
Also renamed PARTITION_INFO to PARTITION_STATS, since there
already exists a class named partition_info, and cleaned up
some related variables.
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have been actually read.
The fix is to remove the erroneous 'outer_join' variable check.
The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow). max_length is set
to 0, which leads to an assertion failure.
The fix is to cast the expression result to a
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
Repairing a MyISAM table with fulltext indexes and a low
myisam_sort_buffer_size may crash the server.
The estimation of the number of index entries was done incorrectly,
causing a subsequent assertion failure or server crash.
Docs note: the minimum value for myisam_sort_buffer_size has been
changed from 4 to 4096.
debug_max
There was a buffer overrun when unpacking the date
field. Incidentally, this seems to affect only Solaris x86_64
debug builds, but other platforms may be vulnerable as well.
In particular, the buffer size used was not taking into
consideration that the '\0' character would be written into
it.
We fix this by increasing the size of the buffer used to
accommodate one extra byte (the one for the '\0').
which was detected during the build of 5.5.3-m3.
This requires version 9 of IBM C++ to help.
More fixes will still be needed; it is very strict
(or rather: a bit picky?) about templates.
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/Makefile.am
Text conflict in mysql-test/collections/default.daily
Text conflict in mysql-test/r/mysqlbinlog_row_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_typeconv_innodb.result
Text conflict in mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
Text conflict in mysql-test/suite/rpl/t/rpl_row_create_table.test
Text conflict in mysql-test/suite/rpl/t/rpl_slave_skip.test
Text conflict in mysql-test/suite/rpl/t/rpl_typeconv_innodb.test
Text conflict in mysys/charset.c
Text conflict in sql/field.cc
Text conflict in sql/field.h
Text conflict in sql/item.h
Text conflict in sql/item_func.cc
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/rpl_utility.cc
Text conflict in sql/rpl_utility.h
Text conflict in sql/set_var.cc
Text conflict in sql/share/Makefile.am
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_select.cc
Text conflict in sql/sql_table.cc
Text conflict in storage/example/ha_example.h
Text conflict in storage/federated/ha_federated.cc
Text conflict in storage/myisammrg/ha_myisammrg.cc
Text conflict in storage/myisammrg/myrg_open.c
If the listed columns in the view definition of
the table used in an 'INSERT .. SELECT ..'
statement mismatched, a debug assertion would
trigger in the cache invalidation code
following the failing statement.
Although the find_field_in_view() function
correctly generated ER_BAD_FIELD_ERROR during
setup_fields(), the error failed to propagate
further than handle_select(). This patch fixes
the issue by adding a check for the return
value.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting
which is performed before greedy_search starts.
In our case the table which really has 'no dependencies'
should be put at the top of the list, but that does not
happen, as the inner tables have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from the
condition which checks whether table dependencies
should be updated.
col equal to itself!
There's no need to copy the value of a field into itself.
While generally harmless (except for some performance penalties)
it may be dangerous when the copy code doesn't expect this.
Fixed by checking if the source field is the same as the destination
field before copying the data.
Note that we must preserve the order of assignment of the null
flags (hence the null_value assignment addition).
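The degenerate assignment in question looks like this (illustrative):
  -- Copying a column onto itself: after the fix the data copy is
  -- skipped, while the NULL flag is still assigned in the original
  -- order.
  UPDATE t1 SET a = a;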
The reason for the failure was an apparent flaw: a pointer to an uninitialized
buffer was passed to DBUG_PRINT in Protocol_text::store().
Fixed by splitting the print-out into two branches:
one for the case where the problematic argument has zero length, and one for the rest.
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
check-field options.
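A sketch of the affected scenario (hypothetical table and trigger, non-strict SQL mode assumed):
  CREATE TABLE t1 (a INT NOT NULL);
  INSERT INTO t1 VALUES (1);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET @x = 1;
  -- In non-strict mode this should convert NULL to 0 with a warning,
  -- with or without the trigger; before the fix the trigger's presence
  -- turned the IGNORE behaviour into a hard error.
  UPDATE t1 SET a = NULL;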