There was a memory leak when running some tests on PB2.
The failure was caused by an early return from change_master()
that skipped deallocation of a dyn-array.
Fixed by relocating the dyn-array's destruction to ~LEX(), that is,
the end of the session, per Gleb's patch idea.
Two optimizations were made: the dyn-array is now based on a static
buffer, and the array initialization is invoked precisely when
necessary rather than on each CHANGE MASTER as before.
readline.cc: In function 'char* batch_readline(LINE_BUFFER*)':
readline.cc:60:9: error: 'out_length' may be used uninitialized in this function
log.cc: In function 'int find_uniq_filename(char*)':
log.cc:1857:8: error: 'number' may be used uninitialized in this function
BIN LOG HAS BEEN MOVED
Moving the binary/relay log files from one location to another and
restarting the server with different log-bin or relay-log paths
would cause the startup process to abort. The root cause was that
the server could not find the log files, because it considered the
old paths for entries in the index file instead of the new location.
What's even worse, relative paths were not considered relative to
the path provided in log-bin and relay-log, but to mysql_data_dir.
We fix the cases where the index file contains relative paths. When
the server is reading from the index file, it checks whether the
entry contains relative paths. If it does, we replace it with the
absolute path set in the log-bin/relay-log option. Absolute paths
remain unchanged, and the index must be manually edited to take the
new log-bin and/or relay-log path into account (this should be
documented). This is a fix for a GA version that does not break
behavior (that much).
For development versions, we should go with Zhenxing's approach
that removes paths altogether from index files.
Passing an empty user to the connect function causes valgrind
warnings. It seems that the client code is not prepared to handle
empty users. On 5.6 this can even be triggered by
START SLAVE PASSWORD='...'; i.e., without setting USER='...' on
the START SLAVE command (see WL#4143 for details on the new
additional START SLAVE commands).
To fix this, we disallow empty users when configuring the slave
connection parameters (this decision might be revisited if the
client code accepts empty users in the future).
A patch for alter_table-big.test was committed earlier.
This is a patch for create-big.test:
The test used to time out after 900 seconds.
It relied on debug sleeps that are no longer present in the code.
Since the sleeps are long gone, fixing the problem involved more
than just updating the result file or using the
"show_binlog_events2.inc" macro instead of the "show binlog events"
statement. The test needed to be rewritten using debug sync points,
and the result file then needed to be updated.
So, the sleeps have been replaced by debug_sync points and the test
execution time has been reduced significantly.
The problem was that built-in client-side support for Windows Native
Authentication (WNA) was included only in the client library, but not
in the server code (which also uses some of the sources from the
client library).
This is fixed by modifying sql/CMakeLists.txt to include the
client-side WNA plugin library and to enable WNA-related code by
defining the AUTHENTICATION_WIN macro.
Also, the logic of libmysql/CMakeLists.txt is simplified a bit.
BY CACHING OR REDUCING CREATEEVENT CALLS".
5.5 versions of MySQL server performed worse than 5.1 versions
under single-connection workload in autocommit mode on Windows XP.
Part of this slowdown can be attributed to overhead associated
with constant creation/destruction of MDL_lock objects in the MDL
subsystem. The problem is that creation/destruction of these
objects causes creation and destruction of associated
synchronization primitives, which are expensive on Windows XP.
This patch tries to alleviate this problem by introducing a cache
of unused MDL_object_lock objects. Instead of destroying such
objects we put them into the cache and then reuse them with a new
key when creation of a new object is requested.
To limit the size of this cache, a new --metadata-locks-cache-size
start-up parameter was introduced.
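For illustration, a hedged sketch of how the new setting would be
inspected; the exact variable name and value are assumptions derived
from the option name, not confirmed by this patch:
  -- value can only be set at server startup, e.g.:
  --   mysqld --metadata-locks-cache-size=1024
  SHOW GLOBAL VARIABLES LIKE 'metadata_locks_cache_size';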
my_double_round(DBL_MAX, -12, ....)
should return 'inf' rather than DBL_MAX
The problem is that floor(value/tmp) * tmp
is inlined, and optimized away.
The solution seems to be to prevent inlining by pre-computing value/tmp and
storing it in a variable.
No new test case: main.type_float fails without this patch.
SCAN/CPU) => SLAVE FAILURE
When a statement that affects a large number of rows is applied on
the slave, and the slave's table does not contain a PK, it can take a
considerable amount of time to find and change all the rows that are
to be changed.
The proper slave enhancement will be implemented in WL 5597. However,
in this bug we are making it clear to the user what the problem is, by
printing a message to the error log if the execution time, for a given
statement in RBR, exceeds LONG_FIND_ROW_THRESHOLD (set to 60
seconds). This shall help the DBA to diagnose what's happening when
facing a slave server that is quiet for no apparent reason...
The note is only printed to the error log if log_warnings is set to be
greater than 1.
a.k.a. Bug#7975 deadlock without any locking, simple select and update
Bug#7975 was reintroduced when the storage engine API was made
pluggable in MySQL 5.1. Instead of looking at thd->lex directly, we
rely on handler::extra(). But, we were looking at the wrong extra()
flag, and we were ignoring the TRX_DUP_REPLACE flag in places where we
should obey it.
innodb_replace.test: Add tests for, hopefully, all affected statement
types, so that the bug should never resurface. This kind of test
should have been added when fixing Bug#7975 in MySQL 5.0.3 in the
first place.
rb:806 approved by Sunny Bains
ALTER TABLE AND/OR PLUGIN/SEMISYNC
If a plugin was uninstalled, thread local values for plugin
variables of string type with PLUGIN_VAR_MEMALLOC flag were
not freed.
With this patch these variables are freed when the thread
terminates (like all other variables).
In patch mysql-5.5:revno:3097.92.133, we made the gcc 4.6.1 compiler
stop complaining about the fact that binlog_can_be_corrupted was
defined but not used. The fix consisted of checking the variable
and printing a warning message.
However, the fix caused a regression: a message was printed even
when there was no corrupted binary log, causing performance
problems and triggering users' suspicions when there was no need.
In BUG#13337202, we do not print any message and use the variable
in an "if" with an empty body to keep the compiler happy.
This bug is similar to an earlier fixed one, bug_49536.
A deadlock involving LOCK_log appears to be possible because the
purge-running thread holds LOCK_log even though there is no point in
doing so, a fact that was exploited by the earlier bug fixes.
Fixed with a small reengineering of rotate_and_purge(): two new
methods were added, and a policy was set up to execute those instead
of the former rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED).
The policy for using rotate() and purge() is that if the caller
acquires LOCK_log itself, it should call rotate(), release the
mutex, and then run purge().
A side effect of this patch is a refined error message for
bug@11747416 that prints the whole path.
PARENT FOR OTHER ONE
Do not try to look up the key_nr'th key in 'table', because there may
not be such a key there. key_nr is the number of the key in the
_child_ table, not in the parent table.
Instead just print the fields of the record that are covered by the first key
defined on the parent table.
This bug gets a better fix in MySQL 5.6, which is too risky for 5.1 and 5.5.
Approved by: Jon Olav Hauglid (via IM)
NEW_FRM_MEM WITHOUT NEEDING TO".
During the process of opening tables for a statement, we allocated
memory which was used only during view loading even in cases when the
statement didn't use any views. Such an unnecessary allocation (and
corresponding freeing) might have caused significant performance
overhead in some workloads. For example, it caused up to 15% slowdown
in a simple stored routine calculating Fibonacci numbers.
This memory was pre-allocated as part of "new_frm_mem" MEM_ROOT
initialization at the beginning of open_tables().
This patch addresses this issue by turning off memory pre-allocation
during initialization for this MEM_ROOT. Now, memory on this root
will be allocated only at the point when the first .FRM for a view is
opened.
The patch doesn't contain a test case since it is hard to test the
performance improvements or the absence of memory allocation in our
test framework.
The assertion in InnoDB is triggered in this way:
1. The MySQL server does a lookup on the primary key with the full
key; InnoDB decides not to store the cursor position because
"any index_next/prev call will return EOF anyway".
2. The server asks InnoDB to return the next record in the index, and
the assertion is triggered because no cursor position is stored.
It happens when a unique search (match_mode=ROW_SEL_EXACT)
in the clustered index is performed. InnoDB has never stored
the cursor position after a unique key lookup in the
clustered index because storing the position is an expensive
operation. The bug was introduced by
WL3220 'Loose index scan for aggregate functions'.
The fix is to disallow the loose index scan optimization
for AGG_FUNC(DISTINCT ...) if the GROUP_MIN_MAX quick select
uses the clustered key.
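As an illustration, a hedged sketch of the affected query shape; the
table layout and names are assumptions, not the original test case:
  CREATE TABLE t1 (a INT, b INT, PRIMARY KEY (a, b)) ENGINE=InnoDB;
  -- DISTINCT over clustered (primary) key columns; loose index scan
  -- is now disallowed for this shape.
  SELECT COUNT(DISTINCT a, b) FROM t1 GROUP BY a;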
warnings are converted to errors, the compiler complains about
the fact that binlog_can_be_corrupted is defined but never used.
We need to check whether this is dead code or whether someone
removed some code by mistake.
Buffer over-run on all platforms, crash on Windows, wrong results on
other platforms, when rounding numbers which start with 999999999
and have precision 9, 18, 27, 36, ...
When a temporary table is used for result sorting, the result field
for the GROUP_CONCAT function is created using the
group_concat_max_len size. This leads to result truncation when
character_set_results is a multi-byte character set, due to the
insufficient tmp table field size.
The fix is to increase the temporary table field size for
GROUP_CONCAT. The method make_string_field() is overloaded for the
Item_func_group_concat class and uses a size of
max_characters * collation.collation->mbmaxlen for the result
field, where max_characters is the maximum number of characters
that can fit into max_length bytes.
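A minimal sketch of the affected scenario (schema and names are
assumptions): with a multi-byte character_set_results, the
temporary-table field must be max_characters * mbmaxlen bytes wide
to avoid truncation.
  SET NAMES utf8;                         -- multi-byte result charset
  SET SESSION group_concat_max_len = 1024;
  -- GROUP BY forces a temporary table; before the fix the result
  -- could be truncated below group_concat_max_len characters.
  SELECT GROUP_CONCAT(c1 ORDER BY c1) FROM t1 GROUP BY c2;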
A buffer large enough to hold the query _plus_ some additional
data is allocated before parsing is started. The additional data
is used by the query cache, and consists of the name of the current
database and a set of flags.
When a packet containing multiple SQL statements is sent to the
server and one of the statements changes the current database
(a "USE <db>" statement), and the name of the new current database
is longer than that of the previous one, there is not enough space
in the buffer for the new name, and we write past the buffer
boundary.
The fix adds an extra field to store the number of bytes
allocated to the database name in the buffer. If the current
database name changes, and the new name is longer than the
previous one, we refuse to cache the query.
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it could lead to a
crash.
The problem is the following:
An attempt to obtain table statistics for a subselect table in a
killed query fails with an error. So JOIN::optimize() for the
subquery fails, but this does not prevent further subquery
evaluation.
At the first subquery execution, JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE, which prevents
further subquery evaluation. At the second call,
JOIN::optimize() does not happen as 'JOIN::optimized' is TRUE,
and in case of an uncacheable subquery the 'executed' flag is set
to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and as an indication
of subquery execution. And that seems wrong, as the flag can be
reset.
The binary log of the master can get a partially logged event if the
server runs out of disk space and, while waiting for some space to
be freed, is shut down (or crashes). If the server is not stopped,
it will just wait endlessly for space to be freed, so no
partial-event anomaly occurs. The restarted master server had a
dubious policy of sending the incomplete event to the slave, which
apparently cannot handle it.
Although an error was printed out, sending the event with an unclear
error message was a source of confusion.
Actually, the problem of the presence of an incomplete event in the
binary log was already fixed by WL 5493 (which was merged to our
current trunk branch, major version 5.6). That fix makes the server
truncate the binary log on server restart and recovery.
However, a 5.5 master can't do that. So the current issue is the
problem of a 5.5 master sending incomplete events to the slave.
It is fixed in this patch by changing the policy so that only
complete events are pushed by the dump thread to the IO thread. In
addition, the error text that the master sends to the slave when an
incomplete event is found now states that the incomplete event may
have been caused by an out-of-disk-space situation, and it provides
the coordinates of the first and the last event bytes read.
1 - If a user had SHOW VIEW and SELECT privileges on a view and
this view was referencing another view, EXPLAIN SELECT on the outer
view (that the user had privileges on) could reveal the structure
of the underlying "inner" view as well as the number of rows in
the underlying tables, even if the user had privileges on none of
these referenced objects.
This happened because we used DEFINER's UID ("SUID") not just for
the view given in EXPLAIN, but also when checking privileges on
the underlying views (where we should use the UID of the EXPLAIN's
INVOKER instead).
We no longer run the EXPLAIN SUID (with DEFINER's privileges).
This prevents a possible exploit and makes permissions more
orthogonal.
2 - EXPLAIN SELECT would reveal a view's structure even if the user
did not have SHOW VIEW privileges for that view, as long as they
had SELECT privilege on the underlying tables.
Instead of requiring both SHOW VIEW privilege on a view and SELECT
privilege on all underlying tables, we were checking for presence
of either of them.
We now explicitly require SHOW VIEW and SELECT privileges on
the view we run EXPLAIN SELECT on, as well as all its
underlying views. We also require SELECT on all relevant
tables.
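A hedged usage sketch of the resulting rules; the object and account
names are assumptions:
  -- The invoker now needs both privileges on the view itself
  -- (and on any underlying views) before EXPLAIN is allowed:
  GRANT SELECT, SHOW VIEW ON db1.v1 TO 'u1'@'localhost';
  EXPLAIN SELECT * FROM db1.v1;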
Problem: The following statements can cause the slave to go out of
sync if logged in statement format:
INSERT IGNORE ... SELECT
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE
REPLACE ... SELECT
UPDATE IGNORE
CREATE ... IGNORE SELECT
CREATE ... REPLACE SELECT
Background: Since the order of the rows returned by the SELECT
statement (or otherwise) may differ on master and slave, the above
statements can cause the slave to go out of sync with the master.
Fix:
Issue a warning when statements like the above are executed and
the binlogging format is statement. If the logging format is mixed,
use row-based logging. Marking a statement as unsafe has been
done in sql/sql_parse.cc instead of sql/sql_yacc.cc, because while
one token has been parsed we cannot be sure that the other tokens
have been parsed as well.
Six new warning messages have been added, one for each unsafe
statement. binlog.binlog_unsafe.test has been updated to incorporate
these additional unsafe statements.
******
BUG#11758262 - 50439: MARK INSERT...SEL...ON DUP KEY UPD,REPLACE...SEL,CREATE...[IGN|REPL] SEL
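A minimal sketch of the new behavior; the table names are
assumptions:
  SET SESSION binlog_format = 'STATEMENT';
  INSERT IGNORE INTO t2 SELECT c1 FROM t1;
  SHOW WARNINGS;  -- an "unsafe statement written to the binary log"
                  -- warning is now raised for this statement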
SYSTEM VARIABLE NAME SQL_MAX_JOIN_SI
BACKGROUND:
ER_TOO_BIG_SELECT refers to SQL_MAX_JOIN_SIZE, which is the
old name for MAX_JOIN_SIZE.
FIX:
Support for the old name SQL_MAX_JOIN_SIZE was removed in MySQL 5.6;
only MAX_JOIN_SIZE remains. So errmsg.txt and mysql.cc have been
updated, and the corresponding result files have also been updated.
The main problem was that lex_start() was not called before
processing COM_REFRESH.
Another problem discovered was that failures to flush the error log
were not properly handled, which resulted in a server crash.
The user-visible effects of these problems were:
- if a COM_REFRESH command was sent after SQL queries of some sort,
the server would crash;
- if COM_REFRESH was requested with REFRESH_LOG only, and the error
log failed to flush, the server would crash. The error log fails to
flush when it points to an unavailable file (for example, due to
restricted permissions).
The fixes are:
- call lex_start() at the beginning of COM_REFRESH;
- handle failures to flush the error log properly, i.e. raise
ER_UNKNOWN_ERROR.
WITH MYSQL_REFRESH()
reset_slave_info.all was not initialized.
We fix this by setting lex->reset_slave_info.all= false in
the lex_start routine, which is called before every statement.
(also 5.5+ solution for bug#11766879/bug#60106)
The valgrind warning was due to an unused 'new handler_add_index(...)'
which was never freed.
The error handling did not work (it fails as in bug#11766879), and
the implementation was not as transparent as it could have been, so I
made it a bit simpler and more transparent to the underlying
handlers. This way it follows the API better, and the error handling
works and is also now tested.
Also added a debug test to verify the error handling.
Improved according to Jon Olav's review:
Added class ha_partition_add_index.
Also added the base class Sql_alloc to handler_add_index.
Update 3.
Amendment to previous patch:
Failure in CONV() should return NULL instead of an empty string.
When compiled on Windows or Solaris, the function
Item_func_conv::val_str() doesn't fail on longlong2str() but finds
an earlier exit path based on the attributes of the arguments.
This exit path returns NULL on failure, and as a consequence the
original patch caused different test results depending on the OS
used.
Connection of a slave to a master using a replication account which
authenticates with an external plugin was not possible.
Fixed by making sure that the CLIENT_PLUGIN_AUTH capability is set
when the client connects using mysql_real_connect(). Also, the
plugin-dir path used by the client library to locate authentication
plugins is set based on the analogous server setting. This is done
in the connect_to_master() function before the call to
mysql_real_connect().
FROM OK PACKET
There's no reliable way (without knowing the protocol variants that
each plugin pair implements) to find out when the authentication
exchange ends.
The server is changed to send all the extra authentication packets that
server plugins need to send prefixed with the \x1 command.
Failure to check the return state of a longlong2str() call
caused a crash. This could happen if a user executed the SQL
function CONV() with certain parameters.
The patch fixes the issue by checking that the returned pointer
isn't NULL.
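For illustration, the user-visible contract of CONV() that both
fixes preserve: on failure it returns NULL rather than crashing or
returning an empty string.
  SELECT CONV('ff', 16, 10);  -- '255'
  SELECT CONV('ff', 16, 99);  -- NULL: base outside the supported range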
This fix was accidentally pushed to mysql-5.1 after the 5.1.59 clone-off in
bzr revision id marko.makela@oracle.com-20110829081642-z0w992a0mrc62s6w
with the fix of Bug#12704861 Corruption after a crash during BLOB update
but not merged to mysql-5.5 and upwards.
In the Barracuda formats, the clustered index record no longer
contains a prefix of off-page columns. Because of this, the undo log
must contain these prefixes, so that purge and multi-versioning will
continue to work. However, this also means that an undo log record can
become too big to fit in an undo log page. (It is a limitation of the
undo log that undo records cannot span across multiple pages.)
In case the checks for undo log size fail when CREATE TABLE or CREATE
INDEX is executed, we need a fallback that blocks a modification
operation when the undo log record would exceed the maximum size.
trx_undo_free_last_page_func(): Renamed from trx_undo_free_page_in_rollback().
Define the trx_t parameter only in debug builds.
trx_undo_free_last_page(): Wrapper for trx_undo_free_last_page_func().
Pass the trx_t parameter only in debug builds.
trx_undo_truncate_end_func(): Renamed from trx_undo_truncate_end().
Define the trx_t parameter only in debug builds. Rewrite a for(;;) loop
as a while loop for clarity.
trx_undo_truncate_end(): Wrapper for trx_undo_truncate_end_func().
Pass the trx_t parameter only in debug builds.
trx_undo_erase_page_end(): Return TRUE if the page was non-empty
to begin with. Refuse to erase empty pages.
trx_undo_report_row_operation(): If the page for which the undo log
was too big was empty, free the undo page and return DB_TOO_BIG_RECORD.
rb:749 approved by Inaam Rana
GROUPING BY FUNCTIONS.... (PART
The bug was introduced in a patch for bug 49897.
Problem: The assertion inserted by the original patch to guard
against zero-length sort keys during the merge phase also triggers
when the whole set fits in memory.
Fix: Move assert so that it does not trigger if the whole set is in
memory.
Converting the number zero to binary and back yielded the number zero,
but with no digits, i.e. zero precision.
This made the multiply algorithm go haywire in various ways.
Suppress the known warnings generated by filesort().
The real fix belongs to worklog 1509:
Pack values of non-sorted fields in the sort buffer
(which is basically the same issue, but in an optimization context:
we are writing the entire sort buffer to disk,
including unused space for varchar columns.)
Background: Backporting the fix for BUG 11752963 to the MySQL 5.1
branch.
Problem: The fix for bug 11752963 was only available for trunk and
the 5.5 branch. A partial fix had been pushed to the 5.1 branch as
well.
Fix: backported the fixes of bug 11752963 to the 5.1 branch:
1. Made all major changes to bring the 5.1 branch in line with 5.5
and the trunk.
2. Skipped the partial patch that was already applied to the 5.1
branch.
Blind attempt to fix BUG 12881278 - MAIN.MYISAM TEST FAILS ON LINUX.
The printed text is truncated at char 63:
"MySQL thread id 1236, OS thread handle 0x7ff187b96700, query id"
I still do not understand how this truncation could have caused the
main.myisam failure, but anyway: the buffer needs to be increased.
The client-server protocol has left some room for interpretation
which this patch fixes by introducing byte counters and
enforced logic for SSL handshakes.
PARTITIONING, ON INDEX CREATE
If the first partition succeeded in adding an index but a subsequent
partition failed, the first partition still had the new index.
The fix reverts the added indexes from previous partitions on
failure.
CRASHES SERVER
Flushing of a MERGE table or one of its child tables, which was
locked by the flushing thread using LOCK TABLES, might have caused
crashes or assertion failures if the thread failed to reopen a
child or parent table.
Particularly, this might have happened when another connection
killed this FLUSH TABLE statement/connection.
This problem might also have occurred when we failed to reopen a
MERGE table or one of its children when executing a DDL statement
under LOCK TABLES.
The problem was caused by the fact that reopen_tables() might have
failed to reopen a child table but still tried to reopen, reattach
the children of, and re-lock its parent. Vice versa, it might have
failed to reopen the parent but kept references from the children to
the parent around. Since reopen_tables() closes the table it has
failed to reopen, and therefore frees all associated memory, such
dangling references led to crashes when followed.
This patch solves this problem by ensuring that we always close the
parent table and all its children if we fail to reopen this table
or one of its children. The same happens if we fail to reattach the
children to the parent.
Affects 5.1 only.
Also addressed issues in bug #11745133, where we could mark a table
corrupted instead of crashing the server when a corrupted buffer/page
is found, if the table was created with innodb_file_per_table on.
to mysql-5.5.16-release.
Original revision:
# revision-id: dmitry.lenev@oracle.com-20110811155849-feyt3h7tj48padiu
# parent: tatjana.nuernberg@oracle.com-20110811120945-c6x9a5d2du8s9oj2
# committer: Dmitry Lenev <Dmitry.Lenev@oracle.com>
# branch nick: mysql-5.5-12828477
# timestamp: Thu 2011-08-11 19:58:49 +0400
# message:
# Fix for bug #12828477 - "MDL SUBSYSTEM CREATES BIG OVERHEAD
# FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
#
# The problem was that metadata locking subsystem introduced
# too much overhead for queries to I_S which were processed by
# opening only .FRM or .TRG files and had to scan a lot of
# tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
# The same effect was not observed for similar queries which
# performed full-blown table open in order to fill I_S table.
#
# The problem stemmed from the fact that when the I_S
# implementation opened only the .FRM or .TRG file for each
# table processed, it didn't release the metadata lock it had
# acquired on the table after finishing its processing. As a
# result, the list of acquired metadata locks kept growing
# until the end of the statement. Since acquisition of each
# new lock required a search in the list of already acquired
# locks, performance degraded.
#
# The same effect is not observed when the I_S implementation
# performs a full-blown table open for each table being
# processed, as in the latter case the metadata lock on the
# table is released right after table processing.
#
# This fix addressed the problem by ensuring that the I_S
# implementation releases the metadata lock after processing
# the table, both in the case of a full-blown table open and
# in the case when only the .FRM or .TRG file is read.
SET STATEMENT.
A server built with debug asserts crashes (a build without debug
does not) if a user tries to run a stored procedure that contains a
query with a subquery that includes either a LIMIT or a
LIMIT ... OFFSET clause.
The problem was that Item::fix_fields() was not called for the items
representing the LIMIT or OFFSET clauses.
The solution is to call Item::fix_fields() right before evaluation in
st_select_lex_unit::set_limit().
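A hedged repro shape; the procedure and table names are assumptions:
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    -- scalar subquery with LIMIT/OFFSET: tripped the debug assert
    SELECT (SELECT c1 FROM t1 LIMIT 1 OFFSET 1) AS v;
  END//
  DELIMITER ;
  CALL p1();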
FOR SOME PLUGINS TO WORK
Some dynamically loadable plugins on the Mac may need functions from the
CoreServices framework. Unfortunately the only place where this can be initialized
is the main executable. Thus to allow plugins to use functions from that framework
the mysqld binary needs to link to the framework.
PAM AUTHENTICATION SETTINGS
SET PASSWORD code on an account with plugin authentication was
erroneously resetting the in-memory plugin pointer for the user back
to the native password plugin, despite the fact that it was sending
a warning that the command has no immediate effect.
Fixed by not updating the user's plugin if it's already set to a
non-default value.
Note that the bug affected only the in-memory cache of the user
definitions. Any restart of the server will fix the problem.
Also, the salt and the password hash are still stored in the user
tables (just as is documented now).
Test case added.
One old test case result updated to have the correct value.
FOR CERTAIN QUERIES TO INFORMATION_SCHEMA".
The problem was that the metadata locking subsystem introduced
too much overhead for queries to I_S which were processed by
opening only .FRM or .TRG files and had to scan a lot of
tables (e.g. SELECT COUNT(*) FROM I_S.TRIGGERS was affected).
The same effect was not observed for similar queries which
performed a full-blown table open in order to fill the I_S table.
The problem stemmed from the fact that when the I_S implementation
opened only the .FRM or .TRG file for each table processed, it
didn't release the metadata lock it had acquired on the table after
finishing its processing. As a result, the list of acquired metadata
locks kept growing until the end of the statement. Since acquisition
of each new lock required a search in the list of already acquired
locks, performance degraded.
The same effect is not observed when the I_S implementation
performs a full-blown table open for each table being
processed, as in the latter case the metadata lock on the
table is released right after table processing.
This fix addressed the problem by ensuring that the I_S
implementation releases the metadata lock after processing
the table, both in the case of a full-blown table open and
in the case when only the .FRM or .TRG file is read.
BUG #11754979 - 46675: ON DUPLICATE KEY UPDATE AND UPDATECOUNT() POSSIBLY WRONG
The mysql_affected_rows() client call returns 3 instead of 2 on an
INSERT ... ON DUPLICATE KEY UPDATE query with a duplicated key value.
The fix for the old bug #29692 was incomplete: an unnecessary double
increment of "touched" rows still happened.
This bugfix removes:
1) the unneeded increment of "touched" rows and
2) the useless double resetting of the auto-increment value.
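A minimal sketch of the corrected count; the schema is an
assumption:
  CREATE TABLE t1 (id INT PRIMARY KEY, c INT);
  INSERT INTO t1 VALUES (1, 10);
  INSERT INTO t1 VALUES (1, 20)
    ON DUPLICATE KEY UPDATE c = VALUES(c);
  -- expected: 2 rows affected (2 = an existing row was updated);
  -- the bug made mysql_affected_rows() report 3.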
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), and
this leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
The problem is that TIME_FUZZY_DATE is explicitly used for the
get_arg0_date() function in the Item_date_typecast::get_date method.
The fix is to use the real fuzzy_date value.
In 5.5, REFRESH SLAVE is an alias for RESET SLAVE; it was removed in
5.6. Resetting a slave through REFRESH SLAVE was causing errors on
the valgrind platform since reset_slave_info was undefined.
To fix the problem, we now set reset_slave_info when handling
REFRESH SLAVE.
SHOW ALL PROBLEMS FOR MERGE TABLE COMPLIANCE IN 5.1".
The problem was that CHECK/REPAIR TABLE for a MERGE table which
had several children missing or in the wrong engine reported only
the issue with the first such table in its result set, while in 5.0
this statement returned the whole list of problematic tables.
The ability to report problems for all children was lost during
significant refactorings of the MERGE code which were done as part
of work on the 5.1 and 5.5 releases.
This patch restores the status quo ante refactorings by changing
the code in such a way that:
1) Failure to open a child table due to its absence during CHECK/
REPAIR TABLE for a MERGE table is not reported immediately
when its absence is discovered in open_tables(). Instead,
handling/error reporting in such a situation is postponed
until the moment when the children are attached.
2) Code performing the attaching of children no longer stops when
it encounters the first problem with one of the children during
CHECK/REPAIR TABLE. Instead, it continues iterating through
the child list until all problems caused by child absence/
wrong engine are reported.
Note that even after this change a problem with a mismatch of the
child/parent definition won't be reported if there is also
another child missing, but this is how it was in 5.0 as well.
TOOLS
Backport a fix for Bug 57094 from 5.5.
The following revision was backported:
# revision-id: alexander.nozdrin@oracle.com-20101006150613-ls60rb2tq5dpyb5c
# parent: bar@mysql.com-20101006121559-am1e05ykeicwnx48
# committer: Alexander Nozdrin <alexander.nozdrin@oracle.com>
# branch nick: mysql-5.5-bugteam-bug57094
# timestamp: Wed 2010-10-06 19:06:13 +0400
# message:
# Fix for Bug 57094 (Copyright notice incorrect?).
#
# The fix is to:
# - introduce ORACLE_WELCOME_COPYRIGHT_NOTICE define to have a single place
# to specify copyright notice;
# - replace custom copyright notices with ORACLE_WELCOME_COPYRIGHT_NOTICE
# in programs.
Before BUG#28796, an empty host was used to identify that an instance
was no longer a slave. However, BUG#28796 changed this behavior and
one cannot set an empty host anymore. Besides, a RESET SLAVE only
cleans up information on the next event to retrieve from the master,
disables SSL, and resets the heartbeat period. So a call to SHOW
SLAVE STATUS after issuing a RESET SLAVE still returns some valid
information, such as host, port, user and password.
To fix this problem, we have introduced the command RESET SLAVE ALL that does
what a regular RESET SLAVE does and also clears host, port, user and password
information thus allowing users to identify when an instance is no longer a
slave.
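Usage sketch of the new command:
  STOP SLAVE;
  RESET SLAVE ALL;     -- also clears host, port, user and password
  SHOW SLAVE STATUS\G  -- no longer reports the old master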
HA_ERR was returning 0 (a null string) when no error happened
(error=0). Since HA_ERR is used in DBUG_PRINT regardless of whether
there was an error or not, the server could crash in Solaris debug
builds.
We fix this by:
- deploying an assertion that ensures that the function
is not called when no error has happened;
- making sure that HA_ERR is only called when an error
happened;
- making HA_ERR return "No Error", instead of 0, for
non-debug builds if it is called when no error happened.
This makes HA_ERR return values work with DBUG_PRINT on
Solaris debug builds.
non-latin1 server error message
The problem was a one-byte buffer overflow in the conversion
of an error message between character sets. Before explaining
the problem further, some background information. Before an
error message is sent to the user, the message is converted
to the character set specified in the character_set_results
variable. For various reasons, this conversion might cause
the message to increase in length -- for example, if certain
characters can't be represented in the result character set.
If the final message length is greater than the maximum allowed
length of an error message (MYSQL_ERRMSG_SIZE), the message
is truncated. The message is also always null-terminated
regardless of the character set. The problem arises from this
null-termination: if the message length reached the maximum,
the terminating null character would be placed one byte past
the end of the message buffer.
The solution is to reserve the end of the message buffer for
the null character.
The server crashes if it processes table map events that are
corrupted, especially if they map different tables to the same
identifier. This could happen, for instance, due to BUG 56226.
We fix this by checking whether the table map has already been
mapped before actually applying the event. If it has been mapped
with different settings an error is raised and the slave SQL
thread stops. If it has been mapped with same settings the event
is skipped. If the table is set to be ignored by the filtering
rules, there is no change in behavior: the event is skipped and
ids are not checked.
When CREATE TABLE wasn't given ENGINE=..., it would determine
the default ENGINE at parse time rather than at execution
time, leading to incorrect behaviour (namely, later changes
to the default engine being ignored) when calling CREATE TABLE
from a stored procedure.
We now defer working out the default engine until execution of
CREATE TABLE.
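A minimal sketch; the procedure/table names are assumptions, and the
variable name is the 5.5-era default_storage_engine:
  CREATE PROCEDURE p_make_table()
    CREATE TABLE t_demo (i INT);          -- no ENGINE=... clause
  SET SESSION default_storage_engine = MyISAM;
  CALL p_make_table();  -- the engine is now resolved here, at CALL time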
We must allocate a larger ref_pointer_array. We failed to account for extra
items allocated here:
#0 find_order_in_list
uint el= all_fields.elements;
all_fields.push_front(order_item); /* Add new field to field list. */
ref_pointer_array[el]= order_item;
order->item= ref_pointer_array + el;
#1 setup_order
#2 setup_without_group
#3 JOIN::prepare
GCC 4.6 has new -Wunused-but-set-variable flag, which is enabled
by -Wall, that causes GCC to emit a warning whenever a local variable
is assigned to, but otherwise unused (aside from its declaration).
Since the maintainer mode uses -Wall and -Werror, source code which
triggers these warnings will be rejected. That is, these warnings
become hard errors.
The solution is to fix the code which triggers these specific warnings.
In most of the cases, this is a welcome cleanup as code which triggers
this warning is probably dead anyway.
OLD VALUE OF INPUT PARAMETER.
The user-visible problem was that CASE-control-flow function
(not CASE-statement) misbehaved in stored routines under some
circumstances. The problem resulted in a crash or wrong data
returned. The error happened when expressions in CASE-function
were not of the same character set.
A CASE-function should return values of the same character set
for all branches. Internally, that means a new Item-instance
for the CONVERT(... USING <some charset>)-function is added
to the item tree when needed. The problem was that such changes
were not properly recorded using THD::change_item_tree(),
thus dangling pointers remain in the item tree after
THD::rollback_item_tree_changes(), which lead to undefined
behavior (i.e. crash / wrong data) for subsequent executions of
the stored routine.
This bug was introduced by a patch for Bug 11753363
(44793 - CHARACTER SETS: CASE CLAUSE, UCS2 OR UTF32, FAILURE).
The fixed function is Item_func_case::fix_length_and_dec().
New CONVERT-items are added in agg_item_set_converter(),
which calls THD::change_item_tree().
The problem was that an intermediate array was passed
to agg_item_set_converter(). Thus, THD::change_item_tree() there
was called on intermediate objects.
Note: those intermediate objects are allocated on THD's
memory root, so it's Ok to put them into "changed item lists".
The fix is to track changes on the correct objects.
UPDATED TWICE
For multi update it is not allowed to update a column
of a table if that table is accessed through multiple aliases
and either
1) the updated column is used as partitioning key
2) the updated column is part of the primary key
and the primary key is clustered
This check is done in unsafe_key_update().
The bug was that for case 2), it was checked whether
updated_column_number == table_share->primary_key.
However, the primary_key variable is the index number of the
primary key, not a column number.
Prior to this bugfix, the first column was wrongly believed to be
the primary key. The columns covered by an index are found in
table->key_info[idx_number]->key_part. The bugfix is to check if
any of the columns in the keyparts of the primary key are
updated.
The user-visible effect is that for storage engines with
clustered primary key (e.g. InnoDB but not MyISAM) queries
like
"UPDATE t1 AS A JOIN t2 AS B SET A.primkey=..."
will now error with
"ERROR HY000: Primary key/partition key update is not allowed
since the table is updated both as 'A' and 'B'."
instead of
"ERROR 1032 (HY000): Can't find record in 't1_tb'"
even if primkey is not the first column in the table. This
was the intended behavior of bugfix 11764529.
COLUMNS IN VIEWS
Issue:
The charset value for a column returned by MYSQL_LIST_FIELDS() was
not the same for a table and a view. This was because, for a view,
the field charset was not being returned.
Solution:
Added a definition of the function "charset_for_protocol()" in the
class Item_ident_for_show to return the field charset value.
RESULT CONSISTED OF MORE THAN ONE ROW
MySQL converts incorrect DATEs and DATETIMEs to '0000-00-00' on
insertion by default. This means that this sequence is possible:
CREATE TABLE t1(date_notnull DATE NOT NULL);
INSERT INTO t1 values (NULL);
SELECT * FROM t1;
0000-00-00
At the same time, ODBC drivers do not (or at least did not in the
90's) understand the DATE and DATETIME value '0000-00-00'. Thus,
to be able to query for the value 0000-00-00 it was decided in
MySQL 4.x (or maybe even before that) that for the special case
of DATE/DATETIME NOT NULL columns, the query "SELECT ... WHERE
date_notnull IS NULL" should return rows with date_notnull ==
'0000-00-00'. This is documented misbehavior that we do not want
to change.
The hack used to make MySQL return these rows is to convert
"date_notnull IS NULL" to "date_notnull = 0". This is, however,
only done if the table date_notnull belongs to is not an inner
table of an outer join. The rationale for this seems to be that
if there is no join match for the row in the outer table,
null-complemented rows would otherwise not be returned because
the null-complemented DATE value is actually NULL. On the other
hand, this means that the "return rows with 0000-00-00 when the
query asks for IS NULL"-hack is not in effect for outer joins.
In this bug, we have a LEFT JOIN that does not misbehave like
the documentation says it should. The fix is to rewrite
"date_notnull IS NULL" to "date_notnull IS NULL OR
date_notnull = 0"
if dealing with an OUTER JOIN, otherwise
"date_notnull IS NULL" to "date_notnull = 0"
as was done before.
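A sketch of the rewrite using the schema above (t2 and the join
column are assumptions):
  -- inner table of an outer join: keep both alternatives
  SELECT * FROM t2 LEFT JOIN t1 ON t2.id = t1.id
  WHERE t1.date_notnull IS NULL;
  -- is now evaluated as:
  --   t1.date_notnull IS NULL OR t1.date_notnull = 0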
Note:
The bug was originally reported as different result on first
and second execution of SP. The reason was that during first
execution the query was correctly rewritten to an inner join
due to a null-rejecting predicate. On second execution the
"IS NULL" -> "= 0" rewrite was done because there was no outer
join. The real problem, though, was incorrect date/datetime
IS NULL handling for OUTER JOINs.
SYNTAX TRIGGERS IN ANY WAY
A table with triggers using deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to the 5.1 version
of the server. An attempt to execute any statement on such a table
resulted in a parsing error being reported. Since this included DROP
TRIGGER and DROP TABLE statements (actually, the latter was allowed
but was not functioning properly for such tables), it was impossible
to fix the problem without manual operations on the .TRG and .TRN
files in the data directory.
The problem was that failure to parse the trigger body (due to
5.0-only syntax) when opening the trigger file for a table prevented
the table from being opened. This made all operations on the table
impossible (except DROP TABLE, which, due to a peculiarity in its
implementation, dropped the table but left the trigger files around).
This patch solves the problem by silencing the error which occurs
when we parse the trigger body during table open. The error message
is preserved for future use and the table is marked as having a
broken trigger. We also try to analyze the parse tree to recover the
trigger name, which will be needed in order to drop the broken
trigger. DML statements which invoke triggers on a table marked as
having a broken trigger are prohibited and emit the saved error
message. The same happens for DDL which changes triggers, except
DROP TRIGGER and DROP TABLE, which try their best to do what was
requested. The table is no longer marked as having a broken trigger
when the last such trigger is dropped.
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally even
if a new value was not specified explicitly. As a consequence this
field was set to the default value for status, which corresponds to
ENABLE.
The solution is to check whether the status field was explicitly
specified in the ALTER EVENT statement before assigning a new value
to it.
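A minimal sketch of the corrected behavior; the event name is an
assumption:
  CREATE EVENT ev1 ON SCHEDULE EVERY 1 DAY DO SET @x = 1;
  ALTER EVENT ev1 DISABLE;
  ALTER EVENT ev1 ON SCHEDULE EVERY 2 DAY;  -- must stay DISABLED now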
SEEMS TO BE 'LEAKING' INTO THE SCHEMA NAME SPACE)
and bug#12428824 (Parser stack overflow and crash in sp_add_used_routine
with obscure query).
The first problem was that attempts to call a stored function by
its fully qualified name ended up with unwarranted error "ERROR 1305
(42000): FUNCTION someMixedCaseDb.my_function_name does not exist"
if this function belonged to a schema that had uppercase letters in
its name AND --lower_case_table_names was equal to either 1 or 2.
The second problem was that the 5.5 version of the MySQL server
might have crashed when a user tried to call a stored function with
a too long name or too long database name (i.e. if the function and
database name combined occupied more than 2*3*64 bytes in utf8).
This issue didn't affect versions of the server < 5.5.
The first problem was caused by the fact that when a stored function
was called by its fully qualified name, we didn't lowercase the name
of its schema before looking the function up in the mysql.proc
table, even though lower_case_table_names mode was on. As a result
we were unable to find this function, since in this mode we store
the lowercased version of the schema name in the system table during
its creation, and the field for the schema name uses a binary
collation.
Calls to stored procedures were unaffected by this problem since for
them the schema name is converted to lowercase as necessary.
The reason for the second bug was that the MySQL Server didn't check
the lengths of the function name and database name before proceeding
with execution of the stored function. As a consequence a too long
database name or function name caused buffer overruns in places
where the code assumes that their lengths are within fixed limits,
like mdl_key_init() in 5.5.
Again, this issue didn't affect calls to stored procedures, as for
them the lengths of the schema name and procedure name are properly
checked.
This patch fixes both these bugs by adding calls to check_db_name()
and check_routine_name() to the grammar rule which corresponds to a
call to a stored function. These functions ensure that the lengths
of the database name and function name for the routine called are
within the standard limits. Moreover, the call to check_db_name()
handles conversion of the database name to lowercase if
--lower_case_table_names mode is on.
Note that even though the second issue seems to be reproducible only
in 5.5, we still add the code fixing it to 5.1 to be on the safe
side (and to make the code a bit more robust against possible future
changes).
STATEMENTS FAIL".
An attempt to execute a CREATE TABLE LIKE statement with a MyISAM
table with INDEX or DATA DIRECTORY options specified as its source
resulted in a "MyISAM table '...' is in use..." error.
According to our documentation, such a statement should create
a copy of the source table with the DATA/INDEX DIRECTORY options
omitted.
The problem was that the new implementation of the CREATE TABLE LIKE
statement in 5.5 tried to copy the values of the INDEX and DATA
DIRECTORY parameters from the source table. Since in the description
of the source table these parameters also included the name of that
table, the attempt to create the target table with these parameters
led to a file-name conflict and an error.
This fix addresses the problem by preserving the documented and
backward-compatible behavior, i.e. by ensuring that the contents of
the DATA/INDEX DIRECTORY clauses for the source table are ignored
when the target table is created.
This is the 5.1 version of the fix.
Need to free the memory allocated by the option parsing code for empty
strings when resetting the pointer to NULL.
No test case needed, as the existing ones already cover this path.
BOGUS "THE TABLE MYSQL.PROC IS MISSING,..."
There was a race condition between loading a stored routine
(function/procedure/trigger) specified by fully qualified name
SCHEMA_NAME.PROC_NAME and dropping the stored routine database.
The problem was that there is a window for a race condition when one
server thread tries to load a stored routine being executed and
another thread tries to drop the stored routine's schema.
This race window exists in the implementation of the function
mysql_change_db(), called by db_load_routine() while loading a
stored routine into the cache. mysql_change_db() calls
check_db_dir_existence(), which might fail because the specified
database was dropped during concurrent execution of a DROP SCHEMA
statement. db_load_routine() calls mysql_change_db() with the flag
'force_switch' set to 'true', so when the referenced db is not
found, my_error() is not called and mysql_change_db() returns OK.
This shadows the information about the schema-opening error in
db_load_routine(). db_load_routine() then attempts to parse the
stored routine, which fails. This returns an error to
sp_cache_routines_and_add_tables_aux(), but since no call to
my_error() was made during error generation, and hence THD::main_da
wasn't set, we set the generic "mysql.proc table corrupt" error
when running sp_cache_routines_and_add_tables_aux().
The fix is to install an error handler inside db_load_routine() for
the mysql_change_db() call, and check later whether ER_BAD_DB_ERROR
was caught.
TO POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except rename
columns if the new version of the table had exactly the same
structure as the old one (i.e. as a result of such a statement, the
names of the columns changed their order as specified, but the data
in the columns didn't). The same thing happened for ALTER TABLE DROP
COLUMN/ADD COLUMN statements which were supposed to produce a new
version of the table with exactly the same structure as the old one.
I.e. in the latter case the result was the same as if the old column
had been renamed instead of being dropped, and the new column had
been created with its default value.
Both of these problems were caused by the fact that the ALTER TABLE
implementation incorrectly interpreted both situations as a simple
renaming of columns and assumed that the in-place ALTER TABLE
algorithm could be used for them.
This patch fixes this problem by ensuring that in cases when some
column is moved to the first position or some column is dropped
the default ALTER TABLE algorithm involving table copying is
always used. This is achieved by detecting such situations in
mysql_prepare_alter_table() and setting Alter_info::change_level
to ALTER_TABLE_DATA_CHANGED for them.
FAIL IN EMBEDDED SERVER
FreeBSD 64-bit needs the FP_X_DNML flag to fpsetmask() to prevent
exceptions from propagating into mysql (as a threaded application).
However, fpsetmask() itself is deprecated in favor of
fedisableexcept().
1. Fixed the #ifdef to check for FP_X_DNML instead of i386.
2. Added a configure.in check for fedisableexcept(); if present,
this function is called instead of fpsetmask().
No need for new tests, as the existing tests cover this already.
Removed the affected tests from the experimental list.
The types mysql_event_general/mysql_event_connection were
being cast to the incompatible type mysql_event. The way
mysql_event and the other types are designed is prone to
strict-aliasing violations and can break things depending
on how the compiler optimizes this code.
This patch fixes the audit interface so that it conforms to
strict-aliasing rules. It introduces incompatible changes to the
audit interface:
- mysql_event type has been removed;
- event_class has been removed from mysql_event_generic and
mysql_event_connection types;
- st_mysql_audit::event_notify() second argument is event_class;
- st_mysql_audit::event_notify() third argument is event of type
(const void *).
"Writing Audit Plugins" section of manual should be updated:
http://dev.mysql.com/doc/refman/5.5/en/writing-audit-plugins.html
The check for an empty password in the user account was checking the
wrong field. Fixed to check the proper password hash.
Test case added.
Fixed the native_password and old_password plugins that suffered
from the same problem.
Disambiguated the auth_string ACL_USER member: previously it was
used for both the password and the authentication string (depending
on the plugin). Now fixed to contain either the authentication
string specified or an empty string.
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update .FRM. If this happened, InnoDB
and the server got out of sync with regards to which indexes
actually existed. Therefore the patch for Bug#11815600 again
disabled concurrent reads.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after lock upgrade, which prevents the original
regression.
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(): the former creates indexes without
making them visible, and the latter commits (i.e. makes visible)
new indexes or reverts the changes.
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
With this change, the index prefix column length is lifted from 767
bytes to 3072 bytes if "innodb_large_prefix" is set to "true".
rb://603 approved by Marko
The problem is that clients implementing the 4.0 version of the
protocol (that is, mysql-4.0) do not null terminate a string
at the end of the authentication packet. These clients denote
the end of the string with the end of the packet.
Although this goes against the documented (see MySQL Internals
ClientServer Protocol wiki) description of the protocol, these
old clients still need to be supported.
The solution is to support the documented and actual behavior
of the clients. If a client is using the pre-4.1 version of
the protocol, the end of a string in the authentication packet
can either be denoted with a null character or by the end of
the packet. This restores backwards compatibility with old
clients implementing either the documented or actual behavior.
will create multiple running events.
A CREATE EVENT IF NOT EXISTS on an event that existed and was
enabled caused multiple instances of the event to run. Disabling the
event didn't help.
If the event was dropped, the event stopped running, but when created
again, multiple instances of the event were still running. The only way
to get out of this situation was to restart the server.
The problem was that Event_db_repository::create_event() didn't
return enough information to discriminate between the situation when
the event didn't exist and was created, and the situation when the
event did exist and was not created (but a warning was emitted). As
a result, in the latter case the event was added to the in-memory
queue of events a second time, and this led to unwarranted multiple
executions of the same event.
The solution is to add an out-parameter to the
Event_db_repository::create_event() method which signals that the
event was not created because it already exists, and so it should
not be added to the in-memory queue.
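A minimal sketch of the fixed behavior; the event name is an
assumption:
  CREATE EVENT IF NOT EXISTS ev1
    ON SCHEDULE EVERY 1 SECOND DO SET @x = 1;
  -- second attempt: emits a warning only, and the already existing
  -- event is NOT queued for execution a second time
  CREATE EVENT IF NOT EXISTS ev1
    ON SCHEDULE EVERY 1 SECOND DO SET @x = 1;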
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
Attempt to update an InnoDB temporary table under LOCK TABLES
led to assertion failure in both debug and production builds
if this temporary table was explicitly locked for READ. The
same scenario works fine for MyISAM temporary tables.
The assertion failure was caused by a discrepancy between the lock
that was requested on the rows of the temporary table at LOCK TABLES
time and by the update operation. Since the SQL layer requested a
read lock at LOCK TABLES time, the InnoDB engine assumed that the
upcoming statements to be executed under LOCK TABLES would only read
the table and therefore should acquire only an S-lock.
An update operation broke this assumption by requesting an X-lock.
Possible approaches to fixing this problem are:
1) Skip locking of temporary tables as locking doesn't make any
sense for connection-local objects.
2) Prohibit changing of temporary table locked by LOCK TABLES ...
READ.
Unfortunately both of these approaches have drawbacks which make
them unviable for stable versions of the server.
So this patch takes another approach and changes the code in such a
way that LOCK TABLES for a temporary table will always request a
write lock. In the 5.5 version of this patch, the switch from read
lock to write lock is done at the SQL layer.
Problem: MYSQL_BIN_LOG::reset_logs acquires mutexes in wrong order.
The correct order is first LOCK_thread_count and then LOCK_log. This function
does it the other way around. This leads to deadlock when run in parallel
with a thread that takes the two locks in correct order. For example, a thread
that disconnects will take the locks in the correct order.
Fix: change order of the locks in MYSQL_BIN_LOG::reset_logs:
first LOCK_thread_count and then LOCK_log.
The assertion happened due to a missing NULL value check in the
Item_func_round::fix_length_and_dec() function.
The fix: a NULL value check was added for the second parameter.
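For illustration, the case the check covers:
  SELECT ROUND(1.5, NULL);  -- returns NULL instead of hitting the
                            -- assertion in fix_length_and_dec()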
In RBR, when converting blob fields, the space allocated while
unpacking into the conversion field was not freed after copying from
it into the real field.
We fix this by freeing the conversion field when the conversion
table is no longer needed (on close_tables_to_lock).
when selecting from I_S and views exist, in SP.
Symptoms: re-execution of a prepared statement (or a statement in a
stored routine) which read from one of the I_S tables and which, in
order to fill this I_S table, had to open a view led to increasing
memory consumption.
What happened in this situation was that, during the process of view
opening for the purpose of I_S filling, view-related structures
(like the view's LEX) were allocated on the persistent MEM_ROOT of
the prepared statement (or stored routine). Since this MEM_ROOT is
not freed until prepared statement deallocation (or expulsion of the
stored routine from the cache), and the code responsible for filling
I_S is not able to re-use the results of view opening from previous
executions, this allocation amounted to memory hogging.
This patch solves the problem by ensuring that when a view is opened
for the purpose of I_S filling, all its structures are allocated on
a non-persistent runtime MEM_ROOT. This is achieved by activating a
temporary Query_arena bound to this MEM_ROOT.
Since this step makes it impossible to link view structures into the
LEX of our prepared statement (or stored routine statement), this
patch also changes the code filling the I_S table to install a proxy
LEX before trying to open a view or a table. Consequently, some code
which was responsible for backing up/restoring parts of LEX when a
view/table was opened during the filling of an I_S table became
redundant and was removed.
This patch doesn't contain a test case for this bug, as it is hard
to test memory hogging in our test suite.
Issue:
While running the embedded server, if the client issues a TEE
command (\T foo/bar) and the "foo/bar" directory doesn't exist, it
is supposed to give an error, but it was aborting instead. This was
happening because the wrong error handler was being called.
Solution:
Modified the calls to use the correct error handler. In the embedded
server case, there are two error handlers (client and server) which
are supposed to be called based on which context the code is in. If
it is in the client context, the client error handler should be
called, otherwise the server one.
Test case:
Test case automation is not possible as the current (following) code
doesn't allow '\T' to be executed from the command line (or from a
command read from a file):
[client/mysql.cc]
...
static int
com_tee(String *buffer __attribute__((unused)),
        char *line __attribute__((unused)))
{
  char file_name[FN_REFLEN], *end, *param;
  if (status.batch)   /* TRUE while executing from the command line */
    return 0;
  ...
So, not adding a test case in GA. Will add a test case in mysql-trunk
after removing the above code so that this can be properly tested
before GA.
There are two problems:
1. There is a missing check for the 'year' parameter (the year
cannot be greater than 9999) in the MAKEDATE() function.
Fix: added a check that the year cannot be greater than 9999.
2. There is a missing check for the zero date in the FROM_DAYS()
function.
Fix: added a zero-date check to the Item_func_from_days::get_date()
function.
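A hedged sketch of the user-visible effect of both checks:
  SELECT MAKEDATE(10000, 1);  -- NULL: year above 9999 is rejected
  SELECT FROM_DAYS(0);        -- the zero date is now handled explicitly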