The problem is that the offset argument of the LIMIT clause
might be truncated on a 32-bit server built without big
tables support. The truncation happened because the
original 64-bit argument was cast to a 32-bit
(ha_rows) offset counter.
The solution is to check whether the conversion resulted in
truncation and, if so, to set the offset to the maximum
value that fits in the type.
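A minimal sketch of the clamping idea, assuming a 32-bit ha_rows
(the typedef and function name are illustrative, not the server's
actual code):

  #include <cstdint>
  #include <limits>

  typedef uint32_t ha_rows;  // assumption: 32-bit build without big tables

  static ha_rows clamp_offset(uint64_t offset_arg)
  {
    ha_rows offset = static_cast<ha_rows>(offset_arg);
    // If casting back does not reproduce the original value, the cast
    // truncated; fall back to the largest representable row count.
    if (static_cast<uint64_t>(offset) != offset_arg)
      offset = std::numeric_limits<ha_rows>::max();
    return offset;
  }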
``FLUSH TABLES WITH READ LOCK''
Concurrent execution of 1) a multi-table UPDATE with a
NATURAL/USING join and 2) such a query as "FLUSH TABLES
WITH READ LOCK" or "ALTER TABLE" on the updated table led
to a server crash.
The mysql_multi_update_prepare() function call is optimized
to lock the updated tables only, so it postpones locking
until the last moment; if locking fails, it cleans up the
modified syntax structures and repeats the query analysis.
However, that cleanup procedure was incomplete for NATURAL/USING
join syntax data: 1) some Field_item items pointed into freed
table structures, and 2) the TABLE_LIST::join_columns field
was not reset.
Major change:
the short-lived Field *Natural_join_column::table_field has
been replaced with a long-lived Item *.
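A minimal sketch of the pointer-lifetime change, with stub types
standing in for the server's Field and Item classes (simplified and
illustrative, not the actual declarations):

  struct Field {};       // lives inside a TABLE; freed when the table closes
  struct Item_field {};  // allocated on the statement memory root

  struct Natural_join_column_sketch {
    // Before the fix: a raw Field* into the TABLE object, which dangles
    // once cleanup frees the table structures and analysis is repeated.
    //   Field *table_field;
    // After the fix: an Item* that outlives the table reopen.
    Item_field *table_field = nullptr;
  };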
columns data types
The "SELECT @lastId, @lastId := Id FROM t" query returns
different result sets depending on the type of the Id column
(INT or BIGINT).
Note: this fix doesn't cover the case when a select query
references an user variable and stored function that
updates a value of that variable, in this case a result
is indeterminate.
The server used an incorrect assumption about the constness of
a user variable value as a select list item:
The server caches the number of the last query in which that
variable was changed and compares this number with the current
query number. If these numbers differ, the server assumes
that the variable is not updated in the current query, so
the respective select list item is a constant. However, in some
common cases the server updated the cached query number too late.
The server has been modified to memorize user variable
assignments during the parse phase and to take them into account
in the next (query preparation) phase, independently of the
order of user variable references/assignments in the select
item list.
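A minimal sketch of the old heuristic, with illustrative member
names (nothing here is the server's actual code):

  // A user variable remembers the id of the last query that assigned it.
  struct user_var_entry_sketch {
    unsigned long update_query_id;
  };

  // Old logic: a reference is treated as constant when the variable was
  // last assigned in an *earlier* query. If the assignment inside the
  // current query is registered too late, the reference is wrongly
  // considered constant and may be cached with a stale type/value.
  bool treated_as_const(const user_var_entry_sketch &var,
                        unsigned long current_query_id)
  {
    return var.update_query_id != current_query_id;
  }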
build)
The crash was caused by freeing the internal parser stack during parser
execution.
This occurred only for complex stored procedures, after the parser stack
had been reallocated using my_yyoverflow(), with the following C call stack:
- MYSQLparse()
- any rule calling sp_head::restore_lex()
- lex_end()
- x_free(lex->yacc_yyss), x_free(lex->yacc_yyvs)
The root cause is the implementation of stored procedures, which breaks the
assumption from 4.1 that there is only one LEX structure per parser call.
The solution is to separate the LEX structure into:
- attributes that represent a statement (the current LEX structure),
- attributes that relate to the syntax parser itself (Yacc_state),
so that parsing multiple statements in stored programs can create multiple
LEX structures while leaving the unique Yacc_state unchanged.
Now Yacc_state and the existing Lex_input_stream are aggregated into
Parser_state, a structure that represents the complete state of the
(lexical + syntax) parser.
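A minimal sketch of the aggregation, with stub members (the real classes
live in the server's lex headers; member contents here are illustrative):

  // Scanner state: input buffer, current position, ...
  struct Lex_input_stream { const char *ptr = nullptr; };

  // Bison machinery: the overflow-allocated stacks that used to live in LEX.
  struct Yacc_state { void *yacc_yyss = nullptr; void *yacc_yyvs = nullptr; };

  // One Parser_state per call to the parser. Stored programs may create
  // and destroy many LEX (statement) structures during that single call,
  // but the parser stacks survive in the unique Parser_state.
  struct Parser_state {
    Lex_input_stream m_lip;
    Yacc_state m_yacc;
  };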
enabled)
Before this fix, the lexer and parser would treat the ';' character as a
different token (either ';' or END_OF_INPUT), based on convoluted logic,
which failed in simple cases where a stored procedure is implemented as a
single statement and used in a multi-query.
With this fix:
- the character ';' is always parsed as a ';' token in the lexer,
- parsing of multi-queries is implemented in the parser, in the 'query:' rules,
- the value of thd->client_capabilities, which holds the capabilities
negotiated between the client and the server during bootstrap,
is immutable and no longer arbitrarily modified during parsing (which was
the root cause of the bug).
Mixing aggregate functions and non-grouping columns is not allowed in the
ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't thrown
because the check was insufficient.
To check more thoroughly, the new algorithm employs a list of outer
fields used in a sum function and a SELECT_LEX::full_group_by_flag.
Each non-outer field is checked to find out whether it is aggregated or not,
and the current select is marked accordingly.
All outer fields that are used under an aggregate function are added to the
Item_sum::outer_fields list and later checked by the Item_sum::check_sum_func
function.
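A minimal sketch of the per-select marking, assuming illustrative flag
names (the real bits live on SELECT_LEX::full_group_by_flag; this is not
the server's actual code):

  // Bits recorded on the select while resolving its item list.
  enum full_group_by_bits {
    NON_AGG_FIELD_USED = 1,  // a non-aggregated column appears
    SUM_FUNC_USED      = 2   // an aggregate function appears
  };

  // Under ONLY_FULL_GROUP_BY, a select without a GROUP BY clause must
  // not mix the two: seeing both bits set means an error.
  bool mixing_error(unsigned flags, bool has_group_by)
  {
    return !has_group_by &&
           (flags & NON_AGG_FIELD_USED) && (flags & SUM_FUNC_USED);
  }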
between 5.0 and 5.1.
The problem was that in the patch for Bug#11986 it was decided
to store the original query in UTF8 encoding for the INFORMATION_SCHEMA.
This approach, however, turned out to be quite difficult to implement
properly. The main problem is preserving the same IS output after
dump/restore.
So, the fix is to roll back to the previous functionality, but also
to fix it to support multi-character-set queries properly. The idea
is to generate the INFORMATION_SCHEMA query from the item tree after
parsing the view declaration. The IS query should:
- be completely in UTF8;
- not contain character set introducers.
For more information, see WL#4052.
partitioned table
Trying INSERT DELAYED on a partitioned table that has not been
used right before crashed the server. When a table is used for
a select or update, it is kept open for some time; this period
is what "right before" means here.
Information about the partitioning of a table is stored in the
form of a string in the .frm file. Parsing this string requires a
correctly set up lexical analyzer (lex). The partitioning code
uses a new temporary instance of a lex, but that lex still refers
to the previously active lex. The delayed insert thread, however,
did not initialize its lex.
Added initialization for thd->lex before opening the table in the
delayed thread, and at all other places where it is necessary to
call lex_start() in case the tables are partitioned and the .frm
file needs to be parsed.
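A minimal sketch of the added initialization, with stub types (the real
code is in the delayed-insert handler; names and signatures here are
simplified and illustrative):

  // Stand-ins for the server's THD/LEX; only the lifecycle matters here.
  struct LEX { bool initialized = false; };
  struct THD { LEX lex_storage; LEX *lex = &lex_storage; };

  void lex_start_sketch(THD *thd) { thd->lex->initialized = true; }

  void delayed_thread_open_sketch(THD *thd)
  {
    // Before the fix, the delayed thread opened its table with a stale,
    // uninitialized lex; parsing the partitioning string from the .frm
    // file then used garbage state. The fix initializes the lex first.
    lex_start_sketch(thd);
    // ... open the table; this may parse the partition string ...
  }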
Problem: when creating a partitioned table, during name resolution for
the partition function we searched for column names in all parts of the
CREATE TABLE query. This is superfluous (and sometimes wrong).
Fix: run name resolution for the partition function against
the table we're creating only.
The parser uses ulonglong to store the LIMIT number. This number
is then stored into a variable of type ha_rows. ha_rows is either
4 or 8 bytes, depending on the BIG_TABLES define from config.h,
so an overflow may occur (and LIMIT becomes zero) while storing an
ulonglong value in ha_rows.
Fixed by:
1. Using the maximum possible value for ha_rows on overflow
2. Defining BIG_TABLES for the Windows builds (to match the other platforms)
comments)
This change set is for 5.1 (manually merged)
Before this fix, the server would accept queries that contained comments,
even when the comments were not properly closed with a '*' '/' marker.
For example,
select 1 /* + 2 <EOF>
would be accepted as
select 1 /* + 2 */ <EOF>
and executed as
select 1
With this fix, the server now rejects queries with unclosed comments
as syntax errors.
Both regular comments ('/' '*') and special comments ('/' '*' '!') must be
closed with '*' '/' to be parsed correctly.
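A minimal sketch of the stricter scan, assuming a simple buffer-based
lexer (illustrative, not the server's actual lexer):

  #include <cstddef>

  // Return the index just past the closing "*" "/", or -1 when the
  // comment never closes. Before the fix, hitting end-of-input silently
  // auto-closed the comment; now it is reported as a syntax error.
  long skip_c_comment(const char *buf, size_t len, size_t pos)
  {
    for (; pos + 1 < len; ++pos)
      if (buf[pos] == '*' && buf[pos + 1] == '/')
        return static_cast<long>(pos + 2);
    return -1;  // unclosed comment: reject the query
  }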
Before this patch, the parser would execute:
- Select->expr_list.push_front()
- Select->expr_list.pop()
when parsing expression lists, in the following rules:
- udf_expr_list
- expr_list
- ident_list
This is unnecessary and introduces overhead due to the memory allocations
performed with Select->expr_list.
With this patch, this code has been removed.
The list being parsed is maintained on the parser stack instead.
Also, 'udf_expr_list' has been renamed to 'opt_udf_expr_list', since this
production can be empty.
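A minimal sketch of the idea, using a standard container as a stand-in
for the server's List<Item> (the rule actions in the comments are
illustrative):

  #include <list>

  struct Item {};                      // stand-in for the server's Item
  using Item_list = std::list<Item*>;  // stand-in for List<Item>

  // expr_list: expr                { $$ = start_list($1); }
  //          | expr_list ',' expr  { $$ = append_item($1, $3); }
  // The partially built list travels as a semantic value on the parser
  // stack; nothing is pushed onto (and popped from) the shared SELECT_LEX.
  Item_list *start_list(Item *first)
  {
    Item_list *l = new Item_list;
    l->push_back(first);
    return l;
  }

  Item_list *append_item(Item_list *l, Item *next)
  {
    l->push_back(next);
    return l;
  }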
The bug caused memory corruption for some queries with a top-level OR
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for the KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular, it was calculated incorrectly when the
WHERE condition was an OR formula whose disjuncts were AND formulas
including equalities and other sargable predicates.
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db
The following renames were needed, as the polymorphism used did not work
with the Forte compiler on 64-bit systems:
index_read() -> index_read_map()
index_read_idx() -> index_read_idx_map()
index_read_last() -> index_read_last_map()
(Regression caused by the patch for Bug#22646.)
Problem: when the result type of date_format() was changed from
a binary string to a character string, mixing date_format()
with an ASCII column in CONCAT() stopped working.
Fix:
- added a "repertoire" flag to the DTCollation class,
to mark items that can return only pure ASCII strings;
- allowed character set conversion from pure ASCII to other character sets.
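A minimal sketch of the repertoire idea, with illustrative constants
(the server defines similar repertoire flags; this is not its actual
code):

  // What code points a string value may contain.
  enum repertoire_sketch {
    REPERTOIRE_ASCII    = 1,  // only 0x00..0x7F
    REPERTOIRE_EXTENDED = 2   // may contain anything else
  };

  // A pure-ASCII value converts losslessly into practically any target
  // character set, so such an argument mix in CONCAT() is safe and
  // should not raise an "illegal mix of collations" error.
  bool conversion_is_safe(repertoire_sketch from)
  {
    return from == REPERTOIRE_ASCII;
  }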
When a table was explicitly locked with LOCK TABLES, no associated
tables from any related trigger on the subject table were locked.
As a result, the user could experience unexpected locking
behavior and statement failures similar to "failed: 1100: Table 'xx'
was not locked with LOCK TABLES".
This patch fixes the problem by making sure triggers are
pre-loaded on any statement if the subject table was explicitly
locked with LOCK TABLES.
causes full table lock on InnoDB table.
Also fixes Bug#28502 Triggers that update another InnoDB table
will block on X lock unnecessarily (duplicate).
Code review fixes.
Both bugs' synopses are misleading: the InnoDB table is
not X-locked. The statements, however, cannot proceed concurrently,
but this happens due to lock conflicts for tables used in triggers,
not for the InnoDB table itself.
If a user had an InnoDB table and two triggers, AFTER UPDATE and
AFTER INSERT, competing for different resources (e.g. two distinct
MyISAM tables), then these two triggers would not be able to execute
concurrently. Moreover, INSERTs/UPDATEs of the InnoDB table would
not be able to run concurrently.
The problem had other side-effects (see respective bug reports).
This behavior was a consequence of a shortcoming of the pre-locking
algorithm, which did not distinguish between different DML operations
(e.g. INSERT and DELETE) and pre-locked all the tables
that are used by any trigger defined on the subject table.
The idea of the fix is to extend the pre-locking algorithm to keep track,
for each table, of which DML operations it is used for, and to not
load triggers that are known to never be fired.
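A minimal sketch of the refined pre-locking idea, with illustrative
names (the server keeps a per-table trigger event map; nothing here is
its actual code):

  #include <cstdint>

  // DML events a statement may generate for a given table.
  enum trg_event_bits : uint8_t {
    TRG_INSERT = 1 << 0,
    TRG_UPDATE = 1 << 1,
    TRG_DELETE = 1 << 2
  };

  struct Table_ref_sketch {
    uint8_t trg_event_map;  // filled in while resolving the statement
  };

  // A trigger defined for 'trigger_event' is loaded, and its tables
  // pre-locked, only if the statement can actually fire it; triggers
  // known to never fire no longer drag their tables into the lock set.
  bool must_prelock(const Table_ref_sketch &t, uint8_t trigger_event)
  {
    return (t.trg_event_map & trigger_event) != 0;
  }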