This is a performance bug, related to the parsing of 'OR' and 'AND' boolean
expressions.
Let N be the number of expressions involved in an OR (respectively AND).
When N=1
For example, "select 1" involve only 1 term: there is no OR operator.
In 4.0 and 4.1, parsing expressions not involving OR had no overhead.
In 5.0, parsing adds some overhead, with Select->expr_list.
With this patch, the overhead introduced in 5.0 has been removed,
so that performances for N=1 should be identical to the 4.0 performances,
which are optimal (there is no code executed at all)
The overhead in 5.0 was in fact affecting significantly some operations.
For example, loading 1 Million rows into a table with INSERTs,
for a table that has 100 columns, leads to parsing 100 Millions of
expressions, which means that the overhead related to Select->expr_list
is executed 100 Million times ...
Considering that N=1 is by far the most probable expression,
this case should be optimal.
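To make the removed overhead concrete, here is a schematic of the kind of
per-expression bookkeeping the 5.0 grammar actions performed (a sketch only,
not the literal 5.0 sql_yacc.yy actions):

  /* Schematic: even for "select 1" (N=1), a temporary List<Item> was      */
  /* allocated and pushed onto Select->expr_list, only to be popped again. */
  Select->expr_list.push_front(new List<Item>); /* one allocation per expression */
  /* ... reduce the terms of the expression ... */
  Select->expr_list.pop();                      /* wasted work when there is no OR/AND */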
When N=2
For example, "select a OR b" involves 2 terms in the OR operator.
In 4.0 and 4.1, parsing expressions involving 2 terms created 1 Item_cond_or
node, which is the expected result.
In 5.0, parsing these expressions also produced 1 node, but with some extra
overhead related to Select->expr_list: creating 1 list in Select->expr_list
and another in Item_cond::list is inefficient.
With this patch, the overhead introduced in 5.0 has been removed
so that performance for N=2 should be identical to 4.0 performance.
Note that the memory allocation uses the new (thd->mem_root) syntax
directly.
The cost of "is_cond_or" is estimated to be neglectable: the real problem
of the performance degradation comes from unneeded memory allocations.
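As a sketch, the N=2 action with this patch boils down to building the node
directly on the statement mem_root, with no temporary list involved
(illustrative, not the exact sql_yacc.yy code; $1 and $3 are the two
operands of the OR rule):

  /* Sketch of the OR action for two plain terms. */
  $$= new (thd->mem_root) Item_cond_or($1, $3);  /* one node, no Select->expr_list */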
When N>=3
For example, "select a OR b OR c ...", which involves 3 or more terms.
In 4.0 and 4.1, the parser had no significant cost overhead, but produced
an Item tree which is difficult to evaluate / optimize at runtime.
In 5.0, the parser produces a better Item tree, using the Item_cond
constructor that accepts a list of children directly, but at an extra cost
related to Select->expr_list.
With this patch, the code is implemented to take the best of the two
implementations:
- there is no overhead with Select->expr_list
- the Item tree generated is optimized and flattened.
This is achieved by adding child nodes to the Item tree directly,
with Item_cond::add(), which avoids the need for temporary lists and memory
allocations.
Note that this patch also provides an extra optimization that the previous
code in 5.0 did not: expressions are flattened in the Item tree based on
the shape of the expression already parsed, not on the order in which
grammar rules are reduced.
For example : "(a OR b) OR c", "a OR (b OR c)" would both be represented
with 2 Item_cond_or nodes before this patch, and with 1 node only with this
patch. The logic used is based on the mathematical properties of the OR
operator (it's associative), and produces a simpler tree.
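The flattening logic itself can be sketched as follows (a simplified sketch
of the OR rule's action; is_cond_or() is the helper mentioned above, and the
symmetric case as well as the branch that merges two existing OR nodes are
omitted):

  if (is_cond_or($1))
  {
    /* (a OR b) OR c : reuse the existing node, which now has 3 children, */
    /* instead of nesting a second Item_cond_or on top of it.             */
    ((Item_cond_or*) $1)->add($3);
    $$= $1;
  }
  else
  {
    /* plain a OR b */
    $$= new (thd->mem_root) Item_cond_or($1, $3);
  }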
sql/item_cmpfunc.h:
Improved performance for parsing boolean expressions
sql/sql_yacc.yy:
Improved performance for parsing boolean expressions
mysql-test/r/parser_precedence.result:
Added test cases to cover boolean operator precedence
mysql-test/t/parser_precedence.test:
Added test cases to cover boolean operator precedence
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
mysql-test/r/query_cache.result:
Auto merged
mysql-test/t/query_cache.test:
Auto merged
Although the query cache doesn't support retrieval of statements that
involve column-level access control, it was still possible to cache such
statements, thus wasting memory.
This patch extends the access control check on the target tables to avoid
caching a statement with column level restrictions.
Views are an exception and can be cached, but can only be retrieved by a super-user account.
mysql-test/t/query_cache_with_views.test:
Rename: mysql-test/t/view_query_cache.test -> mysql-test/t/query_cache_with_views.test
mysql-test/r/query_cache_with_views.result:
Rename: mysql-test/r/view_query_cache.result -> mysql-test/r/query_cache_with_views.result
mysql-test/r/query_cache.result:
Modified test case to allow caching of views
mysql-test/t/query_cache.test:
Modified test case to allow caching of views
sql/sql_cache.cc:
Allow caching of views
added SUPER_ACL check for I_S.TRIGGERS
mysql-test/r/information_schema.result:
result fix
mysql-test/r/information_schema_db.result:
result fix
mysql-test/t/information_schema.test:
test case
sql/sql_show.cc:
added SUPER_ACL check for I_S.TRIGGERS
into adventure.(none):/home/thek/Development/cpp/mysql-5.0-runtime
mysql-test/r/query_cache.result:
Auto merged
mysql-test/t/query_cache.test:
Auto merged
Although the query cache doesn't support retrieval of statements that
involve column-level access control, it was still possible to cache such
statements, thus wasting memory.
This patch extends the access control check on the target tables to avoid
caching a statement with column level restrictions.
mysql-test/r/query_cache.result:
Added test
mysql-test/t/query_cache.test:
Added test
sql/sql_cache.cc:
The function check_table_access leaves the artifact
grant.want_privileges= 1 if a statement refers to tables with column-level
privileges. To prevent such a statement from being stored in the query cache,
it is enough to check this flag and set 'safe_to_cache_query' to zero
(see the sketch after this file list).
sql/sql_cache.h:
- Removed the 'static' attribute from class methods
- Added THD parameter to process_and_count_tables
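A sketch of the check described under sql/sql_cache.cc above (schematic, not
the exact server code; the flag name follows that note): while walking the
statement's tables before caching, seeing the column-level-privilege artifact
disables caching of the statement.

  for (TABLE_LIST *tables_used= lex->query_tables;
       tables_used;
       tables_used= tables_used->next_global)
  {
    if (tables_used->grant.want_privileges) /* artifact left by check_table_access */
    {
      lex->safe_to_cache_query= 0;          /* do not store this statement in the cache */
      break;
    }
  }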
HEAP tables can't index BIT fields. Because of this, when grouping by such
fields is needed, they are converted to fields of the LONG type when the
temporary table is created. A side effect of this is that the wrong type is
reported to the client for BIT fields.
Now the JOIN::prepare and create_distinct_group functions create an
additional hidden copy of each BIT field to keep the original field untouched.
The new hidden fields are used for grouping instead.
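A sketch of the hidden-copy idea (illustrative only, not the literal
sql_select.cc change; the ref_pointer_array bookkeeping is left out):

  for (ORDER *ord= group_list; ord; ord= ord->next)
  {
    Item *item= *ord->item;
    if (item->type() == Item::FIELD_ITEM &&
        item->field_type() == MYSQL_TYPE_BIT)
    {
      /* The hidden copy may be turned into a LONG by the temporary table, */
      /* while the original item keeps reporting BIT to the client.        */
      Item_field *copy= new (thd->mem_root) Item_field(thd, (Item_field*) item);
      all_fields.push_front(copy);
      /* ... redirect the group entry to point at the copy ... */
    }
  }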
mysql-test/t/type_bit.test:
Added a test case for the bug#30245: A wrong type of a BIT field is reported when grouped by it.
mysql-test/r/type_bit.result:
Added a test case for the bug#30245: A wrong type of a BIT field is reported when grouped by it.
sql/sql_select.cc:
Bug#30245: A wrong type of a BIT field is reported when grouped by it.
Now the JOIN::prepare and create_distinct_group functions create an
additional hidden copy of each BIT field to keep the original field untouched.
The new hidden fields are used for grouping instead.
The bug caused memory corruption for some queries with top OR level
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular it was calculated incorrectly when the
WHERE condition was an OR formula with disjuncts being AND formulas
including equalities and other sargable predicates.
mysql-test/r/select.result:
Added a test case for bug #30396.
mysql-test/t/select.test:
Added a test case for bug #30396.
sql/item_cmpfunc.h:
Removed max_members from the COND_EQUAL class as it is no longer useful.
sql/sql_base.cc:
Added the max_equal_elems field to the st_select_lex structure.
sql/sql_lex.cc:
Added the max_equal_elems field to the st_select_lex structure.
sql/sql_lex.h:
Added the max_equal_elems field to the st_select_lex structure.
The field contains the maximal number of elements in multiple equalities
built for the query conditions.
sql/sql_select.cc:
Fixed bug #30396.
The bug caused memory corruption for some queries with top OR level
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular it was calculated incorrectly when the
WHERE condition was an OR formula with disjuncts being AND formulas
including equalities and other sargable predicates.
The max_equal_elems field of the st_select_lex structure is now used
to calculate the above-mentioned upper bound. The field contains the
maximal number of elements in multiple equalities built for the query
conditions.
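To illustrate the role of max_equal_elems (purely illustrative sizing, not
the actual expression used in update_ref_and_keys()): the scratch array for
KEY_FIELD / SARGABLE_PARAM entries must be sized so that every predicate can
expand by the largest multiple equality, otherwise OR-of-AND conditions with
equalities can produce more entries than the array holds.

  uint m= max(select_lex->max_equal_elems, 1);         /* largest multiple equality */
  uint upper_bound= (select_lex->cond_count + 1) * m;  /* assumed shape of the bound */
  key_fields= (KEY_FIELD*)
    thd->alloc(sizeof(KEY_FIELD) * upper_bound);       /* now a safe upper bound */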
mysql_ha_open calls mysql_ha_close on the error path (unsupported handler) to
close the (already opened) table before it has been inserted into the tables
hash list (handler_tables_hash), but mysql_ha_close only closes tables which
are on the hash list, causing the table to be left open and locked.
This change moves the table close logic into a separate function that is
always called on the error path of mysql_ha_open or on a normal handler close
(mysql_ha_close).
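A sketch of the new structure (the helper name is an assumption, and the
bodies are schematic rather than the literal patch):

  /* Shared close logic: closes and unlocks the given table, whether or not */
  /* it is already registered in thd->handler_tables_hash.                  */
  static void mysql_ha_close_table(THD *thd, TABLE_LIST *tables);

  /* mysql_ha_open(), error path: the table has been opened but is not yet  */
  /* in the hash, so mysql_ha_close() used to leave it open and locked.     */
  mysql_ha_close_table(thd, tables);

  /* mysql_ha_close(), normal path: look the entry up in                    */
  /* handler_tables_hash, then delegate the actual close to the helper.     */
  mysql_ha_close_table(thd, hash_tables);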
mysql-test/r/handler.result:
Bug#25856 test result
mysql-test/t/handler.test:
Bug#25856 test case
sql/sql_handler.cc:
Move the table close logic into a separate function that is always called on the error path of mysql_ha_open or on a normal handler close
ORDER BY is used
The range analysis module did not correctly signal to the
handler that a range represents a ref (EQ_RANGE flag). This caused
non-range queries like
SELECT ... FROM ... WHERE keypart_1=const, ..., keypart_n=const
ORDER BY ... FOR UPDATE
to wait for a lock unnecessarily if another running transaction used
SELECT ... FOR UPDATE on the same table.
Fixed by setting EQ_RANGE for all range accesses that represent
an equality predicate.
mysql-test/r/innodb_mysql.result:
bug#28570: Test Result
mysql-test/t/innodb_mysql.test:
bug#28570: Test Case
sql/handler.cc:
bug#28570: Updated comment
sql/opt_range.cc:
bug#28570: Removed the criterion that key has to be unique (HA_NOSAME) in
order for the EQ_RANGE flag to be set. It is sufficient that the range
represent a ref access.
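A schematic of the change (illustrative, not the literal opt_range.cc diff;
HA_NOSAME is the real unique-key flag, and "is_equality_range" stands for
"min and max keys are identical over the used key parts", i.e. the range
represents a ref access):

  /* before: EQ_RANGE only for unique indexes */
  if (is_equality_range && (table_key->flags & HA_NOSAME))
    range->flag|= EQ_RANGE;

  /* after: EQ_RANGE for any equality range (ref access) */
  if (is_equality_range)
    range->flag|= EQ_RANGE;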
ChangeSet@1.2575, 2007-08-07 19:16:06+02:00, msvensson@pilot.(none) +2 -0
Bug#26793 mysqld crashes when doing specific query on information_schema
- Drop the newly created user user1@localhost
- Cleanup testcase
mysql-test/r/ndb_bug26793.result:
mysql-test/r/ndb_bug26793.result@1.3, 2007-08-07 19:16:04+02:00, msvensson@pilot.(none) +1 -6
Update test result
mysql-test/t/ndb_bug26793.test:
mysql-test/t/ndb_bug26793.test@1.3, 2007-08-07 19:16:04+02:00, msvensson@pilot.(none) +8 -11
- Remove the drop/restore of anonymous users - there are no such users
by default anymore (if there were, they would probably be in mysql.user)
- Switch back to default connection before cleanup
- Drop user1@localhost as part of cleanup
Write test results to var/log
Add test for "source" and variable expansion
client/mysqltest.c:
Improve error messages
Write .reject file to the location specified by --logdir
mysql-test/mysql-test-run.pl:
Pass logdir to mysqltest, to get test results written to var/log
mysql-test/r/mysqltest.result:
Update test results
mysql-test/t/mysqltest.test:
Add test for "source" and variable expansion
Update test after writing result in var/log
also fix "while" and "connect"
It's now possible to write "if("
client/mysqltest.c:
Don't require a space between, for example, "if" and "(". This should
also fix "while" and "connect"
mysql-test/t/mysqltest.test:
Remove space between if and ( to check it works
into pilot.(none):/data/msvensson/mysql/mysql-5.0-maint
client/mysqltest.c:
Auto merged
mysql-test/t/mysqltest.test:
Auto merged
mysql-test/r/mysqltest.result:
SCCS merged
client/mysqltest.c:
- Remove the extra newline at the beginning of the file produced by
write_file and append_file
- Add check for too many arguments passed to 'check_command_args'
mysql-test/r/mysqltest.result:
Update test result
mysql-test/t/mysqltest.test:
Add test to check that no extra newline is created
into bodhi.(none):/opt/local/work/mysql-5.0-runtime
mysql-test/r/federated.result:
Auto merged
mysql-test/t/federated.test:
Auto merged
sql/item.cc:
Auto merged
ndb/src/ndbapi/NdbDictionaryImpl.cpp:
Twiddle the "replicaCount" and "fragCount" variable when restore data from different endian.
ndb/src/ndbapi/NdbDictionaryImpl.hpp:
Add byte order variable
ndb/tools/restore/Restore.cpp:
Twiddle (byte-swap) blob, datetime and timestamp values when restoring from a different endian.
mysql-test/r/ndb_restore_different_endian_data.result:
Test case result for restoring data from a different endian
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1-0.1.Data:
Test case data
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1-0.2.Data:
Test case data
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1.1.ctl:
Test case data
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1.1.log:
Test case data
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1.2.ctl:
Test case data
mysql-test/std_data/ndb_backup50_data_be/BACKUP-1.2.log:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1-0.1.Data:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1-0.2.Data:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1.1.ctl:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1.1.log:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1.2.ctl:
Test case data
mysql-test/std_data/ndb_backup50_data_le/BACKUP-1.2.log:
Test case data
mysql-test/t/ndb_restore_different_endian_data.test:
Test case for restoring data from a different endian
under terms of bug#28875 for better performance.
The change appeared to require more changes in item_cmpfunc.cc,
which is dangerous in 5.0.
Conversion between a latin1 column and an ascii string constant
stopped working.
mysql-test/r/ctype_recoding.result:
Adding test case.
mysql-test/t/ctype_recoding.test:
Adding test case.