BUILD/Makefile.am:
Add BUILD/compile-bintar to source tarball.
BUILD/SETUP.sh:
Move common code to separate file to enable sharing.
BUILD/compile-bintar:
Add script to build with correct flags and ./configure options for bintar package.
BUILD/util.sh:
Move common code to separate file to enable sharing.
Fixed sporadic test failure for suite/pbxt/t/lock_multi.test
Fixed sporadic test failure for suite/rpl/t/do_grant.test
OpenSolaris 5.11-x86 now compiles (tested with 32-bit)
BUILD/compile-solaris-amd64-debug-forte:
Added execute bit
BUILD/compile-solaris-x86-32:
Added execute bit
BUILD/compile-solaris-x86-32-debug:
Added execute bit
BUILD/compile-solaris-x86-32-debug-forte:
Added execute bit
BUILD/compile-solaris-x86-forte-32:
Added execute bit
extra/libevent/devpoll.c:
Removed compiler warning
extra/libevent/evbuffer.c:
Removed compiler warning
extra/libevent/select.c:
Removed compiler warning
mysql-test/mysql-test-run.pl:
Fixed sporadic test failure for suite/rpl/t/do_grant.test (seen on OpenSolaris)
mysql-test/suite/pbxt/r/lock_multi.result:
Fixed sporadic test failure for suite/pbxt/t/lock_multi.test (seen in buildbot)
This was done by merging the test with main/lock_multi.test
mysql-test/suite/pbxt/t/lock_multi.test:
Fixed sporadic test failure for suite/pbxt/t/lock_multi.test (seen in buildbot)
This was done by merging the test with main/lock_multi.test
mysys/my_sync.c:
Removed compiler warnings
sql/ha_ndbcluster.cc:
Fixed linking error on OpenSolaris when compiling without ndb
Bug #34866 Can't compile on Solaris 9/Sparc with gcc
storage/archive/azlib.h:
Removed compiler warning about redefined symbols
storage/maria/ma_blockrec.c:
Removed compiler warning
storage/maria/ma_loghandler.c:
Removed compiler warning
storage/maria/ma_test3.c:
Removed compiler warning
storage/myisam/mi_test3.c:
Removed compiler warning
storage/pbxt/src/ha_pbxt.cc:
Removed compiler warning
thr_main -> thr_main_pbxt
storage/pbxt/src/restart_xt.cc:
thr_main -> thr_main_pbxt
storage/pbxt/src/thread_xt.cc:
thr_main -> thr_main_pbxt
This was needed as thr_main() is an internal thread function on OpenSolaris
storage/pbxt/src/thread_xt.h:
thr_main -> thr_main_pbxt
storage/xtradb/srv/srv0srv.c:
Use compatibility macro to get the code to work on OpenSolaris
support-files/compiler_warnings.supp:
Ignore compiler warning from yassl
Removed compiler warnings
extra/libevent/epoll.c:
Removed compiler warnings
extra/libevent/evbuffer.c:
Removed compiler warnings
extra/libevent/event.c:
Removed compiler warnings
extra/libevent/select.c:
Removed compiler warnings
extra/libevent/signal.c:
Removed compiler warnings
include/m_ctype.h:
Define CHARSET_INFO, MY_CHARSET_HANDLER, MY_COLLATION_HANDLER, MY_UNICASE_INFO, MY_UNI_CTYPE and MY_UNI_IDX as const structures.
Declare that pointers point to const data
include/m_string.h:
Declare that pointers point to const data
include/my_sys.h:
Redefine variables and function prototypes
include/mysql.h:
Declare charset as const
include/mysql.h.pp:
Declare charset as const
include/mysql/plugin.h:
Declare charset as const
include/mysql/plugin.h.pp:
Declare charset as const
mysys/charset-def.c:
Charsets can't be of type CHARSET_INFO as they are changed when they are initialized.
mysys/charset.c:
Functions that change CHARSET_INFO must use 'struct charset_info_st'
Add temporary variables to avoid having to change all_charsets[] (which is now const);
see the sketch at the end of this file list for an illustration of the pattern.
sql-common/client.c:
Added cast to const
sql/item_cmpfunc.h:
Added cast to avoid compiler error.
sql/sql_class.cc:
Added cast to const
sql/sql_lex.cc:
Added cast to const
storage/maria/ma_ft_boolean_search.c:
Added cast to avoid compiler error.
storage/maria/ma_ft_parser.c:
Added cast to avoid compiler error.
storage/maria/ma_search.c:
Added cast to const
storage/myisam/ft_boolean_search.c:
Added cast to avoid compiler error
storage/myisam/ft_parser.c:
Added cast to avoid compiler error
storage/myisam/mi_search.c:
Added cast to const
storage/pbxt/src/datadic_xt.cc:
Added cast to const
storage/pbxt/src/ha_pbxt.cc:
Added cast to const
Removed compiler warning by changing prototype of XTThreadPtr()
storage/pbxt/src/myxt_xt.h:
Character sets should be const
storage/pbxt/src/xt_defs.h:
Character sets should be const
storage/xtradb/btr/btr0cur.c:
Removed compiler warning
strings/conf_to_src.c:
Added const
Functions that change CHARSET_INFO must use 'struct charset_info_st'
strings/ctype-big5.c:
Made arrays const
strings/ctype-bin.c:
Made arrays const
strings/ctype-cp932.c:
Made arrays const
strings/ctype-czech.c:
Made arrays const
strings/ctype-euc_kr.c:
Made arrays const
strings/ctype-eucjpms.c:
Made arrays const
strings/ctype-extra.c:
Made arrays const
strings/ctype-gb2312.c:
Made arrays const
strings/ctype-gbk.c:
Made arrays const
strings/ctype-latin1.c:
Made arrays const
strings/ctype-mb.c:
Made arrays const
strings/ctype-simple.c:
Made arrays const
strings/ctype-sjis.c:
Made arrays const
strings/ctype-tis620.c:
Made arrays const
strings/ctype-uca.c:
Made arrays const
strings/ctype-ucs2.c:
Made arrays const
strings/ctype-ujis.c:
Made arrays const
strings/ctype-utf8.c:
Made arrays const
strings/ctype-win1250ch.c:
Made arrays const
strings/ctype.c:
Made arrays const
Added cast to const
Functions that change CHARSET_INFO must use 'struct charset_info_st'
strings/int2str.c:
Added cast to const
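The following minimal sketch illustrates the const CHARSET_INFO / 'struct charset_info_st'
split described above for mysys/charset.c and strings/ctype.c. It is an illustration
only, not the actual MariaDB declarations; the struct contents and names are simplified,
and only the pattern (a const alias for readers, the mutable struct tag for the few
functions that initialize charsets) is taken from the change above.

  /* Illustration only: simplified stand-in for the real charset descriptor. */
  #include <cstdio>

  struct charset_info_st
  {
    unsigned    number;
    const char *name;
    bool        inited;
  };

  /* Most code sees charset descriptors only through this const alias. */
  typedef const struct charset_info_st CHARSET_INFO;

  /* Functions that fill in a charset at initialization time must take the
     mutable struct tag, not CHARSET_INFO. */
  static void init_charset(struct charset_info_st *cs, unsigned number,
                           const char *name)
  {
    cs->number= number;
    cs->name=   name;
    cs->inited= true;
  }

  static struct charset_info_st latin1_storage;           /* mutable definition */
  CHARSET_INFO *default_charset_info= &latin1_storage;    /* const view for users */

  int main()
  {
    init_charset(&latin1_storage, 8, "latin1");           /* values are illustrative */
    std::printf("%u %s\n", default_charset_info->number,
                default_charset_info->name);
    return 0;
  }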
For tables with metadata sizes ranging from 251 to 255 the size
of the event data (m_data_size) was being improperly calculated
in the Table_map_log_event constructor. This was due to the fact
that when writing the Table_map_log_event body (in
Table_map_log_event::write_data_body) a call to net_store_length
is made for packing m_field_metadata_size. It happens that
net_store_length uses *one* byte for storing
m_field_metadata_size when it is smaller than 251 but *three*
bytes when it exceeds that value. BUG 42749 had already
pinpointed and fixed this fact, but the fix was incomplete, as
the calculation in the Table_map_log_event constructor considers
255 instead of 251 as the threshold for incrementing m_data_size
by three. Hence, the window for a mismatch between the number of
bytes written and the number of bytes accounted for in the event
length (m_data_size) was left open for m_field_metadata_size
values between 251 and 255.
We fix this by changing the condition in the Table_map_log_event
constructor to match the one in net_store_length, i.e.,
add one byte if m_field_metadata_size < 251 and three bytes if it
exceeds this value.
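A minimal standalone sketch of this accounting follows. It is not the server code,
just a model of the relevant part of net_store_length together with the old and new
constructor conditions, showing that only the 251 threshold keeps the accounted size
in step with the bytes actually written.

  #include <cassert>
  #include <cstddef>
  #include <cstdint>

  /* Bytes a net_store_length-style packing occupies for 'value'
     (only the range relevant to m_field_metadata_size is modeled). */
  static size_t packed_length_bytes(uint64_t value)
  {
    if (value < 251)
      return 1;                       /* single byte                    */
    if (value < 65536)
      return 3;                       /* 0xFC prefix + 2-byte value     */
    return 9;                         /* larger encodings, irrelevant here */
  }

  /* Old constructor accounting: switched at 255 (modeled as > 255). */
  static size_t accounted_bytes_old(uint64_t metadata_size)
  {
    return metadata_size > 255 ? 3 : 1;
  }

  /* Fixed accounting: matches the net_store_length threshold of 251. */
  static size_t accounted_bytes_fixed(uint64_t metadata_size)
  {
    return metadata_size < 251 ? 1 : 3;
  }

  int main()
  {
    for (uint64_t m= 248; m <= 258; m++)
    {
      assert(accounted_bytes_fixed(m) == packed_length_bytes(m));
      if (m >= 251 && m <= 255)                    /* the old mismatch window */
        assert(accounted_bytes_old(m) != packed_length_bytes(m));
    }
    return 0;
  }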
mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result:
Updated result file.
mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test:
Changes to the original test case: added a slave and moved the
file into the rpl suite.
New test case: replicates two tables, one with a metadata size
of 250 and another of 252. This exercises the use of 1 or 3
bytes when packing m_field_metadata_size.
sql/log_event.cc:
Made the m_data_size calculation for the table map log event
match the number of bytes used when packing the
m_field_metadata_size value (according to the net_store_length
function in pack.c).
This bug is the same problem as Bug 49836 for 5.1 versions.
mysql-test/suite/rpl/r/rpl_geometry.result:
Test case for bug 48776
mysql-test/suite/rpl/t/rpl_geometry.test:
Test case for bug 48776
sql/rpl_utility.h:
Add missing case MYSQL_TYPE_GEOMETRY
In statement-based or mixed-mode replication, using DROP TEMPORARY TABLE
to drop multiple tables caused different errors on master and slave
when one or more of these tables did not exist, because when executed
on the slave, IF EXISTS was automatically added to the query to ignore
all ER_BAD_TABLE_ERROR errors.
To fix the problem, do not add IF EXISTS when executing DROP TEMPORARY
TABLE on the slave, and clear the ER_BAD_TABLE_ERROR error after
execution if the query does not expect any errors.
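A toy model of the described slave-side behavior is sketched below. It is illustrative
only (the names and structure are not the server code): the slave executes the DROP as
logged, without rewriting it to IF EXISTS, and a resulting ER_BAD_TABLE_ERROR is cleared
afterwards when the master recorded no expected error.

  #include <iostream>
  #include <set>
  #include <string>
  #include <vector>

  static const int ER_OK= 0;
  static const int ER_BAD_TABLE_ERROR= 1051;      /* real MySQL error number */

  /* Drop each listed temporary table; report ER_BAD_TABLE_ERROR if any
     table was missing, mirroring what the statement itself would do. */
  static int drop_temporary_tables(std::set<std::string> &tables,
                                   const std::vector<std::string> &to_drop)
  {
    int err= ER_OK;
    for (const std::string &name : to_drop)
    {
      if (tables.erase(name) == 0)
        err= ER_BAD_TABLE_ERROR;                  /* some table did not exist */
    }
    return err;
  }

  /* Slave-side application of the logged statement (no IF EXISTS added). */
  static int apply_on_slave(std::set<std::string> &slave_tables,
                            const std::vector<std::string> &to_drop,
                            int expected_error_from_master)
  {
    int err= drop_temporary_tables(slave_tables, to_drop);
    /* The fix: if the master expected no error, a table missing only on
       the slave must not stop replication, so the error is cleared. */
    if (err == ER_BAD_TABLE_ERROR && expected_error_from_master == ER_OK)
      err= ER_OK;
    return err;
  }

  int main()
  {
    std::set<std::string> slave_tables= {"t1"};   /* t2 never reached the slave */
    int err= apply_on_slave(slave_tables, {"t1", "t2"}, ER_OK);
    std::cout << "slave result: " << err << "\n"; /* 0: same outcome as master */
    return 0;
  }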
mysql-test/suite/rpl/r/rpl_drop_temp.result:
Updated for the patch of bug#49137.
mysql-test/suite/rpl/t/rpl_drop_temp.test:
Added the test file to verify whether DROP TEMPORARY TABLE on
multiple tables causes different errors on master and slave
when one or more of these tables do not exist.
sql/log_event.cc:
Added code to handle the above cases, whose handling was
removed from sql_parse.cc.
sql/sql_parse.cc:
Remove the code to issue the 'Unknown table' error
if the temporary table does not exist when dropping
it on the slave. The above cases, described in the comments,
are now handled in log_event.cc.
In statement-based or mixed-mode replication, using DROP TEMPORARY TABLE
to drop multiple tables caused different errors on master and slave
when one or more of these tables did not exist, because when executed
on the slave, IF EXISTS was automatically added to the query to ignore
all ER_BAD_TABLE_ERROR errors.
To fix the problem, do not add IF EXISTS when executing DROP TEMPORARY
TABLE on the slave, and clear the ER_BAD_TABLE_ERROR error after
execution if the query does not expect any errors.
mysql-test/r/rpl_drop_temp.result:
Updated for the patch of bug#49137.
mysql-test/t/rpl_drop_temp.test:
Added the test file to verify whether DROP TEMPORARY TABLE on
multiple tables causes different errors on master and slave
when one or more of these tables do not exist.
sql/log_event.cc:
Added code to handle the above cases, whose handling was
removed from sql_parse.cc.
sql/sql_parse.cc:
Remove the code to issue the 'Unknown table' error
if the temporary table does not exist when dropping
it on the slave. The above cases, described in the comments,
are now handled in log_event.cc.
- The reason the test failed was competition between 3+ QEPs with identical
costs. Before, two plans were competing, and that was addressed by using
--sorted_result on the EXPLAIN output because they were different only in
join order.
Now we've got a third plan which differs in "Using where", and that approach
no longer works.
- This patch fixes it by removing 'Using where' from the EXPLAIN output. Test coverage
is somewhat reduced but probably still OK, as PBXT and nested outer join processing
have no interaction and we don't expect any bugs here.
Simplify testing of needed character sets
Remove ndb from --with-plugins=max build
mysqlbug now sends email to maria-developers@lists.launchpad.net
client/mysqltest.cc:
SKIP now expands variables (for better error messages)
mysql-test/include/have_big5.inc:
Simplify by using have_collation.inc
mysql-test/include/have_collation.inc:
Test if '$collation' is supported
mysql-test/include/have_cp1250_ch.inc:
Simplify by using have_collation.inc
mysql-test/include/have_cp1251.inc:
Simplify by using have_collation.inc
mysql-test/include/have_cp866.inc:
Simplify by using have_collation.inc
mysql-test/include/have_cp932.inc:
Simplify by using have_collation.inc
mysql-test/include/have_eucjpms.inc:
Simplify by using have_collation.inc
mysql-test/include/have_euckr.inc:
Simplify by using have_collation.inc
mysql-test/include/have_gb2312.inc:
Simplify by using have_collation.inc
mysql-test/include/have_gbk.inc:
Simplify by using have_collation.inc
mysql-test/include/have_koi8r.inc:
Simplify by using have_collation.inc
mysql-test/include/have_latin2_ch.inc:
Simplify by using have_collation.inc
mysql-test/include/have_sjis.inc:
Simplify by using have_collation.inc
mysql-test/include/have_tis620.inc:
Simplify by using have_collation.inc
mysql-test/include/have_ucs2.inc:
Simplify by using have_collation.inc
mysql-test/include/have_ujis.inc:
Simplify by using have_collation.inc
mysql-test/include/have_utf8.inc:
Simplify by using have_collation.inc
mysql-test/r/create-uca.result:
Create tests that use unicode
mysql-test/r/create.result:
Move test with unicode to create-uca.test
mysql-test/r/have_big5.require:
Not needed anymore
mysql-test/r/have_cp1250_ch.require:
Not needed anymore
mysql-test/r/have_cp1251.require:
Not needed anymore
mysql-test/r/have_cp866.require:
Not needed anymore
mysql-test/r/have_cp932.require:
Not needed anymore
mysql-test/r/have_eucjpms.require:
Not needed anymore
mysql-test/r/have_euckr.require:
Not needed anymore
mysql-test/r/have_gb2312.require:
Not needed anymore
mysql-test/r/have_gbk.require:
Not needed anymore
mysql-test/r/have_koi8r.require:
Not needed anymore
mysql-test/r/have_latin2_ch.require:
Not needed anymore
mysql-test/r/have_sjis.require:
Not needed anymore
mysql-test/r/have_tis620.require:
Not needed anymore
mysql-test/r/have_ucs2.require:
Not needed anymore
mysql-test/r/have_ujis.require:
Not needed anymore
mysql-test/r/have_utf8.require:
Not needed anymore
mysql-test/r/innodb.result:
Move tests that depend on unicode to innodb_utf8.test
mysql-test/r/innodb_utf8.result:
Test moved from innodb.test
mysql-test/suite/rpl/t/rpl_ignore_table.test:
Test for required collations
mysql-test/t/create-uca.test:
Create tests that use unicode
mysql-test/t/create.test:
Move test with unicode to create-uca.test
mysql-test/t/ctype_utf8.test:
Tests that require unicode
mysql-test/t/ddl_i18n_koi8r.test:
Test for required collations
mysql-test/t/ddl_i18n_utf8.test:
Test for required collations
mysql-test/t/fulltext.test:
Test for required collations
mysql-test/t/fulltext2.test:
Test for required collations
mysql-test/t/innodb.test:
Move tests that depend on unicode to innodb_utf8.test
mysql-test/t/innodb_utf8.test:
Tests that use unicode
mysql-test/t/query_cache_ps_no_prot.test:
Test for required collations
mysql-test/t/query_cache_ps_ps_prot.test:
Test for required collations
scripts/mysqlbug.sh:
Send emails to maria-developers@lists.launchpad.net
storage/ndb/plug.in:
Don't include ndb in 'max' builds
subselect_single_select_engine::exec()
When a subquery doesn't need to be evaluated because
it returns only aggregate functions, and these aggregates
can be calculated from the metadata about the table, it
was not updating all the relevant members of the JOIN
structure to reflect that this is a constant query.
This caused problems for the enclosing subquery
('<> SOME' in the test case above) when trying to read some
data about the tables.
Fixed by setting const_tables to the number of tables
when the SELECT is optimized away.
REORGANIZE PARTITION
There were several problems which led to this,
all related to bad error handling.
1) There were several bugs preventing the ddl-log from being used
for cleaning up created files on error.
2) The error handling after copying the partition rows did not close
and unlock the tables, resulting in deletion of partitions
which were in use, which led InnoDB to put the partitions to be
dropped in a background queue.
sql/ha_partition.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Better error handling: if a partition has been created/opened/locked,
then make sure it is unlocked and closed before returning an error.
Deletion of the newly created partition is handled by the ddl-log.
sql/sql_parse.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Fixed a bug found when experimenting: thd could really be NULL here,
as mentioned in the function header.
sql/sql_partition.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Used the correct .frm shadow name to put into the ddl-log.
Really use the ddl-log to handle errors.
sql/sql_table.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Fixes to the ddl-log when used for error recovery (no crash).
When executing an entry from memory (not read from disk),
name_len was not set correctly.
error in the query.
Fixes a leak after materializing a GROUP BY subquery into a
temp table when the subquery has a blob column in the SELECT
list.
Fixed by correctly destroying the temporary buffers after doing
the conversion.
The problem was in calculating the range of partitions for
pruning.
The solution was to get the calculation correct. I also simplified
it a bit for easier understanding.
mysql-test/r/partition_pruning.result:
Bug#49742: Partition Pruning not working correctly for RANGE
Added results.
mysql-test/t/partition_pruning.test:
Bug#49742: Partition Pruning not working correctly for RANGE
Added tests to prevent regressions.
sql/sql_partition.cc:
Bug#49742: Partition Pruning not working correctly for RANGE
Simplified calculation for partition id for ranges.
Easier to get right and understand.
Added comments.
Several problems fixed:
1. Non-constant expressions in UNION ... ORDER BY were not correctly cleaned up
in st_select_lex_unit::cleanup(), causing crashes in EXPLAIN EXTENDED because
fields referenced by these expressions pointed to the already freed temporary table
used to calculate the UNION.
Fixed by correctly cleaning up expressions of any depth.
2. Subqueries in the ORDER BY part of UNION ... ORDER BY ... caused a crash in
EXPLAIN EXTENDED because of a transformation attempt made during EXPLAIN EXTENDED
execution. Fixed by not doing the transformation when in EXPLAIN.
3. Fulltext functions caused a crash when used in the ORDER BY part of an un-parenthesized
UNION that gets "promoted" to be valid for the whole union, e.g.
SELECT * FROM t1 UNION SELECT * FROM t2 ORDER BY MATCH (a) AGAINST ('abc' IN BOOLEAN MODE).
This is a case that demonstrates a more general problem of parts of the query being
moved to another level. When doing such a transformation late in the optimization run,
when most of the flags about the contents of the query are already aggregated, it is not
possible to "split" the flags so that they correctly reflect the new queries after the transformation.
Specifically, ST_SELECT_LEX::ftfunc_list holds all the fulltext functions for all the
parts of the second SELECT in the UNION, and we don't know what part of that is in the ORDER BY
that we're to move to the UNION level and what part belongs to the other parts of the second SELECT.
Fixed by throwing an error when such statements are about to be processed, by adding a check
for the presence of MATCH() inside the ORDER BY clause that's going to get promoted to the UNION.
To work around this new limitation one must parenthesize the UNION SELECTs and provide a real
global ORDER BY for the UNION outside of the parentheses.