Link view/derived table fields to a real table so that we can check whether the table record has been turned into a null row.
The Item_direct_view_ref wrapper now checks whether the underlying table has been turned into a null row.
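A minimal standalone sketch of the idea (the types below are simplified stand-ins, not the server's actual Item_direct_view_ref or TABLE definitions): the wrapper holds a link to the real table and re-checks its null-row flag on every access, instead of reporting a value computed before an outer join NULL-complemented the row.

```cpp
#include <iostream>

// Simplified stand-ins for the server's TABLE and Item_direct_view_ref.
struct Table {
  bool null_row = false;  // set when an outer join NULL-complements the row
  long value = 0;         // underlying field value for the current row
};

struct ViewFieldRef {
  const Table *table;     // link to the real table behind the view field
  explicit ViewFieldRef(const Table *t) : table(t) {}

  // Re-checked on every access, so a record turned to a null row is NULL.
  bool is_null() const { return table->null_row; }
  long val() const { return is_null() ? 0 : table->value; }
};

int main() {
  Table t{false, 42};
  ViewFieldRef ref(&t);
  std::cout << ref.val() << '\n';                        // 42
  t.null_row = true;  // outer join found no match for this row
  std::cout << std::boolalpha << ref.is_null() << '\n';  // true
}
```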
Autointersections of an object were treated as nodes, which produced a wrong result.
per-file comments:
mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
Test result updated.
mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
Test case added.
sql/item.cc
Small fix to make compilers happy.
sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
Skip intersection points when calculating the distance.
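A toy illustration of the fix (a simplified point model, not the server's geometry machinery; the auto_intersection flag is an assumption made for the sketch): points that merely mark where an object crosses itself are skipped while searching for the minimal distance.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt {
  double x, y;
  bool auto_intersection;  // true: the point only marks a self-crossing
};

// Minimal distance between the real nodes of two objects; points created
// by autointersections are not nodes and must not shrink the distance.
static double min_distance(const std::vector<Pt> &a, const std::vector<Pt> &b) {
  double best = INFINITY;
  for (const Pt &p : a) {
    if (p.auto_intersection) continue;  // skip: not a real node
    for (const Pt &q : b) {
      if (q.auto_intersection) continue;
      best = std::min(best, std::hypot(p.x - q.x, p.y - q.y));
    }
  }
  return best;
}

int main() {
  std::vector<Pt> a = {{0, 0, false}, {1, 1, true}};  // (1,1): self-crossing
  std::vector<Pt> b = {{2, 2, false}};
  std::printf("%f\n", min_distance(a, b));  // measured from (0,0), not (1,1)
}
```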
When we append data to the binlog file, we use fdatasync() to ensure
the data gets to disk so that crash recovery can work.
Unfortunately there seems to be a bug in ext3/ext4 on Linux, so that
fdatasync() does not correctly sync all data when the size of a file
is increased. This causes crash recovery to not work correctly (it
loses transactions from the binlog).
As a work-around, use fsync() for the binlog, not fdatasync(). Since
we are increasing the file size, (correct) fdatasync() will most
likely not be faster than fsync() on any file system, and fsync()
does work correctly on ext3/ext4. This avoids the need to try to
detect if we are running on buggy ext3/ext4.
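In code the workaround is a one-call swap on the binlog append path; a sketch with plain POSIX calls (the function name and signature are illustrative):

```cpp
#include <unistd.h>

// Append a chunk to the binlog and force it to disk. fdatasync() may skip
// metadata, and on the buggy ext3/ext4 kernels it failed to sync the new
// size of a growing file; fsync() flushes data and metadata, works
// correctly there, and costs about the same when the file is being
// extended anyway.
static bool binlog_append(int fd, const void *buf, size_t len) {
  if (write(fd, buf, len) != static_cast<ssize_t>(len))
    return false;           // short write: let the caller handle the error
  return fsync(fd) == 0;    // was: fdatasync(fd)
}
```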
1. Field_newdate::get_date should refuse to return a date with zeros when
TIME_NO_ZERO_IN_DATE is set, not when TIME_FUZZY_DATE is unset
2. Item_func_to_days and Item_date_add_interval can only work with valid dates,
no zeros allowed.
Fix Item_func_add_time::get_date() to generate valid dates.
Move the validity check inside get_date_from_daynr()
instead of relying on callers
(5 that had it, and 2 that did not, but should've)
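A sketch of that refactoring (illustrative code; C++20 std::chrono stands in for the server's own calendar routines): the range check lives inside the conversion helper itself, so no caller can forget it.

```cpp
#include <chrono>
#include <cstdio>

// The helper rejects out-of-range day numbers itself instead of relying on
// each caller to check first. Day numbers count from year 0 as in the
// server; 719528 is the day number of 1970-01-01 (the chrono epoch).
static bool get_date_from_daynr(long daynr, std::chrono::year_month_day *out) {
  using namespace std::chrono;
  if (daynr <= 0 || daynr > 3652424)  // outside the supported 0000..9999 range
    return true;                      // error convention: true means failure
  *out = year_month_day(sys_days(days(daynr - 719528)));
  return false;
}

int main() {
  std::chrono::year_month_day d;
  if (!get_date_from_daynr(734869, &d))  // a valid day number (in 2012)
    std::printf("%d-%u-%u\n", int(d.year()),
                unsigned(d.month()), unsigned(d.day()));
}
```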
The problem was that the was_null and null_value variables were reset on each re-execution of the IN subquery, while the engine was rerun only for non-constant subqueries.
Fixed checking of constants in the Item_equal sort.
Fixed constant reporting in Item_subselect.
Two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends open
(which triggers an assert).
The fix backports from MWL#182 (Explain running statements) the logic that
saves the original JOIN_TAB array of a query plan after optimization. This
array is later used during EXPLAIN to iterate over the original JOIN plan
nodes in the cases where this plan could have been changed by early subquery
execution during the optimization phase of the outer query.
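A toy model of that logic (illustrative names, not the server's JOIN/JOIN_TAB definitions): the plan nodes are snapshotted right after optimization, and EXPLAIN walks the snapshot rather than the live array.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct JoinTab { std::string table_name; };  // stand-in for JOIN_TAB

struct Join {
  std::vector<JoinTab> join_tab;        // live plan: may be rewritten
  std::vector<JoinTab> saved_join_tab;  // frozen copy kept for EXPLAIN

  void save_query_plan() { saved_join_tab = join_tab; }

  void explain() const {  // iterates the original plan nodes
    for (const JoinTab &t : saved_join_tab)
      std::printf("-> %s\n", t.table_name.c_str());
  }
};

int main() {
  Join j;
  j.join_tab = {{"t1"}, {"t2"}};
  j.save_query_plan();  // snapshot taken right after optimization
  j.join_tab.clear();   // early subquery execution changed the live plan
  j.explain();          // still prints the original t1, t2 nodes
}
```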
The crash happened when combining the query cache, prepared statements, and a read-only cursor.
sql/sql_cache.cc:
Fixed an unlikely error when the query cache size is adjusted in the middle of an operation.
sql/sql_cursor.cc:
Disable the query cache when using cursors (see the sketch after these per-file notes). This fixes lp:1039277.
tests/mysql_client_test.c:
Test case for lp:1039277
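The shape of the cursor fix as a toy model (the names are illustrative, not the server's actual members): a read-only cursor hands rows out incrementally, so the statement is marked as non-cacheable the moment the cursor is opened.

```cpp
#include <cstdio>

// Illustrative stand-in for the THD / query cache interaction.
struct THD {
  bool safe_to_cache_query = true;
};

static void materialized_cursor_open(THD *thd) {
  // lp:1039277: a result fetched through a cursor must never be cached.
  thd->safe_to_cache_query = false;
  // ... materialize the result set into a temporary table ...
}

static void query_cache_store(const THD *thd, const char *query) {
  if (!thd->safe_to_cache_query)
    return;  // cursor statements are skipped entirely
  std::printf("cached: %s\n", query);
}

int main() {
  THD thd;
  materialized_cursor_open(&thd);
  query_cache_store(&thd, "SELECT 1");  // prints nothing: not cached
}
```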
sql/handler.cc:
SHOW INNODB STATUS sometimes returns 0 even if it has generated an error.
This code is here to catch it until InnoDB is one day fixed; a sketch of the guard follows these notes.
storage/innobase/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
storage/xtradb/handler/ha_innodb.cc:
Catch at least one of the possible errors from SHOW INNODB STATUS to provide a correct return code.
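The guard itself is small; a sketch of the pattern (illustrative signature): if the engine claims success but an error has already been raised for the statement, the handler layer corrects the return code.

```cpp
// If the engine returned 0 but has already reported an error for this
// statement, force a non-zero code so the client sees the failure.
static int fix_show_status_result(int engine_result, bool error_reported) {
  if (engine_result == 0 && error_reported)
    return 1;  // catch the lost error until InnoDB is fixed
  return engine_result;
}
```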
support-files/my-huge.cnf.sh:
Fixed typo
Fixed an error in the test that caused subsequent tests to fail.
extra/yassl/taocrypt/src/dsa.cpp:
Fixed compiler warning by adding cast
mysql-test/suite/rpl/t/rpl_start_slave_deadlock_sys_vars.test:
We have to test for have_debug_sync first, so that the master is not started incorrectly.
plugin/auth_pam/auth_pam.c:
Fixed compiler warning
sql/sys_vars.h:
Fixed compiler warning (Sys_var_max_user_conn is now signed)
support-files/compiler_warnings.supp:
Don't give warnings for auth_pam.c (tried to fix it by changing the code, but could not find an easy way to do that on Solaris).
mysql-test/suite/maria/lock.result:
Added test case
mysql-test/suite/maria/lock.test:
Added test case
sql/sql_table.cc:
One can't call HA_EXTRA_FORCE_REOPEN on something that may be opened twice.
It's safe to remove the call in this case as we will call HA_EXTRA_PREPARE_FOR_DROP for the table anyway.
(One nice side effect is that DROP is a bit faster, as we no longer flush the cache to disk before the drop.)
mysql-test/r/partition.result:
Added test case
mysql-test/t/partition.test:
Added test case
sql/sql_partition.cc:
Do mysql_trans_prepare_alter_copy_data() after all original tables are locked.
(We don't want to disable transactions for the original tables, which may still be in the table cache.)
sql/sql_table.cc:
Fixed two incorrect DBUG_ENTER calls.
sql/item_subselect.cc:
Added purecov info
sql/sql_select.cc:
Added cast
storage/innobase/handler/ha_innodb.cc:
Added cast
storage/xtradb/btr/btr0btr.c:
Added buf_block_get_frame_fast() to avoid compiler warning
storage/xtradb/handler/ha_innodb.cc:
Added cast
storage/xtradb/include/buf0buf.h:
InnoDB has buf_block_get_frame(block) defined as (block)->frame.
I didn't want to make a big change that might break XtraDB, as it may use buf_block_get_frame() differently, so I made this quick hack to patch one compiler warning.
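A standalone sketch of the hack (simplified; XtraDB's real buf_block_get_frame() additionally validates the block before dereferencing): the _fast variant is just the plain field access that InnoDB's own definition expands to, used at the call site where the checking version triggered the warning.

```cpp
struct buf_block_t {
  unsigned char *frame;  // the page frame this block points to
};

// InnoDB defines buf_block_get_frame(block) as ((block)->frame); XtraDB's
// variant adds checking. The _fast version is the unchecked access, for
// the one call site where the checking macro caused a compiler warning.
#define buf_block_get_frame_fast(block) ((block)->frame)

unsigned char *page_of(buf_block_t *block) {
  return buf_block_get_frame_fast(block);
}
```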
Starting the SQL thread might deadlock with reading the values of the
replication filtering options.
The deadlock is due to a lock order violation when the variables are
read or set. For example, reading replicate_ignore_table first
acquires LOCK_global_system_variables in sys_var::value_ptr and later
acquires LOCK_active_mi in Sys_var_rpl_filter::global_value_ptr. This
violates the order established when starting a SQL thread, where
LOCK_active_mi is acquired before start_slave, and ends up creating a
thread (handle_slave_sql) that allocates a THD handle whose
constructor acquires LOCK_global_system_variables in THD::init.
The solution is to unlock LOCK_global_system_variables before the
replication filtering options are set or read. This way the lock
order is preserved and the data being read/set is still protected
given that it acquires LOCK_active_mi.
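A toy model of the corrected locking (std::mutex stands in for the server's mysql_mutex_t): the global order, LOCK_active_mi before LOCK_global_system_variables, is preserved by dropping the latter first and reacquiring it afterwards.

```cpp
#include <mutex>

std::mutex LOCK_global_system_variables;
std::mutex LOCK_active_mi;

// Called with LOCK_global_system_variables held (as in sys_var::value_ptr).
// Release it before taking LOCK_active_mi so the established order is never
// inverted; the value is still protected while LOCK_active_mi is held.
void rpl_filter_global_value_ptr() {
  LOCK_global_system_variables.unlock();
  {
    std::lock_guard<std::mutex> guard(LOCK_active_mi);
    // ... read or set the replication filter value here ...
  }
  LOCK_global_system_variables.lock();  // reacquire before returning
}

int main() {
  LOCK_global_system_variables.lock();  // as the caller would hold it
  rpl_filter_global_value_ptr();
  LOCK_global_system_variables.unlock();
}
```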
The bug could cause a crash when the server executed a query with
ORDER BY and sort_buffer_size was set to a small enough value.
It happened because the small sort buffer did not leave room to
allocate all the merge buffers in it.
Made sure that the allocated sort buffer would be big enough
to contain all possible merge buffers.
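The fix amounts to enforcing a lower bound when the sort buffer is allocated; a sketch of the arithmetic (names and layout simplified from the real sort code): with a k-way merge, each run being merged needs its own slice of the buffer.

```cpp
#include <algorithm>
#include <cstddef>

// Each of the merge_ways runs being merged needs its own merge buffer
// inside the sort buffer; clamp the allocation up to the minimum that can
// hold them all (the real code derives the sizes from the sort keys).
static size_t sort_buffer_allocation(size_t requested_size,
                                     size_t merge_ways,
                                     size_t one_merge_buffer) {
  size_t min_needed = merge_ways * one_merge_buffer;
  return std::max(requested_size, min_needed);
}
```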