Its root cause is a difference between the "readline" and "libedit" header-file
definitions of "rl_completion_entry_function", where the "libedit" one is wrong anyway:
This variable is used as a pointer to a function returning "char *",
but "libedit" declares it as returning "int" and then adds casts on usage.
Change it to "CPFunction *" and get rid of the casts.
Two problems here:
Problem 1:
While constructing the join columns list, the optimizer proceeds as follows:
1. Sets the join_using_fields/natural_join members of the right JOIN
operand.
2. Makes a "table reference" (TABLE_LIST) to parent the two tables.
3. Assigns the join_using_fields/is_natural_join of the wrapper table
using join_using_fields/natural_join of the rightmost table
4. Sets join_using_fields to NULL for the right JOIN operand.
5. Passes the parent table up to the same procedure on the upper
level.
Step 1 overrides the join_using_fields that are set for a nested
join wrapping table in step 4.
Fixed by introducing a designated variable, SELECT_LEX::prev_join_using,
to pass the data from step 1 to step 4 without destroying the wrapping
table's data.
Problem 2:
The optimizer checks for ambiguous columns while transforming
NATURAL JOIN/JOIN USING to JOIN ON. While doing so it made no
distinction between columns that are used in the generated join
condition (where ambiguity can be checked) and the other columns
(where ambiguity can be checked only when resolving references
coming from outside the JOIN construct itself).
Fixed by allowing the non-USING columns to be present in multiple
copies on both sides of the join and moving the ambiguity check
to the place where unqualified references to the join columns are
resolved (find_field_in_natural_join()).
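A toy illustration of the Problem 2 rule, using made-up structures that only
model the idea (this is not the server's find_field_in_natural_join()): for
t1 JOIN t2 USING (a), a column b present in both tables is not an error by
itself; it becomes "ambiguous" only when an unqualified reference to it has
to be resolved against the join result.

    #include <cstdio>
    #include <map>
    #include <set>
    #include <stdexcept>
    #include <string>

    struct JoinColumns {
      std::set<std::string> using_cols;                 // coalesced USING columns
      std::multimap<std::string, std::string> others;   // column name -> owning table

      // Resolving an unqualified reference: ambiguity is detected here,
      // not while the join condition is generated.
      std::string resolve(const std::string &name) const {
        if (using_cols.count(name))
          return "coalesced " + name;
        const auto n = others.count(name);
        if (n == 0) throw std::runtime_error("unknown column " + name);
        if (n > 1)  throw std::runtime_error("ambiguous column " + name);
        return others.find(name)->second + "." + name;
      }
    };

    int main() {
      JoinColumns jc;                       // models t1 JOIN t2 USING (a),
      jc.using_cols.insert("a");            // where b exists in both tables
      jc.others.insert({"b", "t1"});
      jc.others.insert({"b", "t2"});
      jc.others.insert({"c", "t2"});
      std::printf("%s\n", jc.resolve("c").c_str());      // fine: t2.c
      try { jc.resolve("b"); }                           // only now: ambiguous
      catch (const std::exception &e) { std::printf("%s\n", e.what()); }
      return 0;
    }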
The optimizer takes away columns from GROUP BY/DISTINCT if they constitute
all the parts of a unique index.
However, if some of the columns can contain NULLs this cannot be done
(because a UNIQUE index can have multiple rows with NULL values).
Fixed by not using UNIQUE indexes with nullable columns to remove
grouping columns from GROUP BY/DISTINCT.
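A toy sketch of why the shortcut is invalid for nullable key parts
(illustrative C++ only, with std::optional standing in for a nullable
indexed column): a UNIQUE index may store many rows whose key is NULL,
so "unique key" does not imply "every row forms its own group".

    #include <cstdio>
    #include <optional>
    #include <set>
    #include <vector>

    int main() {
      using Key = std::optional<int>;      // nullable column covered by a UNIQUE index
      // All four rows are legal: the index treats NULLs as distinct entries.
      std::vector<Key> rows = {1, 2, std::nullopt, std::nullopt};

      // GROUP BY/DISTINCT on the column collapses the NULL rows into one
      // group, so there are 3 groups for 4 rows, not one group per row.
      std::set<int> groups;
      bool null_group = false;
      for (const Key &k : rows) {
        if (k) groups.insert(*k); else null_group = true;
      }
      std::printf("rows=%zu groups=%zu\n",
                  rows.size(), groups.size() + (null_group ? 1u : 0u));
      return 0;
    }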
If inserting a GPL header, include a complete one
Makefile.am, mysql.dsw, mysql.sln:
Removed C version of mysql-test-run
mysql.spec.sh:
Specify GPL license only in GPL sources
.del-my_manage.h:
Delete: mysql-test/my_manage.h
mysql.spec.sh:
Added GPL header
.del-mysql_test_run_new.c:
Delete: mysql-test/mysql_test_run_new.c
.del-mysql_test_run_new.dsp:
Delete: VC++Files/mysql-test/mysql_test_run_new.dsp
.del-my_create_tables.c:
Delete: mysql-test/my_create_tables.c
.del-mysql_test_run_new_ia64.dsp:
Delete: VC++Files/mysql-test/mysql_test_run_new_ia64.dsp
msql2mysql.sh:
Use up-to-date GPL header
.del-mysql_test_run_new.vcproj:
Delete: VC++Files/mysql-test/mysql_test_run_new.vcproj
.del-my_manage.c:
Delete: mysql-test/my_manage.c
Made the function opt_sum_query() return HA_ERR_KEY_NOT_FOUND when
no matches were found (instead of the -1 it returned prior to this patch).
This change allows us to avoid possible conflicts with return values
from user-defined handler methods, which may also return -1.
No particular test cases are provided with this fix.
1) Move AbortOption from NdbTransaction to NdbOperation.
2) Let each operation have a "default" abort option dependent on the
   operation type:
   - read - AO_IgnoreError
   - dml - AbortOnError
   - scan take over - AbortOnError
3) Changed the default value of execute() from AbortOnError to
   DefaultAbortOption, which does not change the operations' abort option.
   Passing any other value to execute(AO) is equivalent to setting AO on
   each operation before calling execute().
4) execute() returns -1 _only_ if the transaction has been aborted;
   otherwise you need to check each operation for an error code (see the
   sketch below).
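A minimal sketch of the resulting error-handling pattern (the enum values and
the execute() signature follow the description above and should be treated as
assumptions rather than documentation; run_batch is a made-up helper):

    #include <cstdio>
    #include <NdbApi.hpp>

    // Execute a batch in which individual reads may miss rows without
    // aborting the whole transaction.
    int run_batch(NdbTransaction *trans, NdbOperation **ops, int nops)
    {
      // Reads default to AO_IgnoreError, DML to AbortOnError; passing
      // DefaultAbortOption keeps whatever each operation already has.
      if (trans->execute(NdbTransaction::NoCommit,
                         NdbOperation::DefaultAbortOption) == -1)
        return -1;                              // the transaction was aborted

      // execute() succeeded: per-operation errors must be checked explicitly.
      for (int i = 0; i < nops; i++)
      {
        const NdbError &err = ops[i]->getNdbError();
        if (err.code != 0)
          std::fprintf(stderr, "op %d: error %d (%s)\n", i, err.code, err.message);
      }
      return 0;
    }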
Checking for NULL before calling the val_xxx()
methods only catches arguments that are
known to be NULL at compile time.
The arguments that may or may not contain
NULLs (e.g. function calls and possibly others)
are not checked at all.
Fixed by first calling the val_xxx() method and
then checking for null in SEC_TO_TIME().
In addition, QUARTER() was not returning 0 (as all the
val_int() functions do when processing a NULL value).
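A toy model of the evaluate-then-check idiom described above (illustrative
classes only, not the server's Item hierarchy): the argument is evaluated
first, and only afterwards is its null flag consulted, returning 0 for NULL
the way the other val_int() functions do.

    #include <cstdio>

    // An expression whose nullness is only known after evaluation,
    // e.g. a function call.
    struct Expr {
      bool null_value = false;             // set as a side effect of val_int()
      virtual long long val_int() = 0;
      virtual ~Expr() {}
    };

    struct MaybeNull : Expr {
      long long v; bool is_null;
      MaybeNull(long long v, bool n) : v(v), is_null(n) {}
      long long val_int() override { null_value = is_null; return is_null ? 0 : v; }
    };

    // A QUARTER()-like wrapper over a month value: evaluate first, then test.
    long long quarter(Expr &arg)
    {
      long long month = arg.val_int();     // NULL may be discovered only here
      if (arg.null_value)
        return 0;                          // NULL -> 0, like other val_int()s
      return (month + 2) / 3;
    }

    int main()
    {
      MaybeNull m(7, false), n(0, true);
      std::printf("%lld %lld\n", quarter(m), quarter(n));   // prints: 3 0
      return 0;
    }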
The function that checks whether we can use keys for aggregates,
find_key_for_maxmin(), assumes that keys disabled by ALTER TABLE
... DISABLE KEYS are not in the set table->keys_in_use_for_query.
I.e., if a key is in this set, the optimizer assumes it is free to
use it.
The bug is that keys disabled with ALTER TABLE ... DISABLE KEYS still
appear in table->keys_in_use_for_query when the TABLE object has been
initialized with setup_tables(). Before setup_tables() is called, however,
keys that are disabled in the aforementioned way are not included in
TABLE::keys_in_use_for_query.
The provided patch changes the code that updates keys_in_use_for_query so
that it assumes keys_in_use_for_query already takes into account all
disabled keys, and generally all keys that should be used by the query.
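A toy sketch of the invariant the patch relies on (std::bitset standing in
for the server's key map; the names are made up): once keys_in_use_for_query
excludes disabled keys, later updates may only narrow the set, never re-add
keys to it.

    #include <bitset>
    #include <cassert>

    typedef std::bitset<64> key_map;        // one bit per index of the table

    // Updates intersect with the current set instead of rebuilding it, so a
    // key disabled by ALTER TABLE ... DISABLE KEYS cannot reappear.
    void restrict_usable_keys(key_map &keys_in_use_for_query,
                              const key_map &allowed_for_this_query)
    {
      keys_in_use_for_query &= allowed_for_this_query;
    }

    int main()
    {
      key_map in_use;                       // keys 0 and 2 usable; key 1 disabled
      in_use.set(0); in_use.set(2);
      key_map allowed;                      // the query could in principle use 0..2
      allowed.set(0); allowed.set(1); allowed.set(2);
      restrict_usable_keys(in_use, allowed);
      assert(!in_use.test(1));              // the disabled key must not come back
      return 0;
    }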
It can't be compiled everywhere if we just cast a pointer to pthread_t.
So now we use the proper pthread_self() value as the pthread_t, and a
pointer where we need to identify the thread exactly.
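A minimal sketch of the portable pattern, assuming POSIX threads (the helper
names are made up): store the value returned by pthread_self() and compare it
with pthread_equal(), instead of manufacturing a pthread_t from a cast pointer.

    #include <cstdio>
    #include <pthread.h>

    static pthread_t owner;                 // a real pthread_t, not a cast pointer

    void remember_owner()
    {
      owner = pthread_self();               // portable even if pthread_t is a struct
    }

    int called_from_owner()
    {
      // pthread_t is opaque: compare with pthread_equal(), never with == or casts.
      return pthread_equal(owner, pthread_self()) ? 1 : 0;
    }

    int main()
    {
      remember_owner();
      std::printf("%d\n", called_from_owner());   // prints: 1
      return 0;
    }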
binlog_filter and rpl_filter are initialized in the server's 'main' function,
which doesn't work in the embedded server. So this code was moved to a
function that runs in both cases.
Removed a lot of compiler warnings
Removed unused variables, functions and labels
Initialized some variables that could be used uninitialized (fatal bugs)
%ll -> %l