time per connection
Removed the const_string() method from Item_string (it was only used in one
place, and in a bad way). In the date_format item, dereference a possible SP
variable and access its data directly instead.
Basically, this fix consists of a test case and the removal of a workaround
for replication. This fix became possible after the push of WL#2897
(Complete definer support in stored routines).
The idea is to add a DEFINER clause to CREATE PROCEDURE and CREATE FUNCTION
statements. Almost all of the definer support in stored routines had already
been done before this patch.
NOTE: this patch changes the behaviour of dumping stored routines in mysqldump.
Before this patch, mysqldump did not dump the DEFINER clause for stored routines,
and this was documented behaviour. In order to get full information about stored
routines, one had to dump the mysql.proc table. This patch changes that
behaviour, so that the DEFINER clause is dumped.
Since the DEFINER clause was not supported in CREATE PROCEDURE | FUNCTION statements
before this patch, the clause is wrapped in additional version-specific comments.
if --skip-grant-tables is specified.
The problem is that there is a check that prevents creating a definer
with an empty host name.
In --skip-grant-tables mode this check prevents the user from creating a
trigger/view without explicitly specifying its definer, because
in --skip-grant-tables mode CURRENT_USER is ''@''. According to Sanja, this
check was implemented intentionally.
However, according to the MySQL manual it is possible to specify an empty host
name (as well as an empty user name). Moreover, the behaviour for stored routines
is different in this respect: we allow them to be created with an implicit
definer.
Based on this, we believe it is OK to change the behaviour for views to be
consistent with the behaviour for stored routines.
The cause of this bug was a design flaw due to which the list of natural
join columns was incorrectly computed and stored for nested joins that
are not natural joins themselves, but are operands (possibly indirect) of nested joins.
The patch corrects the flaw in such a way that the result columns of a
table reference are materialized only if it is a leaf table (that is, only
if it is a view, a stored table, or a natural/using join).
union.result, union.test:
Adding test case.
item.cc:
Allow safe character set conversion in UNION:
- a string constant to the column's charset
- to Unicode
Thus, UNION now aggregates arguments with different character sets the same
way CONCAT (and other string functions) do.
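As a rough self-contained sketch (the type and function names here are made up for illustration; this is not the actual Item_type_holder code), the aggregation rule amounts to:

  #include <string>

  // Hedged sketch of the UNION charset aggregation rule: a plain string
  // constant is converted to the column's charset; otherwise incompatible
  // sides are converted to Unicode instead of failing.
  struct charset_ref { std::string name; };

  charset_ref aggregate_for_union(const charset_ref &column,
                                  const charset_ref &other,
                                  bool other_is_string_constant) {
    if (column.name == other.name || other_is_string_constant)
      return column;                     // safe: the constant follows the column
    return charset_ref{"utf8"};          // fall back to a Unicode charset
  }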
The idea of the fix is to extend support of non-SUID triggers for backward
compatibility. Formerly, non-SUID triggers appeared only when a "new" server
was started against an "old" database. Now they are also created when
a "new" slave receives updates from an "old" master.
Fix URLs.
README:
Fix URL.
mysqltest.result:
Update test result for real_sleep error message.
mysqltest.c:
Fix do_sleep() to print correct command name for real_sleep.
bug #13525 "Rename table does not keep info of triggers".
Now we use MYSQLTEST_VARDIR in order to be able to run this test with a
different vardir. Also improved cleanup after the test.
--skip-grant-tables. However, when deleting functions, the UDF list was checked
regardless of whether UDFs had been initialized or not. An additional check is added
to the free_udf() and find_udf() functions to prevent possible runtime errors.
Let us transfer the triggers associated with a table when we rename it (but only if
we are not changing the database to which the table belongs; in the latter case we
emit an error).
The cause of the bug was an ASSERT that checked the consistency
of TABLE_SHARE::db and TABLE_LIST::db and failed for I_S tables.
The fix relaxes the requirement for consistency for I_S.
column is increasing when table is recreated with PS/SP":
make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length
for every execution of a prepared statement (this actually fixes the
bug).
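A simplified illustration of the idea (the struct below is only loosely modelled on create_field; it is not the server code):

  #include <cstdint>

  // char_length stays authoritative (characters as written in the DDL);
  // the byte length is re-derived from it on every PS execution, so repeated
  // executions cannot keep multiplying the stored byte length by mbmaxlen.
  struct create_field_sketch {
    uint32_t char_length;   // length in characters
    uint32_t length;        // length in bytes, recomputed per execution
  };

  void reinit_for_execution(create_field_sketch &f, uint32_t mbmaxlen) {
    f.length = f.char_length * mbmaxlen;
  }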
When a too long field is used for a key, only a prefix of the field is
used. The length is reduced to the maximum key length allowed for storage. But if the
field has a multibyte character set, it is possible to break a multibyte character
sequence. This leads to a failed assertion in the InnoDB code and a
server crash when a record is inserted.
make_prepare_table() now aligns the truncated key length to a
multibyte character boundary.
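The alignment idea, as a standalone sketch for UTF-8 (the function name is made up; this is not the actual make_prepare_table() code):

  #include <cstddef>

  // Move the cut point back until the prefix no longer ends inside a
  // multibyte sequence (UTF-8 continuation bytes have the form 10xxxxxx).
  size_t align_prefix_to_char_boundary(const unsigned char *s, size_t str_len,
                                       size_t max_bytes) {
    size_t cut = (max_bytes < str_len) ? max_bytes : str_len;
    while (cut > 0 && cut < str_len && (s[cut] & 0xC0) == 0x80)
      --cut;   // cutting here would split a character
    return cut;
  }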
symptom). sys_var::check_set() was wrong. mysqlbinlog makes use of such SET SQL_MODE=N
statements (where N is interpreted as if SQL_MODE were a field of type SET), so
this bug affected recovery from binlogs if the server was running with certain SQL_MODE values,
for example the default values on Windows (STRICT_TRANS_TABLES); to work around this bug people
had to edit mysqlbinlog's output.
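The corrected check boils down to something like the following sketch (the helper name is hypothetical; this is not the real sys_var::check_set()): a numeric value is acceptable for a SET-typed variable as long as it only uses bits of defined members.

  #include <cstdint>

  // A numeric SET value is a bitmask over the defined members; reject it only
  // if it has bits outside the member range.
  bool numeric_set_value_ok(uint64_t value, unsigned member_count) {
    if (member_count >= 64)
      return true;                               // every bit is meaningful
    uint64_t allowed = (uint64_t{1} << member_count) - 1;
    return (value & ~allowed) == 0;
  }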
if the function, invoked from a non-binlogged caller (e.g. SELECT, DO), failed half-way on the master,
the slave would stop and complain that the error codes on master and slave mismatch.
To solve this, when a stored function is invoked from a non-binlogged caller (e.g. SELECT, DO), we binlog the function
call as SELECT instead of as DO (see the revision comment of sp_head.cc for more).
And: minor wording change in the help text.
This cset will cause conflicts in 5.1; I'll merge.
The problem was: when a connection disconnects with an open transaction affecting MyISAM and InnoDB, the ROLLBACK event stored in the binary log
contained a non-zero error code (1053, because of the disconnection), so when the slave applied the transaction, it complained that its ROLLBACK succeeded
(error_code=0) while the master's had 1053, and so the slave stopped. But internally generated binlog events such as this ROLLBACK
should always have 0 as their error code, as is true in 4.1 and was accidentally broken in 5.0,
so that there is no false alarm.
The name length was checked "the old way", not taking charsets into account,
when creating a stored routine.
Fixing this enforces the real limit (64 characters) again, and no truncation
is possible.
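The essence of the check, sketched standalone for UTF-8 identifiers (the function name is made up; this is not the actual server code):

  #include <cstddef>
  #include <string>

  // Count characters (bytes that are not 10xxxxxx continuation bytes) and
  // compare against the 64-character limit, instead of comparing raw bytes.
  bool routine_name_length_ok(const std::string &name, size_t max_chars = 64) {
    size_t chars = 0;
    for (unsigned char c : name)
      if ((c & 0xC0) != 0x80)
        ++chars;
    return chars > 0 && chars <= max_chars;
  }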
- Detect that the connection to the server has been broken in "net_clear". Since
net_clear is always called before we send a command to the server, we can be sure
that the server has not received the command.
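One way to detect this, sketched with plain POSIX sockets (only an illustration of the idea, not the actual net_clear() implementation):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <cerrno>

  // Before sending a command, peek at the socket without blocking:
  // recv() == 0 means the server performed an orderly close, so the command
  // was certainly not received and can be retried on a fresh connection.
  bool connection_is_alive(int fd) {
    char byte;
    ssize_t n = recv(fd, &byte, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n == 0)
      return false;                                  // server closed the connection
    if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
      return false;                                  // hard socket error
    return true;                                     // still connected (or just idle)
  }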
If item->cached_table is set, find_field_in_tables() returns the found field
even if it doesn't belong to the current select. Because Item_field::fix_fields()
doesn't expect such behaviour, the reported bug occurs.
Item_field::fix_fields() was modified to detect when find_field_in_tables()
can return a field from an outer select and to process such fields accordingly.
To make this easier, the code that searches for and processes outer fields was
moved into a separate function called Item_field::fix_outer_field().
- Pass "buffers[i]" to val_str() in udf_handler::fix_fields insteead of NULL.
- Add testcase for UDF that will load and run the udf_example functions
if available
The problem was a code generation bug: cpop instructions were not generated
when using ITERATE back to an outer block from a context with a declared
cursor; this would push a new cursor without popping in between,
eventually overrunning the cursor stack, with a crash as the result.
Fixed the calculation of how many cursors to pop (in sp_pcontext.cc:
diff_cursors()), and also corrected diff_cursors() and diff_handlers()
so that when doing a "leave", the last context we are leaving is not included
(we are then jumping to the appropriate pop instructions).
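A much-simplified sketch of the pop-count rule (the context layout and names are hypothetical; this is not sp_pcontext.cc itself):

  // Sum the cursors declared in the contexts between the jump origin and the
  // jump target. On LEAVE the target block itself is excluded, because the
  // jump lands on that block's own pop instructions; on ITERATE it is
  // included, since the loop body's cursors will be pushed again.
  struct sp_ctx {
    const sp_ctx *parent;
    int cursors_declared;
  };

  int cursors_to_pop(const sp_ctx *from, const sp_ctx *target, bool is_leave) {
    int n = 0;
    for (const sp_ctx *c = from; c && c != target; c = c->parent)
      n += c->cursors_declared;
    if (!is_leave && target)
      n += target->cursors_declared;
    return n;
  }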
The Item_func_if::fix_length_and_dec() function did not take unsigned_flag
into account when calculating the length of the result, although it is taken
into account when calculating the length of the temporary field. This resulted
in creating a field that is shorter than needed, due to which, in the reported
query, 40.0 was converted to 9.99.
Item_func_if::fix_length_and_dec() now adds 1 to max_length if
the unsigned_flag isn't set.
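As a tiny sketch of the rule (simplified, with a made-up function name; not the actual fix_length_and_dec() code):

  #include <algorithm>
  #include <cstdint>

  // The result of IF(cond, a, b) may need a '-' sign, so one extra character
  // is reserved unless the result is known to be unsigned.
  uint32_t if_result_max_length(uint32_t len_a, uint32_t len_b,
                                bool result_unsigned) {
    uint32_t len = std::max(len_a, len_b);
    return result_unsigned ? len : len + 1;
  }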
A subquery transformation changes the HAVING clause of the embedding query if the subquery contains
a GROUP BY clause. Yet the split_sum_func2 function was not applied to the modified HAVING clause.
This could result in wrong answers.
Added { ... } around float8get() macro, avoids VC7 error
message "illegal else without matching if"
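The pitfall in general form, with a made-up macro (the real float8get() differs):

  #include <cstring>

  // A macro whose body is an unbraced if/else cannot be used as the body of
  // another 'if': the inner 'else' steals the pairing and VC7-style compilers
  // complain "illegal else without matching if". Braces keep the expansion a
  // single statement.
  #define GET_DOUBLE_BAD(V, M) if (sizeof(double) == 8) memcpy(&(V), (M), 8); else (V) = 0.0
  #define GET_DOUBLE_OK(V, M)  { if (sizeof(double) == 8) memcpy(&(V), (M), 8); else (V) = 0.0; }

  double demo(const char *buf, bool have_value) {
    double v = 0.0;
    if (have_value)
      GET_DOUBLE_OK(v, buf)    // with GET_DOUBLE_BAD this 'else' would not compile
    else
      v = -1.0;
    return v;
  }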
mtr_report.pl:
Parse error logs to create "warnings" file
mtr_cases.pl:
Added option --ignore-disabled-def
Windows build now lets TZ pass, removed
the workaround
mysql-test-run.pl, mtr_process.pl:
Back port of changes from 5.0
erroneously abort, reporting failure to kill child processes, when in
reality the problem was merely that the child had become a zombie because
of a missing waitpid() call.
1. Fix index access costs for MERGE tables: set block_size=myisam_block_size/#underlying_tables
instead of 0, which it was before.
2. Make index scans on MERGE tables return records in (key_tuple, merge_table_rowid) order,
instead of just (key_tuple) order. This makes an index scan on a MERGE table truly a ROR-scan,
which is a requirement for index_merge union/intersection.
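The required ordering, written out as a standalone comparator (the row layout is hypothetical; this is not the myisammrg code):

  #include <cstdint>
  #include <string>
  #include <tuple>

  // A MERGE index scan must emit rows ordered by the key tuple and, for equal
  // keys, by (underlying table number, rowid). That makes the scan rowid-
  // ordered (ROR), which index_merge union/intersection relies on.
  struct merged_row {
    std::string key_tuple;   // key value in a byte-comparable form
    uint32_t table_no;       // which underlying MyISAM table the row came from
    uint64_t rowid;          // position within that table
  };

  bool scan_order_less(const merged_row &a, const merged_row &b) {
    return std::tie(a.key_tuple, a.table_no, a.rowid) <
           std::tie(b.key_tuple, b.table_no, b.rowid);
  }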
Bug #17257 ndb, update fails for inner joins if tables do not have Primary Key
change: the area allocated by setValue may not be around later; store the hidden key in a special member variable instead
internal charset to the one associated with the query currently being handled.
Note that such a query can also come from an interactive client.
There was a discussion within the replication team, and Monty's suggestion won.
It avoids straightforward parsing of all `set' queries that could affect the client-side
character set.
According to the idea, the mysql client does not parse `set' queries but rather handles a
`charset new_cs_name' command.
This command is generated by mysqlbinlog in the form of an executable ('/*!') comment (Lars' suggestion),
so that enlightened clients like `mysql' know what to do with it.
An interactive user can switch between many multi-byte charsets during the session
by issuing the command explicitly.
Note that setting a new internal mysql charset does not
trigger sending any `SET' SQL statement to the server.
Check whether AGGREGATE was given with a stored (non-UDF) function, and return an
error in that case.
Also made udf_example/udf_test work again by adding a missing *_init()
function. (_init() functions have been required since March 2005 unless
--allow_suspicious_udfs is given to the server; it seems udf_example wasn't
updated at the time.)
not needed by the test cases. This will save test time for those test cases
that do not need cluster but need a restart, as they don't have to wait
the extra time it would take for cluster to restart. It will also save
time for other test cases, as cluster does not
need to be contacted for each table to be dropped or created.
Backport from 5.1 of the
fix for bug#8461:
BUG 8461 - TRUNCATE returns incorrect result if 2nd argument is negative
Reason: Both TRUNCATE and ROUND convert INTEGERs to DOUBLE and back to INTEGERs.
Changed the integer routine to work on integers only.
This bug affects 4.1, 5.0 and 5.1.
Fixing it in 4.1 would require changing the routine to handle different types individually.
5.0 already had different routines for different types; only the INTEGER routine was bad.
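A self-contained sketch of the integer-only approach (the function name is made up; this is not the actual server routine): for a negative second argument the scaling is done entirely in 64-bit integer arithmetic, so values above 2^53 are not damaged by a detour through DOUBLE.

  #include <cstdint>

  // TRUNCATE(value, D) for an integer value and D < 0, e.g.
  // truncate_integer(-1999, -2) == -1900.
  int64_t truncate_integer(int64_t value, int decimals) {
    if (decimals >= 0)
      return value;                    // integers have nothing after the point
    if (decimals <= -19)
      return 0;                        // removes more digits than an int64 has
    int64_t scale = 1;
    for (int i = 0; i < -decimals; ++i)
      scale *= 10;                     // 10^(-D); fits, since -D <= 18
    return (value / scale) * scale;    // truncates toward zero, keeps the sign
  }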
Bug #17158 load data infile of char values into table of char with no (PK) fails to load
Bug #17081 Doing "LOAD DATA INFILE" directly after delete can cause missing data
If check_quick_select returns a non-empty range, then the function cost_group_min_max
cannot return 0 as an estimate of the number of retrieved records.
Yet the function erroneously returned 0 as the estimate in some situations.
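In other words, roughly (a sketch with a made-up name, not the actual cost_group_min_max() code):

  #include <algorithm>

  // Once the range is known to be non-empty, the record estimate handed to
  // the optimizer must be at least 1, never 0.
  double group_min_max_record_estimate(bool range_is_empty, double raw_estimate) {
    return range_is_empty ? 0.0 : std::max(raw_estimate, 1.0);
  }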
When an ambiguous field name is used in a GROUP BY clause, a warning is issued
in the find_order_in_list function by a call to push_warning_printf.
An expression that was not always valid was passed to this call as the field
name parameter.
There are (at least) two implementations of the checksum
computation. One is in MyISAM for the quick checksum. It
is executed on every row change. The other is in the
SQL layer for the extended checksum. It retrieves all rows
of a table via the respective storage engine.
In former MySQL versions varchars were stored with their
maximum length; now they are stored with their real length, similarly to
blobs.
The extended checksum calculation had not been adjusted for this
change, hence too much data was checksummed. In MyISAM this change had
already been taken care of: only the real data is included in the checksum.
I changed mysql_checksum_table() so that it uses the
length information of true varchar fields instead
of the field length as in the former varchar
implementation.
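A standalone sketch of the idea (the checksum formula here is a toy and the function name is made up; mysql_checksum_table() differs):

  #include <cstddef>
  #include <cstdint>

  // For a true VARCHAR, checksum only the bytes actually stored (read the
  // 1- or 2-byte length prefix), not the padded maximum length of the column.
  uint32_t checksum_varchar(uint32_t crc, const unsigned char *field_ptr,
                            unsigned length_bytes /* 1 or 2 */) {
    size_t data_len = field_ptr[0];
    if (length_bytes == 2)
      data_len |= static_cast<size_t>(field_ptr[1]) << 8;
    const unsigned char *data = field_ptr + length_bytes;
    for (size_t i = 0; i < data_len; ++i)
      crc = ((crc << 8) + data[i]) ^ (crc >> 24);   // toy rolling checksum
    return crc;
  }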
A query with GROUP BY and HAVING clauses could return a wrong
result set if the HAVING condition contained a constant conjunct
that evaluated to FALSE.
It happened because the pushdown condition for the table with
grouping columns lost its constant conjuncts.
Pushdown conditions are always built by the function make_cond_for_table,
which ignores constant conjuncts. This is apparently not correct when
constant FALSE conjuncts are present.
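A simplified sketch of the required behaviour (the representation and names are hypothetical; this is not make_cond_for_table() itself):

  #include <vector>

  // When splitting an AND-list into a per-table pushdown condition, a
  // constant FALSE conjunct must not be silently dropped: the pushdown
  // condition has to stay FALSE, or rows the original HAVING rejects
  // slip through.
  struct conjunct {
    bool depends_on_table;   // references columns of the target table
    bool is_const;           // no column references at all
    bool const_value;        // meaningful only when is_const is true
  };

  // Returns false when the pushdown condition is constant FALSE.
  bool build_pushdown(const std::vector<conjunct> &and_list,
                      std::vector<conjunct> &pushed) {
    for (const conjunct &c : and_list) {
      if (c.is_const && !c.const_value)
        return false;                  // keep the FALSE instead of dropping it
      if (c.depends_on_table || c.is_const)
        pushed.push_back(c);           // constant TRUE conjuncts are harmless
    }
    return true;
  }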
which was fixed by earlier changesets; LOAD INDEX is not allowed in functions.
Also testing CACHE INDEX; OPTIMIZE and CHECK were already covered by existing tests.
The problem manifested itself in cases where we have a nested outer join
for which it can be inferred that one of the inner tables is a single-row table.