mysql_stmt_affected_rows()
The problem was that affected_rows for a prepared statement wasn't updated
in the client library on error. The solution is to always update
affected_rows, which will be equal to -1 on error.
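A minimal client-side sketch of the fixed behaviour (illustrative only, not
part of the patch; assumes an already prepared statement):

  #include <mysql.h>
  #include <stdio.h>

  void report_affected(MYSQL_STMT *stmt)
  {
    if (mysql_stmt_execute(stmt))
    {
      /* With the fix, affected_rows is updated even on error and
         equals (my_ulonglong)-1, the same convention that
         mysql_affected_rows() uses. */
      if (mysql_stmt_affected_rows(stmt) == (my_ulonglong) -1)
        fprintf(stderr, "execute failed: %s\n", mysql_stmt_error(stmt));
    }
    else
      printf("%lu row(s) affected\n",
             (unsigned long) mysql_stmt_affected_rows(stmt));
  }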
- Most likely caused by linking with libmysqlclient using gcc and not g++.
- Remove our "yassl_dummy_link_fix" hacks so that some test programs are linked
with gcc
- Remove build of non-working test programs in vio
Due to the complexity of this change, everything is documented in WL#3565.
This patch is the third iteration; it takes into account the comments
received to date.
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
Non-upper-level INSERTs (the ones in the body of a stored procedure,
stored function, or trigger) into a table that has an AUTO_INCREMENT
column didn't affect the result of LAST_INSERT_ID() on this level.
The problem was introduced with the fix of bug 6880, which in turn was
introduced with the fix of bug 3117, where the current insert_id value was
remembered on the first call to LAST_INSERT_ID() (bug 3117) and was
returned from that function until it was reset before the next
_upper-level_ statement (bug 6880).
The fix for bug#21726 brings back the behaviour of version 4.0, and
implements the following: remember the insert_id value at the beginning
of the statement or expression (at that point it equals the first
insert_id value generated by the previous statement), and return that
remembered value from LAST_INSERT_ID() or @@LAST_INSERT_ID. Thus, the
value returned by LAST_INSERT_ID() is affected neither by values
generated by the current statement nor by LAST_INSERT_ID(expr) calls in
this statement.
Version 5.1 does not have this bug (it was fixed by WL 3146).
- Use the "%.*b" format when printing prepared and exeuted prepared statements to the log.
- Add test case to check that also prepared statements end up in the query log
Bug#14346 Prepared statements corrupting general log/server memory
- Use "stmt->query" when logging the newly prepared query instead of "packet"
when calling a SP from C API"
The bug was caused by a lack of checks for misuse in mysql_real_query.
A stored procedure always returns at least one result, which is the
status of execution of the procedure itself.
This result, or so-called OK packet, is similar to a result
returned by INSERT/UPDATE/CREATE operations: it contains the overall
status of execution, the number of affected rows and the number of
warnings. The client test program attached to the bug did not read this
result and invoked the next query. In turn, libmysql had no check for
such a scenario, and mysql_real_query simply tried to send that query
without reading the pending response, thus messing up the communication
protocol.
The fix is to return an error from mysql_real_query when it's called
prior to retrieval of all pending results.
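A sketch of the pattern the fix enforces (illustrative; assumes the
connection was made with CLIENT_MULTI_RESULTS and that procedure p1 exists):

  #include <mysql.h>
  #include <string.h>

  int call_proc_and_continue(MYSQL *mysql)
  {
    const char *call= "CALL p1()";
    const char *next= "SELECT 1";

    if (mysql_real_query(mysql, call, (unsigned long) strlen(call)))
      return 1;

    /* Drain every pending result, including the procedure's own
       final OK packet. */
    do
    {
      MYSQL_RES *res= mysql_store_result(mysql);
      if (res)
        mysql_free_result(res);   /* a result set */
      /* else: an OK packet (e.g. the procedure's execution status) */
    } while (mysql_next_result(mysql) == 0);

    /* Only now is it safe to send the next statement; with the fix,
       doing this earlier fails cleanly instead of corrupting the
       protocol. */
    return mysql_real_query(mysql, next, (unsigned long) strlen(next));
  }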
the server's binlog file might be set to a different directory. This adds a new
"vardir" parameter, which takes the name of the directory as a value, so that the
test_bug17667() test can find the binlog.
This is a cleanup patch for our current auto_increment handling:
new names for auto_increment variables in THD, new methods to manipulate them
(see sql_class.h), and some moves into handler::, causing less backup/restore
work when executing substatements.
This makes the logic hopefully clearer; less work is needed in
mysql_insert().
By cleaning up, using different variables for different purposes (instead
of one for 3 things...), we fix those bugs, which someone may want to fix
in 5.0 too:
BUG#20339 "stored procedure using LAST_INSERT_ID() does not replicate
statement-based"
BUG#20341 "stored function inserting into one auto_increment puts bad
data in slave"
BUG#19243 "wrong LAST_INSERT_ID() after ON DUPLICATE KEY UPDATE"
(now if a row is updated, LAST_INSERT_ID() will return its id)
and re-fixes:
BUG#6880 "LAST_INSERT_ID() value changes during multi-row INSERT"
(already fixed differently by Ramil in 4.1)
Test of documented behaviour of mysql_insert_id() (there was no test).
The behaviour changes introduced are:
- LAST_INSERT_ID() now returns "the first autogenerated auto_increment value
successfully inserted", instead of "the first autogenerated auto_increment
value if any row was successfully inserted", see auto_increment.test.
Same for mysql_insert_id(), see mysql_client_test.c.
- LAST_INSERT_ID() returns the id of the updated row if ON DUPLICATE KEY
UPDATE, see auto_increment.test. Same for mysql_insert_id(), see
mysql_client_test.c and the sketch at the end of this entry.
- LAST_INSERT_ID() does not change if no autogenerated value was successfully
inserted (it used to then be 0), see auto_increment.test.
- if in INSERT SELECT no autogenerated value was successfully inserted,
mysql_insert_id() now returns the id of the last inserted row (it already
did this for INSERT VALUES), see mysql_client_test.c.
- if INSERT SELECT uses LAST_INSERT_ID(X), mysql_insert_id() now returns X
(it already did this for INSERT VALUES), see mysql_client_test.c.
- NDB now behaves like other engines wrt SET INSERT_ID: with INSERT IGNORE,
the id passed in SET INSERT_ID is re-used until a row succeeds; SET INSERT_ID
now influences not only the first row.
Additionally, when unlocking a table we check that the thread is not keeping
a next_insert_id (as the table is unlocked, that id is potentially out-of-date);
forgetting about this next_insert_id is done in a new
handler::ha_release_auto_increment().
Finally we prepare for engines capable of reserving finite-length intervals
of auto_increment values: we store such intervals in THD. The next step
(to be done by the replication team in 5.1) is to read those intervals from
THD and actually store them in the statement-based binary log. NDB
will be a good engine to test that.
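A minimal sketch of the new mysql_insert_id() behaviour with ON DUPLICATE KEY
UPDATE (illustrative; assumes table t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT)
and an open connection):

  #include <mysql.h>
  #include <stdio.h>
  #include <string.h>

  void show_insert_id(MYSQL *mysql)
  {
    const char *q=
      "INSERT INTO t1 (i, j) VALUES (1, 0) "
      "ON DUPLICATE KEY UPDATE j= j + 1";

    if (mysql_real_query(mysql, q, (unsigned long) strlen(q)) == 0)
      /* If row 1 already existed and was updated, this now prints 1
         (the id of the updated row) instead of 0. */
      printf("insert_id: %lu\n", (unsigned long) mysql_insert_id(mysql));
  }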
When using a parameter bind of type MYSQL_TYPE_DATE in a prepared statement,
the time part of the MYSQL_TIME buffer was zeroed by
mysql_stmt_execute(). The param_store_date() function in libmysql.c
worked directly on the provided buffer.
Changed to use a copy of the buffer.
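A sketch of the affected usage (illustrative; assumes a prepared statement
with one date parameter):

  #include <mysql.h>
  #include <string.h>

  int bind_date_param(MYSQL_STMT *stmt)
  {
    MYSQL_BIND bind[1];
    MYSQL_TIME date;

    memset(&date, 0, sizeof(date));
    date.year=  2006;
    date.month= 7;
    date.day=   25;
    date.hour=  23;  /* irrelevant for DATE, but the caller's buffer
                        must no longer be clobbered by execute */

    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type= MYSQL_TYPE_DATE;
    bind[0].buffer= (char*) &date;

    if (mysql_stmt_bind_param(stmt, bind))
      return 1;
    /* Before the fix, this zeroed date.hour/minute/second in place;
       now libmysql works on a copy. */
    return mysql_stmt_execute(stmt);
  }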
After view opening, the real view db name and table name are placed
into table_list->view_db & table_list->view_name.
The Item_field class does not handle these names properly during
initialization of Send_field.
The fix is to use new class 'Item_ident_for_show'
which sets correct view db name and table name for Send_field.
Removed Berkeley DB
configure.in:
Adjusted Netware support
basic.t.c:
Change for Netware
Makefile.am:
Use thread safe libmysqlclient_r if it was built
valgrind.supp:
Hide report about strlen/_dl_init_paths
ha_tina.cc:
Temporarily disable CSV engine on Netware,
as the engine depends on mmap()
net_serv.cc:
Include <sys/select.h> for Netware
Bug#17667: An attacker has the opportunity to bypass query logging.
This adds a new, local-only printf format specifier to our *printf functions
that allows us to print known-size buffers that must not be interpreted as
NUL-terminated "strings."
It uses this format-specifier to print to the log, thus fixing this
problem.
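A sketch of the underlying idea (illustrative only; the real fix extends
MySQL's internal *printf functions with the "%.*b" specifier rather than
using stdio directly):

  #include <stdio.h>

  void log_query(FILE *log, const char *query, size_t query_length)
  {
    /* The query buffer on the wire is length-counted, not
       NUL-terminated. A "%s"-style print would stop at an embedded
       '\0' -- which an attacker could use to hide the rest of the
       statement -- so the write must honour the explicit length. */
    fwrite(query, 1, query_length, log);
    fputc('\n', log);
  }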
it breaks binary compatibility. The patch will be left intact
in 5.1. Warning: this changeset should be null-merged into 5.1.
A separate commit in order to push into the release clone of
5.0.19.
used
In simple queries the result of the GROUP_CONCAT() function was always of
varchar type.
But if the length of the GROUP_CONCAT() result is greater than 512 chars and a
temporary table is used during the select, then the result is converted to blob,
due to the policy of not storing fields longer than 512 chars in a tmp table as
varchar fields.
In order to provide consistent behaviour, the result of GROUP_CONCAT() is now
always converted to blob if it is longer than 512 chars.
Item_func_group_concat::field_type() is modified accordingly.
and a new binlog format called "mixed" (which is statement-based except when only row-based is correct;
in this cset that means when a UDF or UUID is used; more cases could be added in a later 5.1 release):
SET GLOBAL|SESSION BINLOG_FORMAT=row|statement|mixed|default;
The global default is statement unless cluster is enabled (then it's row), as in 5.1-alpha.
It's not possible to use SET on this variable if a session is currently in row-based mode and has open temporary tables (because CREATE
TEMPORARY TABLE was not binlogged, so the temp table is not known on the slave), or if NDB is enabled (because
NDB does not support such a change on-the-fly, though it will later), or if in a stored function (see below).
The added tests cover whether SET is possible or refused, its effects, and the mixed mode,
including in prepared statements and in stored procedures and functions.
Caveats:
a) The mixed mode will not work for stored functions: in mixed mode, a stored function will
always be binlogged as one call and in a statement-based way (e.g. INSERT VALUES(myfunc()) or SELECT myfunc()).
b) for the same reason, changing the thread's binlog format inside a stored function is
refused with an error message.
c) the same problems apply to triggers; implementing b) for triggers will be done later (will ask
Dmitri).
Additionally, as the binlog format is now changeable by each user for his session, I remove the implication
which was done at startup, where row-based automatically set log-bin-trust-routine-creators to 1
(not possible anymore, as a user can now switch to stmt-based and do nasty things again) and automatically
set --innodb-locks-unsafe-for-binlog to 1 (which was anyway theoretically incorrect, as it disabled
phantom protection).
Plus fixes for compiler warnings.
1. dbug state is now local to a thread
2. new macros: DBUG_EXPLAIN, DBUG_EXPLAIN_INITIAL,
DBUG_SET, DBUG_SET_INITIAL, DBUG_EVALUATE, DBUG_EVALUATE_IF
3. macros are do{}while(0) wrapped
4. incremental modifications to the dbug state (e.g. "+d,info:-t")
5. dbug code cleanup, style fixes
6. _db_on_ and DEBUGGER_ON/OFF removed
7. rest of MySQL code fixed because of 3 (missing ;) and 6
8. dbug manual updated
9. server variable @@debug (global and local) to control dbug from SQL!
a. -#T to print timestamps in the log
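A sketch of the new macros in use (illustrative; assumes MySQL's internal
<my_dbug.h> in a debug build):

  #include <my_dbug.h>

  int pick_buffer_size(void)
  {
    /* Incremental change to this thread's dbug state: enable the
       "info" keyword, drop tracing (cf. item 4 above). */
    DBUG_SET("+d,info:-t");

    /* Returns 16 if the "small_buffer" keyword is set -- e.g. from SQL
       via SET SESSION debug='+d,small_buffer' (item 9) -- else 4096. */
    return DBUG_EVALUATE_IF("small_buffer", 16, 4096);
  }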
- Add code to 'mysql_stmt_store_result' to allow it to be called on
a prepared statement with an open server-side cursor.
- Add tests to mysql_client_test that use 'mysql_stmt_store_result'
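A sketch of the now-supported call sequence (illustrative; assumes a
prepared SELECT):

  #include <mysql.h>

  int fetch_all_via_cursor(MYSQL_STMT *stmt)
  {
    unsigned long type= (unsigned long) CURSOR_TYPE_READ_ONLY;

    if (mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, &type))
      return 1;
    if (mysql_stmt_execute(stmt))     /* opens a server-side cursor */
      return 1;
    /* Previously not allowed with an open cursor; now materializes
       the whole result set on the client. */
    if (mysql_stmt_store_result(stmt))
      return 1;

    while (mysql_stmt_fetch(stmt) == 0)
      ;                               /* process each row */
    return 0;
  }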
Allow for configuration of the maximum number of indexes per table.
Added and used a configure.in macro.
Replaced fixed limits by the configurable limit.
Limited MyISAM indexes to its hard limit.
Fixed a bug in opt_range.cc for many indexes with InnoDB.
Tested for 2, 63, 64, 65, 127, 128, 129, 255, 256, and 257 indexes.
Testing this part of the bugfix requires rebuilding the server
with different options. This cannot be done with our test suite.
Therefore I added the necessary test files to the bug report.
If you repeat the tests, please note that the ps_* tests fail for
everything but 64 indexes. This is because of differences in the
meta data, namely field lengths for index names etc.
large table gives server crash": make sure that when a MyISAM temporary
table is created for a cursor, it's created in its memory root,
not the memory root of the current query.
- The testcase creates a .frm file consisting of "junk". Unfortunately the "junk" wasn't
written to the .frm file if mysql_client_test was run with the -s option to make it run silently.
This most likely caused the file never to be created on Windows, and thus the
test case failed.
cursor is interpreted as a latin1 character and Bug#9819 "Cursors: Mysql Server
Crash while fetching from table with 5 million records."
A fix for a possible memory leak when fetching into an SP cursor
in a long loop.
The patch uses a common implementation of cursors in the binary protocol and
in stored procedures and implements materialized cursors.
For implementation details, see comments in sql_cursor.cc
This fix is a cancellation of ChangeSet
1.2329 05/07/12 08:35:30 reggie@linux.site +8 -0
Bug 7142 Show Fields from fails using Borland's dbExpress interface
The reason is that we can't fix bug#7142 without
breaking existing applications/APIs that worked fine with earlier 4.1.
Bug#7142 is fixed in 5.0.
* Provide backwards compatibility extension to name resolution of
coalesced columns. The patch allows such columns to be qualified
with a table (and db) name, as it is in 4.1.
Based on a patch from Monty.
* Adjusted tests accordingly to test both backwards compatible name
resolution of qualified columns, and ANSI-style resolution of
non-qualified columns.
For this, each affected test has two versions - one with qualified
columns, and one without.
create_tmp_field_from_item() was creating a tmp field without regard to the
original field type of the Item. This resulted in the wrong type being reported
to the client.
Special handling for Items with DATE/TIME field types was added to
create_tmp_field_from_item() to preserve their type.
This allows us to use statement replication with functions and triggers
The following things are fixed with this patch:
- NOW() and automatic timestamps take the value from the main event for functions and triggers (which allows these to replicate with statement-level logging)
- No side effects for triggers or functions with auto-increment values(), last_insert_id(), rand() or found_rows()
- Triggers can't return result sets
Fixes bugs:
#12480: NOW() is not constant in a trigger
#12481: Using NOW() in a stored function breaks statement based replication
#12482: Triggers has side effects with auto_increment values
#11587: trigger causes lost connection error
"Process NATURAL and USING joins according to SQL:2003".
* Some of the main problems fixed by the patch:
- in "select *" queries the * expanded correctly according to
ANSI for arbitrary natural/using joins
- natural/using joins are correctly transformed into JOIN ... ON
for any number/nesting of the joins.
- column references are correctly resolved against natural joins
of any nesting and combined with arbitrary other joins.
* This patch also contains a fix for name resolution of items
inside the ON condition of JOIN ... ON - in this case items must
be resolved only against the JOIN operands. To support such
'local' name resolution, the patch introduces a stack of
name resolution contexts used at parse time.
NOTICE:
- This patch is not complete in the sense that:
  - there are 2 test cases that still do not pass -
    one in join.test, one in select.test; both are marked
    with a comment "TODO: WL#2486";
  - it does not include a new test specific for the task.
subqry order by server crash": failing DBUG_ASSERT(curr_join == this)
when opening a cursor.
Ensure that for top-level join curr_join == join (always),
and thus fix the failing assert.
curr_join is a hack to ensure that uncacheable subqueries can be
re-evaluated safely, and should never differ from the main join
in case of a top-level join.
cursors. This should fix Bug#11813 when the InnoDB part is in
(tested with a draft patch).
The idea of the patch is that if a storage engine supports
consistent read views, we open one when opening a cursor,
set it as the active view when fetching from the cursor, and close
it together with the cursor.
The idea of the patch
is that every cursor gets its own lock id for table level locking.
Thus cursors are protected from updates performed within the same
connection. Additionally a list of transient (must be closed at
commit) cursors is maintained and all transient cursors are closed
when necessary. Lastly, this patch adds support for deadlock
timeouts to TLL locking when using cursors.
+ post-review fixes.
Changed defaults option --instance to --defaults-group-suffix
Changed option handling to allow --defaults-file, --defaults-extra-file and --defaults-group-suffix to be given in any order
Changed MYSQL_INSTANCE to MYSQL_GROUP_SUFFIX
mysql_print_defaults now understands --defaults-group-suffix
Removed usage of my_tempnam() (not a safe function)
Changed "if(" to "if (" and "while(" to "while ("
No separate typecode for MEDIUMTEXT/LONGTEXT is added, as we
have not yet made a sound decision on which typecodes are sent by the
server for which types (aka what constitutes a distinct type in MySQL).
The problem here is that columns that have an especially long type,
such as an enum type with many options, would be longer than 40 chars,
but the type column returned from SHOW COLUMNS was always defined
as varchar(40).
This is fixed in 5.0 using info schema.