There was no way to return an error from the client library
if no MYSQL connection was established.
So I added variables to store that kind of error and
made functions like mysql_error(NULL) return these.
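A minimal usage sketch, assuming the new library-global error storage; the host name and credentials below are placeholders:

  #include <mysql.h>
  #include <stdio.h>

  int main(void)
  {
    MYSQL *conn = mysql_init(NULL);
    if (!conn || !mysql_real_connect(conn, "no-such-host", "user", "pass",
                                     NULL, 0, NULL, 0))
    {
      /* With no established connection the error is kept in library-global
         variables, so mysql_error(NULL) can still report it. */
      fprintf(stderr, "connect failed: %s\n",
              conn ? mysql_error(conn) : mysql_error(NULL));
      return 1;
    }
    mysql_close(conn);
    return 0;
  }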
Remove the mysql_odbc_escape_string() function. The function
has multi-byte character escaping issues, does not honor the
NO_BACKSLASH_ESCAPES mode, and is no longer used by
Connector/ODBC as of 3.51.17.
The problem is that when copying the supplied username and
database, no bounds checking is performed on the fixed-length
buffer. A sufficiently large (> 512) user string can easily
cause stack corruption. Since this API can be used from PHP
and other programs, this is a serious problem.
The solution is to increase the buffer size to the size
accepted by similar functions and to perform bounds checking
when copying the username and database.
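A minimal sketch of the bounded copy, with illustrative buffer sizes and a hypothetical helper name (the server uses its own length constants):

  #include <stdio.h>

  #define USER_BUFF_LEN 512   /* illustrative only */
  #define DB_BUFF_LEN   512   /* illustrative only */

  /* Copy with an explicit bound so an oversized user/database string is
     truncated instead of overrunning the fixed-length stack buffers. */
  static void copy_user_and_db(char *user_buff, char *db_buff,
                               const char *user, const char *db)
  {
    snprintf(user_buff, USER_BUFF_LEN, "%s", user ? user : "");
    snprintf(db_buff, DB_BUFF_LEN, "%s", db ? db : "");
  }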
Functions with parameters of different types could return the wrong
type of the result.
There are several functions that accept parameters of different types.
The result field type of such functions was determined from the
aggregated result type of their arguments. As the DATE and DATETIME
types are represented by the STRING result type, the result field type
of the affected functions was always STRING for DATE/DATETIME arguments.
The affected functions are COALESCE, IF, IFNULL, CASE, and LEAST/GREATEST.
Now the affected functions aggregate the field types of their arguments rather
than their result types and return the result of aggregation as their result
field type.
A cached_field_type member variable is added to a number of classes to
hold the aggregated result field type.
The str_to_date() function's result field type now defaults to
MYSQL_TYPE_DATETIME.
The agg_field_type() function is added. It aggregates field types with
the help of the Field::field_type_merge() function.
The create_table_from_items() function now uses the
item->tmp_table_field_from_field_type() function to get the proper field
when the item is a function with a STRING result type.
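A self-contained toy model of the aggregation idea (not server code; the enum and merge rules are simplified stand-ins for Field::field_type_merge()):

  #include <stdio.h>

  enum field_type { FT_DATE, FT_DATETIME, FT_VARCHAR };

  /* Pairwise merge: identical types stay, DATE/DATETIME widens to
     DATETIME, anything else degrades to a string type. */
  static enum field_type merge_types(enum field_type a, enum field_type b)
  {
    if (a == b)
      return a;
    if ((a == FT_DATE && b == FT_DATETIME) ||
        (a == FT_DATETIME && b == FT_DATE))
      return FT_DATETIME;
    return FT_VARCHAR;
  }

  /* Fold the argument *field* types, as the commit describes. */
  static enum field_type agg_field_type(const enum field_type *args, int n)
  {
    enum field_type res = args[0];
    for (int i = 1; i < n; i++)
      res = merge_types(res, args[i]);
    return res;
  }

  int main(void)
  {
    enum field_type args[] = { FT_DATE, FT_DATE };
    printf("COALESCE(date, date) field type: %s\n",
           agg_field_type(args, 2) == FT_DATE ? "DATE" : "other");
    return 0;
  }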
Make sure that if the builder configured a non-standard (!= 3306)
default TCP port, that value actually gets used throughout. If no
value was configured, assume "use a sensible default", which will
be read from /etc/services or, failing that, taken from the factory
default. That makes the order of preference
- command-line option
- my.cnf, where applicable
- $MYSQL_TCP_PORT environment variable
- /etc/services (unless configured --with-tcp-port)
- default port (--with-tcp-port=... or factory default)
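A sketch of the lower half of that chain; the command-line and my.cnf layers are resolved by normal option parsing before this is consulted, and FACTORY_DEFAULT_PORT stands in for the --with-tcp-port value or the factory default:

  #include <arpa/inet.h>
  #include <netdb.h>
  #include <stdint.h>
  #include <stdlib.h>

  #define FACTORY_DEFAULT_PORT 3306   /* or the --with-tcp-port value */

  static unsigned int default_tcp_port(void)
  {
    const char *env = getenv("MYSQL_TCP_PORT");
    struct servent *se;

    if (env && *env)
      return (unsigned int) atoi(env);                    /* $MYSQL_TCP_PORT */
    if ((se = getservbyname("mysql", "tcp")) != NULL)
      return (unsigned int) ntohs((uint16_t) se->s_port); /* /etc/services;
                                                             skipped when built
                                                             --with-tcp-port */
    return FACTORY_DEFAULT_PORT;
  }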
The cli_read_binary_rows function is used to fetch data from the server
after a prepared statement execution. It accepts a statement handle and gets
the connection handle from it. But when the auto-reconnect option is set,
the connection handle is reset to NULL after reconnection, because the
prepared statement is lost and the handle becomes useless. This case
wasn't checked in the cli_read_binary_rows function and caused a crash.
Now the cli_read_binary_rows function checks that the connection handle is
not NULL and returns an error if it is.
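A sketch of the shape of that guard, using a stand-in error setter so the snippet is self-contained (the real code uses the library's internal error handling):

  #include <mysql.h>
  #include <errmsg.h>
  #include <string.h>

  /* Stand-in for libmysql's internal statement-error setter. */
  static void stmt_set_error_sketch(MYSQL_STMT *stmt, unsigned int code)
  {
    stmt->last_errno = code;
    strcpy(stmt->sqlstate, "HY000");
  }

  static int read_binary_rows_sketch(MYSQL_STMT *stmt)
  {
    MYSQL *mysql = stmt->mysql;

    if (!mysql)
    {
      /* Auto-reconnect invalidated the prepared statement and cleared its
         connection handle, so fail cleanly instead of dereferencing NULL. */
      stmt_set_error_sketch(stmt, CR_SERVER_LOST);
      return 1;
    }
    /* ... read the binary rows from the network as before ... */
    return 0;
  }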
In the embedded server we use result->alloc to store field data for the
result, but we didn't clean result->alloc if the query returned
an empty recordset. Cleaning for the empty recordset is now enabled.
The C optimizer may decide that data access operations
through a pointer of a different type are not related to
the original data (strict aliasing).
This is what happens in fetch_long_with_conversion(),
when called as part of mysql_stmt_fetch(): it tries
to check for truncation errors by first storing float
(and other types of) data into a char * buffer and then
accessing it through a float pointer.
This is done to prevent the effects of excess precision
when using FPU registers.
However, the doublestore() macro converts a double pointer
to a union pointer, which violates the strict aliasing rule.
Fixed by making the intermediary variables volatile (so as
not to re-introduce the excess precision bug) and using
the intermediary value instead of the char * buffer.
Note that there can be loss of precision for both signed
and unsigned 64-bit integers converted to double and back,
so the check must stay there (even for compatibility
reasons).
Based on the excellent analysis in bug 28400.
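A self-contained illustration of the pattern (toy code, not the libmysql source): keep the value in a volatile intermediary of the target type instead of reading it back through a differently-typed pointer.

  #include <stdio.h>

  /* Returns non-zero when converting the value to double and back loses
     precision, which is what the truncation check needs to detect. */
  static int longlong_to_double_truncates(long long value)
  {
    volatile double d = (double) value;  /* volatile: avoid excess FPU precision */
    return (long long) d != value;       /* no char* punning, so no aliasing issue */
  }

  int main(void)
  {
    printf("%d\n", longlong_to_double_truncates(9007199254740993LL)); /* 2^53+1 -> 1 */
    printf("%d\n", longlong_to_double_truncates(1024LL));             /* exact  -> 0 */
    return 0;
  }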
(Part of fix for Bug#25621 Error in my_thread_global_end(): 1 threads didn't exit)
Give a correct error message if an InnoDB table is not found
(This allows us to drop an InnoDB table that is not in the InnoDB registry)
- The "mysql client in mysqld"(which is used by
replication and federated) should use alarms instead of setting
socket timeout value if the rest of the server uses alarm. By
always calling 'my_net_set_write_timeout'
or 'my_net_set_read_timeout' when changing the timeout value(s), the
selection whether to use alarms or timeouts will be handled by
ifdef's in those two functions.
- Move declaration of 'vio_timeout' into "vio_priv.h"
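A minimal sketch of the pattern, with stand-in types and names since the real declarations live in private headers; the point is that the alarm-vs-socket-timeout decision sits behind a single #ifdef in one place:

  /* Stand-ins for NET and vio_timeout(); illustrative only. */
  struct net_sketch { void *vio; unsigned int read_timeout; };

  static void vio_timeout_sketch(void *vio, int for_write, unsigned int timeout)
  {
    (void) vio; (void) for_write; (void) timeout; /* would set the socket timeout */
  }

  static void my_net_set_read_timeout_sketch(struct net_sketch *net,
                                             unsigned int timeout)
  {
    net->read_timeout = timeout;       /* remembered for the alarm-based path */
  #ifdef NO_ALARM
    if (net->vio)
      vio_timeout_sketch(net->vio, 0 /* read */, timeout); /* socket-timeout path */
  #endif
  }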
- Always reset error when calling mysql_stmt_prepare a second time
- Set stmt->state to MYSQL_STMT_INIT_DONE before closing prepared stmt in server.
- Add test to mysql_client_test
- Remove mysql_stmt_close in mysqltest after each query
- Close all open statements in mysqltest if disable_ps_protocol is called.
- Removed unused variables and functions
- Added #ifdef around code that is not used
- Renamed variables and functions to avoid conflicts
- Removed some unused arguments
Fixed some class/struct warnings in ndb
Added define IS_LONGDATA() to simplify code in libmysql.c
I ran gcov on the changes and added 'purecov' comments on almost all lines that were not just variable name changes
(Mostly in DBUG_PRINT() and unused arguments)
Fixed a bug in the query cache when used with tracing (--with-debug)
Fixed memory leak in mysqldump
Removed warnings from mysqltest scripts (replaced -- with #)
mysql_stmt_affected_rows():
The problem was that affected_rows for a prepared statement wasn't updated
in the client library on error. The solution is to always update
affected_rows, which will be equal to -1 on error.
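A usage sketch of the new behaviour; the error reporting around it is illustrative:

  #include <mysql.h>
  #include <stdio.h>

  static void report_execute(MYSQL_STMT *stmt)
  {
    if (mysql_stmt_execute(stmt))
    {
      /* affected_rows is now refreshed on failure too, so this reads
         (my_ulonglong) -1 rather than a stale count from an earlier call. */
      my_ulonglong rows = mysql_stmt_affected_rows(stmt);
      fprintf(stderr, "execute failed: %s (affected rows = %llu)\n",
              mysql_stmt_error(stmt), (unsigned long long) rows);
    }
  }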
The bug appeared when calling a stored procedure (SP) from the C API and
was caused by a lack of checks for misuse in mysql_real_query.
A stored procedure always returns at least one result, which is the
status of execution of the procedure itself.
This result, or so-called OK packet, is similar to a result
returned by INSERT/UPDATE/CREATE operations: it contains the overall
status of execution, the number of affected rows and the number of
warnings. The client test program attached to the bug did not read this
result and invoked the next query. In turn, libmysql had no check for
such a scenario, and mysql_real_query simply tried to send that query
without reading the pending response, thus messing up the communication
protocol.
The fix is to return an error from mysql_real_query when it's called
prior to retrieval of all pending results.
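A sketch of the client pattern the check enforces (the procedure name is a placeholder): every pending result of the CALL must be consumed before the next query, otherwise mysql_real_query() now returns an error (typically "commands out of sync") instead of corrupting the protocol.

  #include <mysql.h>

  static int call_and_drain(MYSQL *mysql)
  {
    if (mysql_real_query(mysql, "CALL p1()", 9))
      return 1;
    do
    {
      MYSQL_RES *res = mysql_store_result(mysql);
      if (res)
        mysql_free_result(res);              /* row result set */
      /* else: an OK packet (the procedure status), nothing to free */
    } while (mysql_next_result(mysql) == 0); /* 0 = another result follows */
    return 0;
  }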
When using a parameter bind of MYSQL_TYPE_DATE in a prepared statement,
the time part of the MYSQL_TIME buffer was zeroed by
mysql_stmt_execute(). The param_store_date() function in libmysql.c
worked directly on the provided buffer.
Changed it to use a copy of the buffer.
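A sketch of the change, using the function name from the text above with a stand-in serializer; the essential point is the local copy:

  #include <mysql.h>

  static void serialize_date_sketch(const MYSQL_TIME *tm) { (void) tm; } /* stand-in */

  static void param_store_date(MYSQL_BIND *param)
  {
    MYSQL_TIME tm = *(MYSQL_TIME *) param->buffer; /* copy, don't touch the bind buffer */
    tm.hour = tm.minute = tm.second = 0;           /* a DATE has no time part */
    tm.second_part = 0;
    serialize_date_sketch(&tm);                    /* caller's MYSQL_TIME stays intact */
  }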
There actually were 3 different problems:
- hash_user_connections wasn't cleaned
- one strdup'ed database name wasn't freed
- stmt->mem_root wasn't cleaned, as it was
  replaced with mysql->field_alloc for the result
For the last one, I made the library use the stmt's own
fields to store the result in that case.