Bug#53417 my_getwd() makes assumptions on the buffer sizes which do not always hold true
The mysys library contains many functions for rewriting file paths. Most of these
functions make implicit assumptions about the sizes of the buffers they write to. If a path
is passed to my_realpath() it will propagate to my_getwd(), which assumes that the buffer
holding the path name is larger than two bytes. This is not always true.
In the special case where a VARBIN_ITEM is passed as an argument to the LOAD_FILE function
this can lead to a crash.
This patch fixes the issue by introducing more safeguards against buffer overruns.
This is the 5.1 merge and extension of the fix.
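As a rough illustration of the kind of guard added (a self-contained sketch, not the actual my_getwd() code; the function and parameter names below are made up):
  #include <cstddef>
  #include <cstring>
  // Refuse to write when the caller's buffer is too small, instead of
  // silently assuming a minimum buffer size as the old code effectively did.
  static bool safe_copy_path(char *dst, std::size_t dst_size,
                             const char *path, std::size_t path_len)
  {
    if (dst_size < path_len + 1)   // need room for the path and its '\0'
      return false;
    std::memcpy(dst, path, path_len);
    dst[path_len] = '\0';
    return true;
  }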
The server was happily accepting paths in table names in all places where a table
name is accepted (e.g. a SELECT). This allowed any user with some
privilege over some database to read all tables in all databases on all
MySQL server instances that the server's file system has access to.
Fixed by:
1. making sure no path elements are allowed in a quoted table name when
constructing the path (note that path symbols are still valid in table names
when they're properly escaped by the server).
2. checking the #mysql50# prefixed names the same way they're checked for
path elements in mysql-5.0.
When issuing a 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER' statement, the previous
position along with the new position is dumped into the error log. Namely,
the following information is printed out: skip_counter, group_relay_log_name
and group_relay_log_pos.
When issuing a 'CHANGE MASTER TO' statement, key elements of the previous
state, namely the host, port, the master_log_file and the master_log_pos
are dumped into the error log.
Iterative patch improvement. The previously committed patch
caused wrong results on Windows. The previous patch also
broke secure_file_priv for symlinks, since not all file
paths which must be compared against this variable were
normalized using the same norm.
The server variable opt_secure_file_priv wasn't
normalized properly, which caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to interpret the
--secure-file-priv option differently.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
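A self-contained sketch of the intended behaviour (illustrative only, not the server's actual check; the function and parameter names are made up):
  #include <string>
  // Both the configured directory and the target path are assumed to have
  // been normalized once, at server initialization.
  static bool path_is_allowed(const std::string &normalized_priv_dir,
                              const std::string &normalized_target)
  {
    if (normalized_priv_dir.empty())   // empty --secure-file-priv lifts the restriction
      return true;
    // Otherwise only files inside the secure directory are accepted.
    return normalized_target.compare(0, normalized_priv_dir.size(),
                                     normalized_priv_dir) == 0;
  }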
The server was not checking the table name supplied to COM_FIELD_LIST
for validity and compliance with acceptable table name standards.
Fixed by checking the table name for compliance, similarly to how it's
normally checked by the parser, and returning an error message if
it's not compliant.
WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as top
level SELECTs.
The server could be tricked into reading packets indefinitely if it
received a packet larger than the maximum size of one packet.
This problem is aggravated by the fact that it can be triggered
before authentication.
The solution is to not skip big packets for non-authenticated
sessions. If a big packet is sent before a session is
authenticated, an error is returned and the connection is closed.
Problem: "COM_FIELD_LIST is an old command of the MySQL server, before there was real move to only
SQL. Seems that the data sent to COM_FIELD_LIST( mysql_list_fields() function) is not
checked for sanity. By sending long data for the table a buffer is overflown, which can
be used deliberately to include code that harms".
Fix: check incoming data length.
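A minimal sketch of such a check (illustrative names only, not the actual command-dispatch code), combining the length bound with a crude validity test of the incoming table name:
  #include <cstddef>
  static bool table_name_ok(const char *name, std::size_t len,
                            std::size_t max_len)
  {
    if (len == 0 || len > max_len)          // incoming data length check
      return false;
    for (std::size_t i = 0; i < len; i++)   // reject path elements and embedded NULs
      if (name[i] == '/' || name[i] == '\\' || name[i] == '\0')
        return false;
    return true;
  }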
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
of sync
In RBR, sometimes the table->s->last_null_bit_pos can be zero. This
has an impact on the slave when it compares records fetched from the
storage engine against records in the binary log event. If
last_null_bit_pos is zero, the slave, while comparing in the
log_event.cc:record_compare function, would set all bits in the last
null_byte to 1 (assuming all 8 were unused). Hence it would lose the
ability to distinguish records that were similar in contents except
for the fact that some field was NULL in one record but not in the
other. Ultimately this would cause wrong matches, and in the specific
case depicted in the bug report the same record would be updated
twice, resulting in a lost update.
Additionally, in the record_compare function the slave was setting the
X bit unconditionally. There are cases in which the X bit does not exist
in the record header. This could also lead to wrong matches between
records.
We fix both by conditionally setting the bits: (i) the unused null_bits
are set only if last_null_bit_pos > 0; (ii) the X bit is set only if
HA_OPTION_PACK_RECORD is in use.
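A self-contained sketch of the conditional bit setting described above (the function and parameter names are illustrative, not the real record_compare() code):
  #include <cstdint>
  #include <cstddef>
  // Normalize the null-bits bytes of a row image before a byte-wise compare.
  static void normalize_null_bits(uint8_t *null_bytes, std::size_t n_null_bytes,
                                  unsigned last_null_bit_pos,
                                  bool record_has_x_bit)
  {
    if (n_null_bytes == 0)
      return;
    // (i) Force the unused trailing bits of the last null byte to 1 only when
    //     some bits really are unused; when last_null_bit_pos == 0 the whole
    //     byte carries real null flags and must be left alone.
    if (last_null_bit_pos > 0)
      null_bytes[n_null_bytes - 1] |=
        static_cast<uint8_t>(256U - (1U << last_null_bit_pos));
    // (ii) Set the X bit only when the record format actually has one
    //      (this depends on the HA_OPTION_PACK_RECORD table option).
    if (record_has_x_bit)
      null_bytes[0] |= 1U;
  }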
The server variable opt_secure_file_priv wasn't
normalized properly, which caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to interpret the
--secure-file-priv option differently.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
Potential deadlock situation involving LOCK_plugin,
LOCK_global_system_variables and LOCK_status.
This patch backports the fix from next-mr, unlocking
LOCK_plugin before calling plugin->init() and
add_status_vars().
Arg_comparator initializes the 'comparators' array in case of
ROW comparison and does not free this array on destruction.
This leads to memory leaks.
The fix:
- added an Arg_comparator::cleanup() method which frees
the 'comparators' array;
- added an Item_bool_func2::cleanup() method which calls
the Arg_comparator::cleanup() method.
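A minimal sketch of the ownership pattern (not the real class definitions; only the members relevant to the leak are shown, and the bodies are assumptions):
  class Arg_comparator {
    Arg_comparator *comparators;            // per-column comparators for ROW values
  public:
    Arg_comparator() : comparators(nullptr) {}
    void alloc_row_comparators(unsigned n) { comparators = new Arg_comparator[n]; }
    void cleanup()                          // new: release the array on item cleanup
    {
      delete [] comparators;
      comparators = nullptr;
    }
  };
  class Item_bool_func2 {
    Arg_comparator cmp;
  public:
    void cleanup() { cmp.cleanup(); }       // new: forward cleanup to the comparator
  };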
union...order by (select... where...)
The problem is that MySQL tries to materialize and
cache the scalar subqueries at JOIN::optimize
even for EXPLAIN, where the number of columns is
totally different from what's expected.
Fixed by not executing the scalar subqueries
for EXPLAIN.
to cleanup open connections
It was possible to UNINSTALL a storage engine plugin while the binding
between a THD object and the storage engine was still active (e.g. in
the middle of a transaction).
To avoid unclean deactivation (uninstall) of a storage engine plugin
in the middle of a transaction, an additional storage engine plugin
lock is acquired by thd_set_ha_data().
If ha_data is not NULL and the storage engine plugin was not locked
by thd_set_ha_data() in this connection before, the storage engine
plugin gets locked.
If ha_data is NULL and the storage engine plugin was locked by
thd_set_ha_data() in this connection before, the storage engine
plugin lock gets released.
If handlerton::close_connection() didn't reset ha_data, the server does
it immediately after calling handlerton::close_connection().
Note that this is just a framework fix; storage engines must switch
from thd_ha_data() to thd_set_ha_data() if they see fit.
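A self-contained sketch of the locking rule described above (the Plugin and Session types and the set_ha_data() helper are illustrative stand-ins, not the server's real structures):
  #include <cassert>
  struct Plugin {                     // stand-in for a plugin reference count
    int refs = 0;
    void lock()   { ++refs; }
    void unlock() { assert(refs > 0); --refs; }
  };
  struct Session {                    // stand-in for a THD's per-engine slot
    void *ha_data = nullptr;          // engine-private data
    bool  pinned  = false;            // did this session lock the engine plugin?
  };
  // Mirrors the rule: lock on the first non-NULL ha_data, release the lock
  // when ha_data is reset to NULL.
  static void set_ha_data(Session &s, Plugin &engine, void *ha_data)
  {
    if (ha_data && !s.pinned)      { engine.lock();   s.pinned = true;  }
    else if (!ha_data && s.pinned) { engine.unlock(); s.pinned = false; }
    s.ha_data = ha_data;
  }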
on LOAD DATA
Two problems:
1. LOAD DATA was not checking for SQL errors and was sending an OK
packet even when errors had already been reported. Fixed to check for
SQL errors in addition to the error conditions already detected.
2. There was an over-ambitious assert() on the server to check that the
protocol is always followed by the client. This can cause crashes on
debug servers when clients do not complete the protocol exchange for some
reason (e.g. the --send command in mysqltest). Fixed by keeping the assert
only on the client side, since the server always completes the protocol
exchange.
We should disable const subselect item evaluation because
the subselect transformation does not happen in view_prepare_mode
and thus the val_...() methods cannot be called.
The problem is that we cannot use make_cond_for_table().
This function relies on the used_tables() condition,
which is not set properly for subqueries.
As a result the subquery is not filtered out.
The fix is to use the remove_eq_conds() function instead
of make_cond_for_table(). The remove_eq_conds()
algorithm relies on the const_item() value, which
allows subqueries to be handled in the right way.
Procedure, while DECIMAL works
Selecting the CONCAT(...<SP variable>...) result into
a user variable may return wrong data.
Item_func_concat::val_str contains a number of memory
allocation-saving tricks. One of them concatenates
strings in place, inserting the value of one string
at the beginning of the other string. However,
this trick didn't take into account strings that point
to the same data buffer: this is possible when
a CONCAT() parameter is a stored procedure variable -
Item_sp_variable::val_str() uses the intermediate
Item_sp_variable::str_value field, where it may
store a reference to an external buffer.
The Item_func_concat::val_str function has been
modified to take into account val_str functions
(such as Item_sp_variable::val_str) that return
a pointer to an internal Item member variable
that may reference the buffer provided.
on index
The 'my_decimal' class has two members which can be used to access the
value. The member variable buf (inherited from the parent class decimal_t)
is set to the member variable buffer so that both point to the same storage.
Item_copy_decimal::copy() uses memcpy to clone 'my_decimal'. The member
buffer is declared as an array and memcpy results in copying the values
of the array, but the inherited member buf, which should point at
the beginning of the array 'buffer', keeps pointing to the beginning of
the buffer in the original object (which is being cloned). Further updates on
the 'my_decimal' update only the inherited member 'buf' but leave
buffer unchanged.
Later, when the new object (which now holds an inconsistent value) is cloned
again using the proper cloning function 'my_decimal2decimal', the buf pointer
is fixed, resulting in the loss of the current value.
Using my_decimal2decimal instead of memcpy in Item_copy_decimal::copy()
fixed this problem.
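A self-contained sketch of the aliasing problem described above (the Decimal types below are illustrations, not the server's decimal_t/my_decimal classes):
  #include <cstring>
  struct DecimalBase { int *buf; };    // plays the role of decimal_t::buf
  struct Decimal : DecimalBase {       // plays the role of my_decimal
    int buffer[9];
    Decimal() { buf = buffer; }        // buf must point into *this* object
  };
  static void demonstrate()
  {
    Decimal src, dst;
    src.buffer[0] = 42;
    // Bug: after memcpy, dst.buf still points at src.buffer, so updates made
    // through dst.buf change src and never reach dst.buffer.
    std::memcpy(&dst, &src, sizeof(Decimal));
    dst.buf[0] = 7;                    // modifies src.buffer[0], not dst.buffer[0]
    // A correct clone must re-point buf at the copy's own array, which is
    // what using my_decimal2decimal() instead of memcpy achieves.
  }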
The problem was that a syntactically invalid trigger could cause
the server to crash when trying to list triggers. The crash would
happen due to a mishap in the backup/restore procedure that should
protect parser items which are not associated with the trigger. The
backup/restore is used to isolate the parse tree (and context) of
a statement from the load (and parsing) of a trigger. In this case,
an error during the parsing of a trigger could cause an improper
backup/restore sequence.
The solution is to properly restore the original statement context
before the parser is exited due to syntax errors in the trigger body.
during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated it has two sets of fields: fields required for
checks of conditions and fields to be updated. A storage engine is allowed
not to retrieve the columns marked for update. Due to this fact records can't
be compared to see whether the data has been changed or not. This makes the
server always update records regardless of whether the data changed.
Now when an auto-updatable timestamp field is present and the server sees that
a table handler isn't going to retrieve write-only fields, all such
fields are marked to be read to force the handler to retrieve them.
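A self-contained sketch of the read-set adjustment described above (std::bitset stands in for the server's column bitmaps; the names are illustrative):
  #include <bitset>
  typedef std::bitset<64> ColumnSet;   // stand-in for a column bitmap
  // When an auto-updated TIMESTAMP is present, the old and new rows must be
  // comparable, so every column marked for write is also marked for read.
  static void force_read_of_updated_columns(ColumnSet &read_set,
                                            const ColumnSet &write_set,
                                            bool has_auto_update_timestamp)
  {
    if (has_auto_update_timestamp)
      read_set |= write_set;
  }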
The problem is that the message resource (message.rc) is compiled as part of the static library
sql.lib rather than with the executable mysqld.exe. Resource files do not work in static
libraries.
The fix is to add message.rc to the mysqld.exe source files list.
Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric fields with a non-default display width modifier specified.
Fix: compare the numeric fields' storage lengths when we decide whether
they can be considered 'equal' for table alteration purposes (for example,
INT(10) and INT(11) have the same storage length and should not force a copy).
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed, and result in
inconsistent error messages (reporting a duplicate plugin or
that the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
Problem: EXPLAIN EXTENDED was trying to resolve references to
freed temporary table fields for GROUP_CONCAT()'s ORDER BY arguments.
Fix: use the stored original GROUP_CONCAT() arguments in such a case.
The crash is the result of an attempt made by JOIN::optimize to evaluate
the WHERE condition when no records have actually been read.
The fix is to remove the erroneous 'outer_join' variable check.
The crash happens because greedy_search
cannot determine the best plan due to
wrong inner table dependencies. These
dependencies affect the join table sorting,
which is performed before greedy_search starts.
In our case the table which really has no
dependencies should be put at the top of the
list, but this does not happen as the inner
tables appear to have no dependencies as well.
The fix is to exclude the RAND_TABLE_BIT mask from
the condition which checks whether table dependencies
should be updated.
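A sketch of the masking idea only (the constant value below is a placeholder, not the server's real RAND_TABLE_BIT definition, and the helper is illustrative):
  #include <cstdint>
  typedef uint64_t table_map;
  static const table_map RAND_TABLE_BIT = table_map(1) << 63;  // placeholder value
  // A table that depends only on RAND() has no real table dependencies and
  // should still be treated as dependency-free when sorting join tables.
  static bool has_real_dependencies(table_map dependent)
  {
    return (dependent & ~RAND_TABLE_BIT) != 0;
  }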
CHECK_FIELD_IGNORE was treated as CHECK_FIELD_ERROR_FOR_NULL;
UPDATE...SET...NULL on NOT NULL fields behaved differently after
a trigger.
Now the code distinguishes between IGNORE and ERROR_FOR_NULL and saves/restores
the check-field options.
When mysqlbinlog was given the --database=X flag, it always printed
'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not
printed. The replication filter (replicate-do/ignore-db) and the binlog
filter (binlog-do/ignore-db) have the same problem. They are solved
together in this patch.
After this patch, we always check whether the query is a 'SAVEPOINT'
statement or not. Because this is a literal check, 'SAVEPOINT' and
'ROLLBACK TO' statements are also binlogged in uppercase and with no
comments.
Binlogs written before this patch can be handled correctly except for the
case where comments appear in front of the keywords, for example:
/* bla bla */ SAVEPOINT a;
/* bla bla */ ROLLBACK TO a;
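A minimal sketch of such a literal keyword check (illustrative only, not the actual filtering code):
  #include <cstring>
  // A literal, case-sensitive prefix check: this is why the statements are
  // written to the binlog in uppercase and without leading comments.
  static bool is_savepoint_query(const char *query)
  {
    return std::strncmp(query, "SAVEPOINT", 9) == 0;
  }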
The log event of 'CREATE EVENT' was being binlogged with garbage
at the end of the query if 'CREATE EVENT' was followed by another SQL statement
and they were executed as one command, for example:
DELIMITER |;
CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY DO SELECT 1; SELECT 'a';
DELIMITER ;|
When binlogging 'CREATE EVENT', we always create a new statement with the definer
and write it into the log event. The new statement is made from cpp_buf (the
preprocessed buffer), which is not a C string (it is not terminated with '\0'),
but it was copied as a C string.
In this patch, cpp_buf is copied with its length.
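A self-contained sketch of the difference (illustrative; copy_query is not the server's real function):
  #include <cstddef>
  #include <string>
  // cpp_buf is not '\0'-terminated at the end of the first statement, so the
  // copy must use the statement length rather than C-string semantics.
  static std::string copy_query(const char *cpp_buf, std::size_t stmt_length)
  {
    // Wrong: std::string(cpp_buf) would run past the statement and pick up
    // trailing garbage (such as the following "SELECT 'a'" above).
    return std::string(cpp_buf, stmt_length);   // right: copy exactly stmt_length bytes
  }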
The crash happens because of an incorrect max_length calculation
in the QUOTE function (due to overflow). max_length is set
to 0 and this leads to an assertion failure.
The fix is to cast the expression result to an
ulonglong variable and adjust it if the
result exceeds MAX_BLOB_WIDTH.
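A self-contained sketch of the overflow-safe calculation (the MAX_BLOB_WIDTH value below is an assumption for the sketch, and quote_max_length is an illustrative helper, not the server's code):
  #include <cstdint>
  #include <algorithm>
  static const uint64_t MAX_BLOB_WIDTH = 16777216;   // assumed value for the sketch
  // Do the arithmetic in 64 bits so that 2 * arg_max_length + 2 cannot wrap
  // around to a small value (or 0) in a 32-bit max_length field.
  static uint32_t quote_max_length(uint32_t arg_max_length)
  {
    uint64_t result = uint64_t(arg_max_length) * 2 + 2;
    return uint32_t(std::min(result, MAX_BLOB_WIDTH));
  }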