Examined rows are counted for every join part. The per-join-part
counter was incremented over all iterations, while the result
variable was replaced at the end of every iteration. Hence the final
result was the number of rows examined by the join part that finished
its execution last; the counts of all other join parts were lost.
Now we reset the per-join-part counter before every iteration and
add it to the result variable at the end of the iteration. That
way we get the sum over all iterations of all join parts.
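A minimal sketch of the fixed accumulation pattern; all names are
invented for illustration and none of this is the actual server code:

  #include <cstdio>

  struct JoinPart { unsigned long long examined_rows; };

  // Pretend to execute one iteration of a join part, counting rows.
  static void run_join_part(JoinPart &p, int rows)
  {
    for (int i = 0; i < rows; i++)
      p.examined_rows++;                 // the per-join-part counter
  }

  int main()
  {
    JoinPart parts[2] = {{0}, {0}};
    unsigned long long total = 0;        // the result variable
    for (int iter = 0; iter < 3; iter++)
      for (JoinPart &p : parts)
      {
        p.examined_rows = 0;             // reset before every iteration
        run_join_part(p, 10);
        total += p.examined_rows;        // add, instead of replacing
      }
    // total is now the sum over all iterations of all join parts (60),
    // not just the count of whichever part finished last.
    printf("%llu\n", total);
    return 0;
  }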
No test case. Testing this needs a look into the slow query log.
I don't know of a way to do this portably with the test suite.
set. This has always worked because when the flag is != 0,
HA_VAR_LENGTH_KEY is always set. Therefore, a test case cannot
reveal the faulty behavior.
Fix for bug#23074: typo in myisam/sort.c
The bug is present only in 4.1, will be null-merged to 5.0
For InnoDB, check the value of thd->transaction.all.innodb_active_trans
instead of thd->transaction.stmt.innobase_tid to see whether we really
need to roll back the statement.
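A sketch of the changed check with invented types; only the member
names come from the text above, and the real decision sits inside the
server's handler code:

  struct StmtTrans { unsigned long innobase_tid; };
  struct AllTrans  { bool innodb_active_trans; };
  struct Transaction { AllTrans all; StmtTrans stmt; };
  struct THD { Transaction transaction; };

  bool need_innodb_rollback(const THD *thd)
  {
    // before: return thd->transaction.stmt.innobase_tid != 0;
    return thd->transaction.all.innodb_active_trans;   // after the fix
  }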
The problem was that during statement re-execution, if the result was
empty, the old result could be returned for group functions.
The solution is to implement a proper cleanup() method in group
functions.
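A sketch of the idea; the class and its members are illustrative, not
the actual group-function code:

  // A group function that keeps a cached result between executions.
  class GroupFunc
  {
    long long value;
    bool      null_value;
  public:
    GroupFunc() : value(0), null_value(true) {}
    void add(long long v) { value += v; null_value = false; }
    // Called between executions of a prepared statement: drop all
    // state so an empty result cannot hand back the previous value.
    void cleanup() { value = 0; null_value = true; }
  };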
When the client program had its stdout file descriptor closed by the calling
shell, after some amount of work (enough to fill a socket buffer) the server
would complain about a packet error and then disconnect the client.
This is a serious security problem. If stdout is closed before mysql is
exec()d, then the first socket() call allocates file number 1 to
communicate with the server. Subsequent write()s to that file number
(as when printing results that come back from the database) go to the
server instead, on the command channel. So one should be able to craft
data which, upon being selected back from the server to the client and
injected into the command stream, becomes valid MySQL protocol that
does something nasty when sent /back/ to the server.
The solution is to explicitly close the file descriptor that we
*printf() to, so that the libc layer and the OS layer both agree that
the file is closed.
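A minimal demonstration of the descriptor-reuse hazard on a POSIX
system; this is a sketch, not the client code:

  #include <cstdio>
  #include <sys/socket.h>
  #include <unistd.h>

  int main()
  {
    close(1);                                  // what `mysql 1>&-` sees
    int fd = socket(AF_INET, SOCK_STREAM, 0);  // very likely returns 1
    // stdout still targets descriptor 1, which is now the socket; had
    // it been connected to a server, this output would go down the
    // command channel instead of to the terminal.
    printf("leaked into the socket\n");
    close(fd);
    return 0;
  }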
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it rebuilds not only all indexes
but also the data file.
Non-quick parallel repair works with one thread per index; the first
of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. It then built the new index so that
its entries pointed to the correct record positions. But the other
threads didn't know the new record positions and put the positions
from the old data file into the index.
In the new design a shared io cache is filled by the first thread
(the data file writer) with the new contiguous records and read by
the other threads, so they now know the new record positions.
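A minimal sketch of that producer/consumer idea, assuming pthreads;
this is not the actual IO_CACHE implementation:

  #include <pthread.h>
  #include <string.h>

  struct SharedCache
  {
    pthread_mutex_t lock;
    pthread_cond_t  more;     // signalled when new bytes are visible
    char           *buf;      // image of the new data file
    size_t          written;  // bytes published by the writer thread
  };

  // First thread: append a repaired record and publish its position.
  void cache_write(SharedCache *c, const char *rec, size_t len)
  {
    pthread_mutex_lock(&c->lock);
    memcpy(c->buf + c->written, rec, len);
    c->written += len;
    pthread_cond_broadcast(&c->more);
    pthread_mutex_unlock(&c->lock);
  }

  // Index threads: block until the record at [pos, pos+len) exists.
  void cache_read(SharedCache *c, size_t pos, char *out, size_t len)
  {
    pthread_mutex_lock(&c->lock);
    while (c->written < pos + len)
      pthread_cond_wait(&c->more, &c->lock);
    memcpy(out, c->buf + pos, len);
    pthread_mutex_unlock(&c->lock);
  }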
Another problem was that the parallel repair of compressed tables
used a common bit_buff and rec_buff. I changed it so that
thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
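In the same spirit, a sketch of the per-thread state; the member
names echo the text above, but the struct is invented for
illustration:

  // Each repair thread owns its decompression state instead of
  // sharing one bit_buff/rec_buff across all threads.
  struct RepairThreadCtx
  {
    unsigned char *bit_buff;   // per-thread decompression bit buffer
    unsigned char *rec_buff;   // per-thread unpacked-record buffer
    unsigned long  checksum;   // per-thread sum, combined at the end
  };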
to run the whole test suite to find which test cases need to be
checked more carefully and then just "copy and paste" the suspicious
test case names into a new mysql-test-run.pl command.
Move the code to look for exe_mysqld earlier => to initial_setup
Fix warnings detected by running with "diagnostics"
Remove unused option "opt_result_ext"
Init "path_ndb_examples_dir"
Add checking of REDO earlier during SR
so that take-over of a node can be performed
if it can't be restarted using logs
(which btw is really weird... as it _should_ be able to use the logs
of the other node in the node group).
Otherwise the cluster could be started while one fragment on one node
had not been restored, making the cluster inconsistent. VERY BAD.
This is an addition to the fix for bug#21617. Valgrind reports an
error when opening a merge table that has underlying tables with
fewer indexes than the merge table itself.
Copy at most min(file->keys, table->key_parts) elements from the
rec_per_key array. This fixes problems when the merge table and its
subtables have different numbers of keys.
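A sketch of the bounded copy; the names inside min() come from the
text above, while the function and its signature are invented for
illustration:

  #include <string.h>

  static inline size_t my_min(size_t a, size_t b) { return a < b ? a : b; }

  // Never read past the end of the shorter rec_per_key array.
  void copy_rec_per_key(unsigned long *dst, const unsigned long *src,
                        size_t file_keys, size_t table_key_parts)
  {
    memcpy(dst, src, my_min(file_keys, table_key_parts) * sizeof(*dst));
  }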
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by the current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
Improve 'run_testcase_need_slave_restart' to detect whether a slave
restart really is necessary. So far all rpl tests require a slave
restart, but for all other tests it can be skipped.
Improve the sort order used by --reorder