Still leakage; make sure all unlinked operations are put back so they will be released
(on failing blob operations, when AO_IgnoreError)
ndb/src/ndbapi/NdbConnection.cpp:
Still leakage; make sure all unlinked operations are put back so they will be released
bugs.
ndb/include/util/UtilBuffer.hpp:
Fix access to memory after free() when called with the source and destination
pointers being the same (which should not really happen...).
Fixes a problem in ndb_restore.
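A minimal sketch of the failure mode and the guard, assuming a hypothetical assign() that frees its old buffer before copying; the real UtilBuffer code differs:

    #include <cstdlib>
    #include <cstring>

    // Illustrative buffer showing the src == dst hazard (error handling omitted).
    struct Buf {
      void*  data = nullptr;
      size_t len  = 0;

      // Broken ordering would be: free(data) first, then memcpy() -- if
      // src == data, the copy reads memory that was just freed.
      // Guarded variant: treat self-assignment as a no-op and copy into
      // fresh storage before freeing the old block.
      void assign(const void* src, size_t n) {
        if (src == data) { len = n; return; }  // self-assignment: nothing to copy
        void* p = malloc(n);
        memcpy(p, src, n);
        free(data);
        data = p;
        len = n;
      }
    };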
ndb/src/common/util/SimpleProperties.cpp:
Fix typo in check of maxValue.
into willster.(none):/home/stewart/Documents/MySQL/4.1/bug19914-mk2-merge2
sql/ha_myisammrg.cc:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
sql/sql_select.cc:
Auto merged
Fixes for ndb_* tests broken by the previous fix.
Be more careful in ndb about setting errors on failure of the info() call (especially
in open()).
sql/ha_ndbcluster.cc:
Fix some ndb* tests failing due to the fix for 19914.
Be more careful about setting errors on failure of the info() call.
sql/ha_ndbcluster.h:
Fix some ndb* tests failing due to the fix for 19914.
Be more careful about setting errors on failure of the info() call.
Fix some too-small buffers in backup
ndb/include/kernel/ndb_limits.h:
backport for 5.1
add MAX_WORDS_META_FILE for computing Backup::NO_OF_PAGES_META_FILE
ndb/src/kernel/blocks/backup/Backup.cpp:
Make sure to set maxInsert so that we can actually handle NO_OF_META_PAGES
ndb/src/kernel/blocks/backup/Backup.hpp:
backport for 5.1
add MAX_WORDS_META_FILE for computing Backup::NO_OF_PAGES_META_FILE
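A hedged sketch of the kind of derivation involved; the constant names and page size below are illustrative stand-ins, not the actual backup constants:

    // Illustrative only: derive a page count from a word budget, rounding up
    // so the meta-file buffer is never sized too small.
    static const unsigned WORDS_PER_PAGE   = 8192;            // assumed page size in 32-bit words
    static const unsigned MAX_WORDS_META   = 2 * 1024 * 1024; // stand-in for MAX_WORDS_META_FILE
    static const unsigned NO_OF_PAGES_META =
        (MAX_WORDS_META + WORDS_PER_PAGE - 1) / WORDS_PER_PAGE; // ceiling division
    // maxInsert would then be set high enough to cover NO_OF_PAGES_META pages.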
Fixed a 4.1/5.0 vs. 5.1 name change in latest SR bug fix
ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Fixed a 4.1/5.0 vs. 5.1 name change in latest SR bug fix
Fix monster SR bug that broke SR with ordered indexes (or temporary tables)
ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Fix monster SR bug that broke SR with ordered indexes (or temporary tables)
Make sure postExecute is not run for blobs if AO_IgnoreError
ndb/src/ndbapi/NdbConnection.cpp:
With AO_IgnoreError, error codes aren't always set on individual operations, making postExecute impossible
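A minimal sketch of the guard being described, with illustrative types and names; the actual NdbConnection logic is more involved:

    // Blob post-processing is skipped when the transaction runs with
    // AO_IgnoreError, since individual operations may not carry a usable
    // error code in that mode. Types below are stand-ins.
    enum AbortOption { AO_AbortOnError, AO_IgnoreError };

    struct BlobOp {
      BlobOp* next;
      bool postExecute();   // hypothetical per-operation hook
    };

    int runPostExecute(BlobOp* first, AbortOption abortOption) {
      if (abortOption == AO_IgnoreError)
        return 0;                              // skip: per-operation error state is unreliable
      for (BlobOp* op = first; op != nullptr; op = op->next)
        if (!op->postExecute())
          return -1;
      return 0;
    }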
Repair table could crash the server if there is not sufficient
memory (myisam_sort_buffer_size) to operate. This affects not only
repair, but also all statements that use create index by sort:
repair by sort, parallel repair, bulk insert.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
myisam/sort.c:
maxbuffer is the number of BUFFPEKs used for repair. It is calculated
as records / keys, where keys is the number of keys that can be stored
in memory (myisam_sort_buffer_size). There must be sufficient
memory to store both the BUFFPEKs and the keys; this was checked
correctly before this patch. However, there is another
requirement that wasn't checked: there must be sufficient
memory for at least one key per BUFFPEK, otherwise repair
by sort/parallel repair cannot operate.
Return an error if there is not sufficient memory to store at
least one key per BUFFPEK.
Also fixed a memory leak when thr_find_all_keys returns an error.
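A hedged sketch of the added check with simplified names; the real code in myisam/sort.c works on BUFFPEK structures and the sort parameters:

    #include <cstdint>

    // Sketch only: decide whether the sort buffer is large enough.
    // 'keys'      = keys that fit in myisam_sort_buffer_size
    // 'maxbuffer' = number of BUFFPEKs, roughly records / keys
    bool sort_buffer_big_enough(uint64_t records, uint64_t keys) {
      if (keys == 0)
        return false;                        // cannot hold even a single key
      uint64_t maxbuffer = records / keys;   // number of BUFFPEKs needed
      return keys >= maxbuffer;              // at least one key per BUFFPEK
    }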
mysql-test/r/repair.result:
A test case for BUG#23175.
mysql-test/t/repair.test:
A test case for BUG#23175.
When resolving unqualified name references MySQL was not
checking the item type of the reference. Thus
e.g. a string literal item, whose name is by convention
equal to its string value, would also work as a reference to
a SELECT list item or a table field.
Fixed by allowing only Item_ref or Item_field items to be referenced by
(unqualified) name.
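A minimal sketch of such a type check with a stand-in Item class; the item kinds follow the server's naming, but the resolution loop itself is simplified:

    // Stand-in Item hierarchy for illustration only.
    struct Item {
      enum Type { FIELD_ITEM, REF_ITEM, STRING_ITEM /* ... */ };
      virtual Type type() const = 0;
      virtual ~Item() {}
    };

    // Only real column references and field items may be matched by an
    // unqualified name; a string literal whose name happens to match is not.
    bool resolvable_by_name(const Item* item) {
      Item::Type t = item->type();
      return t == Item::FIELD_ITEM || t == Item::REF_ITEM;
    }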
mysql-test/r/func_gconcat.result:
Bug #14019: group by converts literal string to column name
- removed non-deterministic test case: ORDER BY a constant
means no order.
mysql-test/r/group_by.result:
Bug #14019: group by converts literal string to column name
- test case
mysql-test/t/func_gconcat.test:
Bug #14019: group by converts literal string to column name
- removed non-deterministic test case: ORDER BY a constant
means no order.
mysql-test/t/group_by.test:
Bug #14019: group by converts literal string to column name
- test case
sql/sql_base.cc:
Bug #14019: group by converts literal string to column name
- resolve unqualified by-name references only for real references
into moonlight.intranet:/home/tomash/src/mysql_ab/mysql-4.1-bug9678
BitKeeper/deleted/.del-lib_vio.c~d779731a1e391220:
Use local.
include/violite.h:
Use local.
sql/net_serv.cc:
Use local.
vio/viosocket.c:
Use local.
Fix race condition between COPY_GCIREQ (GCP) and lcpSetActiveStatusEnd.
The solution is to _not_ copy sysfileData from a COPY_GCIREQ coming from "self".
ndb/src/kernel/blocks/ERROR_codes.txt:
Add error insert for delaying of the sysfileData copy
ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
1) Add error insert for delaying of sysfileData
2) Change so that the master does _not_ copy sysfileData from COPY_GCIREQ,
as it might be updating it while COPY_GCIREQ is in flight
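A rough sketch of the second point, using illustrative names and a simplified signal layout; the real COPY_GCIREQ handling in DbdihMain.cpp is considerably larger:

    // When the COPY_GCIREQ originates from this node (the master), do not
    // overwrite the local sysfileData with the copy carried in the signal,
    // since the master may already have updated it while the signal was in
    // flight. Names and layout below are stand-ins.
    struct CopyGciReq {
      unsigned senderNodeId;
      unsigned sysfileData[32];   // illustrative payload size
    };

    void execCOPY_GCIREQ(const CopyGciReq& req, unsigned ownNodeId,
                         unsigned* localSysfileData, unsigned words) {
      if (req.senderNodeId != ownNodeId) {
        for (unsigned i = 0; i < words; i++)
          localSysfileData[i] = req.sysfileData[i];   // copy only from other nodes
      }
      // ... continue normal COPY_GCIREQ processing ...
    }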
hangs on Linux
If REPAIR TABLE ... USE_FRM is issued for a table located in a database other
than the default one, a server crash could happen.
In reopen_name_locked_table, take the database name from table_list (user-specified
or default database) instead of from thd (default database).
Affects 4.1 only.
mysql-test/r/repair.result:
A test case for BUG#22562.
mysql-test/t/repair.test:
A test case for BUG#22562.
sql/sql_base.cc:
In reopen_name_locked_table, take the database name from table_list (user-specified
or default database) instead of from thd (default database).
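A minimal sketch of the intent; the real path construction and the reopen_name_locked_table() signature in sql_base.cc differ:

    #include <cstddef>
    #include <cstdio>

    // Build the table path from the database recorded in the TABLE_LIST entry
    // (user-specified or the default at parse time), not from thd->db, which
    // may point at a different current database. Types are stand-ins.
    struct TableListEntry { const char* db; const char* table_name; };

    void table_path(char* buf, size_t len, const TableListEntry* tl) {
      // illustrative "<datadir>/<db>/<table>" construction
      snprintf(buf, len, "%s/%s/%s", "./data", tl->db, tl->table_name);
    }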
Examined rows are counted for every join part. The per-join-part
counter was incremented over all iterations. The result variable
was replaced at the end of every iteration. The final result was
the number of examined rows by the join part that ended its
execution last. The counts of the other join parts were
lost.
Now we reset the per-join-part counter before every iteration and
add it to the result variable at the end of the iteration. That
way we get the sum of all iterations of all join parts.
No test case. Testing this needs a look into the slow query log.
I don't know of a way to do this portably with the test suite.
sql/sql_select.cc:
Bug#12240 - Rows Examined in Slow Log showing incorrect number?
Fixed resetting and accumulation of examined rows counts.
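A schematic of the fix, with hypothetical field names; the real counters live in the server's JOIN structures:

    // Reset the per-join-part counter before each iteration and add it to
    // the total at the end of the iteration, so no iteration's count is lost.
    struct JoinPart { unsigned long examined_rows; };

    unsigned long run_join(JoinPart* parts, int nparts, int iterations) {
      unsigned long total_examined = 0;
      for (int it = 0; it < iterations; it++) {
        for (int i = 0; i < nparts; i++) {
          parts[i].examined_rows = 0;                 // reset, not carried over
          // ... execute this join part, incrementing examined_rows ...
          total_examined += parts[i].examined_rows;   // accumulate per iteration
        }
      }
      return total_examined;
    }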
into chilla.local:/home/mydev/mysql-4.1-bug8283-one
myisam/mi_check.c:
Auto merged
myisam/mi_packrec.c:
Auto merged
myisam/sort.c:
Auto merged
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Manual merge
OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
parallel repair. This means that it does not only rebuild all
indexes, but also the data file.
Non-quick parallel repair uses one thread per index. The first
of these threads also rebuilds the new data file.
The problem was that all threads shared the read io cache on the
old data file. If there were holes (deleted records) in the table,
the first thread skipped them, writing only contiguous, non-deleted
records to the new data file. Then it built the new index so that
its entries pointed to the correct record positions. The other
threads, however, didn't know the new record positions and put the positions
from the old data file into the index.
In the new design there is a shared io cache which is filled
by the first thread (the data file writer) with the new contiguous
records and read by the other threads. Now they know the new record
positions.
Another problem was that the parallel repair of compressed
tables used a common bit_buff and rec_buff. I changed this so
that thread-specific buffers are used for parallel repair.
A similar problem existed for checksum calculation. I made this
multi-thread safe too.
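A small sketch of the buffer change for compressed tables, using an illustrative struct rather than the real st_mi_sort_param:

    #include <cstdint>
    #include <vector>

    // Each repair thread gets its own bit/record buffers and checksum
    // accumulator instead of sharing one set, so concurrent decompression
    // and checksumming cannot clobber another thread's data.
    struct SortParam {
      std::vector<uint8_t> rec_buff;   // per-thread record buffer
      std::vector<uint8_t> bit_buff;   // per-thread bit buffer
      uint32_t             calc_checksum;
    };

    void init_sort_params(std::vector<SortParam>& params, size_t nthreads,
                          size_t rec_size, size_t bit_size) {
      params.resize(nthreads);
      for (size_t i = 0; i < params.size(); i++) {
        params[i].rec_buff.resize(rec_size);
        params[i].bit_buff.resize(bit_size);
        params[i].calc_checksum = 0;
      }
    }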
include/my_sys.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
include/myisam.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of checksum calculation in mi_check.c.
'calc_checksum' is now in myisamdef.h:st_mi_sort_param.
myisam/mi_check.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Implemented a new parallel repair design.
Using a synchronized shared read/write cache.
Allowed for thread specific bit_buff, rec_buff, and calc_checksum.
myisam/mi_open.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added DBUG output.
myisam/mi_packrec.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Allowed for thread specific bit_buff and rec_buff.
myisam/myisamdef.h:
Bug#8283 - OPTIMIZE TABLE causes data loss
Commented on checksum calculation variables.
Allowed for thread specific bit_buff.
Added DBUG output for better table crash detection.
myisam/sort.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added implications of the new parallel repair design.
Renamed 'info' -> 'sort_param'.
Added DBUG output.
mysql-test/r/myisam.result:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test results.
mysql-test/t/myisam.test:
Bug#8283 - OPTIMIZE TABLE causes data loss
Added test cases.
mysys/mf_iocache.c:
Bug#8283 - OPTIMIZE TABLE causes data loss
Redesign of io_cache_share.
A writer can now synchronize itself with the readers
of a shared cache. When all threads have joined in the lock,
the writer copies the data from its write buffer to the shared
read buffer.
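A stripped-down sketch of that rendezvous using plain pthread barriers; the real io_cache_share code uses its own lock structure:

    #include <pthread.h>
    #include <cstdio>

    enum { BLOCK = 4, BLOCKS = 3, READERS = 2 };

    static int shared_buf[BLOCK];                    // the shared read buffer
    static pthread_barrier_t filled, consumed;

    // Writer: produce one block, let every reader see it, then continue.
    static void* writer(void*) {
      for (int b = 0; b < BLOCKS; b++) {
        for (int i = 0; i < BLOCK; i++)
          shared_buf[i] = b * BLOCK + i;             // copy from the "write buffer"
        pthread_barrier_wait(&filled);               // block is ready for readers
        pthread_barrier_wait(&consumed);             // wait until all have read it
      }
      return 0;
    }

    // Readers: each sees every block, i.e. the new record positions.
    static void* reader(void* arg) {
      long id = (long)arg;
      for (int b = 0; b < BLOCKS; b++) {
        pthread_barrier_wait(&filled);
        long sum = 0;
        for (int i = 0; i < BLOCK; i++) sum += shared_buf[i];
        printf("reader %ld saw block %d, sum %ld\n", id, b, sum);
        pthread_barrier_wait(&consumed);
      }
      return 0;
    }

    int main() {
      pthread_barrier_init(&filled, 0, READERS + 1);
      pthread_barrier_init(&consumed, 0, READERS + 1);
      pthread_t w, r[READERS];
      pthread_create(&w, 0, writer, 0);
      for (long i = 0; i < READERS; i++)
        pthread_create(&r[i], 0, reader, (void*)i);
      pthread_join(w, 0);
      for (int i = 0; i < READERS; i++) pthread_join(r[i], 0);
      return 0;
    }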
Move the checking of REDO to earlier during SR
so that take-over of a node can be performed
if it can't be restarted using logs
(which, btw, is really weird... as it _should_ be able to use the logs of another node in the node group).
Otherwise the cluster could be started with one fragment on one node not restored,
making the cluster inconsistent, VERY BAD
ndb/src/kernel/blocks/dbdih/Dbdih.hpp:
Break out the methods that search for REDO for a fragment, so they can be used earlier during SR
ndb/src/kernel/blocks/dbdih/DbdihMain.cpp:
Move the checking of REDO to earlier during SR
so that take-over of a node can be performed
if it can't be restarted using logs
(which, btw, is really weird... as it _should_ be able to use the logs of another node in the node group)
This is an addition to the fix for bug21617. Valgrind reports an error when
opening a merge table that has underlying tables with fewer indexes than
the merge table itself.
Copy at most min(file->keys, table->key_parts) elements from the rec_per_key array.
This fixes problems when the merge table and its subtables have different numbers of keys.
sql/ha_myisammrg.cc:
Copy at most min(file->keys, table->key_parts) elements from the rec_per_key array.
This fixes problems when the merge table and its subtables have different numbers of keys.
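A minimal sketch of the bounded copy with simplified types; the real code fills the table's per-key statistics from the MyISAM handler:

    #include <algorithm>
    #include <cstring>

    // Copy no more rec_per_key entries than both sides can hold, so a MERGE
    // table with more keys than an underlying table never reads past the
    // smaller array. Parameter names are illustrative.
    void copy_rec_per_key(unsigned long*       dst, unsigned dst_key_parts,
                          const unsigned long* src, unsigned src_keys) {
      unsigned n = std::min(src_keys, dst_key_parts);
      memcpy(dst, src, n * sizeof(unsigned long));
    }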
Though this is not a storage-engine-specific problem, I was able to
repeat it with the BDB and NDB engines only. That was the
reason to add a test case to ndb_update.test. As a result of the bug,
different bad things could happen:
BDB removed duplicate rows, which is not expected;
NDB returned an error.
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
mysql-test/r/ndb_update.result:
A test case for bug#21381.
mysql-test/t/ndb_update.test:
A test case for bug#21381.
sql/sql_update.cc:
For multi-table UPDATE, notify the storage engine about UPDATE IGNORE
as is done in single-table UPDATE.
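A hedged sketch of the notification with a stand-in handler class; handler::extra() and HA_EXTRA_IGNORE_DUP_KEY mirror the server's handler interface, but the multi-update context is simplified:

    // Before a multi-table UPDATE IGNORE starts, tell each table's storage
    // engine to ignore duplicate-key errors, the same way the single-table
    // UPDATE path does. Values and classes below are illustrative.
    enum { HA_EXTRA_IGNORE_DUP_KEY = 1, HA_EXTRA_NO_IGNORE_DUP_KEY = 2 };

    struct handler {
      virtual int extra(int operation) = 0;   // engine-specific hint hook
      virtual ~handler() {}
    };

    void prepare_for_multi_update(handler** files, int ntables, bool ignore_errors) {
      for (int i = 0; i < ntables; i++)
        files[i]->extra(ignore_errors ? HA_EXTRA_IGNORE_DUP_KEY
                                      : HA_EXTRA_NO_IGNORE_DUP_KEY);
    }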