Enable InnoDB to autoextend the table space if necessary (required to
actually pass the sql-bench without aborting with "table space full")
Build-tools/Do-compile:
- Enable InnoDB to autoextend the table space if necessary (required to
actually pass the sql-bench without aborting with "table space full")
mysys/my_init.c:
Fix the TZ environment variable bug that caused 100% CPU usage
sql/mysqld.cc:
Added optional NT service
sql/nt_servc.cc:
Added optional NT service
sql/nt_servc.h:
Added optional NT service
Docs/manual.texi:
ChangeLog
sql/field.h:
New virtual function to set a field to null or signal an error
sql/field_conv.cc:
New function to set a field to null or signal an error
sql/item.cc:
When setting a field to null internally (for WHERE testing), don't autoconvert NULL -> now() or last_insert_id() (see the sketch after this entry)
sql/item.h:
New virtual function to set a field to null or signal an error
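A minimal sketch of the idea behind this changeset, with invented names (not the actual Field/Item code): a NOT NULL TIMESTAMP field normally turns an assigned NULL into the current time, which is right when storing data but wrong when the optimizer only wants to test a WHERE condition against NULL, so the field needs a second entry point that stores a real NULL or signals an error instead.

    /* Hypothetical illustration only; names and layout are invented. */
    #include <stdbool.h>
    #include <time.h>

    struct timestamp_field {
      bool maybe_null;              /* whether the column allows real NULLs */
      bool is_null;
      time_t value;
    };

    /* Used when actually storing data: NULL silently becomes "now". */
    void set_null_with_conversion(struct timestamp_field *f)
    {
      f->is_null = false;
      f->value = time(NULL);
    }

    /* Used for internal WHERE testing: keep the NULL or signal an error,
       but never substitute now() or last_insert_id(). */
    bool set_null_or_error(struct timestamp_field *f)
    {
      if (!f->maybe_null)
        return false;               /* caller must treat this as an error */
      f->is_null = true;
      return true;
    }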
Fix bug: the range estimator greatly exaggerated the size of a small range if the paths in the B-tree happened to branch at a high level
innobase/btr/btr0cur.c:
Fix bug: the range estimator greatly exaggerated the size of a small range if the paths in the B-tree happened to branch at a high level
Build-tools/Do-compile:
- fixed the mistake that broke the 3.23.53-Max binaries: of course
"--with-innodb" has to be added when requested (will be part of the
3.23.53a packages now)
Fix bug: WHERE column_name = key_column_name was evaluated as true
for NULL values (see the sketch at the end of this entry).
Docs/manual.texi:
Changelog
mysql-test/r/distinct.result:
Updated results caused by bug fix.
mysql-test/r/null_key.result:
New tests
mysql-test/t/null_key.test:
New tests
sql/sql_select.cc:
Additional change to the previous changeset for using BLOBs in GROUP BY
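The NULL-key fix above follows SQL three-valued logic: NULL is not equal to anything, not even another NULL. A hedged sketch of the rule, with invented names (not the actual key-comparison code):

    #include <stdbool.h>

    struct nullable_long {
      bool is_null;
      long value;
    };

    /* Returns true only when both sides are non-NULL and equal, so a row
       with a NULL key column can never satisfy
       "column_name = key_column_name". */
    static bool key_equals(struct nullable_long row_value,
                           struct nullable_long key_value)
    {
      if (row_value.is_null || key_value.is_null)
        return false;             /* NULL = anything is not true */
      return row_value.value == key_value.value;
    }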
Don't initialize memory areas when run with --skip-safemalloc.
Docs/manual.texi:
ChangeLog
heap/heapdef.h:
Allocate HEAP blocks in smaller blocks to get better memory utilization and more speed when used with safemalloc.
heap/hp_open.c:
Allocate HEAP blocks in smaller blocks to get better memory utilization and more speed when used with safemalloc.
mysys/safemalloc.c:
Don't initialize memory areas when run with --skip-safemalloc.
This can in some cases increase speed by a factor of 20 when debugging
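A hedged sketch of why skipping initialization matters, with an invented flag name and allocator (not the real safemalloc code): the debugging allocator fills each new block with a pattern to catch reads of uninitialized memory, and touching every byte of every allocation is what makes it slow.

    #include <stdlib.h>
    #include <string.h>

    static int skip_safemalloc = 0;   /* assumed to be set from --skip-safemalloc */

    void *debug_malloc(size_t size)
    {
      void *block = malloc(size);
      if (block != NULL && !skip_safemalloc)
        memset(block, 0xA5, size);    /* fill with a recognizable pattern */
      return block;
    }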
- bumped up version number to 3.23.54 in configure.in
- replaced Docs/LICENSE with Docs/MySQLEULA.txt and modified
scripts/make_binary_distribution.sh and Build-tools/mysql-copyright*
accordingly.
BitKeeper/deleted/.del-LICENSE~4cfaff8de837acb8:
Delete: Docs/LICENSE
Build-tools/mysql-copyright-2:
- replaced LICENSE with MySQLEULA.txt
Build-tools/mysql-copyright:
- use "tar" instead of "gtar"
- replaced LICENSE with MySQLEULA.txt
configure.in:
- Bumped up version number to 3.23.54 now that 3.23.53 has been
tagged
scripts/make_binary_distribution.sh:
- replaced LICENSE with MySQLEULA.txt
Fix compilation error on HP-UX-11: pthread_t is a scalar there, not a struct as it is on HP-UX-10.20
innobase/os/os0thread.c:
Fix compilation error on HP-UX-11: pthread_t is a scalar there, not a struct as it is on HP-UX-10.20
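A minimal sketch of the portability issue, assuming a hypothetical HPUX10 build macro (not the actual os0thread.c code): when pthread_t is a struct, two thread ids cannot be compared with ==, so the bytes have to be compared instead.

    #include <pthread.h>
    #include <string.h>

    int os_threads_are_equal(pthread_t a, pthread_t b)
    {
    #ifdef HPUX10
      return memcmp(&a, &b, sizeof(pthread_t)) == 0;  /* struct: compare bytes */
    #else
      return a == b;                                  /* scalar: direct compare */
    #endif
    }

POSIX provides pthread_equal() for exactly this comparison, but code that stores pthread_t values and compares them directly has to handle the struct case itself.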
myisam/mi_open.c:
Fixed a problem with an incorrectly calculated max_data_file_length
mysql-test/Makefile.am:
Added missing .require test files
scripts/mysqlhotcopy.sh:
Remove trailing / from directory names (portability fix)
tests/grant.res:
Update of test results
(possibly also fixes the binlog filename corruption problems; they have
not been reproduced since)
sql/log.cc:
Fixed a race caused by calling MYSQL_LOG::is_open() outside of the critical section (see the sketch after this entry).
sql/sql_parse.cc:
Added missing arguments to calls to MYSQL_LOG::new_file(bool)
BitKeeper/etc/logging_ok:
Logging to logging@openlogging.org accepted
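A hedged sketch of the race pattern fixed above, using generic names rather than the real MYSQL_LOG members: checking whether the log is open before entering the critical section lets another thread close or rotate the log in between, so the check and the write must both happen while holding the same mutex.

    #include <pthread.h>
    #include <stdio.h>

    struct binary_log {
      pthread_mutex_t lock;
      FILE *file;                       /* NULL while the log is closed */
    };

    void log_write_event(struct binary_log *log, const char *event)
    {
      pthread_mutex_lock(&log->lock);   /* enter the critical section first */
      if (log->file != NULL)            /* then check, then act */
        fprintf(log->file, "%s\n", event);
      pthread_mutex_unlock(&log->lock);
    }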
SHOW INNODB STATUS always showed average bytes read as 0 on Unix
innobase/os/os0file.c:
SHOW INNODB STATUS always showed average bytes read as 0 on Unix
Do not let the range estimator return more than 1/2 of the total rows in the table; use longlong in range estimation
btr0cur.h, ha_innobase.cc:
Use longlong in range estimation, in case there are > 4 billion rows
sql/ha_innobase.cc:
Use longlong in range estimation, in case there are > 4 billion rows
innobase/include/btr0cur.h:
Use longlong in range estimation, in case there are > 4 billion rows
innobase/btr/btr0cur.c:
Do not let the range estimator return more than 1/2 of the total rows in the table; use longlong in range estimation
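An illustrative sketch of both points, not the actual btr0cur.c logic: an estimate built by multiplying per-level record counts can pass 2^32 long before the table does, so a 64-bit longlong is used, and the result is capped at half of the rows in the table.

    typedef long long longlong;   /* 64 bits on the platforms of interest */

    longlong estimate_rows_in_range(const longlong *records_per_level,
                                    int n_levels,
                                    longlong total_rows_in_table)
    {
      longlong estimate = 1;
      int i;

      for (i = 0; i < n_levels; i++)
        estimate *= records_per_level[i];    /* would wrap in a 32-bit ulong */

      if (estimate > total_rows_in_table / 2)
        estimate = total_rows_in_table / 2;  /* never more than 1/2 of the table */
      return estimate;
    }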