The old code added to 10.6 was inconsistent in how TCP/IP and
socket connections were chosen. In some cases one also got a
confusing warning.
Examples:
> ../client/mysql --print-defaults
../client/mysql would have been started with the following arguments:
--socket=/tmp/mariadbd.sock --port=3307 --no-auto-rehash
> ../client/mysql
ERROR 2002 (HY000): Can't connect to local server through socket '/tmp/mariadbd.sock' (2)
> ../client/mysql --print-defaults
../client/mysql would have been started with the following arguments:
--socket=/tmp/mariadbd.sock --port=3307 --no-auto-rehash
> ../client/mysql --port=3333
WARNING: Forcing protocol to TCP due to option specification. Please explicitly state intended protocol.
ERROR 2002 (HY000): Can't connect to server on 'localhost' (111)
> ../client/mysql --port=3333 --socket=sss
ERROR 2002 (HY000): Can't connect to local server through socket 'sss' (2)
> ../client/mysql --socket=sss --port=3333
ERROR 2002 (HY000): Can't connect to local server through socket 'sss' (2)
Some notable things:
- One gets a warning if one uses just --port and the config file sets a socket
- Using port and socket gives no warning
- Using socket and then port still uses socket
This patch changes things the following ways:
- If --port= is given on the command line, the protocol is automatically
changed to "TCP/IP".
- If --socket= is given on the command line, the protocol is automatically
changed to "socket".
- The last option wins
- No warning is given if protocol changes automatically.
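As a rough stand-alone illustration of the rule above (not the actual
client/mysql.cc code; the option names and the Protocol enum here are only
for illustration), the selection boils down to a last-one-wins scan of the
command line:

  #include <string>
  #include <vector>

  enum class Protocol { DEFAULT, TCP, SOCKET };

  // Sketch: the last --port/--socket option on the command line decides the
  // protocol; the change is silent (no warning is printed).
  static Protocol pick_protocol(const std::vector<std::string> &args)
  {
    Protocol proto= Protocol::DEFAULT;
    for (const std::string &arg : args)
    {
      if (arg.rfind("--port=", 0) == 0)
        proto= Protocol::TCP;              // --port forces TCP/IP
      else if (arg.rfind("--socket=", 0) == 0)
        proto= Protocol::SOCKET;           // --socket forces a unix socket
    }
    return proto;                          // last option wins
  }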
do_shutdown_server(): After sending SIGKILL, invoke wait_until_dead().
Thanks to Sergei Golubchik for pointing out that the previous fix
does not actually work.
do_shutdown_server(): Call wait_until_dead() also when we are forcibly
killing the process (timeout=0). We have evidence that killing
the process may take some time and cause mystery failures in
crash recovery tests. For InnoDB, several failures were observed between
commit da094188f6 and
commit 0ee1082bd2
when no advisory file locking was being used by default.
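The intended behavior can be sketched as follows (an outline only, assuming a
simple poll-based helper; the real mysqltest code uses its own
wait_until_dead() implementation):

  #include <signal.h>
  #include <unistd.h>

  // Sketch of a "wait until the process is gone" helper; returns 0 once
  // kill(pid, 0) reports that the process no longer exists.
  static int wait_until_dead_sketch(pid_t pid, int timeout_seconds)
  {
    for (int i= 0; i < timeout_seconds * 10; i++)
    {
      if (kill(pid, 0) != 0)
        return 0;                      // process is gone
      usleep(100000);                  // poll every 100 ms
    }
    return 1;                          // still alive after the timeout
  }

  // The point of the fix: wait for the process even on the SIGKILL path
  // (timeout == 0), otherwise crash recovery tests may race with the dying
  // server.
  static void shutdown_server_sketch(pid_t pid, int timeout_seconds)
  {
    if (timeout_seconds == 0 ||
        wait_until_dead_sketch(pid, timeout_seconds) != 0)
    {
      kill(pid, SIGKILL);
      (void) wait_until_dead_sketch(pid, 60);
    }
  }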
These are mainly internal files, so this is a low-impact change.
The few scripts/mysql*sql files were renamed to use the mariadb_*
prefix in the name.
mysql-test is renamed to mariadb-test in the final packages.
The tests innodb.import_tablespace_race, innodb.restart, and innodb.innodb-wl5522 move
the tablespace file between the data directory and the tmp directory specified by
global environment variables. However, this is risky because it is not unusual for the
configured tmp directory (often under /tmp) to be mounted on another disk partition or
device, in which case the 'move_file' command may fail with "Errcode: 18 'Invalid
cross-device link.'"
To stabilize mysqltest in the described scenario, and prevent such
behavior in the future, let make_file() check both the source and the
destination file paths and make sure they are both under either
MYSQLTEST_VARDIR or MYSQL_TMP_DIR.
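An illustrative version of such a check (the real mysqltest implementation
differs; the helper names below are made up for this sketch):

  #include <cstdlib>
  #include <cstring>
  #include <string>

  // True if 'path' starts with the value of the environment variable 'env'.
  static bool under_dir(const std::string &path, const char *env)
  {
    const char *dir= getenv(env);
    return dir && path.compare(0, strlen(dir), dir) == 0;
  }

  // Refuse the move unless both the source and the destination are under
  // MYSQLTEST_VARDIR or MYSQL_TMP_DIR, so a test can never attempt a
  // cross-device rename.
  static bool move_allowed(const std::string &from, const std::string &to)
  {
    return (under_dir(from, "MYSQLTEST_VARDIR") || under_dir(from, "MYSQL_TMP_DIR")) &&
           (under_dir(to, "MYSQLTEST_VARDIR") || under_dir(to, "MYSQL_TMP_DIR"));
  }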
All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new license.
I am contributing on behalf of my employer Amazon Web Services, Inc.
While performing SAST scanning using Cppcheck against the source code of
commit 81196469, several code vulnerabilities were found.
Fix the following issues:
1. Parameters of a `snprintf` call are incorrect.
Cppcheck error:
client/mysql_plugin.c:1228: error: snprintf format string requires 6 parameters but only 5 are given.
This is because commit 630d7229 introduced the option `--lc-messages-dir`
in the bootstrap command, but the corresponding parameter was not added
to the `snprintf` call after the format string was changed.
Fix:
Restructure the code logic and correct the function parameters for
`snprintf`.
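The bug class can be illustrated as follows (this is not the actual
mysql_plugin.c code, only a simplified example; the function and parameter
names are made up):

  #include <cstddef>
  #include <cstdio>

  // After adding --lc-messages-dir to the format string, every %s must have
  // a matching argument, otherwise snprintf reads a missing parameter
  // (undefined behavior, which is what Cppcheck reports).
  void build_cmd_example(char *buf, size_t len, const char *tool,
                         const char *datadir, const char *basedir,
                         const char *lc_messages_dir)
  {
    // Wrong: four %s directives, only three arguments.
    //   snprintf(buf, len, "%s --datadir=%s --basedir=%s --lc-messages-dir=%s",
    //            tool, datadir, basedir);
    // Fixed: the new directive gets its matching argument.
    snprintf(buf, len, "%s --datadir=%s --basedir=%s --lc-messages-dir=%s",
             tool, datadir, basedir, lc_messages_dir);
  }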
2. A null pointer is used in a `snprintf` call, which could cause a crash.
Cppcheck error:
extra/mariabackup/xbcloud.cc:2534: error: Null pointer dereference
The code intended to print the swift_project name when
opt_swift_project_id is NULL but opt_swift_project is not NULL.
However, the `snprintf` parameter mistakenly used
`opt_swift_project_id`.
Fix:
Change to use the correct string from `opt_swift_project`.
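A simplified reproduction of the pattern (not the xbcloud.cc code itself;
names are illustrative):

  #include <cstddef>
  #include <cstdio>

  // When project_id is NULL we must format the project *name*; formatting
  // the NULL id (as the old code did) dereferences a null pointer.
  void format_project(char *buf, size_t len,
                      const char *project_id, const char *project_name)
  {
    if (project_id != NULL)
      snprintf(buf, len, "project_id=%s", project_id);
    else if (project_name != NULL)
      snprintf(buf, len, "project=%s", project_name);  // was wrongly project_id
  }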
3. Potential double release of memory
Cppcheck error:
plugin/auth_pam/testing/pam_mariadb_mtr.c:69: error: Memory pointed to by 'resp' is freed twice.
The pointer `resp` is reused and allocated new memory after it has been
freed. However, `resp` was not set to NULL after being freed.
This can lead to a double release of the same pointer if the callback
function does not allocate new memory for `resp`.
Fix:
Set the `resp` pointer to NULL after the first free() to make sure
the same address is not freed twice.
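A minimal illustration of the fix (the real code is the conversation loop
in pam_mariadb_mtr.c; the helper below is invented for the example):

  #include <cstdlib>

  // Stand-in for the callback: it may or may not allocate a new buffer.
  static void *fake_callback(bool allocate)
  {
    return allocate ? calloc(1, 64) : NULL;
  }

  void conversation_sketch(bool callback_allocates)
  {
    void *resp= calloc(1, 64);
    /* ... first round uses resp ... */
    free(resp);
    resp= NULL;                               // the actual fix
    resp= fake_callback(callback_allocates);  // may leave resp as NULL
    /* ... second round ... */
    free(resp);                               // safe: free(NULL) is a no-op
  }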
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
   to more optimization possibilities. Additionally, it enabled detecting
   unused variables that had an empty default constructor but were not
   explicitly marked as such.
A particular change was required following this patch in sql/opt_range.cc:
result_keys, an unused variable of the template class Bitmap, now correctly
triggers an unused-variable warning.
Setting the Bitmap template class constructor to default allows the compiler
to see that instantiating the class has no side effects.
Previously the compiler could not issue the warning, as it assumed the Bitmap
class (being a template) might not be performing a no-op in its default
constructor. This prevented the unused-variable warning.
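The effect can be shown with a stand-alone toy class (the real class is the
Bitmap template in the server sources; this sketch only mirrors its shape):

  // With an empty user-provided constructor ("BitmapSketch() {}") the
  // compiler must assume construction may have side effects and stays
  // silent about unused variables.  With "= default" it can prove the
  // object is unused and emits -Wunused-variable.
  template <unsigned N> class BitmapSketch
  {
    unsigned long long buffer[(N + 63) / 64];
  public:
    BitmapSketch() = default;        // previously: BitmapSketch() {}
  };

  void example()
  {
    BitmapSketch<64> result_keys;    // now triggers an unused-variable warning
  }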
Renames the upgrade state file, and ensures the old
file is properly removed when `mariadb-upgrade` tool is executed.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to a common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST
(a sketch of the new return type follows after this list). This makes it
easy to apply different cost modifiers in ha_..time() functions for io and
cpu costs.
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
of keys with a given number of ranges and an optional number of blocks that
need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to recompile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed. These fixes are in the following commits. To avoid having to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
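For reference, the shape of the new cost interface can be sketched roughly
as follows (the member and parameter names here are illustrative; the real
server definitions may differ):

  // The io and cpu parts are returned separately so that ha_...time()
  // wrappers can scale each part with engine- and system-specific factors.
  struct IO_AND_CPU_COST
  {
    double io;     // expected cost of disk accesses
    double cpu;    // expected cost of the in-memory work
  };

  // Illustrative engine-side estimate for a full table scan.
  static IO_AND_CPU_COST scan_time_sketch(double blocks, double rows,
                                          double disk_read_cost,
                                          double row_next_find_cost)
  {
    IO_AND_CPU_COST cost;
    cost.io=  blocks * disk_read_cost;       // one read per block that may hit disk
    cost.cpu= rows * row_next_find_cost;     // stepping over each row costs cpu
    return cost;
  }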
InnoDB changes:
- Updated InnoDB to not divide big range costs by 2.
- Added costs for InnoDB (innobase_update_optimizer_costs()); a schematic
sketch of such an engine cost hook follows after this list.
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
prevents the optimizer from trying to use index-only scans on
the clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
ha_innobase::rnd_pos_time() as the default engine cost functions now
work well for InnoDB.
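A schematic (and purely hypothetical) version of such an engine cost hook;
the structure members and the numbers below are placeholders, not the real
OPTIMIZER_COSTS fields or InnoDB values:

  struct OptimizerCostsSketch
  {
    double key_lookup_cost;
    double row_lookup_cost;
    double disk_read_ratio;
  };

  // The engine overrides only what differs from the generic defaults.
  static void innodb_update_optimizer_costs_sketch(OptimizerCostsSketch *costs)
  {
    costs->key_lookup_cost= 0.0008;   // illustrative value only
    costs->row_lookup_cost= 0.0008;   // rows live in the clustered index
    costs->disk_read_ratio= 0.02;     // assume most pages are in the buffer pool
  }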
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
the value that the user has given. This is used to change costs from
microseconds (user input) to milliseconds (what the server uses
internally); see the unit-conversion sketch after this list.
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are entered and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate usage space of keys in b-trees. (Before we used numeric
constants).
- Removed code that assumed that b-trees have similar costs to binary
trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path().
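The microsecond/millisecond handling mentioned above amounts to a simple
unit conversion; a toy illustration only (GET_ADJUSTED_VALUE and
Sys_var_optimizer_cost themselves are implemented differently):

  // The user reads and writes the cost variables in microseconds, while the
  // server stores and uses them in milliseconds.
  static double cost_from_user(double microseconds)
  {
    return microseconds / 1000.0;     // store internally in milliseconds
  }

  static double cost_to_user(double milliseconds)
  {
    return milliseconds * 1000.0;     // show to the user in microseconds
  }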
Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
default_optimizer_costs The default costs + changes from the
command line without an engine specifier.
heap_optimizer_costs Heap table costs, used for temporary tables
tmp_table_optimizer_costs The cost for the default on disk internal
temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
accesses the handler has a pointer to this. The cost is copied
to the table on first access. If one wants to change the cost one
must first update the global engine cost and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated the following way
(see sql/keycaches.cc for details; a schematic sketch follows at the end
of this section):
- Use 'default_optimizer_costs' as a base
- Call hton->update_optimizer_costs() to override with the engines
default costs.
- Override the costs that the user has specified for the engine.
- On handler open, copy the engine cost from the handlerton to the
TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
cost for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
structure. (Idea/Suggestion by Igor)
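The cost-initialization order for a new handlerton can be outlined like
this (the type and function names are placeholders, not the real server
symbols):

  struct EngineCosts { /* per-engine optimizer costs */ };

  // Base values plus any non-engine-specific command line changes.
  static EngineCosts default_optimizer_costs_sketch;

  static EngineCosts make_handlerton_costs(void (*engine_update)(EngineCosts*),
                                           void (*apply_user_overrides)(EngineCosts*))
  {
    EngineCosts costs= default_optimizer_costs_sketch; // 1) start from the defaults
    engine_update(&costs);                             // 2) engine supplies its own defaults
    apply_user_overrides(&costs);                      // 3) user-specified costs win
    return costs;                                      // later copied to the TABLE_SHARE on open
  }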