close table->update_handler in close_thread_tables().
It's not enough to do it in sql_update.cc only, because
sql_insert.cc can also do updates (REPLACE), and even
sql_delete.cc can (DELETE ... FOR PORTION OF).
when auto-adding a virtual LONG_UNIQUE_HASH_FIELD, fill in
a Virtual_column_info for it, so that fill_alter_inplace_info()
would know we're adding a virtual field (ALTER_ADD_VIRTUAL_COLUMN).
The bug manifested itself when executing a query with a materialized
view/derived/CTE whose specification was a SELECT query that contained
another materialized derived table, and an impossible WHERE/HAVING
condition was detected for this SELECT.
As soon as such a condition is detected, the join structures of all
derived tables used in the SELECT are destroyed. So optimization
of the queries specifying these derived tables is impossible. Besides,
it's not needed.
In 10.3, optimization of a materialized derived table is performed before
the detection of an impossible WHERE/HAVING condition in the embedding SELECT.
with UNION ALL after INTERSECT
EXPLAIN EXTENDED erroneously showed UNION instead of UNION ALL in
the warning if UNION ALL followed INTERSECT or EXCEPT operations.
The bug was in the function st_select_lex_unit::print() that printed
the text of the query used in the warning.
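An illustrative query shape (table names invented) that triggered the wrong printout:

EXPLAIN EXTENDED
SELECT a FROM t1 INTERSECT SELECT a FROM t2 UNION ALL SELECT a FROM t3;
-- the query text in the warning used to show UNION where UNION ALL was meant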
This patch implements an engine-independent unique hash index.
Usage:- a unique HASH index can be created automatically for a blob/varchar/text
column whose key length > handler->max_key_length(),
or it can be explicitly specified.
Automatic Creation:-
CREATE TABLE t1 (a blob unique);
Explicit Creation:-
CREATE TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART Representations:-
A long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1 (a blob, b blob, unique(a, b));)
1. User Given Representation:- the key_info->key_part array will be similar to what
the user has defined. So in the example it will have 2 key_parts (a, b).
2. Storage Engine Representation:- in this case there will be only one key_part and
it will point to the HASH_FIELD. This key_part will always be after the user defined
key_parts.
So:-
    User Given Representation       [a] [b] [hash_key_part]
    key_info->key_part -------------^
    Storage Engine Representation   [a] [b] [hash_key_part]
    key_info->key_part ---------------------^
table->s->key_info will have the User Given Representation, while table->key_info will
have the Storage Engine Representation. One representation can be changed into the other
by calling the re/setup_keyinfo_hash functions.
Working:-
1. When the user specifies a HASH index or the key_length is > handler->max_key_length(),
in mysql_prepare_create_table() one extra vfield is added (for each long unique key), and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are set (like fieldnr,
field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a
list of Item_fields; when an explicit length is given by the user, Item_func_left is
applied to the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key() is called, which
creates the hash key from table->record[0] and then calls ha_index_read_map; if a
duplicate hash is found, the result is compared field by field.
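An illustrative session (values invented) showing the duplicate check going through the hash:

CREATE TABLE t1 (a blob, unique(a));
INSERT INTO t1 VALUES (repeat('x', 100000));
INSERT INTO t1 VALUES (repeat('x', 100000));
-- the second insert is rejected as a duplicate: the hash of `a` matches an
-- existing row and the field-by-field comparison confirms a real duplicate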
After FLUSH PRIVILEGES, remember whether the connection was started under
--skip-grant-tables and keep it all-powerful, not a lowly anonymous one.
Such a connection can be used to reset passwords as needed.
Also fix a crash in SHOW CREATE USER.
post-merge changes:
* handle password expiration on old tables like everything else -
make changes in memory, even if they cannot be done on disk
* merge "debug" tests with non-debug tests, they don't use dbug anyway
* only run rpl password expiration in MIXED mode, it doesn't replicate
anything, so no need to repeat it thrice
* restore update_user_table_password() prototype, it should not change
ACL_USER, this is done in acl_user_update()
* don't parse json twice in get_password_lifetime and get_password_expired
* remove LEX_USER::is_changing_password, see if there was any auth instead
* avoid overflow in expiration calculations
* don't initialize Account_options in the constructor, it's bzero-ed later
* don't create ulong sysvars - they're not portable, prefer uint or ulonglong
* misc simplifications
This patch adds support for expiring user passwords.
The following statements are extended:
CREATE USER user@localhost PASSWORD EXPIRE [option]
ALTER USER user@localhost PASSWORD EXPIRE [option]
If no option is specified, the password is expired with immediate
effect. If the option is DEFAULT, the global policy applies according to
the default_password_lifetime system variable (if 0, the password never
expires; if N, the password expires every N days). If the option is NEVER,
the password never expires, and if the option is INTERVAL N DAY, the
password expires every N days.
The feature also supports the disconnect_on_expired_password system
var and the --connect-expired-password client option.
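Illustrative statements (account name invented):

CREATE USER foo@localhost PASSWORD EXPIRE INTERVAL 30 DAY;
ALTER USER foo@localhost PASSWORD EXPIRE NEVER;
ALTER USER foo@localhost PASSWORD EXPIRE;   -- expires with immediate effect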
Closes #1166
Optimized the code that removed multiple equalities pushed from HAVING
into WHERE. Now this removal is postponed until all multiple equalities
are eliminated in substitute_for_best_equal_field().
The variable controls the amount of sampling ANALYZE TABLE performs.
If ANALYZE TABLE with histogram collection is too slow, one can reduce the
time taken by setting analyze_sample_percentage to a lower percentage of the
total number of rows.
Setting it to 0 will use a formula to compute how many rows to sample:
the number of rows collected is capped to a minimum of 50000 and
increases logarithmically with a coefficient of 4096. The coefficient is
chosen so that we expect an error of less than 3% in our estimations
according to the paper:
"Random Sampling for Histogram Construction: How much is enough?"
-- Surajit Chaudhuri, Rajeev Motwani, Vivek Narasayya, ACM SIGMOD, 1998.
The drawback of sampling is that the avg_frequency number is computed
imprecisely and will yield a smaller number than the real one.
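For example (table name invented), to let the server derive the sample size from
the formula above:

SET SESSION analyze_sample_percentage= 0;
ANALYZE TABLE t1 PERSISTENT FOR ALL;   -- collects EITS statistics on a sample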
in the tree bb-10.4-mdev7486
The crash was caused by a problem similar to the one in mdev-16765:
Item_cond::excl_dep_on_group_fields_for_having_pushdown() was missing.
Change the defaults:
-histogram_size=0
+histogram_size=254
-histogram_type=SINGLE_PREC_HB
+histogram_type=DOUBLE_PREC_HB
Adjust the testcases:
- Some have ignorable changes in EXPLAIN outputs and
more counter increments due to EITS table reads.
- Testcases that meaningfully depend on the old defaults
are changed to use the old values.
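For example, a testcase that meaningfully depends on the old behaviour can pin
the old values:

SET histogram_size= 0;
SET histogram_type= 'SINGLE_PREC_HB';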
A condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on fields that are used in the GROUP BY list
or on fields that are equal to grouping fields.
Aggregate functions can't be pushed down.
How the pushdown is performed, by example:
SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of
the select that depends only on the fields that are used in the GROUP BY
list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
of the select
3. Remove cond from the HAVING clause if it is possible
The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().
New test file having_cond_pushdown.test is created.
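A self-contained illustration (table and data invented; c is assumed to be a
column of t1):

CREATE TABLE t1 (a int, b int, c int);
INSERT INTO t1 VALUES (1,10,5),(3,20,15),(3,25,10),(4,30,20);
EXPLAIN EXTENDED
SELECT t1.a, MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
-- the rewritten query in the EXPLAIN EXTENDED warning should show t1.a>2
-- moved into the WHERE clause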
Add server support for user account locking.
This patch extends the ALTER/CREATE USER statements for
denying a user's subsequent login attempts:
ALTER USER
user [, user2] ACCOUNT [LOCK | UNLOCK]
CREATE USER
user [, user2] ACCOUNT [LOCK | UNLOCK]
The SHOW CREATE USER statement was updated to display the
locking state of a user.
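Illustrative usage (account name invented):

CREATE USER foo@localhost ACCOUNT LOCK;
SHOW CREATE USER foo@localhost;   -- shows ... ACCOUNT LOCK
ALTER USER foo@localhost ACCOUNT UNLOCK;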
Closes #1006
fix mysql_fix_privilege_tables to look for the `user` table in the
correct schema when deciding whether to convert it to the `global_priv` table
make related tests a bit more verbose
Due to inconsistent usage of different cost models to calculate
the cost of ref accesses, we have to make the calculation of the
gain promised by the usage of a range filter more complex.
When creating a field of type JSON, it is automatically
converted to TEXT with CHECK (json_valid(`a`)), if there wasn't any
previous check for the column.
Additional things:
- Added two bug fixes that were found while testing JSON. These bug
  fixes have also been pushed to 10.3 (with a test case), but as they
  were minimal and needed to get this task done and tested, the fixes
  are repeated here.
  - CREATE TABLE ... SELECT drops constraints for columns that
    are both in the create and select part.
    Fixed by copying the constraint in
    Column_definition::redefine_stage1_common()
  - If one has both a default expression and a check constraint for a
    column, one can get the error "Expression for field `a` is referring
    to uninitialized field `a`".
    Fixed by ignoring default expressions for the current column when
    checking for CHECK constraints
- Removed some duplicate MYSQL_PLUGIN_IMPORT symbols
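For example (column name invented), the conversion is expected to be visible in
SHOW CREATE TABLE:

CREATE TABLE t1 (a JSON);
SHOW CREATE TABLE t1;
-- `a` is shown as a TEXT-typed column with CHECK (json_valid(`a`))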
This task involves the implementation of the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE query,
which contains information about decisions taken by the optimizer during
the optimization phase (choice of table access method, various costs,
transformations, etc). This feature would help to tell why some decisions were
taken by the optimizer and why some were rejected.
Trace is session-local, controlled by the @@optimizer_trace variable.
To enable the optimizer trace we need to write:
SET optimizer_trace='enabled=on';
To display the trace one can run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace
introduces a switch optimizer_trace_max_mem_size which limits
the memory used by the optimizer trace. This was implemented by
Sergei Petrunia.
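A minimal illustrative session (query and table invented):

SET optimizer_trace='enabled=on';
SELECT * FROM t1 WHERE a > 10;
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';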
Change the default authentication for root@localhost to
IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket
which provides secure passwordless login, while still allowing
SET PASSWORD to work as expected.
Also create a second all-privilege account for the user that owns
datadir (and thus has full access to the data anyway).
Compile unix_socket plugin statically into the server.
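After bootstrap, SHOW CREATE USER is expected to report something like:

SHOW CREATE USER root@localhost;
-- CREATE USER root@localhost IDENTIFIED VIA mysql_native_password USING 'invalid'
--   OR unix_socket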
Find the indexes of a table whose parts participate in one constraint.
Such indexes are called constraint correlated.
New methods: TABLE::find_constraint_correlated_indexes() and the
virtual method check_index_dependence() were added.
For each index, its own constraint correlated index map is created,
where all indexes that are constraint correlated with the current one
are marked.
The results of this task are used for MDEV-16188 (Use in-memory
PK filters built from range index scans).
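An illustrative schema (invented) for the notion:

CREATE TABLE t1 (a int, b int, KEY idx1(a), KEY idx2(b), CHECK (a < b));

Here parts of both idx1 and idx2 participate in the CHECK constraint, so the
two indexes are constraint correlated with each other.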