of concurrent connections for the same account)"
Added support for an account-specific max_user_connections limit. All
user limits are now counted per account instead of the old behavior,
which was per user/host accounting. Added an option that enables the old
behavior. Added tests for these to the test suite.
(After-review version).
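A minimal sketch of how the per-account limit is meant to be used, assuming it is set through the GRANT ... WITH resource-limit clause (account name hypothetical):
    GRANT USAGE ON *.* TO 'app'@'localhost' WITH MAX_USER_CONNECTIONS 10;
    -- at most 10 concurrent connections for the 'app'@'localhost' account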
sql_yacc.yy, sql_parse.cc, sql_lex.h, lex.h:
Implements the SHOW MUTEX STATUS command
set_var.cc, mysqld.cc, mysql_priv.h:
Added new GLOBAL variable timed_mutexes (usage sketch after the file list below)
ha_innodb.h:
New function innodb_mutex_show_status
ha_innodb.cc:
Added new InnoDB variables to SHOW STATUS
Implements the SHOW MUTEX STATUS command
innodb.test, innodb.result:
Added tests for the new row_lock_waits status variables.
variables.test, variables.result:
test new variable timed_mutexes
ut0ut.c:
New function ut_usectime.
sync0sync.c:
Mutex counting.
sync0rw.c:
New mutex parameters initialization.
srv0srv.c:
Counting row lock waits
row0sel.c, row0mysql.c:
Setting row_lock or table_lock state to thd.
que0que.c:
Added default no_lock_state to thd.
univ.i:
Added UNIV_SRV_PRINT_LATCH_WAITS debug define
sync0sync.ic:
Count mutex usage.
sync0sync.h:
Added new parameters to mutex structure for counting.
sync0rw.h:
Added new parameters to rw_create_func.
srv0srv.h:
Added new InnoDB variables to SHOW STATUS.
que0que.h:
Added thread lock states.
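A brief usage sketch for the monitoring hooks described above; the Innodb_row_lock status-variable prefix is an assumption here, and the exact output depends on the build:
    SET GLOBAL timed_mutexes = ON;             -- start timing mutex waits
    SHOW MUTEX STATUS;                         -- per-mutex wait statistics from InnoDB
    SHOW STATUS LIKE 'Innodb_row_lock%';       -- row lock wait counters (prefix assumed)
    SET GLOBAL timed_mutexes = OFF;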
IN and CONVERT_TZ()" (with after-review changes).
Now we add implicitly used time zone tables to the global table list right
at the parsing stage instead of doing it later in mysql_execute_command()
or in check_prepared_statement().
No special test case is needed since this bug also manifests itself as a
timezone2.test failure if one runs it with the --ps-protocol option.
tables requires privileges for them if some table or column level grants
present" (with after-review fixes).
We should set SELECT_ACL for implicitly opened tables in
my_tz_check_n_skip_implicit_tables() to be able to bypass privilege
checking in check_grant(). Also we should exclude those tables from
privilege checking in multi-update.
transactional table locks to tables mentioned in the query. These locks
are released at the end of the transaction automatically.
This is a fix for bugs #5655 and #5998 and issue #3762.
Renamed HA_VAR_LENGTH to HA_VAR_LENGTH_PART
Renamed FIELD_TYPE_STRING and FIELD_TYPE_VAR_STRING to MYSQL_TYPE_STRING and MYSQL_TYPE_VAR_STRING in all files to make it easier to catch all possible errors
Added support for VARCHAR keys to the HEAP engine (example below)
Removed support for ISAM
Now only long VARCHAR columns are changed to TEXT on demand (not CHAR)
Internal temporary files can now use fixed length tables if the used VARCHAR columns are short
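A small illustration of the new HEAP VARCHAR key support (table and column names hypothetical):
    CREATE TABLE t_heap (v VARCHAR(32), KEY (v)) ENGINE=HEAP;
    -- a VARCHAR key on an in-memory HEAP table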
- add_field_to_list() now uses List<String>
instead of TYPELIB to be able to distinguish
string literals like 'aaa' from hex literals like 0xaabbcc.
- Moved some code from add_field_to_list(), where
we don't know the column charset yet, to
mysql_prepare_table(), where we do.
out of order". (final version)
Now, instead of binding Item_trigger_field objects to TABLE objects during
trigger definition parsing at table open, we pass through a special list of
all such objects in the trigger. This makes it easy to check all references
to fields in the old/new version of the row in the trigger during execution
of the CREATE TRIGGER statement (this is more of a courtesy for users,
since we can't check everything anyway).
We also report that such a reference is bad by returning an error from the
Item_trigger_field::fix_fields() method (instead of from setup_field()).
This means that if a trigger is broken we will complain during trigger
execution instead of during trigger definition parsing at table open
(i.e. we now allow opening tables with broken triggers).
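A sketch of the resulting behavior (table, column and trigger names hypothetical):
    CREATE TABLE t1 (a INT, b INT);
    CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET NEW.b = NEW.a + 1;
    ALTER TABLE t1 DROP COLUMN b;      -- the trigger now references a missing field
    SELECT * FROM t1;                  -- opening the table still works
    INSERT INTO t1 (a) VALUES (1);     -- the bad reference is reported when the trigger fires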
binlog coordinates corresponding to the dump".
The good news is that now mysqldump can be used to get an online backup of InnoDB *which works for
point-in-time recovery and replication slave creation*. Formerly, mysqldump --master-data --single-transaction
was in fact equivalent to mysqldump --master-data, so the dump was not an online dump (it held the big
lock for the whole duration of the dump).
The only lock taken with this patch is at the beginning of the dump: mysqldump does:
FLUSH TABLES WITH READ LOCK; START TRANSACTION WITH CONSISTENT SNAPSHOT; SHOW MASTER STATUS; UNLOCK TABLES;
so the lock time is in fact the time FLUSH TABLES WITH READ LOCK takes to return (which can be zero or
very long, if a table is undergoing a huge update).
I have made some more minor changes, listed in the per-file comment for mysqldump.c.
WL#2237 "WITH CONSISTENT SNAPSHOT clause for START TRANSACTION":
it's a START TRANSACTION which additionally starts a consistent read on all
capable storage engine (i.e. InnoDB). So, can serve as a replacement for
BEGIN; SELECT * FROM some_innodb_table LIMIT 1; which starts a consistent read too.
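A minimal usage sketch (table name hypothetical):
    START TRANSACTION WITH CONSISTENT SNAPSHOT;
    SELECT COUNT(*) FROM some_innodb_table;   -- sees the snapshot taken at START TRANSACTION
    COMMIT;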
Added push_back(void *, MEM_ROOT *) to simplify list-handling code that needs to allocate in a different mem-root.
(Before, one had to change thd->mem_root, call push_back(), and then restore mem_root.)
Now thd->mem_root is a pointer to thd->main_mem_root and THR_MALLOC is a pointer to thd->mem_root.
This gives us the following benefits:
- Allows us to easily detect if an arena has already been swapped before (this fixes a bug in setup_conds() where the arena was swapped twice in some cases)
- Faster swaps of arenas (as we don't have to copy the whole MEM_ROOT)
- We no longer have to call my_pthread_setspecific_ptr(THR_MALLOC, ...) to change where memory is allocated. Now it's enough to set thd->mem_root
- No RESTRICT|CASCADE in DROP SP (since it's not implemented)
- Added optional "noise" to FETCH: [[NEXT] FROM] (see the sketch after this list)
- At least one statement required in all block constructs except BEGIN-END
(where zero is allowed)
FOUND is not a reserved keyword anymore
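A sketch of the new FETCH forms (cursor and variable names hypothetical; for use inside a stored procedure body):
    FETCH cur INTO v;             -- as before
    FETCH FROM cur INTO v;        -- now also accepted
    FETCH NEXT FROM cur INTO v;   -- now also accepted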
Added Item_field::set_no_const_sub() to be able to mark fields that can't be substituted
Added a 'simple_select' method to be able to quickly determine if a select_result is a normal SELECT
Note that the 5.0 tree is not yet up to date: Sanja will have to fix multi-update-locks for this merge to be complete
he has SELECT and INSERT privileges for table with primary key"
Now we set lex->duplicates= DUP_UPDATE right in the parser if INSERT has an
ON DUPLICATE KEY UPDATE clause; this simplifies the insert_precheck()
function (and also fixes a bug) and some other code.
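For reference, the statement form involved (table and column names hypothetical):
    INSERT INTO t1 (id, cnt) VALUES (1, 1)
      ON DUPLICATE KEY UPDATE cnt = cnt + 1;   -- parser now sets lex->duplicates= DUP_UPDATE here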
NO SQL
CONTAINS SQL (default)
READS SQL DATA
MODIFIES SQL DATA
These are needed as hints for replication.
(Before this, we had the default in the mysql.proc table, but no support in the parser.)
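A sketch of the characteristics as the parser now accepts them (procedure and table names hypothetical):
    CREATE PROCEDURE count_rows() READS SQL DATA
      SELECT COUNT(*) FROM t1;
    CREATE PROCEDURE log_call() MODIFIES SQL DATA
      INSERT INTO call_log VALUES (NOW());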
More tests.
Better error messages.
Fixed a bug in checking whether we updated all needed columns for INSERT.
Give an error if we encounter an incorrect float value during parsing.
Don't print DEFAULT for columns without a default value in SHOW CREATE/SHOW FIELDS.
Fixed UPDATE IGNORE when using STRICT mode.
column types TIMESTAMP is NOT NULL by default, so in order to have a
TIMESTAMP column holding NULL values you have to specify NULL as
one of its attributes (this is needed for backward compatibility).
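A small illustration (table names hypothetical):
    CREATE TABLE t_null (ts TIMESTAMP NULL);   -- column can hold NULL
    CREATE TABLE t_old  (ts TIMESTAMP);        -- NOT NULL by default, as before
    INSERT INTO t_null VALUES (NULL);          -- stores NULL rather than the current time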
Main changes:
Replaced TABLE::timestamp_default_now/on_update_now members with
the TABLE::timestamp_auto_set_type flag, which is used everywhere
for determining whether we should auto-set the value of the TIMESTAMP field
during this operation or not. We also use Field_timestamp::set_time()
instead of handler::update_timestamp() in handlers.
Make "FRAC_SECOND"/"SQL_TSI_FRAC_SECOND" non-reserved words,
must like "SECOND"/"SQL_TSI_SECOND", "MINUTE"/"SQL_TSI_MINUTE",
etc.
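A sketch of what this allows (table and column names hypothetical):
    CREATE TABLE t1 (frac_second INT);                   -- allowed once the word is non-reserved
    SELECT TIMESTAMPADD(FRAC_SECOND, 1, '2005-01-01');   -- still usable as an interval unit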
Will wait for okay to push. (It doesn't break any tests.)
Easy to prevent the crash, but the question was how to treat this case.
We ended up implementing "global" SPs (i.e. with no associated
db), which were planned but left unresolved when SPs moved into dbs.
So now things like "call .p()" work too.
Mostly needed so that Monty can get a notion of what is needed for triggers
from the new .FRM format.
Things to be done:
- Right placement of trigger invocations
- Right handling of errors in triggers (including transaction rollback)
- Support for privileges
- Right handling of DROP/RENAME table (hopefully this will be handled automatically
when the .TRG file is merged into the .FRM file)
- Saving/restoring some information critical for trigger creation and replication
with their definitions (e.g. sql_mode, creator, ...)
- Replication
Already has some known bugs so probably not for general review.
Instead of trying to open the time zone tables during evaluation of the CONVERT_TZ() function
or setting of the @@time_zone variable, we should open and lock them with the rest of the
statement's tables (so we should add them to the global table list) and after that use these
pre-opened tables for loading info about time zones.
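Statements that implicitly use the time zone tables and therefore trigger this pre-opening (zone names assume populated mysql time zone tables):
    SELECT CONVERT_TZ('2005-01-01 12:00:00', 'UTC', 'Europe/Moscow');
    SET @@time_zone = 'Europe/Moscow';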
Added a test case for bug #4922.
sql_select.cc:
Blocked an optimization performed by join_read_const_table when
applied to an inner table of a nested outer join.
It was done to fix bug #4922.
sql_yacc.yy:
Fixed a typo bug in the rule for join_table.
at least partially. It doesn't crash or give packets out of order
any more, but it's unclear why it doesn't actually return anything
from within an SP. This should be investigated at some point, but
for the moment this will have to do. (It is a rather obscure feature... :)