JSON path
Analysis: When searching for the given path in a JSON string, if the current
step is of array range type, the path was considered reached, meaning the
path exists, so the output was always true. The end index of the range was
never evaluated.
Fix: If the current step type for a path is array range, check whether the
value in array_counter[] lies in the range from n_item to n_item_end. Only if
it does, the path exists and true is returned. If the range criterion is
never met, return false.
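A minimal stand-alone sketch of the fixed check (n_item, n_item_end and
array_counter follow the names above; the structures are simplified
stand-ins, not the actual json_lib code):

  #include <iostream>

  // Toy model of one path step of array range type.
  struct path_step {
    int n_item;      // start index of the range
    int n_item_end;  // end index of the range
  };

  // Before the fix, reaching a range step was treated as "path found" for
  // every element. The fixed check only reports a match when the current
  // array position lies between n_item and n_item_end.
  static bool range_step_matches(const path_step &step, int array_counter) {
    return array_counter >= step.n_item && array_counter <= step.n_item_end;
  }

  int main() {
    path_step step = {2, 4};  // corresponds to a path like $[2 to 4]
    for (int i = 0; i < 6; i++)
      std::cout << "element " << i << ": "
                << (range_step_matches(step, i) ? "in range" : "skipped")
                << "\n";
    return 0;
  }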
Analysis: When the current date is '2022-03-17', dayname() gives 'Thursday'.
The previous JSON state is PS_KEYX, which means the key started with a quote,
so the path parser is now supposed to parse the key name.
The key name starts with 'T', but the path transition table has JE_SYN when
the previous state is PS_KEYX and the next letter is 'T', so a syntax error
is raised.
Fix: We want to continue parsing the quoted key name, so JE_SYN is incorrect.
Replaced it with PS_KNMX.
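A toy illustration of the kind of transition-table entry involved (the state
names PS_KEYX, PS_KNMX and JE_SYN come from this message; the table layout is
a simplified stand-in, not the real json_lib table):

  #include <cassert>

  // Miniature key-name parser: states and character classes.
  enum state { PS_KEYX, PS_KNMX, PS_DONE, JE_SYN, N_STATES };
  enum cclass { C_LETTER, C_QUOTE, N_CLASSES };

  // Transition table: row = current state, column = class of next character.
  static const state transitions[N_STATES][N_CLASSES] = {
    /* PS_KEYX */ { PS_KNMX /* was JE_SYN before the fix */, PS_DONE },
    /* PS_KNMX */ { PS_KNMX, PS_DONE },
    /* PS_DONE */ { JE_SYN, JE_SYN },
    /* JE_SYN  */ { JE_SYN, JE_SYN },
  };

  int main() {
    // Parsing the quoted key "Thursday": the first letter 'T' must move us
    // into "reading the key name", not into a syntax error.
    state s = transitions[PS_KEYX][C_LETTER];
    assert(s == PS_KNMX);
    return 0;
  }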
A range can be thought of in a similar manner to the wildcard (*): more than
one element is processed. To implement the range notation, the JSON path
parser was extended to parse the 'to' keyword, and JSON_PATH_ARRAY_RANGE was
added as a path type. If the 'to' keyword is present, JSON_PATH_ARRAY_RANGE
is used as the path type along with the existing type.
The new integer that stores the end index of the range is n_item_end: when
the 'to' keyword is present, the integer is stored in n_item_end, otherwise
in n_item.
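A rough sketch of that parsing rule (n_item, n_item_end and the range flag
follow this message; the parser below is a simplified stand-in for the
json_lib one):

  #include <cctype>
  #include <cstdlib>
  #include <cstring>
  #include <iostream>

  struct path_step {
    int n_item = 0;        // start index, or the single index
    int n_item_end = 0;    // end index of the range
    bool is_range = false; // stands in for the JSON_PATH_ARRAY_RANGE flag
  };

  // Parse an array step such as "2" or "2 to 4": the first number goes into
  // n_item; if the 'to' keyword follows, the second goes into n_item_end.
  static bool parse_array_step(const char *s, path_step *step) {
    char *end;
    step->n_item = (int) std::strtol(s, &end, 10);
    while (std::isspace((unsigned char) *end)) end++;
    if (std::strncmp(end, "to", 2) == 0) {
      step->is_range = true;
      step->n_item_end = (int) std::strtol(end + 2, &end, 10);
    } else {
      step->n_item_end = step->n_item;  // plain index: range of one element
    }
    while (std::isspace((unsigned char) *end)) end++;
    return *end == '\0';
  }

  int main() {
    path_step step;
    if (parse_array_step("2 to 4", &step))
      std::cout << step.n_item << " .. " << step.n_item_end
                << (step.is_range ? " (range)" : "") << "\n";
    return 0;
  }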
The variables spider_crd_mode and spider_sts_mode specify the
ways to fetch statistics from data nodes.
Using the SHOW command seems to work in all cases. Thus, we deprecate
the variables.
We deprecate the variable spider_xa_register_mode because there is
no need to perform a two-phase commit for a read-only transaction.
Note that the variable only affects Spider's internal XA transactions.
A few PERFORMANCE_SCHEMA instrumentation keys were not exposed
in all_innodb_mutexes[]. Let us remove them.
The keys fts_pll_tokenize_mutex_key and read_view_mutex_key were
internally used. Let us make ReadView::m_mutex use the simpler
and smaller srw_mutex, hoping to improve memory access patterns.
trx_lock_t: Remove byte pad[256] and use
alignas(CPU_LEVEL1_DCACHE_LINESIZE) instead.
trx_t: Declare n_ref (the first member) aligned on a cache line.
Pool: Assert that the sizes are multiples of
CPU_LEVEL1_DCACHE_LINESIZE, and invoke an aligned allocator.
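A small sketch of the pattern, assuming a 64-byte cache line (in the server
CPU_LEVEL1_DCACHE_LINESIZE comes from the build system; the struct and pool
below are stand-ins, not the real trx_lock_t or Pool):

  #include <cstddef>
  #include <cstdlib>

  static const std::size_t CPU_LEVEL1_DCACHE_LINESIZE = 64;  // assumed here

  // Instead of a byte pad[256] member, align the whole object on a cache
  // line so its hot members do not share a line with a neighbouring object.
  struct alignas(CPU_LEVEL1_DCACHE_LINESIZE) trx_lock_like_t {
    int n_active_locks;
  };

  int main() {
    // Pool-style allocation: the element size must be a multiple of the
    // cache line, and the backing memory comes from an aligned allocator.
    static_assert(sizeof(trx_lock_like_t) % CPU_LEVEL1_DCACHE_LINESIZE == 0,
                  "pool element size must be a multiple of the cache line");
    void *pool = std::aligned_alloc(CPU_LEVEL1_DCACHE_LINESIZE,
                                    16 * sizeof(trx_lock_like_t));
    std::free(pool);
    return 0;
  }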
Access the all_charsets[] array directly in a hot loop.
This avoids get_charset(), which increments a shared counter via
my_collation_statistics_inc_use_count() and causes a scalability issue.
Instead, call get_charset() when a table is opened (and InnoDB
fills in dtype_t values) - this is enough, as a charset is marked
ready (MY_CS_READY) only once, on the first get_charset() call;
after that it can be accessed directly.
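A toy model of that access pattern (MY_CS_READY, all_charsets[] and
get_charset() are real server names; the structures below are simplified
stand-ins):

  #include <cassert>

  static const unsigned MY_CS_READY = 1;

  struct charset_info_like {
    unsigned state;
    unsigned use_count;
  };

  static charset_info_like all_charsets_toy[256];

  // Marks the charset ready and bumps a shared use counter; doing this per
  // row from a hot loop is what caused the scalability issue.
  static charset_info_like *get_charset_toy(unsigned n) {
    all_charsets_toy[n].use_count++;        // shared-counter update
    all_charsets_toy[n].state |= MY_CS_READY;
    return &all_charsets_toy[n];
  }

  int main() {
    get_charset_toy(45);  // once, at table-open time

    // Hot loop: the charset is already MY_CS_READY, so read the array
    // directly and leave the shared counter alone.
    for (int row = 0; row < 1000; row++)
      assert(all_charsets_toy[45].state & MY_CS_READY);
    return 0;
  }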
This also fixes a potential bug in InnoDB. It used to access charsets
directly when filling dtype_t values. This wasn't preceded by any
get_charset() call inside InnoDB. Normally the server would call
get_charset() on reading the frm, but if the table was first opened
from a purge thread InnoDB could, theoretically, access a not-ready
charset in innobase_get_cset_width().
JSON Path
Analysis: When we have '-' followed by 0, the state is
changed to JE_SYN, meaning a syntax error.
Fix: Change the state to PS_INT instead, because the next character
being read is '0' (an integer), which is not a syntax error.
This patch can be viewed as a combination of two parts:
1) Enabling '-' in the path so that the parser does not give a warning.
2) Setting the negative index to a correct value and returning the
appropriate value.
1) To enable using a negative index in the path:
To stop the parser from returning a warning when a negative index is used in
the path, '-' needs to be allowed among the JSON path characters. P_NEG is
added to enable this and is made recognizable by setting index 45 of
json_path_chr_map[] to P_NEG (instead of the previous P_ETC),
because 45 is the character code of '-'.
When the path is being parsed and '-' is encountered, the parser should
recognize that it is parsing a '-' sign, so a new JSON state PS_NEG is
required. When the state is PS_NEG, a negative integer is about to be parsed,
so is_negative_index of the current step is set to 1, and n_item is set
accordingly when the integer is encountered after '-'.
Then parsing proceeds with the rest of the path to obtain the correct path.
The next step is parsing the JSON and returning the correct value.
2) Setting the negative index to a correct value and returning the value:
While parsing the JSON, if we encounter an array and the path step for the
array is a negative index (n_item < 0), we can count the number of elements
in the array and set n_item to the corresponding non-negative value. This is
done in json_skip_array_and_count.
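A stand-alone sketch of the idea behind json_skip_array_and_count() (the real
function walks the JSON text; here a vector stands in for the array):

  #include <iostream>
  #include <vector>

  // When the path step carries a negative index, count the array's elements
  // first and convert the negative n_item into the equivalent non-negative
  // index.
  static int resolve_negative_index(const std::vector<int> &array, int n_item) {
    if (n_item >= 0)
      return n_item;                    // nothing to do
    int count = (int) array.size();     // the json_skip_array_and_count() step
    return count + n_item;              // e.g. $[-1] is the last element
  }

  int main() {
    std::vector<int> arr = {10, 20, 30, 40};
    int idx = resolve_negative_index(arr, -1);
    std::cout << "$[-1] resolves to index " << idx
              << " (value " << arr[idx] << ")\n";
    return 0;
  }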
on_table_fill_finished() should always be done at the end of open()
even if result is not Select_materialize but (for example)
Select_fetch_into_spvars.
The variable spider_internal_limit is for specifying the number of
records acquired by Spider from each remote server.
There does not seem to be much use for the variable. Thus, we deprecate it
and the corresponding table options.
Reviewed-by: Nayuta Yanagisawa <nayuta.yanagisawa@hey.com>
We can only remove a subdirectory in mtr on an installed instance
Example failure previously:
CURRENT_TEST: rocksdb.rocksdb_log_dir
mysqltest: At line 15: Path '/usr/local/mariadb-10.9.0-linux-systemd-x86_64/mysql-test/var/tmp/1' is not a subdirectory of MYSQLTEST_VARDIR '/usr/local/mariadb-10.9.0-linux-systemd-x86_64/mysql-test/var/1'or MYSQL_TMP_DIR '/usr/local/mariadb-10.9.0-linux-systemd-x86_64/mysql-test/var/tmp/1'
Parameter rocksdb_log_dir specifies the path of MyRocks error logs.
By default, the error logs are stored in the same folder as the MyRocks
redo logs. Being able to put human-readable logs in one place and
machine logs in another place improves usability.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
This commit fixes problems with parsing IPv6 addresses given via
the wsrep_sst_receive_address and wsrep_node_address options.
Also, this commit removes extra lines related to these parameters
from the configuration files of the Galera mtr test suites.
don't initialize error_log_handler_list in set_handlers()
* error_log_handler_list is initialized to LOG_FILE early, in init_base()
* set_handlers always reinitializes it to LOG_FILE, so it's pointless
* after init_base() concurrent threads start using sql_log_warning,
so following set_handlers() shouldn't modify error_log_handler_list
without some protection
The problem was not with the server behavior;
it was with wrongly recorded test results.
Enabling both type_set and type_enum:
type_set.result was OK; the test did not fail.
type_enum.result was incorrectly recorded in this commit:
eb483c5181
Recording the correct results.
- A trigger statement initiates an update statement after a bulk insert
  operation, which leads to the bulk operation being disabled. During
  commit, InnoDB expects the transaction to be in bulk insert mode before
  applying the bulk insert operation. An InnoDB transaction should
  apply all bulk insert operations before disabling the bulk insert
  operation.
Remove unneeded paths from mariadb-server-10.6.postinst, as the directory
is already in the exported PATH and the full path triggers the
'command-with-path-in-maintainer-script' lintian check.
Change `exit;` to `exit(1);` in the function `usage()`.
When we try to run mtr with a wrong option, the function `usage()` is called
with the wrong option as its argument. In this case, because the function
calls `exit` in its first if statement, we get exit status 0.
`m_status == DA_ERROR' failed on SELECT after setting tmp_disk_table_size.
Analysis: The mismatch in the number of warnings between "194 warnings" and
"64 rows in set" is because of the max_error_count variable, which has a
default value of 64.
About the corrupted tables: the error that occurs because of an insufficient
tmp_disk_table_size is not reported correctly and we continue to
execute the statement. But because the previous error (about the table being
full) is not reported correctly, this error moves up the stack and is later
wrongly reported as a parsing error while parsing the frm file of one
of the information schema tables. This parsing error produces the corrupted
table error.
As for the InnoDB error, it occurs even when tmp_disk_table_size is at its
default (i.e. not insufficient), but the internal error handler takes care
of it and the error does not show. When tmp_disk_table_size is insufficient,
the fatal error that was not reported correctly moves up the stack, so the
internal error handler is not called and the errors show up.
Fix: Report the error correctly.
This is a temporary fix for 10.2.
The problem was permanently fixed in 10.9 under the terms of MDEV-27743.
This patch should propagate up to 10.8 and then be null-merged to 10.9.
The issue was that the value of MARIA_FOUND_WRONG_KEY could also be
returned by ha_key_cmp.
This was already fixed in MyISAM; now the same fix is used in Aria:
set the value to INT_MAX32, which should be impossible in any
normal case.
I also fixed it so that if there is a wrong key, we now get a proper error
message and not an assert.
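A tiny sketch of the sentinel-value problem (INT_MAX stands in for INT_MAX32;
the comparison function is a stand-in for ha_key_cmp):

  #include <cassert>
  #include <climits>

  // The "wrong key" marker must be a value the key comparison can never
  // legitimately return; a small value collided with real results.
  static const int FOUND_WRONG_KEY = INT_MAX;

  // Stand-in for ha_key_cmp(): returns an ordinary bounded comparison result.
  static int key_cmp_toy(int a, int b) { return (a > b) - (a < b); }

  int main() {
    // With a small sentinel this could be mistaken for "wrong key";
    // with INT_MAX it cannot.
    assert(key_cmp_toy(7, 3) != FOUND_WRONG_KEY);
    return 0;
  }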
- The purge thread tries to access the table name while another thread
  renames the table. This leads to a heap-use-after-free error in the
  purge thread. The purge thread should check whether the table name
  is temporary while holding dict_sys.mutex.
Creating a temporary table with Spider makes no sense because a Spider
table cannot hold any physical data and it requires additional
effort to manage even if it is configured correctly.
Add HTON_TEMPORARY_NOT_SUPPORTED to spider_hton->flags.
Reviewed-by: nayuta.yanagisawa@hey.com
Co-authored-by: d8sk4ueun@gmail.com
dict_acquire_mdl_shared(): Invoke the correct variant
dict_table_t::parse_name<true>() also when trylock=true,
that is, we are being called from fts_optimize_sync_table().
Ever since commit 82b7c561b7 (MDEV-24258)
this code was prone to a hang. If another thread requested an
exclusive dict_sys.latch between the time
dict_acquire_mdl_shared<trylock=true>() acquired a shared dict_sys.latch
and dict_table_t::parse_name<false>() was trying to acquire another
shared dict_sys.latch, InnoDB would get into an infinite livelock
of threads, because the futex-based srw-lock implementation prioritizes
exclusive latch requests.
dict_table_t::parse_name(): Rename the template parameter dict_locked
into dict_frozen.
buf_page_t::set_state(): Relax a debug assertion. It is fine to update
a read-fixed block descriptor to be both read-fixed and buffer-fixed.
buf_pool_t::watch_unset(): Fix some incorrect logic that was implemented
in commit e9e6db9355.
Thanks to Elena Stepanova for the test case.