Analysis: null_value is not set when any one of the arguments is NULL, so the
function returns 1.
Fix: when either argument is NULL, set null_value to true so that NULL can be
returned.
starts with '['
Analysis:
When the type is non-scalar and the json document has a syntax error, the
error is not detected while validating the type. And since the other validate
functions take a const argument, the error state is never stored.
Fix:
After we run out of all schemas from the schema list (i.e. no error was found
during validation), keep parsing the json document until a parse error is
found or the document ends.
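A minimal illustration of the affected case (hypothetical schema and document,
not taken from the patch; the exact diagnostics are not asserted here):

  -- Non-scalar "type" plus a json document with a syntax error (missing ']').
  -- Before the fix the parse error could go unnoticed during type validation;
  -- after the fix the document is parsed to its end and the error is reported.
  SELECT JSON_SCHEMA_VALID('{"type": "array"}', '[1, 2');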
When the server is started with `--event-scheduler=ON` and with
`--skip-grant-tables` (or built as an embedded server, which has no grant
tables at all), the event scheduler *appears* to be enabled (`SELECT
@@global.event_scheduler` returns `'ON'`), but attempting to
manipulate it in any way returns a misleading error message:
"Cannot proceed, because event scheduler is disabled"
Possible solutions:
1. Fast-fail: fail immediately on startup if `EVENT_SCHEDULER` is set to
any value other than `DISABLED` when starting up without grant
tables, then prevent `SET GLOBAL event_scheduler` while running.
Problem: there are existing setup scripts and code which start with
this combination and assume it will not fail.
2. Warn and change value: if `EVENT_SCHEDULER` is set to any value
other than `DISABLED` when starting, print a warning and change it
immediately to `DISABLED`.
Advantage: The value of the `EVENT_SCHEDULER` system variable after
startup will be consistent with its functional unavailability.
3. Display a clear error: if `EVENT_SCHEDULER` is enabled, but grant
tables are not enabled, then ensure error messages clearly explain
the fact that the combination is not supported.
Advantage: The error message encountered by the end user when
attempting to manipulate the event scheduler (such as `CREATE
EVENT`) is clear and explicit.
This commit implements the combination of solutions (2) and (3): it
will set `EVENT_SCHEDULER=DISABLED` on startup (reflecting the
functional reality) and it will print a startup warning, *and* it will
print a *distinct* error message each time that an end user attempts to
manipulate the event scheduler, so that the end user will clearly understand
the problem even if the startup messages are not visible at that point.
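A rough sketch of the resulting behaviour (the exact messages are
intentionally not reproduced here):

  -- server started with --event-scheduler=ON --skip-grant-tables
  SELECT @@global.event_scheduler;  -- now reports 'DISABLED' instead of 'ON'
  CREATE EVENT e1 ON SCHEDULE EVERY 1 HOUR DO SET @x = 1;
  -- fails with an error that explicitly explains the event scheduler cannot
  -- be used while the server runs with --skip-grant-tables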
It also adds an MTR test `main.events_skip_grant_tables` to verify the
expected behavior, and the unmodified `main.events_restart` test
continues to demonstrate no change in the error message when the event
scheduler is non-functional for *different* reasons.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services
object of type 'Item_string' in sql/json_schema.cc
Analysis: make_string_literal() returns a pointer of type
Item_basic_constant which is converted to a pointer of type Item_string.
Item_basic_constant is the base class of Item_string, so this conversion is a
downcast, hence the error.
Fix: Using the Item_string constructor directly instead of downcasting is
more appropriate.
always return 1
Analysis: The implementation used double to store the value. longlong is a
better choice.
Fix: Use longlong for multiple_of and for the value it is compared against,
and use num_flag to check for decimals.
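For illustration (hypothetical values; 1 = document valid, 0 = not valid):

  SELECT JSON_SCHEMA_VALID('{"multipleOf": 3}', '9');   -- expected 1
  SELECT JSON_SCHEMA_VALID('{"multipleOf": 3}', '10');  -- expected 0, but the
                                                        -- double-based check
                                                        -- always returned 1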
Analysis: multipleOf must be strictly greater than 0. However, the
implementation only disallowed values strictly less than 0.
Fix: Check if the value is less than or equal to 0 instead of less than 0,
and return true.
Also fixed the incorrect return value for some other keywords.
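A hypothetical example of the added constraint (the exact diagnostic for an
invalid schema is not asserted here):

  -- multipleOf must be strictly greater than 0, so a schema with the value 0
  -- should now be rejected as invalid rather than accepted.
  SELECT JSON_SCHEMA_VALID('{"multipleOf": 0}', '10');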
Analysis: The current implementation does not check the number of elements in
the enum array, nor whether they are unique.
Fix: Add a counter that counts the number of elements, and before inserting
an element into the enum hash, check whether it already exists.
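An illustrative, hypothetical case of what the added checks guard against:

  -- duplicate values inside "enum" are now detected via the enum hash lookup,
  -- and the number of elements is counted.
  SELECT JSON_SCHEMA_VALID('{"enum": [1, 2, 2]}', '1');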
input json schema
Analysis: In case of a syntax error while scanning the json schema, true is
returned in spite of it being a warning and not an error.
Fix: Return true instead of false.
unevaluatedProperties with properties declared in subschemas
Analysis:
When a key fails to validate for "properties" while "properties" is being
treated as an alternate schema, it needs to fall back on the alternate schema
for "properties" itself ("unevaluatedProperties" in the context of this bug).
But that doesn't happen, and we end up returning false (= validated).
Fix:
When "properties" fails to validate as an alternate schema, fall back on the
alternate schema for "properties" itself.
regex
Analysis:
When initializing the Regexp_processor_pcre object, we set the PCRE2_CASELESS
flag, which makes the comparison case insensitive.
Fix:
Unset the flag after initializing.
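A hypothetical example of what the stray PCRE2_CASELESS flag affected:

  -- With the flag set, 'abc' would incorrectly match a pattern that requires
  -- upper-case letters; with the flag unset the match is case sensitive.
  SELECT JSON_SCHEMA_VALID('{"type": "string", "pattern": "^[A-Z]+$"}', '"abc"');
  -- expected 0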
comment 2)
Analysis:
The flag that checks uniqueness gets reset every time, so unique values
cannot be checked correctly.
Fix:
Do not reset the flag that checks uniqueness for null, true and false
values.
comment 3)
Analysis:
The current implementation checks for the appropriate value but does not
return true.
Fix:
Return true on error.
comment 4)
Analysis:
The current implementation did not check the value type for values inside the
required array.
Fix:
Check the values inside the required array.
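An illustrative, hypothetical schema showing what the added check catches
(values inside the required array are expected to be strings naming
properties):

  -- 1 is not a valid property name inside "required"; such a schema is now
  -- flagged instead of being silently accepted.
  SELECT JSON_SCHEMA_VALID('{"required": [1]}', '{"a": 1}');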
Implementation:
The implementation follows the JSON Schema validation draft 2020.
A JSON schema has essentially the same structure as a JSON object, consisting
of key-value pairs, so it can be parsed in the same manner as any json object.
However, none of the keywords are mandatory, so guessing the json value type
based only on the keywords would be incorrect.
Hence we need separate objects denoting each keyword.
So during create_object_and_handle_keyword() we create the appropriate
objects based on the keywords and validate each of them individually against
the json document by calling the respective validate() function if the type
matches.
If any of them fails, return false, else return true.
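A small end-to-end sketch of how the keyword objects combine during
validation (hypothetical schema and documents; 1 = valid, 0 = not valid):

  SET @schema = '{"type": "object", "required": ["name"],
                  "properties": {"age": {"type": "number", "maximum": 150}}}';
  -- every keyword object (type, required, properties/maximum) validates:
  SELECT JSON_SCHEMA_VALID(@schema, '{"name": "a", "age": 30}');  -- 1
  -- "required" and "maximum" both fail, so the overall result is 0:
  SELECT JSON_SCHEMA_VALID(@schema, '{"age": 200}');              -- 0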
Rewrite datetime comparison conditions into sargeable ones. For example,
YEAR(col) <= val -> col <= YEAR_END(val)
YEAR(col) < val -> col < YEAR_START(val)
YEAR(col) >= val -> col >= YEAR_START(val)
YEAR(col) > val -> col > YEAR_END(val)
YEAR(col) = val -> col BETWEEN YEAR_START(val) AND YEAR_END(val)
Do the same with DATE(col), for example:
DATE(col) <= val -> col <= DAY_END(val)
After such a rewrite, an index lookup on column "col" can be employed.
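For example, conceptually (the table and index are hypothetical; the exact
boundary constants are produced by the optimizer):

  CREATE TABLE orders (id INT PRIMARY KEY, created DATETIME, KEY(created));
  -- Before: YEAR(created) = 2022 cannot use the index on "created".
  -- After the rewrite it becomes a range over "created", roughly
  --   created BETWEEN '2022-01-01 00:00:00' AND '2022-12-31 23:59:59.999999'
  -- so EXPLAIN can show a range scan over KEY(created).
  EXPLAIN SELECT * FROM orders WHERE YEAR(created) = 2022;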
In MariaDB, we have a confusing problem where:
* The transaction_isolation option can be set in a configuration file, but it cannot be set dynamically.
* The tx_isolation system variable can be set dynamically, but it cannot be set in a configuration file.
Therefore, we have two different names for the same thing in different contexts. This is needlessly confusing, and it complicates the documentation. The same thing applies to transaction_read_only.
MySQL 5.7 solved this problem by making them into system variables. https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-20.html
This commit takes a similar approach by adding new system variables and marking the original ones as deprecated. This commit also resolves some legacy problems related to SET STATEMENT and transaction_isolation.
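A sketch of the intended usage after this change (deprecation warnings for
the old names are not shown):

  -- works dynamically:
  SET SESSION transaction_isolation = 'READ-COMMITTED';
  SELECT @@session.transaction_isolation;
  -- and the same name also works in a configuration file:
  --   [mariadb]
  --   transaction_isolation = READ-COMMITTED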
These are mainly internal files, so this is a low-impact change.
The few scripts/mysql*sql files are renamed to use the mariadb_* prefix.
mysql-test is renamed to mariadb-test in the final packages.
handle_slave_io(), handle_slave_sql(), os_thread_exit():
Remove a redundant pthread_exit(nullptr) call, because it
would cause SIGSEGV.
mysql_print_status(): Add MEM_MAKE_DEFINED() to work around
some missing instrumentation around mallinfo2().
que_graph_free_stat_list(): Invoke que_node_get_next(node) before
que_graph_free_recursive(node). That is the logical and
MSAN_OPTIONS=poison_in_dtor=1 compatible way of freeing memory.
ins_node_t::~ins_node_t(): Invoke mem_heap_free(entry_sys_heap).
que_graph_free_recursive(): Rely on ins_node_t::~ins_node_t().
fts_t::~fts_t(): Invoke mem_heap_free(fts_heap).
fts_free(): Replace with direct calls to fts_t::~fts_t().
The failures in free_root() due to MSAN_OPTIONS=poison_in_dtor=1
will be covered in MDEV-30942.
* it isn't a "pfs" function, so don't call it Item_func_pfs and
don't use item_pfsfunc.*
* tests don't depend on performance schema, so put them in the main suite
* inherit from Item_str_ascii_func
* use connection collation, not utf8mb3_general_ci
* set result length in fix_length_and_dec
* do not set maybe_null
* use my_snprintf() where possible
* don't set m_value.ptr on every invocation
* update the sys schema to use format_pico_time() (see the usage sketch after this list)
* len must be size_t (compilation error on Windows)
* the correct function name for double->double is fabs()
* drop volatile hack
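A usage sketch, assuming the function formats a picosecond count into a
human-readable string like the sys schema helper it replaces:

  SELECT format_pico_time(4563453);        -- e.g. '4.56 us'
  SELECT format_pico_time(499999999999);   -- e.g. '500.00 ms'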
AWK is used in the Debian SysV-init and postinst scripts to determine
whether there is enough space to start the MariaDB database or to create a
new database in the target destination.
These AWK scripts can be rewritten in pure sh, or with the help of Coreutils,
which is already mandatory for using MariaDB.
The reasoning behind this is to get rid of one rarely used dependency.
…with: Test assertion failed
Problem:
=======
Assertion text: 'Value returned by SSS and PS table for Last_Error_Number
should be same.'
Assertion condition: '"1146" = "0"'
Assertion condition, interpolated: '"1146" = "0"'
Assertion result: '0'
Analysis:
========
In parallel replication, when the slave is started the worker pool gets
activated, and it gets cleared when the slave stops. Each time the worker
pool gets activated, a backup worker pool also gets created to store
worker-specific performance schema information in case of errors. On error,
all relevant information is copied from rpl_parallel_thread to rli and it
gets cleared from the thread. Then the server waits for all workers to
complete their work; during this stage worker info specific to the
performance schema table is stored into the backup pool, and finally the
actual pool gets cleared. If users query the performance schema table to
know the status of workers, the information from the backup pool will be
used. The test simulates an ER_NO_SUCH_TABLE error and verifies the worker
information in the pfs table.
The test works fine if execution occurs in the following order:
Step 1. Error occurred; worker information is copied to the backup pool.
Step 2. handle_slave_sql invokes rpl_parallel_resize_pool_if_no_slaves to
deactivate the worker pool; it marks pool->count=0.
Step 3. The PFS table is queried; since the actual pool is deactivated, the
backup pool information is read.
If Step 3 happens prior to Step 2, the pool is yet to be deactivated and the
actual pool is read, which doesn't have any error details as they were
cleared. Hence the test occasionally fails.
Fix:
===
Upon error, mark the backup pool as being active so that if the PFS table is
queried, the backup pool, being flagged as valid, will have its information
read; if it is not flagged, the regular pool will be read.
This work is one of the last pieces created by the late Sujatha Sivakumar.
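For reference, the check the test makes is along these lines (a sketch; the
actual test uses MTR assertions):

  SELECT LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
    FROM performance_schema.replication_applier_status_by_worker;
  -- With the fix, this reads the backup pool while the regular pool is being
  -- deactivated, so the simulated ER_NO_SUCH_TABLE (1146) error stays visible.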
Problem:
========
- An InnoDB REPLACE statement returns "can't find record" during a bulk
insert operation. InnoDB returns DB_END_OF_INDEX blindly when the bulk
transaction is visible to the current transaction, even though the search
tuple was inserted as a part of the current REPLACE statement.
Solution:
=========
row_search_mvcc(): InnoDB should allow the transaction to read all the rows
when InnoDB intends to do any locking on the record, even though the bulk
insert transaction's changes are visible to the current transaction.
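A rough, hypothetical sketch of the affected scenario (the exact conditions
that trigger the bulk-insert path are not spelled out here):

  CREATE TABLE t (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  -- REPLACE into an empty table can take the bulk-insert path; the second row
  -- must locate the row inserted earlier by the same statement to replace it.
  -- Before the fix this could fail with "Can't find record".
  REPLACE INTO t VALUES (1, 1), (1, 2);
  SELECT * FROM t;  -- expected: a=1, b=2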
buf_dblwr_t::init(), buf_dblwr_t::close(): Cover also write_cond,
which was added in commit a55b951e60
without explicit initialization. On GNU/Linux, PTHREAD_COND_INITIALIZER
is a zero-initializer. That is why the default zero initialization
happened to work on that platform.
This commit reduces the need for the AWK command, at least for the Debian
mariadb-server-compat package.
It removes the need for the AWK command from the scripts/mysql_install_db.sh
script.
The AWK command is replaced by a purely POSIX sh compliant version.
- agressively -> aggressively
- exising -> existing
- occured -> occurred
- releated -> related
- seperated -> separated
- sucess -> success
- use use -> use
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
In commit d6aed21621 a condition at
the start of buf_read_ahead_random() was refactored. Only the caller
buf_read_recv_pages() was adjusted for this. We must in fact adjust
every caller and make sure that spare blocks will be allocated
while crash recovery is in progress. This is the simplest fix;
ideally recovery would operate on the compressed page frame.
The observed recovery hang occurred because pages 0 and 3 of a
tablespace were being read due to buf_page_get_gen() calls by
trx_resurrect_table_locks() before the log records for these pages
had been applied. In buf_page_t::read_complete() we would skip
the call to recv_recover_page() because no uncompressed page frame
had been allocated for the block.
btr_cur_upd_rec_in_place(): Avoid calling page_zip_write_rec() if we
are not modifying any fields that are stored in compressed format.
btr_cur_update_in_place_zip_check(): New function to check if a
ROW_FORMAT=COMPRESSED record can actually be updated in place.
btr_cur_pessimistic_update(): If the BTR_KEEP_POS_FLAG is not set
(we are in a ROLLBACK and cannot write any BLOBs), ignore the potential
overflow and let page_zip_reorganize() or page_zip_compress() handle it.
This avoids a failure when an attempted UPDATE of a NULL column to 0 is
rolled back. During the ROLLBACK, we would try to move a non-updated
long column to off-page storage in order to avoid a compression failure
of the ROW_FORMAT=COMPRESSED page.
page_zip_write_trx_id_and_roll_ptr(): Remove an assertion that would fail
in row_upd_rec_in_place() because the uncompressed page would already
have been modified there.
This is a 10.5 version of commit ff3d4395d8
(different because of commit 08ba388713).