Several macros such as sint2korr() and uint4korr() are using the
arithmetic + operator while a bitwise or operator would suffice.
GCC 5 and clang 5 and later can detect patterns consisting of
bitwise or and shifts by multiples of 8 bits, such as those used
in the InnoDB function mach_read_from_4(). They actually translate
that verbose low-level code into efficient machine code
(the i486 bswap instruction, or the load and byte swap fused into the
Haswell movbe instruction).
We should do the same for MariaDB Server code that is outside InnoDB.
Note: The Microsoft C compiler lacks this optimization.
There, we might consider using _byteswap_ushort(), _byteswap_ulong(),
_byteswap_uint64(). But those would lead to unaligned reads, which are
bad for the reasons stated in MDEV-20277. Besides, outside InnoDB,
most data is already being stored in the native little-endian format
of the target platform.
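For illustration, a minimal sketch (not the actual server macro) of the
shift-and-OR pattern that GCC 5 and clang 5 recognize; on little-endian
x86 it compiles down to a single load plus bswap (or movbe):

  #include <cstdint>

  /* Read a 4-byte big-endian value byte by byte, using bitwise OR and
     shifts by multiples of 8 bits -- the pattern the compilers detect. */
  static inline std::uint32_t read_be32(const unsigned char *b)
  {
    return ((std::uint32_t) b[0] << 24) |
           ((std::uint32_t) b[1] << 16) |
           ((std::uint32_t) b[2] << 8)  |
            (std::uint32_t) b[3];
  }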
mi_records_in_range(): Because HA_POS_ERROR cannot be accurately
represented in double (it will be off by one), add an explicit
cast to silence the warning.
Problem:-
When doing a bulk insert with more rows than
MI_MIN_ROWS_TO_DISABLE_INDEXES (100), we try to disable the indexes to
speed up the insert. But the current logic also disables the long unique
indexes.
Solution:- In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH) we do not disable it (see the sketch below).
This commit also refactors the mi_disable_indexes_for_rebuild() function:
since the function is called from only one place, it is inlined into
start_bulk_insert().
mi_clear_key_active() is added to myisamdef.h because it is now also used
in ha_myisam.cc.
(The same is done for the Aria storage engine.)
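A minimal self-contained sketch of the rule (the names and types below are
illustrative, not the actual MyISAM structures): when picking which keys to
deactivate for a bulk insert, keys using the long hash algorithm are skipped
so that the long unique constraint keeps being enforced.

  #include <cstdint>

  enum key_algorithm { KEY_ALG_BTREE, KEY_ALG_LONG_HASH };

  /* Assumes at most 64 keys, matching the width of the key bitmap. */
  static void disable_keys_for_bulk_insert(const key_algorithm *alg,
                                           unsigned keys,
                                           std::uint64_t *active_key_map)
  {
    for (unsigned i= 0; i < keys; i++)
    {
      if (alg[i] == KEY_ALG_LONG_HASH)
        continue;                      /* keep the long unique index enabled */
      *active_key_map&= ~(1ULL << i);  /* disable; rebuilt after the insert */
    }
  }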
The fix consists of three commits backported from 10.3:
1) Cleanup isnan() portability checks
(cherry picked from commit 7ffd7fe962)
2) Cleanup isinf() portability checks
Original problem reported by Wlad: re-compilation of 10.3 on top of 10.2
build would cache undefined HAVE_ISINF from 10.2, whereas it is expected
to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
(cherry picked from commit bc469a0bdf)
3) Use std::isfinite in C++ code
This is an addition to the parent revision, fixing build failures.
(cherry picked from commit 54999f4e75)
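A trivial illustration of the change (hypothetical helper, not from the
tree): C++ code now uses the standard <cmath> functions instead of the C
macros, which sidesteps the portability checks altogether.

  #include <cmath>

  // std::isfinite() is false for both NaN and +/-infinity.
  static bool value_is_representable(double v)
  {
    return std::isfinite(v);
  }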
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
a DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will ask partitions
how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (actually it used
uint for the buffer size calculation and would have had an underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size (see the sketch after this list).
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call handler->clone
with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() functions of the affected storage engines ignore
the name argument and get the name elsewhere.
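A hedged sketch of the size-query handshake described above (function and
parameter names are illustrative, not the server's identifiers): a caller
passing *buffer_size == 0 is asking how much memory is needed, and the
answer is capped at roughly @@mrr_buffer_size.

  #include <algorithm>
  #include <cstddef>

  static unsigned answer_buffer_size_query(unsigned *buffer_size,
                                           std::size_t bytes_needed,
                                           std::size_t mrr_buffer_size_limit)
  {
    if (*buffer_size == 0)
    {
      /* The caller (ha_partition) is probing for the required amount of
         memory. Report what is needed, capped at the limit, instead of
         echoing 0 back -- echoing 0 made the caller reuse a single buffer
         for every partition. */
      *buffer_size= (unsigned) std::min(bytes_needed, mrr_buffer_size_limit);
    }
    return *buffer_size;
  }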
For CMAKE_BUILD_TYPE=Debug, the default MYSQL_MAINTAINER_MODE=AUTO
implies -Werror along with other flags in cmake/maintainer.cmake,
which would break the debug builds when CMAKE_CXX_FLAGS include -O2.
This fix includes a backport of 6dd3f24090
from MariaDB 10.3.
Limit increased from 1000 to 2000.
Avoid stack overflow by storing keys and pages on the stack in
recursive functions only if there is plenty of space left on it.
Other things:
- Use less stack space for b-tree operations as we now only allocate as
much space as needed instead of always allocating HA_MAX_KEY_LENGTH.
- Replaced most usage of my_safe_alloca() in Aria with the stack_alloc
interface.
- Moved my_setstacksize() to mysys/my_pthread.c
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
When using field_conv(), which is called in case of a field1=field2 copy in
fill_records(), the full varstring was copied, including uninitialized bytes.
This caused valgrind to complain about usage of uninitialized bytes when
using Aria static length records.
Fixed by not using memcpy when copying varstrings but instead copying just
the real bytes.
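A self-contained sketch of the idea (the 1- or 2-byte little-endian length
prefix matches the usual varstring layout; the function itself is
illustrative): copy only the length prefix plus the actual data bytes, so
the uninitialized tail of the buffer is never touched.

  #include <cstddef>
  #include <cstring>

  static void copy_varstring(unsigned char *to, const unsigned char *from,
                             unsigned length_bytes /* 1 or 2 */)
  {
    std::size_t data_length= (length_bytes == 1)
      ? from[0]
      : (std::size_t) from[0] | ((std::size_t) from[1] << 8);
    /* length prefix + real data only; the rest of the buffer is left alone */
    std::memcpy(to, from, length_bytes + data_length);
  }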
MDEV-19486 and one more similar bug appeared because the handler::write_row()
interface allows the storage engine to modify the row buffer. But the callers
are not prepared for that, so similar bugs remain possible in the future.
handler::write_row():
handler::ha_write_row(): make the argument const
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings,
mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga, so I gave up on it.
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
Make the live checksum be returned in handler::info(),
and the slow table-scan checksum be calculated in handler::checksum().
Part of
MDEV-16249 CHECKSUM TABLE for a spider table is not parallel and saves all data in memory in the spider head by default
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for the SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c,
ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of nontrivial structures. Fixed as:
* the warning disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() which
does the bzero(), which is safe for these classes, but any other
bzero() will still cause a warning (see the sketch after this list)
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* remove constructor from READ_RECORD() to make it trivial
* replace some memcpy() with c++ copy assignments
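A hedged illustration of the reset() approach mentioned above (the class is
a stand-in, not the real TABLE_LIST): the class gets an explicit reset()
that performs the zeroing through a void* cast, so any stray bzero()/memset()
of the class elsewhere still triggers the compiler warning.

  #include <cstring>

  struct Table_list_like        // stand-in for TABLE, TABLE_SHARE, TABLE_LIST
  {
    void reset()
    {
      // Zeroing is known to be safe for this particular class; the void*
      // cast silences -Wclass-memaccess here and only here.
      std::memset((void*) this, 0, sizeof(*this));
    }
    /* ... members ... */
  };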
This was caused by a combination of factors:
* MyISAM/Aria temporary tables historically never saved the state
to disk (MYI/MAI), because the state never needed to persist
* certain ALTER TABLE operations modify the original TABLE structure
and if they fail, the original table has to be reopened to
revert all changes (m_needs_reopen=1)
as a result, when ALTER fails and MyISAM/Aria temp table gets reopened,
it reads the stale state from the disk.
As a fix, MyISAM/Aria tables now *always* write the state to disk
on close, *unless* HA_EXTRA_PREPARE_FOR_DROP was done first. And
the server now always does HA_EXTRA_PREPARE_FOR_DROP before dropping
a temporary table.
don't do anything special for stored generated columns
in MyISAM repair code.
add an assert that if there are virtual indexed columns, they
_must_ be beyond the file->s->base.reclength boundary
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases the usage of such filters reduces
the number of disk seeks spent on fetching table rows.
In this implementation the choice of what possible filter to be applied
(if any) is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides, this patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections of the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
Part of MDEV-5336 Implement LOCK FOR BACKUP
The idea is that instead of waiting in close_cached_tables() for all
tables to be closed, we instead call flush_tables() that does:
- Flush not used objects in table cache to free memory
- Collect all tables that are open
- Call HA_EXTRA_FLUSH on the objects, to get them into "closed state"
- Added HA_EXTRA_FLUSH support to archive and CSV
- Added multi-user protection to HA_EXTRA_FLUSH in MyISAM and Aria
The benefit compared to old code is:
- FTWRL doesn't have to wait for long running read operations or
open HANDLER's
- Made output to be aligned in aria_chk -d
- Aria engine error texts are now written instead of "Undefined error"
- When running with --check --force, tables with wrong TRN's but otherwise
correct are now zerofilled
- Fixed several bugs in check and recovery related to fulltext
- When doing recovery, store highest found TRID in aria_control_file
Before this, the
- Changed ERROR to WARNING for MyISAM/Aria messages
that are warnings in the check utilities.
This affects for example "client is using or
hasn't closed the table properly".
- Print "Table is fixed" if the check succeeded in
fixing the table.
We do not accept:
1. We did not have this problem (fixed earlier and better)
d982e717ab Bug#27510150: MYSQLDUMP FAILS FOR SPECIFIC --WHERE CLAUSES
2. We do not have such options (a DBUG_ASSERT put just in case)
bbc2e37fe4 Bug#27759871: BACKRONYM ISSUE IS STILL IN MYSQL 5.7
3. Serg fixed it in another way in this release:
e48d775c6f Bug#27980823: HEAP OVERFLOW VULNERABILITIES IN MYSQL CLIENT LIBRARY
Description:- MyISAM table gets corrupted with concurrent
executions of INSERT, DELETE statements in a particular
sequence.
Analysis:- Due to the inappropriate manipulation of w_lock
and r_lock associated with a MyISAM table, there arises a
scenario where the table's state information becomes
invalid.
Fix:- A lock is introduced to resolve this issue.
- Removed test if HA_FT_WTYPE == HA_KEYTYPE_FLOAT as this never worked
(HA_KEYTYPE_FLOAT is an enum)
- Define HA_FT_MAXLEN to 126 (was tested before but never defined)
Modern compilers (such as GCC 8) emit warnings that the
'register' keyword is deprecated and not valid C++17.
Let us remove most use of the 'register' keyword.
Code in 'extra/' is not touched.
This bug happened due to a defect in the implementation of the handler
function ha_delete_all_rows() for the Aria engine.
The function maria_delete_all_rows() truncated the table, but it didn't
touch the write cache, so the cache's write offset was not reset.
In a scenario like the one in the function st_select_lex_unit::exec_recursive,
where first all records were deleted from the table and then several new
records were added, some metadata became inconsistent with the state of
the cache. As a result the table scan function could not read records
at the end of the table.
The same defect could be found in the implementation of ha_delete_all_rows()
for the MyISAM engine, mi_delete_all_rows().
Additionally, late instantiation is now used for the temporary table that
stores the rows produced by each new iteration when executing a recursive CTE.
The main reason was to make it easier to print the above structures in
a debugger. An additional benefit is that I was able to use the same
defines for both structures, which simplifies some code.
Most of the code is just removing Alter_info:: and Alter_inplace_info::
from alter table flags.
The following renames were done:
HA_ALTER_FLAGS -> alter_table_operations
CHANGE_CREATE_OPTION -> ALTER_CHANGE_CREATE_OPTION
Alter_info::ADD_INDEX -> ALTER_ADD_INDEX
DROP_INDEX -> ALTER_DROP_INDEX
ADD_UNIQUE_INDEX -> ALTER_ADD_UNIQUE_INDEX
DROP_UNIQUE_INDEX -> ALTER_DROP_UNIQUE_INDEX
ADD_PK_INDEX -> ALTER_ADD_PK_INDEX
DROP_PK_INDEX -> ALTER_DROP_PK_INDEX
Alter_info::ALTER_ADD_COLUMN -> ALTER_PARSE_ADD_COLUMN
Alter_info::ALTER_DROP_COLUMN -> ALTER_PARSE_DROP_COLUMN
Alter_inplace_info::ADD_INDEX -> ALTER_ADD_NON_UNIQUE_NON_PRIM_INDEX
Alter_inplace_info::DROP_INDEX -> ALTER_DROP_NON_UNIQUE_NON_PRIM_INDEX
Other things:
- Added typedef alter_table_operations for alter table flags
- DROP CHECK CONSTRAINT can now be done online
- Added checks for Aria tables in alter_table_online.test
- alter_table_flags now takes a ulonglong as argument.
- Don't support online operations if checksum option is used.
- sql_lex.cc doesn't add ALTER_ADD_INDEX if index is not created
The merge only covered 10.1 up to
commit 4d248974e0.
Actually merge the changes up to
commit 0a534348c7.
Also, remove the unused InnoDB field trx_t::abort_type.
Handle string length as size_t, consistently (almost always:))
Change function prototypes to accept size_t, where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
This will make it easier to see how memory allocation is done when debugging
with either DBUG or gdb.
Will especially help when debugging stored procedures.
The main change is a name argument as the second argument to init_alloc_root()
and init_sql_alloc().
Other things:
- Added DBUG_ENTER/EXIT to some Virtual_tmp_table functions
don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call,
do it once in open() (because they don't change) on TABLE::mem_root
(so they stay valid until the table is closed)
NOT UPDATE FILE ON DISK
Description:- When the server variable, "myisam_use_mmap" is
enabled, MyISAM tables on windows are not updating the file
on disk even when the server variable "flush" is set to 1.
This in turn makes the table corrupted when a power failure
is encountered.
Analysis:- When the server variable "myisam_use_mmap" is set,
files of MyISAM tables will be memory mapped using the OS
APIs mmap()/munmap()/msync() on Unix and CreateFileMapping()
/UnmapViewOfFile()/FlushViewOfFile() on Windows. msync() and
FlushViewOfFile() are responsible for flushing the changes
made to the in-core copy of a file that was mapped into
memory using mmap()/CreateFileMapping() back to the
file system. When that flush happens is determined by the OS
unless msync()/FlushViewOfFile() is called explicitly.
When the server variables "myisam_use_mmap" and "flush" are
enabled, MyISAM only flushes the files from the file system
cache to disk using "mysql_file_sync()" and not the memory
mapped file from memory to the FS cache using "my_msync()".
["my_msync()" in turn calls msync() on Unix and
FlushViewOfFile() on Windows.]
Fix:- As part of the fix, if server variable
"myisam_use_mmap" is enabled along with "flush",
"my_msync()" is invoked to flush the data in memory to file
system cache and followed by "mysql_file_sync()" which will
flush the data from file system cache to disk.
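A simplified POSIX sketch of the flushing order described in the fix (the
server uses its own wrappers my_msync() and mysql_file_sync(); the function
below is only an illustration): first push the memory-mapped pages to the
file system cache, then push the file system cache to disk.

  #include <sys/mman.h>
  #include <unistd.h>
  #include <cstddef>

  static int flush_mmapped_table(void *mapped, std::size_t length, int fd)
  {
    if (msync(mapped, length, MS_SYNC)) /* in-core pages -> file system cache */
      return -1;
    return fsync(fd);                   /* file system cache -> disk */
  }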
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and long cast in DBUG)
- Also fix printf-format warnings
Make the above mentioned warnings fatal.
- fix pthread_join on Windows to set return value.
Before this patch running full mtr generated some 70 cores (at least
on systemd). Now no cores should be generated.
- Changed DBUG_ABORT()'s used by mysql-test-run to DBUG_SUICIDE()
- Changed DBUG_ABORT() used to crash server with core to DBUG_ASSERT(0)
- DBUG_ASSERT now flushes DBUG files
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed a lot of include my_global.h from include files
- Removed include's of some files that my_global.h automatically includes
- Removed duplicated include's of my_sys.h
- Replaced include my_config.h with my_global.h
end_io_cache() uses uninitialized values from new_data_cache.
As such we initialize the buffer to 0 and check for this before calling
end_io_cache() on it.
Thanks Sergey Vojtovich for the review and for this solution.
Found by Coverity (ref 972481).
These self-references were previously used to avoid having to check the
IO_CACHE's type. However, a benchmark shows that on a stock x86 5930K,
the type comparison is marginally faster than the double pointer dereference.
For 40 billion my_b_tell calls, the difference is 0.1 seconds in favor of
performing the type check. (Basically there is no measurable difference.)
To prevent bugs from copying the structure using the equals(=) operator,
and having to do the bookkeeping manually, remove these "convenience"
variables.
MyISAM only allows online alter if autoincrement didn't change.
MyISAM detects that by comparing the new autoinc value from create_info
with the old one stored in the MYI file. But in partitioned tables,
create_info->auto_increment_value is for the whole table, max of
autoinc values of individual MYI partitions. So *some* MYI partitions
will inevitably think that alter table changes auto_increment value
and will deny online alter.
Fix: only compare autoinc values, if the user has used AUTO_INCREMENT
in the ALTER TABLE statement.
The sole purpose of handlerton::release_temporary_latches and its wrapper
function was to release the InnoDB adaptive hash index latch
(btr_search_latch).
When the btr_search_latch was split into an array of latches
in MySQL 5.7.8 as part of the Oracle Bug#20985298 fix, the "caching"
of the latch across storage engine API calls was removed. As part of that,
the function trx_search_latch_release_if_reserved() was changed to an
assertion and the function trx_reserve_search_latch_if_not_reserved()
was removed, and handlerton::release_temporary_latches() practically
became a no-op.
Note: MDEV-12121 replaced the function
trx_search_latch_release_if_reserved()
with the more appropriately named macro trx_assert_no_search_latch().
This excludes MDEV-12472 (InnoDB should accept XtraDB parameters,
warning that they are ignored). In other words, MariaDB 10.3 will not
recognize any XtraDB-specific parameters.
Don't rebuild the table for ALTER TABLE delay_key_write changes.
After that, delay_key_write value in .frm may differ from the
value in .MYI. We'll do what .frm says.
Do not silence uncertain cases, or fix any bugs.
The only functional change should be that ha_federated::extra()
is not calling DBUG_PRINT to report an unhandled case for
HA_EXTRA_PREPARE_FOR_DROP.
This affected mainly MyISAM and Aria engines.
Also fixed that end_bulk_insert() detects errors from
internal mi_end_bulk_insert() and ma_end_bulk_insert()
- delete_tree() and delete_tree_element() now have an
extra argument that marks whether future calls to
tree->free should be ignored.
- tree->free changed to a function returning int, to be
able to signal errors.
- Restored the deleting flag in MyISAM that was accidentally
disabled in mi_extra(PREPARE_FOR_DROP)
The issue was that my_errno was not set properly when a repair was killed,
which confused the rpl_killed_ddl script.
I also added an extra test line in varchar.inc to ensure we don't give
duplicate error rows.
bunch of bugs when external_lock() fails on unlock:
* mi_lock_database() used mi_mark_crashed() under share->intern_lock,
but mi_mark_crashed() itself locks this mutex.
* handler::close() required the table to be unlocked, but a failed
external_lock didn't count as an unlock
* mysql_unlock_tables() ignored all unlock errors, but they still set
the error status in stmt_da.
SYMLINK CHECK RACE CONDITIONS
ANALYSIS:
=========
A potential defect exists in the handling of CREATE
TABLE .. DATA DIRECTORY / INDEX DIRECTORY which could allow
a user to gain access to another user's table or a system
table.
FIX:
====
The lstat and fstat output of the target files is now
stored, which helps in determining the identity of the target
files, thus preventing unauthorized access to other
files.
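A generic, self-contained sketch of the identity-check technique (not the
server's code): record the device/inode of the file that was created, and on
a later access verify that the path still refers to that same file before
trusting it.

  #include <fcntl.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Returns an open fd for 'path' only if it is still the file identified
     by 'expected' (st_dev/st_ino recorded when the file was created). */
  static int open_if_same_file(const char *path, const struct stat *expected)
  {
    struct stat now;
    int fd= open(path, O_RDWR);
    if (fd < 0)
      return -1;
    if (fstat(fd, &now) ||
        now.st_dev != expected->st_dev || now.st_ino != expected->st_ino)
    {
      close(fd);           /* the path now points at a different file */
      return -1;
    }
    return fd;
  }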
Other things:
- Ensure that ut_d() is set to EXPR if ut_ad() is DEBUG_ASSERT().
If not, we will get a crash in purge_sys_t::~purge_sys_t(), as
that ut_ad() code expects that the ut_d() code has been executed.
Benefits of this patch:
- Removed a lot of calls to strlen(), especially for field_string
- Strings generated by parser are now const strings, less chance of
accidentally changing a string
- Removed a lot of calls with LEX_STRING as parameter (changed to pointer)
- More uniform code
- Item::name_length was not kept up to date. Now fixed
- Several bugs found and fixed (Access to null pointers,
access of freed memory, wrong arguments to printf like functions)
- Removed a lot of casts from (const char*) to (char*)
Changes:
- This caused some ABI changes
- lex_string_set now uses LEX_CSTRING
- Some functions are now taking const char* instead of char*
- Create_field::change and after changed to LEX_CSTRING
- handler::connect_string, comment and engine_name() changed to LEX_CSTRING
- Checked printf() related calls to find bugs. Found and fixed several
errors in old code.
- A lot of changes from LEX_STRING to LEX_CSTRING, especially related to
parsing and events.
- Some changes from LEX_STRING and LEX_STRING & to LEX_CSTRING*
- Some changes for char* to const char*
- Added printf argument checking for my_snprintf()
- Introduced null_clex_str, star_clex_string, temp_lex_str to simplify
code
- Added item_empty_name and item_used_name to be able to distinguish between
items that were given an empty name and items that were not given a name.
This is used in sql_yacc.yy to know when to give an item a name.
- select table_name."*' is no longer the same as table_name.*
- removed not used function Item::rename()
- Added comparison of item->name_length before some calls to
my_strcasecmp() to speed up comparison
- Moved Item_sp_variable::make_field() from item.h to item.cc
- Some minimal code changes to avoid copying to const char *
- Fixed wrong error message in wsrep_mysql_parse()
- Fixed wrong code in find_field_in_natural_join() where real_item() was
set when it shouldn't have been
- ER_ERROR_ON_RENAME was used with extra arguments.
- Removed some (wrong) ER_OUTOFMEMORY, as alloc_root will already
give the error.
TODO:
- Check possible unsafe casts in plugin/auth_examples/qa_auth_interface.c
- Change code to not modify LEX_CSTRING for database name
(as part of lower_case_table_names)
This was done to make it clear that an update_row() should not change the
row.
This was not done for handler::write_row() as this function still needs
to update auto_increment values in the row. This should at some point
be moved to handler::ha_write_row() after which write_row can also have
const arguments.
Working features:
CREATE OR REPLACE [TEMPORARY] SEQUENCE [IF NOT EXISTS] name
[ INCREMENT [ BY | = ] increment ]
[ MINVALUE [=] minvalue | NO MINVALUE ]
[ MAXVALUE [=] maxvalue | NO MAXVALUE ]
[ START [ WITH | = ] start ] [ CACHE [=] cache ] [ [ NO ] CYCLE ]
ENGINE=xxx COMMENT=".."
SELECT NEXT VALUE FOR sequence_name;
SELECT NEXTVAL(sequence_name);
SELECT PREVIOUS VALUE FOR sequence_name;
SELECT LASTVAL(sequence_name);
SHOW CREATE SEQUENCE sequence_name;
SHOW CREATE TABLE sequence_name;
CREATE TABLE sequence-structure ... SEQUENCE=1
ALTER TABLE sequence RENAME TO sequence2;
RENAME TABLE sequence TO sequence2;
DROP [TEMPORARY] SEQUENCE [IF EXISTS] sequence_names
Missing features:
- SETVAL(value,sequence_name), to be used with replication.
- Check replication, including checking that sequence tables are marked
not transactional.
- Check that a commit happens for NEXT VALUE that changes table data (may
already work)
- ALTER SEQUENCE. ANSI SQL version of setval.
- Share identical sequence entries to not add things twice to table list.
- testing insert/delete/update/truncate/load data
- Run and fix Alibaba sequence tests (part of mysql-test/suite/sql_sequence)
- Write documentation for NEXT VALUE / PREVIOUS_VALUE
- NEXTVAL in DEFAULT
- Ensure that NEXTVAL in DEFAULT uses database from base table
- Two NEXTVAL for same row should give same answer.
- Oracle syntax sequence_table.nextval, without any FOR or FROM.
- Sequence tables are treated as 'not read constant tables' by SELECT; it
would be better if we had a separate list for sequence tables so that
select doesn't know about them, except if referred to with FROM.
Other things done:
- Improved output for safemalloc backtrack
- frm_type_enum changed to Table_type
- Removed lex->is_view and replaced with lex->table_type. This allows
us to more easily check if an item is a view, sequence or table.
- Added table flag HA_CAN_TABLES_WITHOUT_ROLLBACK, needed for handlers
that want to support sequences
- Added handler calls:
- engine_name(), to simplify getting engine name for partition and sequences
- update_first_row(), to be able to do efficient sequence implementations.
- Made binlog_log_row() global to be able to call it from ha_sequence.cc
- Added handler variable: row_already_logged, to be able to flag that the
changed row has already been logged to the replication log.
- Added CF_DB_CHANGE and CF_SCHEMA_CHANGE flags to simplify
deny_updates_if_read_only_option()
- Added sp_add_cfetch() to avoid new conflicts in sql_yacc.yy
- Moved code for add_table_options() out from sql_show.cc::show_create_table()
- Added String::append_longlong() and used it in sql_show.cc to simplify code.
- Added extra option to dd_frm_type() and ha_table_exists to indicate if
the table is a sequence. Needed by DROP SEQUENCE to not drop a table.
MyISAM in compute_vcols() - which is used only in mi_check code -
was computing indexed vcols into an internally allocated buffer
(not record[0]) and the buffer was calculated to be long enough to fit
every keyseg (a keyseg knows where its value in a record buffer is
and the length of the value).
This logic didn't work for prefix keys, because the keyseg length is the
length of a prefix, but the record buffer needs to fit the complete
value of a vcol. In this bug MyISAM was writing a 2K varchar
into a buffer that was too short.
Also it didn't work for repair-with-keycache, because that code
recalculates all vcols, not only indexed ones.
So, the buffer size (MYISAM_SHARE::vreclength) should include all
vcols' full lengths. But it was calculated in mi_open and low-level
MyISAM code has no knowledge of vcols.
As a fix we now recalculate MYISAM_SHARE::vreclength in
ha_myisam::setup_vcols_for_repair() which is always called
before compute_vcols().
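A self-contained sketch of the recalculation (the real code walks
table->vfield inside ha_myisam::setup_vcols_for_repair(); the types below
are illustrative): the buffer must be long enough for the full value of
every virtual column, not just for the indexed prefix.

  #include <algorithm>
  #include <cstddef>

  struct vcol_position
  {
    std::size_t offset_in_record;  // where the vcol starts in the record buffer
    std::size_t full_length;       // full unpacked length, not the key prefix
  };

  static std::size_t compute_vreclength(std::size_t base_reclength,
                                        const vcol_position *vcols,
                                        std::size_t count)
  {
    std::size_t length= base_reclength;
    for (std::size_t i= 0; i < count; i++)
      length= std::max(length,
                       vcols[i].offset_in_record + vcols[i].full_length);
    return length;
  }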
FT_BOOLEAN_CHECK_SYNTAX_STRING
ISSUE: The my_isalnum macro, used for checking whether a character
is alphanumeric, dereferences an uninitialized pointer
in the default character set structure, resulting in
the server exiting abnormally.
FIX: Used the standard isalnum() function instead of the my_isalnum macro.
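A tiny illustration of the change (hypothetical helper, not the actual
parser code): the standard function does not depend on a character-set
structure having been initialized, and the cast to unsigned char avoids
undefined behaviour for negative char values.

  #include <cctype>

  static int ft_is_word_char(char c)
  {
    return std::isalnum((unsigned char) c);
  }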