- Added the JSON OBJECT specification for PRETTY != 2.
- Fix NULL values not being recognized for nullable JSON columns.
- Issue an error message when a JSON table is created without specifying LRECL if PRETTY != 2 (see the sketch after the file list below).
- Make JSONColumns use a TDBJSON class.
- Make JSON tables use MAPFAM (the memory-mapped file access method).
modified:
filamap.h
filamtxt.h
ha_connect.cc
json.result
tabjson.cpp
tabjson.h
table.cpp
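A minimal sketch of the LRECL check referenced above (the variable names, message text, and placement are assumptions for illustration, not the actual CONNECT source):

    // Hypothetical check at table-creation time: with PRETTY != 2 the
    // file is read record by record, so a record length must be given.
    if (Pretty != 2 && Lrecl == 0) {
      strcpy(g->Message, "LRECL must be specified for PRETTY != 2");
      return true;                  // CONNECT-style: true signals an error
    }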
- Implementing Discovery for the XML table type.
modified:
domdoc.cpp
domdoc.h
ha_connect.cc
libdoc.cpp
plgxml.cpp
plgxml.h
reldef.cpp
reldef.h
tabxml.cpp
tabxml.h
- Providing an error message when creating an ODBC table via discovery returns
columns from more than one table.
modified:
ha_connect.cc
- TableOptionStruct declaration moved from ha_connect.h to mycat.h
to make it easier to use from other classes.
modified:
ha_connect.cc
ha_connect.h
mycat.cc
mycat.h
reldef.cpp
tabmysql.cpp
taboccur.cpp
tabpivot.cpp
tabtbl.cpp
tabutil.cpp
tabxcl.cpp
From the previous branch:
commit eda4928ff122a0845baf5ade83b4aa29244a3a89
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Mon Mar 9 22:34:56 2015 +0100
- Add discovery to JSON tables
When columns are not defined, CONNECT analyses the JSON file to find the column definitions.
This works only on tables that are an array of objects. Pair keys are used to generate the
column names and pair values are used for their definitions. When the LEVEL option is defined
as a non-zero integer, any JPATH is scanned up to the LEVEL value.
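A self-contained sketch of that discovery idea, using a simplified model of parsed rows rather than CONNECT's actual JSON representation:

    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    // Model each object of the top-level array as a key->value map.
    using Row = std::map<std::string, std::string>;

    // Pair keys become column names; the widest pair value seen for a
    // key determines that column's length.
    std::map<std::string, size_t> DiscoverColumns(const std::vector<Row>& rows) {
      std::map<std::string, size_t> cols;
      for (const Row& row : rows)
        for (const auto& [key, val] : row)
          cols[key] = std::max(cols[key], val.size());
      return cols;
    }

For a file like [{"id":1,"name":"foo"}, {"id":2,"name":"quux"}], this model yields columns id and name, each sized by the widest value seen.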
From the current one:
- Fix MDEV-7521 when column names are utf8 encoded (not a general multi-charset fix)
- Add more to JSON discovery processing and UDFs
- Use PlugDup everywhere; it replaces the PlugSubAlloc + strcpy pattern (see the sketch below).
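A before/after illustration, assuming the usual CONNECT signatures (PlugSubAlloc allocating from the work area, PlugDup doing the allocation and the copy in one call):

    // Before: allocate in the work area, then copy.
    char *p = (char*)PlugSubAlloc(g, NULL, strlen(src) + 1);
    strcpy(p, src);

    // After: one call replaces the pair.
    p = PlugDup(g, src);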
When the server starts up, check if the master-bin.state file was lost.
If it was, recover its contents by scanning the last binlog file, thus
avoiding running with a corrupt binlog state.
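In outline, the startup check could look like this (the helper names are hypothetical; the real logic lives in the binlog code):

    // Hypothetical helpers standing in for the real binlog-state routines.
    bool load_binlog_state_file(const char *path);      // false if missing
    void scan_last_binlog_for_state(const char *binlog);

    void recover_binlog_state_if_lost(const char *last_binlog)
    {
      if (!load_binlog_state_file("master-bin.state"))
        // State file lost: rebuild it from the last binlog instead of
        // silently starting with an empty, corrupt binlog state.
        scan_last_binlog_for_state(last_binlog);
    }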
Temporary table count fix. The temporary table counter was being incremented
even when the table was not actually created (when do_not_open was passed
as TRUE to create_tmp_table).
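The shape of the fix, as a sketch (the counter accessor name is an assumption):

    // Count the temporary table only when it is actually created, not
    // when create_tmp_table() is asked to defer opening it.
    if (!do_not_open)
      thd->inc_status_created_tmp_tables();   // assumed accessor name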
Partially cherry-picked from mysql/5.6.
No test case (the mysql/5.6 test case is useless; a correct
test case would use too much memory).
commit e061985813db54948f99892d89f7e076242473a5
Author: <Dao-Gang.Qu@sun.com>
Date: Tue Jun 1 15:02:22 2010 +0800
Bug #49931 Incorrect type in read_log_event error
Bug #49932 mysqlbinlog max_allowed_packet hard coded to 1GB
If the COMMIT or XID event in an event group was somehow missing, the
parallel replication code handling this case was insufficient, leading to a
server deadlock.
In parallel replication, don't roll back inside ha_commit_trans() in case of
error.
The rollback will be done later, but the parallel replication code needs to
run unmark_start_commit() before the rollback to properly control the
sequencing of transactions.
I did not manage to come up with a reliable automatic test case for this, but
I tested it manually.
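The required ordering on the error path, sketched (simplified; rgi is the transaction's rpl_group_info):

    // First signal that this transaction will not commit, so following
    // transactions in the parallel queue are not left waiting on it...
    rgi->unmark_start_commit();
    // ...and only then roll back, outside of ha_commit_trans().
    trans_rollback(thd);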
(Without this, realpath() sometimes failed for me, returning undef for the
default vardir. This in turn caused mysql-test-run.pl to delete the source
mysql-test/ directory.)
Backport from 10.1; it's not nice to get one's source directory nuked
by a rogue mysql-test-run.
cherry-pick the upstream fix
commit d4ba10184cd7bde9c31c610e664ecd0c93605c46
Author: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
Date: Wed Jul 2 11:34:11 2014 +0530
Bug#17453826: ASSERTION ERROR WHEN SETTING FUTURE BINLOG
FILE/POS WITH SEMISYNC
Problem:
========
When DMLs are in progress on the master, stopping a slave and
setting the binlog name/pos ahead will cause an assert on the
master.
...
Item_func::print() prints itself as name + "(" + arguments + ")".
Normally that works, but Item_func_interval internally implements its
arguments as one single Item_row. Item_row prints itself as
"(" + values + ")". As a result, "INTERVAL(1,2)" was being printed
as "INTERVAL((1,2))". Fixed with a custom Item_func_interval::print().
Redefine FT_KEYPART so that it does not conflict with hash join.
Hash join stores field->field_index in KEYUSE::keypart, so we must
use a value of FT_KEYPART that's greater than MAX_FIELDS.
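Expressed as code, the constraint is simply (the exact offset beyond MAX_FIELDS is illustrative):

    // Anything above MAX_FIELDS can never collide with a field_index
    // stored in KEYUSE::keypart by hash join.
    #define FT_KEYPART (MAX_FIELDS + 10)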
The order of initialisation during server startup was incorrect. The slave
threads were started before the parallel replication worker thread pool was
initialised, allowing a race where uninitialised data could be accessed.
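In outline, the fix is an ordering change at startup (the function names below are placeholders, not the actual ones):

    init_parallel_worker_thread_pool();  // placeholder: pool must exist first
    start_slave_threads();               // placeholder: may schedule work on it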