BitKeeper/etc/logging_ok:
  auto-union
include/mysql.h:
  Auto merged
libmysql/libmysql.c:
  Auto merged
mysql-test/r/multi_update.result:
  Auto merged
mysql-test/t/multi_update.test:
  Auto merged
mysys/Makefile.am:
  Auto merged
sql/field.cc:
  Auto merged
sql/item_cmpfunc.cc:
  Auto merged
sql/item_func.cc:
  Auto merged
sql/item_func.h:
  Auto merged
sql/log_event.cc:
  Auto merged
sql/mysql_priv.h:
  Auto merged
sql/mysqld.cc:
  Auto merged
sql/sql_class.h:
  Auto merged
sql/sql_parse.cc:
  Auto merged
sql/sql_select.cc:
  Auto merged
sql/share/english/errmsg.txt:
  Auto merged
sql/share/italian/errmsg.txt:
  Auto merged
sql/sql_show.cc:
  Auto merged
sql/sql_table.cc:
  Auto merged
sql/sql_udf.cc:
  Auto merged
sql/sql_yacc.yy:
  Auto merged
Commit 8ce6c9fc6c by unknown, 2003-01-04 15:40:55 +02:00
100 changed files with 2666 additions and 852 deletions

View file

@ -68,6 +68,7 @@ peter@mysql.com
ram@gw.udmsearch.izhnet.ru
ram@mysql.r18.ru
ram@ram.(none)
ranger@regul.home.lan
root@x3.internalnet
salle@banica.(none)
salle@geopard.(none)
@ -95,6 +96,7 @@ venu@myvenu.com
venu@work.mysql.com
vva@eagle.mysql.r18.ru
vva@genie.(none)
walrus@kishkin.ru
walrus@mysql.com
wax@mysql.com
worm@altair.is.lan

View file

@ -1585,7 +1585,7 @@ fe 00 . .
@node 4.1 protocol changes,,,
@section Changes to 4.0 protocol in 4.1
All basic package handling is identical to 4.0. When communication
All basic packet handling is identical to 4.0. When communication
with an old 4.0 or 3.x client we will use the old protocol.
The new things that we support with 4.1 are:
@ -1596,7 +1596,7 @@ Warnings
@item
Prepared statements
@item
Binary protocol (will be much faster than the current protocol that
Binary protocol (will be faster than the current protocol that
converts everything to strings)
@end itemize
@ -1617,15 +1617,15 @@ results will sent as binary (low-byte-first).
@end itemize
@node 4.1 field package,,,
@section 4.1 field description package
@node 4.1 field packet,,,
@section 4.1 field description packet
The field description package is sent as a response to a query that
contains a result set. It can be distinguished from a ok package by
the fact that the first byte can't be 0 for a field package.
@xref {4.1 ok package}.
The field description packet is sent as a response to a query that
contains a result set. It can be distinguished from an ok packet by
the fact that the first byte can't be 0 for a field packet.
@xref {4.1 ok packet}.
The header package has the following structure:
The header packet has the following structure:
@multitable @columnfractions .10 .90
@item Size @tab Comment
@ -1634,7 +1634,7 @@ The header package has the following structure:
uses this to send the number of rows in the table)
@end multitable
This package is always followed by a field description set.
This packet is always followed by a field description set.
@xref{4.1 field desc}.
@node 4.1 field desc,,,
@ -1655,17 +1655,17 @@ The field description result set contains the meta info for a result set.
@end multitable
@node 4.1 ok package,,,
@section 4.1 ok package
@node 4.1 ok packet,,,
@section 4.1 ok packet
The ok package is the first that is sent as an response for a query
The ok packet is the first packet sent as a response to a query
that didn't return a result set.
The ok package has the following structure:
The ok packet has the following structure:
@multitable @columnfractions .10 .90
@item Size @tab Comment
@item 1 @tab 0 ; Marker for ok package
@item 1 @tab 0 ; Marker for ok packet
@item 1-9 @tab Affected rows
@item 1-9 @tab Last insert id (0 if one wasn't used)
@item 2 @tab Server status; Can be used by client to check if we are inside a transaction
@ -1681,10 +1681,10 @@ The message is optional. For example for multi line INSERT it
contains a string saying how many rows were inserted / deleted.
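For orientation, a hypothetical C view of the ok packet fields listed above, once the marker byte and the 1-9 byte length-coded integers have been decoded; this is a sketch, not the real client data structure.

/* Illustrative only: decoded contents of a 4.1 ok packet. */
struct ok_packet_sketch {
  unsigned long long affected_rows;   /* 1-9 bytes on the wire            */
  unsigned long long last_insert_id;  /* 1-9 bytes, 0 if none was used    */
  unsigned int       server_status;   /* 2 bytes, e.g. in-transaction bit */
  const char        *message;         /* optional trailing message        */
};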
@node 4.1 end package,,,
@section 4.1 end package
@node 4.1 end packet,,,
@section 4.1 end packet
The end package is sent as the last package for
The end packet is sent as the last packet for
@itemize @bullet
@item
@ -1695,41 +1695,42 @@ End of parameter type information
End of result set
@end itemize
The end package has the following structure:
The end packet has the following structure:
@multitable @columnfractions .10 .90
@item Size @tab Comment
@item 1 @tab 254 ; Marker for EOF package
@item 1 @tab 254 ; Marker for EOF packet
@item 2 @tab Warning count
@item 2 @tab Status flags (For flags like SERVER_STATUS_MORE_RESULTS)
@end multitable
Note that a normal package may start with byte 254, which means
Note that a normal packet may start with byte 254, which means
'length stored in 9 bytes'. One can distinguish between these cases
by checking whether the packet length is < 9 bytes (in which case it's an end
packet).
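A minimal C sketch of the disambiguation rule above; the function name is illustrative.

#include <stddef.h>

/* Byte 254 only marks an end (EOF) packet when the whole packet is
   shorter than 9 bytes; otherwise it is the first byte of a
   length-coded number. */
static int is_end_packet(const unsigned char *packet, size_t packet_len)
{
  return packet_len < 9 && packet[0] == 254;
}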
@node 4.1 error package
@section 4.1 error package.
@node 4.1 error packet
@section 4.1 error packet.
The error package is sent when something goes wrong.
The error package has the following structure:
The error packet is sent when something goes wrong.
The error packet has the following structure:
@multitable @columnfractions .10 .90
@item Size @tab Comment
@item 1 @tab 255 Error package marker
@item 1 @tab 255 Error packet marker
@item 2 @tab Error code
@item 1-255 @tab Null terminated error message
@end multitable
The client/server protocol is designed in such a way that a package
can only start with 255 if it's an error package.
The client/server protocol is designed in such a way that a packet
can only start with 255 if it's an error packet.
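A hedged C sketch of reading the error packet laid out above; the struct and function are illustrative, not part of the MySQL sources, and the 2-byte error code is assumed to be low-byte-first like the rest of the protocol.

#include <stddef.h>

struct err_packet_sketch {
  unsigned int code;      /* 2 byte error code             */
  const char  *message;   /* null-terminated error message */
};

static int parse_error_packet(const unsigned char *p, size_t len,
                              struct err_packet_sketch *out)
{
  if (len < 4 || p[0] != 255)              /* 255 = error packet marker */
    return -1;
  out->code    = (unsigned int) p[1] | ((unsigned int) p[2] << 8);
  out->message = (const char *) (p + 3);
  return 0;
}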
@node 4.1 prep init,,,
@section 4.1 prepared statement init package
@section 4.1 prepared statement init packet
This is the return package when one sends a query with the COM_PREPARE
This is the return packet when one sends a query with the COM_PREPARE
command.
@multitable @columnfractions .10 .90
@ -1755,8 +1756,8 @@ Note that the above is not yet in 4.1 but will be added this month.
As MySQL can have a parameter 'anywhere' it will in many cases not be
able to provide the optimal information for all parameters.
If number of columns, in the header package, is not 0 then the
prepared statement will contain a result set. In this case the package
If the number of columns in the header packet is not 0, then the
prepared statement will contain a result set. In this case the packet
is followed by a field description result set. @xref{4.1 field desc}.
@ -1768,22 +1769,22 @@ value. One can call mysql_send_long_data() multiple times for the
same parameter; the server will concatenate the results into one big
string.
The server will not require an end package for the string.
The server will not require an end packet for the string.
mysql_send_long_data() is responsible for updating a flag that all data
has been sent (i.e., the last call to mysql_send_long_data() has
the 'last_data' flag set).
This package is sent from client -> server:
This packet is sent from client -> server:
@multitable @columnfractions .10 .90
@item Size @tab Comment
@item 4 @tab Statement handler
@item 2 @tab Parameter number
@item 2 @tab Type of parameter (not used at this point)
@item # @tab data (Rest of package)
@item # @tab data (Rest of packet)
@end itemize
The server will NOT send an @code{ok} or @code{error} package in
The server will NOT send an @code{ok} or @code{error} packet in
response to this. If there are any errors (such as a too big string), one
will get the error when calling execute.
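A sketch of laying out the long-data packet body from the table above; the function name and the low-byte-first encoding of the 4- and 2-byte fields are assumptions, not taken from libmysql.

#include <stddef.h>
#include <string.h>

static size_t build_long_data_body(unsigned char *buf, unsigned long stmt_id,
                                   unsigned int param_no,
                                   const unsigned char *data, size_t len)
{
  buf[0]= (unsigned char) stmt_id;           /* 4 byte statement handler */
  buf[1]= (unsigned char) (stmt_id >> 8);
  buf[2]= (unsigned char) (stmt_id >> 16);
  buf[3]= (unsigned char) (stmt_id >> 24);
  buf[4]= (unsigned char) param_no;          /* 2 byte parameter number  */
  buf[5]= (unsigned char) (param_no >> 8);
  buf[6]= 0;                                 /* 2 byte type, unused yet  */
  buf[7]= 0;
  memcpy(buf + 8, data, len);                /* data: rest of the packet */
  return 8 + len;
}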
@ -1791,13 +1792,13 @@ will get the error when calling execute.
@section 4.1 execute
On execute we send all parameters to the server in a COM_EXECUTE
package.
packet.
The package contains the following information:
The packet contains the following information:
@multitable @columnfractions .30 .70
@item Size @tab Comment
@item (param_count+7)/8 @tab Null bit map
@item (param_count+9)/8 @tab Null bit map (2 bits reserved for protocol)
@item 1 @tab new_parameter_bound flag. Is set to 1 for first
execute or if one has rebound the parameters.
@item 2*param_count @tab Type of parameters (only given if new_parameter_bound flag is 1)
@ -1813,7 +1814,7 @@ The parameters are stored the following ways:
@multitable @columnfractions .20 .10 .70
@item Type @tab Size @tab Comment
@item tynyint @tab 1 @tab One byte integer
@item tinyint @tab 1 @tab One byte integer
@item short @tab 2 @tab
@item int @tab 4 @tab
@item longlong @tab 8 @tab
@ -1822,7 +1823,7 @@ The parameters are stored the following ways:
@item string @tab 1-9 + # @tab Packed string length + string
@end multitable
The result for this will be either an ok package or a binary result
The result for this will be either an ok packet or a binary result
set.
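The size bookkeeping implied by the table above, as a small illustrative helper; 'rebound' stands for the new_parameter_bound flag being set.

#include <stddef.h>

static size_t execute_fixed_part_size(size_t param_count, int rebound)
{
  size_t null_map= (param_count + 9) / 8;          /* 2 bits reserved for the protocol */
  size_t types   = rebound ? 2 * param_count : 0;  /* 2 bytes per parameter type       */

  return null_map + 1 /* new_parameter_bound flag */ + types;
}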
@node 4.1 binary result,,,
@ -1836,11 +1837,11 @@ For each result row:
@item
null bit map with first two bits set to 01 (bit 0,1 value 1)
@item
parameter data, repeated for each not null parameter.
parameter data, repeated for each not null result column.
@end itemize
The idea behind reserving two bits in the null map is that we can
use standard error (first byte 255) and ok packages (first byte 0)
use standard error (first byte 255) and ok packets (first byte 0)
to end a result set.
Except that the null-bit-map is shifted two steps, the server is
@ -1849,6 +1850,18 @@ bound parameters to the client. The server is always sending the data
as type given for 'column type' for respective column. It's up to the
client to convert the parameter to the requested type.
DATETIME, DATE and TIME are sent to the server in a binary format as follows:
@multitable @columnfractions .20 .10 .70
@item Type @tab Size @tab Comment
@item date @tab 1 + 0-11 @tab Length + 2 byte year, 1 byte MMDDHHMMSS, 4 byte billionth of a second
@item datetime @tab 1 + 0-11 @tab Length + 2 byte year, 1 byte MMDDHHMMSS, 4 byte billionth of a second
@item time @tab 1 + 0-14 @tab Length + sign (0 = pos, 1= neg), 4 byte days, 1 byte HHMMDD, 4 byte billionth of a second
@end multitable
The first byte is a length byte and then come all parameters that are
not 0 (always counted from the beginning).
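A hedged sketch of the binary DATETIME layout described above: a length byte, a 2 byte year (assumed low-byte-first), one byte each for month, day, hour, minute and second, and an optional 4 byte sub-second part. The sketch always writes the seven date and time bytes and leaves the sub-second part out.

#include <stddef.h>

static size_t pack_datetime_sketch(unsigned char *buf,
                                   unsigned int year, unsigned int month,
                                   unsigned int day, unsigned int hour,
                                   unsigned int minute, unsigned int second)
{
  unsigned char *p= buf + 1;               /* leave room for the length byte */

  *p++= (unsigned char) year;
  *p++= (unsigned char) (year >> 8);
  *p++= (unsigned char) month;
  *p++= (unsigned char) day;
  *p++= (unsigned char) hour;
  *p++= (unsigned char) minute;
  *p++= (unsigned char) second;
  buf[0]= (unsigned char) (p - buf - 1);   /* length of what follows */
  return (size_t) (p - buf);
}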
@node Fulltext Search, , protocol, Top
@chapter Fulltext Search in MySQL

View file

@ -300,7 +300,7 @@
#define ER_NOT_ALLOWED_COMMAND 1148
"The used command is not allowed with this MySQL version",
#define ER_SYNTAX_ERROR 1149
"You have an error in your SQL syntax",
"You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use",
#define ER_DELAYED_CANT_CHANGE_LOCK 1150
"Delayed insert thread couldn't get requested lock for table %-.64s",
#define ER_TOO_MANY_DELAYED_THREADS 1151
@ -358,7 +358,7 @@
#define ER_CHECK_NO_SUCH_TABLE 1177
"Can't open table",
#define ER_CHECK_NOT_IMPLEMENTED 1178
"The handler for the table doesn't support check/repair",
"The handler for the table doesn't support %s",
#define ER_CANT_DO_THIS_DURING_AN_TRANSACTION 1179
"You are not allowed to execute this command in a transaction",
#define ER_ERROR_DURING_COMMIT 1180
@ -454,4 +454,24 @@
#define ER_DUP_ARGUMENT 1225
"Option '%s' used twice in statement",
#define ER_USER_LIMIT_REACHED 1226
"User '%-64s' has exceeded the '%s' resource (current value: %ld)",
"User '%-.64s' has exceeded the '%s' resource (current value: %ld)",
#define ER_SPECIFIC_ACCESS_DENIED_ERROR 1227
"Access denied. You need the %-.128s privilege for this operation",
#define ER_LOCAL_VARIABLE 1228
"Variable '%-.64s' is a LOCAL variable and can't be used with SET GLOBAL",
#define ER_GLOBAL_VARIABLE 1229
"Variable '%-.64s' is a GLOBAL variable and should be set with SET GLOBAL",
#define ER_NO_DEFAULT 1230
"Variable '%-.64s' doesn't have a default value",
#define ER_WRONG_VALUE_FOR_VAR 1231
"Variable '%-.64s' can't be set to the value of '%-.64s'",
#define ER_WRONG_TYPE_FOR_VAR 1232
"Wrong argument type to variable '%-.64s'",
#define ER_VAR_CANT_BE_READ 1233
"Variable '%-.64s' can only be set, not read",
#define ER_CANT_USE_OPTION_HERE 1234
"Wrong usage/placement of '%s'",
#define ER_NOT_SUPPORTED_YET 1235
"This version of MySQL doesn't yet support '%s'",
#define ER_MASTER_FATAL_ERROR_READING_BINLOG 1236
"Got fatal error %d: '%-.128s' from master when reading data from binary log",

View file

@ -43,7 +43,8 @@ RSC=rc.exe
# PROP Intermediate_Dir "debug"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "__NT__" /D "WIN32" /D "_MBCS" /YX /FD /c
# ADD CPP /nologo /G6 /MTd /W3 /GX /Z7 /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "__NT__" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MTd /W3 /GX /Z7 /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "__NT__" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /FD /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe
@ -66,7 +67,8 @@ LIB32=link.exe -lib
# PROP Intermediate_Dir "release"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /FD /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe
@ -89,7 +91,8 @@ LIB32=link.exe -lib
# PROP Intermediate_Dir "innobase___Win32_nt"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /FD /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe
@ -112,7 +115,8 @@ LIB32=link.exe -lib
# PROP Intermediate_Dir "innobase___Win32_Max_nt"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /GX /O2 /I "../innobase/include" /I "../include" /D "NDEBUG" /D "_LIB" /D "_WIN32" /D "WIN32" /D "_MBCS" /D "MYSQL_SERVER" /FD /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe

View file

@ -122,6 +122,10 @@ SOURCE=.\myrg_queue.c
# End Source File
# Begin Source File
SOURCE=.\myrg_range.c
# End Source File
# Begin Source File
SOURCE=.\myrg_rfirst.c
# End Source File
# Begin Source File

View file

@ -42,7 +42,7 @@ RSC=rc.exe
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_MBCS" /YX /FD /c
# ADD CPP /nologo /G6 /MT /W3 /O2 /I "../include" /I "../" /I "../client" /D "NDEBUG" /D "DBUG_OFF" /D "_CONSOLE" /D "_MBCS" /D "_WINDOWS" /D "MYSQL_SERVER" /FD /c
# ADD CPP /nologo /G6 /MT /W3 /O2 /I "../include" /I "../" /I "../sql" /D "NDEBUG" /D "DBUG_OFF" /D "_CONSOLE" /D "_MBCS" /D "_WINDOWS" /D "MYSQL_SERVER" /FD /c
# ADD BASE RSC /l 0x409 /d "NDEBUG"
# ADD RSC /l 0x409 /d "NDEBUG"
BSC32=bscmake.exe
@ -67,7 +67,7 @@ LINK32=link.exe
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /Gm /GX /ZI /Od /D "WIN32" /D "_DEBUG" /D "_CONSOLE" /D "_MBCS" /YX /FD /GZ /c
# ADD CPP /nologo /G6 /MTd /W3 /Gm /ZI /Od /I "../include" /I "../" /I "../client" /D "_DEBUG" /D "SAFEMALLOC" /D "SAFE_MUTEX" /D "_CONSOLE" /D "_MBCS" /D "_WINDOWS" /D "MYSQL_SERVER" /FD /c
# ADD CPP /nologo /G6 /MTd /W3 /Gm /ZI /Od /I "../include" /I "../" /I "../sql" /D "_DEBUG" /D "SAFEMALLOC" /D "SAFE_MUTEX" /D "_CONSOLE" /D "_MBCS" /D "_WINDOWS" /D "MYSQL_SERVER" /FD /c
# ADD BASE RSC /l 0x409 /d "_DEBUG"
# ADD RSC /l 0x409 /d "_DEBUG"
BSC32=bscmake.exe
@ -88,7 +88,7 @@ LINK32=link.exe
# PROP Default_Filter "cpp;c;cxx;rc;def;r;odl;idl;hpj;bat"
# Begin Source File
SOURCE=.\mysqlbinlog.cpp
SOURCE=..\client\mysqlbinlog.cpp
# End Source File
# End Group
# Begin Group "Header Files"

View file

@ -41,7 +41,8 @@ RSC=rc.exe
# PROP Intermediate_Dir "Release"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /YX /FD /c
# ADD CPP /nologo /MT /W3 /O2 /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "MYSQL_SERVER" /D "HAVE_INNOBASE_DB" /D "DBUG_OFF" /D "USE_TLS" /D "__WIN__" /YX /FD /c
# ADD CPP /nologo /MT /W3 /O2 /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "MYSQL_SERVER" /D "HAVE_INNOBASE_DB" /D "DBUG_OFF" /D "USE_TLS" /D "__WIN__" /FD /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe
@ -64,7 +65,8 @@ LIB32=link.exe -lib
# PROP Intermediate_Dir "Debug"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /Gm /GX /ZI /Od /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /YX /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /Zi /Od /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "MYSQL_SERVER" /D "HAVE_INNOBASE_DB" /D "USE_TLS" /D "__WIN__" /YX /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /Zi /Od /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "MYSQL_SERVER" /D "HAVE_INNOBASE_DB" /D "USE_TLS" /D "__WIN__" /FD /GZ /c
# SUBTRACT CPP /YX
# ADD BASE RSC /l 0x416 /d "_DEBUG"
# ADD RSC /l 0x416 /d "_DEBUG"
BSC32=bscmake.exe

View file

@ -41,7 +41,7 @@ RSC=rc.exe
# PROP Intermediate_Dir "Release"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /YX /FD /c
# ADD CPP /nologo /MT /W3 /O2 /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "HAVE_INNOBASE_DB" /D "DBUG_OFF" /D "USE_TLS" /D "MYSQL_SERVER" /D "__WIN__" /YX /FD /c
# ADD CPP /nologo /MT /W3 /O2 /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "NDEBUG" /D "_MBCS" /D "_LIB" /D "HAVE_BERKELEY_DB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "HAVE_INNOBASE_DB" /D "DBUG_OFF" /D "USE_TLS" /YX /FD /c
# ADD BASE RSC /l 0x416 /d "NDEBUG"
# ADD RSC /l 0x416 /d "NDEBUG"
BSC32=bscmake.exe
@ -64,7 +64,7 @@ LIB32=link.exe -lib
# PROP Intermediate_Dir "Debug"
# PROP Target_Dir ""
# ADD BASE CPP /nologo /W3 /Gm /GX /ZI /Od /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /YX /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /Zi /Od /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "HAVE_INNOBASE_DB" /D "USE_TLS" /YX /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /Zi /Od /I "../include" /I "../regex" /I "../sql" /I "../bdb/build_win32" /D "WIN32" /D "_DEBUG" /D "_MBCS" /D "_LIB" /D "HAVE_BERKELEY_DB" /D "USE_SYMDIR" /D "SIGNAL_WITH_VIO_CLOSE" /D "HAVE_DLOPEN" /D "EMBEDDED_LIBRARY" /D "HAVE_INNOBASE_DB" /D "USE_TLS" /YX /FD /GZ /c
# ADD BASE RSC /l 0x416 /d "_DEBUG"
# ADD RSC /l 0x416 /d "_DEBUG"
BSC32=bscmake.exe
@ -80,13 +80,5 @@ LIB32=link.exe -lib
# Name "mysqlserver - Win32 Release"
# Name "mysqlserver - Win32 Debug"
# Begin Source File
SOURCE=..\sql\set_var.cpp
# End Source File
# Begin Source File
SOURCE=..\sql\sql_load.cpp
# End Source File
# End Target
# End Project

View file

@ -218,7 +218,7 @@ SOURCE=.\derror.cpp
# End Source File
# Begin Source File
SOURCE=..\client\errmsg.c
SOURCE=..\libmysql\errmsg.c
# End Source File
# Begin Source File

View file

@ -1215,7 +1215,9 @@ changequote(, )dnl
hpux10.[2-9][0-9]* | hpux1[1-9]* | hpux[2-9][0-9]*)
changequote([, ])dnl
if test "$GCC" = yes; then
ac_cv_sys_largefile_CFLAGS=-D__STDC_EXT__
case `$CC --version 2>/dev/null` in
2.95.*) ac_cv_sys_largefile_CFLAGS=-D__STDC_EXT__ ;;
esac
fi
;;
# IRIX 6.2 and later require cc -n32.
@ -1330,7 +1332,7 @@ AC_DEFUN(MYSQL_SYS_LARGEFILE,
# Local version of _AC_PROG_CXX_EXIT_DECLARATION that does not
# include #stdlib.h as this breaks things on Solaris
# include #stdlib.h as default as this breaks things on Solaris
# (Conflicts with pthreads and big file handling)
m4_define([_AC_PROG_CXX_EXIT_DECLARATION],
@ -1340,7 +1342,8 @@ m4_define([_AC_PROG_CXX_EXIT_DECLARATION],
'extern "C" void std::exit (int); using std::exit;' \
'extern "C" void exit (int) throw ();' \
'extern "C" void exit (int);' \
'void exit (int);'
'void exit (int);' \
'#include <stdlib.h>'
do
_AC_COMPILE_IFELSE([AC_LANG_PROGRAM([@%:@include <stdlib.h>
$ac_declaration],

View file

@ -2207,10 +2207,15 @@ int run_query(MYSQL* mysql, struct st_query* q, int flags)
/* Add all warnings to the result */
if (!disable_result_log && mysql_warning_count(mysql))
{
MYSQL_RES *warn_res= mysql_warnings(mysql);
MYSQL_RES *warn_res=0;
uint count= mysql_warning_count(mysql);
if (!mysql_real_query(mysql, "SHOW WARNINGS", 13))
{
warn_res=mysql_store_result(mysql);
}
if (!warn_res)
verbose_msg("Warning count is %d but didn't get any warnings\n",
mysql_warning_count(mysql));
verbose_msg("Warning count is %u but didn't get any warnings\n",
count);
else
{
dynstr_append_mem(ds, "Warnings:\n", 10);

View file

@ -130,18 +130,24 @@ AC_PROG_CXX
AC_PROG_CPP
# Print version of CC and CXX compiler (if they support --version)
CC_VERSION=`$CC --version`
CC_VERSION=`$CC --version | sed 1q`
if test $? -eq "0"
then
AC_MSG_CHECKING("C Compiler version");
AC_MSG_RESULT("$CC $CC_VERSION")
else
CC_VERSION=""
fi
CXX_VERSION=`$CXX --version`
CXX_VERSION=`$CXX --version | sed 1q`
if test $? -eq "0"
then
AC_MSG_CHECKING("C++ compiler version");
AC_MSG_RESULT("$CXX $CXX_VERSION")
else
CXX_VERSION=""
fi
AC_SUBST(CXX_VERSION)
AC_SUBST(CC_VERSION)
# Fix for sgi gcc / sgiCC which tries to emulate gcc
if test "$CC" = "sgicc"
@ -960,8 +966,8 @@ case $SYSTEM_TYPE in
*darwin5*)
if test "$ac_cv_prog_gcc" = "yes"
then
CFLAGS="$CFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DHAVE_BROKEN_REALPATH"
CXXFLAGS="$CXXFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ"
CFLAGS="$CFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DHAVE_BROKEN_REALPATH -DFN_NO_CASE_SENCE"
CXXFLAGS="$CXXFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DFN_NO_CASE_SENCE"
MAX_C_OPTIMIZE="-O"
with_named_curses=""
fi
@ -969,8 +975,8 @@ case $SYSTEM_TYPE in
*darwin6*)
if test "$ac_cv_prog_gcc" = "yes"
then
CFLAGS="$CFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DHAVE_BROKEN_REALPATH"
CXXFLAGS="$CXXFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ"
CFLAGS="$CFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DHAVE_BROKEN_REALPATH -DFN_NO_CASE_SENCE"
CXXFLAGS="$CXXFLAGS -traditional-cpp -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DFN_NO_CASE_SENCE"
MAX_C_OPTIMIZE="-O"
fi
;;
@ -1231,7 +1237,7 @@ then
# CC="$CC -Kthread -DOpenUNIX8";
# CXX="$CXX -Kthread -DOpenUNIX8";
CC="$CC -Kthread -DUNIXWARE_7 -DHAVE_BROKEN_RWLOCK";
CXX="$CXX -Kthread -DUNIXWARE_7";
CXX="$CXX -Kthread -DUNIXWARE_7 -DHAVE_BROKEN_RWLOCK"
fi
AC_MSG_RESULT("yes")
else

View file

@ -80,6 +80,7 @@ static HA_ERRORS ha_errlist[]=
{ 147,"Lock table is full; Restart program with a larger locktable"},
{ 148,"Updates are not allowed under a read only transactions"},
{ 149,"Lock deadlock; Retry transaction"},
{ 150,"Foreign key constraint is incorrectly formed"},
{ -30999, "DB_INCOMPLETE: Sync didn't finish"},
{ -30998, "DB_KEYEMPTY: Key/data deleted or never created"},
{ -30997, "DB_KEYEXIST: The key/data pair already exists"},

View file

@ -107,9 +107,6 @@ enum ha_extra_function {
HA_EXTRA_IGNORE_DUP_KEY, /* Dup keys don't rollback everything*/
HA_EXTRA_NO_IGNORE_DUP_KEY,
HA_EXTRA_DONT_USE_CURSOR_TO_UPDATE, /* Cursor will not be used for update */
HA_EXTRA_BULK_INSERT_BEGIN,
HA_EXTRA_BULK_INSERT_FLUSH, /* Flush one index */
HA_EXTRA_BULK_INSERT_END,
HA_EXTRA_PREPARE_FOR_DELETE,
HA_EXTRA_PREPARE_FOR_UPDATE /* Remove read cache if problems */
};

View file

@ -397,6 +397,10 @@ void mi_disable_non_unique_index(MI_INFO *info, ha_rows rows);
my_bool mi_test_if_sort_rep(MI_INFO *info, ha_rows rows, ulonglong key_map,
my_bool force);
int mi_init_bulk_insert(MI_INFO *info, ulong cache_size, ha_rows rows);
void mi_flush_bulk_insert(MI_INFO *info, uint inx);
void mi_end_bulk_insert(MI_INFO *info);
#ifdef __cplusplus
}
#endif

View file

@ -51,6 +51,7 @@ typedef struct st_mymerge_info /* Struct from h_info */
uint reclength; /* Recordlength */
int errkey; /* Which key was duplicated on error */
uint options; /* HA_OPTION_... used */
ulong *rec_per_key; /* for sql optimizing */
} MYMERGE_INFO;
typedef struct st_myrg_table_info
@ -71,6 +72,7 @@ typedef struct st_myrg_info
my_bool cache_in_use;
LIST open_list;
QUEUE by_key;
ulong *rec_per_key_part; /* for sql optimizing */
} MYRG_INFO;

View file

@ -381,7 +381,6 @@ MYSQL_RES * STDCALL mysql_list_fields(MYSQL *mysql, const char *table,
MYSQL_RES * STDCALL mysql_list_processes(MYSQL *mysql);
MYSQL_RES * STDCALL mysql_store_result(MYSQL *mysql);
MYSQL_RES * STDCALL mysql_use_result(MYSQL *mysql);
MYSQL_RES * STDCALL mysql_warnings(MYSQL *mysql);
int STDCALL mysql_options(MYSQL *mysql,enum mysql_option option,
const char *arg);
void STDCALL mysql_free_result(MYSQL_RES *result);
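With mysql_warnings() removed from the client API, warnings are fetched with a plain query, as the mysql-test change earlier in this commit does; a minimal client-side sketch, assuming an open MYSQL *mysql connection and <stdio.h>/<mysql.h> included:

if (mysql_warning_count(mysql) &&
    !mysql_real_query(mysql, "SHOW WARNINGS", 13))
{
  MYSQL_RES *res= mysql_store_result(mysql);
  MYSQL_ROW row;

  if (res)
  {
    while ((row= mysql_fetch_row(res)))
      fprintf(stderr, "%s (%s): %s\n", row[0], row[1], row[2]); /* Level, Code, Message */
    mysql_free_result(res);
  }
}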

View file

@ -19,6 +19,9 @@ Created 2/17/1996 Heikki Tuuri
#include "btr0btr.h"
#include "ha0ha.h"
ulint btr_search_this_is_zero = 0; /* A dummy variable to fool the
compiler */
ulint btr_search_n_succ = 0;
ulint btr_search_n_hash_fail = 0;
@ -56,16 +59,20 @@ before hash index building is started */
/************************************************************************
Builds a hash index on a page with the given parameters. If the page already
has a hash index with different parameters, the old hash index is removed. */
has a hash index with different parameters, the old hash index is removed.
If index is non-NULL, this function checks if n_fields and n_bytes are
sensible values, and does not build a hash index if not. */
static
void
btr_search_build_page_hash_index(
/*=============================*/
page_t* page, /* in: index page, s- or x-latched */
ulint n_fields, /* in: hash this many full fields */
ulint n_bytes, /* in: hash this many bytes from the next
dict_index_t* index, /* in: index for which to build, or NULL if
not known */
page_t* page, /* in: index page, s- or x-latched */
ulint n_fields,/* in: hash this many full fields */
ulint n_bytes,/* in: hash this many bytes from the next
field */
ulint side); /* in: hash for searches from this side */
ulint side); /* in: hash for searches from this side */
/*********************************************************************
This function should be called before reserving any btr search mutex, if
@ -173,7 +180,9 @@ btr_search_info_create(
}
/*************************************************************************
Updates the search info of an index about hash successes. */
Updates the search info of an index about hash successes. NOTE that info
is NOT protected by any semaphore, to save CPU time! Do not assume its fields
are consistent. */
static
void
btr_search_info_update_hash(
@ -295,7 +304,9 @@ set_new_recomm:
}
/*************************************************************************
Updates the block search info on hash successes. */
Updates the block search info on hash successes. NOTE that info and
block->n_hash_helps, n_fields, n_bytes, side are NOT protected by any
semaphore, to save CPU time! Do not assume the fields are consistent. */
static
ibool
btr_search_update_block_hash_info(
@ -425,12 +436,19 @@ btr_search_info_update_slow(
{
buf_block_t* block;
ibool build_index;
ulint* params;
ulint* params2;
ut_ad(!rw_lock_own(&btr_search_latch, RW_LOCK_SHARED)
&& !rw_lock_own(&btr_search_latch, RW_LOCK_EX));
block = buf_block_align(btr_cur_get_rec(cursor));
/* NOTE that the following two function calls do NOT protect
info or block->n_fields etc. with any semaphore, to save CPU time!
We cannot assume the fields are consistent when we return from
those functions! */
btr_search_info_update_hash(info, cursor);
build_index = btr_search_update_block_hash_info(info, block, cursor);
@ -439,7 +457,7 @@ btr_search_info_update_slow(
btr_search_check_free_space_in_heap();
}
if (cursor->flag == BTR_CUR_HASH_FAIL) {
/* Update the hash node reference, if appropriate */
@ -453,12 +471,30 @@ btr_search_info_update_slow(
}
if (build_index) {
ut_a(block->n_fields + block->n_bytes > 0);
/* Note that since we did not protect block->n_fields etc.
with any semaphore, the values can be inconsistent. We have
to check inside the function call that they make sense. We
also malloc an array and store the values there to make sure
the compiler does not let the function call parameters change
inside the called function. It might be that the compiler
would optimize the call just to pass pointers to block. */
btr_search_build_page_hash_index(block->frame,
block->n_fields,
block->n_bytes,
block->side);
params = mem_alloc(3 * sizeof(ulint));
params[0] = block->n_fields;
params[1] = block->n_bytes;
params[2] = block->side;
/* Make sure the compiler cannot deduce the values and do
optimizations */
params2 = params + btr_search_this_is_zero;
btr_search_build_page_hash_index(cursor->index,
block->frame,
params2[0],
params2[1],
params2[2]);
mem_free(params);
}
}
@ -976,16 +1012,20 @@ btr_search_drop_page_hash_when_freed(
/************************************************************************
Builds a hash index on a page with the given parameters. If the page already
has a hash index with different parameters, the old hash index is removed. */
has a hash index with different parameters, the old hash index is removed.
If index is non-NULL, this function checks if n_fields and n_bytes are
sensible values, and does not build a hash index if not. */
static
void
btr_search_build_page_hash_index(
/*=============================*/
page_t* page, /* in: index page, s- or x-latched */
ulint n_fields, /* in: hash this many full fields */
ulint n_bytes, /* in: hash this many bytes from the next
dict_index_t* index, /* in: index for which to build, or NULL if
not known */
page_t* page, /* in: index page, s- or x-latched */
ulint n_fields,/* in: hash this many full fields */
ulint n_bytes,/* in: hash this many bytes from the next
field */
ulint side) /* in: hash for searches from this side */
ulint side) /* in: hash for searches from this side */
{
hash_table_t* table;
buf_block_t* block;
@ -1028,7 +1068,18 @@ btr_search_build_page_hash_index(
return;
}
ut_a(n_fields + n_bytes > 0);
/* Check that the values for hash index build are sensible */
if (n_fields + n_bytes == 0) {
return;
}
if (index && (dict_index_get_n_unique_in_tree(index) < n_fields
|| (dict_index_get_n_unique_in_tree(index) == n_fields
&& n_bytes > 0))) {
return;
}
/* Calculate and cache fold values and corresponding records into
an array for fast insertion to the hash index */
@ -1186,8 +1237,8 @@ btr_search_move_or_delete_hash_entries(
ut_a(n_fields + n_bytes > 0);
btr_search_build_page_hash_index(new_page, n_fields, n_bytes,
side);
btr_search_build_page_hash_index(NULL, new_page, n_fields,
n_bytes, side);
ut_a(n_fields == block->curr_n_fields);
ut_a(n_bytes == block->curr_n_bytes);
ut_a(side == block->curr_side);

View file

@ -1113,6 +1113,7 @@ dict_index_add_to_cache(
ulint n_ord;
ibool success;
ulint i;
ulint j;
ut_ad(index);
ut_ad(mutex_own(&(dict_sys->mutex)));
@ -1143,6 +1144,28 @@ dict_index_add_to_cache(
return(FALSE);
}
/* Check that the same column does not appear twice in the index.
InnoDB assumes this in its algorithms, e.g., update of an index
entry */
for (i = 0; i < dict_index_get_n_fields(index); i++) {
for (j = 0; j < i; j++) {
if (dict_index_get_nth_field(index, j)->col
== dict_index_get_nth_field(index, i)->col) {
ut_print_timestamp(stderr);
fprintf(stderr,
" InnoDB: Error: column %s appears twice in index %s of table %s\n"
"InnoDB: This is not allowed in InnoDB.\n"
"InnoDB: UPDATE can cause such an index to become corrupt in InnoDB.\n",
dict_index_get_nth_field(index, i)->col->name,
index->name, table->name);
}
}
}
/* Build the cache internal representation of the index,
containing also the added system fields */
@ -2212,6 +2235,9 @@ dict_create_foreign_constraints(
ulint error;
ulint i;
ulint j;
ibool is_on_delete;
ulint n_on_deletes;
ulint n_on_updates;
dict_col_t* columns[500];
char* column_names[500];
ulint column_name_lens[500];
@ -2371,6 +2397,12 @@ col_loop2:
return(DB_CANNOT_ADD_CONSTRAINT);
}
n_on_deletes = 0;
n_on_updates = 0;
scan_on_conditions:
/* Loop here as long as we can find ON ... conditions */
ptr = dict_accept(ptr, "ON", &success);
if (!success) {
@ -2381,23 +2413,58 @@ col_loop2:
ptr = dict_accept(ptr, "DELETE", &success);
if (!success) {
dict_foreign_free(foreign);
ptr = dict_accept(ptr, "UPDATE", &success);
if (!success) {
dict_foreign_free(foreign);
return(DB_CANNOT_ADD_CONSTRAINT);
return(DB_CANNOT_ADD_CONSTRAINT);
}
is_on_delete = FALSE;
n_on_updates++;
} else {
is_on_delete = TRUE;
n_on_deletes++;
}
ptr = dict_accept(ptr, "RESTRICT", &success);
if (success) {
goto try_find_index;
goto scan_on_conditions;
}
ptr = dict_accept(ptr, "CASCADE", &success);
if (success) {
foreign->type = DICT_FOREIGN_ON_DELETE_CASCADE;
if (is_on_delete) {
foreign->type |= DICT_FOREIGN_ON_DELETE_CASCADE;
} else {
foreign->type |= DICT_FOREIGN_ON_UPDATE_CASCADE;
}
goto try_find_index;
goto scan_on_conditions;
}
ptr = dict_accept(ptr, "NO", &success);
if (success) {
ptr = dict_accept(ptr, "ACTION", &success);
if (!success) {
dict_foreign_free(foreign);
return(DB_CANNOT_ADD_CONSTRAINT);
}
if (is_on_delete) {
foreign->type |= DICT_FOREIGN_ON_DELETE_NO_ACTION;
} else {
foreign->type |= DICT_FOREIGN_ON_UPDATE_NO_ACTION;
}
goto scan_on_conditions;
}
ptr = dict_accept(ptr, "SET", &success);
@ -2430,20 +2497,23 @@ col_loop2:
}
}
foreign->type = DICT_FOREIGN_ON_DELETE_SET_NULL;
if (is_on_delete) {
foreign->type |= DICT_FOREIGN_ON_DELETE_SET_NULL;
} else {
foreign->type |= DICT_FOREIGN_ON_UPDATE_SET_NULL;
}
goto scan_on_conditions;
try_find_index:
/* We check that there are no superfluous words like 'ON UPDATE ...'
which we do not support yet. */
ptr = dict_accept(ptr, (char *) "ON", &success);
if (success) {
if (n_on_deletes > 1 || n_on_updates > 1) {
/* It is an error to define more than 1 action */
dict_foreign_free(foreign);
return(DB_CANNOT_ADD_CONSTRAINT);
}
/* Try to find an index which contains the columns as the first fields
and in the right order, and the types are the same as in
foreign->foreign_index */
@ -3265,7 +3335,8 @@ dict_print_info_on_foreign_keys_in_create_format(
/*=============================================*/
char* buf, /* in: auxiliary buffer */
char* str, /* in/out: pointer to a string */
ulint len, /* in: space in str available for info */
ulint len, /* in: str has to be a buffer at least
len + 5000 bytes */
dict_table_t* table) /* in: table */
{
@ -3335,14 +3406,30 @@ dict_print_info_on_foreign_keys_in_create_format(
buf2 += sprintf(buf2, ")");
if (foreign->type == DICT_FOREIGN_ON_DELETE_CASCADE) {
if (foreign->type & DICT_FOREIGN_ON_DELETE_CASCADE) {
buf2 += sprintf(buf2, " ON DELETE CASCADE");
}
if (foreign->type == DICT_FOREIGN_ON_DELETE_SET_NULL) {
if (foreign->type & DICT_FOREIGN_ON_DELETE_SET_NULL) {
buf2 += sprintf(buf2, " ON DELETE SET NULL");
}
if (foreign->type & DICT_FOREIGN_ON_DELETE_NO_ACTION) {
buf2 += sprintf(buf2, " ON DELETE NO ACTION");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_CASCADE) {
buf2 += sprintf(buf2, " ON UPDATE CASCADE");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_SET_NULL) {
buf2 += sprintf(buf2, " ON UPDATE SET NULL");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_NO_ACTION) {
buf2 += sprintf(buf2, " ON UPDATE NO ACTION");
}
foreign = UT_LIST_GET_NEXT(foreign_list, foreign);
}
no_space:
@ -3434,6 +3521,22 @@ dict_print_info_on_foreign_keys(
buf2 += sprintf(buf2, " ON DELETE SET NULL");
}
if (foreign->type & DICT_FOREIGN_ON_DELETE_NO_ACTION) {
buf2 += sprintf(buf2, " ON DELETE NO ACTION");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_CASCADE) {
buf2 += sprintf(buf2, " ON UPDATE CASCADE");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_SET_NULL) {
buf2 += sprintf(buf2, " ON UPDATE SET NULL");
}
if (foreign->type & DICT_FOREIGN_ON_UPDATE_NO_ACTION) {
buf2 += sprintf(buf2, " ON UPDATE NO ACTION");
}
foreign = UT_LIST_GET_NEXT(foreign_list, foreign);
}
no_space:

View file

@ -2479,20 +2479,20 @@ try_again:
n_free = n_free_list_ext + n_free_up;
if (alloc_type == FSP_NORMAL) {
/* We reserve 1 extent + 4 % of the space size to undo logs
and 1 extent + 1 % to cleaning operations; NOTE: this source
/* We reserve 1 extent + 0.5 % of the space size to undo logs
and 1 extent + 0.5 % to cleaning operations; NOTE: this source
code is duplicated in the function below! */
reserve = 2 + ((size / FSP_EXTENT_SIZE) * 5) / 100;
reserve = 2 + ((size / FSP_EXTENT_SIZE) * 2) / 200;
if (n_free <= reserve + n_ext) {
goto try_to_extend;
}
} else if (alloc_type == FSP_UNDO) {
/* We reserve 1 % of the space size to cleaning operations */
/* We reserve 0.5 % of the space size to cleaning operations */
reserve = 1 + ((size / FSP_EXTENT_SIZE) * 1) / 100;
reserve = 1 + ((size / FSP_EXTENT_SIZE) * 1) / 200;
if (n_free <= reserve + n_ext) {
@ -2572,11 +2572,11 @@ fsp_get_available_space_in_free_extents(
n_free = n_free_list_ext + n_free_up;
/* We reserve 1 extent + 4 % of the space size to undo logs
and 1 extent + 1 % to cleaning operations; NOTE: this source
/* We reserve 1 extent + 0.5 % of the space size to undo logs
and 1 extent + 0.5 % to cleaning operations; NOTE: this source
code is duplicated in the function above! */
reserve = 2 + ((size / FSP_EXTENT_SIZE) * 5) / 100;
reserve = 2 + ((size / FSP_EXTENT_SIZE) * 2) / 200;
if (reserve > n_free) {
return(0);
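As a rough check of the new reservation formulas in the two hunks above: for a tablespace with size / FSP_EXTENT_SIZE of, say, 6400 extents, the FSP_NORMAL reserve drops from 2 + (6400 * 5) / 100 = 322 extents (about 5 %) to 2 + (6400 * 2) / 200 = 66 extents (about 1 %, i.e. 0.5 % for undo logs plus 0.5 % for cleaning), and the FSP_UNDO reserve drops from 1 + 6400 / 100 = 65 extents to 1 + 6400 / 200 = 33 extents.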

View file

@ -2657,10 +2657,7 @@ reset_bit:
new_bits, &mtr);
}
}
ibuf_data->n_merges++;
ibuf_data->n_merged_recs += n_inserts;
#ifdef UNIV_IBUF_DEBUG
/* printf("Ibuf merge %lu records volume %lu to page no %lu\n",
n_inserts, volume, page_no); */
@ -2670,6 +2667,14 @@ reset_bit:
mem_heap_free(heap);
/* Protect our statistics keeping from race conditions */
mutex_enter(&ibuf_mutex);
ibuf_data->n_merges++;
ibuf_data->n_merged_recs += n_inserts;
mutex_exit(&ibuf_mutex);
ibuf_exit();
#ifdef UNIV_IBUF_DEBUG
ut_a(ibuf_count_get(space, page_no) == 0);

View file

@ -728,8 +728,8 @@ struct buf_block_struct{
bufferfixed, or (2) the thread has an
x-latch on the block */
/* 5. Hash search fields: NOTE that these fields are protected by
btr_search_mutex */
/* 5. Hash search fields: NOTE that the first 4 fields are NOT
protected by any semaphore! */
ulint n_hash_helps; /* counter which controls building
of a new hash index for the page */
@ -742,6 +742,9 @@ struct buf_block_struct{
whether the leftmost record of several
records with the same prefix should be
indexed in the hash index */
/* The following 4 fields are protected by btr_search_latch: */
ibool is_hashed; /* TRUE if hash index has already been
built on this page; note that it does
not guarantee that the index is

View file

@ -42,6 +42,8 @@ Created 5/24/1996 Heikki Tuuri
#define DB_CANNOT_ADD_CONSTRAINT 38 /* adding a foreign key constraint
to a table failed */
#define DB_CORRUPTION 39 /* data structure corruption noticed */
#define DB_COL_APPEARS_TWICE_IN_INDEX 40 /* InnoDB cannot handle an index
where same column appears twice */
/* The following are partial failure codes */
#define DB_FAIL 1000

View file

@ -280,8 +280,15 @@ struct dict_foreign_struct{
table */
};
/* The flags for ON_UPDATE and ON_DELETE can be ORed; the default is that
a foreign key constraint is enforced, therefore RESTRICT just means no flag */
#define DICT_FOREIGN_ON_DELETE_CASCADE 1
#define DICT_FOREIGN_ON_DELETE_SET_NULL 2
#define DICT_FOREIGN_ON_UPDATE_CASCADE 4
#define DICT_FOREIGN_ON_UPDATE_SET_NULL 8
#define DICT_FOREIGN_ON_DELETE_NO_ACTION 16
#define DICT_FOREIGN_ON_UPDATE_NO_ACTION 32
#define DICT_INDEX_MAGIC_N 76789786
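Because the new DICT_FOREIGN_* constants are distinct bits, one foreign key can now carry both an ON DELETE and an ON UPDATE action, and the checks in dict0dict.c above change from == to &. A minimal sketch, assuming this header is included:

ulint type = DICT_FOREIGN_ON_DELETE_CASCADE
           | DICT_FOREIGN_ON_UPDATE_SET_NULL;

if (type & DICT_FOREIGN_ON_DELETE_CASCADE) {
        /* a parent delete cascades to the child rows */
}
if (type & DICT_FOREIGN_ON_UPDATE_SET_NULL) {
        /* a parent update sets the referencing columns to SQL NULL */
}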

View file

@ -127,16 +127,18 @@ mem_heap_create_func(
ulint line /* in: line where created */
);
/*********************************************************************
NOTE: Use the corresponding macro instead of this function.
Frees the space occupied by a memory heap. */
NOTE: Use the corresponding macro instead of this function. Frees the space
occupied by a memory heap. In the debug version erases the heap memory
blocks. */
UNIV_INLINE
void
mem_heap_free_func(
/*===============*/
mem_heap_t* heap, /* in, own: heap to be freed */
char* file_name, /* in: file name where freed */
ulint line /* in: line where freed */
);
mem_heap_t* heap, /* in, own: heap to be freed */
char* file_name __attribute__((unused)),
/* in: file name where freed */
ulint line __attribute__((unused)));
/* in: line where freed */
/*******************************************************************
Allocates n bytes of memory from a memory heap. */
UNIV_INLINE

View file

@ -440,9 +440,10 @@ void
mem_heap_free_func(
/*===============*/
mem_heap_t* heap, /* in, own: heap to be freed */
char* file_name, /* in: file name where freed */
ulint line /* in: line where freed */
)
char* file_name __attribute__((unused)),
/* in: file name where freed */
ulint line __attribute__((unused)))
/* in: line where freed */
{
mem_block_t* block;
mem_block_t* prev_block;

View file

@ -492,7 +492,11 @@ struct row_prebuilt_struct {
fetch many rows from the same cursor:
it saves CPU time to fetch them in a
batch; we reserve mysql_row_len
bytes for each such row */
bytes for each such row; these
pointers point 4 bytes past the
allocated mem buf start, because
there is a 4 byte magic number at the
start and at the end */
ulint fetch_cache_first;/* position of the first not yet
fetched row in fetch_cache */
ulint n_fetch_cached; /* number of not yet fetched rows
@ -501,8 +505,12 @@ struct row_prebuilt_struct {
to this heap */
mem_heap_t* old_vers_heap; /* memory heap where a previous
version is built in consistent read */
ulint magic_n2; /* this should be the same as
magic_n */
};
#define ROW_PREBUILT_FETCH_MAGIC_N 465765687
#define ROW_MYSQL_WHOLE_ROW 0
#define ROW_MYSQL_REC_FIELDS 1
#define ROW_MYSQL_NO_TEMPLATE 2
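A standalone C sketch of the guard scheme the comment above describes: the row buffer is over-allocated by 8 bytes, the magic number is written before and after the row, and callers only ever see the pointer 4 bytes into the allocation. The helper below is illustrative; the real work is done with mem_alloc()/mach_write_to_4() in the row0sel.c and row0mysql.c hunks later in this commit.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FETCH_MAGIC 465765687U   /* value of ROW_PREBUILT_FETCH_MAGIC_N */

static unsigned char *alloc_guarded_row(size_t row_len)
{
  uint32_t       magic= FETCH_MAGIC;
  unsigned char *buf  = malloc(row_len + 8);

  if (buf == NULL)
    return NULL;
  memcpy(buf, &magic, 4);                /* guard before the row     */
  memcpy(buf + 4 + row_len, &magic, 4);  /* guard after the row      */
  return buf + 4;                        /* caller sees only the row */
}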

View file

@ -312,8 +312,11 @@ struct upd_node_struct{
ibool in_mysql_interface;
/* TRUE if the update node was created
for the MySQL interface */
dict_foreign_t* foreign;/* NULL or pointer to a foreign key
constraint if this update node is used in
doing an ON DELETE or ON UPDATE operation */
upd_node_t* cascade_node;/* NULL or an update node template which
is used to implement ON DELETE CASCADE
is used to implement ON DELETE/UPDATE CASCADE
or ... SET NULL for foreign keys */
mem_heap_t* cascade_heap;/* NULL or a mem heap where the cascade
node is created */

View file

@ -9,7 +9,8 @@ Created 1/20/1994 Heikki Tuuri
#ifndef univ_i
#define univ_i
#if (defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)) && !defined(MYSQL_SERVER)
#if (defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)) && !defined(MYSQL_SERVER) && !defined(__WIN__)
#undef __WIN__
#define __WIN__
#include <windows.h>

View file

@ -15,6 +15,7 @@ Created 5/12/1997 Heikki Tuuri
#include "ut0mem.h"
#include "ut0lst.h"
#include "ut0byte.h"
#include "mem0mem.h"
/* We would like to use also the buffer frames to allocate memory. This
would be desirable, because then the memory consumption of the database
@ -251,7 +252,6 @@ mem_pool_fill_free_list(
mem_area_t* area;
mem_area_t* area2;
ibool ret;
char err_buf[500];
ut_ad(mutex_own(&(pool->mutex)));
@ -300,11 +300,8 @@ mem_pool_fill_free_list(
}
if (UT_LIST_GET_LEN(pool->free_list[i + 1]) == 0) {
ut_sprintf_buf(err_buf, ((byte*)area) - 50, 100);
fprintf(stderr,
"InnoDB: Error: Removing element from mem pool free list %lu\n"
"InnoDB: though the list length is 0! Dump of 100 bytes around element:\n%s\n",
i + 1, err_buf);
mem_analyze_corruption((byte*)area);
ut_a(0);
}
@ -340,7 +337,6 @@ mem_area_alloc(
mem_area_t* area;
ulint n;
ibool ret;
char err_buf[500];
n = ut_2_log(ut_max(size + MEM_AREA_EXTRA_SIZE, MEM_AREA_MIN_SIZE));
@ -364,20 +360,22 @@ mem_area_alloc(
}
if (!mem_area_get_free(area)) {
ut_sprintf_buf(err_buf, ((byte*)area) - 50, 100);
fprintf(stderr,
"InnoDB: Error: Removing element from mem pool free list %lu though the\n"
"InnoDB: element is not marked free! Dump of 100 bytes around element:\n%s\n",
n, err_buf);
"InnoDB: element is not marked free!\n",
n);
mem_analyze_corruption((byte*)area);
ut_a(0);
}
if (UT_LIST_GET_LEN(pool->free_list[n]) == 0) {
ut_sprintf_buf(err_buf, ((byte*)area) - 50, 100);
fprintf(stderr,
"InnoDB: Error: Removing element from mem pool free list %lu\n"
"InnoDB: though the list length is 0! Dump of 100 bytes around element:\n%s\n",
n, err_buf);
"InnoDB: though the list length is 0!\n",
n);
mem_analyze_corruption((byte*)area);
ut_a(0);
}
@ -451,7 +449,6 @@ mem_area_free(
void* new_ptr;
ulint size;
ulint n;
char err_buf[500];
if (mem_out_of_mem_err_msg_count > 0) {
/* It may be that the area was really allocated from the
@ -468,18 +465,25 @@ mem_area_free(
area = (mem_area_t*) (((byte*)ptr) - MEM_AREA_EXTRA_SIZE);
if (mem_area_get_free(area)) {
ut_sprintf_buf(err_buf, ((byte*)area) - 50, 100);
if (mem_area_get_free(area)) {
fprintf(stderr,
"InnoDB: Error: Freeing element to mem pool free list though the\n"
"InnoDB: element is marked free! Dump of 100 bytes around element:\n%s\n",
err_buf);
"InnoDB: element is marked free!\n");
mem_analyze_corruption((byte*)area);
ut_a(0);
}
size = mem_area_get_size(area);
ut_ad(size != 0);
if (size == 0) {
fprintf(stderr,
"InnoDB: Error: Mem area size is 0. Possibly a memory overrun of the\n"
"InnoDB: previous allocated area!\n");
mem_analyze_corruption((byte*)area);
ut_a(0);
}
#ifdef UNIV_LIGHT_MEM_DEBUG
if (((byte*)area) + size < pool->buf + pool->size) {
@ -488,7 +492,15 @@ mem_area_free(
next_size = mem_area_get_size(
(mem_area_t*)(((byte*)area) + size));
ut_a(ut_2_power_up(next_size) == next_size);
if (ut_2_power_up(next_size) != next_size) {
fprintf(stderr,
"InnoDB: Error: Memory area size %lu, next area size %lu not a power of 2!\n"
"InnoDB: Possibly a memory overrun of the buffer being freed here.\n",
size, next_size);
mem_analyze_corruption((byte*)area);
ut_a(0);
}
}
#endif
buddy = mem_area_get_buddy(area, size, pool);

View file

@ -322,13 +322,129 @@ row_ins_clust_index_entry_by_modify(
}
/*************************************************************************
Either deletes or sets the referencing columns SQL NULL in a child row.
Used in ON DELETE ... clause for foreign keys when a parent row is
deleted. */
Returns TRUE if in a cascaded update/delete an ancestor node of node
updates table. */
static
ibool
row_ins_cascade_ancestor_updates_table(
/*===================================*/
/* out: TRUE if an ancestor updates table */
que_node_t* node, /* in: node in a query graph */
dict_table_t* table) /* in: table */
{
que_node_t* parent;
upd_node_t* upd_node;
parent = que_node_get_parent(node);
while (que_node_get_type(parent) == QUE_NODE_UPDATE) {
upd_node = parent;
if (upd_node->table == table) {
return(TRUE);
}
parent = que_node_get_parent(parent);
ut_a(parent);
}
return(FALSE);
}
/**********************************************************************
Calculates the update vector node->cascade->update for a child table in
a cascaded update. */
static
ulint
row_ins_foreign_delete_or_set_null(
/*===============================*/
row_ins_cascade_calc_update_vec(
/*============================*/
/* out: number of fields in the
calculated update vector; the value
can also be 0 if no foreign key
fields changed */
upd_node_t* node, /* in: update node of the parent
table */
dict_foreign_t* foreign) /* in: foreign key constraint whose
type is != 0 */
{
upd_node_t* cascade = node->cascade_node;
dict_table_t* table = foreign->foreign_table;
dict_index_t* index = foreign->foreign_index;
upd_t* update;
upd_field_t* ufield;
dict_table_t* parent_table;
dict_index_t* parent_index;
upd_t* parent_update;
upd_field_t* parent_ufield;
ulint n_fields_updated;
ulint parent_field_no;
ulint i;
ulint j;
ut_a(node && foreign && cascade && table && index);
/* Calculate the appropriate update vector which will set the fields
in the child index record to the same value as the referenced index
record will get in the update. */
parent_table = node->table;
ut_a(parent_table == foreign->referenced_table);
parent_index = foreign->referenced_index;
parent_update = node->update;
update = cascade->update;
update->info_bits = 0;
update->n_fields = foreign->n_fields;
n_fields_updated = 0;
for (i = 0; i < foreign->n_fields; i++) {
parent_field_no = dict_table_get_nth_col_pos(
parent_table,
dict_index_get_nth_col_no(
parent_index, i));
for (j = 0; j < parent_update->n_fields; j++) {
parent_ufield = parent_update->fields + j;
if (parent_ufield->field_no == parent_field_no) {
/* A field in the parent index record is
updated. Let us make the update vector
field for the child table. */
ufield = update->fields + n_fields_updated;
ufield->field_no =
dict_table_get_nth_col_pos(table,
dict_index_get_nth_col_no(index, i));
ufield->exp = NULL;
ufield->new_val = parent_ufield->new_val;
ufield->extern_storage = FALSE;
n_fields_updated++;
}
}
}
update->n_fields = n_fields_updated;
return(n_fields_updated);
}
/*************************************************************************
Perform referential actions or checks when a parent row is deleted or updated
and the constraint had an ON DELETE or ON UPDATE condition which was not
RESTRICT. */
static
ulint
row_ins_foreign_check_on_constraint(
/*================================*/
/* out: DB_SUCCESS, DB_LOCK_WAIT,
or error code */
que_thr_t* thr, /* in: query thread whose run_node
@ -378,15 +494,34 @@ row_ins_foreign_delete_or_set_null(
ut_strlen(table->name) + 1);
node = thr->run_node;
ut_a(que_node_get_type(node) == QUE_NODE_UPDATE);
if (node->is_delete && 0 == (foreign->type &
(DICT_FOREIGN_ON_DELETE_CASCADE
| DICT_FOREIGN_ON_DELETE_SET_NULL))) {
if (!node->is_delete) {
/* According to SQL-92 an UPDATE with respect to FOREIGN
KEY constraints is not semantically equivalent to a
DELETE + INSERT. Therefore we do not perform any action
here and consequently the child rows would be left
orphaned if we would let the UPDATE happen. Thus we return
an error. */
/* No action is defined: return a foreign key error if
NO ACTION is not specified */
if (foreign->type & DICT_FOREIGN_ON_DELETE_NO_ACTION) {
return(DB_SUCCESS);
}
return(DB_ROW_IS_REFERENCED);
}
if (!node->is_delete && 0 == (foreign->type &
(DICT_FOREIGN_ON_UPDATE_CASCADE
| DICT_FOREIGN_ON_UPDATE_SET_NULL))) {
/* This is an UPDATE */
/* No action is defined: return a foreign key error if
NO ACTION is not specified */
if (foreign->type & DICT_FOREIGN_ON_UPDATE_NO_ACTION) {
return(DB_SUCCESS);
}
return(DB_ROW_IS_REFERENCED);
}
@ -411,7 +546,10 @@ row_ins_foreign_delete_or_set_null(
cascade->table = table;
if (foreign->type == DICT_FOREIGN_ON_DELETE_CASCADE ) {
cascade->foreign = foreign;
if (node->is_delete
&& (foreign->type & DICT_FOREIGN_ON_DELETE_CASCADE)) {
cascade->is_delete = TRUE;
} else {
cascade->is_delete = FALSE;
@ -425,8 +563,30 @@ row_ins_foreign_delete_or_set_null(
}
}
/* We do not allow cyclic cascaded updating of the same
table. Check that we are not updating the same table which
is already being modified in this cascade chain. We have to
check this because the modification of the indexes of a
'parent' table may still be incomplete, and we must avoid
seeing the indexes of the parent table in an inconsistent
state! In this way we also prevent possible infinite
update loops caused by cyclic cascaded updates. */
if (!cascade->is_delete
&& row_ins_cascade_ancestor_updates_table(cascade, table)) {
/* We do not know if this would break foreign key
constraints, but play safe and return an error */
err = DB_ROW_IS_REFERENCED;
goto nonstandard_exit_func;
}
index = btr_pcur_get_btr_cur(pcur)->index;
ut_a(index == foreign->foreign_index);
rec = btr_pcur_get_rec(pcur);
if (index->type & DICT_CLUSTERED) {
@ -520,7 +680,11 @@ row_ins_foreign_delete_or_set_null(
goto nonstandard_exit_func;
}
if (foreign->type == DICT_FOREIGN_ON_DELETE_SET_NULL) {
if ((node->is_delete
&& (foreign->type & DICT_FOREIGN_ON_DELETE_SET_NULL))
|| (!node->is_delete
&& (foreign->type & DICT_FOREIGN_ON_UPDATE_SET_NULL))) {
/* Build the appropriate update vector which sets
foreign->n_fields first fields in rec to SQL NULL */
@ -540,6 +704,26 @@ row_ins_foreign_delete_or_set_null(
}
}
if (!node->is_delete
&& (foreign->type & DICT_FOREIGN_ON_UPDATE_CASCADE)) {
/* Build the appropriate update vector which sets changing
foreign->n_fields first fields in rec to new values */
row_ins_cascade_calc_update_vec(node, foreign);
if (cascade->update->n_fields == 0) {
/* The update does not change any columns referred
to in this foreign key constraint: no need to do
anything */
err = DB_SUCCESS;
goto nonstandard_exit_func;
}
}
/* Store pcur position and initialize or store the cascade node
pcur stored position */
@ -629,6 +813,7 @@ row_ins_check_foreign_constraint(
dtuple_t* entry, /* in: index entry for index */
que_thr_t* thr) /* in: query thread */
{
upd_node_t* upd_node;
dict_table_t* check_table;
dict_index_t* check_index;
ulint n_fields_cmp;
@ -665,6 +850,30 @@ run_again:
}
}
if (que_node_get_type(thr->run_node) == QUE_NODE_UPDATE) {
upd_node = thr->run_node;
if (!(upd_node->is_delete) && upd_node->foreign == foreign) {
/* If a cascaded update is done as defined by a
foreign key constraint, do not check that
constraint for the child row. In ON UPDATE CASCADE
the update of the parent row is only half done when
we come here: if we would check the constraint here
for the child row it would fail.
A QUESTION remains: if in the child table there are
several constraints which refer to the same parent
table, we should merge all updates to the child as
one update? And the updates can be contradictory!
Currently we just perform the update associated
with each foreign key constraint, one after
another, and the user has problems predicting in
which order they are performed. */
return(DB_SUCCESS);
}
}
if (check_ref) {
check_table = foreign->referenced_table;
check_index = foreign->referenced_index;
@ -774,8 +983,12 @@ run_again:
break;
} else if (foreign->type != 0) {
/* There is an ON UPDATE or ON DELETE
condition: check them in a separate
function */
err =
row_ins_foreign_delete_or_set_null(
row_ins_foreign_check_on_constraint(
thr, foreign, &pcur, &mtr);
if (err != DB_SUCCESS) {

View file

@ -313,6 +313,7 @@ row_create_prebuilt(
prebuilt = mem_heap_alloc(heap, sizeof(row_prebuilt_t));
prebuilt->magic_n = ROW_PREBUILT_ALLOCATED;
prebuilt->magic_n2 = ROW_PREBUILT_ALLOCATED;
prebuilt->table = table;
@ -378,11 +379,12 @@ row_prebuilt_free(
{
ulint i;
if (prebuilt->magic_n != ROW_PREBUILT_ALLOCATED) {
if (prebuilt->magic_n != ROW_PREBUILT_ALLOCATED
|| prebuilt->magic_n2 != ROW_PREBUILT_ALLOCATED) {
fprintf(stderr,
"InnoDB: Error: trying to free a corrupt\n"
"InnoDB: table handle. Magic n %lu, table name %s\n",
prebuilt->magic_n, prebuilt->table->name);
"InnoDB: Error: trying to free a corrupt\n"
"InnoDB: table handle. Magic n %lu, magic n2 %lu, table name %s\n",
prebuilt->magic_n, prebuilt->magic_n2, prebuilt->table->name);
mem_analyze_corruption((byte*)prebuilt);
@ -390,6 +392,7 @@ row_prebuilt_free(
}
prebuilt->magic_n = ROW_PREBUILT_FREED;
prebuilt->magic_n2 = ROW_PREBUILT_FREED;
btr_pcur_free_for_mysql(prebuilt->pcur);
btr_pcur_free_for_mysql(prebuilt->clust_pcur);
@ -420,7 +423,23 @@ row_prebuilt_free(
for (i = 0; i < MYSQL_FETCH_CACHE_SIZE; i++) {
if (prebuilt->fetch_cache[i] != NULL) {
mem_free(prebuilt->fetch_cache[i]);
if ((ROW_PREBUILT_FETCH_MAGIC_N !=
mach_read_from_4((prebuilt->fetch_cache[i]) - 4))
|| (ROW_PREBUILT_FETCH_MAGIC_N !=
mach_read_from_4((prebuilt->fetch_cache[i])
+ prebuilt->mysql_row_len))) {
fprintf(stderr,
"InnoDB: Error: trying to free a corrupt\n"
"InnoDB: fetch buffer.\n");
mem_analyze_corruption(
prebuilt->fetch_cache[i]);
ut_a(0);
}
mem_free((prebuilt->fetch_cache[i]) - 4);
}
}
@ -1435,7 +1454,7 @@ int
row_create_index_for_mysql(
/*=======================*/
/* out: error number or DB_SUCCESS */
dict_index_t* index, /* in: index defintion */
dict_index_t* index, /* in: index definition */
trx_t* trx) /* in: transaction handle */
{
ind_node_t* node;
@ -1444,7 +1463,9 @@ row_create_index_for_mysql(
ulint namelen;
ulint keywordlen;
ulint err;
ulint i;
ulint j;
ut_ad(rw_lock_own(&dict_operation_lock, RW_LOCK_EX));
ut_ad(mutex_own(&(dict_sys->mutex)));
ut_ad(trx->mysql_thread_id == os_thread_get_curr_id());
@ -1465,6 +1486,31 @@ row_create_index_for_mysql(
return(DB_SUCCESS);
}
/* Check that the same column does not appear twice in the index.
InnoDB assumes this in its algorithms, e.g., update of an index
entry */
for (i = 0; i < dict_index_get_n_fields(index); i++) {
for (j = 0; j < i; j++) {
if (0 == ut_strcmp(
dict_index_get_nth_field(index, j)->name,
dict_index_get_nth_field(index, i)->name)) {
ut_print_timestamp(stderr);
fprintf(stderr,
" InnoDB: Error: column %s appears twice in index %s of table %s\n"
"InnoDB: This is not allowed in InnoDB.\n",
dict_index_get_nth_field(index, i)->name,
index->name, index->table_name);
err = DB_COL_APPEARS_TWICE_IN_INDEX;
goto error_handling;
}
}
}
heap = mem_heap_create(512);
trx->dict_operation = TRUE;
@ -1477,11 +1523,13 @@ row_create_index_for_mysql(
SESS_COMM_EXECUTE, 0));
que_run_threads(thr);
err = trx->error_state;
err = trx->error_state;
que_graph_free((que_t*) que_node_get_parent(thr));
error_handling:
if (err != DB_SUCCESS) {
/* We have special error handling here */
ut_a(err == DB_OUT_OF_FILE_SPACE);
trx->error_state = DB_SUCCESS;
@ -1491,8 +1539,6 @@ row_create_index_for_mysql(
trx->error_state = DB_SUCCESS;
}
que_graph_free((que_t*) que_node_get_parent(thr));
trx->op_info = (char *) "";

View file

@ -2415,6 +2415,7 @@ row_sel_push_cache_row_for_mysql(
row_prebuilt_t* prebuilt, /* in: prebuilt struct */
rec_t* rec) /* in: record to push */
{
byte* buf;
ulint i;
ut_ad(prebuilt->n_fetch_cached < MYSQL_FETCH_CACHE_SIZE);
@ -2424,8 +2425,18 @@ row_sel_push_cache_row_for_mysql(
/* Allocate memory for the fetch cache */
for (i = 0; i < MYSQL_FETCH_CACHE_SIZE; i++) {
prebuilt->fetch_cache[i] = mem_alloc(
prebuilt->mysql_row_len);
/* A user has reported memory corruption in these
buffers in Linux. Put magic numbers there to help
to track a possible bug. */
buf = mem_alloc(prebuilt->mysql_row_len + 8);
prebuilt->fetch_cache[i] = buf + 4;
mach_write_to_4(buf, ROW_PREBUILT_FETCH_MAGIC_N);
mach_write_to_4(buf + 4 + prebuilt->mysql_row_len,
ROW_PREBUILT_FETCH_MAGIC_N);
}
}
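The allocation above reserves four extra bytes on each side of every cached row and stamps ROW_PREBUILT_FETCH_MAGIC_N into them; the free path shown earlier then verifies both stamps before releasing the memory. A minimal standalone sketch of the same guard-band idea, using plain malloc/memcpy in place of the InnoDB mem_alloc/mach_write helpers (the magic value below is an arbitrary stand-in):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FETCH_MAGIC 0x465245E8u   /* arbitrary guard value for this sketch */

/* Allocate a row buffer of row_len bytes with a 4-byte guard on each side. */
static unsigned char *guarded_alloc(size_t row_len)
{
    unsigned char *buf = malloc(row_len + 8);
    uint32_t magic = FETCH_MAGIC;

    if (buf == NULL)
        return NULL;
    memcpy(buf, &magic, 4);                 /* leading guard  */
    memcpy(buf + 4 + row_len, &magic, 4);   /* trailing guard */
    return buf + 4;                         /* caller sees only the row */
}

/* Check both guards and free; abort loudly if either was overwritten. */
static void guarded_free(unsigned char *row, size_t row_len)
{
    uint32_t lead, trail;

    memcpy(&lead, row - 4, 4);
    memcpy(&trail, row + row_len, 4);
    assert(lead == FETCH_MAGIC && trail == FETCH_MAGIC);
    free(row - 4);
}

int main(void)
{
    unsigned char *row = guarded_alloc(64);

    if (row == NULL)
        return 1;
    memset(row, 0xAB, 64);   /* write only inside the row: guards stay intact */
    guarded_free(row, 64);
    return 0;
}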
@ -2437,7 +2448,7 @@ row_sel_push_cache_row_for_mysql(
prebuilt->n_fetch_cached++;
}
/*************************************************************************
Tries to do a shortcut to fetch a clustered index record with a unique key,
using the hash index if possible (not always). We assume that the search

View file

@ -71,6 +71,20 @@ the x-latch freed? The most efficient way for performing a
searched delete is obviously to keep the x-latch for several
steps of query graph execution. */
/***************************************************************
Checks if an update vector changes some of the first fields of an index
record. */
static
ibool
row_upd_changes_first_fields(
/*=========================*/
/* out: TRUE if changes */
dtuple_t* entry, /* in: old value of index entry */
dict_index_t* index, /* in: index of entry */
upd_t* update, /* in: update vector for the row */
ulint n); /* in: how many first fields to check */
/*************************************************************************
Checks if index currently is mentioned as a referenced index in a foreign
key constraint. */
@ -132,6 +146,7 @@ ulint
row_upd_check_references_constraints(
/*=================================*/
/* out: DB_SUCCESS or an error code */
upd_node_t* node, /* in: row update node */
btr_pcur_t* pcur, /* in: cursor positioned on a record; NOTE: the
cursor position is lost in this function! */
dict_table_t* table, /* in: table in question */
@ -173,7 +188,16 @@ row_upd_check_references_constraints(
foreign = UT_LIST_GET_FIRST(table->referenced_list);
while (foreign) {
if (foreign->referenced_index == index) {
/* Note that we may have an update which updates the index
record, but does NOT update the first fields which are
referenced in a foreign key constraint. Then the update does
NOT break the constraint. */
if (foreign->referenced_index == index
&& (node->is_delete
|| row_upd_changes_first_fields(entry, index,
node->update, foreign->n_fields))) {
if (foreign->foreign_table == NULL) {
dict_table_get(foreign->foreign_table_name,
trx);
@ -189,10 +213,9 @@ row_upd_check_references_constraints(
}
/* NOTE that if the thread ends up waiting for a lock
we will release dict_operation_lock
temporarily! But the counter on the table
protects 'foreign' from being dropped while the check
is running. */
we will release dict_operation_lock temporarily!
But the counter on the table protects 'foreign' from
being dropped while the check is running. */
err = row_ins_check_foreign_constraint(FALSE, foreign,
table, index, entry, thr);
@ -255,6 +278,7 @@ upd_node_create(
node->index = NULL;
node->update = NULL;
node->foreign = NULL;
node->cascade_heap = NULL;
node->cascade_node = NULL;
@ -953,6 +977,53 @@ row_upd_changes_some_index_ord_field_binary(
return(FALSE);
}
/***************************************************************
Checks if an update vector changes some of the first fields of an index
record. */
static
ibool
row_upd_changes_first_fields(
/*=========================*/
/* out: TRUE if changes */
dtuple_t* entry, /* in: index entry */
dict_index_t* index, /* in: index of entry */
upd_t* update, /* in: update vector for the row */
ulint n) /* in: how many first fields to check */
{
upd_field_t* upd_field;
dict_field_t* ind_field;
dict_col_t* col;
ulint n_upd_fields;
ulint col_pos;
ulint i, j;
ut_a(update && index);
ut_a(n <= dict_index_get_n_fields(index));
n_upd_fields = upd_get_n_fields(update);
for (i = 0; i < n; i++) {
ind_field = dict_index_get_nth_field(index, i);
col = dict_field_get_col(ind_field);
col_pos = dict_col_get_clust_pos(col);
for (j = 0; j < n_upd_fields; j++) {
upd_field = upd_get_nth_field(update, j);
if (col_pos == upd_field->field_no
&& cmp_dfield_dfield(
dtuple_get_nth_field(entry, i),
&(upd_field->new_val))) {
return(TRUE);
}
}
}
return(FALSE);
}
/*************************************************************************
Copies the column values from a record. */
UNIV_INLINE
@ -1106,9 +1177,11 @@ row_upd_sec_index_entry(
err = btr_cur_del_mark_set_sec_rec(0, btr_cur, TRUE,
thr, &mtr);
if (err == DB_SUCCESS && check_ref) {
/* NOTE that the following call loses
the position of pcur ! */
err = row_upd_check_references_constraints(
node,
&pcur, index->table,
index, thr, &mtr);
if (err != DB_SUCCESS) {
@ -1224,7 +1297,7 @@ row_upd_clust_rec_by_insert(
if (check_ref) {
/* NOTE that the following call loses
the position of pcur ! */
err = row_upd_check_references_constraints(
err = row_upd_check_references_constraints(node,
pcur, table,
index, thr, mtr);
if (err != DB_SUCCESS) {
@ -1392,7 +1465,8 @@ row_upd_del_mark_clust_rec(
if (err == DB_SUCCESS && check_ref) {
/* NOTE that the following call loses the position of pcur ! */
err = row_upd_check_references_constraints(pcur, index->table,
err = row_upd_check_references_constraints(node,
pcur, index->table,
index, thr, mtr);
if (err != DB_SUCCESS) {
mtr_commit(mtr);

View file

@ -529,6 +529,9 @@ open_or_create_log_file(
new database */
ibool* log_file_created, /* out: TRUE if new log file
created */
ibool log_file_has_been_opened,/* in: TRUE if a log file has been
opened before: then it is an error
to try to create another log file */
ulint k, /* in: log group number */
ulint i) /* in: log file number in group */
{
@ -581,12 +584,17 @@ open_or_create_log_file(
}
} else {
*log_file_created = TRUE;
ut_print_timestamp(stderr);
fprintf(stderr,
" InnoDB: Log file %s did not exist: new to be created\n",
name);
if (log_file_has_been_opened) {
return(DB_ERROR);
}
fprintf(stderr, "InnoDB: Setting log file %s size to %lu MB\n",
name, srv_log_file_size
>> (20 - UNIV_PAGE_SIZE_SHIFT));
@ -1160,7 +1168,8 @@ innobase_start_or_create_for_mysql(void)
for (i = 0; i < srv_n_log_files; i++) {
err = open_or_create_log_file(create_new_db,
&log_file_created, k, i);
&log_file_created,
log_opened, k, i);
if (err != DB_SUCCESS) {
return((int) err);

View file

@ -262,7 +262,7 @@ ut_print_buf(
data = buf;
for (i = 0; i < len; i++) {
if (isprint((char)(*data))) {
if (isprint((int)(*data))) {
printf("%c", (char)*data);
}
data++;
@ -302,7 +302,7 @@ ut_sprintf_buf(
data = buf;
for (i = 0; i < len; i++) {
if (isprint((char)(*data))) {
if (isprint((int)(*data))) {
n += sprintf(str + n, "%c", (char)*data);
} else {
n += sprintf(str + n, ".");

View file

@ -1269,7 +1269,7 @@ static MYSQL_DATA *read_rows(MYSQL *mysql,MYSQL_FIELD *mysql_fields,
else
{
cur->data[field] = to;
if (to+len > end_to)
if (len > (ulong) (end_to - to))
{
free_rows(result);
net->last_errno=CR_MALFORMED_PACKET;
@ -1315,7 +1315,7 @@ read_one_row(MYSQL *mysql,uint fields,MYSQL_ROW row, ulong *lengths)
{
uint field;
ulong pkt_len,len;
uchar *pos,*prev_pos;
uchar *pos,*prev_pos, *end_pos;
if ((pkt_len=net_safe_read(mysql)) == packet_error)
return -1;
@ -1327,6 +1327,7 @@ read_one_row(MYSQL *mysql,uint fields,MYSQL_ROW row, ulong *lengths)
}
prev_pos= 0; /* allowed to write at packet[-1] */
pos=mysql->net.read_pos;
end_pos=pos+pkt_len;
for (field=0 ; field < fields ; field++)
{
if ((len=(ulong) net_field_length(&pos)) == NULL_LENGTH)
@ -1336,6 +1337,12 @@ read_one_row(MYSQL *mysql,uint fields,MYSQL_ROW row, ulong *lengths)
}
else
{
if (len > (ulong) (end_pos - pos))
{
mysql->net.last_errno=CR_UNKNOWN_ERROR;
strmov(mysql->net.last_error,ER(mysql->net.last_errno));
return -1;
}
row[field] = (char*) pos;
pos+=len;
*lengths++=len;
@ -3508,18 +3515,6 @@ uint STDCALL mysql_thread_safe(void)
#endif
}
MYSQL_RES *STDCALL mysql_warnings(MYSQL *mysql)
{
uint warning_count;
DBUG_ENTER("mysql_warnings");
/* Save warning count as mysql_real_query may change this */
warning_count= mysql->warning_count;
if (mysql_real_query(mysql, "SHOW WARNINGS", 13))
DBUG_RETURN(0);
mysql->warning_count= warning_count;
DBUG_RETURN(mysql_store_result(mysql));
}
/****************************************************************************
Some support functions
****************************************************************************/

View file

@ -368,8 +368,8 @@ int chk_key(MI_CHECK *param, register MI_INFO *info)
{
/* Remember old statistics for key */
memcpy((char*) rec_per_key_part,
(char*) share->state.rec_per_key_part+
(uint) (rec_per_key_part - param->rec_per_key_part),
(char*) (share->state.rec_per_key_part +
(uint) (rec_per_key_part - param->rec_per_key_part)),
keyinfo->keysegs*sizeof(*rec_per_key_part));
continue;
}
@ -1912,8 +1912,8 @@ int mi_repair_by_sort(MI_CHECK *param, register MI_INFO *info,
{
/* Remember old statistics for key */
memcpy((char*) rec_per_key_part,
(char*) share->state.rec_per_key_part+
(uint) (rec_per_key_part - param->rec_per_key_part),
(char*) (share->state.rec_per_key_part +
(uint) (rec_per_key_part - param->rec_per_key_part)),
sort_param.keyinfo->keysegs*sizeof(*rec_per_key_part));
continue;
}
@ -2272,8 +2272,8 @@ int mi_repair_parallel(MI_CHECK *param, register MI_INFO *info,
{
/* Remember old statistics for key */
memcpy((char*) rec_per_key_part,
(char*) share->state.rec_per_key_part+
(uint) (rec_per_key_part - param->rec_per_key_part),
(char*) (share->state.rec_per_key_part+
(uint) (rec_per_key_part - param->rec_per_key_part)),
sort_param[i].keyinfo->keysegs*sizeof(*rec_per_key_part));
i--;
continue;
@ -2435,7 +2435,7 @@ int mi_repair_parallel(MI_CHECK *param, register MI_INFO *info,
got_error=0;
if (&share->state.state != info->state)
memcpy( &share->state.state, info->state, sizeof(*info->state));
memcpy(&share->state.state, info->state, sizeof(*info->state));
err:
got_error|= flush_blocks(param,share->kfile);

View file

@ -358,33 +358,6 @@ int mi_extra(MI_INFO *info, enum ha_extra_function function, void *extra_arg)
case HA_EXTRA_QUICK:
info->quick_mode=1;
break;
case HA_EXTRA_BULK_INSERT_BEGIN:
error=_mi_init_bulk_insert(info, (extra_arg ? *(ulong*) extra_arg :
myisam_bulk_insert_tree_size));
break;
case HA_EXTRA_BULK_INSERT_FLUSH:
if (info->bulk_insert)
{
uint index_to_flush= *(uint*) extra_arg;
if (is_tree_inited(&info->bulk_insert[index_to_flush]))
reset_tree(&info->bulk_insert[index_to_flush]);
}
break;
case HA_EXTRA_BULK_INSERT_END:
if (info->bulk_insert)
{
uint i;
for (i=0 ; i < share->base.keys ; i++)
{
if (is_tree_inited(& info->bulk_insert[i]))
{
delete_tree(& info->bulk_insert[i]);
}
}
my_free((void *)info->bulk_insert, MYF(0));
info->bulk_insert=0;
}
break;
case HA_EXTRA_NO_ROWS:
if (!share->state.header.uniques)
info->opt_flag|= OPT_NO_ROWS;

View file

@ -804,26 +804,27 @@ static int keys_free(uchar *key, TREE_FREE mode, bulk_insert_param *param)
}
int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size)
int mi_init_bulk_insert(MI_INFO *info, ulong cache_size, ha_rows rows)
{
MYISAM_SHARE *share=info->s;
MI_KEYDEF *key=share->keyinfo;
bulk_insert_param *params;
uint i, num_keys;
uint i, num_keys, total_keylength;
ulonglong key_map=0;
DBUG_ENTER("_mi_init_bulk_insert");
DBUG_PRINT("enter",("cache_size: %lu", cache_size));
if (info->bulk_insert)
if (info->bulk_insert || (rows && rows < MI_MIN_ROWS_TO_USE_BULK_INSERT))
DBUG_RETURN(0);
for (i=num_keys=0 ; i < share->base.keys ; i++)
for (i=total_keylength=num_keys=0 ; i < share->base.keys ; i++)
{
if (!(key[i].flag & HA_NOSAME) && share->base.auto_key != i+1
&& test(share->state.key_map & ((ulonglong) 1 << i)))
{
num_keys++;
key_map |=((ulonglong) 1 << i);
total_keylength+=key[i].maxlength+TREE_ELEMENT_EXTRA_SIZE;
}
}
@ -831,6 +832,11 @@ int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size)
num_keys * MI_MIN_SIZE_BULK_INSERT_TREE > cache_size)
DBUG_RETURN(0);
if (rows && rows*total_keylength < cache_size)
cache_size=rows;
else
cache_size/=total_keylength*16;
info->bulk_insert=(TREE *)
my_malloc((sizeof(TREE)*share->base.keys+
sizeof(bulk_insert_param)*num_keys),MYF(0));
@ -839,7 +845,7 @@ int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size)
DBUG_RETURN(HA_ERR_OUT_OF_MEM);
params=(bulk_insert_param *)(info->bulk_insert+share->base.keys);
for (i=0 ; i < share->base.keys ; i++,key++)
for (i=0 ; i < share->base.keys ; i++)
{
if (test(key_map & ((ulonglong) 1 << i)))
{
@ -847,8 +853,8 @@ int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size)
params->keynr=i;
/* Only allocate a 16'th of the buffer at a time */
init_tree(&info->bulk_insert[i],
cache_size / num_keys / 16 + 10,
cache_size / num_keys, 0,
cache_size * key[i].maxlength,
cache_size * key[i].maxlength, 0,
(qsort_cmp2)keys_compare, 0,
(tree_element_free) keys_free, (void *)params++);
}
@ -858,3 +864,30 @@ int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size)
DBUG_RETURN(0);
}
void mi_flush_bulk_insert(MI_INFO *info, uint inx)
{
if (info->bulk_insert)
{
if (is_tree_inited(&info->bulk_insert[inx]))
reset_tree(&info->bulk_insert[inx]);
}
}
void mi_end_bulk_insert(MI_INFO *info)
{
if (info->bulk_insert)
{
uint i;
for (i=0 ; i < info->s->base.keys ; i++)
{
if (is_tree_inited(& info->bulk_insert[i]))
{
delete_tree(& info->bulk_insert[i]);
}
}
my_free((void *)info->bulk_insert, MYF(0));
info->bulk_insert=0;
}
}
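The sizing logic in mi_init_bulk_insert above converts the byte-sized bulk_insert_buff_size into a per-key row budget: when the caller supplies a row count whose keys all fit in the cache, the budget is simply that row count; otherwise the cache is divided by total_keylength*16 so only about a sixteenth of the buffer is committed up front. A small worked example with assumed figures (real key lengths and TREE_ELEMENT_EXTRA_SIZE depend on the table definition):

#include <stdio.h>

int main(void)
{
    /* Assumed figures, for illustration only */
    unsigned long cache_size = 8UL * 1024 * 1024;   /* bulk_insert_buff_size */
    unsigned long total_keylength = 64 + 96;        /* two keys incl. tree overhead */
    unsigned long rows = 500;                       /* rows promised by the caller */

    if (rows && rows * total_keylength < cache_size)
        cache_size = rows;                  /* 500*160 = 80000 < 8M: budget is 500 rows */
    else
        cache_size /= total_keylength * 16; /* otherwise about 3276 rows per key */

    printf("per-key row budget: %lu\n", cache_size);
    return 0;
}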

View file

@ -400,6 +400,7 @@ typedef struct st_mi_sort_param
#define MI_MIN_KEYBLOCK_LENGTH 50 /* When to split delete blocks */
#define MI_MIN_SIZE_BULK_INSERT_TREE 16384 /* this is per key */
#define MI_MIN_ROWS_TO_USE_BULK_INSERT 100
/* The UNIQUE check is done with a hashed long key */
@ -683,8 +684,6 @@ int mi_open_datafile(MI_INFO *info, MYISAM_SHARE *share, File file_to_dup);
int mi_open_keyfile(MYISAM_SHARE *share);
void mi_setup_functions(register MYISAM_SHARE *share);
int _mi_init_bulk_insert(MI_INFO *info, ulong cache_size);
/* Functions needed by mi_check */
volatile bool *killed_ptr(MI_CHECK *param);
void mi_check_print_error _VARARGS((MI_CHECK *param, const char *fmt,...));

View file

@ -28,8 +28,6 @@ ulonglong myrg_position(MYRG_INFO *info)
~(ulonglong) 0;
}
/* If flag != 0 one only gets pos of last record */
int myrg_status(MYRG_INFO *info,register MYMERGE_INFO *x,int flag)
{
MYRG_TABLE *current_table;
@ -55,15 +53,16 @@ int myrg_status(MYRG_INFO *info,register MYMERGE_INFO *x,int flag)
DBUG_PRINT("info2",("table: %s, offset: %lu",
file->table->filename,(ulong) file->file_offset));
}
x->records = info->records;
x->deleted = info->del;
x->data_file_length = info->data_file_length;
x->reclength = info->reclength;
x->options = info->options;
x->records= info->records;
x->deleted= info->del;
x->data_file_length= info->data_file_length;
x->reclength= info->reclength;
x->options= info->options;
if (current_table)
x->errkey = current_table->table->errkey;
x->errkey= current_table->table->errkey;
else
x->errkey=0;
x->errkey= 0;
x->rec_per_key = info->rec_per_key_part;
}
DBUG_RETURN(0);
}

View file

@ -33,7 +33,7 @@
MYRG_INFO *myrg_open(const char *name, int mode, int handle_locking)
{
int save_errno,i,errpos;
uint files,dir_length,length,options;
uint files,dir_length,length,options, key_parts;
ulonglong file_offset;
char name_buff[FN_REFLEN*2],buff[FN_REFLEN],*end;
MYRG_INFO info,*m_info;
@ -89,24 +89,39 @@ MYRG_INFO *myrg_open(const char *name, int mode, int handle_locking)
}
info.reclength=isam->s->base.reclength;
}
key_parts=(isam ? isam->s->base.key_parts : 0);
if (!(m_info= (MYRG_INFO*) my_malloc(sizeof(MYRG_INFO)+
files*sizeof(MYRG_TABLE),
files*sizeof(MYRG_TABLE)+
sizeof(long)*key_parts,
MYF(MY_WME))))
goto err;
*m_info=info;
m_info->open_tables=(files) ? (MYRG_TABLE *) (m_info+1) : 0;
m_info->tables=files;
if (files)
{
m_info->open_tables=(MYRG_TABLE *) (m_info+1);
m_info->rec_per_key_part=(ulong *) (m_info->open_tables+files);
bzero((char*) m_info->rec_per_key_part,sizeof(long)*key_parts);
}
else
{
m_info->open_tables=0;
m_info->rec_per_key_part=0;
}
errpos=2;
options= (uint) ~0;
for (i=files ; i-- > 0 ; )
{
uint j;
m_info->open_tables[i].table=isam;
m_info->options|=isam->s->options;
options&=isam->s->options;
m_info->records+=isam->state->records;
m_info->del+=isam->state->del;
m_info->data_file_length+=isam->state->data_file_length;
for (j=0; j < key_parts; j++)
m_info->rec_per_key_part[j]+=isam->s->state.rec_per_key_part[j] / files;
if (i)
isam=(MI_INFO*) (isam->open_list.next->data);
}

View file

@ -63,3 +63,11 @@ nothing 2
one 1
two 1
drop table t1;
create table t1 (row int not null, col int not null, val varchar(255) not null);
insert into t1 values (1,1,'orange'),(1,2,'large'),(2,1,'yellow'),(2,2,'medium'),(3,1,'green'),(3,2,'small');
select max(case col when 1 then val else null end) as color from t1 group by row;
color
orange
yellow
green
drop table t1;

View file

@ -48,6 +48,15 @@ sid id
skr 1
skr 2
test 1
insert into t1 values ('rts',NULL),('rts',NULL),('test',NULL);
select * from t1;
sid id
rts 1
rts 2
skr 1
skr 2
test 1
test 2
drop table t1;
drop database if exists foo;
create database foo;

View file

@ -251,3 +251,13 @@ n d
select * from t2;
n d
drop table t1,t2;
drop table if exists t1,t2,t3;
CREATE TABLE t1 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t1 VALUES (1,'jedan'),(2,'dva'),(3,'tri'),(4,'xxxxxxxxxx'),(5,'a'),(10,''),(11,''),(12,''),(13,'');
CREATE TABLE t2 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t2 VALUES (1,'jedan'),(2,'dva'),(3,'tri'),(4,'xxxxxxxxxx'),(5,'a');
CREATE TABLE t3 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t3 VALUES (1,'jedan'),(2,'dva');
update t1,t2 set t1.naziv="aaaa" where t1.broj=t2.broj;
update t1,t2,t3 set t1.naziv="bbbb", t2.naziv="aaaa" where t1.broj=t2.broj and t2.broj=t3.broj;
drop table if exists t1,t2,t3;

View file

@ -1,4 +1,5 @@
stop slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;

View file

@ -10,7 +10,9 @@ master-bin.000001 79
show slave status;
Master_Host Master_User Master_Port Connect_retry Master_Log_File Read_Master_Log_Pos Relay_Log_File Relay_Log_Pos Relay_Master_Log_File Slave_IO_Running Slave_SQL_Running Replicate_do_db Replicate_ignore_db Last_errno Last_error Skip_counter Exec_master_log_pos Relay_log_space
127.0.0.1 root MASTER_PORT 1 master-bin.000001 79 slave-relay-bin.000002 123 master-bin.000001 Yes Yes 0 0 79 127
stop slave;
change master to master_log_pos=73;
start slave;
stop slave;
change master to master_log_pos=73;
show slave status;
@ -20,6 +22,7 @@ start slave;
show slave status;
Master_Host Master_User Master_Port Connect_retry Master_Log_File Read_Master_Log_Pos Relay_Log_File Relay_Log_Pos Relay_Master_Log_File Slave_IO_Running Slave_SQL_Running Replicate_do_db Replicate_ignore_db Last_errno Last_error Skip_counter Exec_master_log_pos Relay_log_space
127.0.0.1 root MASTER_PORT 1 master-bin.000001 73 slave-relay-bin.000001 4 master-bin.000001 No Yes 0 0 73 4
stop slave;
change master to master_log_pos=173;
start slave;
show slave status;
@ -32,6 +35,7 @@ create table if not exists t1 (n int);
drop table if exists t1;
create table t1 (n int);
insert into t1 values (1),(2),(3);
stop slave;
change master to master_log_pos=79;
start slave;
select * from t1;

View file

@ -48,6 +48,9 @@ set max_join_size=100;
show variables like 'max_join_size';
Variable_name Value
max_join_size 100
show global variables like 'max_join_size';
Variable_name Value
max_join_size HA_POS_ERROR
set GLOBAL max_join_size=2000;
show global variables like 'max_join_size';
Variable_name Value
@ -59,7 +62,7 @@ max_join_size 2000
set GLOBAL max_join_size=DEFAULT;
show global variables like 'max_join_size';
Variable_name Value
max_join_size 18446744073709551615
max_join_size HA_POS_ERROR
set @@max_join_size=1000, @@global.max_join_size=2000;
select @@local.max_join_size, @@global.max_join_size;
@@session.max_join_size @@global.max_join_size

View file

@ -30,3 +30,12 @@ insert into t1 values(1),(2),(3),(4);
select case a when 1 then 2 when 2 then 3 else 0 end as fcase, count(*) from t1 group by fcase;
select case a when 1 then "one" when 2 then "two" else "nothing" end as fcase, count(*) from t1 group by fcase;
drop table t1;
#
# Test MAX(CASE ... ) that can return null
#
create table t1 (row int not null, col int not null, val varchar(255) not null);
insert into t1 values (1,1,'orange'),(1,2,'large'),(2,1,'yellow'),(2,2,'medium'),(3,1,'green'),(3,2,'small');
select max(case col when 1 then val else null end) as color from t1 group by row;
drop table t1;

View file

@ -63,5 +63,3 @@ show tables;
#--error 1045
#connect (con1,localhost,test,zorro,);
#--error 1045

View file

@ -46,6 +46,8 @@ drop table t1;
create table t1 (sid char(20), id int(2) NOT NULL auto_increment, key(sid, id));
insert into t1 values ('skr',NULL),('skr',NULL),('test',NULL);
select * from t1;
insert into t1 values ('rts',NULL),('rts',NULL),('test',NULL);
select * from t1;
drop table t1;
#

View file

@ -220,3 +220,13 @@ DELETE t1, t2 FROM t1 a,t2 b where a.n=b.n;
select * from t1;
select * from t2;
drop table t1,t2;
drop table if exists t1,t2,t3;
CREATE TABLE t1 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t1 VALUES (1,'jedan'),(2,'dva'),(3,'tri'),(4,'xxxxxxxxxx'),(5,'a'),(10,''),(11,''),(12,''),(13,'');
CREATE TABLE t2 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t2 VALUES (1,'jedan'),(2,'dva'),(3,'tri'),(4,'xxxxxxxxxx'),(5,'a');
CREATE TABLE t3 ( broj int(4) unsigned NOT NULL default '0', naziv char(25) NOT NULL default 'NEPOZNAT', PRIMARY KEY (broj)) TYPE=MyISAM;
INSERT INTO t3 VALUES (1,'jedan'),(2,'dva');
update t1,t2 set t1.naziv="aaaa" where t1.broj=t2.broj;
update t1,t2,t3 set t1.naziv="bbbb", t2.naziv="aaaa" where t1.broj=t2.broj and t2.broj=t3.broj;
drop table if exists t1,t2,t3;

View file

@ -6,7 +6,9 @@ show master status;
sync_slave_with_master;
--replace_result 3306 MASTER_PORT 9306 MASTER_PORT 3334 MASTER_PORT 3336 MASTER_PORT
show slave status;
stop slave;
change master to master_log_pos=73;
start slave;
sleep 5;
stop slave;
@ -17,6 +19,7 @@ start slave;
sleep 5;
--replace_result 3306 MASTER_PORT 9306 MASTER_PORT 3334 MASTER_PORT 3336 MASTER_PORT
show slave status;
stop slave;
change master to master_log_pos=173;
--replace_result 3306 MASTER_PORT 9306 MASTER_PORT 3334 MASTER_PORT 3336 MASTER_PORT
start slave;
@ -31,6 +34,7 @@ create table t1 (n int);
insert into t1 values (1),(2),(3);
save_master_pos;
connection slave;
stop slave;
change master to master_log_pos=79;
start slave;
sync_with_master;

View file

@ -34,13 +34,15 @@ drop table t1;
set max_join_size=100;
show variables like 'max_join_size';
# Removed, because it has different value with/without BIG_TABLES
#show global variables like 'max_join_size';
--replace_result 18446744073709551615 HA_POS_ERROR 4294967295 HA_POS_ERROR
show global variables like 'max_join_size';
set GLOBAL max_join_size=2000;
show global variables like 'max_join_size';
set max_join_size=DEFAULT;
--replace_result 18446744073709551615 HA_POS_ERROR 4294967295 HA_POS_ERROR
show variables like 'max_join_size';
set GLOBAL max_join_size=DEFAULT;
--replace_result 18446744073709551615 HA_POS_ERROR 4294967295 HA_POS_ERROR
show global variables like 'max_join_size';
set @@max_join_size=1000, @@global.max_join_size=2000;
select @@local.max_join_size, @@global.max_join_size;

View file

@ -65,7 +65,6 @@ EXTRA_PROGRAMS =
DEFS = -DDEFAULT_BASEDIR=\"$(prefix)\" \
-DDATADIR="\"$(MYSQLDATAdir)\"" \
-DDEFAULT_CHARSET_HOME="\"$(MYSQLBASEdir)\"" \
-DDATADIR="\"$(MYSQLDATAdir)\"" \
-DSHAREDIR="\"$(MYSQLSHAREdir)\"" \
@DEFS@

View file

@ -25,12 +25,13 @@
void init_alloc_root(MEM_ROOT *mem_root, uint block_size,
uint pre_alloc_size __attribute__((unused)))
{
mem_root->free= mem_root->used= 0;
mem_root->free= mem_root->used= mem_root->pre_alloc= 0;
mem_root->min_malloc= 32;
mem_root->block_size= block_size-MALLOC_OVERHEAD-sizeof(USED_MEM)-8;
mem_root->error_handler= 0;
mem_root->block_num= 4; /* We shift this with >>2 */
mem_root->first_block_usage= 0;
#if !(defined(HAVE_purify) && defined(EXTRA_DEBUG))
if (pre_alloc_size)
{
@ -137,6 +138,7 @@ static inline void mark_blocks_free(MEM_ROOT* root)
/* Now everything is set; Indicate that nothing is used anymore */
root->used= 0;
root->first_block_usage= 0;
}

View file

@ -45,6 +45,13 @@ int my_rename(const char *from, const char *to, myf MyFlags)
}
#endif
#if defined(HAVE_RENAME)
#ifdef __WIN__
/*
On windows we can't rename over an existing file:
Remove any conflicting files:
*/
(void) my_delete(to, MYF(0));
#endif
if (rename(from,to))
#else
if (link(from, to) || unlink(from))

View file

@ -103,6 +103,8 @@ SUFFIXES = .sh
-e 's!@''CC''@!@CC@!'\
-e 's!@''CXX''@!@CXX@!'\
-e 's!@''GXX''@!@GXX@!'\
-e 's!@''CC_VERSION''@!@CC_VERSION@!'\
-e 's!@''CXX_VERSION''@!@CXX_VERSION@!'\
-e 's!@''PERL''@!@PERL@!' \
-e 's!@''ASFLAGS''@!@SAVE_ASFLAGS@!'\
-e 's!@''CFLAGS''@!@SAVE_CFLAGS@!'\

View file

@ -50,7 +50,7 @@ mkdir $BASE $BASE/bin $BASE/data $BASE/data/mysql $BASE/data/test \
$BASE/include $BASE/lib $BASE/support-files $BASE/share $BASE/share/mysql \
$BASE/tests $BASE/scripts $BASE/sql-bench $BASE/mysql-test \
$BASE/mysql-test/t $BASE/mysql-test/r \
$BASE/mysql-test/include $BASE/mysql-test/std_data $BASE/man
$BASE/mysql-test/include $BASE/mysql-test/std_data $BASE/man $BASE/man/man1
chmod o-rwx $BASE/data $BASE/data/*
@ -107,7 +107,7 @@ rm $BASE/include/Makefile*; rm $BASE/include/*.in
$CP tests/*.res tests/*.tst tests/*.pl $BASE/tests
$CP support-files/* $BASE/support-files
$CP man/*.? $BASE/man
$CP man/*.1 $BASE/man/man1
$CP -r sql/share/* $BASE/share/mysql
rm -f $BASE/share/mysql/Makefile* $BASE/share/mysql/*/*.OLD

View file

@ -231,6 +231,8 @@ ${ORGANIZATION- $ORGANIZATION_C}
>Class: $CLASS_C
>Release: mysql-${VERSION} ($COMPILATION_COMMENT)
`test -n "$MYSQL_SERVER" && echo ">Server: $MYSQL_SERVER"`
>C compiler: @CC_VERSION@
>C++ compiler: @CXX_VERSION@
>Environment:
$ENVIRONMENT_C
`test -n "$SYSTEM" && echo "System: $SYSTEM"`

File diff suppressed because it is too large

View file

@ -1,4 +1,5 @@
#!@PERL@
# -*- perl -*-
# Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB
#
# This library is free software; you can redistribute it and/or
@ -671,9 +672,9 @@ sub create
$field =~ s/int\(\d*\)/int/;
$field =~ s/float\(\d*,\d*\)/float/;
$field =~ s/ double/ float/;
$field =~ s/ decimal/ float/i;
$field =~ s/ big_decimal/ float/i;
$field =~ s/ date/ int/i;
# $field =~ s/ decimal/ float/i;
# $field =~ s/ big_decimal/ float/i;
# $field =~ s/ date/ int/i;
# Pg doesn't have blob, it has text instead
$field =~ s/ blob/ text/;
$query.= $field . ',';
@ -946,9 +947,9 @@ sub create
$field =~ s/ double/ float/i;
# Solid doesn't have blob, it has long varchar
$field =~ s/ blob/ long varchar/;
$field =~ s/ decimal/ float/i;
$field =~ s/ big_decimal/ float/i;
$field =~ s/ date/ int/i;
# $field =~ s/ decimal/ float/i;
# $field =~ s/ big_decimal/ float/i;
# $field =~ s/ date/ int/i;
$query.= $field . ',';
}
substr($query,-1)=")"; # Remove last ',';
@ -1194,9 +1195,9 @@ sub create
$field =~ s/ blob/ text/;
$field =~ s/ varchar\((\d+)\)/ char($1,3)/;
$field =~ s/ char\((\d+)\)/ char($1,3)/;
$field =~ s/ decimal/ float/i;
$field =~ s/ big_decimal/ longfloat/i;
$field =~ s/ date/ int/i;
# $field =~ s/ decimal/ float/i;
# $field =~ s/ big_decimal/ longfloat/i;
# $field =~ s/ date/ int/i;
$field =~ s/ float(.*)/ float/i;
if ($field =~ / int\((\d+)\)/) {
if ($1 > 4) {
@ -2896,8 +2897,8 @@ sub create
$query="create table $table_name (";
foreach $field (@$fields)
{
$field =~ s/ decimal/ double(10,2)/i;
$field =~ s/ big_decimal/ double(10,2)/i;
# $field =~ s/ decimal/ double(10,2)/i;
# $field =~ s/ big_decimal/ double(10,2)/i;
$field =~ s/ tinyint\(.*\)/ smallint/i;
$field =~ s/ smallint\(.*\)/ smallint/i;
$field =~ s/ mediumint/ integer/i;
@ -2985,7 +2986,7 @@ sub new
bless $self;
$self->{'cmp_name'} = "interbase";
$self->{'data_source'} = "DBI:InterBase:database=$database:ib_dialect=3";
$self->{'data_source'} = "DBI:InterBase:database=$database;ib_dialect=3";
$self->{'limits'} = \%limits;
$self->{'blob'} = "blob";
$self->{'text'} = "";
@ -3000,7 +3001,7 @@ sub new
$limits{'max_tables'} = 65000; # Should be big enough
$limits{'max_text_size'} = 15000; # Max size with default buffers.
$limits{'query_size'} = 1000000; # Max size with default buffers.
$limits{'max_index'} = 31; # Max number of keys
$limits{'max_index'} = 65000; # Max number of keys
$limits{'max_index_parts'} = 8; # Max segments/key
$limits{'max_column_name'} = 128; # max table and column name
@ -3050,16 +3051,13 @@ sub new
sub version
{
my ($self)=@_;
my ($dbh,$sth,$version,@row);
my ($dbh,$version);
$version='Interbase ?';
$dbh=$self->connect();
# $sth = $dbh->prepare("show version");
# $sth->execute;
# @row = $sth->fetchrow_array;
# $version = $row[0];
# $version =~ s/.*version \"(.*)\"$/$1/;
eval { $version = $dbh->func('version','ib_database_info')->{'version'}; };
$dbh->disconnect;
$version = "6.0Beta";
$version .= "/ODBC" if ($self->{'data_source'} =~ /:ODBC:/);
return $version;
}
@ -3090,36 +3088,34 @@ sub connect
sub create
{
my($self,$table_name,$fields,$index,$options) = @_;
my($query,@queries);
my($query,@queries,@keys,@indexes);
$query="create table $table_name (";
foreach $field (@$fields)
{
$field =~ s/ big_decimal/ float/i;
$field =~ s/ double/ float/i;
# $field =~ s/ big_decimal/ decimal/i;
$field =~ s/ double/ double precision/i;
$field =~ s/ tinyint/ smallint/i;
$field =~ s/ mediumint/ int/i;
$field =~ s/ integer/ int/i;
$field =~ s/ mediumint/ integer/i;
$field =~ s/\bint\b/integer/i;
$field =~ s/ float\(\d,\d\)/ float/i;
$field =~ s/ date/ int/i; # Because of tcp ?
$field =~ s/ smallint\(\d\)/ smallint/i;
$field =~ s/ int\(\d\)/ int/i;
$field =~ s/ integer\(\d\)/ integer/i;
$query.= $field . ',';
}
foreach $ind (@$index)
{
my @index;
if ( $ind =~ /\bKEY\b/i ){
if ( $ind =~ /(\bKEY\b)|(\bUNIQUE\b)/i ){
push(@keys,"ALTER TABLE $table_name ADD $ind");
}else{
my @fields = split(' ',$index);
my @fields = split(' ',$ind);
my $query="CREATE INDEX $fields[1] ON $table_name $fields[2]";
push(@index,$query);
push(@indexes,$query);
}
}
substr($query,-1)=")"; # Remove last ',';
$query.=" $options" if (defined($options));
push(@queries,$query);
push(@queries,$query,@keys,@indexes);
return @queries;
}
@ -3470,7 +3466,8 @@ sub version
if ($sth->execute && (@row = $sth->fetchrow_array)
&& $row[0] =~ /([\d\.]+)/)
{
$version="sap-db $1";
$version=$row[0];
$version =~ s/KERNEL/SAP DB/i;
}
$sth->finish;
$dbh->disconnect;
@ -3531,7 +3528,6 @@ sub create
}else{
my @fields = split(' ',$ind);
my $query="CREATE INDEX $fields[1] ON $table_name $fields[2]";
print "$query \n";
push(@index,$query);
}
}

View file

@ -4592,9 +4592,11 @@ void Field_blob::get_key_image(char *buff,uint length, CHARSET_INFO *cs,imagetyp
if ((uint32) length > blob_length)
{
#ifdef HAVE_purify
/*
Must clear this as we do a memcmp in opt_range.cc to detect
identical keys
*/
bzero(buff+2+blob_length, (length-blob_length));
#endif
length=(uint) blob_length;
}
int2store(buff,length);

View file

@ -225,10 +225,14 @@ convert_error_code_to_mysql(
return(HA_ERR_ROW_IS_REFERENCED);
} else if (error == (int) DB_CANNOT_ADD_CONSTRAINT) {
} else if (error == (int) DB_CANNOT_ADD_CONSTRAINT) {
return(HA_ERR_CANNOT_ADD_FOREIGN);
} else if (error == (int) DB_COL_APPEARS_TWICE_IN_INDEX) {
return(HA_ERR_WRONG_TABLE_DEF);
} else if (error == (int) DB_OUT_OF_FILE_SPACE) {
return(HA_ERR_RECORD_FILE_FULL);
@ -1233,7 +1237,14 @@ ha_innobase::open(
if (primary_key != MAX_KEY) {
fprintf(stderr,
"InnoDB: Error: table %s has no primary key in InnoDB\n"
"InnoDB: data dictionary, but has one in MySQL!\n", name);
"InnoDB: data dictionary, but has one in MySQL!\n"
"InnoDB: If you created the table with a MySQL\n"
"InnoDB: version < 3.23.54 and did not define a primary\n"
"InnoDB: key, but defined a unique key with all non-NULL\n"
"InnoDB: columns, then MySQL internally treats that key\n"
"InnoDB: as the primary key. You can fix this error by\n"
"InnoDB: dump + DROP + CREATE + reimport of the table.\n",
name);
}
((row_prebuilt_t*)innobase_prebuilt)
@ -1898,12 +1909,9 @@ ha_innobase::write_row(
the counter here. */
skip_auto_inc_decr = FALSE;
if (error == DB_DUPLICATE_KEY) {
ut_a(user_thd->query);
dict_accept(user_thd->query, "REPLACE",
&skip_auto_inc_decr);
}
if (error == DB_DUPLICATE_KEY &&
user_thd->lex.sql_command == SQLCOM_REPLACE)
skip_auto_inc_decr= TRUE;
if (!skip_auto_inc_decr && incremented_auto_inc_counter
&& prebuilt->trx->auto_inc_lock) {
@ -3803,8 +3811,8 @@ innobase_map_isolation_level(
enum_tx_isolation iso) /* in: MySQL isolation level code */
{
switch(iso) {
case ISO_READ_COMMITTED: return(TRX_ISO_READ_COMMITTED);
case ISO_REPEATABLE_READ: return(TRX_ISO_REPEATABLE_READ);
case ISO_READ_COMMITTED: return(TRX_ISO_READ_COMMITTED);
case ISO_SERIALIZABLE: return(TRX_ISO_SERIALIZABLE);
case ISO_READ_UNCOMMITTED: return(TRX_ISO_READ_UNCOMMITTED);
default: ut_a(0); return(0);
@ -3859,11 +3867,9 @@ ha_innobase::external_lock(
trx->n_mysql_tables_in_use++;
prebuilt->mysql_has_locked = TRUE;
if (thd->variables.tx_isolation != ISO_REPEATABLE_READ) {
trx->isolation_level = innobase_map_isolation_level(
trx->isolation_level = innobase_map_isolation_level(
(enum_tx_isolation)
thd->variables.tx_isolation);
}
if (trx->isolation_level == TRX_ISO_SERIALIZABLE
&& prebuilt->select_lock_type == LOCK_NONE) {

View file

@ -637,7 +637,7 @@ int ha_myisam::repair(THD *thd, MI_CHECK &param, bool optimize)
the following 'if', though conceptually wrong,
is a useful optimization nevertheless.
*/
if (file->state != &file->s->state.state);
if (file->state != &file->s->state.state)
file->s->state.state = *file->state;
if (file->s->base.auto_key)
update_auto_increment_key(&param, file, 1);
@ -691,12 +691,22 @@ void ha_myisam::deactivate_non_unique_index(ha_rows rows)
mi_extra(file, HA_EXTRA_NO_KEYS, 0);
else
{
/* Only disable old index if the table was empty */
if (file->state->records == 0)
/*
Only disable old index if the table was empty and we are inserting
a lot of rows.
We should not do this for only a few rows as this is slower and
we don't want to update the key statistics based on only a few rows.
*/
if (file->state->records == 0 &&
(!rows || rows >= MI_MIN_ROWS_TO_USE_BULK_INSERT))
mi_disable_non_unique_index(file,rows);
ha_myisam::extra_opt(HA_EXTRA_BULK_INSERT_BEGIN,
current_thd->variables.bulk_insert_buff_size);
table->bulk_insert= 1;
else
{
mi_init_bulk_insert(file,
current_thd->variables.bulk_insert_buff_size,
rows);
table->bulk_insert= 1;
}
}
}
enable_activate_all_index=1;
@ -713,7 +723,7 @@ bool ha_myisam::activate_all_index(THD *thd)
MYISAM_SHARE* share = file->s;
DBUG_ENTER("activate_all_index");
mi_extra(file, HA_EXTRA_BULK_INSERT_END, 0);
mi_end_bulk_insert(file);
table->bulk_insert= 0;
if (enable_activate_all_index &&
share->state.key_map != set_bits(ulonglong, share->base.keys))
@ -954,13 +964,11 @@ int ha_myisam::extra(enum ha_extra_function operation)
}
/* To be used with WRITE_CACHE, EXTRA_CACHE and BULK_INSERT_BEGIN */
/* To be used with WRITE_CACHE and EXTRA_CACHE */
int ha_myisam::extra_opt(enum ha_extra_function operation, ulong cache_size)
{
if ((specialflag & SPECIAL_SAFE_MODE) &
(operation == HA_EXTRA_WRITE_CACHE ||
operation == HA_EXTRA_BULK_INSERT_BEGIN))
if ((specialflag & SPECIAL_SAFE_MODE) && operation == HA_EXTRA_WRITE_CACHE)
return 0;
return mi_extra(file, operation, (void*) &cache_size);
}
@ -1224,8 +1232,7 @@ longlong ha_myisam::get_auto_increment()
}
if (table->bulk_insert)
mi_extra(file, HA_EXTRA_BULK_INSERT_FLUSH,
(void*) &table->next_number_index);
mi_flush_bulk_insert(file, table->next_number_index);
longlong nr;
int error;

View file

@ -239,6 +239,13 @@ void ha_myisammrg::info(uint flag)
#else
ref_length=4; // Can't be > than my_off_t
#endif
if (flag & HA_STATUS_CONST)
{
if (table->key_parts)
memcpy((char*) table->key_info[0].rec_per_key,
(char*) info.rec_per_key,
sizeof(table->key_info[0].rec_per_key)*table->key_parts);
}
}
@ -257,9 +264,7 @@ int ha_myisammrg::extra(enum ha_extra_function operation)
int ha_myisammrg::extra_opt(enum ha_extra_function operation, ulong cache_size)
{
if ((specialflag & SPECIAL_SAFE_MODE) &
(operation == HA_EXTRA_WRITE_CACHE ||
operation == HA_EXTRA_BULK_INSERT_BEGIN))
if ((specialflag & SPECIAL_SAFE_MODE) && operation == HA_EXTRA_WRITE_CACHE)
return 0;
return myrg_extra(file, operation, (void*) &cache_size);
}

View file

@ -868,8 +868,9 @@ String *Item_func_case::val_str(String *str)
null_value=1;
return 0;
}
null_value= 0;
if (!(res=item->val_str(str)))
null_value=1;
null_value= 1;
return res;
}

View file

@ -303,6 +303,18 @@ Item *create_func_pow(Item* a, Item *b)
return new Item_func_pow(a,b);
}
Item *create_func_current_user()
{
THD *thd=current_thd;
char buff[HOSTNAME_LENGTH+USERNAME_LENGTH+2];
uint length;
length= (uint) (strxmov(buff, thd->priv_user, "@", thd->host_or_ip, NullS) -
buff);
return new Item_string("CURRENT_USER()", thd->memdup(buff, length), length,
default_charset_info);
}
Item *create_func_quarter(Item* a)
{
return new Item_func_quarter(a);
@ -406,7 +418,7 @@ Item *create_func_ucase(Item* a)
Item *create_func_version(void)
{
return new Item_string(NullS,server_version,
return new Item_string("VERSION()",server_version,
(uint) strlen(server_version),
default_charset_info);
}

View file

@ -71,6 +71,7 @@ Item *create_func_period_add(Item* a, Item *b);
Item *create_func_period_diff(Item* a, Item *b);
Item *create_func_pi(void);
Item *create_func_pow(Item* a, Item *b);
Item *create_func_current_user(void);
Item *create_func_quarter(Item* a);
Item *create_func_radians(Item *a);
Item *create_func_release_lock(Item* a);

View file

@ -2179,11 +2179,14 @@ void Item_func_get_user_var::fix_length_and_dec()
maybe_null=1;
decimals=NOT_FIXED_DEC;
max_length=MAX_BLOB_WIDTH;
if ((var_entry= get_variable(&thd->user_vars, name, 0)))
const_var_flag= thd->query_id != var_entry->update_query_id;
var_entry= get_variable(&thd->user_vars, name, 0);
}
bool Item_func_get_user_var::const_item() const
{ return var_entry && current_thd->query_id != var_entry->update_query_id; }
enum Item_result Item_func_get_user_var::result_type() const
{
user_var_entry *entry;

View file

@ -934,11 +934,10 @@ class Item_func_get_user_var :public Item_func
{
LEX_STRING name;
user_var_entry *var_entry;
bool const_var_flag;
public:
Item_func_get_user_var(LEX_STRING a):
Item_func(), name(a), const_var_flag(1) {}
Item_func(), name(a) {}
user_var_entry *get_entry();
double val();
longlong val_int();
@ -952,9 +951,9 @@ public:
*/
enum_field_types field_type() const { return MYSQL_TYPE_STRING; }
const char *func_name() const { return "get_user_var"; }
bool const_item() const { return const_var_flag; }
bool const_item() const;
table_map used_tables() const
{ return const_var_flag ? 0 : RAND_TABLE_BIT; }
{ return const_item() ? 0 : RAND_TABLE_BIT; }
bool eq(const Item *item, bool binary_cmp) const;
};

View file

@ -426,7 +426,7 @@ static SYMBOL sql_functions[] = {
{ "CAST", SYM(CAST_SYM),0,0},
{ "CEIL", SYM(FUNC_ARG1),0,CREATE_FUNC(create_func_ceiling)},
{ "CEILING", SYM(FUNC_ARG1),0,CREATE_FUNC(create_func_ceiling)},
{ "CURRENT_USER", SYM(USER),0,0},
{ "CURRENT_USER", SYM(FUNC_ARG0),0,CREATE_FUNC(create_func_current_user)},
{ "BIT_LENGTH", SYM(FUNC_ARG1),0,CREATE_FUNC(create_func_bit_length)},
{ "CENTROID", SYM(FUNC_ARG1),0,CREATE_FUNC(create_func_centroid)},
{ "CHAR_LENGTH", SYM(FUNC_ARG1),0,CREATE_FUNC(create_func_char_length)},

View file

@ -662,7 +662,12 @@ int MYSQL_LOG::purge_first_log(struct st_relay_log_info* rli)
rli->linfo.log_file_name);
goto err;
}
/*
Reset position to current log. This involves setting both of the
position variables:
*/
rli->relay_log_pos = BIN_LOG_HEADER_SIZE;
rli->pending = 0;
strmake(rli->relay_log_name,rli->linfo.log_file_name,
sizeof(rli->relay_log_name)-1);
@ -1121,8 +1126,20 @@ bool MYSQL_LOG::write(Log_event* event_info)
if (file == &log_file)
{
error = ha_report_binlog_offset_and_commit(thd, log_file_name,
/*
LOAD DATA INFILE in AUTOCOMMIT=1 mode writes chunks to the binlog
even before it has successfully completed. We only report
the binlog write and do the commit inside the transactional table
handler if the log event type is appropriate.
*/
if (event_info->get_type_code() == QUERY_EVENT
|| event_info->get_type_code() == EXEC_LOAD_EVENT)
{
error = ha_report_binlog_offset_and_commit(thd, log_file_name,
file->pos_in_file);
}
should_rotate= (my_b_tell(file) >= (my_off_t) max_binlog_size);
}
@ -1165,7 +1182,7 @@ uint MYSQL_LOG::next_file_id()
NOTE
- We only come here if there is something in the cache.
- The thing in the cache is always a complete transcation
- The thing in the cache is always a complete transaction
- 'cache' needs to be reinitialized after this functions returns.
IMPLEMENTATION
@ -1233,6 +1250,13 @@ bool MYSQL_LOG::write(THD *thd, IO_CACHE *cache)
log_file.pos_in_file)))
goto err;
signal_update();
if (my_b_tell(&log_file) >= (my_off_t) max_binlog_size)
{
pthread_mutex_lock(&LOCK_index);
new_file(0); // inside mutex
pthread_mutex_unlock(&LOCK_index);
}
}
VOID(pthread_mutex_unlock(&LOCK_log));
DBUG_RETURN(0);

View file

@ -296,9 +296,13 @@ int Log_event::exec_event(struct st_relay_log_info* rli)
{
if (rli) // QQ When is this not true ?
{
rli->inc_pos(get_event_len(),log_pos);
DBUG_ASSERT(rli->sql_thd != 0);
flush_relay_log_info(rli);
if (rli->inside_transaction)
rli->inc_pending(get_event_len());
else
{
rli->inc_pos(get_event_len(),log_pos);
flush_relay_log_info(rli);
}
}
return 0;
}
@ -865,6 +869,19 @@ int Query_log_event::exec_event(struct st_relay_log_info* rli)
mysql_log.write(thd,COM_QUERY,"%s",thd->query);
DBUG_PRINT("query",("%s",thd->query));
mysql_parse(thd, thd->query, q_len);
/*
Set a flag if we are inside a transaction so that we can restart
the transaction from the start if we are killed.
This will only be done if we are supporting transactional tables
in the slave.
*/
if (!strcmp(thd->query,"BEGIN"))
rli->inside_transaction= opt_using_transactions;
else if (!strcmp(thd->query,"COMMIT"))
rli->inside_transaction=0;
DBUG_PRINT("info",("expected_error: %d last_errno: %d",
expected_error, thd->net.last_errno));
if ((expected_error != (actual_error= thd->net.last_errno)) &&

View file

@ -323,11 +323,12 @@ int mysql_create_db(THD *thd, char *db, HA_CREATE_INFO *create, bool silent);
int mysql_alter_db(THD *thd, const char *db, HA_CREATE_INFO *create);
int mysql_rm_db(THD *thd,char *db,bool if_exists, bool silent);
void mysql_binlog_send(THD* thd, char* log_ident, my_off_t pos, ushort flags);
int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists);
int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists,
my_bool drop_temporary);
int mysql_rm_table_part2(THD *thd, TABLE_LIST *tables, bool if_exists,
bool log_query);
bool drop_temporary, bool log_query);
int mysql_rm_table_part2_with_lock(THD *thd, TABLE_LIST *tables,
bool if_exists,
bool if_exists, bool drop_temporary,
bool log_query);
int quick_rm_table(enum db_type base,const char *db,
const char *table_name);

View file

@ -206,7 +206,7 @@ static char **opt_argv;
#else
#define MYSQL_SERVER_SUFFIX ""
#endif /* __NT__ */
#endif
#endif /* __WIN__ */
#ifdef HAVE_BERKELEY_DB
SHOW_COMP_OPTION have_berkeley_db=SHOW_OPTION_YES;
@ -247,6 +247,12 @@ SHOW_COMP_OPTION have_query_cache=SHOW_OPTION_NO;
const char *show_comp_option_name[]= {"YES", "NO", "DISABLED"};
bool opt_large_files= sizeof(my_off_t) > 4;
#if SIZEOF_OFF_T > 4 && defined(BIG_TABLES)
#define GET_HA_ROWS GET_ULL
#else
#define GET_HA_ROWS GET_ULONG
#endif
/*
Variables to store startup options
@ -3793,8 +3799,13 @@ struct my_option my_long_options[] =
{"lower_case_table_names", OPT_LOWER_CASE_TABLE_NAMES,
"If set to 1 table names are stored in lowercase on disk and table names will be case-insensitive.",
(gptr*) &lower_case_table_names,
(gptr*) &lower_case_table_names, 0,
GET_BOOL, NO_ARG, IF_WIN(1,0), 0, 1, 0, 1, 0},
(gptr*) &lower_case_table_names, 0, GET_BOOL, NO_ARG,
#ifdef FN_NO_CASE_SENCE
1
#else
0
#endif
, 0, 1, 0, 1, 0},
{"max_allowed_packet", OPT_MAX_ALLOWED_PACKET,
"Max packetlength to send/receive from to server.",
(gptr*) &global_system_variables.max_allowed_packet,
@ -3833,7 +3844,7 @@ struct my_option my_long_options[] =
{"max_join_size", OPT_MAX_JOIN_SIZE,
"Joins that are probably going to read more than max_join_size records return an error.",
(gptr*) &global_system_variables.max_join_size,
(gptr*) &max_system_variables.max_join_size, 0, GET_ULONG, REQUIRED_ARG,
(gptr*) &max_system_variables.max_join_size, 0, GET_HA_ROWS, REQUIRED_ARG,
~0L, 1, ~0L, 0, 1, 0},
{"max_prepared_statements", OPT_MAX_PREP_STMT,
"Max number of prepared_statements for a thread",
@ -4234,12 +4245,12 @@ static void set_options(void)
sizeof(mysql_real_data_home)-1);
/* Set default values for some variables */
global_system_variables.table_type=DB_TYPE_MYISAM;
global_system_variables.tx_isolation=ISO_REPEATABLE_READ;
global_system_variables.table_type= DB_TYPE_MYISAM;
global_system_variables.tx_isolation= ISO_REPEATABLE_READ;
global_system_variables.select_limit= (ulonglong) HA_POS_ERROR;
max_system_variables.select_limit= (ulong) HA_POS_ERROR;
max_system_variables.select_limit= (ulonglong) HA_POS_ERROR;
global_system_variables.max_join_size= (ulonglong) HA_POS_ERROR;
max_system_variables.max_join_size= (ulonglong) HA_POS_ERROR;
max_system_variables.max_join_size= (ulonglong) HA_POS_ERROR;
#ifdef __WIN__
/* Allow Win32 users to move MySQL anywhere */

View file

@ -73,7 +73,7 @@ extern pthread_mutex_t LOCK_bytes_sent , LOCK_bytes_received;
#include "thr_alarm.h"
#define TEST_BLOCKING 8
#define MAX_THREE_BYTES 255L*255L*255L
#define MAX_THREE_BYTES (256L*256L*256L-1)
static my_bool net_write_buff(NET *net,const char *packet,ulong len);
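The corrected constant matters because the packet length field is three bytes wide: the old expression 255*255*255 stops 195840 bytes short of the real limit. A quick stand-alone check of the arithmetic:

#include <assert.h>

int main(void)
{
    assert(255L * 255L * 255L == 16581375L);       /* old, incorrect limit */
    assert(256L * 256L * 256L - 1 == 16777215L);   /* 0xFFFFFF, the true 3-byte maximum */
    return 0;
}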
@ -312,6 +312,7 @@ net_write_command(NET *net,uchar command,
/*
Caching the data in a local buffer before sending it.
One can force the buffer to be flushed with 'net_flush'.
*/
static my_bool
@ -319,15 +320,24 @@ net_write_buff(NET *net,const char *packet,ulong len)
{
ulong left_length=(ulong) (net->buff_end - net->write_pos);
while (len > left_length)
if (len > left_length)
{
memcpy((char*) net->write_pos,packet,left_length);
if (net_real_write(net,(char*) net->buff,net->max_packet))
return 1;
net->write_pos=net->buff;
packet+=left_length;
len-=left_length;
left_length=net->max_packet;
len-= left_length;
left_length= net->max_packet;
/* Send out rest of the blocks as full sized blocks */
while (len > left_length)
{
if (net_real_write(net, packet, left_length))
return 1;
packet+= left_length;
len-= left_length;
}
}
memcpy((char*) net->write_pos,packet,len);
net->write_pos+=len;
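The rewritten net_write_buff above stages at most one buffer's worth of data before flushing, then pushes any remaining full-sized blocks straight from the caller's memory and only buffers the tail. A simplified sketch of that flow under assumed types (write_all stands in for net_real_write, and the NET state is reduced to a fixed buffer plus a fill counter):

#include <stddef.h>
#include <string.h>

struct sketch_net {
    unsigned char buf[16];     /* stand-in for net->buff of size max_packet */
    size_t        used;        /* stand-in for net->write_pos - net->buff  */
};

/* Stand-in for net_real_write(): pretend the bytes were sent. */
static int write_all(const unsigned char *data, size_t len)
{
    (void) data; (void) len;
    return 0;
}

static int buffered_write(struct sketch_net *net,
                          const unsigned char *packet, size_t len)
{
    size_t left = sizeof(net->buf) - net->used;

    if (len > left) {
        /* Top up and flush the current buffer. */
        memcpy(net->buf + net->used, packet, left);
        if (write_all(net->buf, sizeof(net->buf)))
            return 1;
        net->used = 0;
        packet += left;
        len -= left;
        /* Send remaining full-size blocks directly, with no extra copy. */
        while (len > sizeof(net->buf)) {
            if (write_all(packet, sizeof(net->buf)))
                return 1;
            packet += sizeof(net->buf);
            len -= sizeof(net->buf);
        }
    }
    /* Buffer the (possibly empty) tail for a later flush. */
    memcpy(net->buf + net->used, packet, len);
    net->used += len;
    return 0;
}

int main(void)
{
    struct sketch_net net = { {0}, 0 };
    unsigned char payload[40];

    memset(payload, 'x', sizeof(payload));
    return buffered_write(&net, payload, sizeof(payload));
}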
@ -500,6 +510,7 @@ static void my_net_skip_rest(NET *net, uint32 remain, thr_alarm_t *alarmed)
ALARM alarm_buff;
uint retry_count=0;
my_bool old_mode;
uint32 old=remain;
if (!thr_alarm_in_use(&alarmed))
{
@ -521,6 +532,12 @@ static void my_net_skip_rest(NET *net, uint32 remain, thr_alarm_t *alarmed)
return;
}
remain -= (uint32) length;
if (!remain && old==MAX_THREE_BYTES &&
(length=vio_read(net->vio,(char*) net->buff,NET_HEADER_SIZE)))
{
old=remain= uint3korr(net->buff);
net->pkt_nr++;
}
statistic_add(bytes_received,length,&LOCK_bytes_received);
}
}
@ -667,7 +684,10 @@ my_real_read(NET *net, ulong *complen)
#ifdef HAVE_COMPRESS
if (net->compress)
{
/* complen is > 0 if package is really compressed */
/*
If the packet is compressed then complen > 0 and contains the
number of bytes in the uncompressed packet
*/
*complen=uint3korr(&(net->buff[net->where_b + NET_HEADER_SIZE]));
}
#endif
@ -681,11 +701,19 @@ my_real_read(NET *net, ulong *complen)
{
if (net_realloc(net,helping))
{
#ifdef MYSQL_SERVER
#ifndef NO_ALARM
if (i == 1)
my_net_skip_rest(net, (uint32) len, &alarmed);
if (net->compress)
{
len= packet_error;
goto end;
}
my_net_skip_rest(net, (uint32) len, &alarmed);
len=0;
#endif
#else
len= packet_error; /* Return error */
#endif
len= packet_error; /* Return error */
goto end;
}
}
@ -738,7 +766,7 @@ my_net_read(NET *net)
{
net->where_b += len;
total_length += len;
len = my_real_read (net,&complen);
len = my_real_read(net,&complen);
} while (len == MAX_THREE_BYTES);
if (len != packet_error)
len+= total_length;
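This read loop and the my_net_skip_rest change above both follow the same convention: a logical packet larger than 0xFFFFFF bytes arrives as a chain of chunks, and every chunk except the last carries exactly MAX_THREE_BYTES bytes of payload. A reduced sketch of reassembling such a chain (read_chunk is a hypothetical stand-in for my_real_read that just reports each chunk's payload length from a scripted sequence):

#include <stdio.h>

#define MAX_THREE_BYTES (256L * 256L * 256L - 1)

/* Stand-in for my_real_read(): feed chunk lengths from a fixed script,
   two full chunks followed by a short terminating one. */
static unsigned long read_chunk(void)
{
    static const unsigned long script[] = { MAX_THREE_BYTES, MAX_THREE_BYTES, 100 };
    static size_t i = 0;
    return script[i++];
}

/* Reassemble one logical packet; returns its total payload length. */
static unsigned long read_logical_packet(void)
{
    unsigned long total = 0;
    unsigned long len;

    do {
        len = read_chunk();
        total += len;
        /* A chunk of exactly MAX_THREE_BYTES means another chunk follows;
           a shorter chunk (possibly empty) ends the logical packet. */
    } while (len == (unsigned long) MAX_THREE_BYTES);

    return total;
}

int main(void)
{
    printf("logical packet length: %lu\n", read_logical_packet());
    return 0;
}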
@ -766,7 +794,7 @@ my_net_read(NET *net)
}
else
{
/* reuse buffer, as there is noting in it that we need */
/* reuse buffer, as there is nothing in it that we need */
buf_length=start_of_packet=first_packet_offset=0;
}
for (;;)

View file

@ -82,7 +82,7 @@ static int init_failsafe_rpl_thread(THD* thd)
#endif
thd->mem_root.free=thd->mem_root.used=0;
if ((ulong) thd->variables.max_join_size == (ulong) HA_POS_ERROR)
if (thd->variables.max_join_size == HA_POS_ERROR)
thd->options|= OPTION_BIG_SELECTS;
thd->proc_info="Thread initialized";

View file

@ -158,11 +158,11 @@ sys_var_thd_ulong sys_max_heap_table_size("max_heap_table_size",
&SV::max_heap_table_size);
sys_var_thd_ulong sys_pseudo_thread_id("pseudo_thread_id",
&SV::pseudo_thread_id);
sys_var_thd_ulonglong sys_max_join_size("max_join_size",
sys_var_thd_ha_rows sys_max_join_size("max_join_size",
&SV::max_join_size,
fix_max_join_size);
#ifndef TO_BE_DELETED /* Alias for max_join_size */
sys_var_thd_ulonglong sys_sql_max_join_size("sql_max_join_size",
sys_var_thd_ha_rows sys_sql_max_join_size("sql_max_join_size",
&SV::max_join_size,
fix_max_join_size);
#endif
@ -284,7 +284,7 @@ static sys_var_thd_bit sys_unique_checks("unique_checks",
/* Local state variables */
static sys_var_thd_ulonglong sys_select_limit("sql_select_limit",
static sys_var_thd_ha_rows sys_select_limit("sql_select_limit",
&SV::select_limit);
static sys_var_timestamp sys_timestamp("timestamp");
static sys_var_last_insert_id sys_last_insert_id("last_insert_id");
@ -606,7 +606,7 @@ static void fix_max_join_size(THD *thd, enum_var_type type)
{
if (type != OPT_GLOBAL)
{
if (thd->variables.max_join_size == (ulonglong) HA_POS_ERROR)
if (thd->variables.max_join_size == HA_POS_ERROR)
thd->options|= OPTION_BIG_SELECTS;
else
thd->options&= ~OPTION_BIG_SELECTS;
@ -753,12 +753,7 @@ bool sys_var_thd_ulong::update(THD *thd, set_var *var)
if (option_limits)
tmp= (ulong) getopt_ull_limit_value(tmp, option_limits);
if (var->type == OPT_GLOBAL)
{
/* Lock is needed to make things safe on 32 bit systems */
pthread_mutex_lock(&LOCK_global_system_variables);
global_system_variables.*offset= (ulong) tmp;
pthread_mutex_unlock(&LOCK_global_system_variables);
}
else
thd->variables.*offset= (ulong) tmp;
return 0;
@ -785,10 +780,60 @@ byte *sys_var_thd_ulong::value_ptr(THD *thd, enum_var_type type)
}
bool sys_var_thd_ha_rows::update(THD *thd, set_var *var)
{
ulonglong tmp= var->value->val_int();
/* Don't use bigger value than given with --maximum-variable-name=.. */
if ((ha_rows) tmp > max_system_variables.*offset)
tmp= max_system_variables.*offset;
if (option_limits)
tmp= (ha_rows) getopt_ull_limit_value(tmp, option_limits);
if (var->type == OPT_GLOBAL)
{
/* Lock is needed to make things safe on 32 bit systems */
pthread_mutex_lock(&LOCK_global_system_variables);
global_system_variables.*offset= (ha_rows) tmp;
pthread_mutex_unlock(&LOCK_global_system_variables);
}
else
thd->variables.*offset= (ha_rows) tmp;
return 0;
}
void sys_var_thd_ha_rows::set_default(THD *thd, enum_var_type type)
{
if (type == OPT_GLOBAL)
{
/* We will not come here if option_limits is not set */
pthread_mutex_lock(&LOCK_global_system_variables);
global_system_variables.*offset= (ha_rows) option_limits->def_value;
pthread_mutex_unlock(&LOCK_global_system_variables);
}
else
thd->variables.*offset= global_system_variables.*offset;
}
byte *sys_var_thd_ha_rows::value_ptr(THD *thd, enum_var_type type)
{
if (type == OPT_GLOBAL)
return (byte*) &(global_system_variables.*offset);
return (byte*) &(thd->variables.*offset);
}
bool sys_var_thd_ulonglong::update(THD *thd, set_var *var)
{
if (var->type == OPT_GLOBAL)
{
/* Lock is needed to make things safe on 32 bit systems */
pthread_mutex_lock(&LOCK_global_system_variables);
global_system_variables.*offset= var->value->val_int();
pthread_mutex_unlock(&LOCK_global_system_variables);
}
else
thd->variables.*offset= var->value->val_int();
return 0;
@ -798,7 +843,11 @@ bool sys_var_thd_ulonglong::update(THD *thd, set_var *var)
void sys_var_thd_ulonglong::set_default(THD *thd, enum_var_type type)
{
if (type == OPT_GLOBAL)
{
pthread_mutex_lock(&LOCK_global_system_variables);
global_system_variables.*offset= (ulonglong) option_limits->def_value;
pthread_mutex_unlock(&LOCK_global_system_variables);
}
else
thd->variables.*offset= global_system_variables.*offset;
}
@ -905,6 +954,8 @@ Item *sys_var::item(THD *thd, enum_var_type var_type)
return new Item_uint((int32) *(ulong*) value_ptr(thd, var_type));
case SHOW_LONGLONG:
return new Item_int(*(longlong*) value_ptr(thd, var_type));
case SHOW_HA_ROWS:
return new Item_int((longlong) *(ha_rows*) value_ptr(thd, var_type));
case SHOW_MY_BOOL:
return new Item_int((int32) *(my_bool*) value_ptr(thd, var_type),1);
case SHOW_CHAR:

View file

@ -212,6 +212,24 @@ public:
};
class sys_var_thd_ha_rows :public sys_var_thd
{
public:
ha_rows SV::*offset;
sys_var_thd_ha_rows(const char *name_arg, ha_rows SV::*offset_arg)
:sys_var_thd(name_arg), offset(offset_arg)
{}
sys_var_thd_ha_rows(const char *name_arg, ha_rows SV::*offset_arg,
sys_after_update_func func)
:sys_var_thd(name_arg,func), offset(offset_arg)
{}
bool update(THD *thd, set_var *var);
void set_default(THD *thd, enum_var_type type);
SHOW_TYPE type() { return SHOW_HA_ROWS; }
byte *value_ptr(THD *thd, enum_var_type type);
};
class sys_var_thd_ulonglong :public sys_var_thd
{
public:

View file

@ -217,8 +217,8 @@
"Deadlock found when trying to get lock; Try restarting transaction",
"The used table type doesn't support FULLTEXT indexes",
"Cannot add foreign key constraint",
"Cannot add a child row: a foreign key constraint fails",
"Cannot delete a parent row: a foreign key constraint fails",
"Cannot add or update a child row: a foreign key constraint fails",
"Cannot delete or update a parent row: a foreign key constraint fails",
"Error connecting to master: %-.128s",
"Error running query on master: %-.128s",
"Error when executing command %s: %-.128s",

View file

@ -219,25 +219,25 @@
"Impossibile aggiungere il vincolo di integrita' referenziale (foreign key constraint)",
"Impossibile aggiungere la riga: un vincolo d'integrita' referenziale non e' soddisfatto",
"Impossibile cancellare la riga: un vincolo d'integrita' referenziale non e' soddisfatto",
"Error connecting to master: %-.128s",
"Error running query on master: %-.128s",
"Error when executing command %s: %-.128s",
"Wrong usage of %s and %s",
"The used SELECT statements have a different number of columns",
"Can't execute the query because you have a conflicting read lock",
"Mixing of transactional and non-transactional tables is disabled",
"Option '%s' used twice in statement",
"User '%-.64s' has exceeded the '%s' resource (current value: %ld)",
"Access denied. You need the %-.128s privilege for this operation",
"Variable '%-.64s' is a LOCAL variable and can't be used with SET GLOBAL",
"Variable '%-.64s' is a GLOBAL variable and should be set with SET GLOBAL",
"Variable '%-.64s' doesn't have a default value",
"Variable '%-.64s' can't be set to the value of '%-.64s'",
"Wrong argument type to variable '%-.64s'",
"Variable '%-.64s' can only be set, not read",
"Wrong usage/placement of '%s'",
"This version of MySQL doesn't yet support '%s'",
"Got fatal error %d: '%-.128s' from master when reading data from binary log",
"Errore durante la connessione al master: %-.128s",
"Errore eseguendo una query sul master: %-.128s",
"Errore durante l'esecuzione del comando %s: %-.128s",
"Uso errato di %s e %s",
"La SELECT utilizzata ha un numero di colonne differente",
"Impossibile eseguire la query perche' c'e' un conflitto con in lock di lettura",
"E' disabilitata la possibilita' di mischiare tabelle transazionali e non-transazionali",
"L'opzione '%s' e' stata usata due volte nel comando",
"L'utente '%-.64s' ha ecceduto la risorsa '%s' (valore corrente: %ld)",
"Accesso non consentito. Serve il privilegio %-.128s per questa operazione",
"La variabile '%-.64s' e' una variabile locale ( LOCAL ) e non puo' essere cambiata usando SET GLOBAL",
"La variabile '%-.64s' e' una variabile globale ( GLOBAL ) e deve essere cambiata usando SET GLOBAL",
"La variabile '%-.64s' non ha un valore di default",
"Alla variabile '%-.64s' non puo' essere assegato il valore '%-.64s'",
"Tipo di valore errato per la variabile '%-.64s'",
"Alla variabile '%-.64s' e' di sola scrittura quindi puo' essere solo assegnato un valore, non letto",
"Uso/posizione di '%s' sbagliato",
"Questa versione di MySQL non supporta ancora '%s'",
"Errore fatale %d: '%-.128s' dal master leggendo i dati dal log binario",
"Wrong foreign key definition for '%-.64s': %s",
"Key reference and table reference doesn't match",
"Cardinality error (more/less than %d columns)",

View file

@ -1747,7 +1747,7 @@ static int init_slave_thread(THD* thd, SLAVE_THD_TYPE thd_type)
VOID(pthread_sigmask(SIG_UNBLOCK,&set,&thd->block_signals));
#endif
if ((ulong) thd->variables.max_join_size == (ulong) HA_POS_ERROR)
if (thd->variables.max_join_size == HA_POS_ERROR)
thd->options |= OPTION_BIG_SELECTS;
if (thd_type == SLAVE_THD_SQL)
@ -2965,7 +2965,12 @@ static IO_CACHE *reopen_relay_log(RELAY_LOG_INFO *rli, const char **errmsg)
if ((rli->cur_log_fd=open_binlog(cur_log,rli->relay_log_name,
errmsg)) <0)
DBUG_RETURN(0);
my_b_seek(cur_log,rli->relay_log_pos);
/*
We want to start exactly where we were before:
relay_log_pos Current log pos
pending Number of bytes already processed from the event
*/
my_b_seek(cur_log,rli->relay_log_pos + rli->pending);
DBUG_RETURN(cur_log);
}
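
Editorial note (not part of the diff above): the comment explains the corrected seek: after reopening the relay log, reading must resume at relay_log_pos plus the bytes of the current event that were already processed (pending). A tiny, hypothetical illustration of that arithmetic follows; the function and variable names are ours, not the server's.

// Hedged sketch of the resume offset computed in reopen_relay_log() above.
#include <cassert>

static unsigned long long relay_resume_pos(unsigned long long relay_log_pos,
                                           unsigned long long pending)
{
  // Seeking to relay_log_pos alone would replay the already-processed part
  // of the current event; adding 'pending' continues exactly where we were.
  return relay_log_pos + pending;
}

int main()
{
  assert(relay_resume_pos(4096, 120) == 4216);
  return 0;
}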

View file

@ -186,11 +186,13 @@ typedef struct st_relay_log_info
volatile bool abort_slave, slave_running;
bool log_pos_current;
bool skip_log_purge;
bool inside_transaction;
st_relay_log_info()
:info_fd(-1),cur_log_fd(-1), cur_log_old_open_count(0), abort_pos_wait(0),
slave_run_id(0), inited(0), abort_slave(0), slave_running(0),
log_pos_current(0), skip_log_purge(0)
:info_fd(-1),cur_log_fd(-1), cur_log_old_open_count(0), abort_pos_wait(0),
slave_run_id(0), inited(0), abort_slave(0), slave_running(0),
log_pos_current(0), skip_log_purge(0),
inside_transaction(0) /* the default is autocommit=1 */
{
relay_log_name[0] = master_log_name[0] = 0;
bzero(&info_file,sizeof(info_file));

View file

@ -1936,7 +1936,7 @@ static int replace_table_table(THD *thd, GRANT_TABLE *grant_table,
ulong rights, ulong col_rights,
bool revoke_grant)
{
char grantor[HOSTNAME_LENGTH+1+USERNAME_LENGTH];
char grantor[HOSTNAME_LENGTH+USERNAME_LENGTH+2];
int old_row_exists = 1;
int error=0;
ulong store_table_rights, store_col_rights;

View file

@ -403,7 +403,6 @@ void field_real::add()
length= my_sprintf(buff, (buff, "%-.*f", (int) decs, num));
#endif
// We never need to check further than this
end = buff + length - 1 - decs + max_notzero_dec_len;

View file

@ -344,8 +344,8 @@ struct system_variables
{
ulonglong myisam_max_extra_sort_file_size;
ulonglong myisam_max_sort_file_size;
ulonglong select_limit;
ulonglong max_join_size;
ha_rows select_limit;
ha_rows max_join_size;
ulong bulk_insert_buff_size;
ulong join_buff_size;
ulong long_query_time;

View file

@ -468,7 +468,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *db,
my_dirend(dirp);
if (thd->killed ||
(tot_list && mysql_rm_table_part2_with_lock(thd, tot_list, 1, 1)))
(tot_list && mysql_rm_table_part2_with_lock(thd, tot_list, 1, 0, 1)))
DBUG_RETURN(-1);
/*

View file

@ -109,7 +109,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
int error;
bool log_on= ((thd->options & OPTION_UPDATE_LOG) ||
!(thd->master_access & SUPER_ACL));
bool transactional_table, log_delayed, bulk_insert=0;
bool transactional_table, log_delayed, bulk_insert;
uint value_count;
ulong counter = 1;
ulonglong id;
@ -217,21 +217,17 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
thd->proc_info="update";
if (duplic != DUP_ERROR)
table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
if ((bulk_insert= (values_list.elements >= MIN_ROWS_TO_USE_BULK_INSERT &&
lock_type != TL_WRITE_DELAYED &&
!(specialflag & SPECIAL_SAFE_MODE))))
if ((lock_type != TL_WRITE_DELAYED && !(specialflag & SPECIAL_SAFE_MODE)) &&
values_list.elements >= MIN_ROWS_TO_USE_BULK_INSERT)
{
table->file->extra_opt(HA_EXTRA_WRITE_CACHE,
min(thd->variables.read_buff_size,
table->avg_row_length*values_list.elements));
if (thd->variables.bulk_insert_buff_size)
table->file->extra_opt(HA_EXTRA_BULK_INSERT_BEGIN,
min(thd->variables.bulk_insert_buff_size,
(table->total_key_length +
table->keys * TREE_ELEMENT_EXTRA_SIZE)*
values_list.elements));
table->bulk_insert= 1;
table->file->deactivate_non_unique_index(values_list.elements);
bulk_insert=1;
}
else
bulk_insert=0;
while ((values= its++))
{
@ -309,7 +305,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
error=1;
}
}
if (table->file->extra(HA_EXTRA_BULK_INSERT_END))
if (table->file->activate_all_index(thd))
{
if (!error)
{
@ -317,7 +313,6 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list,
error=1;
}
}
table->bulk_insert= 0;
}
if (id && values_list.elements != 1)
thd->insert_id(id); // For update log
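
Editorial note (not part of the diff above): the INSERT path in the hunks above gates bulk handling on the row count and brackets it with deactivate_non_unique_index()/activate_all_index() calls on the handler, reporting a failure from the final index rebuild. A rough, self-contained C++ sketch of that control flow follows; the handler type and method bodies are placeholders, not the real handler API.

// Conceptual sketch only; 'HandlerStub' and its methods are invented
// stand-ins for the storage-engine handler used above.
struct HandlerStub
{
  void deactivate_non_unique_index(unsigned long rows) { (void) rows; /* drop keys */ }
  int  activate_all_index() { return 0; /* rebuild keys, 0 = success */ }
};

static int bulk_insert_rows(HandlerStub *file, unsigned long rows,
                            unsigned long min_rows_for_bulk)
{
  int error = 0;
  bool bulk_insert = rows >= min_rows_for_bulk;  // only worth it for many rows
  if (bulk_insert)
    file->deactivate_non_unique_index(rows);
  /* ... write the rows here ... */
  if (bulk_insert && file->activate_all_index())
    error = 1;                                   // index rebuild failed
  return error;
}

int main()
{
  HandlerStub h;
  return bulk_insert_rows(&h, 1000, 100);
}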

View file

@ -269,11 +269,6 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
table->file->extra(HA_EXTRA_NO_IGNORE_DUP_KEY);
table->time_stamp=save_time_stamp;
table->next_number_field=0;
if (thd->lock)
{
mysql_unlock_tables(thd, thd->lock);
thd->lock=0;
}
}
if (file >= 0) my_close(file,MYF(0));
free_blobs(table); /* if pack_blob was used */
@ -292,7 +287,8 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
mysql_bin_log.write(&d);
}
}
DBUG_RETURN(-1); // Error on read
error= -1; // Error on read
goto err;
}
sprintf(name,ER(ER_LOAD_INFO),info.records,info.deleted,
info.records-info.copied,thd->cuted_fields);
@ -326,6 +322,13 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
}
if (transactional_table)
error=ha_autocommit_or_rollback(thd,error);
err:
if (thd->lock)
{
mysql_unlock_tables(thd, thd->lock);
thd->lock=0;
}
DBUG_RETURN(error);
}

View file

@ -516,6 +516,7 @@ check_connections(THD *thd)
{
vio_in_addr(net->vio,&thd->remote.sin_addr);
thd->host=ip_to_hostname(&thd->remote.sin_addr,&connect_errors);
thd->host[strnlen(thd->host, HOSTNAME_LENGTH)]= 0;
if (connect_errors > max_connect_errors)
return(ER_HOST_IS_BLOCKED);
}
@ -532,6 +533,7 @@ check_connections(THD *thd)
thd->ip=0;
bzero((char*) &thd->remote,sizeof(struct sockaddr));
}
/* Ensure that wrong hostnames don't cause buffer overflows */
vio_keepalive(net->vio, TRUE);
ulong pkt_len=0;
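
Editorial note (not part of the diff above): the strnlen() call added in this hunk forces a terminator within HOSTNAME_LENGTH bytes, so an over-long name returned by the resolver cannot overrun later fixed-size buffers. A minimal stand-alone sketch of the same idea; the length constant here is only an example value and the buffer handling is simplified.

// Illustration only; HOSTNAME_LENGTH_EXAMPLE is an assumed value.
#include <cstring>
#include <cstdio>

enum { HOSTNAME_LENGTH_EXAMPLE = 60 };

static void cap_hostname(char *host)
{
  // Write a terminator no later than HOSTNAME_LENGTH_EXAMPLE bytes in, so an
  // over-long name from a broken resolver cannot overflow later buffers.
  host[strnlen(host, HOSTNAME_LENGTH_EXAMPLE)] = 0;
}

int main()
{
  char host[HOSTNAME_LENGTH_EXAMPLE + 16];
  memset(host, 'a', sizeof(host) - 1);     // simulate an over-long hostname
  host[sizeof(host) - 1] = 0;
  cap_hostname(host);
  printf("%zu\n", strlen(host));           // prints 60
  return 0;
}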
@ -763,7 +765,7 @@ pthread_handler_decl(handle_one_connection,arg)
goto end_thread;
}
if ((ulong) thd->variables.max_join_size == (ulonglong) HA_POS_ERROR)
if (thd->variables.max_join_size == HA_POS_ERROR)
thd->options |= OPTION_BIG_SELECTS;
if (thd->client_capabilities & CLIENT_COMPRESS)
net->compress=1; // Use compression
@ -839,7 +841,7 @@ extern "C" pthread_handler_decl(handle_bootstrap,arg)
#endif
if ((ulong) thd->variables.max_join_size == (ulonglong) HA_POS_ERROR)
if (thd->variables.max_join_size == HA_POS_ERROR)
thd->options |= OPTION_BIG_SELECTS;
thd->proc_info=0;
@ -969,6 +971,12 @@ bool do_command(THD *thd)
vio_description(net->vio) ));
return TRUE;
}
else if (!packet_length)
{
send_error(thd,net->last_errno,NullS);
net->error=0;
DBUG_RETURN(FALSE);
}
else
{
packet=(char*) net->read_pos;
@ -1254,6 +1262,8 @@ bool dispatch_command(enum enum_server_command command, THD *thd,
}
if (lower_case_table_names)
my_casedn_str(files_charset_info, db);
if (check_access(thd,DROP_ACL,db,0,1))
break;
if (thd->locked_tables || thd->active_transaction())
{
send_error(thd,ER_LOCK_OR_ACTIVE_TRANSACTION);
@ -1294,10 +1304,8 @@ bool dispatch_command(enum enum_server_command command, THD *thd,
if (check_global_access(thd,RELOAD_ACL))
break;
mysql_log.write(thd,command,NullS);
if (reload_acl_and_cache(thd, options, (TABLE_LIST*) 0))
send_error(thd,0);
else
send_eof(thd);
/* error sending is deferred to reload_acl_and_cache */
reload_acl_and_cache(thd, options, (TABLE_LIST*) 0) ;
break;
}
case COM_SHUTDOWN:
@ -1638,7 +1646,9 @@ mysql_execute_command(THD *thd)
{
res= mysqld_show_warnings(thd, (ulong)
((1L << (uint) MYSQL_ERROR::WARN_LEVEL_NOTE) |
(1L << (uint) MYSQL_ERROR::WARN_LEVEL_WARN)));
(1L << (uint) MYSQL_ERROR::WARN_LEVEL_WARN) |
(1L << (uint) MYSQL_ERROR::WARN_LEVEL_ERROR)
));
break;
}
case SQLCOM_SHOW_ERRORS:
@ -1880,6 +1890,24 @@ mysql_execute_command(THD *thd)
break;
}
case SQLCOM_SLAVE_STOP:
/*
If the client thread has locked tables, a deadlock is possible.
Assume that
- the client thread does LOCK TABLE t READ.
- then the master updates t.
- then the SQL slave thread wants to update t,
so it waits for the client thread because t is locked by it.
- then the client thread does SLAVE STOP.
SLAVE STOP waits for the SQL slave thread to terminate its
update of t, which waits for the client thread because t is locked by it.
To prevent that, refuse SLAVE STOP if the
client thread has locked tables
*/
if (thd->locked_tables || thd->active_transaction())
{
send_error(thd,ER_LOCK_OR_ACTIVE_TRANSACTION);
break;
}
{
LOCK_ACTIVE_MI;
stop_slave(thd,active_mi,1/* net report*/);
@ -2287,12 +2315,17 @@ mysql_execute_command(THD *thd)
}
case SQLCOM_DROP_TABLE:
{
if (check_table_access(thd,DROP_ACL,tables))
goto error; /* purecov: inspected */
if (end_active_trans(thd))
res= -1;
else
res = mysql_rm_table(thd,tables,lex->drop_if_exists);
if (!lex->drop_temporary)
{
if (check_table_access(thd,DROP_ACL,tables))
goto error; /* purecov: inspected */
if (end_active_trans(thd))
{
res= -1;
break;
}
}
res= mysql_rm_table(thd,tables,lex->drop_if_exists, lex->drop_temporary);
}
break;
case SQLCOM_DROP_INDEX:
@ -2673,10 +2706,8 @@ mysql_execute_command(THD *thd)
case SQLCOM_RESET:
if (check_global_access(thd,RELOAD_ACL) || check_db_used(thd, tables))
goto error;
if (reload_acl_and_cache(thd, lex->type, tables))
send_error(thd,0);
else
send_ok(thd);
/* error sending is deferred to reload_acl_and_cache */
reload_acl_and_cache(thd, lex->type, tables) ;
break;
case SQLCOM_KILL:
kill_one_thread(thd,lex->thread_id);
@ -3690,10 +3721,15 @@ void add_join_natural(TABLE_LIST *a,TABLE_LIST *b)
b->natural_join=a;
}
/*
Reload/reset privileges and the different caches
*/
bool reload_acl_and_cache(THD *thd, ulong options, TABLE_LIST *tables)
{
bool result=0;
bool error_already_sent=0;
select_errors=0; /* Write if more errors */
if (options & REFRESH_GRANT)
{
@ -3751,11 +3787,29 @@ bool reload_acl_and_cache(THD *thd, ulong options, TABLE_LIST *tables)
{
LOCK_ACTIVE_MI;
if (reset_slave(thd, active_mi))
{
result=1;
/*
reset_slave() sends the error itself.
If it didn't, one would either change reset_slave()'s prototype to
pass *errorcode and *errmsg to it when it's called, or
change reset_slave() to use my_error() to register the error.
*/
error_already_sent=1;
}
UNLOCK_ACTIVE_MI;
}
if (options & REFRESH_USER_RESOURCES)
reset_mqh(thd,(LEX_USER *) NULL);
if (thd && !error_already_sent)
{
if (result)
send_error(thd,0);
else
send_ok(thd);
}
return result;
}
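
Editorial note (not part of the diff above): with this change reload_acl_and_cache() answers the client itself, exactly once, and a sub-operation such as reset_slave() that has already sent its own error sets a flag so the reply is not duplicated. A small, self-contained model of that deferred-reply pattern follows; none of these names are the real server functions.

// Illustrative model of the deferred-reply pattern above.
#include <cstdio>

static void send_ok_stub(void)    { puts("ok");    }
static void send_error_stub(void) { puts("error"); }

// Returns true on error.  A sub-task that has already replied to the client
// sets *error_already_sent so the caller does not answer twice.
static bool do_refresh(bool subtask_fails, bool *error_already_sent)
{
  bool result = false;
  if (subtask_fails)              // e.g. a reset_slave()-like call that
  {                               // sends its own error message
    result = true;
    *error_already_sent = true;
  }
  if (!*error_already_sent)       // reply exactly once, at the very end
  {
    if (result)
      send_error_stub();
    else
      send_ok_stub();
  }
  return result;
}

int main()
{
  bool sent = false;
  return do_refresh(false, &sent) ? 1 : 0;
}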

View file

@ -696,20 +696,48 @@ int stop_slave(THD* thd, MASTER_INFO* mi, bool net_report )
return 0;
}
/*
Remove all relay logs and start replication from the beginning
SYNOPSIS
reset_slave()
thd Thread handler
mi Master info for the slave
NOTES
We don't send an ok in this function, as it is called from
reload_acl_and_cache(), which may have performed other tasks that
failed and for which we want to send an error.
RETURN
0 ok
1 error
In this case the error is sent to the client with send_error()
*/
int reset_slave(THD *thd, MASTER_INFO* mi)
{
MY_STAT stat_area;
char fname[FN_REFLEN];
int restart_thread_mask = 0,error=0;
int thread_mask= 0, error= 0;
uint sql_errno=0;
const char* errmsg=0;
DBUG_ENTER("reset_slave");
lock_slave_threads(mi);
init_thread_mask(&restart_thread_mask,mi,0 /* not inverse */);
if ((error=terminate_slave_threads(mi,restart_thread_mask,1 /*skip lock*/))
|| (error=purge_relay_logs(&mi->rli, thd,
1 /* just reset */,
&errmsg)))
init_thread_mask(&thread_mask,mi,0 /* not inverse */);
if (thread_mask) // We refuse if any slave thread is running
{
sql_errno= ER_SLAVE_MUST_STOP;
error=1;
goto err;
}
if ((error= purge_relay_logs(&mi->rli, thd,
1 /* just reset */,
&errmsg)))
goto err;
end_master_info(mi);
@ -725,17 +753,15 @@ int reset_slave(THD *thd, MASTER_INFO* mi)
error=1;
goto err;
}
if (restart_thread_mask)
error=start_slave_threads(0 /* mutex not needed */,
1 /* wait for start*/,
mi,master_info_file,relay_log_info_file,
restart_thread_mask);
// TODO: fix error messages so they get to the client
err:
unlock_slave_threads(mi);
if (thd && error)
send_error(thd, sql_errno, errmsg);
DBUG_RETURN(error);
}
void kill_zombie_dump_threads(uint32 slave_server_id)
{
pthread_mutex_lock(&LOCK_thread_count);
@ -767,23 +793,20 @@ void kill_zombie_dump_threads(uint32 slave_server_id)
int change_master(THD* thd, MASTER_INFO* mi)
{
int error=0,restart_thread_mask;
int thread_mask;
const char* errmsg=0;
bool need_relay_log_purge=1;
DBUG_ENTER("change_master");
// kill slave thread
lock_slave_threads(mi);
init_thread_mask(&restart_thread_mask,mi,0 /*not inverse*/);
if (restart_thread_mask &&
(error=terminate_slave_threads(mi,
restart_thread_mask,
1 /*skip lock*/)))
init_thread_mask(&thread_mask,mi,0 /*not inverse*/);
if (thread_mask) // We refuse if any slave thread is running
{
send_error(thd,error);
net_printf(thd,ER_SLAVE_MUST_STOP);
unlock_slave_threads(mi);
DBUG_RETURN(1);
}
thd->proc_info = "changing master";
LEX_MASTER_INFO* lex_mi = &thd->lex.mi;
// TODO: see if this needs a rewrite
@ -852,6 +875,7 @@ int change_master(THD* thd, MASTER_INFO* mi)
&errmsg))
{
net_printf(thd, 0, "Failed purging old relay logs: %s",errmsg);
unlock_slave_threads(mi);
DBUG_RETURN(1);
}
}
@ -882,18 +906,9 @@ int change_master(THD* thd, MASTER_INFO* mi)
pthread_cond_broadcast(&mi->data_cond);
pthread_mutex_unlock(&mi->rli.data_lock);
thd->proc_info = "starting slave";
if (restart_thread_mask)
error=start_slave_threads(0 /* mutex not needed*/,
1 /* wait for start*/,
mi,master_info_file,relay_log_info_file,
restart_thread_mask);
unlock_slave_threads(mi);
thd->proc_info = 0;
if (error)
send_error(thd,error);
else
send_ok(thd);
send_ok(thd);
DBUG_RETURN(0);
}

View file

@ -1507,7 +1507,7 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
select->quick=0;
if (records != HA_POS_ERROR)
{
s->records=s->found_records=records;
s->found_records=records;
s->read_time= (ha_rows) (s->quick ? s->quick->read_time : 0.0);
}
}
@ -4978,7 +4978,10 @@ join_read_const(JOIN_TAB *tab)
empty_record(table);
if (error != HA_ERR_KEY_NOT_FOUND)
{
sql_print_error("read_const: Got error %d when reading table %s",
/* Locking reads can legally return these errors too; do not
print them to the .err log */
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_const: Got error %d when reading table %s",
error, table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5041,7 +5044,8 @@ join_read_always_key(JOIN_TAB *tab)
{
if (error != HA_ERR_KEY_NOT_FOUND)
{
sql_print_error("read_const: Got error %d when reading table %s",error,
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_const: Got error %d when reading table %s",error,
table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5070,7 +5074,8 @@ join_read_last_key(JOIN_TAB *tab)
{
if (error != HA_ERR_KEY_NOT_FOUND)
{
sql_print_error("read_const: Got error %d when reading table %s",error,
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_const: Got error %d when reading table %s",error,
table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5102,7 +5107,8 @@ join_read_next_same(READ_RECORD *info)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_next: Got error %d when reading table %s",error,
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_next: Got error %d when reading table %s",error,
table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5124,7 +5130,8 @@ join_read_prev_same(READ_RECORD *info)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_next: Got error %d when reading table %s",error,
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_next: Got error %d when reading table %s",error,
table->path);
table->file->print_error(error,MYF(0));
error= 1;
@ -5195,7 +5202,8 @@ join_read_first(JOIN_TAB *tab)
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_first_with_key: Got error %d when reading table",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_first_with_key: Got error %d when reading table",
error);
table->file->print_error(error,MYF(0));
return 1;
@ -5214,7 +5222,9 @@ join_read_next(READ_RECORD *info)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_next_with_key: Got error %d when reading table %s",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error(
"read_next_with_key: Got error %d when reading table %s",
error, info->table->path);
info->file->print_error(error,MYF(0));
return 1;
@ -5246,7 +5256,8 @@ join_read_last(JOIN_TAB *tab)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_last_with_key: Got error %d when reading table",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("read_last_with_key: Got error %d when reading table",
error, table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5265,7 +5276,9 @@ join_read_prev(READ_RECORD *info)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("read_prev_with_key: Got error %d when reading table: %s",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error(
"read_prev_with_key: Got error %d when reading table: %s",
error,info->table->path);
info->file->print_error(error,MYF(0));
return 1;
@ -5293,7 +5306,8 @@ join_ft_read_first(JOIN_TAB *tab)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("ft_read_first: Got error %d when reading table %s",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("ft_read_first: Got error %d when reading table %s",
error, table->path);
table->file->print_error(error,MYF(0));
return 1;
@ -5311,7 +5325,8 @@ join_ft_read_next(READ_RECORD *info)
{
if (error != HA_ERR_END_OF_FILE)
{
sql_print_error("ft_read_next: Got error %d when reading table %s",
if (error != HA_ERR_LOCK_DEADLOCK && error != HA_ERR_LOCK_WAIT_TIMEOUT)
sql_print_error("ft_read_next: Got error %d when reading table %s",
error, info->table->path);
info->file->print_error(error,MYF(0));
return 1;

View file

@ -1269,6 +1269,7 @@ void mysqld_list_processes(THD *thd,const char *user, bool verbose)
THD *tmp;
while ((tmp=it++))
{
struct st_my_thread_var *mysys_var;
if ((tmp->net.vio || tmp->system_thread) &&
(!user || (tmp->user && !strcmp(tmp->user,user))))
{
@ -1285,8 +1286,8 @@ void mysqld_list_processes(THD *thd,const char *user, bool verbose)
if ((thd_info->db=tmp->db)) // Safe test
thd_info->db=thd->strdup(thd_info->db);
thd_info->command=(int) tmp->command;
if (tmp->mysys_var)
pthread_mutex_lock(&tmp->mysys_var->mutex);
if ((mysys_var= tmp->mysys_var))
pthread_mutex_lock(&mysys_var->mutex);
thd_info->proc_info= (char*) (tmp->killed ? "Killed" : 0);
thd_info->state_info= (char*) (tmp->locked ? "Locked" :
tmp->net.reading_or_writing ?
@ -1298,8 +1299,8 @@ void mysqld_list_processes(THD *thd,const char *user, bool verbose)
tmp->mysys_var &&
tmp->mysys_var->current_cond ?
"Waiting on cond" : NullS);
if (tmp->mysys_var)
pthread_mutex_unlock(&tmp->mysys_var->mutex);
if (mysys_var)
pthread_mutex_unlock(&mysys_var->mutex);
#if !defined(DONT_USE_THR_ALARM) && ! defined(SCO)
if (pthread_kill(tmp->real_id,0))
@ -1444,6 +1445,9 @@ int mysqld_show(THD *thd, const char *wild, show_var_st *variables,
break;
case SHOW_LONGLONG:
end= longlong10_to_str(*(longlong*) value, buff, 10);
break;
case SHOW_HA_ROWS:
end= longlong10_to_str((longlong) *(ha_rows*) value, buff, 10);
break;
case SHOW_BOOL:
end= strmov(buff, *(bool*) value ? "ON" : "OFF");
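
Editorial note (not part of the diff above): SHOW formats each value according to its SHOW_TYPE, and the hunk above adds a SHOW_HA_ROWS branch so ha_rows-typed variables (the sql_class.h hunk earlier moves select_limit and max_join_size to ha_rows) print correctly. A simplified, stand-alone sketch of that dispatch; the enum and helpers here are reduced stand-ins, not the server's longlong10_to_str()/strmov() code.

// Simplified stand-ins for the SHOW_TYPE dispatch above.
#include <cstdio>
#include <cstring>

typedef unsigned long long ha_rows_t;
enum ShowTypeLite { SHOW_LONGLONG_L, SHOW_HA_ROWS_L, SHOW_BOOL_L };

static const char *format_show_value(ShowTypeLite type, const void *value,
                                     char *buff)
{
  switch (type)
  {
  case SHOW_LONGLONG_L:
    sprintf(buff, "%lld", *(const long long *) value);
    break;
  case SHOW_HA_ROWS_L:                     // new case: row-count variables
    sprintf(buff, "%llu", (unsigned long long) *(const ha_rows_t *) value);
    break;
  case SHOW_BOOL_L:
    strcpy(buff, *(const bool *) value ? "ON" : "OFF");
    break;
  }
  return buff;
}

int main()
{
  char buff[32];
  ha_rows_t max_join_size = 4294967295ULL;
  printf("%s\n", format_show_value(SHOW_HA_ROWS_L, &max_join_size, buff));
  return 0;
}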

View file

@ -46,7 +46,8 @@ static int copy_data_between_tables(TABLE *from,TABLE *to,
** This will wait for all users to free the table before dropping it
*****************************************************************************/
int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists,
my_bool drop_temporary)
{
int error;
DBUG_ENTER("mysql_rm_table");
@ -57,7 +58,7 @@ int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
thd->mysys_var->current_cond= &COND_refresh;
VOID(pthread_mutex_lock(&LOCK_open));
if (global_read_lock)
if (!drop_temporary && global_read_lock)
{
if (thd->global_read_lock)
{
@ -72,7 +73,7 @@ int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
}
}
error=mysql_rm_table_part2(thd,tables,if_exists,0);
error=mysql_rm_table_part2(thd,tables, if_exists, drop_temporary, 0);
err:
pthread_mutex_unlock(&LOCK_open);
@ -91,14 +92,15 @@ int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
int mysql_rm_table_part2_with_lock(THD *thd,
TABLE_LIST *tables, bool if_exists,
bool dont_log_query)
bool drop_temporary, bool dont_log_query)
{
int error;
thd->mysys_var->current_mutex= &LOCK_open;
thd->mysys_var->current_cond= &COND_refresh;
VOID(pthread_mutex_lock(&LOCK_open));
error=mysql_rm_table_part2(thd,tables, if_exists, dont_log_query);
error=mysql_rm_table_part2(thd,tables, if_exists, drop_temporary,
dont_log_query);
pthread_mutex_unlock(&LOCK_open);
VOID(pthread_cond_broadcast(&COND_refresh)); // Signal to refresh
@ -111,6 +113,17 @@ int mysql_rm_table_part2_with_lock(THD *thd,
}
/*
Execute the drop of a normal or temporary table
SYNOPSIS
mysql_rm_table_part2()
thd Thread handler
tables Tables to drop
if_exists If set, don't give an error if the table doesn't exist.
In this case we give a warning of level 'NOTE'
drop_temporary Only drop temporary tables
dont_log_query Don't log the query
TODO:
When logging to the binary log, we should log
tmp_tables and transactional tables as separate statements if we
@ -120,10 +133,15 @@ int mysql_rm_table_part2_with_lock(THD *thd,
The current code only writes DROP statements that use nothing but temporary
tables to the binary log cache. This should be ok in most cases, but
not all.
RETURN
0 ok
1 Error
-1 Thread was killed
*/
int mysql_rm_table_part2(THD *thd, TABLE_LIST *tables, bool if_exists,
bool dont_log_query)
bool drop_temporary, bool dont_log_query)
{
TABLE_LIST *table;
char path[FN_REFLEN];
@ -142,26 +160,28 @@ int mysql_rm_table_part2(THD *thd, TABLE_LIST *tables, bool if_exists,
continue; // removed temporary table
}
abort_locked_tables(thd,db,table->real_name);
while (remove_table_from_cache(thd,db,table->real_name) && !thd->killed)
{
dropping_tables++;
(void) pthread_cond_wait(&COND_refresh,&LOCK_open);
dropping_tables--;
}
drop_locked_tables(thd,db,table->real_name);
if (thd->killed)
DBUG_RETURN(-1);
/* remove form file and isam files */
strxmov(path, mysql_data_home, "/", db, "/", table->real_name, reg_ext,
NullS);
(void) unpack_filename(path,path);
error=0;
if (!drop_temporary)
{
abort_locked_tables(thd,db,table->real_name);
while (remove_table_from_cache(thd,db,table->real_name) && !thd->killed)
{
dropping_tables++;
(void) pthread_cond_wait(&COND_refresh,&LOCK_open);
dropping_tables--;
}
drop_locked_tables(thd,db,table->real_name);
if (thd->killed)
DBUG_RETURN(-1);
table_type=get_table_type(path);
/* remove form file and isam files */
strxmov(path, mysql_data_home, "/", db, "/", table->real_name, reg_ext,
NullS);
(void) unpack_filename(path,path);
if (access(path,F_OK))
table_type=get_table_type(path);
}
if (drop_temporary || access(path,F_OK))
{
if (if_exists)
store_warning(thd, ER_BAD_TABLE_ERROR, table->real_name);

View file

@ -416,8 +416,6 @@ int mysql_multi_update(THD *thd,
(ORDER *)NULL,
options | SELECT_NO_JOIN_CACHE,
result, unit, select_lex, 0);
end:
delete result;
DBUG_RETURN(res);
}
@ -631,7 +629,6 @@ bool multi_update::send_data(List<Item> &not_used_values)
TABLE_LIST *cur_table;
DBUG_ENTER("multi_update::send_data");
found++;
for (cur_table= update_tables; cur_table ; cur_table= cur_table->next)
{
TABLE *table= cur_table->table;
@ -647,6 +644,7 @@ bool multi_update::send_data(List<Item> &not_used_values)
store_record(table,1);
if (fill_record(*fields_for_table[offset], *values_for_table[offset]))
DBUG_RETURN(1);
found++;
if (compare_record(table, thd->query_id))
{
int error;
@ -673,7 +671,7 @@ bool multi_update::send_data(List<Item> &not_used_values)
int error;
TABLE *tmp_table= tmp_tables[offset];
fill_record(tmp_table->field+1, *values_for_table[offset]);
found++;
/* Store pointer to row */
memcpy((char*) tmp_table->field[0]->ptr,
(char*) table->file->ref, table->file->ref_length);
@ -772,7 +770,6 @@ int multi_update::do_updates(bool from_send_error)
continue; // May happen on dup key
goto err;
}
found++;
if ((local_error= table->file->rnd_pos(table->record[0], ref_pos)))
goto err;
table->status|= STATUS_UPDATED;

View file

@ -130,7 +130,7 @@ enum SHOW_TYPE
SHOW_UNDEF,
SHOW_LONG, SHOW_LONGLONG, SHOW_INT, SHOW_CHAR, SHOW_CHAR_PTR, SHOW_BOOL,
SHOW_MY_BOOL, SHOW_OPENTABLES, SHOW_STARTTIME, SHOW_QUESTION,
SHOW_LONG_CONST, SHOW_INT_CONST, SHOW_HAVE, SHOW_SYS,
SHOW_LONG_CONST, SHOW_INT_CONST, SHOW_HAVE, SHOW_SYS, SHOW_HA_ROWS,
#ifdef HAVE_OPENSSL
SHOW_SSL_CTX_SESS_ACCEPT, SHOW_SSL_CTX_SESS_ACCEPT_GOOD,
SHOW_SSL_GET_VERSION, SHOW_SSL_CTX_GET_SESSION_CACHE_MODE,

View file

@ -174,9 +174,16 @@ case "$mode" in
fi
;;
'restart')
# Stop the service and, regardless of whether it was
# running or not, start it again.
$0 stop
$0 start
;;
*)
# usage
echo "usage: $0 start|stop"
echo "Usage: $0 start|stop|restart"
exit 1
;;
esac

View file

@ -71,7 +71,7 @@ Este pacote cont
%package bench
Release: %{release}
Requires: %{name}-client MySQL-DBI-perl-bin perl
Requires: %{name}-client perl-DBI perl
Summary: MySQL - Benchmarks and test system
Group: Applications/Databases
Summary(pt_BR): MySQL - Medições de desempenho
@ -269,7 +269,7 @@ RBR=$RPM_BUILD_ROOT
MBD=$RPM_BUILD_DIR/mysql-%{mysql_version}
# Ensure that needed directories exist
install -d $RBR/etc/{logrotate.d,rc.d/init.d}
install -d $RBR/etc/{logrotate.d,init.d}
install -d $RBR/var/lib/mysql/mysql
install -d $RBR/usr/share/{sql-bench,mysql-test}
install -d $RBR%{_mandir}
@ -290,14 +290,20 @@ install -m644 $MBD/sql/mysqld.sym $RBR/usr/lib/mysql/mysqld.sym
# Install logrotate and autostart
install -m644 $MBD/support-files/mysql-log-rotate $RBR/etc/logrotate.d/mysql
install -m755 $MBD/support-files/mysql.server $RBR/etc/rc.d/init.d/mysql
install -m755 $MBD/support-files/mysql.server $RBR/etc/init.d/mysql
# Create symbolic compatibility link safe_mysqld -> mysqld_safe
# (safe_mysqld will be gone in MySQL 4.1)
ln -sf ./mysqld_safe $RBR/usr/bin/safe_mysqld
%pre
if test -x /etc/rc.d/init.d/mysql
# Shut down a previously installed server first
if test -x /etc/init.d/mysql
then
/etc/init.d/mysql stop > /dev/null 2>&1
echo "Giving mysqld a couple of seconds to exit nicely"
sleep 5
elif test -x /etc/rc.d/init.d/mysql
then
/etc/rc.d/init.d/mysql stop > /dev/null 2>&1
echo "Giving mysqld a couple of seconds to exit nicely"
@ -313,7 +319,15 @@ if test ! -d $mysql_datadir/mysql; then mkdir $mysql_datadir/mysql; fi
if test ! -d $mysql_datadir/test; then mkdir $mysql_datadir/test; fi
# Make MySQL start and shut down automatically when the machine does.
/sbin/chkconfig --add mysql
# use insserv for older SuSE Linux versions
if test -x /sbin/insserv
then
/sbin/insserv /etc/init.d/mysql
# use chkconfig on Red Hat and newer SuSE releases
elif test -x /sbin/chkconfig
then
/sbin/chkconfig --add mysql
fi
# Create a MySQL user. Do not report any problems if it already
# exists. This is Red Hat specific and should be handled more portably
@ -334,31 +348,37 @@ chown -R mysql $mysql_datadir
chmod -R og-rw $mysql_datadir/mysql
# Restart in the same way that mysqld will be started normally.
/etc/rc.d/init.d/mysql start
/etc/init.d/mysql start
# Allow safe_mysqld to start mysqld and print a message before we exit
sleep 2
%post Max
# Restart mysqld, to use the new binary.
# There may be a better way to handle this.
/etc/rc.d/init.d/mysql stop > /dev/null 2>&1
echo "Giving mysqld a couple of seconds to restart"
sleep 5
/etc/rc.d/init.d/mysql start
sleep 2
echo "Restarting mysqld."
/etc/init.d/mysql restart > /dev/null 2>&1
%preun
if test $1 = 0
then
if test -x /etc/rc.d/init.d/mysql
# Stop MySQL before uninstalling it
if test -x /etc/init.d/mysql
then
/etc/rc.d/init.d/mysql stop > /dev/null
/etc/init.d/mysql stop > /dev/null
fi
# Remove autostart of mysql
/sbin/chkconfig --del mysql
# for older SuSE Linux versions
if test -x /sbin/insserv
then
/sbin/insserv -r /etc/init.d/mysql
# use chkconfig on Red Hat and newer SuSE releases
elif test -x /sbin/chkconfig
then
/sbin/chkconfig --del mysql
fi
fi
# We do not remove the mysql user since it may still own a lot of
# database files.
@ -412,7 +432,7 @@ fi
%attr(644, root, root) /usr/lib/mysql/mysqld.sym
%attr(644, root, root) /etc/logrotate.d/mysql
%attr(755, root, root) /etc/rc.d/init.d/mysql
%attr(755, root, root) /etc/init.d/mysql
%attr(755, root, root) /usr/share/mysql/
@ -482,6 +502,17 @@ fi
%changelog
* Wed Nov 27 2002 Lenz Grimmer <lenz@mysql.com>
- moved init script from /etc/rc.d/init.d to /etc/init.d (the majority of
Linux distributions now support this scheme as proposed by the LSB either
directly or via a compatibility symlink)
- Use new "restart" init script action instead of starting and stopping
separately
- Be more flexible in activating the automatic bootup - use insserv (on
older SuSE versions) or chkconfig (Red Hat, newer SuSE versions and
others) to create the respective symlinks
* Wed Sep 25 2002 Lenz Grimmer <lenz@mysql.com>
- MySQL-Max now requires MySQL >= 4.0 to avoid version mismatches