sync using replicate-wild-ignore-table
Problem: changes to character set variables
before an action on a replication-ignored table
made the slave forget the new variable values.
Fix: initialize one_shot variables only when
4.1 -> 5.x replication is running.
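A rough sketch of the failure scenario, assuming a 4.1 master replicating to a 5.x slave; the table names and the character set are made up:

  -- on the master; replicate-wild-ignore-table on the slave matches test.ignored%
  SET NAMES latin2;                        -- charset change travels to the slave via one_shot variables
  INSERT INTO test.ignored_t VALUES (1);   -- ignored on the slave; before the fix this made the slave
                                           -- forget the new character set values
  INSERT INTO test.kept_t VALUES ('x');    -- could then be applied with stale character set variables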
Problem: SHOW CREATE TABLE printed garbage in the table
name for tables having TURKISH I
(i.e. LATIN CAPITAL LETTER I WITH DOT ABOVE)
when lower_case_table_names=1.
Reason: in some cases during lower/upper conversion in utf8,
the result string can be shorter than the original string
(including the above letter). The old implementation of caseup_str()
and casedn_str() didn't handle the result length properly,
assuming that the length cannot change.
This fix changes the result type of cs->cset->casedn_str()
and cs->cset->caseup_str() from void to uint, to return
the result length, as well as to put the '\0' terminator in the
proper place.
Also, my_caseup_str_utf8() and my_casedn_str_utf8() were
rewritten not to use strlen(), for performance reasons.
This was done with the help of two new functions - my_utf8_uni_no_range()
and my_uni_utf8_no_range() - for null-terminated strings.
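A minimal illustration of the visible symptom, assuming a utf8 connection and lower_case_table_names=1:

  SET NAMES utf8;
  CREATE TABLE `İ` (a INT);   -- the table name is LATIN CAPITAL LETTER I WITH DOT ABOVE
  SHOW CREATE TABLE `İ`;      -- before the fix the table name could be printed as garbage,
                              -- because lowercasing shortens it and the length was not adjusted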
an updatable view.
When there's a VIEW on a base table that has an AUTO_INCREMENT column, and
this VIEW doesn't provide access to that column, then after an INSERT into such
a VIEW, LAST_INSERT_ID() did not return the value just generated.
This behaviour is intended and correct, because if the VIEW doesn't list
some columns then these columns are effectively hidden from the user,
and so are any side effects of inserting default values into them.
However, there was a bug: such a statement inserting into a view would
reset LAST_INSERT_ID() instead of leaving it unchanged.
This patch restores the original value of LAST_INSERT_ID() instead of
resetting it to zero.
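A sketch of the intended behaviour; the table and view names are made up:

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
  CREATE VIEW v1 AS SELECT v FROM t1;   -- the AUTO_INCREMENT column is hidden by the view
  INSERT INTO t1 (v) VALUES (1);
  SELECT LAST_INSERT_ID();              -- returns the id generated by the insert into t1
  INSERT INTO v1 (v) VALUES (2);
  SELECT LAST_INSERT_ID();              -- must still return the same value, not 0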
When executing the init_connect statement, thd->net.vio is set to 0 to
forbid sending any results to the client. As a side effect, possible
errors were not logged either.
Now we write warnings to the error log if an init_connect query
fails.
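For example (the init_connect value here is only a hypothetical failing statement):

  SET GLOBAL init_connect = 'INSERT INTO no_such_table VALUES (1)';
  -- when a client without the SUPER privilege connects, init_connect fails and the
  -- connection is closed; with this patch a warning about the failure is written
  -- to the error log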
When the statement to be prepared contained a CREATE PROCEDURE, CREATE FUNCTION
or CREATE TRIGGER statement with a syntax error in it, the preparation
would fail with a syntax error message, but the memory could be corrupted.
The problem occurred because we switch the memory root when parsing stored
routine or trigger definitions, and on a parse error we restored the
original memroot only after performing some memory operations. In more
detail:
- the prepared statement would activate its own memory root to parse
  the definition of the stored procedure.
- the SP would replace this memory root with its own memory root to
  parse the SP statements
- a syntax error would happen
- the prepared statement would restore the original memory root
- the stored procedure would restore what it thought was the original
  memory root, but actually was the statement memory root.
That led to a double free - in the destruction of the statement and in
the next call to mysql_parse().
The solution is to restore the memroot right after the failed parsing.
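A sketch of the kind of statement that triggered the problem; the routine body is deliberately malformed:

  PREPARE stmt FROM 'CREATE PROCEDURE p1() BEGIN SELECT FROM; END';
  -- preparation fails with a syntax error; before the fix the memory root switching
  -- described above could later lead to a double free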
Do not consider SHOW commands slow queries just because they don't use proper indexes.
This bug fix is not needed in 5.1, and the code changes will be null-merged. However, the test cases will be propagated up to 5.1.
Note: bug#21726 does not directly apply to 4.1, as it doesn't have stored
procedures. However, 4.1 had some bugs that were fixed in 5.0 by the
patch for bug#21726, and this patch is a backport of those fixes.
Namely, in 4.1 it fixes:
- LAST_INSERT_ID(expr) didn't return value of expr (4.1 specific).
- LAST_INSERT_ID() could return the value generated by current
statement if the call happens after the generation, like in
CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY, j INT);
INSERT INTO t1 VALUES (NULL, 0), (NULL, LAST_INSERT_ID());
- Redundant binary log LAST_INSERT_ID_EVENTs could be generated.
Set a flag when a SHOW command is parsed, and check it in log_slow_statement(). SHOW commands are not counted as slow queries, even if they use table scans.
(race condition)
It was possible for one thread to interrupt a Data Definition Language
statement and thereby get messages to the binlog out of order. Consider:
Connection 1: Drop Foo x
Connection 2: Create or replace Foo x
Connection 2: Log "Create or replace Foo x"
Connection 1: Log "Drop Foo x"
Local end would have Foo x, but the replicated slaves would not.
The fix for this is to wrap each such DDL operation and its logging in the same mutex.
Since we already use mutexes for the various parts of altering the server,
this only entails moving the logging events down close to the action, inside
the mutex protection.
invocations of LAST_INSERT_ID.
Reading of LAST_INSERT_ID inside a stored function wasn't noticed by the caller,
and no LAST_INSERT_ID_EVENT was issued for the binary log.
The solution is to add THD::last_insert_id_used_bin_log, which is much
like THD::last_insert_id_used, but is reset only for upper-level
statements. This new variable is used to issue LAST_INSERT_ID_EVENT.
Non-upper-level INSERTs (the ones in the body of a stored procedure,
stored function, or trigger) into a table that has an AUTO_INCREMENT
column didn't affect the result of LAST_INSERT_ID() at this level.
The problem was introduced with the fix of bug 6880, which in turn was
introduced with the fix of bug 3117, where current insert_id value was
remembered on the first call to LAST_INSERT_ID() (bug 3117) and was
returned from that function until it was reset before the next
_upper-level_ statement (bug 6880).
The fix for bug#21726 brings back the behaviour of version 4.0, and
implements the following: remember the insert_id value at the beginning
of the statement or expression (which at that point equals
the first insert_id value generated by the previous statement), and
return that remembered value from LAST_INSERT_ID() or @@LAST_INSERT_ID.
Thus, the value returned by LAST_INSERT_ID() is affected neither by values
generated by the current statement, nor by LAST_INSERT_ID(expr) calls in
this statement.
Version 5.1 does not have this bug (it was fixed by WL 3146).
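A minimal sketch of the restored semantics; names are made up and DELIMITER handling in the mysql client is omitted:

  CREATE TABLE t1 (i INT AUTO_INCREMENT PRIMARY KEY);
  CREATE TABLE t2 (i INT AUTO_INCREMENT PRIMARY KEY);
  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    INSERT INTO t2 VALUES (NULL);   -- generates an id inside the function
    RETURN 0;
  END;
  INSERT INTO t1 VALUES (NULL);     -- upper-level statement, generates id 1
  SELECT f1(), LAST_INSERT_ID();    -- LAST_INSERT_ID() returns the value remembered at the
                                    -- start of the statement (1); the id generated inside
                                    -- f1() does not affect it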
The cause of the bug was an incomplete fix for bug 18080.
The problem was that setup_tables() unconditionally reset the
name resolution context to its 'tables' argument, which pointed
to the first table of an SQL statement.
The fix limits resetting of the name resolution context in
setup_tables() to the cases when the context was not set
by earlier parser/optimizer phases.
1003: Incorrect table name
In a multi-table DELETE the set of tables to delete from actually
references the tables in the other list, e.g.:
DELETE alias_of_t1 FROM t1 alias_of_t1 WHERE ....
is a valid statement.
So we must turn off the syntactic validity check of the table name for alias_of_t1,
because it's not a table name (even if it looks like one).
In order to do that we add a special flag (TL_OPTION_ALIAS) to
disable the name checking for the aliases in multi-table DELETE.
erroneous check
Problem: there were actually two problems in the server code. The check
for SQLCOM_FLUSH in stored functions/triggers was not in line with the existing
architecture, which uses sp_get_flags_for_command() from sp_head.cc.
That function was also missing a check for SQLCOM_FLUSH, which is
problematic in combination with prelocking. This changeset fixes both of these
deficiencies as well as the erroneous check in
sp_head::is_not_allowed_in_function(), which was a copy&paste error.
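A sketch of the construct affected by the missing flag; names are made up and DELIMITER handling is omitted:

  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    FLUSH TABLES;   -- the check that rejects FLUSH inside stored functions and triggers
    RETURN 1;       -- now goes through sp_get_flags_for_command(), so it is applied consistently
  END;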
User names and host names have limits on their length. The server code relies on
these limits when storing the names. The problem was that sometimes these limits
were not checked properly, which could lead to a buffer overflow.
The fix is to check the length of the user/host name in the parser and, if the
string is too long, throw an error.
Changed the automake build process:
- ./configure.in
- ./sql/Makefile.am
to compile an instrumented parser for debug=yes or debug=full builds.
Changed the (primary) runtime invocation of the parser:
- sql/sql_parse.cc
to generate bison traces on stderr when the DBUG "parser_debug" flag is set.
"real" table fails in JOINs".
This is a regression caused by the fix for Bug 18444.
This fix removed the assignment of empty_c_string to table->db performed
in add_table_to_list, as neither I nor anyone else knew what it was
there for. Now we know, and it's covered by tests: the only case
when a table database name can be empty is when the table is a derived
table. The fix puts the assignment back but makes it a bit more explicit.
Additionally, finally drop sp.result.orig which was checked in by mistake.
context.
Routine arguments were evaluated in the security context of the routine
itself, not in the caller's context.
The bug is fixed in the following way:
- Item_func_sp::find_and_check_access() has been split into two
  functions: Item_func_sp::find_and_check_access() itself only
  finds the function and checks that the caller has the EXECUTE privilege
  on it. The new function set_routine_security_ctx() changes the security
  context for SUID routines and checks that the definer has the EXECUTE
  privilege too.
- the new function sp_head::execute_trigger() is called from
  Table_triggers_list::process_triggers() instead of
  sp_head::execute_function(), and is effectively what
  sp_head::execute_function() is, with all non-trigger-related code
  removed and a trigger-specific security context switch added.
- the call to Item_func_sp::find_and_check_access() stays outside
  of sp_head::execute_function(), and there is code in
  sql_parse.cc before the call to sp_head::execute_procedure() that
  checks that the caller has the EXECUTE privilege, but both
  sp_head::execute_function() and sp_head::execute_procedure() call
  set_routine_security_ctx() after evaluating their parameters,
  and restore the context after the body is executed.
run at startup"
The server returned an error when trying to execute an init-file containing a
stored procedure that could return multiple result sets to the client.
A stored procedure can return multiple result sets if it contains
PREPARE, SELECT, SHOW and similar statements.
The fix is to set client_capabilities|=CLIENT_MULTI_RESULTS in
sql_parse.cc:handle_bootstrap(). There is no "client" really, so
nothing is ever sent. This makes init-file feature behave consistently:
the prepared statements that can be called directly in the init-file
can be used in a stored procedure too.
Re-committed the patch originally submitted by Per-Erik after review.
NDB table".
The SQL layer was not marking fields which were used in triggers as such. As
a result these fields were not always properly retrieved/stored by the handler
layer, so one might get wrong values or lose changes made in triggers for NDB,
Federated and possibly InnoDB tables.
This fix solves the problem by marking fields used in triggers
appropriately.
Also this patch contains the following cleanup of the ha_ndbcluster code:
We no longer rely on reading the LEX::sql_command value in the handler in order
to determine whether we can enable the optimization which allows us to handle REPLACE
statements in a more efficient way by doing replaces directly in the write_row()
method without reporting an error to the SQL layer.
Instead we rely on the SQL layer informing us whether this optimization is
applicable by calling the handler::extra() method with the
HA_EXTRA_WRITE_CAN_REPLACE flag.
As a result we no longer apply this optimization in cases where it should not
be used (e.g. if we have ON DELETE triggers on the table) and use it in some
additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
Finally this patch includes fix for bug#20728 "REPLACE does not work
correctly for NDB table with PK and unique index".
This was yet another problem which was caused by improper field mark-up.
During row replacement fields which weren't explicity used in REPLACE
statement were not marked as fields to be saved (updated) so they have
retained values from old row version. The fix is to mark all table
fields as set for REPLACE statement. Note that in 5.1 we already solve
this problem by notifying handler that it should save values from all
fields only in case when real replacement happens.
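A sketch of the bug#20728 scenario; the table layout is illustrative only:

  CREATE TABLE t1 (pk INT PRIMARY KEY, u INT UNIQUE, v INT DEFAULT 0) ENGINE=NDBCLUSTER;
  INSERT INTO t1 VALUES (1, 10, 99);
  REPLACE INTO t1 (pk, u) VALUES (1, 10);
  -- v is not listed in the REPLACE; before the fix it could silently keep the stale
  -- value 99 instead of being set as for a freshly inserted row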
Produce a warning if DATA/INDEX DIRECTORY is specified in an
ALTER TABLE statement.
Ignoring of these options is documented in the symbolic links
section of the manual.
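For example (the path is arbitrary):

  ALTER TABLE t1 DATA DIRECTORY = '/some/other/location';
  SHOW WARNINGS;   -- now reports that the DATA DIRECTORY option is ignored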
Bug#19022 "Memory bug when switching db during trigger execution"
Bug#17199 "Problem when view calls function from another database."
Bug#18444 "Fully qualified stored function names don't work correctly in
SELECT statements"
Documentation note: this patch introduces a change in behaviour of prepared
statements.
This patch adds a few new invariants with regard to how THD::db should
be used. These invariants should be preserved in future:
- one should never refer to THD::db by pointer and always make a deep copy
(strmake, strdup)
- one should never compare two databases by pointer, but use strncmp or
my_strncasecmp
- the TABLE_LIST member table->db should always be initialized in the parser or
  by the creator of the object.
For prepared statements this means that if the current database is changed
after a statement is prepared, the database that was current at prepare time
remains in effect. This also means that you cannot prepare a statement that
implicitly refers to the current database if the latter is not set.
This is not documented and therefore needs documentation. This is NOT a
change in behavior for almost all SQL statements; the exceptions are:
- ALTER TABLE t1 RENAME t2
- OPTIMIZE TABLE t1
- ANALYZE TABLE t1
- TRUNCATE TABLE t1 --
until this patch, t1 or t2 could be evaluated at the first execution of the
prepared statement.
CURRENT_DATABASE() still works OK and is evaluated at every execution
of the prepared statement.
Note that in stored routines this is not an issue, as the default
database is the database of the stored procedure, and the USE statement
is prohibited in stored routines.
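A sketch of the documented change; database and table names are made up:

  USE db1;
  PREPARE stmt FROM 'OPTIMIZE TABLE t1';
  USE db2;
  EXECUTE stmt;   -- with this patch the statement still refers to db1.t1, the database
                  -- that was current when the statement was prepared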
This patch makes obsolete the use of check_db_used (it was never really needed in
the old code either) and all other places that check table->db and assign it
from THD::db if it's NULL, except the parser.
How this patch was created: THD::{db,db_length} were replaced with a
LEX_STRING, THD::db. All the places that refer to THD::{db,db_length} were
manually checked and:
- if the place uses thd->db by pointer, it was fixed to make a deep copy
- if a place compared two db pointers, it was fixed to compare them by value
(via strcmp/my_strcasecmp, whatever was appropriate)
Then this intermediate patch was used to write a smaller patch that does the
same thing but without a rename.
TODO in 5.1:
- remove check_db_used
- deploy THD::set_db in mysql_change_db
See also comments to individual files.
Addendum fixes after changing the condition variable
for the global read lock.
The stress test suite revealed some deadlocks. Some were
related to the new condition variable (COND_global_read_lock)
and some were general problems with the global read lock.
It is now necessary to signal COND_global_read_lock whenever
COND_refresh is signalled.
We need to wait for the release of a global read lock if one
is set before every operation that requires a write lock.
But we must not wait if we have locked tables by LOCK TABLES.
After setting a global read lock a thread waits until all
write locks are released.
schemas
The function check_one_table_access(), called to check access to tables in
SELECT/INSERT/UPDATE, was doing additional checks/modifications that don't hold
in the context of setup_tables_and_check_access().
That's why check_one_table_access() was split in two: the functionality needed by
setup_tables_and_check_access() went into check_single_table_access(), and the rest of
the functionality stays in check_one_table_access(), which is made to call the new
check_single_table_access() function.
function crashes server".
Attempts to execute a prepared multi-delete statement which involved a trigger or
stored function caused server crashes (the same happened for such statements
included in stored procedures when one tried to execute them more
than once).
The problem was caused by yet another incorrect usage of the check_table_access()
routine (the latter assumes that the table list which it gets as an argument
corresponds to the LEX::query_tables_own_last value). We solve this problem by
juggling with the LEX::query_tables_own_last value when we call
check_table_access() for LEX::auxilliary_table_list (a better solution is too
intrusive and should be done in 5.1).
There were two problems with character sets in the embedded server:
1. mysys/charset.c - the default_charset_info variable defined there is
   modified by both server and client code (particularly when the
   --default-charset option is handled).
   In the embedded server we get two code paths modifying one variable.
   I created a separate default_client_charset_info for the client code.
2. mysql->charset and mysql->options.charset initialization isn't
   done properly for the embedded server - the necessary calls were added.
There was an incomplete reset of the name resolution context, which caused
INSERT ... SELECT ... JOIN statements to resolve columns not against the joint
row type calculated for the join.
Removed the redundant re-initialization of the context, because
mysql_insert_select_prepare() now correctly saves/restores the context.
There was a wrong determination of the DB name (which is
not always the one in TABLE_LIST, because derived tables
may be calculated using temp tables that have their db name
set to "").
The fix determines the database name according to the type
of table reference, and calls the function check_access()
with the correct db name so the correct set of grants is found.
There were actually 3 different problems:
- hash_user_connections wasn't cleaned up
- one strdup'ed database name wasn't freed
- stmt->mem_root wasn't cleaned up, as it was
  replaced with mysql->field_alloc for the result
For the last one I made the library use the stmt's
fields to store the result when that is the case.
In a multi-table DELETE, a table being deleted from can't be used for selecting in
subselects. The appropriate error was raised, but it wasn't checked, which led to a
crash at the execution phase.
mysql_execute_command() now checks for errors before executing the select
for a multi-table delete.
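For example (names are illustrative):

  DELETE t1 FROM t1, t2
    WHERE t1.a = t2.a AND t1.a IN (SELECT a FROM t1);
  -- t1 is both a delete target and a table selected from in the subselect; the error
  -- is now detected before execution instead of crashing the server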
The check for view security was lacking in several points:
1. Checking with the right set of permissions: for each table reference that
   participates in a view, the right credentials to use were in its
   security_ctx member, but these weren't used when checking the credentials.
   This made it hard to enforce the SQL SECURITY DEFINER|INVOKER property
   consistently.
2. Because of the above, the security checking for views was simply bypassed
   explicitly in several places.
3. Security was checked only for the columns of the tables that are
   brought into the query from a view. So if there was no column reference
   outside of the view definition, the lack of access to the tables in the
   view was not detected in SQL SECURITY INVOKER mode.
The fix below addresses the above 3 points.
which explicitly or implicitly uses stored function gives 'Table not locked'
error"
The test case for these bugs crashed in --ps-protocol mode. The crash was caused
by incorrect usage of the check_grant() routine from the create_table_precheck()
routine. The former assumes that either the number of tables to be inspected by
it is limited explicitly (i.e. is not UINT_MAX) or that the table list used and
the thd->lex->query_tables_own_last value correspond to each other.
create_table_precheck() was not fulfilling this condition and the crash happened.
The fix simply sets the number of tables to be inspected by check_grant() to 1.
after merge.
Concurrent read and update of privilege structures (like a simultaneous
run of SHOW GRANTS and ADD USER) could result in a server crash.
Ensure that proper locking of ACL structures is done.
No test case is provided because this bug can't be reproduced
deterministically.
There were two distinct bugs: a parse error was returned for a valid
statement, and that error wasn't reported to the client.
The fix ensures that EXPLAIN SELECT..INTO is accepted by the parser and that any
other parse error is reported to the client.
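For example:

  EXPLAIN SELECT 1 INTO @var;   -- now accepted by the parser instead of failing with a parse error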
Bug#17667: An attacker has the opportunity to bypass query logging.
This adds a new, local-only printf format specifier to our *printf functions
that allows us to print known-size buffers that must not be interpreted as
NUL-terminated "strings."
It uses this format-specifier to print to the log, thus fixing this
problem.
The bug caused wrong result sets for union constructs of the form
(SELECT ... ORDER BY order_list1 [LIMIT n]) ORDER BY order_list2.
For such queries the order lists were concatenated and the LIMIT clause was
completely ignored.
After FLUSH STATUS, max_used_connections was reset to 0 and was not
updated while cached threads were reused, until the moment a new
thread was created.
The first suggested fix from the original bug report was implemented:
a) On flushing the status, set max_used_connections to
threads_connected, not to 0.
b) Check if it is necessary to increment max_used_connections when
taking a thread from the cache as well as when creating new threads.
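For example, with several clients connected:

  FLUSH STATUS;
  SHOW STATUS LIKE 'Max_used_connections';   -- now shows the current Threads_connected value
                                             -- instead of 0, and is bumped again when a cached
                                             -- thread is reused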
counter".
When TRUNCATE TABLE was called within a stored procedure, the
auto_increment counter was not reset to 0, even though a straight
TRUNCATE on this table would do this.
This fix makes TRUNCATE in stored procedures be handled exactly
the same way as a straight TRUNCATE. We achieve this by rolling
back the fix for bug 8850, which is no longer needed since stored
procedures don't require prelocked mode anymore (and TRUNCATE is
not allowed in stored functions or triggers).
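A sketch of the now-consistent behaviour; names are made up:

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY);
  CREATE PROCEDURE p1() TRUNCATE TABLE t1;
  INSERT INTO t1 VALUES (NULL), (NULL);
  CALL p1();                      -- behaves exactly like a plain TRUNCATE TABLE t1
  INSERT INTO t1 VALUES (NULL);
  SELECT id FROM t1;              -- the AUTO_INCREMENT counter was reset, so id is 1 again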
The idea is to add a DEFINER clause to CREATE PROCEDURE and CREATE FUNCTION
statements. Almost all support for definers in stored routines had already been
done before this patch.
NOTE: this patch changes the behaviour of dumping stored routines in mysqldump.
Before this patch, mysqldump did not dump the DEFINER clause for stored routines,
and this was documented behaviour. In order to get full information about stored
routines, one had to dump the mysql.proc table. This patch changes this
behaviour, so that the DEFINER clause is dumped.
Since the DEFINER clause is not supported in CREATE PROCEDURE | FUNCTION statements
before this patch, the clause is covered by additional version-specific comments.
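For example (user, host and routine names are hypothetical):

  CREATE DEFINER = 'admin'@'localhost' PROCEDURE p1()
    SELECT 1;
  -- in mysqldump output the DEFINER clause is wrapped in a version-specific comment,
  -- so that servers without this patch simply ignore it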
if --skip-grant-tables is specified.
The problem is that there is a check that prevents creating a definer
with an empty host name.
In --skip-grant-tables mode this check prevents the user from creating a
trigger/view without explicitly specifying its definer. This happens because
in --skip-grant-tables mode CURRENT_USER is ''@''. According to Sanja this
check was implemented intentionally.
However, according to the MySQL manual it is possible to specify an empty host
name (as well as an empty user name). Moreover, the behaviour for stored routines
is different in this aspect -- we allow them to be created with an implicit
definer.
Based on this, we believe it is OK to change the behaviour for views to be
similar to the behaviour for stored routines.
The idea of the fix is to extend support of non-SUID triggers for backward
compatibility. Formerly, non-SUID triggers appeared when a "new" server
was started against an "old" database. Now they are also created when a
"new" slave receives updates from an "old" master.
- Added empty constructors and virtual destructors to many classes and structs
- Removed some usage of the offsetof() macro to instead use C++ class pointers
column is increasing when table is recreated with PS/SP":
Make the use of create_field::char_length more consistent in the code.
Reinitialize create_field::length from create_field::char_length
for every execution of a prepared statement (this is what actually fixes
the bug).
After trying multiple inheritance (too messy and hard to make it work) and
subclassing jump_if_not (worked, but ugly), I decided on this solution
instead:
Inserting an abstract sp_instr_opt_meta class as parent for all instructions
with destinations makes it possible to handle a continuation pointer for
sp_instr_set_case_expr too.
Note: No special test case; the fix is captured by the changed behaviour of
bug14643_2, and bug14498_4 (formerly disabled), in sp.test.
Since replication rules are applied after `mysql_multi_update_prepare' returns, we
delay the `break' in case this function returns non-zero (some tables are not found),
in order to examine whether there is an ignore rule for a not-found table. By doing that
it is guaranteed that do/ignore replication rules logically precede the table-opening routine.
There are two main ideas in this fix:
- introduce a common function for server and client to split a user value
  (<user name>@<host name>) into user name and host name parts;
- dump the DEFINER clause in the correct format in mysqldump.
- Fixed tests
- Optimized new code
- Fixed some unlikely core dumps
- Better bug fixes for:
- #14397 - OPTIMIZE TABLE with an open HANDLER causes a crash
- #14850 (ERROR 1062 when querying a view using a GROUP BY on a column that can be null)
Problem #1: INSERT...SELECT, version for 5.0.
Extended the unique-table check with a check of lock data.
MERGE sub-tables cannot be detected by doing name checks only.
Problem #1: INSERT...SELECT, version for 4.1.
INSERT ... SELECT with the same table on both sides (hidden
below a MERGE table) now works by buffering the select result.
The duplicate detection now works on the locks after
open_and_lock_tables().
I did not find a test case that failed without the change in
sql_update.cc. I made the change anyway, as it should in theory
fix a possible MERGE table problem with multi-table update.
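A sketch of the INSERT ... SELECT case that now works; engine and table names are illustrative:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1) INSERT_METHOD=LAST;
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO m1 SELECT a FROM t1;   -- the same underlying table is hidden on both sides;
                                     -- the select result is buffered, so the statement works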
according to the standard.
The idea is to use Field classes to implement stored routine
variables. Also, we provide a facade to the Item hierarchy
via the Item_field class (this is necessary, since stored routine
variables take part in expressions); see the sketch after the bug list below.
The patch fixes the following bugs:
- BUG#8702: Stored Procedures: No Error/Warning shown for inappropriate data
type matching;
- BUG#8768: Functions: For any unsigned data type, -ve values can be passed
and returned;
- BUG#8769: Functions: For Int datatypes, out of range values can be passed
and returned;
- BUG#9078: STORED PROCDURE: Decimal digits are not displayed when we use
DECIMAL datatype;
- BUG#9572: Stored procedures: variable type declarations ignored;
- BUG#12903: upper function does not work inside a function;
- BUG#13705: parameters to stored procedures are not verified;
- BUG#13808: ENUM type stored procedure parameter accepts non-enumerated
data;
- BUG#13909: Varchar Stored Procedure Parameter always BINARY string (ignores
CHARACTER SET);
- BUG#14161: Stored procedure cannot retrieve bigint unsigned;
- BUG#14188: BINARY variables have no 0x00 padding;
- BUG#15148: Stored procedure variables accept non-scalar values;
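As an illustration of the stricter type handling listed above, a minimal sketch (names are made up, DELIMITER handling is omitted, and the exact warnings depend on the data type):

  CREATE PROCEDURE p1(p DECIMAL(5,2))
  BEGIN
    DECLARE v TINYINT UNSIGNED DEFAULT 0;
    SET v = -1;      -- the assignment is now checked against the declared type,
                     -- like storing into a table column
    SELECT p, v;     -- p keeps its declared precision and scale
  END;
  CALL p1(10.5);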
Post-review version. Some minor review fixes, but also changed the way
some errors are handled: Don't return specific parse errors; instead
always use the more general "table corrupt" error (amended accordingly).
Bad examples of usage of a string together with its length were fixed.
The incorrect length in the trigger file configuration descriptor was
fixed (BUG#14090).
A hook for unknown keys was added to the parser to support old .TRG files.
handling of savepoints in stored routines.
Fixed the ha_rollback_to_savepoint()/ha_savepoint()/ha_release_savepoint()
functions to properly handle savepoints inside stored functions and
triggers.
Also, now when we invoke a stored function or trigger we create a new savepoint
level. We destroy it at the end of function/trigger execution and return
to the old savepoint level.
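A sketch of the new savepoint-level behaviour; names are made up and DELIMITER handling is omitted:

  CREATE FUNCTION f1() RETURNS INT
  BEGIN
    SAVEPOINT sp_inner;              -- belongs to the function's own savepoint level
    ROLLBACK TO SAVEPOINT sp_inner;
    RETURN 1;
  END;
  START TRANSACTION;
  SAVEPOINT sp_outer;
  SELECT f1();                       -- the function's savepoint level is created and destroyed here
  ROLLBACK TO SAVEPOINT sp_outer;    -- the caller's savepoint is still usable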
For a long time, the compiled code of stored routines has been printed in the trace file
when starting mysqld with the "--debug" flag. (At creation time only, and only in
debug builds of course.) This has been helpful when debugging stored procedure
execution, but it's a bit awkward to use. Also, the printing of some of the
instructions is a bit terse, in particular for sp_instr_stmt where only the command
code was printed.
This improves the printout of several of the instructions, and adds the debugging-
only commands "show procedure code <name>" and "show function code <name>".
(In non-debug builds they are not available.)
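Usage in a debug build (the routine names are arbitrary):

  SHOW PROCEDURE CODE p1;   -- lists the compiled instructions of procedure p1, one per row
  SHOW FUNCTION CODE f1;    -- the same for a stored function; both commands exist only in debug builds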
We change the current db temporarily and restore it when the SP is created. However, thd->db
in this case becomes an empty string rather than NULL, so all checks of thd->db == NULL
will be false. So if after this we issue CREATE PROCEDURE sp2() ... without specifying a
db, it will succeed and create an SP with db=NULL, which causes mysqld to crash on a
SHOW PROCEDURE STATUS statement.
This patch fixes the problem.
A check that the value is not empty was added. When modifying an SP with the Admin
application on win32, the application does not pass the current database, so the SP is
stored with db=NULL, which causes a crash later on SHOW PROCEDURE STATUS.
Indeed, now that a stored procedure CALL is not binlogged, but the invoked substatements are instead,
the restrictions applied by log-bin-trust-routine-creators=0 are superfluous for procedures.
They still need to apply to functions, where function calls are written to the binlog (for example as "DO myfunc(3)").
We rename the variable to log-bin-trust-function-creators but allow the old name until some future version (and issue a warning if the old name is used).
The READ_ONLY global variable now allows statements which update only temporary tables
(note: if a statement, after the parse stage, looks like it will update a non-temporary table, it will be rejected,
even if at execution it would have turned out that 0 rows would be updated; for example
UPDATE my_non_temp_table SET a=1 WHERE 1 = 0; will be rejected).
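For example, for a client without the SUPER privilege (read_only does not restrict SUPER users; table names are illustrative):

  SET GLOBAL read_only = 1;                         -- done by an administrator
  CREATE TEMPORARY TABLE tmp_t (a INT);
  UPDATE tmp_t SET a = 1;                           -- allowed: only a temporary table is updated
  UPDATE my_non_temp_table SET a = 1 WHERE 1 = 0;   -- still rejected, as decided after parsing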