| id | commit_message | diffs |
|---|---|---|
derby-DERBY-3425-08010c53
|
DERBY-3425: J2EEDataSourceTest throws away stack trace for many errors
Preserve stack trace on error by throwing the original exception
instead of using fail() to report the error.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@628746 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
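The DERBY-3425 fix above replaces `fail()` calls with rethrows of the original exception so the stack trace survives. A minimal sketch of that pattern (illustrative, not the actual Derby test code; the method and SQLState here are invented for the example):

```java
// Rethrowing the original exception preserves its stack trace and SQLState,
// whereas fail(e.getMessage()) would discard both.
import java.sql.SQLException;

public class PreserveTrace {
    static void check(boolean ok) throws SQLException {
        if (!ok) {
            // Before: fail("unexpected error: ..."); -- trace lost
            // After: throw the original exception so the runner reports the full trace
            throw new SQLException("original failure", "XJ001");
        }
    }

    public static void main(String[] args) {
        try {
            check(false);
        } catch (SQLException e) {
            // The SQLState and stack trace of the original error are intact
            System.out.println(e.getSQLState());
        }
    }
}
```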
derby-DERBY-3427-ce68ae05
|
DERBY-3427: setting transaction isolation level to read committed raises ERROR X0X03: Invalid transaction state - held cursor requires same isolation level
Made the network server use the requested holdability when preparing calls.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634206 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -41,7 +41,6 @@ import java.lang.reflect.Array;",
"removed": [
"import org.apache.derby.iapi.jdbc.EngineConnection;"
]
},
{
"added": [
"\t\tparsePkgidToFindHoldability();",
"",
"\t\t\tps = database.getConnection().prepareCall(",
"\t\t\t\tsqlStmt, scrollType, concurType, withHoldCursor);",
"\t\telse",
"\t\t{",
"\t\t\tps = database.getConnection().prepareStatement(",
"\t\t\t\tsqlStmt, scrollType, concurType, withHoldCursor);",
"\t\t}",
""
],
"header": "@@ -622,18 +621,21 @@ class DRDAStatement",
"removed": [
"\t\t\tps = database.getConnection().prepareCall(sqlStmt);",
"\t\t\tif (isolationSet)",
"\t\t\t\tdatabase.setPrepareIsolation(saveIsolationLevel);",
"\t\t\treturn ps;",
"\t\tparsePkgidToFindHoldability();",
"\t\tps = prepareStatementJDBC3(sqlStmt, scrollType, concurType, ",
"\t\t\t\t\t\t\t\t\t withHoldCursor);"
]
}
]
}
] |
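The DERBY-3427 diff above passes the requested holdability through to `prepareCall` instead of using the connection default. In application-level JDBC terms this corresponds to the four-argument `prepareCall` overload; a hedged sketch (the `prepare` helper is illustrative):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class HoldableCall {
    // Prepare a call with an explicit holdability, mirroring the fix:
    // the server-side prepareCall now honors the requested holdability
    // rather than silently using the connection default.
    static CallableStatement prepare(Connection conn, String sql) throws SQLException {
        return conn.prepareCall(sql,
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.HOLD_CURSORS_OVER_COMMIT);
    }

    public static void main(String[] args) {
        // The two JDBC holdability constants involved:
        System.out.println(ResultSet.HOLD_CURSORS_OVER_COMMIT);
        System.out.println(ResultSet.CLOSE_CURSORS_AT_COMMIT);
    }
}
```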
derby-DERBY-3428-5a6acbff
|
DERBY-3428: Doing a replication failover should shutdown the database and the connection should no longer be available
The shutdown exception needs to be a StandardException in order to shut down the database (SQLException does not work).
After failover, unfreeze the database before attempting shutdown.
Contributed by V Narayanan
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@632369 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
" * @throws StandardException 1) If the failover succeeds, an exception is",
" * 2) If a failure occurs during network",
" * @throws SQLException 1) Thrown upon a authorization failure.",
" throws SQLException, StandardException {"
],
"header": "@@ -852,15 +852,15 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
" * @throws java.sql.SQLException 1) Thrown upon a authorization failure ",
" * 2) If the failover succeeds, an exception is",
" * 3) If a failure occurs during network ",
" throws SQLException {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/replication/master/MasterController.java",
"hunks": [
{
"added": [
"",
" //If we require an exception of Database Severity to shutdown the",
" //database to shutdown the database we need to unfreeze first",
" //before throwing the exception. Unless we unfreeze the shutdown",
" //hangs.",
" rawStoreFactory.unfreeze();",
""
],
"header": "@@ -278,6 +278,13 @@ public class MasterController",
"removed": []
}
]
}
] |
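The MasterController hunk above explains the ordering constraint: a frozen store blocks writes, so a shutdown that needs to write hangs unless `unfreeze()` runs first. A minimal, hypothetical model of that constraint (the `Store` class and its methods are invented for illustration, not Derby's real raw-store API):

```java
public class FreezeOrder {
    static class Store {
        private boolean frozen = false;
        void freeze()   { frozen = true; }
        void unfreeze() { frozen = false; }
        void shutdown() {
            // Stand-in for a real shutdown blocking on a frozen store
            if (frozen) {
                throw new IllegalStateException("shutdown would hang: store frozen");
            }
        }
    }

    public static void main(String[] args) {
        Store store = new Store();
        store.freeze();      // replication failover freezes the store
        store.unfreeze();    // the fix: unfreeze before shutting down
        store.shutdown();    // now completes instead of hanging
        System.out.println("shutdown ok");
    }
}
```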
derby-DERBY-3430-a3203bfb
|
DERBY-3430 Inconsistency in JDBC autogen APIs between Connection.prepareStatement(...) and Statement.execute(...)
Change prepareStatement to treat empty arrays for columnNames or columnIndexes as NO_GENERATED_KEYS
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@629578 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
"\t\t\t(columnIndexes == null || columnIndexes.length == 0)"
],
"header": "@@ -1213,7 +1213,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tcolumnIndexes == null"
]
}
]
}
] |
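The one-line DERBY-3430 change above makes `prepareStatement` treat an empty `columnIndexes` (or `columnNames`) array the same as `null`, matching `Statement.execute(...)`. A small sketch of the decision (the `keyMode` helper is illustrative):

```java
import java.sql.Statement;

public class AutogenCheck {
    // Empty array or null: no generated keys requested; otherwise return them.
    static int keyMode(int[] columnIndexes) {
        return (columnIndexes == null || columnIndexes.length == 0)
                ? Statement.NO_GENERATED_KEYS
                : Statement.RETURN_GENERATED_KEYS;
    }

    public static void main(String[] args) {
        System.out.println(keyMode(null));          // NO_GENERATED_KEYS
        System.out.println(keyMode(new int[0]));    // NO_GENERATED_KEYS
        System.out.println(keyMode(new int[]{1}));  // RETURN_GENERATED_KEYS
    }
}
```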
derby-DERBY-3431-7bceacaa
|
DERBY-3431: DatabaseMetaData.getConnection returns the wrong connection when using connection pooling.
Added another test for this issue in DatabaseMetaDataTest.
Only the test for embedded is enabled, as the client driver has a bug.
Also note that the embedded driver has a related bug, but it is not exposed by this test (see J2EEDataSourceTest.testConnectionLeakInDatabaseMetaData instead).
Patch file: derby-3431-1b-test_repro.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@658181 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3431-8594dfd3
|
DERBY-3431: DatabaseMetaData.getConnection returns the wrong connection when using connection pooling.
Added a test case exposing the bug where DatabaseMetaData.getConnection returns a reference to a connection it should not publish.
Note that the test is disabled, because the bug is still at large.
Patch file: derby-3431-2a-test.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@650814 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3431-ef81d0e4
|
DERBY-3431: DatabaseMetaData.getConnection returns the wrong connection when using connection pooling.
Introduced a logical database metadata object in the client driver. This object is tightly associated with the logical connection, instead of the underlying physical connection. It will only publish a reference to the logical connection.
Added regression tests. Note that one of the tests fails for embedded, which appears to have a similar bug to what the client driver had.
Patch file: derby-3431-3b-client_logical_metadata.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@662383 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/LogicalConnection.java",
"hunks": [
{
"added": [
" /**",
" * Logical database metadata object created on demand and then cached.",
" * The lifetime of the metadata object is the same as this logical",
" * connection, in the sense that it will raise exceptions on method",
" * invocations after the logical connection has been closed.",
" */",
" private LogicalDatabaseMetaData logicalDatabaseMetaData = null;"
],
"header": "@@ -36,6 +36,13 @@ import java.sql.SQLException;",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/LogicalConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -29,7 +29,6 @@ import java.sql.NClob;",
"removed": [
"import java.sql.Wrapper;"
]
}
]
}
] |
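The LogicalConnection hunk above introduces a metadata object that is created on demand, cached, and tied to the logical connection's lifetime. A hypothetical sketch of that scheme (class names are illustrative, not the client driver's real ones):

```java
public class LogicalMeta {
    static class LogicalConn {
        private Object metaData = null;   // created on demand, then cached
        private boolean closed = false;

        Object getMetaData() {
            if (closed) throw new IllegalStateException("logical connection closed");
            if (metaData == null) metaData = new Object();
            return metaData;
        }
        void close() { closed = true; }
    }

    public static void main(String[] args) {
        LogicalConn lc = new LogicalConn();
        // Same cached instance on repeated calls
        System.out.println(lc.getMetaData() == lc.getMetaData());
        lc.close();
        try { lc.getMetaData(); }
        catch (IllegalStateException e) { System.out.println("raises after close"); }
    }
}
```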
derby-DERBY-3432-1bac3f3a
|
DERBY-3432: Move replication code from org.apache.derby.impl.services.replication to o.a.d.i.store.replication
Contributed by Jorgen Loland.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634706 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/store/replication/master/MasterFactory.java",
"hunks": [
{
"added": [
" org.apache.derby.iapi.store.replication.master.MasterFactory"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" org.apache.derby.iapi.services.replication.master.MasterFactory"
]
},
{
"added": [
"package org.apache.derby.iapi.store.replication.master;"
],
"header": "@@ -20,7 +20,7 @@",
"removed": [
"package org.apache.derby.iapi.services.replication.master;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/store/replication/slave/SlaveFactory.java",
"hunks": [
{
"added": [
" org.apache.derby.iapi.store.replication.slave.SlaveFactory"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" org.apache.derby.iapi.services.replication.slave.SlaveFactory"
]
},
{
"added": [
"package org.apache.derby.iapi.store.replication.slave;"
],
"header": "@@ -20,7 +20,7 @@",
"removed": [
"package org.apache.derby.iapi.services.replication.slave;"
]
},
{
"added": [
" \"org.apache.derby.iapi.store.replication.slave.SlaveFactory\";"
],
"header": "@@ -44,7 +44,7 @@ public interface SlaveFactory {",
"removed": [
" \"org.apache.derby.iapi.services.replication.slave.SlaveFactory\";"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/ReplicationLogger.java",
"hunks": [
{
"added": [
" org.apache.derby.impl.store.replication.ReplicationLogger"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" org.apache.derby.impl.services.replication.ReplicationLogger"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/buffer/LogBufferElement.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.buffer.LogBufferElement"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.buffer.LogBufferElement"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/buffer/LogBufferFullException.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.buffer.LogBufferFullException"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.buffer.LogBufferFullException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/buffer/ReplicationLogBuffer.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.buffer.ReplicationLogBuffer"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.buffer.ReplicationLogBuffer"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/AsynchronousLogShipper.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.master.AsynchronousLogShipper"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.master.AsynchronousLogShipper"
]
},
{
"added": [
"package org.apache.derby.impl.store.replication.master;"
],
"header": "@@ -19,7 +19,7 @@",
"removed": [
"package org.apache.derby.impl.services.replication.master;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/LogShipper.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.master.LogShipper"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.master.LogShipper"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/MasterController.java",
"hunks": [
{
"added": [
" org.apache.derby.impl.store.replication.master.MasterController"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" org.apache.derby.impl.services.replication.master.MasterController"
]
},
{
"added": [
"package org.apache.derby.impl.store.replication.master;"
],
"header": "@@ -20,7 +20,7 @@",
"removed": [
"package org.apache.derby.impl.services.replication.master;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessage.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.net.ReplicationMessage"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.net.ReplicationMessage"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageReceive.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.net.ReplicationMessageReceive"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.net.ReplicationMessageReceive"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageTransmit.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.net.ReplicationMessageTransmit"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.net.ReplicationMessageTransmit"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/SlaveAddress.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.net.SlaveAddress"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.net.SlaveAddress"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/SocketConnection.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.net.SocketConnection"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.net.SocketConnection"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/slave/ReplicationLogScan.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.replication.slave.ReplicationLogScan"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.services.replication.slave.ReplicationLogScan"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/slave/SlaveController.java",
"hunks": [
{
"added": [
" org.apache.derby.impl.store.replication.slave.SlaveController"
],
"header": "@@ -1,7 +1,7 @@",
"removed": [
" org.apache.derby.impl.services.replication.slave.SlaveController"
]
},
{
"added": [
"package org.apache.derby.impl.store.replication.slave;"
],
"header": "@@ -20,7 +20,7 @@",
"removed": [
"package org.apache.derby.impl.services.replication.slave;"
]
}
]
}
] |
derby-DERBY-3438-89e0c080
|
DERBY-3438: Allow SQL query text to be null in StatementKey.
Patch file: derby-3438-1a-allow_sql_null.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@631217 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/stmtcache/StatementKey.java",
"hunks": [
{
"added": [
" * @throws IllegalArgumentException if {@code schema} is {@code null}",
" if (schema == null) {",
" throw new IllegalArgumentException(\"schema is <null>\");"
],
"header": "@@ -67,17 +67,14 @@ public class StatementKey {",
"removed": [
" * @throws IllegalArgumentException if <code>sql</code> and/or",
" * <code>schema</code> is <code>null</code>",
" if (sql == null || schema == null) {",
" throw new IllegalArgumentException(",
" \"sql and/or schema is <null>: sql=\" + (sql == null) +",
" \", schema=\" + (schema == null));"
]
},
{
"added": [
" if (this.sql == null && other.sql != null) {",
" return false;",
" }"
],
"header": "@@ -125,6 +122,9 @@ public class StatementKey {",
"removed": []
},
{
"added": [
" hash = 47 * hash + (this.sql == null ? 3 : this.sql.hashCode());"
],
"header": "@@ -140,7 +140,7 @@ public class StatementKey {",
"removed": [
" hash = 47 * hash + this.sql.hashCode();"
]
}
]
}
] |
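The DERBY-3438 hunks above relax StatementKey so `sql` may be null while `schema` may not, with null-safe `equals` and `hashCode`. A self-contained sketch of the resulting contract (a simplified stand-in, not the real class):

```java
public class StatementKeySketch {
    final String sql;     // may be null after the change
    final String schema;  // must not be null

    StatementKeySketch(String sql, String schema) {
        if (schema == null) throw new IllegalArgumentException("schema is <null>");
        this.sql = sql;
        this.schema = schema;
    }

    public boolean equals(Object o) {
        if (!(o instanceof StatementKeySketch)) return false;
        StatementKeySketch other = (StatementKeySketch) o;
        if (this.sql == null && other.sql != null) return false;
        if (this.sql != null && !this.sql.equals(other.sql)) return false;
        return this.schema.equals(other.schema);
    }

    public int hashCode() {
        int hash = 7;
        // Sentinel for null sql, as in the hunk above
        hash = 47 * hash + (this.sql == null ? 3 : this.sql.hashCode());
        hash = 47 * hash + this.schema.hashCode();
        return hash;
    }

    public static void main(String[] args) {
        StatementKeySketch a = new StatementKeySketch(null, "APP");
        StatementKeySketch b = new StatementKeySketch(null, "APP");
        // Two keys with null sql and equal schema are equal and hash alike
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());
    }
}
```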
derby-DERBY-3446-142b9afe
|
DERBY-3446: Make ResultSet.getStatement return the correct statement when created by a logical statement.
Made LogicalStatementEntity constructor set itself as the statement owner in am.Statement, and ResultSet return the owner of the creating statement if set. If not set, the statement itself will be returned (as before).
Patch file: derby-3446-2c_rs_getstatement_alternative.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@630784 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/LogicalStatementEntity.java",
"hunks": [
{
"added": [
"abstract class LogicalStatementEntity",
" implements java.sql.Statement {"
],
"header": "@@ -46,7 +46,8 @@ import org.apache.derby.shared.common.sanity.SanityManager;",
"removed": [
"class LogicalStatementEntity {"
]
},
{
"added": [
" ((PreparedStatement)physicalPs).setOwner(this);"
],
"header": "@@ -103,6 +104,7 @@ class LogicalStatementEntity {",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Statement.java",
"hunks": [
{
"added": [
" /** The owner of this statement, if any. */",
" private java.sql.Statement owner = null;"
],
"header": "@@ -51,6 +51,8 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
},
{
"added": [
" /**",
" * Designates the owner of this statement, typically a logical statement.",
" *",
" * @param owner the owning statement, if any",
" */",
" protected final void setOwner(java.sql.Statement owner) {",
" this.owner = owner;",
" }",
"",
" /**",
" * Returns the owner of this statement, if any.",
" *",
" * @return The designated owner of this statement, or {@code null} if none.",
" */",
" final java.sql.Statement getOwner() {",
" return this.owner;",
" }"
],
"header": "@@ -1561,6 +1563,23 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
}
]
}
] |
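The DERBY-3446 change above has the physical statement record its logical wrapper as an "owner", and `ResultSet.getStatement` report the owner when set, falling back to the creating statement otherwise. A sketch of that indirection (illustrative names, not the real client classes):

```java
public class OwnerSketch {
    static class PhysicalStatement {
        private Object owner = null;                 // typically a logical statement
        void setOwner(Object owner) { this.owner = owner; }
        Object getOwner()           { return owner; }
    }

    static class ResultSetSketch {
        final PhysicalStatement creator;
        ResultSetSketch(PhysicalStatement creator) { this.creator = creator; }

        // getStatement: the owner if set, otherwise the creating statement itself
        Object getStatement() {
            Object owner = creator.getOwner();
            return owner != null ? owner : creator;
        }
    }

    public static void main(String[] args) {
        PhysicalStatement phys = new PhysicalStatement();
        Object logical = new Object();               // stands in for the logical statement
        phys.setOwner(logical);
        ResultSetSketch rs = new ResultSetSketch(phys);
        // The logical wrapper, not the pooled physical statement, is reported
        System.out.println(rs.getStatement() == logical);
    }
}
```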
derby-DERBY-3446-d798bb11
|
DERBY-3446: Make ResultSet.getStatement return the correct statement when created by a logical statement.
Added regression test for the bug in JDBC statement caching environments.
Patch file: derby-3446-3a-stmtpool_test.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@673327 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3448-04014b20
|
DERBY-3448 - backing out revision 636829, as it doesn't actually build.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636892 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/utils/DbTasks.java",
"hunks": [
{
"added": [
"import java.util.Properties;"
],
"header": "@@ -37,6 +37,7 @@ import java.sql.SQLException;",
"removed": []
},
{
"added": [
"\tstatic boolean saveAutoCommit;",
""
],
"header": "@@ -47,6 +48,8 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\t\t\t// database and the backup datatbase"
],
"header": "@@ -69,7 +72,7 @@ public class DbTasks {",
"removed": [
"\t\t\t// database and the backup database"
]
},
{
"added": [
"\tpublic void readMail(Connection conn, String thread_name) {",
"\t\t// Getiing the number of rows in the table and getting the",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();",
"\t\t\tconn",
"\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
],
"header": "@@ -127,20 +130,20 @@ public class DbTasks {",
"removed": [
"\tpublic void readMail(Connection conn, String thread_name) throws Exception{",
"\t\t// Getting the number of rows in the table and getting the",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();",
"\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
]
},
{
"added": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t}"
],
"header": "@@ -174,8 +177,14 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -220,6 +229,7 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic synchronized void deleteMailByUser(Connection conn,",
"\t\t\tString thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -229,25 +239,27 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\tpublic synchronized void deleteMailByUser (Connection conn,",
"\t\t\tString thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -284,25 +296,29 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic void moveToFolders(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -323,25 +339,30 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void moveToFolders(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
""
],
"header": "@@ -374,18 +395,23 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -397,9 +423,8 @@ public class DbTasks {",
"removed": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();"
]
},
{
"added": [
"\t\t\t\t\tconn",
"\t\t\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
],
"header": "@@ -437,7 +462,8 @@ public class DbTasks {",
"removed": [
"\t\t\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
]
},
{
"added": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}"
],
"header": "@@ -467,8 +493,15 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -511,24 +544,28 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic void Backup(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -548,26 +585,30 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void Backup(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -579,6 +620,7 @@ public class DbTasks {",
"removed": []
},
{
"added": [],
"header": "@@ -586,9 +628,6 @@ public class DbTasks {",
"removed": [
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
]
},
{
"added": [
"\t\t\tString thread_name)",
"\t// preiodically compresses the table to get back the free spaces available"
],
"header": "@@ -597,8 +636,8 @@ public class DbTasks {",
"removed": [
"\t\t\tString thread_name) throws Exception",
"\t// periodically compresses the table to get back the free spaces available"
]
},
{
"added": [
"\t\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -606,8 +645,8 @@ public class DbTasks {",
"removed": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -617,6 +656,7 @@ public class DbTasks {",
"removed": []
},
{
"added": [],
"header": "@@ -625,9 +665,6 @@ public class DbTasks {",
"removed": [
"\t\tfinally{",
"\t\t conn.setAutoCommit(saveAutoCommit);",
"\t\t}"
]
},
{
"added": [
"\tpublic void grantRevoke(Connection conn, String thread_name) {",
"\t\t\t// Giving appropriate permission to eahc threads",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -688,10 +725,10 @@ public class DbTasks {",
"removed": [
"\tpublic void grantRevoke(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\t\t// Giving appropriate permission to each threads"
]
},
{
"added": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -715,6 +752,7 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"",
"",
""
],
"header": "@@ -723,11 +761,11 @@ public class DbTasks {",
"removed": [
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
]
},
{
"added": [
"\t\t\t}"
],
"header": "@@ -780,7 +818,7 @@ public class DbTasks {",
"removed": [
"\t\t\t} "
]
}
]
}
] |
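The DbTasks hunks above revolve around one pattern: save auto-commit and isolation level, work at READ_UNCOMMITTED, roll back on error, and restore both settings in `finally`. A runnable sketch of that pattern, using a hypothetical `FakeConn` stand-in so it runs without a database:

```java
public class RestoreState {
    // Minimal stand-in for the JDBC calls the pattern uses (hypothetical).
    static class FakeConn {
        boolean autoCommit = true;
        int isolation = 2; // java.sql.Connection.TRANSACTION_READ_COMMITTED
        boolean getAutoCommit() { return autoCommit; }
        void setAutoCommit(boolean b) { autoCommit = b; }
        int getTransactionIsolation() { return isolation; }
        void setTransactionIsolation(int i) { isolation = i; }
        void commit() {}
    }

    // Save state, lower the isolation for the read, restore both in finally
    // so the pooled connection is left exactly as it was found.
    static void readMail(FakeConn conn) {
        boolean saveAutoCommit = conn.getAutoCommit();
        int saveIsolation = conn.getTransactionIsolation();
        try {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(1); // TRANSACTION_READ_UNCOMMITTED
            // ... read the table here ...
            conn.commit();
        } finally {
            conn.setAutoCommit(saveAutoCommit);
            conn.setTransactionIsolation(saveIsolation);
        }
    }

    public static void main(String[] args) {
        FakeConn conn = new FakeConn();
        readMail(conn);
        // Both settings are back to their saved values
        System.out.println(conn.getAutoCommit() + " " + conn.getTransactionIsolation());
    }
}
```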
derby-DERBY-3448-58a99b78
|
DERBY-3448: adjusting re-setting of auto-commit state and isolation level.
Reinstating modifications of revision 636829, plus additional changes to
enable the build.
Patch contributed by Manjula Kutty.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@638077 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/tasks/Backup.java",
"hunks": [
{
"added": [
"public class Backup extends Thread{",
"\tprivate Connection conn = null;",
"\t",
"\tpublic Backup(String name)throws Exception{",
"\t\tconn = DbTasks.getConnection(\"BACKUP\", \"Backup\");"
],
"header": "@@ -28,15 +28,16 @@ import org.apache.derbyTesting.system.mailjdbc.MailJdbc;",
"removed": [
"public class Backup extends Thread {",
"\tprivate Connection conn = DbTasks.getConnection(\"BACKUP\", \"Backup\");",
"",
"\tpublic Backup(String name) {"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/tasks/Browse.java",
"hunks": [
{
"added": [
"\tprivate Connection conn = null;",
"\tpublic Browse(String name) throws Exception{",
"\t\tconn = DbTasks.getConnection(\"BROWSE\", \"Browse\");"
],
"header": "@@ -32,11 +32,11 @@ import org.apache.derbyTesting.system.mailjdbc.utils.LogFile;",
"removed": [
"\tprivate Connection conn = DbTasks.getConnection(\"BROWSE\", \"Browse\");",
"",
"\tpublic Browse(String name) {"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/tasks/Purge.java",
"hunks": [
{
"added": [
"\tprivate Connection conn = null;",
"\tpublic Purge(String name) throws Exception{",
"\t\tconn = DbTasks.getConnection(\"PURGE\", \"Purge\");"
],
"header": "@@ -35,11 +35,11 @@ public class Purge extends Thread {",
"removed": [
"\tprivate Connection conn = DbTasks.getConnection(\"PURGE\", \"Purge\");",
"",
"\tpublic Purge(String name) {"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/utils/DbTasks.java",
"hunks": [
{
"added": [],
"header": "@@ -37,19 +37,15 @@ import java.sql.SQLException;",
"removed": [
"import java.util.Properties;",
"",
"\tstatic boolean saveAutoCommit;",
""
]
},
{
"added": [
"\t\t\t// database and the backup database"
],
"header": "@@ -72,7 +68,7 @@ public class DbTasks {",
"removed": [
"\t\t\t// database and the backup datatbase"
]
},
{
"added": [
"\tpublic static Connection getConnection(String usr, String passwd){"
],
"header": "@@ -112,7 +108,7 @@ public class DbTasks {",
"removed": [
"\tpublic static Connection getConnection(String usr, String passwd) {"
]
},
{
"added": [
"\tpublic void readMail(Connection conn, String thread_name) throws Exception{",
"\t\t// Getting the number of rows in the table and getting the",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();",
"\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
],
"header": "@@ -130,20 +126,20 @@ public class DbTasks {",
"removed": [
"\tpublic void readMail(Connection conn, String thread_name) {",
"\t\t// Getiing the number of rows in the table and getting the",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();",
"\t\t\tconn",
"\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
],
"header": "@@ -177,14 +173,8 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t}"
]
},
{
"added": [],
"header": "@@ -229,7 +219,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\tpublic synchronized void deleteMailByUser (Connection conn,",
"\t\t\tString thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -239,27 +228,25 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic synchronized void deleteMailByUser(Connection conn,",
"\t\t\tString thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -296,29 +283,25 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void moveToFolders(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -339,30 +322,25 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic void moveToFolders(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -395,23 +373,18 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
""
]
},
{
"added": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();"
],
"header": "@@ -423,8 +396,9 @@ public class DbTasks {",
"removed": [
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
],
"header": "@@ -462,8 +436,7 @@ public class DbTasks {",
"removed": [
"\t\t\t\t\tconn",
"\t\t\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
],
"header": "@@ -493,15 +466,8 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -544,28 +510,24 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void Backup(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -585,30 +547,26 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic void Backup(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -620,7 +578,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -628,6 +585,9 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\t\t\tString thread_name) throws Exception",
"\t// periodically compresses the table to get back the free spaces available"
],
"header": "@@ -636,8 +596,8 @@ public class DbTasks {",
"removed": [
"\t\t\tString thread_name)",
"\t// preiodically compresses the table to get back the free spaces available"
]
},
{
"added": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -645,8 +605,8 @@ public class DbTasks {",
"removed": [
"\t\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -656,7 +616,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally{",
"\t\t conn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -665,6 +624,9 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\tpublic void grantRevoke(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\t\t// Giving appropriate permission to each threads"
],
"header": "@@ -725,10 +687,10 @@ public class DbTasks {",
"removed": [
"\tpublic void grantRevoke(Connection conn, String thread_name) {",
"\t\t\t// Giving appropriate permission to eahc threads",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -752,7 +714,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -761,11 +722,11 @@ public class DbTasks {",
"removed": [
"",
"",
""
]
},
{
"added": [
"\t\t\t} "
],
"header": "@@ -818,7 +779,7 @@ public class DbTasks {",
"removed": [
"\t\t\t}"
]
}
]
}
] |
derby-DERBY-3448-63e167ec
|
DERBY-3448: adjusting re-setting of auto-commit state and isolation level.
Patch contributed by Manjula Kutty.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636829 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/utils/DbTasks.java",
"hunks": [
{
"added": [],
"header": "@@ -37,7 +37,6 @@ import java.sql.SQLException;",
"removed": [
"import java.util.Properties;"
]
},
{
"added": [],
"header": "@@ -48,8 +47,6 @@ public class DbTasks {",
"removed": [
"\tstatic boolean saveAutoCommit;",
""
]
},
{
"added": [
"\t\t\t// database and the backup database"
],
"header": "@@ -72,7 +69,7 @@ public class DbTasks {",
"removed": [
"\t\t\t// database and the backup datatbase"
]
},
{
"added": [
"\tpublic void readMail(Connection conn, String thread_name) throws Exception{",
"\t\t// Getting the number of rows in the table and getting the",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();",
"\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
],
"header": "@@ -130,20 +127,20 @@ public class DbTasks {",
"removed": [
"\tpublic void readMail(Connection conn, String thread_name) {",
"\t\t// Getiing the number of rows in the table and getting the",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();",
"\t\t\tconn",
"\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
],
"header": "@@ -177,14 +174,8 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t}"
]
},
{
"added": [],
"header": "@@ -229,7 +220,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\tpublic synchronized void deleteMailByUser (Connection conn,",
"\t\t\tString thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -239,27 +229,25 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic synchronized void deleteMailByUser(Connection conn,",
"\t\t\tString thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -296,29 +284,25 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void moveToFolders(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -339,30 +323,25 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic void moveToFolders(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
],
"header": "@@ -395,23 +374,18 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
""
]
},
{
"added": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\tint saveIsolation = conn.getTransactionIsolation();"
],
"header": "@@ -423,8 +397,9 @@ public class DbTasks {",
"removed": [
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\t\t\tconn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
],
"header": "@@ -462,8 +437,7 @@ public class DbTasks {",
"removed": [
"\t\t\t\t\tconn",
"\t\t\t\t\t\t\t.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;"
],
"header": "@@ -493,15 +467,8 @@ public class DbTasks {",
"removed": [
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally{",
"\t\t\tconn.setTransactionIsolation(saveIsolation);",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -544,28 +511,24 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.INFO + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t\tthrow sqe;",
"\t\t\t}",
"\tpublic synchronized void deleteMailByExp(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [
"\t\t\tconn.rollback();",
"\t\t\tthrow sqe;",
"\t\t}",
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\tpublic void Backup(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -585,30 +548,26 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t\ttry {",
"\t\t\t\tconn.rollback();",
"\t\t\t} catch (SQLException sq) {",
"\t\t\t\tMailJdbc.logAct.logMsg(LogFile.ERROR + thread_name + \" : \"",
"\t\t\t\t\t\t+ \"Exception while rolling back: \" + sq);",
"\t\t\t\tsq.printStackTrace();",
"\t\t\t\terrorPrint(sq);",
"\t\t\t}",
"\tpublic void Backup(Connection conn, String thread_name) {",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -620,7 +579,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally{",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -628,6 +586,9 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\t\t\tString thread_name) throws Exception",
"\t// periodically compresses the table to get back the free spaces available"
],
"header": "@@ -636,8 +597,8 @@ public class DbTasks {",
"removed": [
"\t\t\tString thread_name)",
"\t// preiodically compresses the table to get back the free spaces available"
]
},
{
"added": [
"\t\tboolean saveAutoCommit = conn.getAutoCommit();"
],
"header": "@@ -645,8 +606,8 @@ public class DbTasks {",
"removed": [
"\t\t\tboolean saveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -656,7 +617,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally{",
"\t\t conn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -665,6 +625,9 @@ public class DbTasks {",
"removed": []
},
{
"added": [
"\tpublic void grantRevoke(Connection conn, String thread_name) throws Exception{",
"\t\tboolean saveAutoCommit = conn.getAutoCommit();",
"\t\t\t// Giving appropriate permission to each threads"
],
"header": "@@ -725,10 +688,10 @@ public class DbTasks {",
"removed": [
"\tpublic void grantRevoke(Connection conn, String thread_name) {",
"\t\t\t// Giving appropriate permission to eahc threads",
"\t\t\tsaveAutoCommit = conn.getAutoCommit();"
]
},
{
"added": [],
"header": "@@ -752,7 +715,6 @@ public class DbTasks {",
"removed": [
"\t\t\tconn.setAutoCommit(saveAutoCommit);"
]
},
{
"added": [
"\t\tfinally {",
"\t\t\tconn.setAutoCommit(saveAutoCommit);",
"\t\t}"
],
"header": "@@ -761,11 +723,11 @@ public class DbTasks {",
"removed": [
"",
"",
""
]
},
{
"added": [
"\t\t\t} "
],
"header": "@@ -818,7 +780,7 @@ public class DbTasks {",
"removed": [
"\t\t\t}"
]
}
]
}
] |
derby-DERBY-3448-cae4ed4f
|
DERBY-3448 Minor cleanup of DbTasks class
- no need to implement Thread since it is never used as a thread
- don't set system property for authorization as it is set in the required derby.properties
- fix mt issue for getting connections (remove shared properties object)
- don't have if != null checks for objects that cannot be null.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@631402 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/utils/DbTasks.java",
"hunks": [
{
"added": [
"public class DbTasks {"
],
"header": "@@ -44,7 +44,7 @@ import org.apache.derbyTesting.functionTests.util.streams.LoopingAlphabetReader;",
"removed": [
"public class DbTasks extends Thread {"
]
},
{
"added": [
""
],
"header": "@@ -62,10 +62,8 @@ public class DbTasks extends Thread {",
"removed": [
"\tpublic static Properties prop = new Properties();",
"",
"\t\tsetSystemProperty(\"derby.database.sqlAuthorization\", \"true\");"
]
},
{
"added": [
"",
"\t\t\t\t\t.getProperty(\"database\"), usr, passwd);"
],
"header": "@@ -118,10 +116,9 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tprop.setProperty(\"user\", usr);",
"\t\t\tprop.setProperty(\"password\", passwd);",
"\t\t\t\t\t.getProperty(\"database\"), prop);"
]
},
{
"added": [
"\t\t\trs.close();",
"\t\t\tstmt1.close();",
"\t\t\trs1.close();"
],
"header": "@@ -172,12 +169,9 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt1.close();",
"\t\t\tif (rs1 != null)",
"\t\t\t\trs1.close();"
]
},
{
"added": [
"\t\t\trs.close();"
],
"header": "@@ -210,8 +204,7 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();"
]
},
{
"added": [
"\t\t\trs.close();",
"\t\t\tstmt.close();"
],
"header": "@@ -233,10 +226,8 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();"
]
},
{
"added": [
"\t\t\trs.close();",
"\t\t\tupdateUser.close();",
"\t\t\tstmt.close();"
],
"header": "@@ -301,12 +292,9 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (updateUser != null)",
"\t\t\t\tupdateUser.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();"
]
},
{
"added": [
"\t\t\tdeleteThread.close();",
"\t\t\trs.close();",
"\t\t\tstmt.close();"
],
"header": "@@ -347,12 +335,9 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (deleteThread != null)",
"\t\t\t\tdeleteThread.close();",
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();"
]
},
{
"added": [
"\t\t\tstmt.close();",
"\t\t\tmoveToFolder.close();",
"\t\t\trs.close();"
],
"header": "@@ -406,12 +391,9 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();",
"\t\t\tif (moveToFolder != null)",
"\t\t\t\tmoveToFolder.close();",
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();"
]
},
{
"added": [
"\t\t\t\t\trs.close();"
],
"header": "@@ -479,8 +461,7 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\t\t\tif (rs != null)",
"\t\t\t\t\t\trs.close();"
]
},
{
"added": [
"\t\t\tinsertFirst.close();"
],
"header": "@@ -495,8 +476,7 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (insertFirst != null)",
"\t\t\t\tinsertFirst.close();"
]
},
{
"added": [
"\t\t\trs.close();",
"\t\t\tstmt.close();",
"\t\t\tstmt1.close();",
"\t\t\tinsertAttach.close();"
],
"header": "@@ -559,14 +539,10 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();",
"\t\t\tif (stmt1 != null)",
"\t\t\t\tstmt1.close();",
"\t\t\tif (insertAttach != null)",
"\t\t\t\tinsertAttach.close();"
]
},
{
"added": [
"\t\t\tdeleteExp.close();",
"\t\t\tselExp.close();"
],
"header": "@@ -606,10 +582,8 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (deleteExp != null)",
"\t\t\t\tdeleteExp.close();",
"\t\t\tif (selExp != null)",
"\t\t\t\tselExp.close();"
]
},
{
"added": [
"\t\t\t\trs1.close();",
"\t\t\trs.close();",
"\t\t\tstmt.close();",
"\t\t\tstmt1.close();",
"\t\t\tstmt2.close();",
"\t\t\tstmt3.close();"
],
"header": "@@ -735,20 +709,14 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\t\tif (rs1 != null)",
"\t\t\t\t\trs1.close();",
"\t\t\tif (rs != null)",
"\t\t\t\trs.close();",
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();",
"\t\t\tif (stmt1 != null)",
"\t\t\t\tstmt1.close();",
"\t\t\tif (stmt2 != null)",
"\t\t\t\tstmt2.close();",
"\t\t\tif (stmt3 != null)",
"\t\t\t\tstmt3.close();"
]
},
{
"added": [
"\t\t\tstmt.close();"
],
"header": "@@ -785,8 +753,7 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tif (stmt != null)",
"\t\t\t\tstmt.close();"
]
}
]
}
] |
derby-DERBY-3454-c085d07b
|
DERBY-3454: All the public methods of ReplicationMessageTransmit and ReplicationMessageReceive should
ensure that the socket connection exists (is not null) before performing the respective operations.
Contributed by V Narayanan
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@633026 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/replication/net/ReplicationMessageReceive.java",
"hunks": [
{
"added": [
" sendMessage(ack);"
],
"header": "@@ -236,7 +236,7 @@ public class ReplicationMessageReceive {",
"removed": [
" socketConn.writeMessage(ack);"
]
},
{
"added": [
" sendMessage(ack);"
],
"header": "@@ -244,7 +244,7 @@ public class ReplicationMessageReceive {",
"removed": [
" socketConn.writeMessage(ack);"
]
},
{
"added": [
" sendMessage(ack);"
],
"header": "@@ -294,7 +294,7 @@ public class ReplicationMessageReceive {",
"removed": [
" socketConn.writeMessage(ack);"
]
},
{
"added": [
" sendMessage(ack);"
],
"header": "@@ -311,7 +311,7 @@ public class ReplicationMessageReceive {",
"removed": [
" socketConn.writeMessage(ack);"
]
},
{
"added": [
" sendMessage(ack);"
],
"header": "@@ -344,7 +344,7 @@ public class ReplicationMessageReceive {",
"removed": [
" socketConn.writeMessage(ack);"
]
},
{
"added": [
" * @throws IOException 1) if an exception occurs while transmitting",
" * the message,",
" * 2) if the connection handle is invalid.",
" checkSocketConnection();"
],
"header": "@@ -357,10 +357,12 @@ public class ReplicationMessageReceive {",
"removed": [
" * @throws IOException if an exception occurs while transmitting",
" * the message."
]
},
{
"added": [
" * @throws IOException 1) if an exception occurs while reading from the",
" * stream,",
" * 2) if the connection handle is invalid.",
" checkSocketConnection();"
],
"header": "@@ -375,11 +377,13 @@ public class ReplicationMessageReceive {",
"removed": [
" * @throws IOException if an exception occurs while reading from the",
" * stream."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/replication/net/ReplicationMessageTransmit.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.reference.MessageId;"
],
"header": "@@ -30,6 +30,7 @@ import java.security.PrivilegedExceptionAction;",
"removed": []
},
{
"added": [
" if(socketConn != null) {",
" socketConn.tearDown();",
" }"
],
"header": "@@ -140,7 +141,9 @@ public class ReplicationMessageTransmit {",
"removed": [
" socketConn.tearDown();"
]
},
{
"added": [
" * @throws IOException 1) if an exception occurs while transmitting",
" * the message.",
" * 2) if the connection handle is invalid.",
" checkSocketConnection();"
],
"header": "@@ -149,10 +152,12 @@ public class ReplicationMessageTransmit {",
"removed": [
" * @throws IOException if an exception occurs while transmitting",
" * the message."
]
},
{
"added": [
" * @throws IOException 1) if an exception occurs while reading from the",
" * stream.",
" * 2) if the connection handle is invalid.",
" checkSocketConnection();"
],
"header": "@@ -166,11 +171,13 @@ public class ReplicationMessageTransmit {",
"removed": [
" * @throws IOException if an exception occurs while reading from the",
" * stream."
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/MessageId.java",
"hunks": [
{
"added": [
" String REPLICATION_INVALID_CONNECTION_HANDLE = \"R012\";"
],
"header": "@@ -189,6 +189,7 @@ public interface MessageId {",
"removed": []
}
]
}
] |
derby-DERBY-3457-77a3bd13
|
DERBY-3457: Closing a logical connection must close all associated logical statements.
Patch file: derby-3457-1c-stmt_closing.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@632112 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/LogicalStatementEntity.java",
"hunks": [
{
"added": [
" /** The owner of this logical entity. */",
" private StatementCacheInteractor owner;"
],
"header": "@@ -71,6 +71,8 @@ abstract class LogicalStatementEntity",
"removed": []
},
{
"added": [
" this.owner = cacheInteractor;"
],
"header": "@@ -96,6 +98,7 @@ abstract class LogicalStatementEntity",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/StatementCacheInteractor.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -5,6 +5,7 @@ import java.sql.PreparedStatement;",
"removed": []
},
{
"added": [
" /**",
" * Tells if this interactor is in the process of shutting down.",
" * <p>",
" * If this is true, it means that the logical connection is being closed.",
" */",
" private boolean connCloseInProgress = false;"
],
"header": "@@ -41,6 +42,12 @@ public final class StatementCacheInteractor {",
"removed": []
},
{
"added": [
" /**",
" * Closes all open logical statements created by this cache interactor.",
" * <p>",
" * A cache interactor is bound to a single (caching) logical connection.",
" * @throws SQLException if closing an open logical connection fails",
" */",
" public synchronized void closeOpenLogicalStatements()",
" throws SQLException {",
" // Transist to closing state, to avoid changing the list of open",
" // statements as we work our way through the list.",
" this.connCloseInProgress = true;",
" // Iterate through the list and close the logical statements.",
" Iterator logicalStatements = this.openLogicalStatements.iterator();",
" while (logicalStatements.hasNext()) {",
" LogicalStatementEntity logicalStatement =",
" (LogicalStatementEntity)logicalStatements.next();",
" logicalStatement.close();",
" }",
" // Clear the list for good measure.",
" this.openLogicalStatements.clear();",
" }",
"",
" /**",
" * Designates the specified logical statement as closed.",
" *",
" * @param logicalStmt the logical statement being closed",
" */",
" public synchronized void markClosed(LogicalStatementEntity logicalStmt) {",
" // If we are not in the process of shutting down the logical connection,",
" // remove the notifying statement from the list of open statements.",
" if (!connCloseInProgress) {",
" boolean removed = this.openLogicalStatements.remove(logicalStmt);",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(removed,",
" \"Tried to remove unregistered logical statement: \" +",
" logicalStmt);",
" }",
" }",
" }",
""
],
"header": "@@ -177,6 +184,46 @@ public final class StatementCacheInteractor {",
"removed": []
}
]
}
] |
derby-DERBY-3457-8a0018d2
|
DERBY-3457 (partial): Closing a caching logical connection must close all associated logical statements.
This partial patch adds a list of open logical statements in SCI. The logic for maintaining the list when statements are closed, and for closing remaining open statements when the caching logical connection is closed, will follow in another patch.
Patch file: derby-3457-2a-stmt_registration.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@631577 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/StatementCacheInteractor.java",
"hunks": [
{
"added": [
"import java.util.ArrayList;",
"",
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -4,10 +4,13 @@ import java.sql.CallableStatement;",
"removed": []
},
{
"added": [
" * <li>Return reference to existing or newly created statement.</li>"
],
"header": "@@ -20,7 +23,7 @@ import org.apache.derby.jdbc.ClientDriver;",
"removed": [
" * <li>Return referecne to existing or newly created statement.</li>"
]
},
{
"added": [
" /** List of open logical statements created by this cache interactor. */",
" //@GuardedBy(\"this\")",
" private final ArrayList openLogicalStatements = new ArrayList();"
],
"header": "@@ -37,6 +40,9 @@ public final class StatementCacheInteractor {",
"removed": []
},
{
"added": [
" return createLogicalPreparedStatement(ps, stmtKey);"
],
"header": "@@ -61,8 +67,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalPreparedStatement(",
" ps, stmtKey, this);"
]
},
{
"added": [
" return createLogicalPreparedStatement(ps, stmtKey);"
],
"header": "@@ -81,8 +86,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalPreparedStatement(",
" ps, stmtKey, this);"
]
},
{
"added": [
" return createLogicalPreparedStatement(ps, stmtKey);"
],
"header": "@@ -103,8 +107,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalPreparedStatement(",
" ps, stmtKey, this);"
]
},
{
"added": [
" return createLogicalPreparedStatement(ps, stmtKey);"
],
"header": "@@ -121,8 +124,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalPreparedStatement(",
" ps, stmtKey, this);"
]
},
{
"added": [
" return createLogicalCallableStatement(cs, stmtKey);"
],
"header": "@@ -136,8 +138,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalCallableStatement(",
" cs, stmtKey, this);"
]
},
{
"added": [
" return createLogicalCallableStatement(cs, stmtKey);"
],
"header": "@@ -154,8 +155,7 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalCallableStatement(",
" cs, stmtKey, this);"
]
},
{
"added": [
" return createLogicalCallableStatement(cs, stmtKey);",
" }",
"",
" /**",
" * Creates a logical prepared statement.",
" *",
" * @param ps the underlying physical prepared statement",
" * @param stmtKey the statement key for the physical statement",
" * @return A logical prepared statement.",
" * @throws SQLException if creating a logical prepared statement fails",
" */",
" private PreparedStatement createLogicalPreparedStatement(",
" PreparedStatement ps,",
" StatementKey stmtKey)",
" throws SQLException {",
" LogicalPreparedStatement logicalPs =",
" ClientDriver.getFactory().newLogicalPreparedStatement(",
" ps, stmtKey, this);",
" this.openLogicalStatements.add(logicalPs);",
" return logicalPs;",
" }",
"",
" /**",
" * Creates a logical callable statement.",
" *",
" * @param cs the underlying physical callable statement",
" * @param stmtKey the statement key for the physical statement",
" * @return A logical callable statement.",
" * @throws SQLException if creating a logical callable statement fails",
" */",
" private CallableStatement createLogicalCallableStatement(",
" CallableStatement cs,",
" StatementKey stmtKey)",
" throws SQLException {",
" LogicalCallableStatement logicalCs =",
" ClientDriver.getFactory().newLogicalCallableStatement(",
" cs, stmtKey, this);",
" this.openLogicalStatements.add(logicalCs);",
" return logicalCs;"
],
"header": "@@ -174,8 +174,45 @@ public final class StatementCacheInteractor {",
"removed": [
" return ClientDriver.getFactory().newLogicalCallableStatement(",
" cs, stmtKey, this);"
]
}
]
}
] |
derby-DERBY-3458-cd8191c8
|
DERBY-3458
Patch submitted by Stephan van Loendersloot. dblook should set the current schema to be
a system schema so that collation of system schema (UCS_BASIC) gets used when dealing
with queries using system table columns and character constants. Stephan has also
provided comprehensive test for the dblook code change.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634037 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/tools/dblook.java",
"hunks": [
{
"added": [
"\t\t// Set the system schema to ensure that UCS_BASIC collation is used.",
"\t\tStatement stmt = conn.createStatement();",
"\t\tstmt.executeUpdate(\"SET SCHEMA SYS\");",
""
],
"header": "@@ -584,13 +584,16 @@ public final class dblook {",
"removed": [
"\t\tStatement stmt = conn.createStatement();"
]
}
]
}
] |
derby-DERBY-3461-e8aba825
|
DERBY-3461 The class EmbedSQLWarning is really an SQLWarning factory class. Therefore, it has been renamed to SQLWarningFactory.
The unused method generateCsSQLWarning() has been removed.
The remaining methods have been renamed to remove the reference to 'Embed'.
Since the re-factored class is generic, it has been moved to org.apache.derby.iapi.error package.
Contributed by Dibyendu Majumdar Email: dibyendu at mazumdar dot demon dot co dot uk
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@633290 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -30,7 +30,6 @@ import java.sql.DatabaseMetaData;",
"removed": [
"import org.apache.derby.impl.jdbc.EmbedSQLWarning;"
]
},
{
"added": [
"import org.apache.derby.iapi.error.SQLWarningFactory;"
],
"header": "@@ -39,6 +38,7 @@ import java.io.ObjectInput;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.SQLWarningFactory;"
],
"header": "@@ -40,6 +40,7 @@ import org.apache.derby.iapi.jdbc.EngineConnection;",
"removed": []
},
{
"added": [
"\t\t\t\t\taddWarning(SQLWarningFactory.newSQLWarning(SQLState.DATABASE_EXISTS, getDBName()));"
],
"header": "@@ -346,7 +347,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\t\taddWarning(EmbedSQLWarning.newEmbedSQLWarning(SQLState.DATABASE_EXISTS, getDBName()));"
]
},
{
"added": [
"\t\t\t\taddWarning(SQLWarningFactory.newSQLWarning(SQLState.SQL_AUTHORIZATION_WITH_NO_AUTHENTICATION));"
],
"header": "@@ -499,7 +500,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\taddWarning(EmbedSQLWarning.newEmbedSQLWarning(SQLState.SQL_AUTHORIZATION_WITH_NO_AUTHENTICATION));"
]
},
{
"added": [
"\t\t\t\taddWarning(SQLWarningFactory.newSQLWarning(SQLState.DATABASE_EXISTS, dbname));"
],
"header": "@@ -2318,7 +2319,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\taddWarning(EmbedSQLWarning.newEmbedSQLWarning(SQLState.DATABASE_EXISTS, dbname));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericAggregateResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.SQLWarningFactory;"
],
"header": "@@ -23,6 +23,7 @@ package org.apache.derby.impl.sql.execute;",
"removed": []
},
{
"added": [],
"header": "@@ -32,7 +33,6 @@ import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;",
"removed": [
"import org.apache.derby.impl.jdbc.EmbedSQLWarning;"
]
},
{
"added": [
"\t\t\taddWarning(SQLWarningFactory.newSQLWarning(SQLState.LANG_NULL_ELIMINATED_IN_SET_FUNCTION));"
],
"header": "@@ -174,7 +174,7 @@ abstract class GenericAggregateResultSet extends NoPutResultSetImpl",
"removed": [
"\t\t\taddWarning(EmbedSQLWarning.newEmbedSQLWarning(SQLState.LANG_NULL_ELIMINATED_IN_SET_FUNCTION));"
]
}
]
}
] |
derby-DERBY-3465-9dc398ab
|
DERBY-3465: Removed a number of try-catch clauses that caught exceptions and printed a generic message without any information about the exception itself.
Patch file: derby-3465-1a-remove_try_catch.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@631286 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3472-8dd309eb
|
DERBY-3472: Move the mf.workToDo call outside the synchronized block to avoid deadlocks.
Contributed by Jorgen Loland
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@633063 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/replication/buffer/ReplicationLogBuffer.java",
"hunks": [
{
"added": [
" * ",
" * Important: If methods in this class calls methods outside this package",
" * (e.g. MasterFactory#workToDo), make sure that deadlocks are not ",
" * introduced. If possible, a call to any method in another package should be ",
" * done without holding latches in this class."
],
"header": "@@ -53,6 +53,11 @@ import org.apache.derby.iapi.services.replication.master.MasterFactory;",
"removed": []
},
{
"added": [
" boolean switchedBuffer = false; ",
" // element or throws a LogBufferFullException. No need to call",
" // MasterFactory.workToDo becase switchDirtyBuffer will not add",
" // a buffer to the dirty buffer list when currentDirtyBuffer ",
" // is null",
" switchedBuffer = true;"
],
"header": "@@ -135,17 +140,22 @@ public class ReplicationLogBuffer {",
"removed": [
" // element or throws a LogBufferFullException"
]
},
{
"added": [
" // DERBY-3472 - we need to release the listLatch before calling workToDo",
" // to avoid deadlock with the logShipper thread",
" if (switchedBuffer) {",
" // Notify the master controller that a log buffer element is full ",
" // and work needs to be done.",
" mf.workToDo();",
" }"
],
"header": "@@ -164,6 +174,13 @@ public class ReplicationLogBuffer {",
"removed": []
},
{
"added": [
" // No need to call MasterFactory.workToDo because the ",
" // caller of next() will perform the work required on the ",
" // buffer that was just moved to the dirty buffer list."
],
"header": "@@ -186,6 +203,9 @@ public class ReplicationLogBuffer {",
"removed": []
},
{
"added": [],
"header": "@@ -317,9 +337,6 @@ public class ReplicationLogBuffer {",
"removed": [
" //Notify the master controller that a log buffer element is full and ",
" //work needs to be done.",
" mf.workToDo();"
]
}
]
}
] |
derby-DERBY-3478-edeac317
|
DERBY-3478 Simple column names specified as part of "AS" clause in a table expression are ignored if the table expression is a view.
Patch DERBY-3478 fixes this issue and adds a new test case.
The fix adds a call to propagateDCLInfo also for views in
FromBaseTable.bindNonVTITables which seems to have been always
missing.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@808494 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3484-1a4ea31f
|
DERBY-3484 For JDBC 3.0 java.sql.Types constants use directly instead of through JDBC30Translation as JSR169 supports all the types
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@632413 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Types.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/FdocaConstants.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/SQLTypes.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.impl.drda;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/catalog/types/BaseTypeIdImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -41,7 +41,6 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation; // needed for BLOB/CLOB types"
]
},
{
"added": [
" JDBCTypeId = Types.CLOB;"
],
"header": "@@ -344,7 +343,7 @@ public class BaseTypeIdImpl implements Formatable",
"removed": [
" JDBCTypeId = JDBC20Translation.SQL_TYPES_CLOB;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/reference/JDBC20Translation.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.iapi.reference;",
"removed": [
"import java.sql.Types;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/info/JVMInfo.java",
"hunks": [
{
"added": [
"import java.sql.Types;",
""
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/DataTypeUtilities.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"\t\t\tcase Types.BOOLEAN:"
],
"header": "@@ -56,7 +55,7 @@ public abstract class DataTypeUtilities {",
"removed": [
"\t\t\tcase JDBC30Translation.SQL_TYPES_BOOLEAN:"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/TypeId.java",
"hunks": [
{
"added": [],
"header": "@@ -42,8 +42,6 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
" case Types.BOOLEAN:"
],
"header": "@@ -371,7 +369,7 @@ public final class TypeId implements Formatable",
"removed": [
" case JDBC30Translation.SQL_TYPES_BOOLEAN:"
]
},
{
"added": [
" case Types.BLOB:",
" case Types.CLOB:"
],
"header": "@@ -402,14 +400,14 @@ public final class TypeId implements Formatable",
"removed": [
" case JDBC20Translation.SQL_TYPES_BLOB:",
" case JDBC20Translation.SQL_TYPES_CLOB:"
]
},
{
"added": [
" return getBuiltInTypeId(Types.BLOB);",
" return getBuiltInTypeId(Types.CLOB);"
],
"header": "@@ -505,11 +503,11 @@ public final class TypeId implements Formatable",
"removed": [
" return getBuiltInTypeId(JDBC20Translation.SQL_TYPES_BLOB);",
" return getBuiltInTypeId(JDBC20Translation.SQL_TYPES_CLOB);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java",
"hunks": [
{
"added": [
"\t\tif (colType == Types.JAVA_OBJECT) {"
],
"header": "@@ -1192,7 +1192,7 @@ public abstract class EmbedPreparedStatement",
"removed": [
"\t\tif (colType == org.apache.derby.iapi.reference.JDBC20Translation.SQL_TYPES_JAVA_OBJECT) {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/Util.java",
"hunks": [
{
"added": [],
"header": "@@ -32,7 +32,6 @@ import org.apache.derby.iapi.error.ExceptionSeverity;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SYSALIASESRowFactory.java",
"hunks": [
{
"added": [
"import java.sql.Types;",
""
],
"header": "@@ -21,10 +21,11 @@",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/BinaryRelationalOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CastNode.java",
"hunks": [
{
"added": [],
"header": "@@ -45,7 +45,6 @@ import org.apache.derby.iapi.sql.compile.TypeCompiler;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"\t\t\t\tcase Types.BOOLEAN:",
"\t\t\t\t\tif (destJDBCTypeId == Types.BIT || destJDBCTypeId == Types.BOOLEAN)"
],
"header": "@@ -285,9 +284,9 @@ public class CastNode extends ValueNode",
"removed": [
"\t\t\t\tcase JDBC30Translation.SQL_TYPES_BOOLEAN:",
"\t\t\t\t\tif (destJDBCTypeId == Types.BIT || destJDBCTypeId == JDBC30Translation.SQL_TYPES_BOOLEAN)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/LengthOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -34,7 +34,6 @@ import org.apache.derby.iapi.sql.compile.TypeCompiler;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/TypeCompilerFactoryImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -27,8 +27,6 @@ import org.apache.derby.iapi.types.TypeId;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
" case Types.BOOLEAN:"
],
"header": "@@ -92,7 +90,7 @@ public class TypeCompilerFactoryImpl implements TypeCompilerFactory",
"removed": [
" case JDBC30Translation.SQL_TYPES_BOOLEAN:"
]
},
{
"added": [
" case Types.BLOB:"
],
"header": "@@ -130,7 +128,7 @@ public class TypeCompilerFactoryImpl implements TypeCompilerFactory",
"removed": [
" case JDBC20Translation.SQL_TYPES_BLOB:"
]
},
{
"added": [
" case Types.CLOB:"
],
"header": "@@ -142,7 +140,7 @@ public class TypeCompilerFactoryImpl implements TypeCompilerFactory",
"removed": [
" case JDBC20Translation.SQL_TYPES_CLOB:"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/JDBC30Translation.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.shared.common.reference;",
"removed": [
"import java.sql.Types;"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/TestUtil.java",
"hunks": [
{
"added": [],
"header": "@@ -37,7 +37,6 @@ import java.security.PrivilegedExceptionAction;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"\t\t\tcase Types.BOOLEAN : return \"Types.BOOLEAN\";"
],
"header": "@@ -525,7 +524,7 @@ public class TestUtil {",
"removed": [
"\t\t\tcase JDBC30Translation.SQL_TYPES_BOOLEAN : return \"Types.BOOLEAN\";"
]
}
]
},
{
"file": "java/tools/org/apache/derby/tools/JDBCDisplayUtil.java",
"hunks": [
{
"added": [
"\t\t\tcase Types.JAVA_OBJECT:"
],
"header": "@@ -677,7 +677,7 @@ public class JDBCDisplayUtil {",
"removed": [
"\t\t\tcase org.apache.derby.iapi.reference.JDBC20Translation.SQL_TYPES_JAVA_OBJECT:"
]
},
{
"added": [
"\t\t\tcase Types.JAVA_OBJECT:"
],
"header": "@@ -1187,7 +1187,7 @@ public class JDBCDisplayUtil {",
"removed": [
"\t\t\tcase org.apache.derby.iapi.reference.JDBC20Translation.SQL_TYPES_JAVA_OBJECT:"
]
}
]
}
] |
derby-DERBY-3484-1e8a20fb
|
DERBY-3484 For JDBC 2.0/3.0 java.sql.ResultSet constants use directly instead of through JDBC[2,3]0Translation as JSR169 supports all the types
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@632456 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.client.am;",
"removed": [
"import org.apache.derby.shared.common.reference.JDBC30Translation;"
]
},
{
"added": [
" private int holdability = ResultSet.HOLD_CURSORS_OVER_COMMIT;"
],
"header": "@@ -71,7 +70,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" private int holdability = JDBC30Translation.HOLD_CURSORS_OVER_COMMIT;"
]
},
{
"added": [
" holdability = ResultSet.HOLD_CURSORS_OVER_COMMIT;"
],
"header": "@@ -300,7 +299,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" holdability = JDBC30Translation.HOLD_CURSORS_OVER_COMMIT;"
]
},
{
"added": [
" if (holdability == ResultSet.HOLD_CURSORS_OVER_COMMIT)"
],
"header": "@@ -1429,7 +1428,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" if (holdability == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT)"
]
},
{
"added": [
" if (resultSetHoldability == ResultSet.HOLD_CURSORS_OVER_COMMIT) {",
" resultSetHoldability = ResultSet.CLOSE_CURSORS_AT_COMMIT;"
],
"header": "@@ -1701,8 +1700,8 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" if (resultSetHoldability == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {",
" resultSetHoldability = JDBC30Translation.CLOSE_CURSORS_AT_COMMIT;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/DatabaseMetaData.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.client.am;",
"removed": [
"import org.apache.derby.shared.common.reference.JDBC30Translation;"
]
},
{
"added": [
" if (connection_.holdability() == ResultSet.HOLD_CURSORS_OVER_COMMIT) {"
],
"header": "@@ -1438,7 +1437,7 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if (connection_.holdability() == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {"
]
},
{
"added": [
" if (connection_.holdability() == ResultSet.HOLD_CURSORS_OVER_COMMIT) {"
],
"header": "@@ -1783,7 +1782,7 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if (connection_.holdability() == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {"
]
},
{
"added": [
" if (connection_.holdability() == ResultSet.HOLD_CURSORS_OVER_COMMIT) {"
],
"header": "@@ -1840,7 +1839,7 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if (connection_.holdability() == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/SectionManager.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.client.am;",
"removed": [
"import org.apache.derby.shared.common.reference.JDBC30Translation;"
]
},
{
"added": [
" if (resultSetHoldability == ResultSet.HOLD_CURSORS_OVER_COMMIT) {",
" } else if (resultSetHoldability == ResultSet.CLOSE_CURSORS_AT_COMMIT) {"
],
"header": "@@ -103,9 +102,9 @@ public class SectionManager {",
"removed": [
" if (resultSetHoldability == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {",
" } else if (resultSetHoldability == JDBC30Translation.CLOSE_CURSORS_AT_COMMIT) {"
]
},
{
"added": [
" if (resultSetHoldability == ResultSet.HOLD_CURSORS_OVER_COMMIT) {",
" } else if (resultSetHoldability == ResultSet.CLOSE_CURSORS_AT_COMMIT) {"
],
"header": "@@ -116,9 +115,9 @@ public class SectionManager {",
"removed": [
" if (resultSetHoldability == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT) {",
" } else if (resultSetHoldability == JDBC30Translation.CLOSE_CURSORS_AT_COMMIT) {"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Statement.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.client.am;",
"removed": [
"import org.apache.derby.shared.common.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetStatementReply.java",
"hunks": [
{
"added": [
"import java.sql.ResultSet;",
""
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
},
{
"added": [],
"header": "@@ -35,7 +37,6 @@ import org.apache.derby.client.am.Utils;",
"removed": [
"import org.apache.derby.shared.common.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
"\t\tif (stmt.getCurrentDrdaResultSet().withHoldCursor == ResultSet.HOLD_CURSORS_OVER_COMMIT)"
],
"header": "@@ -2743,7 +2743,7 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\tif (stmt.getCurrentDrdaResultSet().withHoldCursor == JDBC30Translation.HOLD_CURSORS_OVER_COMMIT)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/catalog/TriggerNewTransitionRows.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.catalog;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/catalog/TriggerOldTransitionRows.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.catalog;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection.java",
"hunks": [
{
"added": [
"import java.sql.ResultSet;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
},
{
"added": [],
"header": "@@ -37,7 +38,6 @@ import java.io.ObjectInput;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/reference/JDBC20Translation.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import java.sql.ResultSet;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -25,8 +25,6 @@ import org.apache.derby.iapi.error.ExceptionSeverity;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"import java.sql.ResultSet;"
],
"header": "@@ -61,6 +59,7 @@ import java.sql.CallableStatement;",
"removed": []
},
{
"added": [
"\tprivate int\tconnectionHoldAbility = ResultSet.HOLD_CURSORS_OVER_COMMIT;"
],
"header": "@@ -147,7 +146,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\tprivate int\tconnectionHoldAbility = JDBC30Translation.HOLD_CURSORS_OVER_COMMIT;"
]
},
{
"added": [
"\t\treturn createStatement(ResultSet.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t\t ResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1119,8 +1118,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\treturn createStatement(JDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t\t JDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\treturn prepareStatement(sql,ResultSet.TYPE_FORWARD_ONLY,",
"\t\t\tResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1198,8 +1197,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\treturn prepareStatement(sql,JDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\tJDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\t\tResultSet.TYPE_FORWARD_ONLY,",
"\t\t\tResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1287,8 +1286,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\tJDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\t\tResultSet.TYPE_FORWARD_ONLY,",
"\t\t\tResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1322,8 +1321,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\tJDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\t\tResultSet.TYPE_FORWARD_ONLY,",
"\t\t\tResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1354,8 +1353,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\tJDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\treturn prepareCall(sql, ResultSet.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t ResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -1411,8 +1410,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\treturn prepareCall(sql, JDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t JDBC20Translation.CONCUR_READ_ONLY,"
]
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\t ResultSet.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t\t\t\t\t\t ResultSet.CONCUR_READ_ONLY,"
],
"header": "@@ -2373,8 +2372,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\t\t\t\t\t\t\t\t JDBC20Translation.TYPE_FORWARD_ONLY,",
"\t\t\t\t\t\t\t\t\t\t\t JDBC20Translation.CONCUR_READ_ONLY,"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java",
"hunks": [
{
"added": [],
"header": "@@ -39,8 +39,6 @@ import org.apache.derby.impl.sql.execute.GenericExecutionFactory;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"\t\tif ((type == ResultSet.TYPE_FORWARD_ONLY) ||",
"\t\t (type == ResultSet.TYPE_SCROLL_INSENSITIVE)) {"
],
"header": "@@ -2818,8 +2816,8 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif ((type == JDBC20Translation.TYPE_FORWARD_ONLY) ||",
"\t\t (type == JDBC20Translation.TYPE_SCROLL_INSENSITIVE)) {"
]
},
{
"added": [
" \t\tif (type == ResultSet.TYPE_SCROLL_SENSITIVE) {"
],
"header": "@@ -2838,7 +2836,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
" \t\tif (type == JDBC20Translation.TYPE_SCROLL_SENSITIVE) {"
]
},
{
"added": [
" \t\tif (type == ResultSet.TYPE_SCROLL_INSENSITIVE) {"
],
"header": "@@ -2859,7 +2857,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
" \t\tif (type == JDBC20Translation.TYPE_SCROLL_INSENSITIVE) {"
]
},
{
"added": [
" \t\tif (type == ResultSet.TYPE_SCROLL_INSENSITIVE) {"
],
"header": "@@ -2875,7 +2873,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
" \t\tif (type == JDBC20Translation.TYPE_SCROLL_INSENSITIVE) {"
]
},
{
"added": [
"\t\tif (type == ResultSet.TYPE_FORWARD_ONLY)"
],
"header": "@@ -2907,7 +2905,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif (type == JDBC20Translation.TYPE_FORWARD_ONLY)"
]
},
{
"added": [
"\t\tif (type == ResultSet.TYPE_FORWARD_ONLY)"
],
"header": "@@ -2921,7 +2919,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif (type == JDBC20Translation.TYPE_FORWARD_ONLY)"
]
},
{
"added": [
"\t\tif (type == ResultSet.TYPE_FORWARD_ONLY)"
],
"header": "@@ -2935,7 +2933,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif (type == JDBC20Translation.TYPE_FORWARD_ONLY)"
]
},
{
"added": [
"\t\tif (type == ResultSet.TYPE_SCROLL_INSENSITIVE) {"
],
"header": "@@ -2950,7 +2948,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif (type == JDBC20Translation.TYPE_SCROLL_INSENSITIVE) {"
]
},
{
"added": [
"\t\tif (type == ResultSet.TYPE_SCROLL_INSENSITIVE) {"
],
"header": "@@ -2971,7 +2969,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tif (type == JDBC20Translation.TYPE_SCROLL_INSENSITIVE) {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -44,7 +44,6 @@ import org.apache.derby.iapi.services.io.LimitReader;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -52,8 +52,6 @@ import org.apache.derby.iapi.services.io.StreamStorable;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"\t\t\tconcurrencyOfThisResultSet = java.sql.ResultSet.CONCUR_READ_ONLY;",
"\t\telse if (stmt.resultSetConcurrency == java.sql.ResultSet.CONCUR_READ_ONLY)",
"\t\t\tconcurrencyOfThisResultSet = java.sql.ResultSet.CONCUR_READ_ONLY;",
"\t\t\t\tconcurrencyOfThisResultSet = java.sql.ResultSet.CONCUR_READ_ONLY;",
"\t\t\t\t\tconcurrencyOfThisResultSet = java.sql.ResultSet.CONCUR_UPDATABLE;"
],
"header": "@@ -239,16 +237,16 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\tconcurrencyOfThisResultSet = JDBC20Translation.CONCUR_READ_ONLY;",
"\t\telse if (stmt.resultSetConcurrency == JDBC20Translation.CONCUR_READ_ONLY)",
"\t\t\tconcurrencyOfThisResultSet = JDBC20Translation.CONCUR_READ_ONLY;",
"\t\t\t\tconcurrencyOfThisResultSet = JDBC20Translation.CONCUR_READ_ONLY;",
"\t\t\t\t\tconcurrencyOfThisResultSet = JDBC20Translation.CONCUR_UPDATABLE;"
]
},
{
"added": [
"\t\tif (concurrencyOfThisResultSet == java.sql.ResultSet.CONCUR_UPDATABLE)"
],
"header": "@@ -256,7 +254,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tif (concurrencyOfThisResultSet == JDBC20Translation.CONCUR_UPDATABLE)"
]
},
{
"added": [
" if (stmt.resultSetType == java.sql.ResultSet.TYPE_FORWARD_ONLY)"
],
"header": "@@ -284,7 +282,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" if (stmt.resultSetType == JDBC20Translation.TYPE_FORWARD_ONLY)"
]
},
{
"added": [
"\t\t\t\t\t concurrencyOfThisResultSet==java.sql.ResultSet.CONCUR_READ_ONLY, "
],
"header": "@@ -406,7 +404,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\t\t\t concurrencyOfThisResultSet==JDBC20Translation.CONCUR_READ_ONLY, "
]
},
{
"added": [
"\t\t// some code assumes you can close a java.sql.ResultSet more than once."
],
"header": "@@ -550,7 +548,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t// some code assumes you can close a ResultSet more than once."
]
},
{
"added": [
" return java.sql.ResultSet.HOLD_CURSORS_OVER_COMMIT;",
" return java.sql.ResultSet.CLOSE_CURSORS_AT_COMMIT;"
],
"header": "@@ -1614,9 +1612,9 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" return JDBC30Translation.HOLD_CURSORS_OVER_COMMIT;",
" return JDBC30Translation.CLOSE_CURSORS_AT_COMMIT;"
]
},
{
"added": [
"\t\tif (stmt.getResultSetType() == java.sql.ResultSet.TYPE_FORWARD_ONLY)"
],
"header": "@@ -4436,7 +4434,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tif (stmt.getResultSetType() == JDBC20Translation.TYPE_FORWARD_ONLY)"
]
},
{
"added": [
" if (getConcurrency() != java.sql.ResultSet.CONCUR_UPDATABLE) {"
],
"header": "@@ -4446,7 +4444,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" if (getConcurrency() != JDBC20Translation.CONCUR_UPDATABLE) {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -21,8 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;",
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
" private int fetchDirection = java.sql.ResultSet.FETCH_FORWARD;"
],
"header": "@@ -93,7 +91,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" private int fetchDirection = JDBC20Translation.FETCH_FORWARD;"
]
},
{
"added": [
"\t\t\t\t (lcc.getDefaultSchema(), sql, resultSetConcurrency==",
" java.sql.ResultSet.CONCUR_READ_ONLY, false);",
"\t\t\t\t\tpreparedStatement.getActivation(lcc, resultSetType ==",
" java.sql.ResultSet.TYPE_SCROLL_INSENSITIVE);"
],
"header": "@@ -606,9 +604,11 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t (lcc.getDefaultSchema(), sql, resultSetConcurrency==JDBC20Translation.CONCUR_READ_ONLY, false);",
"\t\t\t\t\tpreparedStatement.getActivation(lcc, resultSetType == JDBC20Translation.TYPE_SCROLL_INSENSITIVE);"
]
},
{
"added": [
" if (direction == java.sql.ResultSet.FETCH_FORWARD || ",
" direction == java.sql.ResultSet.FETCH_REVERSE ||",
" direction == java.sql.ResultSet.FETCH_UNKNOWN )"
],
"header": "@@ -804,9 +804,9 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" if (direction == JDBC20Translation.FETCH_FORWARD || ",
" direction == JDBC20Translation.FETCH_REVERSE ||",
" direction == JDBC20Translation.FETCH_UNKNOWN )"
]
},
{
"added": [
" if (resultSetHoldability == java.sql.ResultSet.CLOSE_CURSORS_AT_COMMIT)"
],
"header": "@@ -1706,7 +1706,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" if (resultSetHoldability == JDBC30Translation.CLOSE_CURSORS_AT_COMMIT)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromVTI.java",
"hunks": [
{
"added": [],
"header": "@@ -54,7 +54,6 @@ import org.apache.derby.iapi.sql.dictionary.TableDescriptor;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbedPooledConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.jdbc;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"import java.sql.ResultSet;"
],
"header": "@@ -38,6 +37,7 @@ import org.apache.derby.impl.jdbc.EmbedCallableStatement;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbedXAConnection.java",
"hunks": [
{
"added": [
"import java.sql.ResultSet;"
],
"header": "@@ -26,9 +26,9 @@ import org.apache.derby.iapi.jdbc.EngineConnection;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbedXAResource.java",
"hunks": [
{
"added": [
"import java.sql.ResultSet;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [],
"header": "@@ -32,7 +33,6 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/vti/UpdatableVTITemplate.java",
"hunks": [
{
"added": [],
"header": "@@ -41,7 +41,6 @@ import java.sql.Time;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC20Translation;"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/JDBC30Translation.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import java.sql.ResultSet;"
]
}
]
}
] |
derby-DERBY-3484-a0118e1c
|
DERBY-3484 For JDBC 3.0 java.sql.Statement constants use directly instead of through JDBC30Translation as JSR169 supports all the types
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@632414 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement.java",
"hunks": [
{
"added": [
"import java.sql.Statement;"
],
"header": "@@ -37,6 +37,7 @@ import java.sql.CallableStatement;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
"\t\t\tStatement.NO_GENERATED_KEYS,"
],
"header": "@@ -1201,7 +1201,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC30Translation.NO_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\tStatement.NO_GENERATED_KEYS,"
],
"header": "@@ -1227,7 +1227,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC30Translation.NO_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\tStatement.NO_GENERATED_KEYS,"
],
"header": "@@ -1255,7 +1255,7 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tJDBC30Translation.NO_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -1291,8 +1291,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -1326,8 +1326,8 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -44,7 +44,6 @@ import org.apache.derby.iapi.services.io.LimitReader;",
"removed": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
]
},
{
"added": [
"import java.sql.Statement;"
],
"header": "@@ -58,6 +57,7 @@ import java.sql.ResultSet;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedStatement.java",
"hunks": [
{
"added": [
"import java.sql.Statement;"
],
"header": "@@ -37,6 +37,7 @@ import org.apache.derby.iapi.jdbc.EngineStatement;",
"removed": []
},
{
"added": [
"\t\texecute(sql, true, false, Statement.NO_GENERATED_KEYS, null, null);"
],
"header": "@@ -150,7 +151,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\texecute(sql, true, false, JDBC30Translation.NO_GENERATED_KEYS, null, null);"
]
},
{
"added": [
"\t\texecute(sql, false, true, Statement.NO_GENERATED_KEYS, null, null);"
],
"header": "@@ -173,7 +174,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\texecute(sql, false, true, JDBC30Translation.NO_GENERATED_KEYS, null, null);"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -219,8 +220,8 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -246,8 +247,8 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
},
{
"added": [
"\t\treturn execute(sql, false, false, Statement.NO_GENERATED_KEYS, null, null);"
],
"header": "@@ -553,7 +554,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\treturn execute(sql, false, false, JDBC30Translation.NO_GENERATED_KEYS, null, null);"
]
},
{
"added": [
"\t\t\tif (autoGeneratedKeys == Statement.RETURN_GENERATED_KEYS)"
],
"header": "@@ -619,7 +620,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\tif (autoGeneratedKeys == JDBC30Translation.RETURN_GENERATED_KEYS)"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -673,8 +674,8 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
},
{
"added": [
"\t\t\t\t? Statement.NO_GENERATED_KEYS",
"\t\t\t\t: Statement.RETURN_GENERATED_KEYS,"
],
"header": "@@ -702,8 +703,8 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t? JDBC30Translation.NO_GENERATED_KEYS",
"\t\t\t\t: JDBC30Translation.RETURN_GENERATED_KEYS,"
]
},
{
"added": [
"\t\treturn getMoreResults(Statement.CLOSE_ALL_RESULTS);"
],
"header": "@@ -756,7 +757,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\treturn getMoreResults(JDBC30Translation.CLOSE_ALL_RESULTS);"
]
},
{
"added": [
"\t\treturn execute((String)batchElement, false, true, Statement.NO_GENERATED_KEYS, null, null);"
],
"header": "@@ -1009,7 +1010,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\treturn execute((String)batchElement, false, true, JDBC30Translation.NO_GENERATED_KEYS, null, null);"
]
},
{
"added": [
"\t\t\tcase Statement.CLOSE_ALL_RESULTS:",
"\t\t\tcase Statement.CLOSE_CURRENT_RESULT:",
"\t\t\tcase Statement.KEEP_CURRENT_RESULT:"
],
"header": "@@ -1060,14 +1061,14 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\tcase JDBC30Translation.CLOSE_ALL_RESULTS:",
"\t\t\tcase JDBC30Translation.CLOSE_CURRENT_RESULT:",
"\t\t\tcase JDBC30Translation.KEEP_CURRENT_RESULT:"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/JDBC30Translation.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import java.sql.Statement;"
]
}
]
}
] |
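The substitution in the DERBY-3484 diffs above repeatedly maps a boolean "return generated keys" flag onto the `java.sql.Statement` constants that replaced the `JDBC30Translation` ones. A minimal sketch of that pattern follows; the helper class and method names are illustrative assumptions, not Derby code.

```java
import java.sql.Statement;

public class StatementConstantsSketch {
    // Mirror the ternary used in EmbedConnection/EmbedStatement after the
    // change: pick the standard JDBC constant from a boolean flag.
    public static int autoGeneratedKeys(boolean returnKeys) {
        return returnKeys ? Statement.RETURN_GENERATED_KEYS
                          : Statement.NO_GENERATED_KEYS;
    }

    public static void main(String[] args) {
        System.out.println(autoGeneratedKeys(true));
        System.out.println(autoGeneratedKeys(false));
    }
}
```

Because these constants live directly on `java.sql.Statement`, no Derby-internal translation class is needed on JSR169 platforms.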
derby-DERBY-3489-f2ec1d8c
|
DERBY-3489: Error message XRE04 does not include the right port number.
Contributed by V Narayanan
* java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageTransmit.java
The constructor has been modified to accept the SlaveAddress object instead of a host name
and port number as was happening previously.
* java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageReceive.java
Modified the constructor to accept a SlaveAddress object instead of a host name and port number.
Removed the getHostName() and getPort() functions, which seemed superfluous. They are
no longer used in the SlaveController, and were used in only one place in the receiver, where an
exception was being thrown.
* java/engine/org/apache/derby/impl/store/replication/slave/SlaveController.java
slavehost and slaveport are no longer used (a SlaveAddress object is used instead).
Introduced two functions, getHostName and getPortNumber, that return the hostName
and portNumber from the SlaveAddress.
* java/engine/org/apache/derby/impl/store/replication/master/MasterController.java
slavehost and slaveport are no longer used (a SlaveAddress object is used instead).
Introduced two functions, getHostName and getPortNumber, that return the hostName
and portNumber from the SlaveAddress.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@644742 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/MasterController.java",
"hunks": [
{
"added": [
"import java.net.UnknownHostException;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derby.impl.store.replication.master;",
"removed": []
},
{
"added": [
"import org.apache.derby.impl.store.replication.net.SlaveAddress;"
],
"header": "@@ -47,6 +48,7 @@ import org.apache.derby.impl.store.replication.buffer.ReplicationLogBuffer;",
"removed": []
},
{
"added": [
" private SlaveAddress slaveAddr;"
],
"header": "@@ -78,8 +80,7 @@ public class MasterController",
"removed": [
" private String slavehost;",
" private int slaveport;"
]
},
{
"added": [
" try {",
" slaveAddr = new SlaveAddress(slavehost, ",
" (new Integer(slaveport)).intValue());",
" } catch (UnknownHostException uhe) {",
" throw StandardException.newException",
" (SQLState.REPLICATION_CONNECTION_EXCEPTION, uhe, ",
" dbname, getHostName(), String.valueOf(getPortNumber()));",
" }",
" "
],
"header": "@@ -202,8 +203,15 @@ public class MasterController",
"removed": [
" this.slavehost = slavehost;",
" this.slaveport = new Integer(slaveport).intValue();"
]
},
{
"added": [
" transmitter = new ReplicationMessageTransmit(slaveAddr);",
" "
],
"header": "@@ -468,9 +476,8 @@ public class MasterController",
"removed": [
" transmitter = new ReplicationMessageTransmit(slavehost,",
" slaveport,",
" dbname);"
]
},
{
"added": [
" dbname, getHostName(), String.valueOf(getPortNumber()));",
" dbname, getHostName(), String.valueOf(getPortNumber()));"
],
"header": "@@ -493,13 +500,13 @@ public class MasterController",
"removed": [
" dbname, slavehost, String.valueOf(slaveport));",
" dbname, slavehost, String.valueOf(slaveport));"
]
},
{
"added": [
" transmitter = new ReplicationMessageTransmit(slaveAddr);"
],
"header": "@@ -521,13 +528,7 @@ public class MasterController",
"removed": [
" if (transmitter != null) {",
" transmitter.tearDown();",
" }",
" transmitter = new ReplicationMessageTransmit(slavehost,",
" slaveport,",
" dbname);",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageReceive.java",
"hunks": [
{
"added": [],
"header": "@@ -25,7 +25,6 @@ package org.apache.derby.impl.store.replication.net;",
"removed": [
"import java.net.UnknownHostException;"
]
},
{
"added": [
" * @param slaveAddress the address (host name and port number) of the slave",
" * to connect to.",
" * @param dbname the name of the database.",
" public ReplicationMessageReceive(SlaveAddress slaveAddress, ",
" String dbname) {",
" this.slaveAddress = slaveAddress;",
" Monitor.logTextMessage(MessageId.REPLICATION_SLAVE_NETWORK_LISTEN,",
" dbname, ",
" slaveAddress.getHostAddress().getHostName(),",
" String.valueOf(slaveAddress.getPortNumber()));"
],
"header": "@@ -93,36 +92,17 @@ public class ReplicationMessageReceive {",
"removed": [
" * @param hostName a <code>String</code> that contains the host name of",
" * the slave to replicate to.",
" * @param portNumber an integer that contains the port number of the",
" * slave to replicate to.",
" * @param dbname the name of the database",
" *",
" * @throws StandardException If an exception occurs while trying to",
" * resolve the host name.",
" public ReplicationMessageReceive(String hostName, int portNumber, ",
" String dbname)",
" throws StandardException {",
" try {",
" slaveAddress = new SlaveAddress(hostName, portNumber);",
" Monitor.logTextMessage(MessageId.REPLICATION_SLAVE_NETWORK_LISTEN, ",
" dbname, getHostName(), ",
" String.valueOf(getPort()));",
" } catch (UnknownHostException uhe) {",
" // cannot use getPort because SlaveAddress creator threw",
" // exception and has therefore not been initialized",
" String port;",
" if (portNumber > 0) {",
" port = String.valueOf(portNumber);",
" } else {",
" port = String.valueOf(SlaveAddress.DEFAULT_PORT_NO);",
" }",
" throw StandardException.newException",
" (SQLState.REPLICATION_CONNECTION_EXCEPTION, uhe, ",
" dbname, hostName, port);",
" }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageTransmit.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ import java.io.IOException;",
"removed": [
"import java.net.UnknownHostException;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/slave/SlaveController.java",
"hunks": [
{
"added": [
"import org.apache.derby.impl.store.replication.net.SlaveAddress;"
],
"header": "@@ -34,6 +34,7 @@ import org.apache.derby.iapi.store.raw.RawStoreFactory;",
"removed": []
},
{
"added": [
"import java.net.UnknownHostException;"
],
"header": "@@ -43,6 +44,7 @@ import org.apache.derby.iapi.store.replication.slave.SlaveFactory;",
"removed": []
},
{
"added": [
" private SlaveAddress slaveAddr;"
],
"header": "@@ -76,8 +78,7 @@ public class SlaveController",
"removed": [
" private String slavehost;",
" private int slaveport;"
]
},
{
"added": [
" ",
" try {",
" //if slavePort is -1 the default port",
" //value will be used.",
" int slavePort = -1;",
" if (port != null) {",
" slavePort = (new Integer(port)).intValue();",
" }",
" slaveAddr = new SlaveAddress(",
" properties.getProperty(Attribute.REPLICATION_SLAVE_HOST), ",
" slavePort);",
" } catch (UnknownHostException uhe) {",
" throw StandardException.newException",
" (SQLState.REPLICATION_CONNECTION_EXCEPTION, uhe, ",
" dbname, getHostName(), String.valueOf(getPortNumber()));"
],
"header": "@@ -130,11 +131,22 @@ public class SlaveController",
"removed": [
" slavehost = properties.getProperty(Attribute.REPLICATION_SLAVE_HOST);",
"",
" if (port != null) {",
" slaveport = new Integer(port).intValue();"
]
},
{
"added": [
" receiver = new ReplicationMessageReceive(slaveAddr, dbname);"
],
"header": "@@ -222,11 +234,7 @@ public class SlaveController",
"removed": [
" receiver = new ReplicationMessageReceive(slavehost, slaveport, dbname);",
" // If slaveport was not specified when starting the slave, the",
" // receiver will use the default port. Set slaveport to the port",
" // actually used by the receiver",
" slaveport = receiver.getPort();"
]
},
{
"added": [
" dbname, getHostName(), String.valueOf(getPortNumber()));"
],
"header": "@@ -351,7 +359,7 @@ public class SlaveController",
"removed": [
" dbname, slavehost, String.valueOf(slaveport));"
]
},
{
"added": [
" ",
" /**",
" * Used to return the host name of the slave.",
" *",
" * @return a String containing the host name of the slave.",
" */",
" private String getHostName() {",
" return slaveAddr.getHostAddress().getHostName();",
" }",
" ",
" /**",
" * Used to return the port number of the slave.",
" *",
" * @return an Integer that represents the port number of the slave.",
" */",
" private int getPortNumber() {",
" return slaveAddr.getPortNumber();",
" }"
],
"header": "@@ -464,6 +472,24 @@ public class SlaveController",
"removed": []
}
]
}
] |
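The SlaveAddress idea that DERBY-3489 centralizes above — resolve the host once and substitute a default port when none (or -1) is given, so later error messages always report the effective host and port — can be sketched as follows. The class shape and the default port constant (assumed here to be 4851) are illustrative, not the actual Derby implementation.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Simplified sketch: resolve the host name eagerly (so UnknownHostException
// surfaces at construction, as in the commit) and normalize the port so
// getPortNumber() always returns the port actually in use.
public class SlaveAddressSketch {
    public static final int DEFAULT_PORT_NO = 4851; // assumed default

    private final InetAddress hostAddress;
    private final int portNumber;

    public SlaveAddressSketch(String hostName, int portNumber)
            throws UnknownHostException {
        this.hostAddress = InetAddress.getByName(hostName);
        this.portNumber = (portNumber > 0) ? portNumber : DEFAULT_PORT_NO;
    }

    public InetAddress getHostAddress() { return hostAddress; }
    public int getPortNumber() { return portNumber; }
}
```

With the address object as the single source of truth, an XRE04-style message built from `getHostAddress()` and `getPortNumber()` cannot disagree with the port the receiver actually listens on.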
derby-DERBY-349-1f3d2714
|
DERBY-349: Re-enable parameterMapping batch tests for DerbyNetClient
Due to bug DERBY-349, the parameterMapping tests of various data types
in the executeBatch() configuration had been disabled, because those
tests were hanging. The tests are no longer hanging, so this submission
re-enables the tests.
This change also modifies the parameterMapping test program so that it
knows how to unwind the BatchUpdateException when looking for an
underlying Invalid Conversion exception. This means that we can have
simpler and easier-to-read master output files, which makes the test
easier to maintain.
This change also deletes the separate jdk14 master file, since it is
identical to the primary master file.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@391086 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
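The unwinding that the DERBY-349 commit above describes — walking a `BatchUpdateException`'s chain to find an underlying Invalid Conversion error — uses the standard JDBC `getNextException()` chain. A minimal sketch follows; the class name and the SQLStates in the test are illustrative, not taken from the parameterMapping test itself.

```java
import java.sql.SQLException;

public class UnwindBatchExceptionSketch {
    // Walk the chained SQLExceptions (including the BatchUpdateException
    // itself) looking for one with the given SQLState; return null if the
    // chain contains no match.
    public static SQLException findBySqlState(SQLException e, String state) {
        for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
            if (state.equals(cur.getSQLState())) {
                return cur;
            }
        }
        return null;
    }
}
```

Unwinding in the test code, rather than printing the whole chain, is what lets the master output file stay small and identical across frameworks.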
derby-DERBY-3491-356ff6fc
|
DERBY-3462 DERBY-3491
Add permission checks for SystemPermission("server", "monitor" | "control") to NetworkServerMBean.
Fix SystemPermission's handling of multiple actions and add tests.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636878 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/mbeans/drda/NetworkServerMBean.java",
"hunks": [
{
"added": [
" * @see org.apache.derby.security.SystemPermission"
],
"header": "@@ -31,7 +31,7 @@ package org.apache.derby.mbeans.drda;",
"removed": [
" *"
]
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"control\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -48,6 +48,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -56,6 +60,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -64,6 +72,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"control\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -75,6 +87,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"control\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -84,6 +100,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"control\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -93,6 +113,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -104,6 +128,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -112,6 +140,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -121,6 +153,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"control\")</code> if a security",
" * manager is installed.",
" *",
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -133,11 +169,19 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *",
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -151,12 +195,20 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *",
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *",
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *",
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -164,24 +216,40 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -189,6 +257,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -197,6 +269,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -204,6 +280,10 @@ public interface NetworkServerMBean {",
"removed": []
},
{
"added": [
" * <P>",
" * Require <code>SystemPermission(\"server\", \"monitor\")</code> if a security",
" * manager is installed.",
" *"
],
"header": "@@ -211,6 +291,10 @@ public interface NetworkServerMBean {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/security/SystemPermission.java",
"hunks": [
{
"added": [
" StringTokenizer st = new StringTokenizer(actions, \",\");",
" String action = st.nextToken().trim().toLowerCase(Locale.ENGLISH);",
" int validAction = LEGAL_ACTIONS.indexOf(action);"
],
"header": "@@ -132,9 +132,10 @@ final public class SystemPermission extends BasicPermission {",
"removed": [
" StringTokenizer st = new StringTokenizer(actions);",
" int validAction = LEGAL_ACTIONS.indexOf(st.nextElement());"
]
}
]
}
] |
derby-DERBY-3491-59176943
|
DERBY-3491 Change Derby's SystemPermission to be a two-argument permission with:
target-name: jmx|server|engine
action: control|monitor|shutdown
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636435 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/security/SystemPermission.java",
"hunks": [
{
"added": [
"import java.security.Permission;",
"import java.util.ArrayList;",
"import java.util.List;",
"import java.util.Locale;",
"import java.util.Set;",
"import java.util.StringTokenizer;",
" ",
" /**",
" * Permission target name for actions applicable",
" * to the network server.",
" */",
" public static final String SERVER = \"server\";",
" /**",
" * Permission target name for actions applicable",
" * to the core database engine.",
" */",
" public static final String ENGINE = \"engine\";",
" /**",
" * Permission target name for actions applicable",
" * to management of Derby's JMX MBeans.",
" */",
" public static final String JMX = \"jmx\";",
" * The server and engine shutdown action.",
" ",
" /**",
" * Permission to perform control actions through JMX",
" * on engine, server or jmx.",
" */",
" public static final String CONTROL = \"control\";",
" ",
" /**",
" * Permission to perform monitoring actions through JMX",
" * on engine and server.",
" */",
" public static final String MONITOR = \"monitor\";",
" static private final Set LEGAL_NAMES = new HashSet(); ",
" LEGAL_NAMES.add(SERVER);",
" LEGAL_NAMES.add(ENGINE);",
" LEGAL_NAMES.add(JMX);",
" ",
" * Set of legal actions in their canonical form.",
" static private final List LEGAL_ACTIONS = new ArrayList();",
" static {",
" LEGAL_ACTIONS.add(CONTROL);",
" LEGAL_ACTIONS.add(MONITOR);",
" LEGAL_ACTIONS.add(SHUTDOWN);",
" /**",
" * Actions for this permission.",
" */",
" private final String actions;",
" "
],
"header": "@@ -22,45 +22,79 @@",
"removed": [
"import java.util.Set;",
" * The server and engine shutdown permission.",
" static protected final Set LEGAL_PERMISSIONS = new HashSet(); ",
" LEGAL_PERMISSIONS.add(SHUTDOWN);",
"",
" * Checks a name for denoting a legal SystemPermission.",
" *",
" * @param name the name of a SystemPermission",
" * @throws IllegalArgumentException if name is not a legal SystemPermission",
" static protected void checkPermission(String name) {",
" // superclass BasicPermission has checked that name isn't null",
" // (NullPointerException) or empty (IllegalArgumentException)",
" //assert(name != null);",
" //assert(!name.equals(\"\"));",
" if (!LEGAL_PERMISSIONS.contains(name)) {",
" throw new IllegalArgumentException(\"Unknown permission \" + name);",
" }"
]
}
]
}
] |
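The two-argument permission shape from the DERBY-3491 diffs above — a fixed set of target names (`server`, `engine`, `jmx`) plus a comma-separated action list validated token by token, the tokenizing fix from the companion commit — can be sketched like this. The class is a hypothetical simplification of SystemPermission, omitting its canonicalization and `implies()` behavior.

```java
import java.security.BasicPermission;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.StringTokenizer;

// Sketch of a two-argument permission: validate the target name against a
// known set, then split actions on "," (not whitespace) so multi-action
// strings like "control, monitor" are handled, and check each token.
public class TwoArgPermissionSketch extends BasicPermission {
    private static final List<String> LEGAL_NAMES =
            Arrays.asList("server", "engine", "jmx");
    private static final List<String> LEGAL_ACTIONS =
            Arrays.asList("control", "monitor", "shutdown");

    private final String actions;

    public TwoArgPermissionSketch(String name, String actions) {
        super(name);
        if (!LEGAL_NAMES.contains(name)) {
            throw new IllegalArgumentException("Unknown permission " + name);
        }
        StringTokenizer st = new StringTokenizer(actions, ",");
        while (st.hasMoreTokens()) {
            String action = st.nextToken().trim().toLowerCase(Locale.ENGLISH);
            if (!LEGAL_ACTIONS.contains(action)) {
                throw new IllegalArgumentException("Unknown action " + action);
            }
        }
        this.actions = actions;
    }

    public String getActions() { return actions; }
}
```

Tokenizing on `","` and trimming each token is the essence of the multiple-actions fix: the earlier whitespace-based tokenizer checked only the first token and mis-split comma-separated lists.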
derby-DERBY-3493-f30ee415
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
DERBY-3493: stress.multi times out waiting on testers with blocked testers waiting on the same statement
Changed ConcurrentCache.create() to match Clock.create() more closely.
The patch basically makes ConcurrentCache.create() use
ConcurrentHashMap.get() directly instead of going through
ConcurrentCache.getEntry(), which may block until the identity has
been set by another thread. Then create() fails immediately if the
object already exists in the cache, also if another thread is in the
process of inserting the object into the cache. Since this introduced
yet another difference between find() and create() in
findOrCreateObject(), I also followed Øystein's suggestion from his
review of DERBY-2911 and split findOrCreateObject() into a number of
smaller methods, which I think makes the code easier to follow.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@635183 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" * Insert a {@code CacheEntry} into a free slot in the {@code",
" * ReplacementPolicy}'s internal data structure, and return a {@code",
" * Cacheable} that the caller can reuse. The entry must have been locked",
" * before this method is called.",
" * @param key the identity of the object being inserted",
" * @param entry the entry that is being inserted",
" * @return a {@code Cacheable} object that the caller can reuse",
" * @throws StandardException if an error occurs while inserting the entry",
" * or while allocating a new {@code Cacheable}",
" private Cacheable insertIntoFreeSlot(Object key, CacheEntry entry)",
" try {",
" replacementPolicy.insertEntry(entry);",
" } catch (StandardException se) {",
" // Failed to insert the entry into the replacement policy. Make",
" // sure that it's also removed from the hash table.",
" removeEntry(key);",
" throw se;",
" Cacheable free = entry.getCacheable();",
"",
" if (free == null) {",
" // We didn't get a reusable cacheable. Create a new one.",
" free = holderFactory.newCacheable(this);",
" entry.keep(true);",
" return free;",
" }",
"",
" /**",
" * Complete the setting of the identity. This includes notifying the",
" * threads that are waiting for the setting of the identity to complete,",
" * so that they can wake up and continue. If setting the identity failed,",
" * the entry will be removed from the cache.",
" *",
" * @param key the identity of the object being inserted",
" * @param entry the entry which is going to hold the cached object",
" * @param item a {@code Cacheable} object with the identity set (if",
" * the identity was successfully set), or {@code null} if setting the",
" * identity failed",
" */",
" private void settingIdentityComplete(",
" Object key, CacheEntry entry, Cacheable item) {",
" entry.lock();",
" entry.settingIdentityComplete();",
" entry.setCacheable(item);",
" } else {"
],
"header": "@@ -183,95 +183,66 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * Find or create an object in the cache. If the object is not presently",
" * in the cache, it will be added to the cache.",
" * @param key the identity of the object to find or create",
" * @param create whether or not the object should be created",
" * @param createParameter used as argument to <code>createIdentity()</code>",
" * when <code>create</code> is <code>true</code>",
" * @return the cached object, or <code>null</code> if it cannot be found",
" * @throws StandardException if an error happens when accessing the object",
" private Cacheable findOrCreateObject(Object key, boolean create,",
" Object createParameter)",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(createParameter == null || create,",
" \"createParameter should be null when create is false\");",
" if (stopped) {",
" return null;",
" // A free cacheable which we'll initialize if we don't find the object",
" // in the cache.",
" Cacheable free;",
" CacheEntry entry = getEntry(key);",
" Cacheable item = entry.getCacheable();",
" if (create) {",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
" entry.keep(true);",
" return item;",
" }",
"",
" // not currently in the cache",
" try {",
" replacementPolicy.insertEntry(entry);",
" } catch (StandardException se) {",
" throw se;",
" }",
"",
" free = entry.getCacheable();",
" if (free == null) {",
" // We didn't get a reusable cacheable. Create a new one.",
" free = holderFactory.newCacheable(this);",
"",
" entry.keep(true);",
"",
"",
" // Set the identity in a try/finally so that we can remove the entry",
" // if the operation fails. We have released the lock on the entry so",
" // that we don't run into deadlocks if the user code (setIdentity() or",
" // createIdentity()) reenters the cache.",
" Cacheable c = null;",
" try {",
" if (create) {",
" c = free.createIdentity(key, createParameter);",
" } else {",
" c = free.setIdentity(key);",
" }",
" } finally {",
" entry.lock();",
" try {",
" // Notify the entry that setIdentity() or createIdentity() has",
" // finished.",
" entry.settingIdentityComplete();",
" if (c == null) {",
" // Setting identity failed, or the object was not found.",
" removeEntry(key);",
" } else {",
" // Successfully set the identity.",
" entry.setCacheable(c);",
" }",
" } finally {",
" entry.unlock();",
" }",
" }",
"",
" return c;"
]
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = getEntry(key);",
"",
" Cacheable item;",
" try {",
" item = entry.getCacheable();",
" if (item != null) {",
" // The object is already cached. Increase the use count and",
" // return it.",
" entry.keep(true);",
" return item;",
" } else {",
" // The object is not cached. Insert the entry into a free",
" // slot and retrieve a reusable Cacheable.",
" item = insertIntoFreeSlot(key, entry);",
" }",
" } finally {",
" entry.unlock();",
" }",
"",
" // Set the identity without holding the lock on the entry. If we",
" // hold the lock, we may run into a deadlock if the user code in",
" // setIdentity() re-enters the buffer manager.",
" Cacheable itemWithIdentity = null;",
" try {",
" itemWithIdentity = item.setIdentity(key);",
" } finally {",
" // Always invoke settingIdentityComplete(), also on error,",
" // otherwise other threads may wait forever. If setIdentity()",
" // fails, itemWithIdentity is going to be null.",
" settingIdentityComplete(key, entry, itemWithIdentity);",
" }",
"",
" return itemWithIdentity;"
],
"header": "@@ -285,7 +256,44 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" return findOrCreateObject(key, false, null);"
]
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = new CacheEntry();",
" entry.lock();",
"",
" if (cache.putIfAbsent(key, entry) != null) {",
" // We can't create the object if it's already in the cache.",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
"",
" Cacheable item;",
" try {",
" item = insertIntoFreeSlot(key, entry);",
" } finally {",
" entry.unlock();",
" }",
"",
" // Create the identity without holding the lock on the entry.",
" // Otherwise, we may run into a deadlock if the user code in",
" // createIdentity() re-enters the buffer manager.",
" Cacheable itemWithIdentity = null;",
" try {",
" itemWithIdentity = item.createIdentity(key, createParameter);",
" } finally {",
" // Always invoke settingIdentityComplete(), also on error,",
" // otherwise other threads may wait forever. If createIdentity()",
" // fails, itemWithIdentity is going to be null.",
" settingIdentityComplete(key, entry, itemWithIdentity);",
" }",
"",
" return itemWithIdentity;"
],
"header": "@@ -340,7 +348,41 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" return findOrCreateObject(key, true, createParameter);"
]
}
]
}
] |
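The fail-fast behavior the DERBY-3493 commit above gives `create()` — reject the insert immediately if the key is already in the map, even while another thread is still setting that entry's identity — rests on `ConcurrentHashMap.putIfAbsent`. A minimal sketch, with the cache's entry locking and Cacheable machinery stripped away and a generic exception standing in for `OBJECT_EXISTS_IN_CACHE`:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the create() pattern: putIfAbsent atomically inserts the entry
// and reports a prior mapping in one step, so there is no window in which
// two threads can both believe they created the object.
public class CreateInCacheSketch {
    private final ConcurrentHashMap<Object, Object> cache =
            new ConcurrentHashMap<Object, Object>();

    public Object create(Object key, Object value) {
        Object existing = cache.putIfAbsent(key, value);
        if (existing != null) {
            // Analogous to raising OBJECT_EXISTS_IN_CACHE in the commit.
            throw new IllegalStateException("Object exists in cache: " + key);
        }
        return value;
    }
}
```

Using `putIfAbsent` directly, instead of a blocking `getEntry()`-style lookup, is what removes the wait on entries whose identity another thread is still setting.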
derby-DERBY-3494-860148c2
|
DERBY-3494 Move the setup of NormalizeResultSetNode into the NormalizeResultSetNode
Removes ResultColumnList.copyTypesAndLengthsToSource() and incorporates it into the NormalizeResultSetNode init method.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@643644 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/NormalizeResultSetNode.java",
"hunks": [
{
"added": [
" * @param targetResultColumnList The target resultColumnList from ",
" * the InsertNode or UpdateNode. These will",
" * be the types used for the NormalizeResultSetNode."
],
"header": "@@ -87,6 +87,9 @@ public class NormalizeResultSetNode extends SingleChildResultSetNode",
"removed": []
},
{
"added": [
" Object targetResultColumnList,"
],
"header": "@@ -95,6 +98,7 @@ public class NormalizeResultSetNode extends SingleChildResultSetNode",
"removed": []
},
{
"added": [
"\t\tResultColumnList targetRCL = (ResultColumnList) targetResultColumnList;",
" "
],
"header": "@@ -103,7 +107,8 @@ public class NormalizeResultSetNode extends SingleChildResultSetNode",
"removed": [
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java",
"hunks": [
{
"added": [
"\t\t\t C_NodeTypes.NORMALIZE_RESULT_SET_NODE, ",
"\t\t\t resultSet, resultColumnList, null, Boolean.TRUE,",
"\t\t\t"
],
"header": "@@ -549,10 +549,10 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": [
"\t\t\t C_NodeTypes.NORMALIZE_RESULT_SET_NODE,",
"\t\t\t resultSet, null, Boolean.TRUE,",
"\t\t\tresultColumnList.copyTypesAndLengthsToSource(resultSet.getResultColumns());"
]
}
]
}
] |
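The removed ResultColumnList.copyTypesAndLengthsToSource() call and the code that replaced it inside NormalizeResultSetNode.init() both boil down to a pairwise type copy from the target column list onto the normalize node's own columns. A minimal sketch of that copy, using illustrative stand-in types (ColumnDesc is not a Derby class):

```java
import java.util.List;

// ColumnDesc is an illustrative stand-in for Derby's ResultColumn /
// DataTypeDescriptor pair; only the type name matters for this sketch.
final class ColumnDesc {
    String typeName;
    ColumnDesc(String typeName) { this.typeName = typeName; }
}

final class TypeAdjustment {
    /**
     * Copy types pairwise from the target (INSERT/UPDATE) column list
     * onto the normalize node's columns, touching only the first
     * min(target, source) columns -- mirroring the loop that
     * NormalizeResultSetNode.init() runs after this change.
     */
    static void pushTargetTypes(List<ColumnDesc> normalizeCols,
                                List<ColumnDesc> targetCols) {
        int size = Math.min(targetCols.size(), normalizeCols.size());
        for (int i = 0; i < size; i++) {
            normalizeCols.get(i).typeName = targetCols.get(i).typeName;
        }
    }
}
```

After this copy the normalize node's types differ from its child's, which is exactly what lets the NormalizeResultSet re-type values at execution time.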
derby-DERBY-3494-c6564415
|
DERBY-3597 Incorporate DERBY-3310 and DERBY-3494 write-ups into NormalizeResultSetNode code comments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@645638 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/NormalizeResultSetNode.java",
"hunks": [
{
"added": [
" *",
" * child result set that needs one. See non-javadoc comments for ",
" * a walk-through of a couple sample code paths.",
" */",
"",
" /*",
" * Below are a couple of sample code paths for NormalizeResultSetNodes.",
" * These samples were derived from Army Brown's write-ups attached to DERBY-3310",
" * and DERBY-3494. The text was changed to include the new code path now that ",
" * all of the NormalizeResultSetNode code has been moved into the init() method.",
" * There are two sections of code in NormalizeResultSetNode.init() that are relevant:",
" * First the code to generate the new node based on the child result set. ",
" * We will call this \"normalize node creation\".",
" * ",
" * ResultSetNode rsn = (ResultSetNode) childResult;",
" * ResultColumnList rcl = rsn.getResultColumns();",
" * ResultColumnList targetRCL = (ResultColumnList) targetResultColumnList;",
" * ...",
" * ResultColumnList prRCList = rcl;",
" * rsn.setResultColumns(rcl.copyListAndObjects());",
" * ...",
" * this.resultColumns = prRCList;",
" *",
" * Next the code to adjust the types for the NormalizeResultSetNode. ",
" * We will call this \"type adjustment\"",
" * ",
" * if (targetResultColumnList != null) {",
" * int size = Math.min(targetRCL.size(), resultColumns.size());",
" * for (int index = 0; index < size; index++) {",
" * ResultColumn sourceRC = (ResultColumn) resultColumns.elementAt(index);",
" * ResultColumn resultColumn = (ResultColumn) targetRCL.elementAt(index);",
" * sourceRC.setType(resultColumn.getTypeServices());",
" * } ",
" * ",
" * --- Sample 1 : Type conversion from Decimal to BigInt on insert --- ",
" * (DERBY-3310 write-up variation) ",
" * The SQL statement on which this sample focuses is:",
" * ",
" * create table d3310 (x bigint);",
" * insert into d3310 select distinct * from (values 2.0, 2.1, 2.2) v; ",
" * ",
" * There are three compilation points of interest for this discussion:",
" * 1. Before the \"normalize node creation\"",
" * 2. Before the \"type adjustment\"",
" * 3. After the \"type adjustment\"",
" * ",
" * Upon completion of the \"type adjustment\", the compilation query ",
" * tree is then manipulated during optimization and code generation, the ",
" * latter of which ultimately determines how the execution-time ResultSet ",
" * tree is going to look. So for this discussion we walk through the query",
" * tree as it exists at the various points of interest just described.",
" * ",
" * 1) To start, the (simplified) query tree that we have looks something like the following:",
" * ",
" * InsertNode",
" * (RCL_0:ResultColumn_0<BigInt>)",
" * |",
" * SelectNode",
" * (RCL_1:ResultColumn_1<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL_2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL_3:ResultColumn_3<Decimal>)",
" * ",
" * Notation: In the above tree, node names with \"_x\" trailing them are used to",
" * distinguish Java Objects from each other. So if ResultColumn_0 shows up ",
" * more than once, then it is the *same* Java object showing up in different ",
" * parts of the query tree. Type names in angle brackets, such as \"<BigInt>\",",
" * describe the type of the entity immediately preceding the brackets. ",
" * So a line of the form:",
" * ",
" * RCL_0:ResultColumn_0<BigInt>",
" * ",
" * describes a ResultColumnList object containing one ResultColumn object ",
" * whose type is BIGINT. We can see from the above tree that, before ",
" * normalize node creation, the top of the compile tree contains an ",
" * InsertNode, a SelectNode, a FromSubquery, and a UnionNode, all of ",
" * which have different ResultColumnList objects and different ResultColumn ",
" * objects within those lists.",
" * ",
" * 2) After the normalize node creation",
" * The childresult passed to the init method of NormalizeResultSetNode is ",
" * the InsertNode's child, so it ends up creating a new NormalizeResultSetNode ",
" * and putting that node on top of the InsertNode's child--that is, on top of ",
" * the SelectNode.",
" *",
" * At this point it's worth noting that a NormalizeResultSetNode operates ",
" * based on two ResultColumnLists: a) its own (call it NRSN_RCL), and b) ",
" * the ResultColumnList of its child (call it NRSN_CHILD_RCL). More ",
" * specifically, during execution a NormalizeResultSet will take a row ",
" * whose column types match the types of NRSN_CHILD_RCL, and it will ",
" * \"normalize\" the values from that row so that they agree with the ",
" * types of NRSN_RCL. Thus is it possible--and in fact, it should generally ",
" * be the case--that the types of the columns in the NormalizeResultSetNode's ",
" * own ResultColumnList are *different* from the types of the columns in ",
" * its child's ResultColumnList. That should not be the case for most ",
" * (any?) other Derby result set.",
" * ",
" * So we now have:",
" *",
" * InsertNode",
" * (RCL_0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL_1:ResultColumn_1<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL_4:ResultColumn_4<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL_2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL_3:ResultColumn_3<Decimal>)",
" *",
" * Notice how, when we generate the NormalizeResultSetNode, three things happen:",
" * ",
" * a) The ResultColumList object for the SelectNode is \"pulled up\" into the ",
" * NormalizeResultSetNode.",
" * b) SelectNode is given a new ResultColumnList--namely, a clone of its old",
" * ResultColumnList, including clones of the ResultColumn objects.",
" * c) VirtualColumnNodes are generated beneath NormalizeResultSetNode's ",
" * ResultColumns, and those VCNs point to the *SAME RESULT COLUMN OBJECTS* ",
" * that now sit in the SelectNode's new ResultColumnList. ",
" * Also note how the generated VirtualColumnNode has no type of its own; ",
" * since it is an instance of ValueNode it does have a dataTypeServices ",
" * field, but that field was not set when the NormalizeResultSetNode was ",
" * created. Hence \"<no_type>\" in the above tree.",
" * ",
" * And finally, note that at this point, NormalizeResultSetNode's ",
" * ResultColumnList has the same types as its child's ResultColumnList",
" * --so the NormalizeResultSetNode doesn't actually do anything ",
" * in its current form.",
" * ",
" * 3) Within the \"type adjustment\"",
" * ",
" * The purpose of the \"type adjustment\" is to take the types from ",
" * the InsertNode's ResultColumnList and \"push\" them down to the ",
" * NormalizeResultSetNode. It is this method which sets NRSN_RCL's types ",
" * to match the target (INSERT) table's types--and in doing so, makes them ",
" * different from NRSN_CHILD_RCL's types. Thus this is important because ",
" * without it, NormalizeResultSetNode would never change the types of the ",
" * values it receives.",
" * ",
" * That said, after the call to sourceRC.setType(...) we have:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode_0<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL4:ResultColumn_4<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * The key change here is that ResultColumn_1 now has a type of BigInt ",
" * instead of Decimal. Since the SelectNode's ResultColumn, ResultColumn_4,",
" * still has a type of Decimal, the NormalizeResultSetNode will take as input",
" * a Decimal value (from SelectNode) and will output that value as a BigInt, ",
" * where output means pass the value further up the tree during execution ",
" * (see below).",
" * ",
" * Note before the fix for DERBY-3310, there was an additional type change ",
" * that caused problems with this case. ",
" * See the writeup attached to DERBY-3310 for details on why this was a problem. ",
" * ",
" * 4) After preprocessing and optimization:",
" * ",
" * After step 3 above, Derby will move on to the optimization phase, which ",
" * begins with preprocessing. During preprocessing the nodes in the tree ",
" * may change shape/content to reflect the needs of the optimizer and/or to ",
" * perform static optimizations/rewrites. In the case of our INSERT statement ",
" * the preprocessing does not change much:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL4:ResultColumn_4<Decimal>)",
" * |",
" * ProjectRestrictNode_0",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * The only thing that has changed between this tree and the one shown in ",
" * step 3 is that the FromSubquery has been replaced with a ProjectRestrictNode.",
" * Note that the ProjectRestrictNode has the same ResultColumnList object as ",
" * the FromSubquery, and the same ResultColumn object as well. That's worth ",
" * noting because it's another example of how Java objects can be \"moved\" ",
" * from one node to another during Derby compilation.",
" * ",
" * 5) After modification of access paths:",
" * As the final stage of optimization Derby will go through the modification ",
" * of access paths phase, in which the query tree is modified to prepare for ",
" * code generation. When we are done modifying access paths, our tree looks ",
" * something like this:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * DistinctNode",
" * (RCL4:ResultColumn_4<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_1",
" * (RCL5:ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_0",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * The key thing to note here is that the SelectNode has been replaced with two ",
" * new nodes: a ProjectRestrictNode whose ResultColumnList is a clone of the ",
" * SelectNode's ResultColumnList, and a DistinctNode, whose ResultColumnList ",
" * is the same object as the SelectNode's old ResultColumnList. More ",
" * specifically, all of the following occurred as part of modification of ",
" * access paths:",
" * ",
" * a) The SelectNode was replaced with ProjectRestrictNode_1, whose ",
" * ResultColumnList was the same object as the SelectNode's ResultColumnList.",
" *",
" * b) the ResultColumList object for ProjectRestrictNode_1 was pulled up ",
" * into a new DistinctNode.",
" *",
" * c) ProjectRestrictNode_1 was given a new ResultColumnList--namely, a ",
" * clone of its old ResultColumnList, including clones of the ResultColumn ",
" * objects.",
" * ",
" * d) VirtualColumnNodes were generated beneath the DistinctNode's ",
" * ResultColumns, and those VCNs point to the same result column objects ",
" * that now sit in ProjectRestrictNode_1's new ResultColumnList.",
" * ",
" * 6) After code generation:",
" *",
" * During code generation we will walk the compile-time query tree one final ",
" * time and, in doing so, we will generate code to build the execution-time ",
" * ResultSet tree. As part of that process the two ProjectRestrictNodes will ",
" * be skipped because they are both considered no-ops--i.e. they perform ",
" * neither projections nor restrictions, and hence are not needed. ",
" * (Note that, when checking to see if a ProjectRestrictNode is a no-op, ",
" * column types do *NOT* come into play.)",
" *",
" * Thus the execution tree that we generate ends up looking something like:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * DistinctNode",
" * (RCL4:ResultColumn_4<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_1",
" * (RCL5:ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_0",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * At code generation the ProjectRestrictNodes will again be removed and the ",
" * execution tree will end up looking like this:",
" * ",
" * InsertResultSet",
" * (BigInt)",
" * |",
" * NormalizeResultSet",
" * (BigInt)",
" * |",
" * SortResultSet",
" * (Decimal)",
" * |",
" * UnionResultSet",
" * (Decimal)",
" *",
" * where SortResultSet is generated to enforce the DistinctNode, ",
" * and thus expects the DistinctNode's column type--i.e. Decimal.",
" * ",
" * When it comes time to execute the INSERT statement, then, the UnionResultSet ",
" * will create a row having a column whose type is DECIMAL, i.e. an SQLDecimal ",
" * value. The UnionResultSet will then pass that up to the SortResultSet, ",
" * who is *also* expecting an SQLDecimal value. So the SortResultSet is ",
" * satisfied and can sort all of the rows from the UnionResultSet. ",
" * Then those rows are passed up the tree to the NormalizeResultSet, ",
" * which takes the DECIMAL value from its child (SortResultSet) and normalizes ",
" * it to a value having its own type--i.e. to a BIGINT. The BIGINT is then ",
" * passed up to InsertResultSet, which inserts it into the BIGINT column ",
" * of the target table. And so the INSERT statement succeeds.",
" * ",
" * ---- Sample 2 - NormalizeResultSetNode and Union (DERBY-3494 write-up variation)",
" * Query for discussion",
" * ",
" *",
" * create table t1 (bi bigint, i int);",
" * insert into t1 values (100, 10), (288, 28), (4820, 2);",
" *",
" * select * from",
" * (select bi, i from t1 union select i, bi from t1) u(a,b) where a > 28;",
" *",
" *",
" * Some things to notice about this query:",
" * a) The UNION is part of a subquery.",
" * b) This is *not* a UNION ALL; i.e. we need to eliminate duplicate rows.",
" * c) The left side of the UNION and the right side of the UNION have ",
" * different (but compatible) types: the left has (BIGINT, INT), while the ",
" * right has (INT, BIGINT).",
" * d) There is a predicate in the WHERE clause which references a column ",
" * from the UNION subquery.",
" * e) The table T1 has at least one row.",
" * All of these factors play a role in the handling of the query and are ",
" * relevant to this discussion.",
" * ",
" * Building the NormalizeResultSetNode. ",
" * When compiling a query, the final stage of optimization in Derby is the ",
" * \"modification of access paths\" phase, in which each node in the query ",
" * tree is given a chance to modify or otherwise perform maintenance in ",
" * preparation for code generation. In the case of a UnionNode, a call ",
" * to modifyAccessPaths() will bring us to the addNewNodes() method, ",
" * which is where the call is made to generate the NormalizeResultSetNode.",
" * ",
" *",
" * if (! columnTypesAndLengthsMatch())",
" * {",
" * treeTop = ",
" * (NormalizeResultSetNode) getNodeFactory().getNode(",
" * C_NodeTypes.NORMALIZE_RESULT_SET_NODE,",
" * treeTop, null, null, Boolean.FALSE,",
" * getContextManager()); ",
" * }",
" *",
" * The fact that the left and right children of the UnionNode have different ",
" * types (observation c above) means that the if condition will return ",
" * true and thus we will generate a NormalizeResultSetNode above the ",
" * UnionNode. At this point (before the NormalizeResultSetNode has been ",
" * generated) our (simplified) query tree looks something like the following.",
" * PRN stands for ProjectRestrictNode, RCL stands for ResultColumnList:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * UnionNode // <-- Modifying access paths...",
" * (RCL1)",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" *",
" * where 'a > 28 {RCL1}' means that the column reference A in the predicate a > 28 points to a ResultColumn object in the ResultColumnList that corresponds to \"RCL1\". I.e. at this point, the predicate's column reference is pointing to an object in the UnionNode's RCL.",
" * \"normalize node creation\" will execute:",
" *",
" * ResultColumnList prRCList = rcl;",
" * rsn.setResultColumns(rcl.copyListAndObjects());",
" * // Remove any columns that were generated.",
" * prRCList.removeGeneratedGroupingColumns();",
" * ...",
" * prRCList.genVirtualColumnNodes(rsn, rsn.getResultColumns());",
" * ",
" * this.resultColumns = prRCList;",
" * ",
" * to create a NormalizeResultSetNode whose result column list is prRCList. ",
" * This gives us:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * NormalizeResultSetNode",
" * (RCL1) // RCL1 \"pulled up\" to NRSN",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) *copy* of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * Note how RCL1, the ResultColumnList object for the UnionNode, has now been ",
" * *MOVED* so that it belongs to the NormalizeResultSetNode. So the predicate ",
" * a > 28, which (still) points to RCL1, is now pointing to the ",
" * NormalizeResultSetNode instead of to the UnionNode.",
" * ",
" * After this, we go back to UnionNode.addNewNodes() where we see the following:",
" * ",
" *",
" * treeTop = (ResultSetNode) getNodeFactory().getNode(",
" * C_NodeTypes.DISTINCT_NODE,",
" * treeTop.genProjectRestrict(),",
" * Boolean.FALSE,",
" * tableProperties,",
" * getContextManager());",
" *",
" *",
" * I.e. we have to generate a DistinctNode to eliminate duplicates because the query ",
" * specified UNION, not UNION ALL.",
" * ",
" * Note the call to treeTop.genProjectRestrict(). Since NormalizeResultSetNode ",
" * now sits on top of the UnionNode, treeTop is a reference to the ",
" * NormalizeResultSetNode. That means we end up at the genProjectRestrict() ",
" * method of NormalizeResultSetNode. And guess what? The method does ",
" * something very similar to what we did in NormalizeResultSetNode.init(), ",
" * namely:",
" *",
" * ResultColumnList prRCList = resultColumns;",
" * resultColumns = resultColumns.copyListAndObjects();",
" *",
" * and then creates a ProjectRestrictNode whose result column list is prRCList. This gives us:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * PRN6",
" * (RCL1) // RCL1 \"pulled up\" to new PRN.",
" * |",
" * NormalizeResultSetNode",
" * (RCL3) // RCL3 is a (modified) copy of RCL1",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) copy of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * On top of that we then put a DistinctNode. And since the init() method ",
" * of DistinctNode does the same kind of thing as the previously-discussed ",
" * methods, we ultimately end up with:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * DistinctNode",
" * (RCL1) // RCL1 pulled up to DistinctNode",
" * |",
" * PRN6",
" * (RCL4) // RCL4 is a (modified) copy of RCL1",
" * |",
" * NormalizeResultSetNode",
" * (RCL3) // RCL3 is a (modified) copy of RCL1",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) copy of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * And thus the predicate a > 28, which (still) points to RCL1, is now ",
" * pointing to the DistinctNode instead of to the UnionNode. And this ",
" * is what we want: i.e. we want the predicate a > 28 to be applied ",
" * to the rows that we retrieve from the node at the *top* of the ",
" * subtree generated for the UnionNode. It is the non-intuitive code ",
" * in the normalize node creation that allows this to happen.",
" *"
],
"header": "@@ -53,8 +53,491 @@ import org.apache.derby.iapi.services.classfile.VMOpcode;",
"removed": [
" * child result set that needs one."
]
}
]
}
] |
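The execution-time behaviour described in sample 1 above — a NormalizeResultSet taking DECIMAL rows from the SortResultSet and handing BIGINT values up to the InsertResultSet — can be illustrated with a stand-alone sketch. Derby's real normalization works on DataValueDescriptors; the java.math version below is only an analogy:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Analogy for the NormalizeResultSet step in the DERBY-3310 sample:
// the child produces DECIMAL rows, and the normalize step re-types
// each value to the BIGINT target column, truncating any fraction.
final class NormalizeSketch {
    static List<Long> normalizeToBigint(List<BigDecimal> childRows) {
        List<Long> normalized = new ArrayList<Long>();
        for (BigDecimal d : childRows) {
            normalized.add(d.longValue()); // drops the fractional part
        }
        return normalized;
    }
}
```

With the sample data, the three DECIMAL rows 2.0, 2.1 and 2.2 are distinct as DECIMALs, so all three survive the SortResultSet and only then collapse to the BIGINT value 2 — which is why the normalize step sits above the sort in the execution tree.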
derby-DERBY-3496-4c8f5703
|
DERBY-3496: CallableStatement with output parameter leaves cursor open after execution
Patch file: derby-3496.diff (fixed the indentation problem)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@640787 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3502-477fd5e2
|
DERBY-3502: Ensure that conglomerate sharing works correctly in the
presence of unique constraints over nullable columns.
Contributed by: Anurag Shekhar (anurag dot shekhar at sun dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634852 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/ConglomerateDescriptor.java",
"hunks": [
{
"added": [
"\t\t\t\t(indexRowGenerator.isUnique() && !othersIRG.isUnique()) ||",
"\t\t\t\t\t(indexRowGenerator.isUniqueWithDuplicateNulls() && ",
"\t\t\t\t\t\t!othersIRG.isUniqueWithDuplicateNulls());"
],
"header": "@@ -456,7 +456,9 @@ public final class ConglomerateDescriptor extends TupleDescriptor",
"removed": [
"\t\t\t\tindexRowGenerator.isUnique() && !othersIRG.isUnique();"
]
},
{
"added": [
"\t\t * 2. If none of sharing descriptors are unique and any of ",
"\t\t * the descriptors are UniqueWithDuplicateNulls the physical",
"\t\t * conglomerate must also be UniqueWithDuplicateNulls",
"\t\t * 3. If none of the sharing descriptors are unique or ",
"\t\t * UniqueWithDuplicateNulls, the physical conglomerate ",
"\t\t * must not be unique.",
"\t\t *",
"\t\t * 4. If the physical conglomerate has n columns, then all"
],
"header": "@@ -561,10 +563,15 @@ public final class ConglomerateDescriptor extends TupleDescriptor",
"removed": [
"\t\t * 2. If none of the sharing descriptors are unique, the",
"\t\t * physical conglomerate must not be unique.",
"\t\t * 3. If the physical conglomerate has n columns, then all"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/CreateIndexConstantAction.java",
"hunks": [
{
"added": [
"\t\tthis.uniqueWithDuplicateNulls = irg.isUniqueWithDuplicateNulls();"
],
"header": "@@ -205,6 +205,7 @@ class CreateIndexConstantAction extends IndexConstantAction",
"removed": []
},
{
"added": [
"\t\t\t *",
"\t\t\t *",
"\t\t\t * 3. one of the following is true:",
"\t\t\t * a) the existing index is unique, OR",
"\t\t\t * b) the existing index is non-unique with uniqueWhenNotNulls",
"\t\t\t * set to TRUE and the index being created is non-unique, OR",
"\t\t\t * c) both the existing index and the one being created are",
"\t\t\t * non-unique and have uniqueWithDuplicateNulls set to FALSE.",
"\t\t\t//check if existing index is non unique and uniqueWithDuplicateNulls",
"\t\t\t//is set to true (backing index for unique constraint)",
"\t\t\tif (possibleShare && !irg.isUnique ())",
"\t\t\t{",
"\t\t\t\t/* If the existing index has uniqueWithDuplicateNulls set to",
"\t\t\t\t * TRUE it can be shared by other non-unique indexes; otherwise",
"\t\t\t\t * the existing non-unique index has uniqueWithDuplicateNulls",
"\t\t\t\t * set to FALSE, which means the new non-unique conglomerate",
"\t\t\t\t * can only share if it has uniqueWithDuplicateNulls set to",
"\t\t\t\t * FALSE, as well.",
"\t\t\t\t */",
"\t\t\t\tpossibleShare = (irg.isUniqueWithDuplicateNulls() ||",
"\t\t\t\t\t\t\t\t! uniqueWithDuplicateNulls);",
"\t\t\t}",
""
],
"header": "@@ -440,13 +441,34 @@ class CreateIndexConstantAction extends IndexConstantAction",
"removed": [
"\t\t\t * 3. both the previously existing index and the one being created ",
"\t\t\t * are non-unique OR the previously existing index is unique"
]
}
]
}
] |
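The three sharing rules added to the ConglomerateDescriptor comment above determine what kind of physical conglomerate a set of sharing index descriptors requires. A hedged sketch of that decision as a standalone predicate (IndexKind and SharingRules are illustrative names, not Derby's API):

```java
// Illustrative stand-in for the two flags IndexRowGenerator exposes
// in the diff: isUnique() and isUniqueWithDuplicateNulls().
enum IndexKind { UNIQUE, UNIQUE_WITH_DUPLICATE_NULLS, NON_UNIQUE }

final class SharingRules {
    /**
     * Returns the kind the shared physical conglomerate must have,
     * given the kinds of all descriptors that share it:
     * 1. if any sharer is unique, the conglomerate is unique;
     * 2. else if any sharer is uniqueWithDuplicateNulls, so is the
     *    conglomerate;
     * 3. else it is plain non-unique.
     */
    static IndexKind requiredPhysicalKind(IndexKind[] sharers) {
        boolean anyUnique = false;
        boolean anyUniqueWithNulls = false;
        for (IndexKind k : sharers) {
            anyUnique |= (k == IndexKind.UNIQUE);
            anyUniqueWithNulls |= (k == IndexKind.UNIQUE_WITH_DUPLICATE_NULLS);
        }
        if (anyUnique) return IndexKind.UNIQUE;
        if (anyUniqueWithNulls) return IndexKind.UNIQUE_WITH_DUPLICATE_NULLS;
        return IndexKind.NON_UNIQUE;
    }
}
```

Rule 1 dominates rule 2, which dominates rule 3, matching the ordering of the updated comment.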
derby-DERBY-3504-a63ab5e5
|
DERBY-3504 Fix timeout errors in management._Suite when running with classes. They were due to the spawned vm that executes the server failing, since installing the policy file requires jars. Changed the decorator to add the -noSecurityManager flag if classes are being used, with comments indicating that tests needing different behaviour must provide it themselves.
Added a SpawnedProcess utility class that correctly handles the output streams written by a spawned process by having two background threads read from the streams into a buffer. This removes the chance that the process hangs because it is blocked writing stdout or stderr. Used this utility class in one more location where a vm was being spawned. Ideally the spawning of a java process should live in a single utility rather than being scattered across multiple tests; that is a separate cleanup issue.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634425 13f79535-47bb-0310-9956-ffa450edef68
|
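The commit message above describes draining a spawned process's stdout and stderr on background threads so the child never blocks on a full pipe. A minimal sketch of that pattern (StreamSaver is an illustrative reconstruction, not the actual SpawnedProcess code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of the stream-draining pattern SpawnedProcess
// uses: a background thread copies a process stream into a buffer so
// the child never blocks writing a full stdout/stderr pipe.
final class StreamSaver {
    static ByteArrayOutputStream drain(final InputStream in, String name) {
        final ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Thread t = new Thread(new Runnable() {
            public void run() {
                byte[] chunk = new byte[1024];
                int n;
                try {
                    while ((n = in.read(chunk)) != -1) {
                        synchronized (buf) { buf.write(chunk, 0, n); }
                    }
                } catch (IOException e) {
                    // stream closed when the process exits; stop draining
                }
            }
        }, name);
        t.setDaemon(true); // don't keep the JVM alive for a dead child
        t.start();
        return buf;
    }
}
```

Readers of the buffer must synchronize on it, as the saver thread may still be appending while the caller inspects the output.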
[
{
"file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java",
"hunks": [
{
"added": [
"import java.io.IOException;",
"import java.security.PrivilegedActionException;",
"import java.security.PrivilegedExceptionAction;"
],
"header": "@@ -23,10 +23,13 @@ import java.io.FileNotFoundException;",
"removed": []
},
{
"added": [
" private static final long WAIT_TIME = 10000;",
" private static final int SLEEP_TIME = 100;"
],
"header": "@@ -47,10 +50,10 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" private static final long WAIT_TIME = 300000;",
" private static final int SLEEP_TIME = 500;"
]
},
{
"added": [],
"header": "@@ -62,7 +65,6 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" private final InputStream[] inputStreamHolder;"
]
},
{
"added": [
" private String[] startupArgs;",
" /**",
" * The server as a process if started in a different vm.",
" */",
" private SpawnedProcess spawnedServer;"
],
"header": "@@ -74,8 +76,11 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" private final String[] startupArgs;",
" private Process serverProcess;"
]
},
{
"added": [],
"header": "@@ -97,7 +102,6 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" this.inputStreamHolder = null;"
]
},
{
"added": [],
"header": "@@ -122,7 +126,6 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" this.inputStreamHolder = null;"
]
},
{
"added": [
" * <P>",
" * If the classes are being loaded from the classes",
" * folder instead of jar files then this will start",
" * the server up with no security manager using -noSecurityManager,",
" * unless the systemProperties or startupArgs set up any security",
" * manager.",
" * This is because the default policy",
" * installed by the network server only works from jar files.",
" * If this not desired then the test should skip the",
" * fixtures when loading from classes or",
" * install its own security manager.",
" boolean serverShouldComeUp"
],
"header": "@@ -131,14 +134,24 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" boolean serverShouldComeUp,",
" InputStream[] inputStreamHolder"
]
},
{
"added": [],
"header": "@@ -149,7 +162,6 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" this.inputStreamHolder = inputStreamHolder;"
]
},
{
"added": [
" { spawnedServer = startSeparateProcess(); }",
" if (serverShouldComeUp)",
" {",
" if (!pingForServerStart(networkServerController)) {",
" String msg = \"Timed out waiting for network server to start\";",
" // Dump the output from the spawned process",
" // and destroy it.",
" if (spawnedServer != null) {",
" spawnedServer.complete(true);",
" msg = spawnedServer.getFailMessage(msg);",
" spawnedServer = null;",
" }",
" fail(msg);",
" }",
" }"
],
"header": "@@ -164,13 +176,26 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" { serverProcess = startSeparateProcess(); }",
" if ( serverShouldComeUp ) { waitForServerStart(networkServerController); }"
]
},
{
"added": [
" private SpawnedProcess startSeparateProcess() throws Exception"
],
"header": "@@ -214,7 +239,7 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" private Process startSeparateProcess() throws Exception"
]
},
{
"added": [
" ",
" // Loading from classes need to work-around the limitation",
" // of the default policy file doesn't work with classes.",
" if (!TestConfiguration.loadingFromJars())",
" {",
" boolean setNoSecurityManager = true;",
" for (int i = 0; i < systemProperties.length; i++)",
" {",
" if (systemProperties[i].startsWith(\"java.security.\"))",
" {",
" setNoSecurityManager = false;",
" break;",
" }",
" }",
" for (int i = 0; i < startupArgs.length; i++)",
" {",
" if (startupArgs[i].equals(\"-noSecurityManager\"))",
" {",
" setNoSecurityManager = false;",
" break;",
" }",
" }",
" if (setNoSecurityManager)",
" {",
" String[] newArgs = new String[startupArgs.length + 1];",
" System.arraycopy(startupArgs, 0, newArgs, 0, startupArgs.length);",
" newArgs[newArgs.length - 1] = \"-noSecurityManager\";",
" startupArgs = newArgs;",
" }",
" }"
],
"header": "@@ -223,6 +248,36 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": []
},
{
"added": [
" Process serverProcess;",
" ",
" try {",
" serverProcess = (Process)",
" AccessController.doPrivileged",
" (",
" new PrivilegedExceptionAction()",
" public Object run() throws IOException",
" {",
" return Runtime.getRuntime().exec(command);",
" );",
" } catch (PrivilegedActionException e) {",
" throw e.getException();",
" }",
" return new SpawnedProcess(serverProcess, \"SpawnedNetworkServer\");",
" public SpawnedProcess getServerProcess() {",
" return spawnedServer;"
],
"header": "@@ -264,34 +319,33 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" Process serverProcess = (Process) AccessController.doPrivileged",
" (",
" new PrivilegedAction()",
" {",
" public Object run()",
" Process result = null;",
" try {",
" result = Runtime.getRuntime().exec( command );",
" } catch (Exception ex) {",
" ex.printStackTrace();",
" ",
" return result;",
" }",
" );",
" inputStreamHolder[ 0 ] = serverProcess.getInputStream();",
" return serverProcess;",
" public Process getServerProcess() {",
" return serverProcess;"
]
},
{
"added": [
" if (spawnedServer != null) {",
" spawnedServer.complete(false);",
" spawnedServer = null;"
],
"header": "@@ -322,9 +376,9 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" if (serverProcess != null) {",
" serverProcess.waitFor();",
" serverProcess = null;"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/SpawnedProcess.java",
"hunks": [
{
"added": [
"/*",
" *",
" * Derby - Class org.apache.derbyTesting.junit.SpawnedProcess",
" *",
" * Licensed to the Apache Software Foundation (ASF) under one or more",
" * contributor license agreements. See the NOTICE file distributed with",
" * this work for additional information regarding copyright ownership.",
" * The ASF licenses this file to You under the Apache License, Version 2.0",
" * (the \"License\"); you may not use this file except in compliance with",
" * the License. You may obtain a copy of the License at",
" *",
" * http://www.apache.org/licenses/LICENSE-2.0",
" *",
" * Unless required by applicable law or agreed to in writing, ",
" * software distributed under the License is distributed on an ",
" * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, ",
" * either express or implied. See the License for the specific ",
" * language governing permissions and limitations under the License.",
" */",
"package org.apache.derbyTesting.junit;",
"",
"import java.io.ByteArrayOutputStream;",
"import java.io.IOException;",
"import java.io.InputStream;",
"import java.io.PrintStream;",
"",
"/**",
" * Utility code that wraps a spawned process (Java Process object).",
" * Handles the output streams (stderr and stdout) written",
" * by the process by spawning off background threads to read",
" * them into byte arrays. The class provides access to the",
" * output, typically called once the process is complete.",
" */",
"public final class SpawnedProcess {",
"",
" private final String name;",
"",
" private final Process javaProcess;",
"",
" private final ByteArrayOutputStream err;",
"",
" private final ByteArrayOutputStream out;",
"",
" public SpawnedProcess(Process javaProcess, String name) {",
" this.javaProcess = javaProcess;",
" this.name = name;",
"",
" err = streamSaver(javaProcess.getErrorStream(), name",
" .concat(\":System.err\"));",
" out = streamSaver(javaProcess.getInputStream(), name",
" .concat(\":System.out\"));",
" }",
"",
" /**",
" * Get the Java Process object",
" */",
" public Process getProcess() {",
" return javaProcess;",
" }",
" ",
" /**",
" * Get the full server output (stdout) as a string using the default",
" * encoding which is assumed is how it was orginally",
" * written.",
" */",
" public String getFullServerOutput() throws Exception {",
" Thread.sleep(500);",
" synchronized (this) {",
" return out.toString(); ",
" }",
" }",
" ",
" /**",
" * Position offset for getNextServerOutput().",
" */",
" int stdOutReadOffset;",
" /**",
" * Get the next set of server output (stdout) as a string using the default",
" * encoding which is assumed is how it was orginally",
" * written. Assumes a single caller is executing the calls",
" * to this method.",
" */",
" public String getNextServerOutput() throws Exception",
" {",
" byte[] fullData;",
" synchronized (this) {",
" fullData = out.toByteArray();",
" }",
" ",
" String output = new String(fullData, stdOutReadOffset,",
" fullData.length - stdOutReadOffset);",
" stdOutReadOffset = fullData.length;",
" return output;",
" }",
" /**",
" * Get a fail message that is the passed in reason plus",
" * the stderr and stdout for any output written. Allows",
" * easier debugging if the reason the process failed is there!",
" */",
" public String getFailMessage(String reason) throws InterruptedException",
" {",
" Thread.sleep(500);",
" StringBuffer sb = new StringBuffer();",
" sb.append(reason);",
" sb.append(\":Spawned \");",
" sb.append(name);",
" sb.append(\" exitCode=\");",
" try {",
" sb.append(javaProcess.exitValue());",
" } catch (IllegalThreadStateException e) {",
" sb.append(\"running\");",
" }",
" ",
" synchronized (this) {",
" if (err.size() != 0)",
" {",
" sb.append(\"\\nSTDERR:\\n\");",
" sb.append(err.toString()); ",
" }",
" if (out.size() != 0)",
" {",
" sb.append(\"\\nSTDOUT:\\n\");",
" sb.append(out.toString()); ",
" }",
" }",
" return sb.toString();",
" }",
"",
" /**",
" * Complete the method.",
" * @param destroy True to destroy it, false to wait for it to complete.",
" */",
" public int complete(boolean destroy) throws InterruptedException, IOException {",
" if (destroy)",
" javaProcess.destroy();",
"",
" int exitCode = javaProcess.waitFor();",
" Thread.sleep(500);",
" synchronized (this) {",
"",
" // Always write the error",
" if (err.size() != 0) {",
" System.err.println(\"START-SPAWNED:\" + name + \" ERROR OUTPUT:\");",
" err.writeTo(System.err);",
" System.err.println(\"END-SPAWNED :\" + name + \" ERROR OUTPUT:\");",
" }",
"",
" // Only write the error if it appeared the server",
" // failed in some way.",
" if ((destroy || exitCode != 0) && out.size() != 0) {",
" System.out.println(\"START-SPAWNED:\" + name",
" + \" STANDARD OUTPUT: exit code=\" + exitCode);",
" out.writeTo(System.out);",
" System.out.println(\"END-SPAWNED :\" + name",
" + \" STANDARD OUTPUT:\");",
" }",
" }",
" ",
" return exitCode;",
" }",
"",
" private ByteArrayOutputStream streamSaver(final InputStream in,",
" final String name) {",
"",
" final ByteArrayOutputStream out = new ByteArrayOutputStream() {",
" public void reset() {",
" super.reset();",
" new Throwable(\"WWW\").printStackTrace(System.out);",
" }",
"",
" };",
"",
" Thread streamReader = new Thread(new Runnable() {",
"",
" public void run() {",
" try {",
" byte[] buffer = new byte[1024];",
" int read;",
" while ((read = in.read(buffer)) != -1) {",
" synchronized (SpawnedProcess.this) {",
" out.write(buffer, 0, read);",
" }",
" }",
"",
" } catch (IOException ioe) {",
" ioe.printStackTrace(new PrintStream(out, true));",
" }",
" }",
"",
" }, name);",
" streamReader.setDaemon(true);",
" streamReader.setPriority(Thread.MIN_PRIORITY);",
" streamReader.start();",
"",
" return out;",
"",
" }",
"}"
],
"header": "@@ -0,0 +1,198 @@",
"removed": []
}
]
}
] |
derby-DERBY-3509-8971c8b7
|
DERBY-3509: The replication log shipper is not notified when a new replication transmitter is instantiated in MC#handleException.
- Modified the handleException method to return the transmitter object that is created.
- The log shipper decides whether to continue shipping log records based on whether a valid transmitter object was created.
Contributed by V Narayanan
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@644291 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/AsynchronousLogShipper.java",
"hunks": [
{
"added": [
" private ReplicationMessageTransmit transmitter;"
],
"header": "@@ -66,7 +66,7 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" final private ReplicationMessageTransmit transmitter;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/MasterController.java",
"hunks": [
{
"added": [
" *",
" * @return an instance of the transmitter used to transmit messages to the",
" * slave.",
" ReplicationMessageTransmit handleExceptions(Exception exception) {"
],
"header": "@@ -503,8 +503,11 @@ public class MasterController",
"removed": [
" void handleExceptions(Exception exception) {"
]
},
{
"added": [
" return null;",
" return null;",
" return transmitter;"
],
"header": "@@ -540,11 +543,14 @@ public class MasterController",
"removed": []
}
]
}
] |
derby-DERBY-3514-740feb0a
|
DERBY-3514 Ensure that the client socket used for network server commands is closed if the command throws an exception.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@634865 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
" "
],
"header": "@@ -760,7 +760,7 @@ public final class NetworkServerControlImpl {",
"removed": [
""
]
},
{
"added": [
" \t\t\t"
],
"header": "@@ -806,7 +806,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t"
]
},
{
"added": [],
"header": "@@ -817,7 +817,6 @@ public final class NetworkServerControlImpl {",
"removed": [
""
]
},
{
"added": [
"\t \t"
],
"header": "@@ -860,10 +859,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t ",
" ",
"",
"\t"
]
},
{
"added": [
"\t\tPrintWriter savWriter;",
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_SHUTDOWN);",
" // DERBY-2109: transmit user credentials for System Privileges check",
" writeLDString(userArg);",
" writeLDString(passwordArg);",
" send();",
" readResult();",
" savWriter = logWriter;",
" // DERBY-1571: If logWriter is null, stack traces are printed to",
" // System.err. Set logWriter to a silent stream to suppress stack",
" // traces too.",
" FilterOutputStream silentStream = new FilterOutputStream(null) {",
" public void write(int b) {",
" }",
"",
" public void flush() {",
" }",
"",
" public void close() {",
" }",
" };",
" setLogWriter(new PrintWriter(silentStream));",
" for (ntry = 0; ntry < SHUTDOWN_CHECK_ATTEMPTS; ntry++) {",
" Thread.sleep(SHUTDOWN_CHECK_INTERVAL);",
" try {",
" pingWithNoOpen();",
" } catch (Exception e) {",
" // as soon as we can't ping return",
" break;",
" }",
" }",
" } finally {",
" closeSocket();",
" }\t\t"
],
"header": "@@ -996,39 +992,45 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_SHUTDOWN);",
"\t\t// DERBY-2109: transmit user credentials for System Privileges check",
"\t\twriteLDString(userArg);",
"\t\twriteLDString(passwordArg);",
"\t\tsend();",
"\t\treadResult();",
"\t\tPrintWriter savWriter = logWriter;",
"\t\t// DERBY-1571: If logWriter is null, stack traces are printed to",
"\t\t// System.err. Set logWriter to a silent stream to suppress stack",
"\t\t// traces too.",
"\t\tFilterOutputStream silentStream = new FilterOutputStream(null) {",
"\t\t\t\tpublic void write(int b) { }",
"\t\t\t\tpublic void flush() { }",
"\t\t\t\tpublic void close() { }",
"\t\t\t};",
"\t\tsetLogWriter(new PrintWriter(silentStream));",
"\t\tfor (ntry = 0; ntry < SHUTDOWN_CHECK_ATTEMPTS; ntry++)",
"\t\t{",
"\t\t\tThread.sleep(SHUTDOWN_CHECK_INTERVAL);",
"\t\t\ttry {",
" pingWithNoOpen();",
"\t\t\t} catch (Exception e) ",
"\t\t\t{",
" // as soon as we can't ping return",
"\t\t\t\tbreak;",
"\t\t\t}",
"\t\t}",
" closeSocket();",
" "
]
},
{
"added": [
" try {",
" setUpSocket();",
" pingWithNoOpen();",
" } finally {",
" closeSocket();",
" }"
],
"header": "@@ -1152,9 +1154,12 @@ public final class NetworkServerControlImpl {",
"removed": [
" setUpSocket();",
" pingWithNoOpen();",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_TRACE);",
" commandOs.writeInt(connNum);",
" writeByte(on ? 1 : 0);",
" send();",
" readResult();",
" consoleTraceMessage(connNum, on);",
" } finally {",
" closeSocket();",
" }",
" "
],
"header": "@@ -1204,14 +1209,18 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_TRACE);",
"\t\tcommandOs.writeInt(connNum);",
"\t\twriteByte(on ? 1 : 0);",
"\t\tsend();",
"\t\treadResult();",
"\t\tconsoleTraceMessage(connNum, on);",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_LOGCONNECTIONS);",
" writeByte(on ? 1 : 0);",
" send();",
" readResult();",
" } finally {",
" closeSocket();",
" }\t\t"
],
"header": "@@ -1246,12 +1255,15 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_LOGCONNECTIONS);",
"\t\twriteByte(on ? 1 : 0);",
"\t\tsend();",
"\t\treadResult();",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_TRACEDIRECTORY);",
" writeLDString(traceDirectory);",
" send();",
" readResult();",
" } finally {",
" closeSocket();",
" }\t\t",
" "
],
"header": "@@ -1260,12 +1272,16 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_TRACEDIRECTORY);",
"\t\twriteLDString(traceDirectory);",
"\t\tsend();",
"\t\treadResult();",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_MAXTHREADS);",
" commandOs.writeInt(max);",
" send();",
" readResult();",
" int newval = readInt();",
" consolePropertyMessage(\"DRDA_MaxThreadsChange.I\", new Integer(",
" newval).toString());",
" } finally {",
" closeSocket();",
" }\t\t",
" "
],
"header": "@@ -1323,15 +1339,19 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_MAXTHREADS);",
"\t\tcommandOs.writeInt(max);",
"\t\tsend();",
"\t\treadResult();",
"\t\tint newval = readInt();",
"\t\tconsolePropertyMessage(\"DRDA_MaxThreadsChange.I\", ",
" \t\t\t\t\tnew Integer(newval).toString());",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_TIMESLICE);",
" commandOs.writeInt(timeslice);",
" send();",
" readResult();",
" int newval = readInt();",
" consolePropertyMessage(\"DRDA_TimeSliceChange.I\",",
" new Integer(newval).toString());",
" } finally {",
" closeSocket();",
" } "
],
"header": "@@ -1345,15 +1365,18 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_TIMESLICE);",
"\t\tcommandOs.writeInt(timeslice);",
"\t\tsend();",
"\t\treadResult();",
"\t\tint newval = readInt();",
"\t\tconsolePropertyMessage(\"DRDA_TimeSliceChange.I\", ",
"\t\t\t\t\t\t\t\t\t new Integer(newval).toString());",
" closeSocket();"
]
},
{
"added": [
" try {",
" setUpSocket();",
" writeCommandHeader(COMMAND_PROPERTIES);",
" send();",
" byte[] val = readBytesReply(\"DRDA_PropertyError.S\");",
" ",
" Properties p = new Properties();",
" try {",
" ByteArrayInputStream bs = new ByteArrayInputStream(val);",
" p.load(bs);",
" } catch (IOException io) {",
" consolePropertyMessage(\"DRDA_IOException.S\", io.getMessage());",
" }",
" return p;",
" } finally {",
" closeSocket();",
" }\t\t"
],
"header": "@@ -1365,20 +1388,23 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tsetUpSocket();",
"\t\twriteCommandHeader(COMMAND_PROPERTIES);",
"\t\tsend();",
"\t\tbyte [] val = readBytesReply(\"DRDA_PropertyError.S\");",
" closeSocket();",
"\t\tProperties p = new Properties();",
"\t\ttry {",
"\t\t\tByteArrayInputStream bs = new ByteArrayInputStream(val);",
"\t\t\tp.load(bs);",
"\t\t} catch (IOException io) {",
"\t\t\tconsolePropertyMessage(\"DRDA_IOException.S\", ",
"\t\t\t\t\t\tio.getMessage());",
"\t\t}",
"\t\treturn p;"
]
},
{
"added": [
" "
],
"header": "@@ -2511,7 +2537,7 @@ public final class NetworkServerControlImpl {",
"removed": [
""
]
}
]
}
] |
derby-DERBY-3519-33efbae6
|
DERBY-3519: junit regression test failure in testInertTime(org.apache.derbyTesting.functionTests.tests.lang.TimeHandlingTest)junit.framework.AssertionFailedError: expected:<2> but was:<3>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1464386 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3520-86be795f
|
DERBY-3520 Convert views.sql to JUnit
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@635964 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3523-09828472
|
DERBY-3523 sql states (X0Y63, X0Y63, X0Y63.S) related to nulls in unique constraints are associated with wrong message texts
Add an upgrade test to verify that the messages are the same when created with 10.3 and
with soft upgrade.
Contributed by Anurag Shekhar (anurag dot shekhar at sun dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@641398 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3523-5915e886
|
Back out upgrade test change for DERBY-3523. It was causing failures in the
tinderbox run.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@641615 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3523-a8edfcf1
|
DERBY-3523 sql states (X0Y63, X0Y63, X0Y63.S) related to nulls in unique constraints are associated with wrong message texts
add upgrade tests. Contributed by Anurag Shekhar (anurag dot shekhar at sun dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@641911 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3525-3be5c9d7
|
DERBY-3525 Remove unneeded code to get JDBC level in BrokeredConnection and BrokeredStatement classes
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636753 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection.java",
"hunks": [
{
"added": [
"\t\treturn new BrokeredStatement(statementControl);"
],
"header": "@@ -447,7 +447,7 @@ public abstract class BrokeredConnection implements EngineConnection",
"removed": [
"\t\treturn new BrokeredStatement(statementControl, getJDBCLevel());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection40.java",
"hunks": [
{
"added": [
" return new BrokeredStatement40(statementControl);"
],
"header": "@@ -257,7 +257,7 @@ public class BrokeredConnection40 extends BrokeredConnection30 {",
"removed": [
" return new BrokeredStatement40(statementControl, getJDBCLevel());"
]
},
{
"added": [
" return new BrokeredPreparedStatement40(statementControl, sql, generatedKeys);"
],
"header": "@@ -265,7 +265,7 @@ public class BrokeredConnection40 extends BrokeredConnection30 {",
"removed": [
" return new BrokeredPreparedStatement40(statementControl, getJDBCLevel(), sql, generatedKeys);"
]
},
{
"added": [
" return new BrokeredCallableStatement40(statementControl, sql);"
],
"header": "@@ -273,7 +273,7 @@ public class BrokeredConnection40 extends BrokeredConnection30 {",
"removed": [
" return new BrokeredCallableStatement40(statementControl, getJDBCLevel(), sql);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -50,7 +50,6 @@ public class BrokeredStatement implements EngineStatement",
"removed": [
"\tfinal int jdbcLevel;"
]
},
{
"added": [
" BrokeredStatement(BrokeredStatementControl control) throws SQLException"
],
"header": "@@ -61,10 +60,9 @@ public class BrokeredStatement implements EngineStatement",
"removed": [
" BrokeredStatement(BrokeredStatementControl control, int jdbcLevel) throws SQLException",
"\t\tthis.jdbcLevel = jdbcLevel;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredStatement40.java",
"hunks": [
{
"added": [
" BrokeredStatement40(BrokeredStatementControl control) ",
" super(control);"
],
"header": "@@ -31,14 +31,13 @@ public class BrokeredStatement40 extends BrokeredStatement {",
"removed": [
" * @param jdbcLevel int",
" BrokeredStatement40(BrokeredStatementControl control, int jdbcLevel) ",
" super(control, jdbcLevel);"
]
}
]
}
] |
derby-DERBY-3526-d18f3385
|
DERBY-3526: AsynchronousLogShipper#workToDo is blocked while the log shipper sends a log chunk
Use a different object to synchronize on while calling wait and notifying the log shipper thread.
Contributed by V Narayanan
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642184 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/AsynchronousLogShipper.java",
"hunks": [
{
"added": [
" /**",
" * Object used to synchronize on while the log shipper thread",
" * is moved into the wait state, or while notifying it.",
" */",
" private Object objLSTSync = new Object(); // LST->Log Shippper Thread",
" "
],
"header": "@@ -106,6 +106,12 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": []
},
{
"added": [
" shippingInterval = calculateSIfromFI();",
" if (shippingInterval != -1) {",
" synchronized(objLSTSync) {",
" objLSTSync.wait(shippingInterval);"
],
"header": "@@ -197,10 +203,10 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" synchronized(this) {",
" shippingInterval = calculateSIfromFI();",
" if (shippingInterval != -1) {",
" wait(shippingInterval);"
]
},
{
"added": [
" synchronized(objLSTSync) {",
" objLSTSync.notify();"
],
"header": "@@ -302,11 +308,11 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" synchronized(this) {",
" notify();"
]
},
{
"added": [
" synchronized (objLSTSync) {",
" objLSTSync.notify();"
],
"header": "@@ -362,8 +368,8 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" synchronized (this) {",
" notify();"
]
}
]
}
] |
derby-DERBY-3527-c283bcdc
|
DERBY-3527: The slave will not notice that a network cable is unplugged and will therefore reject failover/stopSlave commands
Checks whether the network connection is up by sending a ping message from the slave to the master.
The message must be sent from a separate thread because the TCP timeout for sending a message is two minutes and is not configurable.
Also added a message reader thread on the master that currently accepts two kinds of messages: ping and ack.
Contributed by Jorgen Loland
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642982 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/AsynchronousLogShipper.java",
"hunks": [
{
"added": [
" int buffers = ReplicationLogBuffer.DEFAULT_NUMBER_LOG_BUFFERS;"
],
"header": "@@ -417,7 +417,7 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" int buffers = logBuffer.DEFAULT_NUMBER_LOG_BUFFERS;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/MasterController.java",
"hunks": [
{
"added": [
" ack = transmitter.sendMessageWaitForReply(mesg);"
],
"header": "@@ -308,14 +308,11 @@ public class MasterController",
"removed": [
" transmitter.sendMessage(mesg);",
" ack = transmitter.readMessage();",
" } catch (ClassNotFoundException cnfe) {",
" handleFailoverFailure(cnfe);"
]
},
{
"added": [
" if (transmitter != null) {",
" transmitter.tearDown();",
" }",
" transmitter = new ReplicationMessageTransmit(slavehost,",
" slaveport,",
" dbname);"
],
"header": "@@ -462,7 +459,12 @@ public class MasterController",
"removed": [
" transmitter = new ReplicationMessageTransmit(slavehost, slaveport);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageReceive.java",
"hunks": [
{
"added": [
" /* -- Ping-thread related fields start -- */",
"",
" /** The maximum number of millis to wait before giving up waiting for",
" * a ping response*/",
" private static final int DEFAULT_PING_TIMEOUT = 5000; // 5 seconds",
"",
" /** Thread used to send ping messages to master to check if the connection",
" * is working. The ping message must be sent from a separate thread",
" * because failed message shipping over TCP does not timeout for two",
" * minutes (not configurable). */",
" private Thread pingThread = null;",
"",
" /** Used to terminate the ping thread. */",
" private boolean killPingThread = false;",
"",
" /** Whether or not the connection with the master is confirmed to be",
" * working. Set to false by isConnectedToMaster, set to true when",
" * a pong (i.e., a response to a ping) is received. Field protected by",
" * receivePongSemephore */",
" private boolean connectionConfirmed = false;",
"",
" /** Used for synchronization of the ping thread */",
" private final Object sendPingSemaphore = new Object();",
"",
" /** Used for synchronization when waiting for a ping reply message */",
" private final Object receivePongSemaphore = new Object();",
"",
" /* -- Ping-thread related fields stop -- */",
""
],
"header": "@@ -59,6 +59,35 @@ public class ReplicationMessageReceive {",
"removed": []
},
{
"added": [
"",
" killPingThread = false;",
" pingThread = new SlavePingThread(dbname);",
" pingThread.setDaemon(true);",
" pingThread.start();",
""
],
"header": "@@ -157,6 +186,12 @@ public class ReplicationMessageReceive {",
"removed": []
},
{
"added": [
" synchronized (sendPingSemaphore) {",
" killPingThread = true;",
" sendPingSemaphore.notify();",
" }",
""
],
"header": "@@ -189,6 +224,11 @@ public class ReplicationMessageReceive {",
"removed": []
},
{
"added": [
" * or a connection failure occurs. Replication network layer specific",
" * messages (i.e. ping/pong messages) are handled internally and are not",
" * returned."
],
"header": "@@ -369,7 +409,9 @@ public class ReplicationMessageReceive {",
"removed": [
" * or a connection failure occurs."
]
},
{
"added": [
" ReplicationMessage msg = (ReplicationMessage)socketConn.readMessage();",
"",
" if (msg.getType() == ReplicationMessage.TYPE_PONG) {",
" // If a pong is received, connection is confirmed to be working.",
" synchronized (receivePongSemaphore) {",
" connectionConfirmed = true;",
" receivePongSemaphore.notify();",
" }",
" // Pong messages are network layer specific. Do not return these",
" return readMessage();",
" } else {",
" return msg;",
" }"
],
"header": "@@ -384,7 +426,19 @@ public class ReplicationMessageReceive {",
"removed": [
" return (ReplicationMessage)socketConn.readMessage();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/net/ReplicationMessageTransmit.java",
"hunks": [
{
"added": [
"import java.net.SocketTimeoutException;"
],
"header": "@@ -23,6 +23,7 @@ package org.apache.derby.impl.store.replication.net;",
"removed": []
},
{
"added": [
" /** Number of millis to wait for a response message before timing out",
" */",
" private final int DEFAULT_MESSAGE_RESPONSE_TIMEOUT = 5000;",
"",
" /** The thread that listens for messages from the slave */",
" private Thread msgReceiver = null;",
"",
" /** Used to synchronize when waiting for a response message from the slave",
" */",
" private final Object receiveSemaphore = new Object();",
"",
" /** The message received from the slave as a response to sending a",
" * message. */",
" private ReplicationMessage receivedMsg = null;",
"",
" /** Whether or not to keep the message receiver thread alive. Set to true",
" * to terminate the thread */",
" private volatile boolean stopMessageReceiver = false;",
""
],
"header": "@@ -41,6 +42,25 @@ import org.apache.derby.shared.common.reference.MessageId;",
"removed": []
},
{
"added": [
"",
" /**",
" * The name of the replicated database",
" */",
" private String dbname;"
],
"header": "@@ -51,6 +71,11 @@ public class ReplicationMessageTransmit {",
"removed": []
},
{
"added": [
" * @param dbname The name of the replicated database",
" public ReplicationMessageTransmit(String hostName, int portNumber,",
" String dbname)",
" throws UnknownHostException {",
" this.dbname = dbname;"
],
"header": "@@ -59,12 +84,15 @@ public class ReplicationMessageTransmit {",
"removed": [
" public ReplicationMessageTransmit(String hostName, int portNumber) ",
" throws UnknownHostException {"
]
},
{
"added": [
" // keep socket alive even if no log is shipped for a long time",
" s.setKeepAlive(true);",
"",
" // Start the thread that will listen for incoming messages.",
" startMessageReceiverThread(dbname);",
" // Verify that the master and slave have the same software version",
" // and exactly equal log files.",
" brokerConnection(synchOnInstant);"
],
"header": "@@ -120,17 +148,17 @@ public class ReplicationMessageTransmit {",
"removed": [
" //The reads on the InputStreams obtained from the socket on the",
" //transmitter should not hang indefinitely. Use the timeout",
" //used for the connection establishment here to ensure that the",
" //reads timeout after the timeout period mentioned for the",
" //connection.",
" s.setSoTimeout(timeout_);",
" //send the initiate message and receive acknowledgment",
" sendInitiatorAndReceiveAck(synchOnInstant);"
]
},
{
"added": [
" stopMessageReceiver = true;",
" msgReceiver = null;",
" socketConn = null;"
],
"header": "@@ -141,8 +169,11 @@ public class ReplicationMessageTransmit {",
"removed": []
},
{
"added": [
" * Send a replication message to the slave and return the",
" * message received as a response. Will only wait",
" * DEFAULT_MESSAGE_RESPONSE_TIMEOUT millis for the response",
" * message. If not received when the wait times out, no message is",
" * returned. The method is synchronized to guarantee that only one",
" * thread will be waiting for a response message at any time.",
" * @param message a ReplicationMessage object that contains the message to",
" * be transmitted.",
" * @return the response message",
" * @throws IOException 1) if an exception occurs while sending or receiving",
" * a message.",
" * @throws StandardException if the response message has not been received",
" * after DEFAULT_MESSAGE_RESPONSE_TIMEOUT millis",
" public synchronized ReplicationMessage",
" sendMessageWaitForReply(ReplicationMessage message)",
" throws IOException, StandardException {",
" receivedMsg = null;",
" socketConn.writeMessage(message);",
" synchronized (receiveSemaphore) {",
" try {",
" receiveSemaphore.wait(DEFAULT_MESSAGE_RESPONSE_TIMEOUT);",
" } catch (InterruptedException ie) {",
" }",
" }",
" if (receivedMsg == null) {",
" throw StandardException.",
" newException(SQLState.REPLICATION_CONNECTION_LOST, dbname);",
"",
" }",
" return receivedMsg;"
],
"header": "@@ -162,23 +193,41 @@ public class ReplicationMessageTransmit {",
"removed": [
" * Used to read a replication message sent by the slave. This method",
" * would wait on the connection from the slave until a message is received",
" * or a connection failure occurs.",
" *",
" * @return the reply message.",
" * @throws ClassNotFoundException Class of a serialized object cannot",
" * be found.",
" * @throws IOException 1) if an exception occurs while reading from the",
" * stream.",
" public ReplicationMessage readMessage() throws",
" ClassNotFoundException, IOException {",
" return (ReplicationMessage)socketConn.readMessage();"
]
},
{
"added": [
" private void brokerConnection(long synchOnInstant)",
" verifyMessageType(sendMessageWaitForReply(initiatorMsg),",
" ReplicationMessage.TYPE_ACK);",
" verifyMessageType(sendMessageWaitForReply(initiatorMsg),",
" ReplicationMessage.TYPE_ACK);"
],
"header": "@@ -205,22 +254,22 @@ public class ReplicationMessageTransmit {",
"removed": [
" private void sendInitiatorAndReceiveAck(long synchOnInstant)",
" sendMessage(initiatorMsg);",
" verifyMessageAck(readMessage());",
" sendMessage(initiatorMsg);",
" verifyMessageAck(readMessage());"
]
},
{
"added": [
" private boolean verifyMessageType(ReplicationMessage message,",
" int expectedType)",
" if (message.getType() == expectedType) {",
" return true;",
" } else if (message.getType() == ReplicationMessage.TYPE_ERROR) {",
" String exception[] = (String[])message.getMessage();"
],
"header": "@@ -236,15 +285,16 @@ public class ReplicationMessageTransmit {",
"removed": [
" private void verifyMessageAck(ReplicationMessage ack) ",
" if (ack.getType() == ReplicationMessage.TYPE_ACK) {",
" return;",
" } else if (ack.getType() == ReplicationMessage.TYPE_ERROR) {",
" String exception[] = (String[])ack.getMessage();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/slave/SlaveController.java",
"hunks": [
{
"added": [],
"header": "@@ -76,7 +76,6 @@ public class SlaveController",
"removed": [
" private volatile boolean connectedToMaster = false;"
]
},
{
"added": [
" if (!forcedStop && isConnectedToMaster()){"
],
"header": "@@ -266,7 +265,7 @@ public class SlaveController",
"removed": [
" if (!forcedStop && connectedToMaster){"
]
},
{
"added": [
" if (isConnectedToMaster()){"
],
"header": "@@ -274,7 +273,7 @@ public class SlaveController",
"removed": [
" if (connectedToMaster){"
]
},
{
"added": [],
"header": "@@ -339,7 +338,6 @@ public class SlaveController",
"removed": [
" connectedToMaster = true;"
]
},
{
"added": [],
"header": "@@ -369,7 +367,6 @@ public class SlaveController",
"removed": [
" connectedToMaster = false;"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/SQLState.java",
"hunks": [
{
"added": [
" String REPLICATION_CONNECTION_EXCEPTION = \"XRE04.U.1\";",
" String REPLICATION_CONNECTION_LOST = \"XRE04.U.2\";"
],
"header": "@@ -1777,7 +1777,8 @@ public interface SQLState {",
"removed": [
" String REPLICATION_CONNECTION_EXCEPTION = \"XRE04\";"
]
}
]
}
] |
derby-DERBY-353-55b25e3e
|
DERBY-353: Return last user specified value for IDENTITY BY DEFAULT columns. The fix changed:
a) Inside the InsertResultSet class getSetAutoincrementValue function, the Data Dictionary is accessed to get the current identity value, which is then stored in the identityVal variable used by the identity_val_local() function to report the value to the user.
b) The above process did not happen when there was a user-supplied value: the system table was not accessed and the setIdentity() function in the GenericLanguageContext class was not called.
c) The fix handles the case where the user supplies a value for the identity column (which is allowed for GENERATED BY DEFAULT columns) by calling the setIdentity function to store that value as well.
Thanks to Rick for doing code reviews.
Submitted by V Narayanan (V.Narayanan@Sun.COM)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@267331 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/InsertResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -178,7 +178,6 @@ public class InsertResultSet extends DMLWriteResultSet implements TargetResultSe",
"removed": [
""
]
},
{
"added": [
"",
" RowLocation[] rla;",
""
],
"header": "@@ -371,8 +370,9 @@ public class InsertResultSet extends DMLWriteResultSet implements TargetResultSe",
"removed": [
"\t\t",
"\t\tRowLocation[] rla;"
]
},
{
"added": [
" NumberDataValue dvd;"
],
"header": "@@ -719,7 +719,7 @@ public class InsertResultSet extends DMLWriteResultSet implements TargetResultSe",
"removed": [
"\t\tNumberDataValue dvd;"
]
},
{
"added": [
" long user_autoinc=0;"
],
"header": "@@ -861,6 +861,7 @@ public class InsertResultSet extends DMLWriteResultSet implements TargetResultSe",
"removed": []
},
{
"added": [
" ",
" if(constants.hasAutoincrement())",
" {",
" dd = lcc.getDataDictionary();",
" td = dd.getTableDescriptor(constants.targetUUID);",
" ",
" int maxColumns = td.getMaxColumnID();",
" int col;",
" ",
" for(col=1;col<=maxColumns;col++)",
" {",
" ColumnDescriptor cd = td.getColumnDescriptor(col);",
" if(cd.isAutoincrement())",
" {",
" break;",
" }",
" }",
" ",
" if(col <= maxColumns)",
" {",
" DataValueDescriptor dvd = row.cloneColumn(col);",
" user_autoinc = dvd.getLong();",
" }",
" } ",
" if (constants.singleRowSource)",
"\t {",
"\t\t\trow = null;",
"\t }",
"\t else",
"\t {",
"\t\trow = getNextRowCore(sourceResultSet);",
"\t }"
],
"header": "@@ -999,16 +1000,40 @@ public class InsertResultSet extends DMLWriteResultSet implements TargetResultSe",
"removed": [
"\t\t\tif (constants.singleRowSource)",
"\t\t\t{",
"\t\t\t\trow = null;",
"\t\t\t}",
"\t\t\telse",
"\t\t\t{",
"\t\t\t\trow = getNextRowCore(sourceResultSet);",
"\t\t\t}"
]
}
]
}
] |
derby-DERBY-3536-b2fb1d5d
|
DERBY-3536: Fix casting of DECIMALs in TableFunctions on J2ME platforms.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@639428 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/VTIResultSet.java",
"hunks": [
{
"added": [
" else if ( typeID.isDecimalTypeId() ) { castDecimal( dtd, dvd ); }"
],
"header": "@@ -718,6 +718,7 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" /**",
" * <p>",
" * Set the correct precision and scale for a decimal value.",
" * </p>",
" */",
" private void castDecimal( DataTypeDescriptor dtd, DataValueDescriptor dvd )",
" throws StandardException",
" {",
" VariableSizeDataValue vsdv = (VariableSizeDataValue) dvd;",
" ",
" vsdv.setWidth( dtd.getPrecision(), dtd.getScale(), false );",
" }",
" "
],
"header": "@@ -772,5 +773,18 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": []
}
]
}
] |
derby-DERBY-3542-9f61c8b3
|
DERBY-3543: NetworkServerControl with options but no command does not give usage message.
Patch contributed by Martin Zaun
Patch file: DERBY-3542-1.diff (wrong name, correct file)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@649381 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
" int command = findCommand(args);",
" if (command == COMMAND_UNKNOWN)",
" consolePropertyMessage(\"DRDA_NoCommand.U\");"
],
"header": "@@ -2103,12 +2103,10 @@ public final class NetworkServerControlImpl {",
"removed": [
" int command = COMMAND_START; ",
" if (args.length > 0)",
" command = findCommand(args);",
" else",
" consolePropertyMessage(\"DRDA_NoArgs.U\");"
]
}
]
}
] |
derby-DERBY-3543-9f61c8b3
|
DERBY-3543: NetworkServerControl with options but no command does not give usage message.
Patch contributed by Martin Zaun
Patch file: DERBY-3542-1.diff (wrong name, correct file)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@649381 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
" int command = findCommand(args);",
" if (command == COMMAND_UNKNOWN)",
" consolePropertyMessage(\"DRDA_NoCommand.U\");"
],
"header": "@@ -2103,12 +2103,10 @@ public final class NetworkServerControlImpl {",
"removed": [
" int command = COMMAND_START; ",
" if (args.length > 0)",
" command = findCommand(args);",
" else",
" consolePropertyMessage(\"DRDA_NoArgs.U\");"
]
}
]
}
] |
derby-DERBY-3544-c5d78768
|
DERBY-3544 If the network server fails to shut down in NetworkServerTestSetup.teardown(), then destroy the process if it was run as a separate jvm. Otherwise there was a chance of a hang when executing the wait on the process object.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@639433 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java",
"hunks": [
{
"added": [
" Throwable failedShutdown = null;",
" failedShutdown = t;"
],
"header": "@@ -369,13 +369,14 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" t.printStackTrace( System.out );"
]
},
{
"added": [
" // Destroy the process if a failed shutdown",
" // to avoid hangs running tests as the complete()",
" // waits for the process to complete.",
" spawnedServer.complete(failedShutdown != null);",
" ",
" // Throw an error to record the fact that the",
" // shutdown failed.",
" if (failedShutdown != null)",
" {",
" if (failedShutdown instanceof Exception)",
" throw (Exception) failedShutdown;",
" ",
" throw (Error) failedShutdown;",
" }",
" "
],
"header": "@@ -384,9 +385,23 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" spawnedServer.complete(false);"
]
}
]
}
] |
derby-DERBY-3548-ce9c46e9
|
DERBY-3548 Change SystemPrivilegesPermissionTest to only perform the fixture that tests using Subject if that class is available on the current virtual machine. Allows the test to pass and run useful fixtures in J2ME/CDC/Foundation 1.1
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@639059 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3549-7a6650c3
|
DERBY-3549: Unable to start slave mode after authentication failure on a previous startSlave attempt
Unboot database if startslave command fails on authentication.
Contributed by Jorgen Loland
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642219 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
" // Set to true if startSlave command is attempted on an",
" // already booted database. Will raise an exception when",
" // credentials have been verified",
" boolean slaveDBAlreadyBooted = false;"
],
"header": "@@ -234,6 +234,10 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": []
},
{
"added": [
" // If the slave database has already been booted,",
" // the command should fail. Setting",
" // slaveDBAlreadyBooted to true will cause an",
" // exception to be thrown, but not until after",
" // credentials have been verified so that db boot",
" // information is not exposed to unauthorized",
" // users",
" slaveDBAlreadyBooted = true;",
" } else {",
" // We need to boot the slave database two times. The",
" // first boot will check authentication and",
" // authorization. The second boot will put the",
" // database in replication slave mode. SLAVE_PRE_MODE",
" // ensures that log records are not written to disk",
" // during the first boot. This is necessary because",
" // the second boot needs a log that is exactly equal",
" // to the log at the master.",
" info.setProperty(SlaveFactory.REPLICATION_MODE,",
" SlaveFactory.SLAVE_PRE_MODE);"
],
"header": "@@ -276,21 +280,26 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
" throw StandardException.newException(",
" SQLState.CANNOT_START_SLAVE_ALREADY_BOOTED,",
" getTR().getDBName());",
"",
" // We need to boot the slave database two times. The",
" // first boot will check authentication and",
" // authorization. The second boot will put the",
" // database in replication slave mode. SLAVE_PRE_MODE",
" // ensures that log records are not written to disk",
" // during the first boot. This is necessary because",
" // the second boot needs a log that is exactly equal",
" // to the log at the master.",
" info.setProperty(SlaveFactory.REPLICATION_MODE,",
" SlaveFactory.SLAVE_PRE_MODE);"
]
},
{
"added": [
" try {",
" checkUserCredentials(tr.getDBName(), info);",
" } catch (SQLException sqle) {",
" if (isStartSlaveBoot && !slaveDBAlreadyBooted) {",
" // Failing credentials check on a previously",
" // unbooted db should not leave the db booted",
" // for startSlave command.",
"",
" // tr.startTransaction is needed to get the",
" // Database context. Without this context,",
" // handleException will not shutdown the",
" // database",
" tr.startTransaction();",
" handleException(tr.shutdownDatabaseException());",
" }",
" throw sqle;",
" }"
],
"header": "@@ -374,7 +383,23 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\tcheckUserCredentials(tr.getDBName(), info);"
]
}
]
}
] |
derby-DERBY-3554-c6d09729
|
DERBY-3554 Change Collation test to run DatabaseMetaDataTest, BatchUpdateTest, GroupByExpressionTest, and UpdateableResultSetTest for only one locale
Contributed by Suran Jayathilaka (suranjay at gmail dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@640982 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3562-a292c89b
|
DERBY-3562: Delete unnecessary, old log files on the slave when a checkpoint is encountered in the log received from the master.
Contributed by Jorgen Loland
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@643336 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java",
"hunks": [
{
"added": [
"\t\tif ((firstLogNeeded = getFirstLogNeeded(checkpoint))==-1)",
"\t\t\treturn;",
"\t\ttruncateLog(firstLogNeeded);",
"\t}",
"\t/** Get rid of old and unnecessary log files",
"\t * @param firstLogNeeded The log file number of the oldest log file",
"\t * needed for recovery.",
"\t */",
"\tprivate void truncateLog(long firstLogNeeded) {",
"\t\tlong oldFirstLog;"
],
"header": "@@ -2194,13 +2194,20 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": [
"\t\tlong oldFirstLog;",
"\t\tif ((firstLogNeeded = getFirstLogNeeded(checkpoint))==-1)",
"\t\t\treturn;"
]
},
{
"added": [
" * ",
" * @throws org.apache.derby.iapi.error.StandardException ",
" */",
"\tpublic void checkpointInRFR(LogInstant cinstant, long redoLWM,",
"\t\t\t\t\t\t\t\tlong undoLWM, DataFactory df)",
"\t\t\t\t\t\t\t\tthrows StandardException"
],
"header": "@@ -5049,8 +5056,12 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": [
"\t*/",
"\tpublic void checkpointInRFR(LogInstant cinstant, long redoLWM, DataFactory df) throws StandardException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/xact/Xact.java",
"hunks": [
{
"added": [
"\t * Perform a checkpoint during rollforward recovery.",
" * ",
" * @throws org.apache.derby.iapi.error.StandardException ",
" */",
"\t\t\t\t\t\t\t\t\t\t\t\tlong redoLWM, long undoLWM)",
"\t\tlogFactory.checkpointInRFR(cinstant, redoLWM, undoLWM, dataFactory);"
],
"header": "@@ -2745,13 +2745,15 @@ public class Xact extends RawTransaction implements Limit {",
"removed": [
"\t\tperform a checkpoint during rollforward recovery",
"\t*/",
"\t\t\t\t\t\t\t\t\t\t\t\tlong redoLWM) ",
"\t\tlogFactory.checkpointInRFR(cinstant, redoLWM, dataFactory);"
]
}
]
}
] |
derby-DERBY-3566-419210d9
|
DERBY-3566 Alter column set data type not allowed in soft upgrade mode with unique constraint.
Contributed by Anurag Shekhar (anurag dot shekhar at sun dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642420 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3567-0c97a104
|
DERBY-3567: Makes ALS#forceFlush time out after 5 seconds if not able to send message.
Contributed by Jorgen Loland with modifications by Oystein Grovlen
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@644716 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/AsynchronousLogShipper.java",
"hunks": [
{
"added": [
" private volatile boolean stopShipping = false;"
],
"header": "@@ -99,7 +99,7 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" private boolean stopShipping = false;"
]
},
{
"added": [
"",
" /** Used to synchronize forceFlush calls */",
" private Object forceFlushSemaphore = new Object();",
"",
" /** The number of millis a call to forceFlush will wait before giving",
" * up sending a chunk of log to the slave */",
" public static final int DEFAULT_FORCEFLUSH_TIMEOUT = 5000;"
],
"header": "@@ -111,6 +111,13 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": []
},
{
"added": [
" synchronized (forceFlushSemaphore) {",
" // Wake up a thread waiting for forceFlush, if any",
" forceFlushSemaphore.notify();",
" }"
],
"header": "@@ -201,6 +208,10 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": []
},
{
"added": [
" public void forceFlush() throws IOException, StandardException ",
" {",
" if (stopShipping) return;",
" synchronized (forceFlushSemaphore) {",
" synchronized (objLSTSync) {",
" // Notify the log shipping thread that",
" // it is time for another send.",
" objLSTSync.notify();",
" }",
"",
" try {",
" forceFlushSemaphore.wait(DEFAULT_FORCEFLUSH_TIMEOUT);",
" } catch (InterruptedException ex) {",
" }"
],
"header": "@@ -310,16 +321,20 @@ public class AsynchronousLogShipper extends Thread implements",
"removed": [
" public void forceFlush() throws IOException, StandardException {",
" if (!stopShipping) {",
" shipALogChunk();",
" }",
" ",
" synchronized(objLSTSync) {",
" //There will still be more log to send after the forceFlush",
" //has sent one chunk. Notify the log shipping thread that",
" //it is time for another send.",
" objLSTSync.notify();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/replication/master/MasterController.java",
"hunks": [
{
"added": [
" // Either the forceFlush succeeded in sending a chunk of log",
" // (making room for this log chunk in the buffer), or",
" // forceFlush did not succeed (in which case replication is",
" // stopped)",
" logBuffer.appendLog(greatestInstant, log,",
" logOffset, logLength);",
" } catch (LogBufferFullException lbfe2) {",
" printStackAndStopMaster(lbfe2);"
],
"header": "@@ -413,8 +413,14 @@ public class MasterController",
"removed": [
" // There should now be room for this log chunk in the buffer",
" appendLog(greatestInstant, log, logOffset, logLength);"
]
}
]
}
] |
derby-DERBY-3570-fba255cd
|
DERBY-3570: Add ability to declare DETERMINISTIC routines.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@701367 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3571-f55d30a1
|
DERBY-3571: LOB locators are not released if the LOB columns are not accessed by the client.
Added a release mechanism for LOBs. The client will keep track of locators and release them when the result set position is changed, or when the result set is closed. Locators are released one by one with individual stored procedure calls. This is rather inefficient and should be optimized (for instance by piggybacking).
Also enabled a new test as part of the derbynet suite.
Patch file: derby-3571-2a-simple_release.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@643819 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -28,6 +28,7 @@ import java.sql.SQLException;",
"removed": []
},
{
"added": [
" /** Tracker object for LOB state, used to free locators on the server. */",
" private LOBStateTracker lobState = null;"
],
"header": "@@ -37,6 +38,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" // See if there are open locators on the current row, if valid.",
" if (isValidCursorPosition_ && !isOnInsertRow_) {",
" lobState.checkCurrentRow(cursor_);",
" }",
" // NOTE: The preClose_ method must also check for locators if",
" // prefetching of data is enabled for result sets containing LOBs."
],
"header": "@@ -427,6 +430,12 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" /**",
" * Moves off the insert row if positioned there, and checks the current row",
" * for releasable LOB locators if positioned on a valid data row.",
" *",
" * @throws SqlException if releasing a LOB locator fails",
" */"
],
"header": "@@ -3758,6 +3767,12 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" if (isValidCursorPosition_) {",
" // isOnInsertRow must be false here.",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(!isOnInsertRow_,",
" \"Cannot check current row if positioned on insert row\");",
" }",
" lobState.checkCurrentRow(cursor_);",
" }"
],
"header": "@@ -3768,6 +3783,14 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" lobState.discardState(); // Locators released on server side."
],
"header": "@@ -4339,6 +4362,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" lobState.discardState(); // Locators released on server side."
],
"header": "@@ -4351,6 +4375,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Statement.java",
"hunks": [
{
"added": [
" // Create tracker for LOB locator columns.",
" resultSet.createLOBColumnTracker();"
],
"header": "@@ -1489,6 +1489,8 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetCursor.java",
"hunks": [
{
"added": [
" * <p>",
" * Note that this method cannot be invoked on a LOB column that is NULL.",
" *",
" protected int locator(int column)"
],
"header": "@@ -1054,11 +1054,14 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" private int locator(int column)"
]
},
{
"added": [
" netResultSet_.markLOBAsAccessed(column);"
],
"header": "@@ -1075,6 +1078,7 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": []
}
]
}
] |
derby-DERBY-3574-66a98185
|
DERBY-3574: Clean up whitespace problems introduced in revision 647931
(trailing whitespace, extra blank lines, mixing tabs and spaces on the
same line)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@648180 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Blob.java",
"hunks": [
{
"added": [
""
],
"header": "@@ -29,9 +29,7 @@ import java.sql.SQLException;",
"removed": [
" ",
" ",
" "
]
},
{
"added": [],
"header": "@@ -610,7 +608,6 @@ public class Blob extends Lob implements java.sql.Blob {",
"removed": [
" "
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Clob.java",
"hunks": [
{
"added": [],
"header": "@@ -48,17 +48,12 @@ public class Clob extends Lob implements java.sql.Clob {",
"removed": [
" ",
"",
" ",
" ",
" "
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
""
],
"header": "@@ -118,7 +118,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" "
]
},
{
"added": [
" flowCommit();"
],
"header": "@@ -561,7 +561,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" flowCommit(); "
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Lob.java",
"hunks": [
{
"added": [
""
],
"header": "@@ -58,7 +58,7 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": [
" "
]
},
{
"added": [
""
],
"header": "@@ -76,7 +76,7 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": [
" "
]
},
{
"added": [
" if (lengthObtained_) return sqlLength_;"
],
"header": "@@ -110,7 +110,7 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": [
" \tif (lengthObtained_) return sqlLength_;"
]
},
{
"added": [
" * Checks if isValid is true and whether the transaction that",
" * created the Lob is still active. If any of which is not true throws",
" * an invalid LOB object.",
" * @throws SQLException if isValid is not true or the transaction that",
" * created the Lob is not active",
"",
" // If there isn't an open connection, the Lob is invalid.",
" try {",
" agent_.connection_.checkForClosedConnection();",
" } catch (SqlException se) {",
" throw se.getSQLException();",
" }",
""
],
"header": "@@ -387,24 +387,23 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": [
" * Checks is isValid is true and whether the transaction that",
" * created the Lob is still active. If any of which is not true throws ",
" * an invalid LOB object",
" * @throws SQLException if isValid is not true",
" \t",
" \t/**",
" \t * If there isn't an open connection, the Lob is invalid.",
" \t */",
" \ttry{",
" \t\tagent_.connection_.checkForClosedConnection();",
" \t}catch(SqlException se){",
" \t\tthrow se.getSQLException();",
" \t}",
" \t"
]
}
]
}
] |
derby-DERBY-3574-80555116
|
DERBY-3574 With client, attempting to get the lob length after commit or connection close if there was a call to length() before commit does not throw an exception
Contributed by Tiago R. Espinha (tiago at espinhas dot net)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@647931 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Blob.java",
"hunks": [
{
"added": [
" "
],
"header": "@@ -30,9 +30,7 @@ import org.apache.derby.shared.common.reference.SQLState;",
"removed": [
" //This boolean variable indicates whether the Blob object has",
" //been invalidated by calling free() on it",
" private boolean isValid = true;"
]
},
{
"added": [
" if (!isValid_) return;",
" ",
" isValid_ = false;"
],
"header": "@@ -608,11 +606,12 @@ public class Blob extends Lob implements java.sql.Blob {",
"removed": [
" if (!isValid) return;",
" isValid = false;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Clob.java",
"hunks": [
{
"added": [
" ",
" ",
" ",
" "
],
"header": "@@ -48,16 +48,17 @@ public class Clob extends Lob implements java.sql.Clob {",
"removed": [
" //This boolean variable indicates whether the Clob object has",
" //been invalidated by calling free() on it",
" private boolean isValid = true;"
]
},
{
"added": [
" "
],
"header": "@@ -240,6 +241,7 @@ public class Clob extends Lob implements java.sql.Clob {",
"removed": []
},
{
"added": [
" if (!isValid_) return;",
" isValid_ = false;"
],
"header": "@@ -816,11 +818,11 @@ public class Clob extends Lob implements java.sql.Clob {",
"removed": [
" if (!isValid) return;",
" isValid = false;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
" private int transactionID_ = 0;",
" "
],
"header": "@@ -117,6 +117,8 @@ public abstract class Connection implements java.sql.Connection,",
"removed": []
},
{
"added": [
" flowCommit(); "
],
"header": "@@ -559,7 +561,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" flowCommit();"
]
},
{
"added": [
" /**",
" * Returns the ID of the active transaction for this connection.",
" * @return the ID of the active transaction",
" */",
" public int getTransactionID(){",
" \treturn transactionID_;",
" }",
" "
],
"header": "@@ -1103,6 +1105,14 @@ public abstract class Connection implements java.sql.Connection,",
"removed": []
},
{
"added": [
" transactionID_++;"
],
"header": "@@ -1973,6 +1983,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Lob.java",
"hunks": [
{
"added": [
" /**",
" * This boolean variable indicates whether the Lob object has been",
" * invalidated by calling free() on it",
" */",
" protected boolean isValid_ = true;",
" "
],
"header": "@@ -53,6 +53,12 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": []
},
{
"added": [
" /**",
" * This integer identifies which transaction the Lob is associated with",
" */",
" private int transactionID_;",
" "
],
"header": "@@ -66,6 +72,11 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": []
},
{
"added": [
" transactionID_ = agent_.connection_.getTransactionID();"
],
"header": "@@ -77,6 +88,7 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": []
},
{
"added": [
" \tif (lengthObtained_) return sqlLength_;"
],
"header": "@@ -98,7 +110,7 @@ public abstract class Lob implements UnitOfWorkListener {",
"removed": [
" if (lengthObtained_) return sqlLength_;"
]
}
]
}
] |
derby-DERBY-3574-b905a171
|
DERBY-3574 With client, attempting to get the lob length after commit or connection close if there was a call to length() before commit does not throw an exception
Add test for connection closed case.
Contributed by Tiago R. Espinha (tiago at espinhas dot net)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@648012 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3580-10cd9408
|
DERBY-3580: Remove unused method Connection.resetConnection(LogWriter, String, Properties).
Patch file: derby-3580-1a-remove-method.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642726 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [],
"header": "@@ -308,37 +308,6 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" protected void resetConnection(LogWriter logWriter,",
" String databaseName,",
" java.util.Properties properties) throws SqlException {",
" // clearWarningsX() will re-initialize the following properties",
" // warnings_, accumulated440ForMessageProcFailure_,",
" // and accumulated444ForMessageProcFailure_",
" clearWarningsX();",
"",
" databaseName_ = databaseName;",
" user_ = ClientDataSource.getUser(properties);",
"",
" retrieveMessageText_ = ClientDataSource.getRetrieveMessageText(properties);",
"",
"",
" // property encryptionManager_",
" // if needed this will later be initialized by NET calls to initializePublicKeyForEncryption()",
" encryptionManager_ = null;",
"",
" // property: open_",
" // this should already be true",
"",
" isolation_ = TRANSACTION_UNKNOWN;",
" currentSchemaName_ = null;",
" autoCommit_ = true;",
" inUnitOfWork_ = false;",
"",
" this.agent_.resetAgent(this, logWriter, loginTimeout_, serverNameIP_, portNumber_);",
"",
" }",
"",
""
]
}
]
}
] |
derby-DERBY-3581-080c38f3
|
DERBY-3581 (partial): Changing certain properties on client DataSource objects causes existing connections to reflect the new values.
First iteration of code removal / refactoring.
Patch file: derby-3581-1a-remove_user_password_iteration1.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@648280 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/ClientPooledConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -69,8 +69,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" private String user_;",
" private String password_;"
]
},
{
"added": [],
"header": "@@ -91,8 +89,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" user_ = user;",
" password_ = password;"
]
},
{
"added": [],
"header": "@@ -146,8 +142,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" user_ = user;",
" password_ = password;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [],
"header": "@@ -271,19 +271,13 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" String user,",
" user_ = (user != null) ? user : user_;",
"",
" user_ = (user != null) ? user : ds.getUser();",
" ;",
""
]
},
{
"added": [
" synchronized public void reset(LogWriter logWriter, ClientBaseDataSource ds, ",
" logWriter.traceConnectResetEntry(this, logWriter, user_, ",
" reset_(logWriter, ds, recomputeFromDataSource);"
],
"header": "@@ -2100,15 +2094,14 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" synchronized public void reset(LogWriter logWriter, String user, ",
" String password, ClientBaseDataSource ds, ",
" logWriter.traceConnectResetEntry(this, logWriter, user, ",
" reset_(logWriter, user, password, ds, recomputeFromDataSource);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [
" initialize(password, dataSource, rmId, isXAConn);"
],
"header": "@@ -232,7 +232,7 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" initialize(user, password, dataSource, rmId, isXAConn);"
]
},
{
"added": [
" initialize(password, dataSource, rmId, isXAConn);",
" private void initialize(String password,"
],
"header": "@@ -283,13 +283,12 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" initialize(user, password, dataSource, rmId, isXAConn);",
" private void initialize(String user,",
" String password,"
]
},
{
"added": [
" super.resetConnection(logWriter, ds, recomputeFromDataSource);"
],
"header": "@@ -308,11 +307,9 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" String user,",
" String password,",
" super.resetConnection(logWriter, user, ds, recomputeFromDataSource);"
]
},
{
"added": [
" securityMechanism_ =",
" ds.getSecurityMechanism(getDeferredResetPassword());",
" boolean isDeferredReset = flowReconnect(getDeferredResetPassword(),",
" securityMechanism_);",
" new ClientMessageId(",
" SQLState.NET_CONNECTION_RESET_NOT_ALLOWED_IN_UNIT_OF_WORK));",
" resetNetConnection(logWriter, ds, recomputeFromDataSource);"
],
"header": "@@ -328,40 +325,30 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" securityMechanism_ = ds.getSecurityMechanism(password);",
" if (password != null) {",
" deferredResetPassword_ = null;",
" } else {",
" password = getDeferredResetPassword();",
" }",
" boolean isDeferredReset = flowReconnect(password, securityMechanism_);",
" String user, String password,",
" checkResetPreconditions(logWriter, user, password, ds);",
" resetNetConnection(logWriter, user, password, ds, recomputeFromDataSource);",
" }",
"",
" protected void checkResetPreconditions(org.apache.derby.client.am.LogWriter logWriter,",
" String user,",
" String password,",
" ClientBaseDataSource ds) throws SqlException {",
" new ClientMessageId(SQLState.NET_CONNECTION_RESET_NOT_ALLOWED_IN_UNIT_OF_WORK));"
]
}
]
}
] |
derby-DERBY-3581-bbc2fd8f
|
DERBY-3581: Changing certain properties on client DataSource objects causes existing connections to reflect the new values.
Removed a boolean flag that was only set to true and used in only one place.
The meaning/use of the flag was also invalid, as new logical connections should not "recompute" their properties from the associated data source.
This patch should not change any behavior, only remove unnecessary code.
Patch file: derby-3581-3b-remove_recomputeFromDataSource_flag.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@667568 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
" protected void resetConnection(LogWriter logWriter)",
" throws SqlException {",
" // property encryptionManager_",
" // if needed this will later be initialized by NET calls to initializePublicKeyForEncryption()",
" encryptionManager_ = null;",
" currentSchemaName_ = null;",
" autoCommit_ = true;",
" inUnitOfWork_ = false;",
" holdability = ResultSet.HOLD_CURSORS_OVER_COMMIT;",
" this.agent_.resetAgent(",
" this, logWriter, loginTimeout_, serverNameIP_, portNumber_);"
],
"header": "@@ -270,30 +270,23 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" protected void resetConnection(LogWriter logWriter,",
" boolean recomputeFromDataSource) throws SqlException {",
" if (recomputeFromDataSource) { // no need to reinitialize connection state if ds hasn't changed",
" // property encryptionManager_",
" // if needed this will later be initialized by NET calls to initializePublicKeyForEncryption()",
" encryptionManager_ = null;",
"",
" // property: open_",
" // this should already be true",
" currentSchemaName_ = null;",
" autoCommit_ = true;",
" inUnitOfWork_ = false;",
" holdability = ResultSet.HOLD_CURSORS_OVER_COMMIT;",
" }",
" ",
" if (recomputeFromDataSource) {",
" this.agent_.resetAgent(this, logWriter, loginTimeout_, serverNameIP_, portNumber_);",
" }"
]
},
{
"added": [
" synchronized public void reset(LogWriter logWriter)",
" throws SqlException {",
" reset_(logWriter);"
],
"header": "@@ -2090,14 +2083,14 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" synchronized public void reset(LogWriter logWriter, ",
" boolean recomputeFromDataSource) throws SqlException {",
" reset_(logWriter, recomputeFromDataSource);"
]
},
{
"added": [
" abstract protected void reset_(LogWriter logWriter) throws SqlException;"
],
"header": "@@ -2114,8 +2107,7 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" abstract protected void reset_(LogWriter logWriter, ",
" boolean recomputerFromDataSource) throws SqlException;"
]
},
{
"added": [
" * @param closeStatementsOnClose is used to differentiate between",
" protected void completeReset(boolean isDeferredReset,",
" boolean closeStatementsOnClose)",
" throws SqlException {"
],
"header": "@@ -2126,13 +2118,15 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" * @param recomputeFromDataSource is now used to differentiate between",
" protected void completeReset(boolean isDeferredReset, boolean recomputeFromDataSource) throws SqlException {"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -34,7 +34,6 @@ import org.apache.derby.shared.common.i18n.MessageUtil;",
"removed": [
"import org.apache.derby.jdbc.ClientDataSource;"
]
},
{
"added": [
" public void resetNetConnection(org.apache.derby.client.am.LogWriter logWriter)",
" throws SqlException {",
" super.resetConnection(logWriter);",
" // do not reset managers on a connection reset. this information shouldn't",
" // change and can be used to check secmec support.",
"",
" targetExtnam_ = null;",
" targetSrvclsnm_ = null;",
" targetSrvnam_ = null;",
" targetSrvrlslv_ = null;",
" publicKey_ = null;",
" targetPublicKey_ = null;",
" sourceSeed_ = null;",
" targetSeed_ = null;",
" targetSecmec_ = 0;",
" resetConnectionAtFirstSql_ = false;",
" completeReset(isDeferredReset);",
" protected void reset_(org.apache.derby.client.am.LogWriter logWriter)",
" throws SqlException {",
" resetNetConnection(logWriter);"
],
"header": "@@ -306,43 +305,40 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" public void resetNetConnection(org.apache.derby.client.am.LogWriter logWriter,",
" boolean recomputeFromDataSource) throws SqlException {",
" super.resetConnection(logWriter, recomputeFromDataSource);",
" if (recomputeFromDataSource) {",
" // do not reset managers on a connection reset. this information shouldn't",
" // change and can be used to check secmec support.",
"",
" targetExtnam_ = null;",
" targetSrvclsnm_ = null;",
" targetSrvnam_ = null;",
" targetSrvrlslv_ = null;",
" publicKey_ = null;",
" targetPublicKey_ = null;",
" sourceSeed_ = null;",
" targetSeed_ = null;",
" targetSecmec_ = 0;",
" resetConnectionAtFirstSql_ = false;",
"",
" }",
" completeReset(isDeferredReset, recomputeFromDataSource);",
" protected void reset_(org.apache.derby.client.am.LogWriter logWriter,",
" boolean recomputeFromDataSource) throws SqlException {",
" resetNetConnection(logWriter, recomputeFromDataSource);"
]
},
{
"added": [
" protected void completeReset(boolean isDeferredReset)",
" throws SqlException {",
" super.completeReset(isDeferredReset, closeStatementsOnClose);"
],
"header": "@@ -363,13 +359,9 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" protected void completeReset(boolean isDeferredReset, boolean recomputeFromDataSource) throws SqlException {",
" // NB! Override the recomputFromDataSource flag.",
" // This was done as a temporary, minimal intrusive fix to support",
" // JDBC statement pooling.",
" // See DERBY-3341 for details.",
" super.completeReset(isDeferredReset,",
" recomputeFromDataSource && closeStatementsOnClose);"
]
}
]
}
] |
derby-DERBY-3581-ea141d7c
|
DERBY-3581 (partial): Changing certain properties on client DataSource objects causes existing connections to reflect the new values.
Removed data source reference from ClientPooledConnection, and also removed usage of "the new datasource reference" (new as in passed in from CPC) in Connection/NetConnection.
Patch file: derby-3581-2a-remove_datasource_iteration1.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@649473 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/ClientPooledConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -67,9 +67,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" // Cached stuff from constructor",
" private ClientBaseDataSource ds_;",
""
]
},
{
"added": [],
"header": "@@ -88,7 +85,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" ds_ = ds;"
]
},
{
"added": [],
"header": "@@ -141,7 +137,6 @@ public class ClientPooledConnection implements javax.sql.PooledConnection {",
"removed": [
" ds_ = ds;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
" if (recomputeFromDataSource) { // no need to reinitialize connection state if ds hasn't changed"
],
"header": "@@ -271,16 +271,12 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" ClientBaseDataSource ds,",
" if (ds != null && recomputeFromDataSource) { // no need to reinitialize connection state if ds hasn't changed",
" retrieveMessageText_ = ds.getRetrieveMessageText();",
"",
""
]
},
{
"added": [],
"header": "@@ -291,10 +287,6 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
"",
" loginTimeout_ = ds.getLoginTimeout();",
" dataSource_ = ds;",
" "
]
},
{
"added": [
" synchronized public void reset(LogWriter logWriter, ",
" dataSource_);",
" reset_(logWriter, recomputeFromDataSource);"
],
"header": "@@ -2094,14 +2086,14 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" synchronized public void reset(LogWriter logWriter, ClientBaseDataSource ds, ",
" (ds != null) ? ds : dataSource_);",
" reset_(logWriter, ds, recomputeFromDataSource);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [
" super.resetConnection(logWriter, recomputeFromDataSource);"
],
"header": "@@ -307,9 +307,8 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" org.apache.derby.jdbc.ClientBaseDataSource ds,",
" super.resetConnection(logWriter, ds, recomputeFromDataSource);"
]
},
{
"added": [],
"header": "@@ -324,10 +323,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" if (ds != null && securityMechanism_ == 0) {",
" securityMechanism_ =",
" ds.getSecurityMechanism(getDeferredResetPassword());",
" }"
]
},
{
"added": [
" resetNetConnection(logWriter, recomputeFromDataSource);"
],
"header": "@@ -341,14 +336,13 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" ClientBaseDataSource ds,",
" resetNetConnection(logWriter, ds, recomputeFromDataSource);"
]
}
]
}
] |
derby-DERBY-3582-4848ae56
|
DERBY-3582: IndexOutOfBoundsError in ClockPolicy.moveHand
Make sure moveHand() and rotateClock() work if the clock is empty.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@643870 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" * @return the holder under the clock hand, or {@code null} if the clock is",
" * empty",
" if (clock.isEmpty()) {",
" return null;",
" }"
],
"header": "@@ -354,10 +354,14 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * @return the holder under the clock hand"
]
},
{
"added": [
"",
" if (h == null) {",
" // There are no elements in the clock, hence there is no",
" // reusable entry.",
" return null;",
" }",
""
],
"header": "@@ -402,6 +406,13 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
}
]
}
] |
derby-DERBY-3586-f5e51e93
|
DERBY-3586: Remove am.Connection.reset(LogWriter,ClientBaseDataSource,boolean) and called methods.
Removal of unused code in the client driver.
Patch file: derby-3586-1a-connection_reset3_removal.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@643789 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [],
"header": "@@ -2098,21 +2098,6 @@ public abstract class Connection implements java.sql.Connection,",
"removed": [
" synchronized public void reset(LogWriter logWriter, ClientBaseDataSource ds, ",
" boolean recomputeFromDataSource) throws SqlException {",
" if (logWriter != null) {",
" logWriter.traceConnectResetEntry(this, logWriter, null, (ds != null) ? ds : dataSource_);",
" }",
" try {",
" reset_(logWriter, ds, recomputeFromDataSource);",
" } catch (SqlException sqle) {",
" DisconnectException de = new DisconnectException(agent_, ",
" new ClientMessageId(SQLState.CONNECTION_FAILED_ON_RESET));",
" de.setNextException(sqle);",
" throw de;",
" }",
" }",
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -354,43 +354,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" protected void reset_(org.apache.derby.client.am.LogWriter logWriter,",
" ClientBaseDataSource ds,",
" boolean recomputeFromDataSource) throws SqlException {",
" checkResetPreconditions(logWriter, null, null, ds);",
" resetNetConnection(logWriter, ds, recomputeFromDataSource);",
" }",
"",
" private void resetNetConnection(org.apache.derby.client.am.LogWriter logWriter,",
" org.apache.derby.jdbc.ClientBaseDataSource ds,",
" boolean recomputeFromDataSource) throws SqlException {",
" super.resetConnection(logWriter, null, ds, recomputeFromDataSource);",
" //----------------------------------------------------",
" if (recomputeFromDataSource) {",
" // do not reset managers on a connection reset. this information shouldn't",
" // change and can be used to check secmec support.",
"",
" targetExtnam_ = null;",
" targetSrvclsnm_ = null;",
" targetSrvnam_ = null;",
" targetSrvrlslv_ = null;",
" publicKey_ = null;",
" targetPublicKey_ = null;",
" sourceSeed_ = null;",
" targetSeed_ = null;",
" targetSecmec_ = 0;",
" if (ds != null && securityMechanism_ == 0) {",
" securityMechanism_ = ds.getSecurityMechanism();",
" }",
" resetConnectionAtFirstSql_ = false;",
" }",
" // properties prddta_ and crrtkn_ will be initialized by",
" // calls to constructPrddta() and constructCrrtkn()",
" //----------------------------------------------------------",
" boolean isDeferredReset = flowReconnect(null, securityMechanism_);",
" completeReset(isDeferredReset, recomputeFromDataSource);",
" }",
""
]
}
]
}
] |
derby-DERBY-3589-1dbc0b85
|
DERBY-3589: AllocPage.createPage() doesn't initialize minimumRecordSize correctly
Use an object with named fields (PageCreationArgs) instead of an array
to pass in arguments when creating a new page. This resolves the
problem with StoredPage and AllocPage expecting the same field to be
found on different positions in the array.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@644620 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/AllocPage.java",
"hunks": [
{
"added": [
"\tprotected void createPage(PageKey newIdentity, PageCreationArgs args)",
"\t\tborrowedSpace = args.containerInfoSize;"
],
"header": "@@ -270,22 +270,13 @@ public class AllocPage extends StoredPage",
"removed": [
"\tprotected void createPage(PageKey newIdentity, int[] args) ",
"\t\t// args[0] is the format id",
"\t\t// args[1] is whether to sync the page to disk or not",
"\t\t// args[2] is the pagesize (used by StoredPage)",
"\t\t// args[3] is the spareSize (used by StoredPage)",
"\t\t// args[4] is the number of bytes to reserve for container header",
"\t\t// args[5] is the minimumRecordSize",
"\t\t// NOTE: the arg list here must match the one in FileContainer",
"\t\tint pageSize = args[2];",
"\t\tint minimumRecordSize = args[5];",
"\t\tborrowedSpace = args[4];"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/CachedPage.java",
"hunks": [
{
"added": [
"\t\tPageCreationArgs createArgs = (PageCreationArgs) createParameter;",
" int formatId = createArgs.formatId;",
"\t\tif (formatId == -1)"
],
"header": "@@ -256,9 +256,10 @@ public abstract class CachedPage extends BasePage implements Cacheable",
"removed": [
"\t\tint[] createArgs = (int[]) createParameter;",
"\t\tif (createArgs[0] == -1)"
]
},
{
"added": [
"\t\tif (formatId != getTypeFormatId())",
" changeInstanceTo(formatId, newIdentity).createIdentity("
],
"header": "@@ -267,10 +268,10 @@ public abstract class CachedPage extends BasePage implements Cacheable",
"removed": [
"\t\tif (createArgs[0] != getTypeFormatId())",
" changeInstanceTo(createArgs[0], newIdentity).createIdentity("
]
},
{
"added": [
" int syncFlag = createArgs.syncFlag;",
"\t\tif ((syncFlag & WRITE_SYNC) != 0 ||",
"\t\t\t(syncFlag & WRITE_NO_SYNC) != 0)",
"\t\t\twritePage(newIdentity, (syncFlag & WRITE_SYNC) != 0);",
"\t\t\t\tString sync =",
" ((syncFlag & WRITE_SYNC) != 0) ? \"Write_Sync\" :",
"\t\t\t\t\t(((syncFlag & WRITE_NO_SYNC) != 0) ? \"Write_NO_Sync\" :",
" \"creating new page \" + newIdentity + \" with \" + sync);"
],
"header": "@@ -296,22 +297,23 @@ public abstract class CachedPage extends BasePage implements Cacheable",
"removed": [
"\t\tif ((createArgs[1] & WRITE_SYNC) != 0 ||",
"\t\t\t(createArgs[1] & WRITE_NO_SYNC) != 0)",
"\t\t\twritePage(newIdentity, (createArgs[1] & WRITE_SYNC) != 0);",
"\t\t\t\tString syncFlag = ",
" ((createArgs[1] & WRITE_SYNC) != 0) ? \"Write_Sync\" :",
"\t\t\t\t\t(((createArgs[1] & WRITE_NO_SYNC) != 0) ? \"Write_NO_Sync\" : ",
" \"creating new page \" + newIdentity + \" with \" + syncFlag);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/FileContainer.java",
"hunks": [
{
"added": [],
"header": "@@ -289,12 +289,6 @@ abstract class FileContainer",
"removed": [
"\t/**",
"\t\tthe number of arguments we need to pass to alloc page for create",
"\t*/",
"\tprotected static final int STORED_PAGE_ARG_NUM = 5;",
"\tprotected static final int ALLOC_PAGE_ARG_NUM = 6;",
""
]
},
{
"added": [
" PageCreationArgs createPageArgs = new PageCreationArgs(",
" StoredPage.FORMAT_NUMBER,",
" prealloced ? 0 : (noIO ? 0 : CachedPage.WRITE_SYNC),",
" pageSize,",
" spareSpace,",
" minimumRecordSize,",
" 0 /* containerInfoSize - unused for StoredPage */);"
],
"header": "@@ -1768,13 +1762,13 @@ abstract class FileContainer",
"removed": [
"\t\t\t int[] createPageArgs = new int[STORED_PAGE_ARG_NUM];",
"\t\t\t createPageArgs[0] = StoredPage.FORMAT_NUMBER;",
"\t\t\t createPageArgs[1] = prealloced ? ",
" 0 : (noIO ? 0 : CachedPage.WRITE_SYNC);",
"\t\t\t createPageArgs[2] = pageSize;",
"\t\t\t createPageArgs[3] = spareSpace;",
"\t\t\t createPageArgs[4] = minimumRecordSize;"
]
},
{
"added": [
" \"\\nsyncFlag = \" + createPageArgs.syncFlag +"
],
"header": "@@ -1797,7 +1791,7 @@ abstract class FileContainer",
"removed": [
" \"\\ncreatePageArgs[1] = \" + createPageArgs[1] +"
]
},
{
"added": [
"\t\tPageCreationArgs createAllocPageArgs = new PageCreationArgs(",
" AllocPage.FORMAT_NUMBER,",
" noIO ? 0 : CachedPage.WRITE_SYNC,",
" pageSize,",
" 0, // allocation page has no need for spare",
" minimumRecordSize,",
" containerInfoSize);"
],
"header": "@@ -2223,13 +2217,13 @@ abstract class FileContainer",
"removed": [
"\t\tint[] createAllocPageArgs = new int[ALLOC_PAGE_ARG_NUM];",
"\t\tcreateAllocPageArgs[0] = AllocPage.FORMAT_NUMBER;\t",
"\t\tcreateAllocPageArgs[1] = noIO ? 0 : CachedPage.WRITE_SYNC;",
"\t\tcreateAllocPageArgs[2] = pageSize;",
"\t\tcreateAllocPageArgs[3] = 0;\t\t// allocation page has no need for spare",
"\t\tcreateAllocPageArgs[4] = containerInfoSize;",
"\t\tcreateAllocPageArgs[5] = minimumRecordSize;"
]
},
{
"added": [
"\t\t@param createArgs the arguments for page creation"
],
"header": "@@ -2271,7 +2265,7 @@ abstract class FileContainer",
"removed": [
"\t\t@param createArgs the int array for page creation"
]
},
{
"added": [
"\t\t\t\t\t\t\t\tPageCreationArgs createArgs,"
],
"header": "@@ -2279,7 +2273,7 @@ abstract class FileContainer",
"removed": [
"\t\t\t\t\t\t\t\tint[] createArgs,"
]
},
{
"added": [
"\t\tPageCreationArgs reCreatePageArgs;",
" reCreatePageArgs = new PageCreationArgs(",
" pageFormat,",
" CachedPage.WRITE_SYNC,",
" pageSize,",
" spareSpace,",
" minimumRecordSize,",
" 0 /* containerInfoSize - unused for StoredPage */);"
],
"header": "@@ -2553,20 +2547,20 @@ abstract class FileContainer",
"removed": [
"\t\tint[] reCreatePageArgs = null;",
"\t\t\treCreatePageArgs = new int[STORED_PAGE_ARG_NUM];",
"\t\t\treCreatePageArgs[0] = pageFormat;",
"\t\t\treCreatePageArgs[1] = CachedPage.WRITE_SYNC;",
"\t\t\treCreatePageArgs[2] = pageSize;",
"\t\t\treCreatePageArgs[3] = spareSpace;",
"\t\t\treCreatePageArgs[4] = minimumRecordSize;",
"\t\t\treCreatePageArgs = new int[ALLOC_PAGE_ARG_NUM];"
]
},
{
"added": [
" reCreatePageArgs = new PageCreationArgs(",
" pageFormat,",
" CachedPage.WRITE_SYNC,",
" pageSize,",
" 0, // allocation page has no need for spare",
" minimumRecordSize,",
" containerInfoSize);"
],
"header": "@@ -2579,13 +2573,14 @@ abstract class FileContainer",
"removed": [
"\t\t\treCreatePageArgs[0] = pageFormat;",
"\t\t\treCreatePageArgs[1] = CachedPage.WRITE_SYNC;",
"\t\t\treCreatePageArgs[2] = pageSize;",
"\t\t\treCreatePageArgs[3] = 0; // allocation page has no need for spare",
"\t\t\treCreatePageArgs[4] = containerInfoSize;",
"\t\t\treCreatePageArgs[5] = minimumRecordSize;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" * container header and passed in through the object.",
"\tprotected void createPage(PageKey newIdentity, PageCreationArgs args)",
"\t\tspareSpace = args.spareSpace;",
"\t\tminimumRecordSize = args.minimumRecordSize;",
" setPageArray(args.pageSize);"
],
"header": "@@ -746,23 +746,18 @@ public class StoredPage extends CachedPage",
"removed": [
" * container header and passed in through the array.",
"\tprotected void createPage(",
" PageKey newIdentity, ",
" int[] args) ",
"\t\t// arg[0] is the formatId of the page",
"\t\t// arg[1] is whether to sync the page to disk or not",
"\t\tint pageSize = args[2];",
"\t\tspareSpace = args[3];",
"\t\tminimumRecordSize = args[4];",
" setPageArray(pageSize);"
]
}
]
}
] |
derby-DERBY-3595-51345f15
|
DERBY-3595: Make stack checking smarter in TableFunctionTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@652092 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3596-98ead91e
|
DERBY-3596: Creation of logical connections from a pooled connection causes resource leak on the server.
Exposed method 'resetFromPool' through EngineConnection.
The network server now detects when a client is requesting new logical connections.
This triggers some special logic, where the physical connection on the server side
is kept and reset instead of being closed and opened again (this caused resources to leak earlier).
The special logic must *not* be triggered for XA connections, as the XA code is
already well-behaved.
Patch file: derby-3596-5a-complex_skip_creds.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@666088 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
" /**",
" * Tells if the reset / connect request is a deferred request.",
" * This information is used to work around a bug (DERBY-3596) in a",
" * compatible manner, which also avoids any changes in the client driver.",
" * <p>",
" * The bug manifests itself when a connection pool data source is used and",
" * logical connections are obtained from the physical connection associated",
" * with the data source. Each new logical connection causes a new physical",
" * connection on the server, including a new transaction. These connections",
" * and transactions are not closed / cleaned up.",
" */",
" private boolean deferredReset = false;"
],
"header": "@@ -205,6 +205,18 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" // DERBY-3596",
" // Don't perform this assert if a deferred reset is",
" // happening or has recently taken place, because the",
" // connection state has been changed under the feet of the",
" // piggy-backing mechanism.",
" if (!this.deferredReset && pbsd != null) {"
],
"header": "@@ -1026,7 +1038,12 @@ class DRDAConnThread extends Thread {",
"removed": [
" if (pbsd != null) {"
]
},
{
"added": [
" this.deferredReset = false; // Always reset, only set to true below.",
" // DERBY-3596",
" // Don't mess with XA requests, as the logic for these are handled",
" // by the server side (embedded) objects. Note that XA requests",
" // results in a different database object implementation, and it",
" // does not have the bug we are working around.",
" if (!appRequester.isXARequester()) {",
" this.deferredReset = true; // Non-XA deferred reset detected.",
" }"
],
"header": "@@ -1470,8 +1487,17 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" // DERBY-3596",
" // If we are reusing resources for a new physical",
" // connection, reset the database object. If the client",
" // is in the process of creating a new logical",
" // connection only, don't reset the database object.",
" if (!deferredReset) {",
" d.reset();",
" }",
" database = d;"
],
"header": "@@ -1890,8 +1916,15 @@ class DRDAConnThread extends Thread {",
"removed": [
" d.reset();",
"\t\t\t\t\t\tdatabase = d;"
]
},
{
"added": [
" // DERBY-3596",
" // Reset the flag. In sane builds it is used to avoid an assert, but",
" // we want to reset it as soon as possible to avoid masking real bugs.",
" // We have to do this because we are changing the connection state",
" // at an unexpected time (deferred reset, see parseSECCHK). This was",
" // done to avoid having to change the client code.",
" this.deferredReset = false;"
],
"header": "@@ -2665,7 +2698,13 @@ class DRDAConnThread extends Thread {",
"removed": [
""
]
},
{
"added": [
" if (this.deferredReset) {",
" // Skip the SECCHK, but assure a minimal degree of correctness.",
" while (codePoint != -1) {",
" switch (codePoint) {",
" // Note the fall-through.",
" // Minimal level of checking to detect protocol errors.",
" // NOTE: SECMGR level 8 code points are not handled.",
" case CodePoint.SECMGRNM:",
" case CodePoint.SECMEC:",
" case CodePoint.SECTKN:",
" case CodePoint.PASSWORD:",
" case CodePoint.NEWPASSWORD:",
" case CodePoint.USRID:",
" case CodePoint.RDBNAM:",
" reader.skipBytes();",
" break;",
" default:",
" invalidCodePoint(codePoint);",
" }",
" codePoint = reader.getCodePoint();",
" }",
" } else {"
],
"header": "@@ -2938,6 +2977,28 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" } // End \"if (deferredReset) ... else ...\" block",
" {",
" // DERBY-3596: Reset server side (embedded) physical connection for",
" // use with a new logical connection on the client.",
" if (this.deferredReset) {",
" // Reset the existing connection here.",
" try {",
" database.getConnection().resetFromPool();",
" database.getConnection().setHoldability(",
" ResultSet.HOLD_CURSORS_OVER_COMMIT);",
" // Reset isolation level to default, as the client is in",
" // the process of creating a new logical connection.",
" database.getConnection().setTransactionIsolation(",
" Connection.TRANSACTION_READ_COMMITTED);",
" } catch (SQLException sqle) {",
" handleException(sqle);",
" }",
" } else {",
" securityCheckCode = verifyUserIdPassword();",
" }"
],
"header": "@@ -3110,11 +3171,29 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t{",
"\t\t\tsecurityCheckCode = verifyUserIdPassword();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/EngineConnection.java",
"hunks": [
{
"added": [
"",
" /**",
" * Resets the connection before it is returned from a PooledConnection",
" * to a new application request (wrapped by a BrokeredConnection).",
" * <p>",
" * Note that resetting the transaction isolation level is not performed as",
" * part of this method. Temporary tables, IDENTITY_VAL_LOCAL and current",
" * schema are reset.",
" */",
" public void resetFromPool() throws SQLException;"
],
"header": "@@ -100,4 +100,14 @@ public interface EngineConnection extends Connection {",
"removed": []
}
]
}
] |
derby-DERBY-3596-f2a8f001
|
DERBY-3596: Test cleanup only.
Renamed test (added the word 'Physical'), added a missing fail(), changed some comments and added a constant for a SQL statement used multiple times.
Patch file: derby-3596-3a-test_cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@660165 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3597-c6564415
|
DERBY-3597 Incorporate DERBY-3310 and DERBY-3494 write-ups into NormalizeResultSetNode code comments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@645638 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/NormalizeResultSetNode.java",
"hunks": [
{
"added": [
" *",
" * child result set that needs one. See non-javadoc comments for ",
" * a walk-through of a couple sample code paths.",
" */",
"",
" /*",
" * Below are a couple of sample code paths for NormlizeResultSetNodes.",
" * These samples were derived from Army Brown's write-ups attached to DERBY-3310",
" * and DERBY-3494. The text was changed to include the new code path now that ",
" * all of the NormalizeResultSetNode code has been moved into the init() method.",
" * There are two sections of code in NormalizeResultSetNode.init() that are relevant:",
" * First the code to generate the new node based on the child result set. ",
" * We will call this \"normalize node creation\".",
" * ",
" * ResultSetNode rsn = (ResultSetNode) childResult;",
" * ResultColumnList rcl = rsn.getResultColumns();",
" * ResultColumnList targetRCL = (ResultColumnList) targetResultColumnList;",
" * ...",
" * ResultColumnList prRCList = rcl;",
" * rsn.setResultColumns(rcl.copyListAndObjects());",
" * ...",
" * this.resultColumns = prRCList;",
" *",
" * Next the code to adjust the types for the NormalizeResultSetNode. ",
" * We will call this \"type adjustment\"",
" * ",
" * if (targetResultColumnList != null) {",
" * int size = Math.min(targetRCL.size(), resultColumns.size());",
" * for (int index = 0; index < size; index++) {",
" * ResultColumn sourceRC = (ResultColumn) resultColumns.elementAt(index);",
" * ResultColumn resultColumn = (ResultColumn) targetRCL.elementAt(index);",
" * sourceRC.setType(resultColumn.getTypeServices());",
" * } ",
" * ",
" * --- Sample 1 : Type conversion from Decimal to BigInt on insert --- ",
" * (DERBY-3310 write-up variation) ",
" * The SQL statement on which this sample focuses is:",
" * ",
" * create table d3310 (x bigint);",
" * insert into d3310 select distinct * from (values 2.0, 2.1, 2.2) v; ",
" * ",
" * There are three compilation points of interest for this discussion:",
" * 1. Before the \"normalize node creation\"",
" * 2. Before the \"type adjustment\"",
" * 3. After the \"type adjustment\"",
" * ",
" * Upon completion of the \"type adjustment\", the compilation query ",
" * tree is then manipulated during optimization and code generation, the ",
" * latter of which ultimately determines how the execution-time ResultSet ",
" * tree is going to look.\\u00a0 So for this discussion we walk through the query",
" * tree as it exists at the various points of interest just described.",
" * ",
" * 1) To start, the (simplified) query tree that we have looks something like the following:",
" * ",
" * InsertNode",
" * (RCL_0:ResultColumn_0<BigInt>)",
" * |",
" * SelectNode",
" * (RCL_1:ResultColumn_1<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL_2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL_3:ResultColumn_3<Decimal>)",
" * ",
" * Notation: In the above tree, node names with \"_x\" trailing them are used to",
" * distinguish Java Objects from each other. So if ResultColumn_0 shows up ",
" * more than once, then it is the *same* Java object showing up in different ",
" * parts of the query tree. Type names in angle brackets, such as \"<BigInt>\",",
" * describe the type of the entity immediately preceding the brackets. ",
" * So a line of the form:",
" * ",
" * RCL_0:ResultColumn_0<BigInt>",
" * ",
" * describes a ResultColumnList object containing one ResultColumn object ",
" * whose type is BIGINT. We can see from the above tree that, before ",
" * normalize node creation, the top of the compile tree contains an ",
" * InsertNode, a SelectNode, a FromSubquery, and a UnionNode, all of ",
" * which have different ResultColumnList objects and different ResultColumn ",
" * objects within those lists.",
" * ",
" * 2) After the normalize node creation",
" * The childresult passed to the init method of NormalizeResultSetNode is ",
" * the InsertNode's child, so it ends up creating a new NormalizeResultSetNode ",
" * and putting that node on top of the InsertNode's child--that is, on top of ",
" * the SelectNode.",
" *",
" * At this point it's worth noting that a NormalizeResultSetNode operates ",
" * based on two ResultColumnLists: a) its own (call it NRSN_RCL), and b) ",
" * the ResultColumnList of its child (call it NRSN_CHILD_RCL). More ",
" * specifically, during execution a NormalizeResultSet will take a row ",
" * whose column types match the types of NRSN_CHILD_RCL, and it will ",
" * \"normalize\" the values from that row so that they agree with the ",
" * types of NRSN_RCL. Thus is it possible--and in fact, it should generally ",
" * be the case--that the types of the columns in the NormalizeResultSetNode's ",
" * own ResultColumnList are *different* from the types of the columns in ",
" * its child's ResultColumnList. That should not be the case for most ",
" * (any?) other Derby result set.",
" * ",
" * So we now have:",
" *",
" * InsertNode",
" * (RCL_0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL_1:ResultColumn_1<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL_4:ResultColumn_4<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL_2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL_3:ResultColumn_3<Decimal>)",
" *",
" * Notice how, when we generate the NormalizeResultSetNode, three things happen:",
" * ",
" * a) The ResultColumList object for the SelectNode is \"pulled up\" into the ",
" * NormalizeResultSetNode.",
" * b) SelectNode is given a new ResultColumnList--namely, a clone of its old",
" * ResultColumnList, including clones of the ResultColumn objects.",
" * c) VirtualColumnNodes are generated beneath NormalizeResultSetNode's ",
" * ResultColumns, and those VCNs point to the *SAME RESULT COLUMN OBJECTS* ",
" * that now sit in the SelectNode's new ResultColumnList. ",
" * Also note how the generated VirtualColumnNode has no type of its own; ",
" * since it is an instance of ValueNode it does have a dataTypeServices ",
" * field, but that field was not set when the NormalizeResultSetNode was ",
" * created. Hence \"<no_type>\" in the above tree.",
" * ",
" * And finally, note that at this point, NormalizeResultSetNode's ",
" * ResultColumnList has the same types as its child's ResultColumnList",
" * --so the NormalizeResultSetNode doesn't actually do anything ",
" * in its current form.",
" * ",
" * 3) Within the \"type adjustment\"",
" * ",
" * The purpose of the \"type adjustment\" is to take the types from ",
" * the InsertNode's ResultColumnList and \"push\" them down to the ",
" * NormalizeResultSetNode. It is this method which sets NRSN_RCL's types ",
" * to match the target (INSERT) table's types--and in doing so, makes them ",
" * different from NRSN_CHILD_RCL's types. Thus this is important because ",
" * without it, NormalizeResultSetNode would never change the types of the ",
" * values it receives.",
" * ",
" * That said, after the call to sourceRC.setType(...) we have:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode_0<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL4:ResultColumn_4<Decimal>)",
" * |",
" * FromSubquery",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * The key change here is that ResultColumn_1 now has a type of BigInt ",
" * intead of Decimal. Since the SelectNode's ResultColumn, ResultColumn_4,",
" * still has a type of Decimal, the NormalizeResulSetNode will take as input",
" * a Decimal value (from SelectNode) and will output that value as a BigInt, ",
" * where output means pass the value further up the tree during execution ",
" * (see below).",
" * ",
" * Note before the fix for DERBY-3310, there was an additional type change ",
" * that caused problems with this case. ",
" * See the writeup attached to DERBY-3310 for details on why this was a problem. ",
" * ",
" * 4) After preprocessing and optimization:",
" * ",
" * After step 3 above, Derby will move on to the optimization phase, which ",
" * begins with preprocessing. During preprocessing the nodes in the tree ",
" * may change shape/content to reflect the needs of the optimizer and/or to ",
" * perform static optimizations/rewrites. In the case of our INSERT statement ",
" * the preprocessing does not change much:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * SelectNode",
" * (RCL4:ResultColumn_4<Decimal>)",
" * |",
" * ProjectRestrictNode_0",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * The only thing that has changed between this tree and the one shown in ",
" * step 3 is that the FromSubquery has been replaced with a ProjectRestrictNode.",
" * Note that the ProjectRestrictNode has the same ResultColumnList object as ",
" * the FromSubquery, and the same ResultColumn object as well. That's worth ",
" * noting because it's another example of how Java objects can be \"moved\" ",
" * from one node to another during Derby compilation.",
" * ",
" * 5) After modification of access paths:",
" * As the final stage of optimization Derby will go through the modification ",
" * of access paths phase, in which the query tree is modified to prepare for ",
" * code generation. When we are done modifying access paths, our tree looks ",
" * something like this:",
"",
" InsertNode",
" (RCL0:ResultColumn_0<BigInt>)",
" |",
" NormalizeResultSetNode",
" (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" |",
" DistinctNode",
" (RCL4:ResultColumn_4<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_5<Decimal>)",
" |",
" ProjectRestrictNode_1",
" (RCL5:ResultColumn_5<Decimal>)",
" |",
" ProjectRestrictNode_0",
" (RCL2:ResultColumn_2<Decimal>)",
" |",
" UnionNode",
" (RCL3:ResultColumn_3<Decimal>)",
"",
" * The key thing to note here is that the SelectNode has been replaced with two ",
" * new nodes: a ProjectRestrictNode whose ResultColumnList is a clone of the ",
" * SelectNode's ResultColumnList, and a DistinctNode, whose ResultColumnList ",
" * is the same object as the SelectNode's old ResultColumnList. More ",
" * specifically, all of the following occurred as part of modification of ",
" * access paths:",
" * ",
" * a) The SelectNode was replaced with ProjectRestrictNode_1, whose ",
" * ResultColumnList was the same object as the SelectNode's ResultColumnList.",
" *",
" * b) the ResultColumList object for ProjectRestrictNode_1 was pulled up ",
" * into a new DistinctNode.",
" *",
" * c) ProjectRestrictNode_1 was given a new ResultColumnList--namely, a ",
" * clone of its old ResultColumnList, including clones of the ResultColumn ",
" * objects.",
" * ",
" * d) VirtualColumnNodes were generated beneath the DistinctNode's ",
" * ResultColumns, and those VCNs point to the same result column objects ",
" * that now sit in ProjectRestrictNode_1's new ResultColumnList.",
" * ",
" * 6) After code generation:",
" *",
" * During code generation we will walk the compile-time query tree one final ",
" * time and, in doing so, we will generate code to build the execution-time ",
" * ResultSet tree. As part of that process the two ProjectRestrictNodes will ",
" * be skipped because they are both considered no-ops--i.e. they perform ",
" * neither projections nor restrictions, and hence are not needed. ",
" * (Note that, when checking to see if a ProjectRestrictNode is a no-op, ",
" * column types do *NOT* come into play.)",
" *",
" * Thus the execution tree that we generate ends up looking something like:",
" *",
" * InsertNode",
" * (RCL0:ResultColumn_0<BigInt>)",
" * |",
" * NormalizeResultSetNode",
" * (RCL1:ResultColumn_1<BigInt> -> VirtualColumnNode<no_type> -> ResultColumn_4<Decimal>)",
" * |",
" * DistinctNode",
" * (RCL4:ResultColumn_4<Decimal> -> VirtualColumnNode<no_type> -> ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_1",
" * (RCL5:ResultColumn_5<Decimal>)",
" * |",
" * ProjectRestrictNode_0",
" * (RCL2:ResultColumn_2<Decimal>)",
" * |",
" * UnionNode",
" * (RCL3:ResultColumn_3<Decimal>)",
" *",
" * At code generation the ProjectRestrictNodes will again be removed and the ",
" * execution tree will end up looking like this:",
" * ",
" * InsertResultSet",
" * (BigInt)",
" * |",
" * NormalizeResultSet",
" * (BigInt)",
" * |",
" * SortResultSet",
" * (Decimal)",
" * |",
" * UnionResultSet",
" * (Decimal)",
" *",
" * where SortResultSet is generated to enforce the DistinctNode, ",
" * and thus expects the DistinctNode's column type--i.e. Decimal.",
" * ",
" * When it comes time to execute the INSERT statement, then, the UnionResultSet ",
" * will create a row having a column whose type is DECIMAL, i.e. an SQLDecimal ",
" * value. The UnionResultSet will then pass that up to the SortResultSet, ",
" * who is *also* expecting an SQLDecimal value. So the SortResultSet is ",
" * satisfied and can sort all of the rows from the UnionResultSet. ",
" * Then those rows are passed up the tree to the NormalizeResultSet, ",
" * which takes the DECIMAL value from its child (SortResultSet) and normalizes ",
" * it to a value having its own type--i.e. to a BIGINT. The BIGINT is then ",
" * passed up to InsertResultSet, which inserts it into the BIGINT column ",
" * of the target table. And so the INSERT statement succeeds.",
" * ",
" * ---- Sample 2 - NormalizeResultSetNode and Union (DERBY-3494 write-up variation)",
" * Query for discussion",
" * ",
" *",
" * create table t1 (bi bigint, i int);",
" * insert into t1 values (100, 10), (288, 28), (4820, 2);",
" *",
" * select * from",
" * (select bi, i from t1 union select i, bi from t1) u(a,b) where a > 28;",
" *",
" *",
" * Some things to notice about this query:",
" * a) The UNION is part of a subquery.",
" * b) This is *not* a UNION ALL; i.e. we need to eliminate duplicate rows.",
" * c) The left side of the UNION and the right side of the UNION have ",
" * different (but compatible) types: the left has (BIGINT, INT), while the ",
" * right has (INT, BIGINT).",
" * d) There is a predicate in the WHERE clause which references a column ",
" * from the UNION subquery.",
" * e) The table T1 has at least one row.",
" * All of these factors plays a role in the handling of the query and are ",
" * relevant to this discussion.",
" * ",
" * Building the NormalizeResultSetNode. ",
" * When compiling a query, the final stage of optimization in Derby is the ",
" * \"modification of access paths\" phase, in which each node in the query ",
" * tree is given a chance to modify or otherwise perform maintenance in ",
" * preparation for code generation. In the case of a UnionNode, a call ",
" * to modifyAccessPaths() will bring us to the addNewNodes() method, ",
" * which is where the call is made to generate the NormalizeResultSetNode.",
" * ",
" *",
" * if (! columnTypesAndLengthsMatch())",
" * {",
" * treeTop = ",
" * (NormalizeResultSetNode) getNodeFactory().getNode(",
" * C_NodeTypes.NORMALIZE_RESULT_SET_NODE,",
" * treeTop, null, null, Boolean.FALSE,",
" * getContextManager()); ",
" * }",
" *",
" * The fact that the left and right children of the UnionNode have different ",
" * types (observation c above) means that the if condition will return ",
" * true and thus we will generate a NormalizeResultSetNode above the ",
" * UnionNode. At this point (before the NormalizeResultSetNode has been ",
" * generated) our (simplified) query tree looks something like the following.",
" * PRN stands for ProjectRestrictNode, RCL stands for ResultColumnList:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * UnionNode // <-- Modifying access paths...",
" * (RCL1)",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" *",
" * where 'a > 28 {RCL1}' means that the column reference A in the predicate a > 28 points to a ResultColumn object in the ResultColumnList that corresponds to \"RCL1\". I.e. at this point, the predicate's column reference is pointing to an object in the UnionNode's RCL.",
" * \"normalize node creation\" will execute:",
" *",
" * ResultColumnList prRCList = rcl;",
" * rsn.setResultColumns(rcl.copyListAndObjects());",
" * // Remove any columns that were generated.",
" * prRCList.removeGeneratedGroupingColumns();",
" * ...",
" * prRCList.genVirtualColumnNodes(rsn, rsn.getResultColumns());",
" * ",
" * this.resultColumns = prRCList;",
" * ",
" * to create a NormalizeResultSetNode whose result column list is prRCList. ",
" * This gives us:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * NormalizeResultSetNode",
" * (RCL1) // RCL1 \"pulled up\" to NRSN",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) *copy* of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * Note how RCL1, the ResultColumnList object for the UnionNode, has now been ",
" * *MOVED* so that it belongs to the NormalizeResultSetNode. So the predicate ",
" * a > 28, which (still) points to RCL1, is now pointing to the ",
" * NormalizeResultSetNode instead of to the UnionNode.",
" * ",
" * After this, we go back to UnionNode.addNewNodes() where we see the following:",
" * ",
" *",
" * treeTop = (ResultSetNode) getNodeFactory().getNode(",
" * C_NodeTypes.DISTINCT_NODE,",
" * treeTop.genProjectRestrict(),",
" * Boolean.FALSE,",
" * tableProperties,",
" * getContextManager());",
" *",
" *",
" * I.e. we have to generate a DistinctNode to eliminate duplicates because the query ",
" * specified UNION, not UNION ALL.",
" * ",
" * Note the call to treeTop.genProjectRestrict(). Since NormalizeResultSetNode ",
" * now sits on top of the UnionNode, treeTop is a reference to the ",
" * NormalizeResultSetNode. That means we end up at the genProjectRestrict() ",
" * method of NormalizeResultSetNode. And guess what? The method does ",
" * something very similar to what we did in NormalizeResultSetNode.init(), ",
" * namely:",
" *",
" * ResultColumnList prRCList = resultColumns;",
" * resultColumns = resultColumns.copyListAndObjects();",
" *",
" * and then creates a ProjectRestrictNode whose result column list is prRCList. This gives us:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * PRN6",
" * (RCL1) // RCL1 \"pulled up\" to new PRN.",
" * |",
" * NormalizeResultSetNode",
" * (RCL3) // RCL3 is a (modified) copy of RCL1",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) copy of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * On top of that we then put a DistinctNode. And since the init() method ",
" * of DistinctNode does the same kind of thing as the previously-discussed ",
" * methods, we ultimatley end up with:",
" *",
" * PRN0",
" * (RCL0)",
" * (restriction: a > 28 {RCL1})",
" * |",
" * DistinctNode",
" * (RCL1) // RCL1 pulled up to DistinctNode",
" * |",
" * PRN6",
" * (RCL4) // RCL4 is a (modified) copy of RCL1",
" * |",
" * NormalizeResultSetNode",
" * (RCL3) // RCL3 is a (modified) copy of RCL1",
" * |",
" * UnionNode",
" * (RCL2) // RCL2 is a (modified) copy of RCL1",
" * / \\",
" * PRN2 PRN3",
" * | |",
" * PRN4 PRN5",
" * | |",
" * T1 T1",
" *",
" * And thus the predicate a > 28, which (still) points to RCL1, is now ",
" * pointing to the DistinctNode instead of to the UnionNode. And this ",
" * is what we want: i.e. we want the predicate a > 28 to be applied ",
" * to the rows that we retrieve from the node at the *top* of the ",
" * subtree generated for the UnionNode. It is the non-intuitive code ",
" * in the normalize node creation that allows this to happen.",
" *"
],
"header": "@@ -53,8 +53,491 @@ import org.apache.derby.iapi.services.classfile.VMOpcode;",
"removed": [
" * child result set that needs one."
]
}
]
}
] |
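The execution-time behavior described in the write-up above--a NormalizeResultSet coercing each value from its child's column types to its own column types (here DECIMAL to BIGINT)--can be sketched roughly as follows. The class and method names are illustrative only, not Derby's actual API:

```java
import java.math.BigDecimal;

// Rough sketch of what a NormalizeResultSet does per row at execution time:
// it receives a row typed like its child's ResultColumnList (here DECIMAL)
// and coerces each value to its own ResultColumnList's types (here BIGINT).
public class NormalizeSketch {
    public static long[] normalizeToBigint(BigDecimal[] childRow) {
        long[] normalized = new long[childRow.length];
        for (int i = 0; i < childRow.length; i++) {
            // longValueExact throws if the DECIMAL value does not fit in a
            // long, loosely analogous to a normalization error at execution
            normalized[i] = childRow[i].longValueExact();
        }
        return normalized;
    }
}
```

In the INSERT example above, the SortResultSet hands up DECIMAL rows and the NormalizeResultSet converts them to BIGINT before the InsertResultSet sees them.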
derby-DERBY-3601-20276f1e
|
DERBY-3601: Optimize LOBStateTracker for non-locator servers.
From the Jira comment (improvements implemented);
- LOBStateTracker.checkCurrentRow(): couldn't Arrays.fill() be moved inside the if block?
- should discardState() and markAccessed() check the release flag?
- should ResultSet.createLOBColumnTracker() use LOBStateTracker.NO_OP_TRACKER instead of allocating a new when serverSupportsLocators() returns false?
Patch file: derby-3601-2a-non_locator_optimization.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@701156 13f79535-47bb-0310-9956-ffa450edef68
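A minimal sketch of the optimization described above, with invented names (Tracker, RealTracker, createTracker): when the server cannot free locators, or the result set has no LOB columns, every result set shares one no-op tracker instead of allocating a real one that would do per-row bookkeeping for nothing:

```java
// Sketch (hypothetical names) of choosing a shared no-op LOB tracker when
// the server doesn't support locators: the no-op instance does nothing per
// row, so non-locator servers pay no per-row tracking cost.
public class TrackerDemo {
    public interface Tracker {
        void checkCurrentRow();
    }

    /** Shared instance used when there is nothing to release. */
    public static final Tracker NO_OP_TRACKER = new Tracker() {
        public void checkCurrentRow() { /* intentionally empty */ }
    };

    public static class RealTracker implements Tracker {
        public void checkCurrentRow() {
            // would free unpublished locators for the current row here
        }
    }

    public static Tracker createTracker(boolean serverSupportsLocators,
                                        boolean hasLobColumns) {
        if (serverSupportsLocators && hasLobColumns) {
            return new RealTracker();
        }
        return NO_OP_TRACKER;   // shared, allocation-free
    }
}
```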
|
[
{
"file": "java/client/org/apache/derby/client/am/LOBStateTracker.java",
"hunks": [
{
"added": [
" // Reset state for the next row.",
" Arrays.fill(this.published, false);"
],
"header": "@@ -130,9 +130,9 @@ class LOBStateTracker {",
"removed": [
" // Reset state for the next row.",
" Arrays.fill(this.published, false);"
]
},
{
"added": [
" if (this.doRelease) {",
" // Force the state to published for all LOB columns.",
" // This will cause checkCurrentRow to ignore all LOBs on the next",
" // invocation. The method markAsPublished cannot be called before",
" // after checkCurrentRow has been called again.",
" Arrays.fill(this.published, true);",
" }"
],
"header": "@@ -143,11 +143,13 @@ class LOBStateTracker {",
"removed": [
" // Force the state to published for all LOB columns.",
" // This will cause checkCurrentRow to ignore all LOBs on the next",
" // invocation. The method markAsPublished cannot be called before after",
" // checkCurrentRow has been called again.",
" Arrays.fill(this.published, true);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" * The state tracker is used to free LOB locators on the server. If the",
" * server doesn't support locators, or there are no LOBs in the result set,",
" * a no-op tracker will be used.",
" if (this.connection_.supportsSessionDataCaching() &&",
" this.resultSetMetaData_.hasLobColumns()) {"
],
"header": "@@ -6217,14 +6217,17 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" * The state tracker is used to free LOB locators on the server.",
" if (this.resultSetMetaData_.hasLobColumns()) {"
]
},
{
"added": [
" this.lobState = new LOBStateTracker(lobIndexes, isBlob, true);"
],
"header": "@@ -6241,8 +6244,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" this.lobState = new LOBStateTracker(lobIndexes, isBlob,",
" this.connection_.serverSupportsLocators());"
]
}
]
}
] |
derby-DERBY-3601-3b248981
|
DERBY-3601: Optimize LOBStateTracker for non-locator servers.
Cleanup patch changing comments and code naming. No functional changes.
Patch file: derby-3601-1a-comments_and_renaming.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@690133 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/LOBStateTracker.java",
"hunks": [
{
"added": [
" * An object that tracks the state of large objects (LOBs) for the current row",
" * in a result set.",
" * A LOB's state is either unpublished or published. When a LOB is published, it",
" * means that the end-user has been given a reference to the LOB object. This",
" * implies that the LOB cannot be automatically freed/released when the",
" * result set position changes (i.e. typically {@code rs.next()}), because the",
" * LOB object must be kept valid/alive until the transaction is ended or the",
" * LOB object is explicitly freed.",
" * <p>",
" * This class covers two types of functionality regarding LOBs;",
" * <li>Keep track of whether a LOB column has been published or not.</li>",
" * Both functionalities will be disabled if the server doesn't support locators.",
" * If locators are enabled, they will be freed when {@link checkCurrentRow} is",
" * called."
],
"header": "@@ -23,16 +23,24 @@ package org.apache.derby.client.am;",
"removed": [
" * An object that tracks the state of large objects (LOBs) in a result set.",
" * This object covers two types of functionality regarding LOBs;",
" * <li>Keep track of whether a LOB column has been accessed.</li>",
" * The former functionality is always present in a tracker object. The latter",
" * functionality may or may not be available. This is decided by whether",
" * locators are supported by the server or not."
]
},
{
"added": [
" /**",
" * Instance to use when there are no LOBs in the result set, or when the",
" * server doesn't support locators.",
" */",
" /** Tells whether the LOB is Blob or a Clob. */",
" /** Tells whether the LOB colum has been published for the current row. */",
" private final boolean[] published;",
" private final boolean doRelease;"
],
"header": "@@ -44,21 +52,23 @@ import java.util.Arrays;",
"removed": [
" /** Instance to use when there are no LOBs in the result set. */",
"",
" /** Tells whether a LOB is Blob or a Clob. */",
" /** Tells whether a LOB colum has been accessed in the current row. */",
" private final boolean[] accessed;",
" private final boolean release;"
]
},
{
"added": [
" * @param doRelease whether locators shall be released",
" LOBStateTracker(int[] lobIndexes, boolean[] isBlob, boolean doRelease) {",
" this.published = new boolean[columns.length];",
" this.doRelease = doRelease;",
" // Zero is an invalid locator, don't fill with a valid value."
],
"header": "@@ -71,15 +81,15 @@ class LOBStateTracker {",
"removed": [
" * @param release whether locators shall be released",
" LOBStateTracker(int[] lobIndexes, boolean[] isBlob, boolean release) {",
" this.accessed = new boolean[columns.length];",
" this.release = release;",
" // Zero is an invalid locator, so don't fill with different value."
]
},
{
"added": [
" if (this.doRelease) {",
" if (!this.published[i] && !cursor.isNull_[this.columns[i] -1]) {"
],
"header": "@@ -94,12 +104,12 @@ class LOBStateTracker {",
"removed": [
" if (this.release) {",
" if (!this.accessed[i] && !cursor.isNull_[this.columns[i] -1]) {"
]
},
{
"added": [
" Arrays.fill(this.published, false);"
],
"header": "@@ -122,7 +132,7 @@ class LOBStateTracker {",
"removed": [
" Arrays.fill(this.accessed, false);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetCursor.java",
"hunks": [
{
"added": [
" netResultSet_.markLOBAsPublished(column);"
],
"header": "@@ -1083,7 +1083,7 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" netResultSet_.markLOBAsAccessed(column);"
]
},
{
"added": [
" netResultSet_.markLOBAsPublished(column);"
],
"header": "@@ -1125,7 +1125,7 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" netResultSet_.markLOBAsAccessed(column);"
]
}
]
}
] |
derby-DERBY-3603-aebcea3f
|
DERBY-3603: 'IN' clause ignores valid results.
Patch contributed by A B (qozinx at gmail dot com)
Some queries using multi-valued IN clauses were not returning the right
results. An example of a query which was processed incorrectly is:
select count(*) FROM spike.accounts account, spike.admin_units admin_unit,
spike.bookings booking
WHERE booking.child_id = 2 AND
admin_unit.admin_unit_id IN (1,21) AND
booking.booking_date_time_out >= 20080331000000 AND
booking.booking_date_time_in <= 20080406235900 AND
account.account_id = booking.account_id AND
admin_unit.admin_unit_id = account.admin_unit_id;
The issue involves the behavior of MultiProbeTableScanResultSet when it
goes to re-open the scan; under certain circumstances, it was failing to
reset the probing state, and so was performing the probing incorrectly,
using only partial portions of the IN list values. For example, in the
above query, there were certain rows which were only tested against the
value "admin_unit_id = 21"; the "admin_unit_id = 1" case was skipped.
MultiProbeTableScanResultSet.reopenCore() was using a heuristic test to
distinguish between the two cases of:
* A - The first is for join processing. In this case we have
* a(nother) row from some outer table and we want to reopen this
* scan to look for rows matching the new outer row.
*
* B - The second is for multi-probing. Here we want to reopen
* the scan on this table to look for rows matching the next value
* in the probe list.
The patch modifies this code so that the caller passes in a boolean flag
to specify which case is occurring, which avoids the problem of the code
thinking that it was in case "B" when in fact it was in case "A".
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@648492 13f79535-47bb-0310-9956-ffa450edef68
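The essence of the fix can be sketched with a toy scan class (names invented, not Derby's actual code): the caller states which reopen scenario applies, so a new outer row always resets the probe position, even when the previous probe pass was interrupted after a single row:

```java
// Toy model (invented names) of the two reopen scenarios: scenario A
// (new outer row) must reset the probe-list position; scenario B (next
// probe value) must keep it. The old heuristic inferred the scenario from
// probeValIndex and guessed wrong for interrupted "oneRowRightSide" joins.
public class ProbeScan {
    private final int[] probeValues;
    private int probeValIndex = 0;

    public ProbeScan(int[] probeValues) {
        this.probeValues = probeValues;
    }

    /** Scenario A: reopen for a new outer row of the join. */
    public void reopenForNewOuterRow() {
        reopenCore(false);
    }

    /** Scenario B: reopen to probe the next IN-list value. */
    public void reopenForNextProbe() {
        reopenCore(true);
    }

    private void reopenCore(boolean forNextProbe) {
        if (!forNextProbe) {
            probeValIndex = 0;  // reset position within the probe list
        }
    }

    /** Value the scan probes with next; advances the position. */
    public int nextProbeValue() {
        return probeValues[probeValIndex++];
    }
}
```

With probe values {1, 21}, a scan interrupted after probing 1 and then reopened for a new outer row starts again at 1 instead of incorrectly resuming at 21.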
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/MultiProbeTableScanResultSet.java",
"hunks": [
{
"added": [
" reopenCore(false);",
" }",
"",
" /**",
" * There are two scenarios for which we reopen this kind of scan:",
" *",
" * A - The first is for join processing. In this case we have",
" * a(nother) row from some outer table and we want to reopen this",
" * scan to look for rows matching the new outer row.",
" *",
" * B - The second is for multi-probing. Here we want to reopen",
" * the scan on this table to look for rows matching the next value",
" * in the probe list.",
" *",
" * If we are reopening the scan for scenario A (join processing)",
" * then we need to reset our position within the probe list. ",
" * If we are reopening the scan for scenario B then we do *not*",
" * want to reset our position within the probe list because that",
" * position tells us where to find the next probe value.",
" *",
" * That said, this method does the work of reopenCore() using",
" * the received boolean to determine which of the two scenarios",
" * we are in. Note that if our current position (i.e. the value",
" * of probeValIndex) is beyond the length of the probe list then",
" * we know that we are reopening the scan for scenario A. Or put",
" * another away, we should never get here for scenario B if",
" * probeValIndex is greater than or equal to the length of the",
" * probe list. The reason is that the call to reopenCore() for",
" * scenario B will only ever happen when moreInListVals() returns",
" * true--and in that case we know that probeValIndex will be less",
" * than the length of the probeValues. But the opposite is not",
" * true: i.e. it is *not* safe to say that a probeValIndex which",
" * is less than the length of probe list is always for scenario",
" * B. That's not true because it's possible that the join to",
" * which this scan belongs is a \"oneRowRightSide\" join, meaning",
" * that this, the \"right\" side scan, will be \"interrupted\" after",
" * we return a single row for the current outer row. If we then",
" * come back with a new outer row we need to reset our position--",
" * even though probeValIndex will be less than probeValues.length",
" * in that case. DERBY-3603.",
" */",
" private void reopenCore(boolean forNextProbe) throws StandardException",
" {",
" if (!forNextProbe)"
],
"header": "@@ -242,34 +242,50 @@ class MultiProbeTableScanResultSet extends TableScanResultSet",
"removed": [
" /* There are two scenarios for which we reopen this kind of scan:",
" *",
" * A - The first is for join processing. In this case we have",
" * a(nother) row from some outer table and we want to reopen this",
" * scan to look for rows matching the new outer row.",
" *",
" * B - The second is for multi-probing. Here we want to reopen",
" * the scan on this table to look for rows matching the next value",
" * in the probe list.",
" *",
" * If we are reopening the scan for scenario A (join processing)",
" * then we need to reset our position within the probe list. ",
" * If we are reopening the scan for scenario B then we do *not*",
" * want to reset our position within the probe list because that",
" * position tells us where to find the next probe value.",
" *",
" * The way we tell the difference between the two scenarios is",
" * by looking at our current position in the probe list (i.e. the",
" * value of probeValIndex): if our current position is beyond the",
" * length of the probe list then we know that we are reopening the",
" * scan for scenario A. Or put another away, we should never get",
" * here for scenario B if probeValIndex is greater than or equal",
" * to the length of the probe list. The reason is that the call",
" * to reopenCore() for scenario B will only ever happen when",
" * moreInListVals() returns true--and in that case we know that",
" * probeValIndex will be less than the length of the probeValues.",
" */",
" if (probeValIndex >= probeValues.length)"
]
}
]
}
] |
derby-DERBY-361-bd541987
|
DERBY-361 fix.
Needed to force a checkpoint before compress, otherwise during crash recovery
it was possible to redo a page that no longer existed in the file because
of compress. This fix adds a number of recovery crash and transaction
abort tests specific to compress into the storerecovery test suite.
git-svn-id: https://svn.apache.org/repos/asf/incubator/derby/code/trunk@202302 13f79535-47bb-0310-9956-ffa450edef68
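The ordering constraint can be illustrated with a toy redo model (invented names, not Derby's recovery code): redo replays only the log records written after the last checkpoint, so forcing a checkpoint immediately before the compress guarantees that no earlier page update is ever replayed against a page the compress removed from the file:

```java
import java.util.List;

// Toy model (invented names) of redo recovery: replay starts after the
// last checkpoint record, so records logged before a forced checkpoint
// are never redone against pages that a later compress truncated.
public class RecoverySketch {
    public static List<String> recordsToRedo(List<String> log) {
        int start = log.lastIndexOf("CHECKPOINT") + 1;
        return log.subList(start, log.size());
    }
}
```

Because the exclusive table lock prevents any page change between the checkpoint log record and the compress, the truncated pages cannot appear in the redo range.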
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/FileContainer.java",
"hunks": [
{
"added": [
" // make sure we don't execute redo recovery on any page",
" // which is getting truncated. At this point we have an exclusive",
" // table lock on the table, so after checkpoint no page change",
" // can happen between checkpoint log record and compress of space.",
" dataFactory.getRawStoreFactory().checkpoint();",
""
],
"header": "@@ -1351,6 +1351,12 @@ public abstract class FileContainer",
"removed": []
}
]
}
] |
derby-DERBY-3613-83412958
|
DERBY-3613: SELECT DISTINCT with GROUP BY produces wrong results
Certain combinations of DISTINCT and GROUP BY in the same query were
producing incorrect results. Duplicate rows were appearing in the
results because the query was including all of the GROUP BY columns
in the evaluation of the DISTINCT clause, not just the columns that
were explicitly specified to be DISTINCT.
For example, in the query:
select distinct a, b from t group by a, b, c
Derby was including two separate rows in the result which had the same
value for columns a and b, but had different values for column c.
Internally, GroupByList.bindGroupByColumns() was generating the
extra column(s) from the group by list into the select's result
column list, but this processing should not be performed when the
query specifies distinct, because adding extra columns to the
set of distinct columns changes the results.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@650728 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3616-26d0c8ed
|
DERBY-3616: Devise a platform-independent encoding for passing the Table Function signature from the compiler to the execution machinery.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@648232 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatIdUtil.java",
"hunks": [
{
"added": [
"\tprivate\tstatic\tfinal\tint\t\tBYTE_MASK = 0xFF;",
"\tprivate\tstatic\tfinal\tint\t\tNIBBLE_MASK = 0xF;",
"\tprivate\tstatic\tfinal\tint\t\tNIBBLE_SHIFT = 4;",
"\tprivate\tstatic\tfinal\tint\t\tHEX_RADIX = 16;",
""
],
"header": "@@ -48,6 +48,11 @@ import java.io.IOException;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromVTI.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.FormatIdUtil;"
],
"header": "@@ -23,6 +23,7 @@ package\torg.apache.derby.impl.sql.compile;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/VTIResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.FormatIdUtil;"
],
"header": "@@ -49,6 +49,7 @@ import org.apache.derby.iapi.store.access.Qualifier;",
"removed": []
},
{
"added": [
" byte[] bytes = FormatIdUtil.fromString( ice );"
],
"header": "@@ -690,7 +691,7 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" byte[] bytes = ice.getBytes();"
]
},
{
"added": [],
"header": "@@ -786,5 +787,4 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" "
]
}
]
}
] |
derby-DERBY-3616-47e1295b
|
DERBY-3616: TableFunctionTest fails under Ubuntu 7.10
Removed all encoding/decoding of the table function's return type and
instead stored it directly as a saved object in the compiled plan.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@649007 13f79535-47bb-0310-9956-ffa450edef68
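The saved-object mechanism the fix relies on can be sketched as follows (simplified, invented class name): the compiler stores the return type by index at compile time and the runtime fetches the same object back by that index, so no platform-dependent string encoding or decoding is involved:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of a compiled plan's saved-objects list: addItem() at
// compile time returns an index that code generation pushes as a constant,
// and getSavedObject() retrieves the identical object at execution time.
public class SavedObjectsSketch {
    private final List<Object> savedObjects = new ArrayList<Object>();

    public int addItem(Object o) {
        savedObjects.add(o);
        return savedObjects.size() - 1;
    }

    public Object getSavedObject(int number) {
        return savedObjects.get(number);
    }
}
```

This mirrors the shape of the patch: FromVTI calls acb.addItem(...) and pushes the returned number, and VTIResultSet calls getSavedObject(returnTypeNumber) to recover the TypeDescriptor.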
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatIdUtil.java",
"hunks": [
{
"added": [],
"header": "@@ -48,10 +48,6 @@ import java.io.IOException;",
"removed": [
"\tprivate\tstatic\tfinal\tint\t\tBYTE_MASK = 0xFF;",
"\tprivate\tstatic\tfinal\tint\t\tNIBBLE_MASK = 0xF;",
"\tprivate\tstatic\tfinal\tint\t\tNIBBLE_SHIFT = 4;",
"\tprivate\tstatic\tfinal\tint\t\tHEX_RADIX = 16;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java",
"hunks": [
{
"added": [
"\t\t@param returnTypeNumber\tWhich saved object contains the return type",
"\t\t\t\t\t\t\t\t(a multi-set) serialized as a byte array"
],
"header": "@@ -643,7 +643,8 @@ public interface ResultSetFactory {",
"removed": [
"\t\t@param returnType The name of the return type (a multi-set) as a string"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromVTI.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream;",
"import org.apache.derby.iapi.services.io.FormatIdOutputStream;",
"import org.apache.derby.iapi.services.io.FormatIdUtil;"
]
},
{
"added": [
" int rtNum = -1;",
" rtNum = acb.addItem(methodCall.getRoutineInfo().getReturnType());",
" mb.push(rtNum);"
],
"header": "@@ -1327,15 +1324,12 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
" String returnType = freezeReturnType( methodCall.getRoutineInfo().getReturnType() );",
" mb.push( returnType );",
" }",
" else",
" {",
"\t\t\tmb.pushNull( String.class.getName());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java",
"hunks": [
{
"added": [
" int returnTypeNumber"
],
"header": "@@ -446,7 +446,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": [
" String returnType"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/VTIResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -48,8 +48,6 @@ import org.apache.derby.iapi.store.access.Qualifier;",
"removed": [
"import org.apache.derby.iapi.services.io.FormatIdInputStream;",
"import org.apache.derby.iapi.services.io.FormatIdUtil;"
]
},
{
"added": [],
"header": "@@ -62,7 +60,6 @@ import org.apache.derby.vti.DeferModification;",
"removed": [
"import java.io.ByteArrayInputStream;"
]
},
{
"added": [
" private final TypeDescriptor returnType;"
],
"header": "@@ -101,7 +98,7 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" private String returnType;"
]
},
{
"added": [
" int returnTypeNumber"
],
"header": "@@ -126,7 +123,7 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" String returnType"
]
},
{
"added": [
"",
" this.returnType = returnTypeNumber == -1 ? null :",
" (TypeDescriptor)",
" activation.getPreparedStatement().getSavedObject(returnTypeNumber);"
],
"header": "@@ -141,7 +138,10 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" this.returnType = returnType;"
]
},
{
"added": [
" TypeDescriptor[] columnTypes = returnType.getRowTypes();"
],
"header": "@@ -668,8 +668,7 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" TypeDescriptor td = thawReturnType( returnType );",
" TypeDescriptor[] columnTypes = td.getRowTypes();"
]
},
{
"added": [],
"header": "@@ -682,28 +681,6 @@ class VTIResultSet extends NoPutResultSetImpl",
"removed": [
" /**",
" * <p>",
" * Deserialize a type descriptor from a string.",
" * </p>",
" */",
" private TypeDescriptor thawReturnType( String ice )",
" throws StandardException",
" {",
" try {",
" byte[] bytes = FormatIdUtil.fromString( ice );",
" ByteArrayInputStream bais = new ByteArrayInputStream( bytes );",
" FormatIdInputStream fiis = new FormatIdInputStream( bais );",
" TypeDescriptor td = (TypeDescriptor) fiis.readObject();",
"",
" return td;",
" ",
" } catch (Throwable t)",
" {",
" throw StandardException.unexpectedUserException( t );",
" }",
" }",
" "
]
}
]
}
] |
derby-DERBY-3618-44900c52
|
DERBY-3618: Perform thread dump on ASSERT failures with JDK 1.5 or higher
Contributed by Erlend Birkenes
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@678858 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/shared/org/apache/derby/shared/common/sanity/AssertFailure.java",
"hunks": [
{
"added": [
"import java.io.ByteArrayOutputStream;",
"import java.io.PrintStream;",
"import java.io.PrintWriter;",
"import java.lang.reflect.InvocationTargetException;",
"import java.lang.reflect.Method;",
"import java.security.AccessControlException;",
"import java.security.AccessController;",
"import java.security.PrivilegedActionException;",
"import java.security.PrivilegedExceptionAction;",
" * AssertFailure is raised when an ASSERT check fails. Because assertions are",
" * not used in production code, are never expected to fail, and recovering from",
" * their failure is expected to be hard, they are under RuntimeException so that",
" * no one needs to list them in their throws clauses. An AssertFailure at the",
" * ",
" * An AssertFailure also contains a string representation of a full thread dump ",
" * for all the live threads at the moment it was thrown if the JVM supports it ",
" * and we have the right permissions. ",
" * ",
" * If the JVM doesn't have the method Thread.getAllStackTraces i.e, we are on a",
" * JVM < 1.5, or if we don't have the permissions java.lang.RuntimePermission",
" * \"getStackTrace\" and \"modifyThreadGroup\", a message saying so is stored",
" * instead.",
" * ",
" * The thread dump string is printed to System.err after the normal stack trace ",
" * when the error is thrown, and it is also directly available by getThreadDump().",
" */",
"public class AssertFailure extends RuntimeException {",
" ",
" private String threadDump;",
" "
],
"header": "@@ -21,19 +21,39 @@",
"removed": [
"import java.io.*;",
" * AssertFailure is raised when an ASSERT check fails.",
" * Because assertions are not used in production code,",
" * are never expected to fail, and recovering from their",
" * failure is expected to be hard, they are under",
" * RuntimeException so that no one needs to list them",
" * in their throws clauses. An AssertFailure at the",
" **/",
"public class AssertFailure extends RuntimeException",
"{"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/unitTests/junit/_Suite.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.junit.SecurityManagerSetup;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derbyTesting.unitTests.junit;",
"removed": []
},
{
"added": [
"public class _Suite extends BaseTestCase {"
],
"header": "@@ -33,7 +34,7 @@ import junit.framework.TestSuite;",
"removed": [
"public class _Suite extends BaseTestCase {"
]
}
]
}
] |
derby-DERBY-3618-47ae0cd9
|
DERBY-3618: Addressed some review comments in AssertFailure and AssertFailureTest
Patch contributed by Erlend Birkenes <erlend@birkenes.net>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@679742 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/shared/org/apache/derby/shared/common/sanity/AssertFailure.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.shared.common.sanity.AssertFailure"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.iapi.services.sanity.AssertFailure"
]
},
{
"added": [
"import java.io.StringWriter;"
],
"header": "@@ -21,9 +21,9 @@",
"removed": [
"import java.io.ByteArrayOutputStream;"
]
}
]
}
] |
derby-DERBY-3619-19a32c4e
|
DERBY-3619: Implement more load types for derbyTesting.perf.clients.Runner
Added a class that creates databases for use in tests that simulate
bank transactions. These databases are intended to follow the rules
defined by the TPC-B benchmark specification as closely as possible.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@667586 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/BankAccountFiller.java",
"hunks": [
{
"added": [
"/*",
"",
"Derby - Class org.apache.derbyTesting.perf.clients.BankAccountFiller",
"",
"Licensed to the Apache Software Foundation (ASF) under one or more",
"contributor license agreements. See the NOTICE file distributed with",
"this work for additional information regarding copyright ownership.",
"The ASF licenses this file to You under the Apache License, Version 2.0",
"(the \"License\"); you may not use this file except in compliance with",
"the License. You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
"Unless required by applicable law or agreed to in writing, software",
"distributed under the License is distributed on an \"AS IS\" BASIS,",
"WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
"See the License for the specific language governing permissions and",
"limitations under the License.",
"",
"*/",
"",
"package org.apache.derbyTesting.perf.clients;",
"",
"import java.sql.Connection;",
"import java.sql.PreparedStatement;",
"import java.sql.Statement;",
"import java.sql.SQLException;",
"",
"import java.util.Arrays;",
"",
"/**",
" * This class creates and populates tables that can be used by the",
" * bank transactions test clients. It attempts to create tables that",
" * follow the rules defined by the TPC-B benchmark specification.",
" */",
"public class BankAccountFiller implements DBFiller {",
"",
" /** Name of the account table. */",
" private static final String ACCOUNT_TABLE = \"ACCOUNTS\";",
" /** Name of the branch table. */",
" private static final String BRANCH_TABLE = \"BRANCHES\";",
" /** Name of the teller table. */",
" private static final String TELLER_TABLE = \"TELLERS\";",
" /** Name of the history table. */",
" private static final String HISTORY_TABLE = \"HISTORY\";",
"",
" /**",
" * Number of extra bytes needed to make the rows in the account",
" * table at least 100 bytes, as required by the TPC-B spec. The",
" * table has two INT columns (4 bytes each) and one BIGINT column",
" * (8 bytes).",
" */",
" private static final int ACCOUNT_EXTRA = 100 - 4 - 4 - 8;",
"",
" /**",
" * Number of extra bytes needed to make the rows in the branch",
" * table at least 100 bytes, as required by the TPC-B spec. The",
" * table has one INT column (4 bytes) and one BIGINT column (8",
" * bytes).",
" */",
" private static final int BRANCH_EXTRA = 100 - 4 - 8;",
"",
" /**",
" * Number of extra bytes needed to make the rows in the teller",
" * table at least 100 bytes, as required by the TPC-B spec. The",
" * table has two INT columns (4 bytes each) and one BIGINT column",
" * (8 bytes).",
" */",
" private static final int TELLER_EXTRA = 100 - 4 - 4 - 8;",
"",
" /**",
" * Number of extra bytes needed to make the rows in the history",
" * table at least 50 bytes, as required by the TPC-B spec. The",
" * table has three INT columns (4 bytes each), one BIGINT column",
" * (8 bytes) and one TIMESTAMP column (12 bytes).",
" */",
" private static final int HISTORY_EXTRA = 50 - 4 - 4 - 4 - 8 - 12;",
"",
" /** Number of records in the account table. */",
" private final int accountRecords;",
" /** Number of records in the teller table. */",
" private final int tellerRecords;",
" /** Number of records in the branch table. */",
" private final int branchRecords;",
"",
" /**",
" * Create a filler that generates tables with the given sizes.",
" *",
" * @param accounts number of records in the account table",
" * @param tellers number of records in the teller table",
" * @param branches number of records in the branch table",
" */",
" public BankAccountFiller(int accounts, int tellers, int branches) {",
" if (accounts <= 0 || tellers <= 0 || branches <= 0) {",
" throw new IllegalArgumentException(",
" \"all arguments must be greater than 0\");",
" }",
" accountRecords = accounts;",
" tellerRecords = tellers;",
" branchRecords = branches;",
" }",
"",
" /**",
" * Create a filler that generate tables which have correct sizes",
" * relative to each other. With scale factor 1, the account table",
" * has 100000 rows, the teller table has 10 rows and the branch",
" * table has 1 row. If the scale factor is different from 1, the",
" * number of rows is multiplied with the scale factor.",
" *",
" * @param tps the scale factor for this database",
" */",
" public BankAccountFiller(int tps) {",
" this(tps * 100000, tps * 10, tps * 1);",
" }",
"",
" /**",
" * Populate the database.",
" */",
" public void fill(Connection c) throws SQLException {",
" c.setAutoCommit(false);",
" dropTables(c);",
" createTables(c);",
" fillTables(c);",
" }",
"",
" /**",
" * Drop the tables if they exits.",
" */",
" private static void dropTables(Connection c) throws SQLException {",
" WisconsinFiller.dropTable(c, ACCOUNT_TABLE);",
" WisconsinFiller.dropTable(c, BRANCH_TABLE);",
" WisconsinFiller.dropTable(c, TELLER_TABLE);",
" WisconsinFiller.dropTable(c, HISTORY_TABLE);",
" c.commit();",
" }",
"",
" /**",
" * Create the tables.",
" */",
" private static void createTables(Connection c) throws SQLException {",
" Statement s = c.createStatement();",
"",
" s.executeUpdate(\"CREATE TABLE \" + ACCOUNT_TABLE +",
" \"(ACCOUNT_ID INT PRIMARY KEY, \" +",
" \"BRANCH_ID INT NOT NULL, \" +",
" // The balance column must be able to hold 10",
" // digits and sign per TPC-B spec, so BIGINT",
" // is needed.",
" \"ACCOUNT_BALANCE BIGINT NOT NULL, \" +",
" \"EXTRA_DATA CHAR(\" + ACCOUNT_EXTRA + \") NOT NULL)\");",
"",
" s.executeUpdate(\"CREATE TABLE \" + BRANCH_TABLE +",
" \"(BRANCH_ID INT PRIMARY KEY, \" +",
" // The balance column must be able to hold 10",
" // digits and sign per TPC-B spec, so BIGINT",
" // is needed.",
" \"BRANCH_BALANCE BIGINT NOT NULL, \" +",
" \"EXTRA_DATA CHAR(\" + BRANCH_EXTRA + \") NOT NULL)\");",
"",
" s.executeUpdate(\"CREATE TABLE \" + TELLER_TABLE +",
" \"(TELLER_ID INT PRIMARY KEY, \" +",
" \"BRANCH_ID INT NOT NULL, \" +",
" // The balance column must be able to hold 10",
" // digits and sign per TPC-B spec, so BIGINT",
" // is needed.",
" \"TELLER_BALANCE INT NOT NULL, \" +",
" \"EXTRA_DATA CHAR(\" + TELLER_EXTRA + \") NOT NULL)\");",
"",
" s.executeUpdate(\"CREATE TABLE \" + HISTORY_TABLE +",
" \"(ACCOUNT_ID INT NOT NULL, \" +",
" \"TELLER_ID INT NOT NULL, \" +",
" \"BRANCH_ID INT NOT NULL, \" +",
" // The amount column must be able to hold 10",
" // digits and sign per TPC-B spec, so BIGINT",
" // is needed.",
" \"AMOUNT BIGINT NOT NULL, \" +",
" \"TIME_STAMP TIMESTAMP NOT NULL, \" +",
" \"EXTRA_DATA CHAR(\" + HISTORY_EXTRA + \") NOT NULL)\");",
"",
" s.close();",
" c.commit();",
" }",
"",
" /**",
" * Fill the tables with rows.",
" */",
" private void fillTables(Connection c) throws SQLException {",
"",
" PreparedStatement atIns =",
" c.prepareStatement(\"INSERT INTO \" + ACCOUNT_TABLE +",
" \"(ACCOUNT_ID, BRANCH_ID, ACCOUNT_BALANCE, \" +",
" \"EXTRA_DATA) VALUES (?, ?, 0, ?)\");",
" atIns.setString(3, createJunk(ACCOUNT_EXTRA)); // same for all rows",
" for (int id = 0; id < accountRecords; id++) {",
" atIns.setInt(1, id);",
" atIns.setInt(2, id % branchRecords);",
" atIns.executeUpdate();",
" }",
" atIns.close();",
" c.commit();",
"",
" PreparedStatement btIns =",
" c.prepareStatement(\"INSERT INTO \" + BRANCH_TABLE +",
" \"(BRANCH_ID, BRANCH_BALANCE, EXTRA_DATA) \" +",
" \"VALUES (?, 0, ?)\");",
" btIns.setString(2, createJunk(BRANCH_EXTRA)); // same for all rows",
" for (int id = 0; id < branchRecords; id++) {",
" btIns.setInt(1, id);",
" btIns.executeUpdate();",
" }",
" btIns.close();",
" c.commit();",
"",
" PreparedStatement ttIns =",
" c.prepareStatement(\"INSERT INTO \" + TELLER_TABLE +",
" \"(TELLER_ID, BRANCH_ID, TELLER_BALANCE, \" +",
" \"EXTRA_DATA) VALUES (?, ?, 0, ?)\");",
" ttIns.setString(3, createJunk(TELLER_EXTRA)); // same for all rows",
" for (int id = 0; id < tellerRecords; id++) {",
" ttIns.setInt(1, id);",
" ttIns.setInt(2, id % branchRecords);",
" ttIns.executeUpdate();",
" }",
" ttIns.close();",
" c.commit();",
" }",
"",
" /**",
" * Return a string of the specified length that can be used to",
" * increase the size of the rows. The string only contains",
" * x's. The rows have a defined minimum size in bytes, whereas the",
" * string length is in characters. For now, we assume that one",
" * character maps to one byte on the disk as long as the string",
" * only contains ASCII characters.",
" *",
" * @param length the length of the string",
" * @return a string of the specified length",
" */",
" private static String createJunk(int length) {",
" char[] junk = new char[length];",
" Arrays.fill(junk, 'x');",
" return new String(junk);",
" }",
"",
" // For testing until the test client that uses the database has",
" // been written.",
" public static void main(String[] args) throws Exception {",
" Class.forName(\"org.apache.derby.jdbc.EmbeddedDriver\");",
" Connection c = java.sql.DriverManager.getConnection(",
" \"jdbc:derby:wombat;create=true\");",
" DBFiller f = new BankAccountFiller(4000, 20, 3);",
" System.out.print(\"filling...\");",
" f.fill(c);",
" System.out.println(\"done!\");",
" }",
"}"
],
"header": "@@ -0,0 +1,256 @@",
"removed": []
}
]
}
] |
derby-DERBY-3619-2445ee83
|
DERBY-3619: Implement more load types for org.apache.derbyTesting.perf.clients.Runner
Added options to Runner to run the bank transaction test.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@671861 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/Runner.java",
"hunks": [
{
"added": [
"import java.util.HashMap;"
],
"header": "@@ -27,7 +27,7 @@ import java.sql.DriverManager;",
"removed": [
"import java.util.HashSet;"
]
},
{
"added": [
" /** Map containing load-specific options. */",
" private final static HashMap loadOpts = new HashMap();"
],
"header": "@@ -58,8 +58,8 @@ public class Runner {",
"removed": [
" /** Set of load-specific options. */",
" private final static HashSet loadOpts = new HashSet();"
]
},
{
"added": [
" parseLoadOpts(args[++i]);"
],
"header": "@@ -142,7 +142,7 @@ public class Runner {",
"removed": [
" loadOpts.addAll(Arrays.asList(args[++i].split(\",\")));"
]
},
{
"added": [
" /**",
" * Parse the load-specific options. It's a comma-separated list of options,",
" * where each option is either a keyword or a (keyword, value) pair",
" * separated by an equals sign (=). The parsed options will be put into the",
" * map {@link #loadOpts}.",
" *",
" * @param optsString the comma-separated list of options",
" */",
" private static void parseLoadOpts(String optsString) {",
" String[] opts = optsString.split(\",\");",
" for (int i = 0; i < opts.length; i++) {",
" String[] keyValue = opts[i].split(\"=\", 2);",
" if (keyValue.length == 2) {",
" loadOpts.put(keyValue[0], keyValue[1]);",
" } else {",
" loadOpts.put(opts[i], null);",
" }",
" }",
" }",
"",
" /**",
" * Checks whether the specified option is set.",
" *",
" * @param option the name of the option",
" * @return {@code true} if the option is set",
" */",
" private static boolean hasOption(String option) {",
" return loadOpts.keySet().contains(option);",
" }",
"",
" /**",
" * Get the {@code int} value of the specified option.",
" *",
" * @param option the name of the option",
" * @param defaultValue the value to return if the option is not set",
" * @return the value of the option",
" * @throws NumberFormatException if the value is not an {@code int}",
" */",
" private static int getLoadOpt(String option, int defaultValue) {",
" String val = (String) loadOpts.get(option);",
" return val == null ? defaultValue : Integer.parseInt(val);",
" }",
""
],
"header": "@@ -162,6 +162,49 @@ public class Runner {",
"removed": []
},
{
"added": [
"\" * bank_tx - emulate simple bank transactions, similar to TPC-B. The\\n\" +",
"\" following load-specific options are accepted:\\n\" +",
"\" - branches=NN: specifies the number of branches in the db\\n\" +",
"\" (default: 1)\\n\" +",
"\" - tellersPerBranch=NN: specifies how many tellers each branch\\n\" +",
"\" in the database has (default: 10)\\n\" +",
"\" - accountsPerBranch=NN: specifies the number of accounts in\\n\" +",
"\" each branch (default: 100000)\\n\" +"
],
"header": "@@ -192,6 +235,14 @@ public class Runner {",
"removed": []
},
{
"added": [
" boolean blob = hasOption(\"blob\");",
" boolean clob = hasOption(\"clob\");"
],
"header": "@@ -217,8 +268,8 @@ public class Runner {",
"removed": [
" boolean blob = loadOpts.contains(\"blob\");",
" boolean clob = loadOpts.contains(\"clob\");"
]
},
{
"added": [
" hasOption(\"secondary\"),",
" hasOption(\"nonIndexed\"));"
],
"header": "@@ -242,8 +293,8 @@ public class Runner {",
"removed": [
" loadOpts.contains(\"secondary\"),",
" loadOpts.contains(\"nonIndexed\"));"
]
},
{
"added": [
" } else if (load.equals(\"bank_tx\")) {",
" return new BankAccountFiller(",
" getLoadOpt(\"branches\", 1),",
" getLoadOpt(\"tellersPerBranch\", 10),",
" getLoadOpt(\"accountsPerBranch\", 100000));"
],
"header": "@@ -252,6 +303,11 @@ public class Runner {",
"removed": []
},
{
"added": [
" hasOption(\"secondary\"), hasOption(\"nonIndexed\"));",
" hasOption(\"secondary\"), hasOption(\"nonIndexed\"));"
],
"header": "@@ -267,12 +323,10 @@ public class Runner {",
"removed": [
" loadOpts.contains(\"secondary\"),",
" loadOpts.contains(\"nonIndexed\"));",
" loadOpts.contains(\"secondary\"),",
" loadOpts.contains(\"nonIndexed\"));"
]
},
{
"added": [
" } else if (load.equals(\"bank_tx\")) {",
" return new BankTransactionClient(",
" getLoadOpt(\"branches\", 1),",
" getLoadOpt(\"tellersPerBranch\", 10),",
" getLoadOpt(\"accountsPerBranch\", 100000));"
],
"header": "@@ -283,6 +337,11 @@ public class Runner {",
"removed": []
}
]
}
] |
derby-DERBY-3619-4ea38fbf
|
DERBY-3619: Implement more load types for org.apache.derbyTesting.perf.clients.Runner
Added BLOB/CLOB load for the single-record update client.
Added possibility to access rows by a non-indexed column or a column
with a non-unique index instead of the primary key column (in the
single-record update client and the single-record select client).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@658572 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/Runner.java",
"hunks": [
{
"added": [
"import java.util.Arrays;",
"import java.util.HashSet;"
],
"header": "@@ -26,6 +26,8 @@ import java.sql.Connection;",
"removed": []
},
{
"added": [
" /** Set of load-specific options. */",
" private final static HashSet loadOpts = new HashSet();"
],
"header": "@@ -56,6 +58,8 @@ public class Runner {",
"removed": []
},
{
"added": [
" } else if (args[i].equals(\"-load_opts\")) {",
" loadOpts.addAll(Arrays.asList(args[++i].split(\",\")));"
],
"header": "@@ -137,6 +141,8 @@ public class Runner {",
"removed": []
},
{
"added": [
"\" 100 000 rows. It accepts the following load-specific\\n\" +",
"\" options (see also -load_opts):\\n\" +",
"\" - blob or clob: use BLOB or CLOB data instead of VARCHAR\\n\" +",
"\" - secondary: select on a column with a secondary (non-unique)\\n\" +",
"\" index instead of the primary key\\n\" +",
"\" - nonIndexed: select on a non-indexed column instead of the\\n\" +",
"\" primary key\\n\" +",
"\" 100 000 rows. It accepts the same load-specific\\n\" +",
"\" options as sr_select.\\n\" +"
],
"header": "@@ -167,10 +173,16 @@ public class Runner {",
"removed": [
"\" 100 000 rows. If _blob or _clob is appended to the\\n\" +",
"\" name, BLOB/CLOB is used for the data columns.\\n\" +",
"\" 100 000 rows\\n\" +"
]
},
{
"added": [
"\" -load_opts: comma-separated list of load-specific options\\n\" +"
],
"header": "@@ -180,6 +192,7 @@ public class Runner {",
"removed": []
},
{
"added": [
" /**",
" * Get the data type to be used for sr_select and sr_update types of load.",
" *",
" * @return one of the {@code java.sql.Types} data type constants",
" */",
" private static int getTextType() {",
" boolean blob = loadOpts.contains(\"blob\");",
" boolean clob = loadOpts.contains(\"clob\");",
" if (blob && clob) {",
" System.err.println(\"Cannot specify both 'blob' and 'clob'\");",
" printUsage(System.err);",
" System.exit(1);",
" }",
" if (blob) {",
" return Types.BLOB;",
" }",
" if (clob) {",
" return Types.CLOB;",
" }",
" return Types.VARCHAR;",
" }",
""
],
"header": "@@ -198,6 +211,28 @@ public class Runner {",
"removed": []
},
{
"added": [
" return new SingleRecordFiller(100000, 1, getTextType(),",
" loadOpts.contains(\"secondary\"),",
" loadOpts.contains(\"nonIndexed\"));",
" return new SingleRecordFiller(100000000, 1);",
" return new SingleRecordFiller(1, 32);"
],
"header": "@@ -206,17 +241,15 @@ public class Runner {",
"removed": [
" return new SingleRecordFiller(100000, 1, Types.VARCHAR);",
" } else if (load.equals(\"sr_select_blob\")) {",
" return new SingleRecordFiller(100000, 1, Types.BLOB);",
" } else if (load.equals(\"sr_select_clob\")) {",
" return new SingleRecordFiller(100000, 1, Types.CLOB);",
" return new SingleRecordFiller(100000000, 1, Types.VARCHAR);",
" return new SingleRecordFiller(1, 32, Types.VARCHAR);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordFiller.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.functionTests.util.UniqueRandomSequence;"
],
"header": "@@ -29,6 +29,7 @@ import java.sql.SQLException;",
"removed": []
},
{
"added": [
" /** The number of tables to distribute the load over. */",
" /** The number of rows in each table. */",
" /**",
" * The data type of the text column (a constant from",
" * {@code java.sql.Types}).",
" */",
" /** SQL name of the data type specified by {@code dataType}. */",
" /**",
" * Whether or not the table includes an integer column with unique values",
" * in random order. A UNIQUE index will be created for the column.",
" */",
" private final boolean withSecIndexColumn;",
" /**",
" * Whether or not the table includes an integer column with unique values",
" * in random order not backed by an index.",
" */",
" private final boolean withNonIndexedColumn;",
" /**",
" * Generate a filler that creates the specified number of tables, each of",
" * which contains the specified number of records. When this constructor",
" * is used, the table only contains two columns: a primary key column (INT)",
" * and a text column (VARCHAR(100)).",
" *",
" * @param records the number of records in each table",
" * @param tables the number of tables to create",
" */",
" public SingleRecordFiller(int records, int tables) {",
" this(records, tables, Types.VARCHAR, false, false);",
" }",
""
],
"header": "@@ -38,13 +39,43 @@ import java.util.Random;",
"removed": []
},
{
"added": [
" public SingleRecordFiller(int records, int tables, int type,",
" boolean withSecIndex, boolean withNonIndexed) {"
],
"header": "@@ -55,7 +86,8 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" public SingleRecordFiller(int records, int tables, int type) {"
]
},
{
"added": [
" withSecIndexColumn = withSecIndex;",
" withNonIndexedColumn = withNonIndexed;",
" String tableName = getTableName(tableSize, table, dataType,",
" withSecIndexColumn, withNonIndexedColumn);",
" (withSecIndexColumn ? \"SEC INT, \" : \"\") +",
" (withNonIndexedColumn ? \"NI INT, \" : \"\") +",
" String extraCols = \"\";",
" String extraParams = \"\";",
" if (withSecIndexColumn) {",
" extraCols += \", SEC\";",
" extraParams += \", ?\";",
" }",
" if (withNonIndexedColumn) {",
" extraCols += \", NI\";",
" extraParams += \", ?\";",
" }",
"",
" \"(ID, TEXT\" + extraCols +",
" \") VALUES (?, ?\" + extraParams + \")\");",
"",
" UniqueRandomSequence secIdSequence = null;",
" if (withSecIndexColumn) {",
" secIdSequence = new UniqueRandomSequence(tableSize);",
" }",
"",
" UniqueRandomSequence nonIndexedSequence = null;",
" if (withNonIndexedColumn) {",
" nonIndexedSequence = new UniqueRandomSequence(tableSize);",
" }",
" int col = 1;",
" ps.setInt(col++, i);",
" ps.setString(col++, randomString(i));",
" ps.setCharacterStream(col++, reader, TEXT_SIZE);",
" ps.setBinaryStream(col++, stream, TEXT_SIZE);",
" }",
" if (withSecIndexColumn) {",
" ps.setInt(col++, secIdSequence.nextValue());",
" }",
" if (withNonIndexedColumn) {",
" ps.setInt(col++, nonIndexedSequence.nextValue());"
],
"header": "@@ -72,33 +104,67 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" String tableName = getTableName(tableSize, table, dataType);",
" \"(ID, TEXT) VALUES (?, ?)\");",
" ps.setInt(1, i);",
" ps.setString(2, randomString(i));",
" ps.setCharacterStream(2, reader, TEXT_SIZE);",
" ps.setBinaryStream(2, stream, TEXT_SIZE);"
]
},
{
"added": [
" if (withSecIndexColumn) {",
" s.executeUpdate(",
" \"CREATE INDEX \" + tableName + \"_SECONDARY_INDEX ON \" +",
" tableName + \"(SEC)\");",
" }",
""
],
"header": "@@ -106,6 +172,12 @@ public class SingleRecordFiller implements DBFiller {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordSelectClient.java",
"hunks": [
{
"added": [
" private final boolean secondaryIndex;",
" private final boolean noIndex;",
"",
" /**",
" * Construct a new single-record select client which fetches VARCHAR data",
" * by primary key.",
" *",
" * @param records the number of records in each table in the test",
" * @param tables the number of tables in the test",
" */",
" public SingleRecordSelectClient(int records, int tables) {",
" this(records, tables, Types.VARCHAR, false, false);",
" }"
],
"header": "@@ -42,6 +42,19 @@ public class SingleRecordSelectClient implements Client {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordUpdateClient.java",
"hunks": [
{
"added": [
"import java.io.ByteArrayInputStream;",
"import java.io.StringReader;"
],
"header": "@@ -21,6 +21,8 @@ limitations under the License.",
"removed": []
},
{
"added": [
" private final int dataType;",
" private final boolean secondaryIndex;",
" private final boolean noIndex;"
],
"header": "@@ -40,6 +42,9 @@ public class SingleRecordUpdateClient implements Client {",
"removed": []
},
{
"added": [
" this(records, tables, Types.VARCHAR, false, false);",
" }",
"",
" /**",
" * Construct a new single-record update client.",
" *",
" * @param records the number of records in each table in the test",
" * @param tables the number of tables in the test",
" */",
" public SingleRecordUpdateClient(int records, int tables, int type,",
" boolean secIndex, boolean nonIndexed) {",
" dataType = type;",
" if (secIndex && nonIndexed) {",
" throw new IllegalArgumentException(",
" \"Cannot select on both secondary index and non-index column\");",
" }",
" secondaryIndex = secIndex;",
" noIndex = nonIndexed;",
" SingleRecordFiller.getTableName(tableSize, i, dataType,",
" secondaryIndex, noIndex);",
" String column = \"ID\";",
" if (secondaryIndex) {",
" column = \"SEC\";",
" } else if (noIndex) {",
" column = \"NI\";",
" }",
" String sql = \"UPDATE \" + tableName + \" SET TEXT = ? WHERE \" +",
" column + \" = ?\";"
],
"header": "@@ -48,16 +53,42 @@ public class SingleRecordUpdateClient implements Client {",
"removed": [
" SingleRecordFiller.getTableName(tableSize, i, Types.VARCHAR);",
" String sql = \"UPDATE \" + tableName + \" SET TEXT = ? WHERE ID = ?\";"
]
},
{
"added": [
" int seed = r.nextInt();",
" if (dataType == Types.VARCHAR) {",
" ps.setString(1, SingleRecordFiller.randomString(seed));",
" } else if (dataType == Types.BLOB) {",
" byte[] bytes = SingleRecordFiller.randomBytes(seed);",
" ps.setBinaryStream(1, new ByteArrayInputStream(bytes),",
" SingleRecordFiller.TEXT_SIZE);",
" } else if (dataType == Types.CLOB) {",
" String string = SingleRecordFiller.randomString(seed);",
" ps.setCharacterStream(1, new StringReader(string),",
" SingleRecordFiller.TEXT_SIZE);",
" } else {",
" throw new IllegalArgumentException();",
" }"
],
"header": "@@ -66,7 +97,20 @@ public class SingleRecordUpdateClient implements Client {",
"removed": [
" ps.setString(1, SingleRecordFiller.randomString(r.nextInt()));"
]
}
]
}
] |
derby-DERBY-3619-6f53b7f4
|
DERBY-3619: Implement more load types for org.apache.derbyTesting.perf.clients.Runner
Added simple selects of rows with columns containing a BLOB or a CLOB.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@650112 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/Runner.java",
"hunks": [
{
"added": [
"import java.sql.Types;"
],
"header": "@@ -25,6 +25,7 @@ import java.io.PrintStream;",
"removed": []
},
{
"added": [
"\" 100 000 rows. If _blob or _clob is appended to the\\n\" +",
"\" name, BLOB/CLOB is used for the data columns.\\n\" +"
],
"header": "@@ -166,7 +167,8 @@ public class Runner {",
"removed": [
"\" 100 000 rows\\n\" +"
]
},
{
"added": [
" return new SingleRecordFiller(100000, 1, Types.VARCHAR);",
" } else if (load.equals(\"sr_select_blob\")) {",
" return new SingleRecordFiller(100000, 1, Types.BLOB);",
" } else if (load.equals(\"sr_select_clob\")) {",
" return new SingleRecordFiller(100000, 1, Types.CLOB);",
" return new SingleRecordFiller(100000000, 1, Types.VARCHAR);",
" return new SingleRecordFiller(1, 32, Types.VARCHAR);"
],
"header": "@@ -204,13 +206,17 @@ public class Runner {",
"removed": [
" return new SingleRecordFiller(100000, 1);",
" return new SingleRecordFiller(100000000, 1);",
" return new SingleRecordFiller(1, 32);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordFiller.java",
"hunks": [
{
"added": [
"import java.io.ByteArrayInputStream;",
"import java.io.StringReader;",
"import java.sql.Types;"
],
"header": "@@ -21,10 +21,13 @@ limitations under the License.",
"removed": []
},
{
"added": [
" private final int dataType;",
" private final String dataTypeString;",
" static final int TEXT_SIZE = 100;"
],
"header": "@@ -37,8 +40,10 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" private static final int TEXT_SIZE = 100;"
]
},
{
"added": [
" * @param type which SQL type to store the text as (one of",
" * {@code java.sql.Types.VARCHAR}, {@code java.sql.Types.BLOB} and",
" * {@code java.sql.Types.CLOB}.",
" public SingleRecordFiller(int records, int tables, int type) {",
" dataType = type;",
" switch (type) {",
" case Types.VARCHAR:",
" dataTypeString = \"VARCHAR\";",
" break;",
" case Types.BLOB:",
" dataTypeString = \"BLOB\";",
" break;",
" case Types.CLOB:",
" dataTypeString = \"CLOB\";",
" break;",
" default:",
" throw new IllegalArgumentException(\"type = \" + type);",
" }",
" String tableName = getTableName(tableSize, table, dataType);",
" \"TEXT \" + dataTypeString + \"(\" + TEXT_SIZE + \"))\");"
],
"header": "@@ -46,21 +51,38 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" public SingleRecordFiller(int records, int tables) {",
" String tableName = getTableName(tableSize, table);",
" \"TEXT VARCHAR(\" + TEXT_SIZE + \"))\");"
]
},
{
"added": [
" if (dataType == Types.VARCHAR) {",
" ps.setString(2, randomString(i));",
" } else if (dataType == Types.CLOB) {",
" StringReader reader = new StringReader(randomString(i));",
" ps.setCharacterStream(2, reader, TEXT_SIZE);",
" } else if (dataType == Types.BLOB) {",
" ByteArrayInputStream stream =",
" new ByteArrayInputStream(randomBytes(i));",
" ps.setBinaryStream(2, stream, TEXT_SIZE);",
" }"
],
"header": "@@ -68,7 +90,16 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" ps.setString(2, randomString(i));"
]
},
{
"added": [
" private static final byte[][] RANDOM_BYTES = new byte[16][TEXT_SIZE];"
],
"header": "@@ -83,6 +114,7 @@ public class SingleRecordFiller implements DBFiller {",
"removed": []
},
{
"added": [
" for (int j = 0; j < TEXT_SIZE; j++) {",
" RANDOM_BYTES[i][j] = (byte) RANDOM_STRINGS[i].charAt(j);",
" }"
],
"header": "@@ -94,6 +126,9 @@ public class SingleRecordFiller implements DBFiller {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordSelectClient.java",
"hunks": [
{
"added": [
"import java.sql.Types;"
],
"header": "@@ -25,6 +25,7 @@ import java.sql.Connection;",
"removed": []
},
{
"added": [
" private final int dataType;",
" * @param type the data type of the text column",
" * ({@code java.sql.Types.VARCHAR}, {@code java.sql.Types.BLOB} or",
" * {@code java.sql.Types.CLOB})",
" public SingleRecordSelectClient(int records, int tables, int type) {",
" dataType = type;",
" String tableName =",
" SingleRecordFiller.getTableName(tableSize, i, dataType);"
],
"header": "@@ -40,22 +41,28 @@ public class SingleRecordSelectClient implements Client {",
"removed": [
" public SingleRecordSelectClient(int records, int tables) {",
" String tableName = SingleRecordFiller.getTableName(tableSize, i);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordUpdateClient.java",
"hunks": [
{
"added": [
"import java.sql.Types;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derbyTesting.perf.clients;",
"removed": []
},
{
"added": [
" String tableName =",
" SingleRecordFiller.getTableName(tableSize, i, Types.VARCHAR);"
],
"header": "@@ -54,7 +55,8 @@ public class SingleRecordUpdateClient implements Client {",
"removed": [
" String tableName = SingleRecordFiller.getTableName(tableSize, i);"
]
}
]
}
] |
derby-DERBY-3619-f02a9f23
|
DERBY-3619: Implement more load types for org.apache.derbyTesting.perf.clients.Runner
Added a test client (BankTransactionClient) that performs operations
on a database created by BankAccountFiller. The transactions are
supposed to have the same profile as the transactions in the TPC-B
benchmark. The test client has not been wired into the Runner class
yet.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@671462 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/BankAccountFiller.java",
"hunks": [
{
"added": [
" static final String ACCOUNT_TABLE = \"ACCOUNTS\";",
" static final String BRANCH_TABLE = \"BRANCHES\";",
" static final String TELLER_TABLE = \"TELLERS\";",
" static final String HISTORY_TABLE = \"HISTORY\";",
"",
" /** The number of tellers per branch, if not specified. */",
" static final int DEFAULT_TELLERS_PER_BRANCH = 10;",
" /** The number of accounts per branch, if not specified. */",
" static final int DEFAULT_ACCOUNTS_PER_BRANCH = 100000;"
],
"header": "@@ -36,13 +36,18 @@ import java.util.Arrays;",
"removed": [
" private static final String ACCOUNT_TABLE = \"ACCOUNTS\";",
" private static final String BRANCH_TABLE = \"BRANCHES\";",
" private static final String TELLER_TABLE = \"TELLERS\";",
" private static final String HISTORY_TABLE = \"HISTORY\";"
]
},
{
"added": [
" static final int HISTORY_EXTRA = 50 - 4 - 4 - 4 - 8 - 12;",
" private final int branches;",
" /** Number of tellers per branch. */",
" private final int tellersPerBranch;",
" /** Number of accounts per branch. */",
" private final int accountsPerBranch;",
" * @param branches number of branches",
" * @param tellersPerBranch number of tellers per branch",
" * @param accountsPerBranch number of accounts per branch",
" public BankAccountFiller(int branches, int tellersPerBranch,",
" int accountsPerBranch) {",
" if (branches <= 0 || tellersPerBranch <= 0 || accountsPerBranch <= 0) {",
" this.branches = branches;",
" this.tellersPerBranch = tellersPerBranch;",
" this.accountsPerBranch = accountsPerBranch;"
],
"header": "@@ -74,30 +79,31 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" private static final int HISTORY_EXTRA = 50 - 4 - 4 - 4 - 8 - 12;",
" /** Number of records in the account table. */",
" private final int accountRecords;",
" /** Number of records in the teller table. */",
" private final int tellerRecords;",
" private final int branchRecords;",
" * @param accounts number of records in the account table",
" * @param tellers number of records in the teller table",
" * @param branches number of records in the branch table",
" public BankAccountFiller(int accounts, int tellers, int branches) {",
" if (accounts <= 0 || tellers <= 0 || branches <= 0) {",
" accountRecords = accounts;",
" tellerRecords = tellers;",
" branchRecords = branches;"
]
},
{
"added": [
" * @param scale the scale factor for this database",
" public BankAccountFiller(int scale) {",
" this(scale, DEFAULT_TELLERS_PER_BRANCH, DEFAULT_ACCOUNTS_PER_BRANCH);"
],
"header": "@@ -107,10 +113,10 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" * @param tps the scale factor for this database",
" public BankAccountFiller(int tps) {",
" this(tps * 100000, tps * 10, tps * 1);"
]
},
{
"added": [
" for (int id = 0; id < accountsPerBranch * branches; id++) {",
" atIns.setInt(2, id / accountsPerBranch);"
],
"header": "@@ -191,9 +197,9 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" for (int id = 0; id < accountRecords; id++) {",
" atIns.setInt(2, id % branchRecords);"
]
},
{
"added": [
" for (int id = 0; id < branches; id++) {"
],
"header": "@@ -204,7 +210,7 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" for (int id = 0; id < branchRecords; id++) {"
]
},
{
"added": [
" for (int id = 0; id < tellersPerBranch * branches; id++) {",
" ttIns.setInt(2, id / tellersPerBranch);"
],
"header": "@@ -216,9 +222,9 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" for (int id = 0; id < tellerRecords; id++) {",
" ttIns.setInt(2, id % branchRecords);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/BankTransactionClient.java",
"hunks": [
{
"added": [
"/*",
"",
"Derby - Class org.apache.derbyTesting.perf.clients.BankTransactionClient",
"",
"Licensed to the Apache Software Foundation (ASF) under one or more",
"contributor license agreements. See the NOTICE file distributed with",
"this work for additional information regarding copyright ownership.",
"The ASF licenses this file to You under the Apache License, Version 2.0",
"(the \"License\"); you may not use this file except in compliance with",
"the License. You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
"Unless required by applicable law or agreed to in writing, software",
"distributed under the License is distributed on an \"AS IS\" BASIS,",
"WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
"See the License for the specific language governing permissions and",
"limitations under the License.",
"",
"*/",
"",
"package org.apache.derbyTesting.perf.clients;",
"",
"import java.sql.Connection;",
"import java.sql.PreparedStatement;",
"import java.sql.ResultSet;",
"import java.sql.SQLException;",
"import java.util.Random;",
"",
"/**",
" * This class implements a client thread which performs bank transactions. The",
" * transactions are intended to perform the same operations as the transactions",
" * specified by the TPC-B benchmark.",
" */",
"public class BankTransactionClient implements Client {",
"",
" /** Random number generator. */",
" private final Random random = new Random();",
"",
" /** The number of branches in the database. */",
" private final int branches;",
" /** The number of tellers per branch. */",
" private final int tellersPerBranch;",
" /** The number of accounts per branch. */",
" private final int accountsPerBranch;",
"",
" /** The connection on which the operations are performed. */",
" private Connection conn;",
" /** Statement that updates the balance of the account. */",
" private PreparedStatement updateAccount;",
" /** Statement that updated the history table. */",
" private PreparedStatement updateHistory;",
" /** Statement that updates the balance of the teller. */",
" private PreparedStatement updateTeller;",
" /** Statement that updated the balance of the branch. */",
" private PreparedStatement updateBranch;",
" /** Statement that retrieves the current account balance. */",
" private PreparedStatement retrieveAccountBalance;",
"",
" /**",
" * Create a client that works on a database with the given number of",
" * branches, tellers and accounts.",
" *",
" * @param branches the number of branches in the database",
" * @param tellersPerBranch the number of tellers per branch",
" * @param accountsPerBranch the number of accounts per branch",
" */",
" public BankTransactionClient(int branches, int tellersPerBranch,",
" int accountsPerBranch) {",
" if (branches <= 0 || tellersPerBranch <= 0 || accountsPerBranch <= 0) {",
" throw new IllegalArgumentException(",
" \"all arguments must be greater than 0\");",
" }",
" this.branches = branches;",
" this.tellersPerBranch = tellersPerBranch;",
" this.accountsPerBranch = accountsPerBranch;",
" }",
"",
" /**",
" * Create a client that works on a database with the default number of",
" * tellers and accounts per branch.",
" *",
" * @param scale the scale factor for the database (equal to the number of",
" * branches)",
" *",
" * @see BankAccountFiller#BankAccountFiller(int)",
" */",
" public BankTransactionClient(int scale) {",
" this(scale,",
" BankAccountFiller.DEFAULT_TELLERS_PER_BRANCH,",
" BankAccountFiller.DEFAULT_ACCOUNTS_PER_BRANCH);",
" }",
"",
" /**",
" * Initialize the connection and the statements used by the test.",
" */",
" public void init(Connection c) throws SQLException {",
" conn = c;",
" c.setAutoCommit(false);",
"",
" updateAccount = c.prepareStatement(",
" \"UPDATE \" + BankAccountFiller.ACCOUNT_TABLE +",
" \" SET ACCOUNT_BALANCE = ACCOUNT_BALANCE + ? WHERE ACCOUNT_ID = ?\");",
"",
" updateHistory = c.prepareStatement(",
" \"INSERT INTO \" + BankAccountFiller.HISTORY_TABLE +",
" \"(ACCOUNT_ID, TELLER_ID, BRANCH_ID, AMOUNT, TIME_STAMP, \" +",
" \"EXTRA_DATA) VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP, '\" +",
" BankAccountFiller.createJunk(BankAccountFiller.HISTORY_EXTRA) +",
" \"')\");",
"",
" updateTeller = c.prepareStatement(",
" \"UPDATE \" + BankAccountFiller.TELLER_TABLE +",
" \" SET TELLER_BALANCE = TELLER_BALANCE + ? WHERE TELLER_ID = ?\");",
"",
" updateBranch = c.prepareStatement(",
" \"UPDATE \" + BankAccountFiller.BRANCH_TABLE +",
" \" SET BRANCH_BALANCE = BRANCH_BALANCE + ? WHERE BRANCH_ID = ?\");",
"",
" retrieveAccountBalance = c.prepareStatement(",
" \"SELECT ACCOUNT_BALANCE FROM \" + BankAccountFiller.ACCOUNT_TABLE +",
" \" WHERE ACCOUNT_ID = ?\");",
" }",
"",
" /**",
" * Perform a single transaction with a profile like the one specified in",
" * Clause 1.2 of the TPC-B specification.",
" */",
" public void doWork() throws SQLException {",
"",
" // Get the transaction input",
" final int tellerId = fetchTellerId();",
" final int branchId = fetchBranchId(tellerId);",
" final int accountId = fetchAccountId(branchId);",
" final int delta = fetchDelta();",
"",
" // Update the account balance",
" updateAccount.setInt(1, delta);",
" updateAccount.setInt(2, accountId);",
" updateAccount.executeUpdate();",
"",
" // Add a transaction log entry",
" updateHistory.setInt(1, accountId);",
" updateHistory.setInt(2, tellerId);",
" updateHistory.setInt(3, branchId);",
" updateHistory.setInt(4, delta);",
" updateHistory.executeUpdate();",
"",
" // Update the teller balance",
" updateTeller.setInt(1, delta);",
" updateTeller.setInt(2, tellerId);",
" updateTeller.executeUpdate();",
"",
" // Update the branch balance",
" updateBranch.setInt(1, delta);",
" updateBranch.setInt(2, branchId);",
" updateBranch.executeUpdate();",
"",
" // Retrieve the balance",
" retrieveAccountBalance.setInt(1, accountId);",
" ResultSet rs = retrieveAccountBalance.executeQuery();",
" rs.next();",
" rs.getString(1);",
" rs.close();",
" conn.commit();",
" }",
"",
" /**",
" * Generate a random teller id.",
" */",
" private int fetchTellerId() {",
" return random.nextInt(tellersPerBranch * branches);",
" }",
"",
" /**",
" * Find the branch the specified teller belongs to.",
" *",
" * @param tellerId the id of the teller",
" * @return the id of the branch for this teller",
" */",
" private int fetchBranchId(int tellerId) {",
" return tellerId / tellersPerBranch;",
" }",
"",
" /**",
" * Generate a random account id based on the specified branch. Per Clause",
" * 5.3.5 of the TPC-B specification, the accounts should be fetched from",
" * the selected branch 85% of the time (or always if that's the only",
" * branch), and from another branch the rest of the time.",
" *",
" * @param branchId the id of the selected branch",
" * @return the id of a random account",
" */",
" private int fetchAccountId(int branchId) {",
" int branch;",
" if (branches == 1 || random.nextFloat() < 0.85f) {",
" // pick an account in the same branch",
" branch = branchId;",
" } else {",
" // pick an account in one of the other branches",
" branch = random.nextInt(branches - 1);",
" if (branch >= branchId) {",
" branch++;",
" }",
" }",
" // select a random account in the selected branch",
" return branch * accountsPerBranch + random.nextInt(accountsPerBranch);",
" }",
"",
" /**",
" * Generate a random delta value between -99999 and +99999, both inclusive",
" * (TPC-B specification, Clause 5.3.6). The delta value specifies how much",
" * the balance should increase or decrease.",
" *",
" * @return a random value in the range [-99999,+99999]",
" */",
" private int fetchDelta() {",
" return random.nextInt(199999) - 99999; // [-99999,+99999]",
" }",
"}"
],
"header": "@@ -0,0 +1,220 @@",
"removed": []
}
]
}
] |
derby-DERBY-3624-4f29da3d
|
DERBY-3624: Missing deadlock in storetests/st_derby715.java
Make both tests wait until the other thread has executed the SELECT
statement and exhausted the ResultSet before they go on to the INSERT
statements that should deadlock.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1618821 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3624-d76dd28c
|
DERBY-3624 test failure in storetests/st_derby715 with IBM 1.5 on an iSeries machine; one deadlock message missing
Change test to check locks rather than sleep for synchronization.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1530696 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3625-5314923a
|
DERBY-3625
Fixed SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE() bug where during the
defragment phase one part of the code would think there was enough room to
move a record, but because the new record id on the destination page would
take more room than on the source page the move actually would not succeed.
In this case an XSDA3 error would be thrown and the compress would fail.
This would cause intermittent errors in the nightly concateTests.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@671605 13f79535-47bb-0310-9956-ffa450edef68
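The adjusted space check this commit introduces can be sketched as follows. This is an illustrative model only, not the actual StoredPage code: the method names are invented, and the compressed record-id thresholds (1 byte up to 63, 2 bytes up to 16383, 4 bytes otherwise) are an assumption modeled on Derby's compressed-number format.

```java
public class SpaceForCopySketch {

    // Assumed size in bytes of a record id in a compressed-number format.
    // The thresholds here are illustrative, not taken from the commit above.
    static int storedSizeRecordId(int id) {
        if (id <= 0x3F) {
            return 1;       // small ids fit in one byte
        }
        if (id <= 0x3FFF) {
            return 2;       // medium ids need two bytes
        }
        return 4;           // large ids need four bytes
    }

    // The essence of the fix: adjust the row's encoded size for the record
    // id it would be assigned on the destination page before comparing it
    // against the destination's free space.
    static boolean spaceForCopy(int freeSpace, int rowSize,
                                int sourceId, int destNextId) {
        int adjusted = rowSize
                - storedSizeRecordId(sourceId)
                + storedSizeRecordId(destNextId);
        return adjusted <= freeSpace;
    }

    public static void main(String[] args) {
        // A row that exactly fills the free space with a 1-byte id no
        // longer fits when the destination's next id needs 4 bytes.
        System.out.println(spaceForCopy(100, 100, 10, 20000)); // false
        System.out.println(spaceForCopy(103, 100, 10, 20000)); // true
    }
}
```

Without the adjustment, the unadjusted comparison (rowSize <= freeSpace) would accept the first case and the move would fail at execution time, which is the intermittent XSDA3 error described above.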
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" /**",
" * Does this page have enough space to move the row to it.",
" * <p>",
" * Calculate if a row of length \"spaceNeeded\" with current record id",
" * \"source_id\" will fit on this page.",
" *",
" * @param spaceNeeded length of the row encoded with source_id record id.",
" * @param source_id record id of the row being moved. ",
" *",
"\t * @return true if the record will fit on this page, after being given a",
" * new record id as the next id on this page.",
" *",
" * @exception StandardException Standard exception policy.",
" **/",
"\tprotected boolean spaceForCopy(",
" int spaceNeeded, ",
" int source_id)",
" spaceNeeded = ",
" spaceNeeded ",
" - StoredRecordHeader.getStoredSizeRecordId(source_id) ",
" + StoredRecordHeader.getStoredSizeRecordId(nextId);",
""
],
"header": "@@ -1367,8 +1367,29 @@ public class StoredPage extends CachedPage",
"removed": [
"\tprotected boolean spaceForCopy(int spaceNeeded)"
]
},
{
"added": [
" int record_id = getHeaderAtSlot(slot).getId();"
],
"header": "@@ -6859,6 +6880,7 @@ public class StoredPage extends CachedPage",
"removed": []
},
{
"added": [
" (!dest_page.spaceForCopy(row_size, record_id)))"
],
"header": "@@ -6867,7 +6889,7 @@ public class StoredPage extends CachedPage",
"removed": [
" (!dest_page.spaceForCopy(row_size)))"
]
},
{
"added": [
" (!dest_page.spaceForCopy(row_size, record_id)))"
],
"header": "@@ -6885,7 +6907,7 @@ public class StoredPage extends CachedPage",
"removed": [
" (!dest_page.spaceForCopy(row_size)))"
]
},
{
"added": [],
"header": "@@ -7989,8 +8011,6 @@ public class StoredPage extends CachedPage",
"removed": [
" * RESOLVE - this has been hacked together and is not efficient. There",
" * are probably some java utilities to use."
]
}
]
}
] |
derby-DERBY-3625-cf775af5
|
DERBY-3625
Adding a space check to the case where we try to move a row to a new page. I
could not come up with a repro for this case, but it seems safer to check for
space than not. The case the code is trying to avoid is a free page that has been
reclaimed with a nextid that requires more space than the row on the source
page.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@672834 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" if ((dest_page.getPageNumber() >= getPageNumber()) ||",
" (!dest_page.spaceForCopy(row_size, record_id)))",
" // The only time a new page might not have enough space is",
" // if the source row fills or almost fills a page by itself",
" // and has a record id that is smaller than the record id",
" // will be on the destination page such that the increase",
" // in space. Record id's are stored on the page in a ",
" // compressed format such that depending on the value they",
" // may store in 1, 2, or 4 bytes, thus the destination page",
" // may need an additional 1, 2 or 3 bytes",
" // depending on the source and destination row id's.",
" // Because of record header overhead this can only happen",
" // if there is just one row on a page. For now just going",
" // to give up on moving this row. Future work could ",
" // improve the algorithm to find a page with an equal or",
" // smaller stored record id in this case.",
" "
],
"header": "@@ -6921,8 +6921,24 @@ public class StoredPage extends CachedPage",
"removed": [
" if (dest_page.getPageNumber() >= getPageNumber())"
]
}
]
}
] |
derby-DERBY-3631-e1e6ecb2
|
DERBY-3631 UDF used with aggregate arguments results in error 30000
Add test case. Actual issue fixed with DERBY-3649
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@719015 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3634-3527fd56
|
DERBY-3634 Cannot use row_number() in ORDER BY clause
Patch derby-3634-remove-2, which removes the old windowing code to
make the patch with new code (which fully replaces the old code except
for the tests) more readable. Before that code is committed, OLAPTest
is temporarily removed from lang/_Suite.java so as not to make
regressions fail.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@820483 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.ResultDescription;"
],
"header": "@@ -25,6 +25,7 @@ import org.apache.derby.catalog.TypeDescriptor;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/C_NodeNames.java",
"hunks": [
{
"added": [],
"header": "@@ -234,8 +234,6 @@ public interface C_NodeNames",
"removed": [
"\tstatic final String ROW_NUMBER_COLUMN_NODE_NAME = \"org.apache.derby.impl.sql.compile.RowNumberColumnNode\"; ",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ColumnReference.java",
"hunks": [
{
"added": [],
"header": "@@ -611,7 +611,6 @@ public class ColumnReference extends ValueNode",
"removed": [
"\t\t\t ( ! source.isWindowFunction() ) && "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ProjectRestrictNode.java",
"hunks": [
{
"added": [],
"header": "@@ -634,38 +634,6 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\t\t/*",
"\t\t\t * If we have a subquery select with window function columns, we ",
"\t\t\t * have the following simplified querytre before the above call:",
"\t\t\t * SELECT -> PRN -> SELECT",
"\t\t\t * where middle PRN is what was originally a FromSubquery node.",
"\t\t\t * With window functions we pull any WindowNodes into the tree, ",
"\t\t\t * modify the lower selects RCL, and put a (noop) PRN on top in the ",
"\t\t\t * above call. This results in:",
"\t\t\t * SELECT -> PRN -> PRN(noop) -> WN -> ...\t\t\t ",
"\t\t\t * A DISTINCT query will place an additional DistinctNode on top of ",
"\t\t\t * the window node:",
"\t\t\t * SELECT -> PRN -> PRN(noop) -> DN -> WN -> ...",
"\t\t\t * Note that the RCL for the initial PRN and its child SELECT used ",
"\t\t\t * to be the same object. After the above call, the initial PRNs RCL ",
"\t\t\t * is incorrect, and we need to regenerate the VCNs. ",
"\t\t\t * The above two combinations are the only two possible from ",
"\t\t\t * modifyAccessPaths() that require regeneration of the VCNs.",
"\t\t\t */",
"\t\t\tif (childResult instanceof ProjectRestrictNode){",
"\t\t\t\tProjectRestrictNode prn = (ProjectRestrictNode) childResult;",
"\t\t\t\tif (prn.childResult.getResultColumns()",
"\t\t\t\t\t.containsWindowFunctionResultColumn())",
"\t\t\t\t{",
"\t\t\t\t\t/* ",
"\t\t\t\t\t * We have a window function column in the RCL of our child ",
"\t\t\t\t\t * PRN, and need to regenerate the VCNs.",
"\t\t\t\t\t */\t\t\t\t\t",
"\t\t\t\t\tresultColumns.genVirtualColumnNodes(",
"\t\t\t\t\t\tprn.childResult, prn.childResult.getResultColumns());",
"\t\t\t\t}",
"\t\t\t}",
"\t\t\t"
]
},
{
"added": [
"\t\t */",
"\t\tif (pushPList != null && (childResult instanceof SelectNode))"
],
"header": "@@ -1130,10 +1098,8 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\t */\t\t\t",
"\t\tif (pushPList != null && ",
"\t\t\tchildResult instanceof SelectNode &&",
"\t\t\t!resultColumns.containsWindowFunctionResultColumn() )"
]
},
{
"added": [
"",
"",
""
],
"header": "@@ -1421,7 +1387,9 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java",
"hunks": [
{
"added": [],
"header": "@@ -1105,10 +1105,6 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t/* Window function columns are added by the WindowResultSet for this levels RCL */",
"\t\t\t\tif (rc.isWindowFunction()) {",
"\t\t\t\t\tcontinue;",
"\t\t\t\t}"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SelectNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.DataTypeDescriptor;"
],
"header": "@@ -33,6 +33,8 @@ import org.apache.derby.iapi.sql.conn.Authorizer;",
"removed": []
},
{
"added": [
"\t\tPredicateList\t\trestrictionList;",
"\t\tResultColumnList\tprRCList;",
""
],
"header": "@@ -1254,29 +1256,11 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\tboolean\t\t\t\thasWindowFunction = false;",
"\t\tResultColumnList\toriginalRCL = this.resultColumns.copyListAndObjects();\t\t",
"\t\t/*",
"\t\t * Even if we have a window function, we must do all projection and",
"\t\t * restriction at this stage. Restricting on a window function, i.e",
"\t\t * SELECT ... <WINDOWFUNCTION> AS W ... WHERE W ...",
"\t\t * is not allowed. Restrictions need to be evaluated in an outer SELECT",
"\t\t * to achieve this.",
"\t\t */",
"\t\thasWindowFunction = resultColumns.containsWindowFunctionResultColumn();\t\t",
"\t\tif (hasWindowFunction) {\t\t\t\t",
"\t\t\t/*",
"\t\t\t * Remove any window function columns now, and reinsert them from",
"\t\t\t * the copy made above once grouping and ordering has been performed.",
"\t\t\t */\t\t\t\t\t\t",
"\t\t\tresultColumns.removeWindowFunctionColumns();\t\t\t",
"\t\t\tif (orderByList != null) {",
"\t\t\t\torderByList.adjustForWindowFunctionColumns();\t\t\t\t",
"\t\t\t}",
"\t\t}",
"\t\t"
]
},
{
"added": [
"\t\t// if it is distinct, that must also be taken care of.",
"\t\tif (isDistinct)"
],
"header": "@@ -1324,12 +1308,8 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\t/* ",
"\t\t * If it is distinct, that must also be taken care of. But, if there is ",
"\t\t * a window function in the RCL we must delay duplicate elimination",
"\t\t * until after the window function has been evaluated.",
"\t\t */",
"\t\tif (isDistinct && !hasWindowFunction)"
]
},
{
"added": [],
"header": "@@ -1433,10 +1413,6 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\t\t\t/*",
"\t\t\t\t * Remove added ordering columns from the saved original RCL",
"\t\t\t\t*/\t\t\t\t\t\t\t\t",
"\t\t\t\toriginalRCL.removeOrderByColumns();\t\t\t\t"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SubqueryNode.java",
"hunks": [
{
"added": [],
"header": "@@ -622,12 +622,10 @@ public class SubqueryNode extends ValueNode",
"removed": [
"\t\t * o It does not contain a window function column.",
"\t\t\t\t\t !hasWindowFunctionColumn() &&"
]
},
{
"added": [],
"header": "@@ -689,14 +687,12 @@ public class SubqueryNode extends ValueNode",
"removed": [
"\t\t * o The subquery has a window function column.",
"\t\t\t\t\t !hasWindowFunctionColumn() &&"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/RealResultSetStatisticsFactory.java",
"hunks": [
{
"added": [],
"header": "@@ -69,7 +69,6 @@ import org.apache.derby.impl.sql.execute.TableScanResultSet;",
"removed": [
"import org.apache.derby.impl.sql.execute.WindowResultSet;"
]
},
{
"added": [],
"header": "@@ -105,7 +104,6 @@ import org.apache.derby.impl.sql.execute.rts.RealTableScanStatistics;",
"removed": [
"import org.apache.derby.impl.sql.execute.rts.RealWindowResultSetStatistics;"
]
}
]
}
] |
derby-DERBY-3645-1d0c809d
|
derby-4477 Selecting / projecting a column whose value is represented by a stream more than once fails
Patch derby-4477-partial-2. This patch clones streamable columns in
occurrence 2..n in ProjectRestrictResultSet if they occur more than
once in the select list. It also adds the three repro test cases from
DERBY-3645, DERBY-3646 and DERBY-2349 to BLOBTest. The patch is not
complete and needs to be revisited when cloning of store streams is
implemented. Currently the cloning occurs via materialization and this
will exhaust memory when lobs are large. See FIXME in
ProjectRestrictResultSet.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@902857 13f79535-47bb-0310-9956-ffa450edef68
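The bookkeeping added to ResultColumnList.mapSourceColumns() can be sketched roughly as below: besides the source-column mapping, flag occurrence 2..n of any result column whose type may be backed by a stream, so the execution-time code knows which columns to clone. The method name, signature, and inputs here are illustrative assumptions, not the actual Derby API.

```java
import java.util.HashMap;
import java.util.Map;

public class CloneMapSketch {

    // For each result column, record whether it is a repeated reference to
    // a virtual column already seen earlier in the list AND may be backed
    // by a stream; only such columns need cloning at execution time.
    static boolean[] buildCloneMap(int[] sourceColumnIds,
                                   boolean[] streamable) {
        boolean[] cloneMap = new boolean[sourceColumnIds.length];
        // key: virtual column id, value: index of first occurrence
        Map<Integer, Integer> seen = new HashMap<Integer, Integer>();
        for (int i = 0; i < sourceColumnIds.length; i++) {
            if (seen.containsKey(sourceColumnIds[i])) {
                // occurrence 2..n of the same source column
                cloneMap[i] = streamable[i];
            } else {
                seen.put(sourceColumnIds[i], i);
            }
        }
        return cloneMap;
    }

    public static void main(String[] args) {
        // e.g. SELECT b, c, b FROM t -- the second b (a BLOB) needs cloning
        boolean[] cm = buildCloneMap(new int[] {1, 2, 1},
                                     new boolean[] {true, false, true});
        System.out.println(java.util.Arrays.toString(cm));
        // [false, false, true]
    }
}
```

The first occurrence keeps the original (possibly stream-backed) value; only later occurrences are cloned, which is why the FIXME about materialization matters when the LOBs are large.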
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java",
"hunks": [
{
"added": [
" @param cloneMapItem Item # for columns that need cloning",
" @param reuseResult Whether or not to reuse the result row."
],
"header": "@@ -294,7 +294,8 @@ public interface ResultSetFactory {",
"removed": [
"\t\t@param reuseResult\tWhether or not to reuse the result row."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ProjectRestrictNode.java",
"hunks": [
{
"added": [
"",
" ResultColumnList.ColumnMapping mappingArrays =",
" resultColumns.mapSourceColumns();",
"",
" int[] mapArray = mappingArrays.mapArray;",
" boolean[] cloneMap = mappingArrays.cloneMap;",
"",
" int cloneMapItem = acb.addItem(cloneMap);"
],
"header": "@@ -1417,8 +1417,15 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\tint[] mapArray = resultColumns.mapSourceColumns();"
]
},
{
"added": [
" * arg8: cloneMapItem - item # for mapping of columns that need cloning",
" * arg9: reuseResult - whether or not the result row can be reused",
" * (ie, will it always be the same)",
" * arg10: doesProjection - does this node do a projection",
" * arg11: estimated row count",
" * arg12: estimated cost",
" * arg13: close method",
" */"
],
"header": "@@ -1456,13 +1463,14 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\t * arg8: reuseResult - whether or not the result row can be reused",
"\t\t *\t\t\t\t\t\t(ie, will it always be the same)",
"\t\t * arg9: doesProjection - does this node do a projection",
"\t\t * arg10: estimated row count",
"\t\t * arg11: estimated cost",
"\t\t * arg12: close method",
"\t\t */"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java",
"hunks": [
{
"added": [
"import java.util.Map;",
"import java.util.HashMap;"
],
"header": "@@ -26,6 +26,8 @@ import java.sql.ResultSetMetaData;",
"removed": []
},
{
"added": [
" * <p/>",
" * Also build an array of boolean for columns that point to the same virtual",
" * column and have types that are streamable to be able to determine if",
" * cloning is needed at execution time.",
" ColumnMapping mapSourceColumns()",
" int[] mapArray = new int[size()];",
" boolean[] cloneMap = new boolean[size()];",
"",
" // key: virtual column number, value: index",
" Map seenMap = new HashMap();",
""
],
"header": "@@ -3496,14 +3498,23 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\tint[] mapSourceColumns()",
"\t\tint[]\t\t\tmapArray = new int[size()];"
]
},
{
"added": [
" ResultColumn rc = vcn.getSourceColumn();",
" updateArrays(mapArray, cloneMap, seenMap, rc, index);",
""
],
"header": "@@ -3521,7 +3532,9 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t\tmapArray[index] = vcn.getSourceColumn().getVirtualColumnId();"
]
},
{
"added": [
" ResultColumn rc = cr.getSource();",
"",
" updateArrays(mapArray, cloneMap, seenMap, rc, index);"
],
"header": "@@ -3536,7 +3549,9 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t\tmapArray[index] = cr.getSource().getVirtualColumnId();"
]
},
{
"added": [
" ColumnMapping result = new ColumnMapping(mapArray, cloneMap);",
" return result;"
],
"header": "@@ -3545,7 +3560,8 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\treturn mapArray;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java",
"hunks": [
{
"added": [
" int cloneMapItem,"
],
"header": "@@ -212,6 +212,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/ProjectRestrictResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.StreamStorable;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
},
{
"added": [
"",
" /**",
" * Holds columns present more than once in the result set and which may be",
" * represented by a stream, since such columns need to be cloned.",
" */",
" private boolean[] cloneMap;",
""
],
"header": "@@ -62,6 +63,13 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" int cloneMapItem,"
],
"header": "@@ -80,6 +88,7 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" cloneMap =",
" ((boolean[])a.getPreparedStatement().getSavedObject(cloneMapItem));",
""
],
"header": "@@ -108,6 +117,9 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
}
]
}
] |
derby-DERBY-3646-1d0c809d
|
derby-4477 Selecting / projecting a column whose value is represented by a stream more than once fails
Patch derby-4477-partial-2. This patch clones streamable columns in
occurrence 2..n in ProjectRestrictResultSet if they occur more than
once in the select list. It also adds the three repro test cases from
DERBY-3645, DERBY-3646 and DERBY-2349 to BLOBTest. The patch is not
complete and needs to be revisited when cloning of store streams is
implemented. Currently the cloning occurs via materialization and this
will exhaust memory when lobs are large. See FIXME in
ProjectRestrictResultSet.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@902857 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java",
"hunks": [
{
"added": [
" @param cloneMapItem Item # for columns that need cloning",
" @param reuseResult Whether or not to reuse the result row."
],
"header": "@@ -294,7 +294,8 @@ public interface ResultSetFactory {",
"removed": [
"\t\t@param reuseResult\tWhether or not to reuse the result row."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ProjectRestrictNode.java",
"hunks": [
{
"added": [
"",
" ResultColumnList.ColumnMapping mappingArrays =",
" resultColumns.mapSourceColumns();",
"",
" int[] mapArray = mappingArrays.mapArray;",
" boolean[] cloneMap = mappingArrays.cloneMap;",
"",
" int cloneMapItem = acb.addItem(cloneMap);"
],
"header": "@@ -1417,8 +1417,15 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\tint[] mapArray = resultColumns.mapSourceColumns();"
]
},
{
"added": [
" * arg8: cloneMapItem - item # for mapping of columns that need cloning",
" * arg9: reuseResult - whether or not the result row can be reused",
" * (ie, will it always be the same)",
" * arg10: doesProjection - does this node do a projection",
" * arg11: estimated row count",
" * arg12: estimated cost",
" * arg13: close method",
" */"
],
"header": "@@ -1456,13 +1463,14 @@ public class ProjectRestrictNode extends SingleChildResultSetNode",
"removed": [
"\t\t * arg8: reuseResult - whether or not the result row can be reused",
"\t\t *\t\t\t\t\t\t(ie, will it always be the same)",
"\t\t * arg9: doesProjection - does this node do a projection",
"\t\t * arg10: estimated row count",
"\t\t * arg11: estimated cost",
"\t\t * arg12: close method",
"\t\t */"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java",
"hunks": [
{
"added": [
"import java.util.Map;",
"import java.util.HashMap;"
],
"header": "@@ -26,6 +26,8 @@ import java.sql.ResultSetMetaData;",
"removed": []
},
{
"added": [
" * <p/>",
" * Also build an array of boolean for columns that point to the same virtual",
" * column and have types that are streamable to be able to determine if",
" * cloning is needed at execution time.",
" ColumnMapping mapSourceColumns()",
" int[] mapArray = new int[size()];",
" boolean[] cloneMap = new boolean[size()];",
"",
" // key: virtual column number, value: index",
" Map seenMap = new HashMap();",
""
],
"header": "@@ -3496,14 +3498,23 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\tint[] mapSourceColumns()",
"\t\tint[]\t\t\tmapArray = new int[size()];"
]
},
{
"added": [
" ResultColumn rc = vcn.getSourceColumn();",
" updateArrays(mapArray, cloneMap, seenMap, rc, index);",
""
],
"header": "@@ -3521,7 +3532,9 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t\tmapArray[index] = vcn.getSourceColumn().getVirtualColumnId();"
]
},
{
"added": [
" ResultColumn rc = cr.getSource();",
"",
" updateArrays(mapArray, cloneMap, seenMap, rc, index);"
],
"header": "@@ -3536,7 +3549,9 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t\tmapArray[index] = cr.getSource().getVirtualColumnId();"
]
},
{
"added": [
" ColumnMapping result = new ColumnMapping(mapArray, cloneMap);",
" return result;"
],
"header": "@@ -3545,7 +3560,8 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\treturn mapArray;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java",
"hunks": [
{
"added": [
" int cloneMapItem,"
],
"header": "@@ -212,6 +212,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/ProjectRestrictResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.StreamStorable;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
},
{
"added": [
"",
" /**",
" * Holds columns present more than once in the result set and which may be",
" * represented by a stream, since such columns need to be cloned.",
" */",
" private boolean[] cloneMap;",
""
],
"header": "@@ -62,6 +63,13 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" int cloneMapItem,"
],
"header": "@@ -80,6 +88,7 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" cloneMap =",
" ((boolean[])a.getPreparedStatement().getSavedObject(cloneMapItem));",
""
],
"header": "@@ -108,6 +117,9 @@ class ProjectRestrictResultSet extends NoPutResultSetImpl",
"removed": []
}
]
}
] |
derby-DERBY-3649-96e7da94
|
DERBY-3649 can't call a stored function with an aggregate argument without getting the following error: ERROR 42Y29
Move check for function in group by from AggregateExpressionVisitor to GroupByList
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@652533 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3649-e1e6ecb2
|
DERBY-3631 UDF used with aggregate arguments results in error 30000
Add test case. Actual issue fixed with DERBY-3649
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@719015 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3650-0da1e72b
|
DERBY-3650 (partial) - Derby + Hibernate JPA 3.2.1 problem on entity with Blob/Clob
Just adding some tests for the issue which don't pass yet, so not adding them to any suite yet.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@657124 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-3650-1b454a1f
|
DERBY-4520 (partial): Refactor and extend data type cloning facilities
Added functionality to clone store streams (without materialization).
Delayed filling the byte buffer in OverflowInputStream constructor and in
OverflowInputStream.resetStream.
Original patch contributed by Mike Matrigali (mikem_app at sbcglobal dot net) as
part of DERBY-3650, modified by Kristian Waagan (Kristian dot Waagan at Sun dot com).
Patch file: derby-4520-3b-CloneableStream_and_delayed_fill.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@902742 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatIdInputStream.java",
"hunks": [
{
"added": [],
"header": "@@ -26,7 +26,6 @@ import java.io.IOException;",
"removed": [
"import org.apache.derby.iapi.reference.SQLState;"
]
},
{
"added": [
" implements ErrorObjectInput, Resetable, CloneableStream"
],
"header": "@@ -43,7 +42,7 @@ import org.apache.derby.iapi.services.context.ContextService;",
"removed": [
"\t implements ErrorObjectInput, Resetable"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/ByteHolder.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,9 @@",
"removed": [
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import java.util.Vector;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/OverflowInputStream.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.CloneableStream;",
"",
"",
"import java.io.InputStream;"
],
"header": "@@ -24,12 +24,16 @@ package org.apache.derby.impl.store.raw.data;",
"removed": []
},
{
"added": [
"implements Resetable, CloneableStream"
],
"header": "@@ -48,7 +52,7 @@ the datatype's stream is set using:",
"removed": [
"implements Resetable"
]
},
{
"added": [
" RecordHandle recordToLock) {"
],
"header": "@@ -104,9 +108,7 @@ implements Resetable",
"removed": [
" RecordHandle recordToLock)",
" throws IOException, StandardException",
" {"
]
},
{
"added": [],
"header": "@@ -114,8 +116,6 @@ implements Resetable",
"removed": [
"",
" fillByteHolder();"
]
},
{
"added": [
" // Simplify this code when we can use the Java 1.5 constructor",
" // taking the cause as an argument.",
" IOException ioe = new IOException(se.toString());",
" ioe.initCause(se);",
" throw ioe;"
],
"header": "@@ -156,7 +156,11 @@ implements Resetable",
"removed": [
" throw new IOException(se.toString());"
]
},
{
"added": [],
"header": "@@ -298,9 +302,6 @@ implements Resetable",
"removed": [
"",
" // fill the byte holder",
" fillByteHolder();"
]
},
{
"added": [
"",
" /**************************************************************************",
" * Public Methods of CloneableStream Interface",
" **************************************************************************/",
"",
" /**",
" * Clone this object.",
" * <p>",
" * Creates a deep copy of this object. The returned stream has its own",
" * working buffers and can be initialized, reset and read independently",
" * from this stream.",
" * <p>",
" * The cloned stream is set back to the beginning of stream, no matter",
" * where the current stream happens to be positioned.",
" *",
" * @return Copy of this stream which can be used independently.",
" */",
" public InputStream cloneStream() {",
" OverflowInputStream ret_stream = ",
" new OverflowInputStream(",
" bh.cloneEmpty(),",
" owner, ",
" firstOverflowPage, ",
" firstOverflowId, ",
" recordToLock);",
"",
" return(ret_stream);",
" }"
],
"header": "@@ -314,4 +315,32 @@ implements Resetable",
"removed": []
}
]
}
] |
derby-DERBY-3650-55bc97fc
|
DERBY-4477 Selecting / projecting a column whose value is represented by a stream more than once fails
Patch derby-4477-lowmem-2, which adds test cases to check that lobs
are not materialized when large, for the use cases covered by this
issue. The test cases are added to the lowmem suite, which is not
part of the regular suites.All. This commit is preparatory in that the
lobs are still small, so these changes should be revisited to change
their sizes when the cloning handles materialization properly, cf
DERBY-3650 and DERBY-4520.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@904538 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/streams/LoopingAlphabetReader.java",
"hunks": [
{
"added": [
" /**",
" * Reopen the stream.",
" */",
" public void reopen()",
" throws IOException {",
" this.closed = false;",
" reset();",
" }",
""
],
"header": "@@ -176,6 +176,15 @@ public class LoopingAlphabetReader",
"removed": []
}
]
}
] |