| id (string, 22–25 chars) | commit_message (string, 137–6.96k chars) | diffs (list, 0–63 items) |
|---|---|---|
derby-DERBY-2799-0ce4bbb6
|
DERBY-2799: Intermittent failure in lang/deadlockMode.java
Fixed problem with ordering of the output. Also added main method to
LangHarnessJavaTest to allow easier testing of a single harness test.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@545873 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2805-cf530d5c
|
DERBY-2805: Fix FromVTI to not throw an ASSERT when sort elimination occurs.
In particular this patch does the following:
1 - Renames the "markOrderingDependent()" method and related variables to
reflect their use, which is to indicate that the optimizer has eliminated
a sort and thus that the underlying result sets *may* need to make
adjustments to compensate for the dropped sort. At the moment the only
result set node which needs to make such an adjustment is
IndexRowToBaseRowNode.
2 - Updates comments where appropriate to more explicitly describe the
intended use of the "adjustForSortElimination()" method (which is
what "markOrderingDependent()" was renamed to).
3 - Adds a void implementation of "adjustForSortElimination()" to the
FromVTI class since that class doesn't need to make any adjustments.
This void method is what solves the failure reported in DERBY-2805.
4 - Adds appropriate test cases to a new fixture in the existing
lang/SysDiagVTIMappingTest JUnit test.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547066 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SelectNode.java",
"hunks": [
{
"added": [
"\t\tboolean\t\t\t\teliminateSort = false;"
],
"header": "@@ -1206,7 +1206,7 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\tboolean\t\t\t\torderingDependent = false;"
]
},
{
"added": [
"\t\t\t// Remember whether or not we can eliminate the sort.",
"\t\t\teliminateSort = eliminateSort || gbn.getIsInSortedOrder();"
],
"header": "@@ -1255,8 +1255,8 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\t\t// Remember if the result is dependent on the ordering",
"\t\t\torderingDependent = orderingDependent || gbn.getIsInSortedOrder();"
]
},
{
"added": [
"\t\t\t\t// Remember whether or not we can eliminate the sort.",
"\t\t\t\teliminateSort = eliminateSort || inSortedOrder;"
],
"header": "@@ -1322,8 +1322,8 @@ public class SelectNode extends ResultSetNode",
"removed": [
"\t\t\t\t// Remember if the result is dependent on the ordering",
"\t\t\t\torderingDependent = orderingDependent || inSortedOrder;"
]
}
]
}
] |
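The DERBY-2805 row above describes replacing a hard failure with a per-node hook: `markOrderingDependent()` became `adjustForSortElimination()`, and `FromVTI` received an explicit no-op implementation. The following is a minimal, invented Java sketch of that pattern — only the method name `adjustForSortElimination` comes from the commit; the class bodies are illustrative, not Derby's code:

```java
// Sketch of the DERBY-2805 pattern: the optimizer notifies result-set
// nodes that a sort was eliminated via adjustForSortElimination().
// A node that cannot compensate fails loudly; FromVTI needs no
// adjustment, so the fix is to give it an explicit empty override
// instead of letting it hit the parent's assertion.
abstract class ResultSetNodeSketch {
    void adjustForSortElimination() {
        // stand-in for Derby's SanityManager ASSERT
        throw new IllegalStateException(
            "node cannot compensate for an eliminated sort");
    }
}

class FromVTISketch extends ResultSetNodeSketch {
    @Override
    void adjustForSortElimination() {
        // A VTI produces its rows independently of the dropped sort,
        // so no compensation is required: the fix is this empty body.
    }
}
```

The diff's `eliminateSort = eliminateSort || gbn.getIsInSortedOrder();` lines show the matching rename on the caller side: the flag accumulates whether any part of the plan allows the sort to be dropped.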
derby-DERBY-2806-6b5cc244
|
DERBY-2818: Rewrite ClobUpdateableReader constructors. Preparation for DERBY-2806.
Patch file: derby-2818-1a.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547296 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUpdateableReader.java",
"hunks": [
{
"added": [
" private InputStream stream = null;"
],
"header": "@@ -48,7 +48,7 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" private InputStream stream;"
]
},
{
"added": [
"",
" init (stream, 0);"
],
"header": "@@ -73,10 +73,11 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" init (stream, 0);"
]
},
{
"added": [
" // A subset of the Clob has not been requested.",
" // Hence set maxPos to -1.",
"",
" InternalClob internalClob = clob.getInternalClob();",
" materialized = internalClob.isWritable(); ",
" if (materialized) {",
" long byteLength = internalClob.getByteLength();",
" this.stream = internalClob.getRawByteStream();",
" init ((LOBInputStream)stream, 0);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(internalClob instanceof StoreStreamClob,",
" \"Wrong type of internal clob representation: \" +",
" internalClob.toString());",
" }",
" // Since this representation is read-only, the stream never has to",
" // update itself, until the Clob representation itself has been",
" // changed. That even will be detected by {@link #updateIfRequired}.",
" this.streamReader = internalClob.getReader(1L);",
" this.pos = 0L;",
" }"
],
"header": "@@ -86,18 +87,30 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" materialized = clob.isWritable(); ",
" //getting bytelength make some time leave exisitng streams",
" //unusable",
" long byteLength = clob.getByteLength();",
" this.stream = clob.getInternalStream ();",
" init (0, byteLength);",
" //The subset of the Clob",
" //has not been requested.",
" //Hence set maxPos to -1."
]
},
{
"added": [
" this.maxPos = pos + len;",
"",
" InternalClob internalClob = clob.getInternalClob();",
" materialized = internalClob.isWritable(); ",
" if (materialized) {",
" long byteLength = internalClob.getByteLength();",
" this.stream = internalClob.getRawByteStream();",
" // Position the stream on pos using the init method.",
" init ((LOBInputStream)stream, pos);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(internalClob instanceof StoreStreamClob,",
" \"Wrong type of internal clob representation: \" +",
" internalClob.toString());",
" }",
" // Since this representation is read-only, the stream never has to",
" // update itself, until the Clob representation itself has been",
" // changed. That even will be detected by {@link #updateIfRequired}.",
" this.streamReader = internalClob.getReader(1L);",
" this.pos = 0L;",
" }"
],
"header": "@@ -115,22 +128,29 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" materialized = clob.isWritable(); ",
" //Get the Byte length from the Clob which can be ",
" //passes to the init method.",
" long byteLength = clob.getByteLength();",
" //Initialize the InputStream with the underlying ",
" //InputStream of the Clob.",
" this.stream = clob.getInternalStream ();",
" //position the stream on pos using the init method.",
" init (pos, byteLength);",
" //The length requested cannot exceed the length",
" //of the underlying Clob object. Hence chose the",
" //minimum of the length of the underlying Clob",
" //object and requested length.",
" maxPos = Math.min(clob.length(), pos + len);"
]
},
{
"added": [],
"header": "@@ -204,18 +224,6 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" private void init (long skip, long streamLength) throws IOException {",
" streamReader = new UTF8Reader (stream, 0, streamLength,",
" conChild, ",
" conChild.getConnectionSynchronization());",
" long remainToSkip = skip;",
" while (remainToSkip > 0) {",
" long skipBy = streamReader.skip(remainToSkip);",
" remainToSkip -= skipBy;",
" }",
" pos = skip;",
" }",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java",
"hunks": [
{
"added": [
" * Returns the current internal Clob representation.",
" * <p>",
" * Care should be taken, as the representation can change when the user",
" * performs operations on the Clob. An example is if the Clob content is",
" * served from a store stream and the user updates the content. The",
" * internal representation will then be changed to a temporary Clob copy",
" * that allows updates.",
" *",
" * @return The current internal Clob representation.",
" InternalClob getInternalClob() {",
" return this.clob;"
],
"header": "@@ -771,30 +771,17 @@ restartScan:",
"removed": [
" * Returns if the internal clob is a writable clob.",
" * @return true if internal clob is writable",
" */",
" boolean isWritable() {",
" return clob.isWritable();",
" }",
"",
" /**",
" * Returns the internal InputStream associated with this clob.",
" * @return internal InputStream",
" * @throws IOException",
" */",
" InputStream getInternalStream () ",
" throws IOException, SQLException {",
" return clob.getRawByteStream();",
" }",
"",
" /**",
" * Returns byte length of the clob",
" * @return byte length of the clob",
" * @throws IOException",
" * @throws SQLException",
" long getByteLength() throws IOException, SQLException {",
" return clob.getByteLength();"
]
}
]
}
] |
derby-DERBY-2806-ddb70fe3
|
DERBY-2806: Fix a test failure in ClobUpdatableReaderTest (testMultiplexedOperationProblem). The patch also cleans up the tests in general (the diff is so large because I removed a try - finally and had to reindent the block).
Patch file: derby-2806-3a-test_fix_and_cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547461 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2806-f839f5e0
|
DERBY-2806: Added a position-aware stream that can be repositioned on request. This was needed to support multiplexed operations on Clob, specifically involving streams. The code is isolated to Clobs on top of a stream from store (read-only), with the exception of UTF8Reader which is also used for other Clobs (temporary, modifiable ones).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547422 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java",
"hunks": [
{
"added": [
" private final PositionedStoreStream positionedStoreStream;"
],
"header": "@@ -66,7 +66,7 @@ final class StoreStreamClob",
"removed": [
" private final InputStream storeStream;"
]
},
{
"added": [
" this.positionedStoreStream = new PositionedStoreStream(stream);",
" this.positionedStoreStream.initStream();"
],
"header": "@@ -96,10 +96,10 @@ final class StoreStreamClob",
"removed": [
" this.storeStream = stream;",
" ((Resetable)this.storeStream).initStream();"
]
},
{
"added": [
" this.positionedStoreStream.closeStream();"
],
"header": "@@ -107,7 +107,7 @@ final class StoreStreamClob",
"removed": [
" ((Resetable)this.storeStream).closeStream();"
]
},
{
"added": [
" this.positionedStoreStream.reposition(0L);",
" int us1 = this.positionedStoreStream.read();",
" int us2 = this.positionedStoreStream.read();",
" byteLength = (us1 << 8) + (us2 << 0);",
" long skipped =",
" this.positionedStoreStream.skip(SKIP_BUFFER_SIZE);"
],
"header": "@@ -126,11 +126,15 @@ final class StoreStreamClob",
"removed": [
" byteLength = resetStoreStream(true);",
" long skipped = this.storeStream.skip(SKIP_BUFFER_SIZE);"
]
},
{
"added": [
" } catch (StandardException se) {",
" throw Util.generateCsSQLException(se);"
],
"header": "@@ -140,6 +144,8 @@ final class StoreStreamClob",
"removed": []
},
{
"added": [
" try {",
" // Skip the encoded length.",
" this.positionedStoreStream.reposition(2L);",
" } catch (StandardException se) {",
" throw Util.generateCsSQLException(se);",
" }",
" return this.positionedStoreStream;"
],
"header": "@@ -183,8 +189,13 @@ final class StoreStreamClob",
"removed": [
" resetStoreStream(true);",
" return this.storeStream;"
]
},
{
"added": [
" try {",
" this.positionedStoreStream.reposition(0L);",
" } catch (StandardException se) {",
" throw Util.generateCsSQLException(se);",
" }",
" Reader reader = new UTF8Reader(this.positionedStoreStream,",
" TypeId.CLOB_MAXWIDTH, this.conChild,",
" this.synchronizationObject);"
],
"header": "@@ -200,9 +211,14 @@ final class StoreStreamClob",
"removed": [
" resetStoreStream(false);",
" Reader reader = new UTF8Reader(this.storeStream, TypeId.CLOB_MAXWIDTH,",
" this.conChild, this.synchronizationObject);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"import org.apache.derby.iapi.types.Resetable;",
"",
" /** Stream store that can reposition itself on request. */",
" private final PositionedStoreStream positionedIn;",
" /** Store last visited position in the store stream. */",
" private long rawStreamPos = 0L;"
],
"header": "@@ -29,12 +29,20 @@ import java.io.UTFDataFormatException;",
"removed": []
},
{
"added": [
" throws IOException, SQLException",
" parent.setupContextStack();",
" try {",
" synchronized (lock) { // Synchronize access to store.",
" if (in instanceof PositionedStoreStream) {",
" this.positionedIn = (PositionedStoreStream)in;",
" // This stream is already buffered, and buffering it again",
" // this high up complicates the handling a lot. Must",
" // implement a special buffered reader to buffer again.",
" // Note that buffering this UTF8Reader again, does not",
" // cause any trouble...",
" this.in = in;",
" try {",
" this.positionedIn.resetStream();",
" } catch (StandardException se) {",
" IOException ioe = new IOException(se.getMessage());",
" ioe.initCause(se);",
" throw ioe;",
" }",
" } else {",
" this.positionedIn = null;",
" // Buffer this for improved performance.",
" this.in = new BufferedInputStream (in);",
" }",
" this.utfLen = readUnsignedShort();",
" // Even if we are reading the encoded length, the stream may",
" // not be a positioned stream. This is currently true when a",
" // stream is passed in after a ResetSet.getXXXStream method.",
" if (this.positionedIn != null) {",
" this.rawStreamPos = this.positionedIn.getPosition();",
" }",
" } // End synchronized block",
" } finally {",
" parent.restoreContextStack();",
" }"
],
"header": "@@ -55,17 +63,46 @@ public final class UTF8Reader extends Reader",
"removed": [
" throws IOException",
"",
"\t\tthis.in = new BufferedInputStream (in);",
"\t\tsynchronized (lock) {",
"\t\t\tthis.utfLen = readUnsignedShort();",
"\t\t}"
]
},
{
"added": [
" this.positionedIn = null;",
"",
" if (SanityManager.DEBUG) {",
" // Do not allow the inputstream here to be a Resetable, as this",
" // means (currently, not by design...) that the length is encoded in",
" // the stream and we can't pass that out as data to the user.",
" SanityManager.ASSERT(!(in instanceof Resetable));",
" }",
" // Buffer this for improved performance.",
" this.in = new BufferedInputStream(in);"
],
"header": "@@ -88,11 +125,19 @@ public final class UTF8Reader extends Reader",
"removed": [
"",
" this.in = new BufferedInputStream(in);"
]
},
{
"added": [
" //@GuardedBy(\"lock\")"
],
"header": "@@ -270,6 +315,7 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [
" // If we are operating on a positioned stream, reposition it to",
" // continue reading at the position we stopped last time.",
" if (this.positionedIn != null) {",
" try {",
" this.positionedIn.reposition(this.rawStreamPos);",
" } catch (StandardException se) {",
" throw Util.generateCsSQLException(se);",
" }",
" }"
],
"header": "@@ -281,7 +327,15 @@ public final class UTF8Reader extends Reader",
"removed": [
""
]
},
{
"added": [
" if (charactersInBuffer != 0) {",
" if (this.positionedIn != null) {",
" // Save the last visisted position so we can start reading where",
" // we let go the next time we fill the buffer.",
" this.rawStreamPos = this.positionedIn.getPosition();",
" }",
" }"
],
"header": "@@ -361,8 +415,14 @@ readChars:",
"removed": [
"\t\tif (charactersInBuffer != 0)"
]
}
]
}
] |
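The DERBY-2806 row above introduces a position-aware stream (`PositionedStoreStream`) so multiple readers can multiplex over one store stream: each reader saves the stream position it stopped at and repositions before reading again. A self-contained sketch of the idea, assuming an in-memory backing array rather than Derby's store stream (all names here are invented except the `reposition`/`getPosition` operations described in the diff):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative position-aware stream: it tracks the current byte
// position and can reposition to any absolute offset. Rewinding is
// implemented by restarting the stream and re-skipping, mirroring how
// a resettable store stream would behave.
public class PositionedStream extends InputStream {
    private final byte[] data;        // stands in for the store stream
    private InputStream in;
    private long pos = 0L;

    public PositionedStream(byte[] data) {
        this.data = data;
        this.in = new ByteArrayInputStream(data);
    }

    public long getPosition() { return pos; }

    /** Reposition to an absolute offset by resetting and skipping. */
    public void reposition(long newPos) throws IOException {
        if (newPos < pos) {           // can't rewind: restart the stream
            in = new ByteArrayInputStream(data);
            pos = 0L;
        }
        while (pos < newPos) {
            long skipped = in.skip(newPos - pos);
            if (skipped <= 0) throw new IOException("EOF before position");
            pos += skipped;
        }
    }

    @Override
    public int read() throws IOException {
        int b = in.read();
        if (b != -1) pos++;
        return b;
    }
}
```

This is also why the `UTF8Reader` hunks above save `rawStreamPos` after filling their buffer and call `reposition(...)` before refilling: between two fills, another reader may have moved the shared stream.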
derby-DERBY-2807-6b141721
|
DERBY-2807: Patch adds BLOB/CLOB testing to the compatibility testing
framework
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@599836 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java",
"hunks": [
{
"added": [
" /**",
" * Equivalent to \"defaultSuite\" as defined above, but assumes a server",
" * has already been started. ",
" * Does NOT decorate for running in embedded mode.",
" */",
" public static Test defaultExistingServerSuite(Class testClass)",
" {",
" return defaultExistingServerSuite(testClass, true);",
" }",
" public static Test defaultExistingServerSuite(Class testClass, boolean cleanDB)",
" {",
" final TestSuite suite = new TestSuite(suiteName(testClass));",
" ",
" if (cleanDB)",
" {",
" suite.addTest(new CleanDatabaseTestSetup(clientExistingServerSuite(testClass)));",
" }",
" else",
" {",
" suite.addTest(clientExistingServerSuite(testClass));",
" }",
"",
" return (suite);",
" }"
],
"header": "@@ -245,6 +245,30 @@ public class TestConfiguration {",
"removed": []
},
{
"added": [
" /**",
" * Equivalent to 'clientServerSuite' above, but assumes server is",
" * already running.",
" *",
" */",
" public static Test clientExistingServerSuite(Class testClass)",
" {",
" TestSuite suite = new TestSuite(testClass,",
" suiteName(testClass)+\":client\");",
" return defaultExistingServerDecorator(suite); // Will not start server and does not stop it when done!.",
" }",
""
],
"header": "@@ -319,6 +343,18 @@ public class TestConfiguration {",
"removed": []
},
{
"added": [
" /**",
" * Decorate a test to use suite's default host and port, ",
" * but assuming the server is already running.",
" */",
" public static Test defaultExistingServerDecorator(Test test)",
" {",
" // As defaultServerDecorator but assuming ",
" // server is already started.",
" // Need to have client ",
" // and not running in J2ME (JSR169).",
" if (!(Derby.hasClient())",
" || JDBC.vmSupportsJSR169())",
" {",
" return new TestSuite(\"empty: no network server support in JSR169 (or derbyclient.jar missing).\");",
" }",
" ",
" Test r =",
" new ServerSetup(test, DEFAULT_HOSTNAME, DEFAULT_PORT);",
" ((ServerSetup)r).setJDBCClient(JDBCClient.DERBYNETCLIENT); ",
" ",
" return r;",
" }"
],
"header": "@@ -346,6 +382,28 @@ public class TestConfiguration {",
"removed": []
}
]
}
] |
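The DERBY-2807 row above adds "existing server" variants of the test suite decorators: same host and port wiring as the normal client/server decorator, but no server lifecycle management. A tiny sketch of that distinction (invented class, using Derby's conventional default port 1527; only `DEFAULT_HOSTNAME`/`DEFAULT_PORT` and the decorator names echo the diff):

```java
// Sketch: the "existing server" decorator configures the same
// connection endpoint but never starts or stops the server itself,
// so the suite can run against an already-running network server.
class ServerConfigSketch {
    static final String DEFAULT_HOSTNAME = "localhost";
    static final int DEFAULT_PORT = 1527;

    final String host;
    final int port;
    final boolean startsServer;

    private ServerConfigSketch(String host, int port, boolean startsServer) {
        this.host = host;
        this.port = port;
        this.startsServer = startsServer;
    }

    /** Normal decorator: starts the server around the test. */
    static ServerConfigSketch defaultServerDecorator() {
        return new ServerConfigSketch(DEFAULT_HOSTNAME, DEFAULT_PORT, true);
    }

    /** Existing-server decorator: same endpoint, no lifecycle management. */
    static ServerConfigSketch defaultExistingServerDecorator() {
        return new ServerConfigSketch(DEFAULT_HOSTNAME, DEFAULT_PORT, false);
    }
}
```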
derby-DERBY-2809-3233189a
|
DERBY-2809 (cleanup) Rename bind method to bindOperand() in UnaryOperatorNode since that correctly
reflects its role. Change sub-classes to call the node correctly. Move some of the logic from
UnaryOperatorNode.bindOperator() to the actual sub-classes where it logically lives.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@548822 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DB2LengthOperatorNode.java",
"hunks": [
{
"added": [
" bindOperand( fromList, subqueryList, aggregateVector);"
],
"header": "@@ -81,7 +81,7 @@ public final class DB2LengthOperatorNode extends UnaryOperatorNode",
"removed": [
" ValueNode boundExpression = super.bindExpression( fromList, subqueryList, aggregateVector);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryDateTimestampOperatorNode.java",
"hunks": [
{
"added": [
"\tpublic ValueNode bindExpression ("
],
"header": "@@ -97,7 +97,7 @@ public class UnaryDateTimestampOperatorNode extends UnaryOperatorNode",
"removed": [
"\tprotected ValueNode bindUnaryOperator("
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryLogicalOperatorNode.java",
"hunks": [
{
"added": [
"\t\tbindOperand(fromList, subqueryList,"
],
"header": "@@ -71,7 +71,7 @@ public abstract class UnaryLogicalOperatorNode extends UnaryOperatorNode",
"removed": [
"\t\tsuper.bindExpression(fromList, subqueryList,"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java",
"hunks": [
{
"added": [
" ",
" /**",
" * Operator type, only valid for XMLPARSE and XMLSERIALIZE.",
" */",
"\tprivate int operatorType;"
],
"header": "@@ -62,7 +62,11 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\tint operatorType;"
]
},
{
"added": [
" //",
" // This has lead to this class having somewhat of",
" // a confused personality. In one mode it is really",
" // a parent (abstract) class for various unary operator",
" // node implementations, in its other mode it is a concrete",
" // class for XMLPARSE and XMLSERIALIZE."
],
"header": "@@ -84,6 +88,12 @@ public class UnaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
" * This method is the implementation for XMLPARSE and XMLSERIALIZE.",
" * Sub-classes need to implement their own bindExpression() method",
" * for their own specific rules."
],
"header": "@@ -286,6 +296,9 @@ public class UnaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
"\t\tbindOperand(fromList, subqueryList, aggregateVector);",
" if (operatorType == XMLPARSE_OP)",
" bindXMLParse();",
" else if (operatorType == XMLSERIALIZE_OP)",
" bindXMLSerialize();",
" return this;",
"\t * Bind the operand for this unary operator.",
" * Binding the operator may change the operand node.",
" * Sub-classes bindExpression() methods need to call this",
" * method to bind the operand.",
"\tprotected void bindOperand(",
"\t\t\treturn;"
],
"header": "@@ -302,30 +315,29 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\t\treturn bindUnaryOperator(fromList, subqueryList, aggregateVector);",
"\t * Workhorse for bindExpression. This exists so it can be called",
"\t * by child classes.",
"\tprotected ValueNode bindUnaryOperator(",
"\t\t/*",
"\t\t** Operand can be null for COUNT(*) which",
"\t\t** is treated like a normal aggregate.",
"\t\t*/",
"\t\tif (operand == null)",
"\t\t{",
"\t\t\treturn this;",
"\t\t}",
"\t\t\treturn this;"
]
},
{
"added": [],
"header": "@@ -342,13 +354,6 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"",
"\t\tif (operatorType == XMLPARSE_OP)",
"\t\t\tbindXMLParse();",
"\t\telse if (operatorType == XMLSERIALIZE_OP)",
"\t\t\tbindXMLSerialize();",
"",
"\t\treturn this;"
]
}
]
}
] |
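The DERBY-2809 cleanup above inverts the binding flow: instead of the parent's `bindExpression()` doing subclass-specific work, each subclass implements its own `bindExpression()` and calls the shared `bindOperand()` helper. A minimal sketch of the shape (classes invented; only the method names `bindExpression`/`bindOperand` come from the commit):

```java
// Sketch of the bindOperand() refactoring: the parent exposes one
// shared step, and each subclass owns its full binding sequence
// rather than inheriting a one-size-fits-all bindExpression().
abstract class UnaryNodeSketch {
    boolean operandBound = false;

    /** Shared helper: bind the operand. Subclasses call this. */
    protected void bindOperand() {
        operandBound = true;
    }

    /** Each subclass supplies its own binding rules. */
    abstract UnaryNodeSketch bindExpression();
}

class IsNullNodeSketch extends UnaryNodeSketch {
    @Override
    UnaryNodeSketch bindExpression() {
        bindOperand();          // shared step first
        // subclass-specific binding rules would follow here
        return this;
    }
}
```

This matches the hunks above, where `super.bindExpression(...)` calls in `DB2LengthOperatorNode` and `UnaryLogicalOperatorNode` become `bindOperand(...)` calls, and the XMLPARSE/XMLSERIALIZE logic stays in the one class that concretely needs it.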
derby-DERBY-2809-ac12b1fc
|
DERBY-2809 (cleanup) Remove need for specific method indicating a node is a unary +/- with a
parameter node, it can be handled consistently by the method ValueNode.isParameterNode().
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552070 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CastNode.java",
"hunks": [
{
"added": [
" // Obviously the type of a parameter that",
" // requires its type from context (a parameter)",
" // gets its type from the type of the CAST.",
" castOperand.setType(getTypeServices());"
],
"header": "@@ -397,9 +397,12 @@ public class CastNode extends ValueNode",
"removed": [
"\t\t\tbindParameter();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -552,22 +552,6 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\t/**",
"\t * Returns true if this UnaryOperatorNode is for -?/+?.",
"\t * This is required to check -?/+? say in the following sql",
"\t * select * from t1 where -? and c11=c11 or +?",
"\t * ",
"\t * @return\tTrue if this +?/-? node",
"\t */",
"\tpublic boolean isUnaryMinusOrPlusWithParameter()",
"\t{",
"\t\tif (operand !=null && operand instanceof ParameterNode && operand.requiresTypeFromContext() && ",
"\t\t\t\t(operator!= null && (operator.equals(\"-\") || operator.equals(\"+\"))))",
"\t\t\treturn true;",
"\t\telse",
"\t\t\treturn false;",
"\t}",
""
]
},
{
"added": [],
"header": "@@ -640,9 +624,6 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\t\tif (operand == null)",
"\t\t\treturn;",
""
]
}
]
}
] |
derby-DERBY-2809-b9599450
|
DERBY-2809 Add test cases for non-boolean unary operators in case expressions.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@551764 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2809-e2f08d22
|
DERBY-2809 (partial) Cleanup the handling of nodes under UnaryOperatorNode to ensure
parameters (nodes requiring type to be set from the context) are handled consistently.
This includes moving logic to the node (UnaryArithmeticOperatorNode) that "owns" it rather than having it in the
parent class (UnaryOperatorNode).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@551183 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryArithmeticOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -52,23 +52,6 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": [
"",
"\t//when the bindExpression method is called during the normal binding phase,",
"\t//unary minus and unary plus dynamic parameters are not ready for",
"\t//binding because the type of these dynamic parameters is not yet set.",
"\t//For eg, consider sql select * from t1 where c1 = -?",
"\t//bindExpression on -? gets called from BinaryComparisonOperatorNode's",
"\t//bindExpression but the parameter type has not been set yet for -?",
"\t//Later on, in BinaryComparisonOperatorNode's bindExpression, the type",
"\t//of the -? gets set to the type of c1 by the setType call. ",
"\t//Now, at this point, we are ready to finish binding phase for -? ",
"\t//(This class's setType method calls the bindExpression to finish binding)",
"\t//In order to accomplish binding later on, we need to save the following ",
"\t//3 objects during first call to bindExpression and then later this ",
"\t//gets used in setType method when it calls the bindExpression method.",
"\tFromList localCopyFromList;",
"\tSubqueryList localCopySubqueryList;",
"\tVector localAggregateVector;"
]
},
{
"added": [
" ",
" /**",
" * Unary + and - require their type to be set if",
" * they wrap another node (e.g. a parameter) that",
" * requires type from its context.",
" * @see ValueNode#requiresTypeFromContext",
" */",
" public boolean requiresTypeFromContext()",
" {",
" if (operatorType == UNARY_PLUS || operatorType == UNARY_MINUS)",
" return operand.requiresTypeFromContext(); ",
" return false;",
" }",
" * For SQRT and ABS the parameter becomes a DOUBLE.",
" * For unary + and - no change is made to the",
" * underlying node. Once this node's type is set",
" * using setType, then the underlying node will have",
" * its type set."
],
"header": "@@ -101,13 +84,26 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": [
"\t * By default unary operators don't accept ? parameters as operands.",
"\t * This can be over-ridden for particular unary operators.",
"\t *",
"\t *\tWe throw an exception if the parameter doesn't have a datatype",
"\t *\tassigned to it yet."
]
},
{
"added": [
" return;",
" ",
"\t\tif (operatorType == UNARY_MINUS || operatorType == UNARY_PLUS) ",
" ",
" // Not expected to get here since only the above types are supported",
" // but the super-class method will throw an exception",
" super.bindParameter();",
" ",
" "
],
"header": "@@ -120,15 +116,19 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": [
"\t\telse if (operatorType == UNARY_MINUS || operatorType == UNARY_PLUS) ",
"\t\telse if (operand.getTypeServices() == null)",
"\t\t{",
"\t\t\tthrow StandardException.newException(SQLState.LANG_UNARY_OPERAND_PARM, operator);",
"\t\t}"
]
},
{
"added": [],
"header": "@@ -146,9 +146,6 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": [
"\t\tlocalCopyFromList = fromList;",
"\t\tlocalCopySubqueryList = subqueryList;",
"\t\tlocalAggregateVector = aggregateVector;"
]
},
{
"added": [
" checkOperandIsNumeric(operand.getTypeId());"
],
"header": "@@ -163,15 +160,7 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": [
"\t\t\tTypeId operandType = operand.getTypeId();",
"",
"\t\t\tif ( ! operandType.isNumericTypeId())",
"\t\t\t{",
"\t\t\t",
"\t\t\t\tthrow StandardException.newException(SQLState.LANG_UNARY_ARITHMETIC_BAD_TYPE, ",
"\t\t\t\t\t(operatorType == UNARY_PLUS) ? \"+\" : \"-\", ",
"\t\t\t\t\toperandType.getSQLTypeName());",
"\t\t\t}"
]
},
{
"added": [
" ",
" /**",
" * Only called for Unary +/-.",
" *",
" */",
"\tprivate void checkOperandIsNumeric(TypeId operandType) throws StandardException",
"\t{",
"\t if (!operandType.isNumericTypeId())",
"\t {",
"\t throw StandardException.newException(",
" SQLState.LANG_UNARY_ARITHMETIC_BAD_TYPE, ",
"\t (operatorType == UNARY_PLUS) ? \"+\" : \"-\", ",
"\t operandType.getSQLTypeName());",
"\t }",
"\t ",
"\t}"
],
"header": "@@ -179,6 +168,22 @@ public class UnaryArithmeticOperatorNode extends UnaryOperatorNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -76,11 +76,6 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\tpublic final static int UNARY_PLUS\t= 1;",
"\tpublic final static int UNARY_MINUS\t= 2;",
"\tpublic final static int NOT\t\t= 3;",
"\tpublic final static int IS_NULL\t\t= 4;",
""
]
},
{
"added": [
"\t\tif (operand.requiresTypeFromContext()) {",
" // If not bound yet then just return.",
" // The node type will be set by either",
" // this class' bindExpression() or a by",
" // a node that contains this expression.",
" if (operand.getTypeServices() == null)",
" return;",
" }"
],
"header": "@@ -334,16 +329,18 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"",
"\t\t//Return with no binding, if the type of unary minus/plus parameter is not set yet.",
"\t\tif (operand.requiresTypeFromContext() && ((operator.equals(\"-\") || operator.equals(\"+\"))) && operand.getTypeServices() == null)",
"\t\t\treturn;",
"",
"\t\tif (operand.requiresTypeFromContext())"
]
},
{
"added": [
" private void bindXMLParse() throws StandardException"
],
"header": "@@ -362,7 +359,7 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
" public void bindXMLParse() throws StandardException"
]
},
{
"added": [
" private void bindXMLSerialize() throws StandardException"
],
"header": "@@ -402,7 +399,7 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
" public void bindXMLSerialize() throws StandardException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ValueNodeList.java",
"hunks": [
{
"added": [
"\t\t\tValueNode valueNode = (ValueNode) elementAt(index);",
"\t\t\tif (valueNodeDTS != null)",
"\t\t\t\treturn valueNodeDTS;",
"\t\treturn null;"
],
"header": "@@ -283,24 +283,20 @@ public class ValueNodeList extends QueryTreeNodeVector",
"removed": [
"\t\tDataTypeDescriptor\tfirstDTS = null;",
"\t\t\tValueNode\t\t\tvalueNode;",
"",
"\t\t\tvalueNode = (ValueNode) elementAt(index);",
"\t\t\tif ((firstDTS == null) && (valueNodeDTS != null))",
"\t\t\t\tfirstDTS = valueNodeDTS;",
"\t\t\t\tbreak;",
"\t\treturn firstDTS;"
]
},
{
"added": [
"\t\t\tValueNode valueNode = (ValueNode) elementAt(index);"
],
"header": "@@ -574,11 +570,10 @@ public class ValueNodeList extends QueryTreeNodeVector",
"removed": [
"\t\tValueNode\tvalueNode;",
"\t\t\tvalueNode = (ValueNode) elementAt(index);"
]
}
]
}
] |
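The DERBY-2809 partial-cleanup row above moves parameter handling into `UnaryArithmeticOperatorNode`: unary `+` and `-` are transparent wrappers, so they require a type from context exactly when their operand (for example a `?` parameter) does. A simplified sketch of that delegation (types invented; `requiresTypeFromContext`, `UNARY_PLUS`, and `UNARY_MINUS` come from the diff):

```java
// Sketch: unary +/- delegate requiresTypeFromContext() to their
// operand, so "-?" is typed from context just like a bare "?",
// while SQRT/ABS keep their own fixed typing rules.
abstract class ValueNodeSketch {
    boolean requiresTypeFromContext() { return false; }
}

class ParameterNodeSketch extends ValueNodeSketch {
    @Override
    boolean requiresTypeFromContext() { return true; }
}

class UnaryArithmeticSketch extends ValueNodeSketch {
    static final int UNARY_PLUS = 1, UNARY_MINUS = 2, SQRT = 3;

    final int operatorType;
    final ValueNodeSketch operand;

    UnaryArithmeticSketch(int operatorType, ValueNodeSketch operand) {
        this.operatorType = operatorType;
        this.operand = operand;
    }

    @Override
    boolean requiresTypeFromContext() {
        // +/- wrap the operand transparently; other operators don't.
        if (operatorType == UNARY_PLUS || operatorType == UNARY_MINUS)
            return operand.requiresTypeFromContext();
        return false;
    }
}
```

This is what lets the commit delete the saved `localCopyFromList`/`localCopySubqueryList`/`localAggregateVector` fields: binding no longer needs to be replayed later, because the contextual typing question is answered uniformly.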
derby-DERBY-2811-87264765
|
DERBY-2811: Use internal variable name for host parameter in default security policy; and handle hostname wildcard ::
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547521 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/drda/NetworkServerControl.java",
"hunks": [
{
"added": [
"\tprivate final static String IPV6_HOSTNAME_WILDCARD = \"::\";"
],
"header": "@@ -178,6 +178,7 @@ public class NetworkServerControl{",
"removed": []
},
{
"added": [
" // substituted into the default policy file. This is the hostname for",
" // SocketPermissions. This is an internal property which customers",
" // may not override.",
" System.setProperty( Property.DERBY_SECURITY_HOST, getHostNameForSocketPermission( server ) );",
" server.consoleMessage( \"XXX \" + Property.DERBY_SECURITY_HOST + \" = \" + PropertyUtil.getSystemProperty( Property.DERBY_SECURITY_HOST ) );",
" "
],
"header": "@@ -585,13 +586,14 @@ public class NetworkServerControl{",
"removed": [
" // substituted into the default policy file. It is ok to force this",
" // property at this time because it has already been read",
" // (and if necessary overridden) by server.getPropertyInfo()",
" // followed by server.parseArgs().",
" System.setProperty( Property.DRDA_PROP_HOSTNAME, getHostNameForSocketPermission( server ) );"
]
},
{
"added": [
" * wildcard valuse \"0.0.0.0\" and \"::\" are forced to be \"*\" since that is the wildcard",
" * not understand the \"0.0.0.0\" and \"::\" wildcards."
],
"header": "@@ -624,9 +626,9 @@ public class NetworkServerControl{",
"removed": [
" * wildcard value \"0.0.0.0\" is forced to be \"*\" since that is the wildcard",
" * not understand the \"0.0.0.0\" wildcard."
]
}
]
}
] |
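The two DERBY-2811 rows above map the listen-address wildcards to the `*` wildcard that `java.net.SocketPermission` actually understands. A sketch of that mapping, assuming a standalone helper class (the constant names `DERBY_HOSTNAME_WILDCARD`, `IPV6_HOSTNAME_WILDCARD`, `SOCKET_PERMISSION_HOSTNAME_WILDCARD` and the method name `getHostNameForSocketPermission` are taken from the diffs; the class wrapper is invented):

```java
// Sketch: SocketPermission does not understand the "0.0.0.0" (IPv4)
// or "::" (IPv6) listen-on-all-interfaces addresses, so the server
// substitutes "*" into the default security policy for those values.
public class SocketPermissionHost {
    private static final String DERBY_HOSTNAME_WILDCARD = "0.0.0.0";
    private static final String IPV6_HOSTNAME_WILDCARD = "::";
    private static final String SOCKET_PERMISSION_HOSTNAME_WILDCARD = "*";

    /** Map a server listen address to a SocketPermission host value. */
    public static String getHostNameForSocketPermission(String host) {
        if (DERBY_HOSTNAME_WILDCARD.equals(host)
                || IPV6_HOSTNAME_WILDCARD.equals(host)) {
            return SOCKET_PERMISSION_HOSTNAME_WILDCARD;
        }
        return host;
    }
}
```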
derby-DERBY-2811-f69e1a28
|
DERBY-2811: Handle special 0.0.0.0 wildcard as host parameter when installing the server's security manager.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547230 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/drda/NetworkServerControl.java",
"hunks": [
{
"added": [
"\tprivate final static String DERBY_HOSTNAME_WILDCARD = \"0.0.0.0\";",
"\tprivate final static String SOCKET_PERMISSION_HOSTNAME_WILDCARD = \"*\";"
],
"header": "@@ -177,6 +177,8 @@ public class NetworkServerControl{",
"removed": []
},
{
"added": [
" //",
" // Forcibly set the following property so that it will be correctly",
" // substituted into the default policy file. It is ok to force this",
" // property at this time because it has already been read",
" // (and if necessary overridden) by server.getPropertyInfo()",
" // followed by server.parseArgs().",
" //",
" System.setProperty( Property.DRDA_PROP_HOSTNAME, getHostNameForSocketPermission( server ) );"
],
"header": "@@ -581,8 +583,14 @@ public class NetworkServerControl{",
"removed": [
" if ( PropertyUtil.getSystemProperty( Property.DRDA_PROP_HOSTNAME ) == null )",
" { System.setProperty( Property.DRDA_PROP_HOSTNAME, server.getHost() ); }"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java",
"hunks": [
{
"added": [
"import java.util.ArrayList;"
],
"header": "@@ -28,6 +28,7 @@ import java.io.InputStream;",
"removed": []
},
{
"added": [
" public static final String HOST_OPTION = \"-h\";",
""
],
"header": "@@ -57,6 +58,8 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": []
},
{
"added": [
" String[] args = getDefaultStartupArgs( false );"
],
"header": "@@ -157,7 +160,7 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" String[] args = getDefaultStartupArgs();"
]
},
{
"added": [
" boolean skipHostName = false;"
],
"header": "@@ -169,6 +172,7 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": []
},
{
"added": [
" count = startupArgs.length;",
" for ( int i = 0; i < count; i++ )",
" {",
" // if the special startup args override the hostname, then don't",
" // specify it twice",
" if ( HOST_OPTION.equals( startupArgs[ i ] ) ) { skipHostName = true; }",
" }",
"",
" String[] defaultArgs = getDefaultStartupArgs( skipHostName );"
],
"header": "@@ -183,7 +187,15 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" String[] defaultArgs = getDefaultStartupArgs();"
]
},
{
"added": [
" //System.out.println( \"XXX server startup command = \" + command );",
""
],
"header": "@@ -201,6 +213,8 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": []
},
{
"added": [
" {",
" try {",
" networkServerController.shutdown();",
" } catch (Throwable t)",
" {",
" t.printStackTrace( System.out );",
" }",
" }"
],
"header": "@@ -246,7 +260,14 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" networkServerController.shutdown();"
]
},
{
"added": [
" public static String[] getDefaultStartupArgs( boolean skipHostName )",
" ArrayList argsList = new ArrayList();",
"",
" argsList.add( \"start\" );",
"",
" if ( !skipHostName )",
" {",
" argsList.add( HOST_OPTION );",
" argsList.add( config.getHostName() );",
" }",
" argsList.add( \"-p\" );",
" argsList.add( Integer.toString(config.getPort() ) );",
"",
" if (config.getSsl() != null) {",
" argsList.add( \"-ssl\" );",
" argsList.add( config.getSsl( ) );",
"",
" String[] retval = new String[ argsList.size() ];",
"",
" argsList.toArray( retval );",
"",
" return retval;"
],
"header": "@@ -262,29 +283,31 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" public static String[] getDefaultStartupArgs()",
" ",
" if (config.getSsl() == null) {",
" return new String[] {",
" \"start\",",
" \"-h\",",
" config.getHostName(),",
" \"-p\",",
" Integer.toString(config.getPort())",
" };",
" } else {",
" return new String[] {",
" \"start\",",
" \"-h\",",
" config.getHostName(),",
" \"-p\",",
" Integer.toString(config.getPort()),",
" \"-ssl\",",
" config.getSsl()",
" };"
]
},
{
"added": [
" } catch (Throwable e) {",
" if ( !vetPing( e ) )",
" {",
" e.printStackTrace( System.out );",
"",
" // at this point, we don't have the usual \"server not up",
" // yet\" error. get out. at this point, you may have to",
" // manually kill the server.",
"",
" return false;",
" }"
],
"header": "@@ -364,7 +387,17 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" } catch (Exception e) {"
]
},
{
"added": [
" } catch (Throwable t) {",
" // something unfortunate happened",
" t.printStackTrace( System.out );",
" return false;",
" // return false if ping returns an error other than \"server not up yet\"",
" private static boolean vetPing( Throwable t )",
" {",
" if ( !t.getClass().getName().equals( \"java.lang.Exception\" ) ) { return false; }",
" ",
" return ( t.getMessage().startsWith( \"DRDA_NoIO.S:Could not connect to Derby Network Server\" ) );",
" }",
" ",
" // return true if this is a drda error",
" private static boolean isDRDAerror( Throwable t )",
" {",
" if ( !t.getClass().getName().equals( \"java.lang.Exception\" ) ) { return false; }",
" ",
" return ( t.getMessage().startsWith( \"DRDA\" ) );",
" }",
" "
],
"header": "@@ -381,11 +414,31 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": []
}
]
}
] |
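Besides the wildcard handling, the test-setup changes above introduce `vetPing`, which lets the ping loop keep retrying only on the expected "server not up yet" failure and bail out on anything else. A stripped-down sketch of that filter, mirroring the checks in the diff (the class name is hypothetical):

```java
public class PingFilter {
    /**
     * Return true only for the benign "server not up yet" ping failure:
     * a plain java.lang.Exception (not a subclass) whose message starts
     * with the DRDA could-not-connect prefix, as checked in the diff above.
     */
    public static boolean isServerNotUpYet(Throwable t) {
        if (!t.getClass().getName().equals("java.lang.Exception")) {
            return false; // subclasses and unrelated throwables are real errors
        }
        String msg = t.getMessage();
        return msg != null
            && msg.startsWith("DRDA_NoIO.S:Could not connect to Derby Network Server");
    }
}
```

The exact-class check matters: a `RuntimeException` carrying the same message text would still be treated as a hard failure rather than a retryable one.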
derby-DERBY-2818-6b5cc244
|
DERBY-2818: Rewrite ClobUpdateableReader constructors. Preparation for DERBY-2806.
Patch file: derby-2818-1a.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547296 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUpdateableReader.java",
"hunks": [
{
"added": [
" private InputStream stream = null;"
],
"header": "@@ -48,7 +48,7 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" private InputStream stream;"
]
},
{
"added": [
"",
" init (stream, 0);"
],
"header": "@@ -73,10 +73,11 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" init (stream, 0);"
]
},
{
"added": [
" // A subset of the Clob has not been requested.",
" // Hence set maxPos to -1.",
"",
" InternalClob internalClob = clob.getInternalClob();",
" materialized = internalClob.isWritable(); ",
" if (materialized) {",
" long byteLength = internalClob.getByteLength();",
" this.stream = internalClob.getRawByteStream();",
" init ((LOBInputStream)stream, 0);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(internalClob instanceof StoreStreamClob,",
" \"Wrong type of internal clob representation: \" +",
" internalClob.toString());",
" }",
" // Since this representation is read-only, the stream never has to",
" // update itself, until the Clob representation itself has been",
" // changed. That even will be detected by {@link #updateIfRequired}.",
" this.streamReader = internalClob.getReader(1L);",
" this.pos = 0L;",
" }"
],
"header": "@@ -86,18 +87,30 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" materialized = clob.isWritable(); ",
" //getting bytelength make some time leave exisitng streams",
" //unusable",
" long byteLength = clob.getByteLength();",
" this.stream = clob.getInternalStream ();",
" init (0, byteLength);",
" //The subset of the Clob",
" //has not been requested.",
" //Hence set maxPos to -1."
]
},
{
"added": [
" this.maxPos = pos + len;",
"",
" InternalClob internalClob = clob.getInternalClob();",
" materialized = internalClob.isWritable(); ",
" if (materialized) {",
" long byteLength = internalClob.getByteLength();",
" this.stream = internalClob.getRawByteStream();",
" // Position the stream on pos using the init method.",
" init ((LOBInputStream)stream, pos);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(internalClob instanceof StoreStreamClob,",
" \"Wrong type of internal clob representation: \" +",
" internalClob.toString());",
" }",
" // Since this representation is read-only, the stream never has to",
" // update itself, until the Clob representation itself has been",
" // changed. That even will be detected by {@link #updateIfRequired}.",
" this.streamReader = internalClob.getReader(1L);",
" this.pos = 0L;",
" }"
],
"header": "@@ -115,22 +128,29 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" materialized = clob.isWritable(); ",
" //Get the Byte length from the Clob which can be ",
" //passes to the init method.",
" long byteLength = clob.getByteLength();",
" //Initialize the InputStream with the underlying ",
" //InputStream of the Clob.",
" this.stream = clob.getInternalStream ();",
" //position the stream on pos using the init method.",
" init (pos, byteLength);",
" //The length requested cannot exceed the length",
" //of the underlying Clob object. Hence chose the",
" //minimum of the length of the underlying Clob",
" //object and requested length.",
" maxPos = Math.min(clob.length(), pos + len);"
]
},
{
"added": [],
"header": "@@ -204,18 +224,6 @@ final class ClobUpdateableReader extends Reader {",
"removed": [
" private void init (long skip, long streamLength) throws IOException {",
" streamReader = new UTF8Reader (stream, 0, streamLength,",
" conChild, ",
" conChild.getConnectionSynchronization());",
" long remainToSkip = skip;",
" while (remainToSkip > 0) {",
" long skipBy = streamReader.skip(remainToSkip);",
" remainToSkip -= skipBy;",
" }",
" pos = skip;",
" }",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java",
"hunks": [
{
"added": [
" * Returns the current internal Clob representation.",
" * <p>",
" * Care should be taken, as the representation can change when the user",
" * performs operations on the Clob. An example is if the Clob content is",
" * served from a store stream and the user updates the content. The",
" * internal representation will then be changed to a temporary Clob copy",
" * that allows updates.",
" *",
" * @return The current internal Clob representation.",
" InternalClob getInternalClob() {",
" return this.clob;"
],
"header": "@@ -771,30 +771,17 @@ restartScan:",
"removed": [
" * Returns if the internal clob is a writable clob.",
" * @return true if internal clob is writable",
" */",
" boolean isWritable() {",
" return clob.isWritable();",
" }",
"",
" /**",
" * Returns the internal InputStream associated with this clob.",
" * @return internal InputStream",
" * @throws IOException",
" */",
" InputStream getInternalStream () ",
" throws IOException, SQLException {",
" return clob.getRawByteStream();",
" }",
"",
" /**",
" * Returns byte length of the clob",
" * @return byte length of the clob",
" * @throws IOException",
" * @throws SQLException",
" long getByteLength() throws IOException, SQLException {",
" return clob.getByteLength();"
]
}
]
}
] |
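The rewritten constructors above pick an access path based on whether the internal clob representation is writable: a materialized clob is read through its raw byte stream, while a read-only store-stream clob gets a plain reader positioned at character 1. A stripped-down sketch of that dispatch (the interface and return values are simplified stand-ins, not Derby's real `InternalClob` API):

```java
public class ClobReaderInit {
    /** Simplified stand-in for Derby's InternalClob abstraction. */
    public interface InternalClob {
        boolean isWritable();
    }

    /**
     * Mirror the constructor's branching: writable (materialized) clobs
     * read through the raw byte stream; read-only representations never
     * change underneath us, so a reader at character 1 is sufficient.
     */
    public static String chooseAccessPath(InternalClob clob) {
        return clob.isWritable() ? "raw-byte-stream" : "reader@1";
    }
}
```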
derby-DERBY-2822-6f9cd955
|
DERBY-2822: Add caching of store stream length in StoreStreamClob, if appropriate.
StoreStreamClob is a read-only representation of a Clob, and will now cache the character length after it has been obtained the first time. Getting the length initially is still expensive, as the UTF-8 data stream has to be decoded.
Patch file: derby-2822-1b-cacheCharLength.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@708912 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java",
"hunks": [
{
"added": [
" /**",
" * The cached length of the store stream in number of characters.",
" * A value of {@code 0} means the length is unknown, and zero is an invalid",
" * length for a store stream Clob. It is set to zero because that is the",
" * value encoded as length in the store stream (on disk format) when the",
" * length is unknown or cannot be represented.",
" */",
" private long cachedCharLength = 0;"
],
"header": "@@ -65,6 +65,14 @@ final class StoreStreamClob",
"removed": []
},
{
"added": [
" if (this.cachedCharLength == 0) {",
" // Decode the stream to find the length.",
" synchronized (this.synchronizationObject) {",
" this.conChild.setupContextStack();",
" try {",
" this.cachedCharLength = UTF8Util.skipUntilEOF(",
" new BufferedInputStream(getRawByteStream()));",
" } catch (Throwable t) {",
" throw noStateChangeLOB(t);",
" } finally {",
" this.conChild.restoreContextStack();",
" }",
" return this.cachedCharLength;"
],
"header": "@@ -134,17 +142,21 @@ final class StoreStreamClob",
"removed": [
" synchronized (this.synchronizationObject) {",
" this.conChild.setupContextStack();",
" try {",
" return UTF8Util.skipUntilEOF(",
" new BufferedInputStream(getRawByteStream()));",
" } catch (Throwable t) {",
" throw noStateChangeLOB(t);",
" } finally {",
" this.conChild.restoreContextStack();"
]
}
]
}
] |
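The caching pattern above decodes the stream once and then serves the remembered value, using `0` as the "unknown" sentinel because zero is never a valid length for a store-stream clob. A minimal sketch of the same lazy-cache idea, with a plain `String` standing in for the expensive UTF-8 decode (names here are illustrative, not Derby's):

```java
public class CachedLength {
    /**
     * Cached character count; 0 means "unknown", matching the on-disk
     * length encoding described in the patch comments (0 is never a
     * valid length for a store stream clob).
     */
    private long cachedCharLength = 0;
    private final String content;   // stand-in for the store stream
    private int decodeCalls = 0;    // lets the sketch verify the cache works

    public CachedLength(String content) { this.content = content; }

    /** Decode only on the first call; afterwards serve the cached value. */
    public long length() {
        if (cachedCharLength == 0) {
            decodeCalls++;                       // expensive decode happens here
            cachedCharLength = content.length(); // stand-in for skipUntilEOF
        }
        return cachedCharLength;
    }

    public int decodeCalls() { return decodeCalls; }
}
```

The caching is safe precisely because `StoreStreamClob` is read-only; a writable representation would have to invalidate the cache on every modification.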
derby-DERBY-2824-2dce4aba
|
DERBY-2824: Improved error reporting when encountering invalid encodings and other error situations.
Patch file: derby-2824-3b-improved_error_reporting.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@568086 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java",
"hunks": [
{
"added": [
" private static final String READER_CLOSED = \"Reader closed\";"
],
"header": "@@ -37,6 +37,7 @@ import org.apache.derby.iapi.types.Resetable;",
"removed": []
},
{
"added": [
" private final long maxFieldSize; // characters"
],
"header": "@@ -46,7 +47,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" private long maxFieldSize; // characeters"
]
},
{
"added": [
" throw new IOException(READER_CLOSED);"
],
"header": "@@ -149,7 +150,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" throw new IOException();"
]
},
{
"added": [
" throw new IOException(READER_CLOSED);"
],
"header": "@@ -167,7 +168,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" throw new IOException();"
]
},
{
"added": [
" throw new IOException(READER_CLOSED);"
],
"header": "@@ -196,7 +197,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" throw new IOException();"
]
},
{
"added": [],
"header": "@@ -306,12 +307,6 @@ public final class UTF8Reader extends Reader",
"removed": [
" private IOException utfFormatException() {",
" noMoreReads = true;",
" closeIn();",
" return new UTFDataFormatException();",
" }",
""
]
},
{
"added": [
" throw utfFormatException(\"Reached EOF prematurely, \" +",
" \"read \" + utfCount + \" out of \" + utfLen + \" bytes\");"
],
"header": "@@ -348,7 +343,8 @@ readChars:",
"removed": [
" throw utfFormatException();"
]
},
{
"added": [
" throw utfFormatException(\"Reached EOF when reading \" +",
" \"second byte in a two byte character encoding; \" +",
" \"byte/char position \" + utfCount + \"/\" +",
" readerCharCount);",
" throw utfFormatException(\"Second byte in a two byte\" +",
" \"character encoding invalid: (int)\" + char2 +",
" \", byte/char pos \" + utfCount + \"/\" +",
" readerCharCount);"
],
"header": "@@ -365,10 +361,16 @@ readChars:",
"removed": [
" throw utfFormatException();",
" throw utfFormatException();"
]
},
{
"added": [
" throw utfFormatException(\"Reached EOF when reading \" +",
" \"second/third byte in a three byte character \" +",
" \"encoding; byte/char position \" + utfCount + \"/\" +",
" readerCharCount);"
],
"header": "@@ -380,7 +382,10 @@ readChars:",
"removed": [
" throw utfFormatException();"
]
},
{
"added": [
" throw utfFormatException(\"Internal error: Derby-\" +",
" \"specific EOF marker read\");",
" throw utfFormatException(\"Second/third byte in a \" +",
" \"three byte character encoding invalid: (int)\" +",
" char2 + \"/\" + char3 + \", byte/char pos \" +",
" utfCount + \"/\" + readerCharCount);"
],
"header": "@@ -391,11 +396,15 @@ readChars:",
"removed": [
" throw utfFormatException();",
" throw utfFormatException();"
]
},
{
"added": [
" throw utfFormatException(\"Invalid UTF encoding at \" +",
" \"byte/char position \" + utfCount + \"/\" +",
" readerCharCount + \": (int)\" + c);",
" throw utfFormatException(\"Incorrect encoded length in stream, \" +",
" \"expected \" + utfLen + \", have \" + utfCount + \" bytes\");"
],
"header": "@@ -405,14 +414,17 @@ readChars:",
"removed": [
" throw utfFormatException();",
" throw utfFormatException(\"utfCount \" + utfCount + \" utfLen \" + utfLen);"
]
},
{
"added": [
" throw new EOFException(\"Reached EOF when reading\" +",
" \"encoded length bytes\");"
],
"header": "@@ -441,7 +453,8 @@ readChars:",
"removed": [
" throw new EOFException();"
]
}
]
}
] |
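The error-reporting changes above replace bare `UTFDataFormatException`s with messages carrying the offending byte value and the byte/character position, in the same modified-UTF-8 decoding that `UTF8Reader` performs. A small self-contained sketch of validating a two-byte sequence (`110xxxxx 10xxxxxx`) with that style of message (the helper itself is hypothetical, not Derby code):

```java
import java.io.UTFDataFormatException;

public class Utf8Check {
    /**
     * Decode a two-byte modified-UTF-8 sequence, reporting byte/char
     * positions on error in the same style the patch introduces.
     */
    public static char decodeTwoByte(int b1, int b2, long bytePos, long charPos)
            throws UTFDataFormatException {
        // A valid continuation byte has the bit pattern 10xxxxxx.
        if ((b2 & 0xC0) != 0x80) {
            throw new UTFDataFormatException(
                "Second byte in a two byte character encoding invalid: (int)"
                + b2 + ", byte/char pos " + bytePos + "/" + charPos);
        }
        // Combine the 5 payload bits of b1 with the 6 payload bits of b2.
        return (char) (((b1 & 0x1F) << 6) | (b2 & 0x3F));
    }
}
```

Messages like these turn an opaque decode failure into one that pinpoints where in the stream the corruption sits, which is the whole point of the patch.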
derby-DERBY-2824-48152c29
|
DERBY-2824: Improve error reporting, fix whitespace/formatting issues and replace tabs in UTF8Reader. This commit fixes and improves the JavaDoc.
Patch file: derby-2824-4a-javadoc.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@581934 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java",
"hunks": [
{
"added": [
" * Class for reading characters from streams encoded in the modified UTF-8",
" * format.",
" * <p>",
" * Note that we often operate on a special Derby stream.",
" * A Derby stream is possibly different from a \"normal\" stream in two ways;",
" * an encoded length is inserted at the head of the stream, and if the encoded",
" * length is <code>0</code> a Derby-specific end of stream marker is appended",
" * to the data.",
" * <p>",
" * If the underlying stream is capable of repositioning itself on request,",
" * this class supports multiple readers on the same source stream in such a way",
" * that the various readers do not interfere with each other (except for",
" * serializing access). Each reader instance will have its own pointer into the",
" * stream, and request that the stream repositions itself before calling",
" * read/skip on the stream.",
" *",
" * @see PositionedStoreStream",
" */",
" /** The underlying data stream. */",
" /** Store stream that can reposition itself on request. */",
" /**",
" * The expected number of bytes in the stream, if known.",
" * <p>",
" * A value of <code>0<code> means the length is unknown, and that the end",
" * of the stream is marked with a Derby-specific end of stream marker.",
" */",
" /** Number of bytes read from the stream. */",
" /** Number of characters read from the stream. */",
" /** ",
" * The maximum number of characters allowed for the column",
" * represented by the passed stream.",
" * <p>",
" * A value of <code>0</code> means there is no associated maximum length.",
" */",
" /** Internal character buffer storing characters read from the stream. */",
" private final char[] buffer = new char[8 * 1024];",
" /** The number of characters in the internal buffer. */",
" /** The position of the next character to read in the internal buffer. */",
" /** Tells if this reader has been closed. */",
" /** ",
" * A reference to the parent object of the stream.",
" * <p>",
" * The reference is kept so that the parent object can't get",
" * garbage collected until we are done with the stream.",
" */",
" /**",
" * Constructs a reader and consumes the encoded length bytes from the",
" * stream.",
" * <p>",
" * The encoded length bytes either state the number of bytes in the stream,",
" * or it is <code>0</code> which informs us the length is unknown or could",
" * not be represented and that we have to look for the Derby-specific",
" * end of stream marker.",
" * ",
" * @param in the underlying stream",
" * @param maxFieldSize the maximum allowed column length in characters",
" * @param parent the parent object / connection child",
" * @param synchronization synchronization object used when accessing the",
" * underlying data stream",
" * ",
" * @throws IOException if reading from the underlying stream fails",
" * @throws SQLException if setting up or restoring the context stack fails",
" */"
],
"header": "@@ -34,31 +34,89 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"*/",
" /** Stream store that can reposition itself on request. */",
" private char[] buffer = new char[8 * 1024];",
" // maintain a reference to the parent object so that it can't get",
" // garbage collected until we are done with the stream."
]
},
{
"added": [
" // stream is passed in after a ResultSet.getXXXStream method."
],
"header": "@@ -96,7 +154,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" // stream is passed in after a ResetSet.getXXXStream method."
]
},
{
"added": [
" Object synchronization) {"
],
"header": "@@ -123,8 +181,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" Object synchronization)",
" throws IOException {"
]
},
{
"added": [
" * Reader implemention.",
" */",
"",
" /**",
" * Reads a single character from the stream.",
" * ",
" * @return A character or <code>-1</code> if end of stream has been reached.",
" * @throws IOException if the stream has been closed, or an exception is",
" * raised while reading from the underlying stream",
" */"
],
"header": "@@ -142,8 +199,16 @@ public final class UTF8Reader extends Reader",
"removed": [
" ** Reader implemention.",
" */"
]
},
{
"added": [
" /**",
" * Reads characters into an array.",
" * ",
" * @return The number of characters read, or <code>-1</code> if the end of",
" * the stream has been reached.",
" */ "
],
"header": "@@ -163,6 +228,12 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [
" /**",
" * Skips characters.",
" * ",
" * @param len the numbers of characters to skip",
" * @return The number of characters actually skipped.",
" * @throws IllegalArgumentException if the number of characters to skip is",
" * negative",
" * @throws IOException if accessing the underlying stream fails",
" */"
],
"header": "@@ -189,6 +260,15 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [
" /**",
" * Close the reader, disallowing further reads.",
" */",
" parent = null;",
" * Methods just for Derby's JDBC driver",
" */",
"",
" /**",
" * Reads characters from the stream.",
" * <p>",
" * Due to internal buffering a smaller number of characters than what is",
" * requested might be returned. To ensure that the request is fulfilled,",
" * call this method in a loop until the requested number of characters is",
" * read or <code>-1</code> is returned.",
" * ",
" * @param sb the destination buffer",
" * @param len maximum number of characters to read",
" * @return The number of characters read, or <code>-1</code> if the end of",
" * the stream is reached.",
" */"
],
"header": "@@ -219,18 +299,35 @@ public final class UTF8Reader extends Reader",
"removed": [
" parent = null;",
" ** Methods just for Derby's JDBC driver",
" */"
]
},
{
"added": [
" /**",
" * Reads characters into an array as ASCII characters.",
" * <p>",
" * Due to internal buffering a smaller number of characters than what is",
" * requested might be returned. To ensure that the request is fulfilled,",
" * call this method in a loop until the requested number of characters is",
" * read or <code>-1</code> is returned.",
" * <p>",
" * Characters outside the ASCII range are replaced with an out of range",
" * marker.",
" * ",
" * @param abuf the buffer to read into",
" * @param off the offset into the destination buffer",
" * @param len maximum number of characters to read",
" * @return The number of characters read, or <code>-1</code> if the end of",
" * the stream is reached.",
" */"
],
"header": "@@ -253,6 +350,23 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [
" * internal implementation",
" */",
" /**",
" * Close the underlying stream if it is open.",
" */",
" // Ignore exceptions thrown on close.",
" // [TODO] Maybe we should log these?",
" /**",
" * Convenience method generating an {@link UTFDataFormatException} and",
" * cleaning up the reader state.",
" */"
],
"header": "@@ -287,20 +401,29 @@ public final class UTF8Reader extends Reader",
"removed": [
" ** internal implementation",
" */"
]
},
{
"added": [
" * Fills the internal character buffer by decoding bytes from the stream.",
" * ",
" * @return <code>true</code> if the end of the stream is reached,",
" * <code>false</code> if there is apparently more data to be read.",
" */"
],
"header": "@@ -308,8 +431,11 @@ public final class UTF8Reader extends Reader",
"removed": [
" Fill the buffer, return true if eof has been reached.",
" */"
]
},
{
"added": [
" /**",
" * Decode the length encoded in the stream.",
" * ",
" * This method came from {@link java.io.DataInputStream}",
" * ",
" * @return The number of bytes in the stream, or <code>0</code> if the",
" * length is unknown and the end of stream must be marked by the",
" * Derby-specific end of stream marker.",
" */"
],
"header": "@@ -448,7 +574,15 @@ readChars:",
"removed": [
" // this method came from java.io.DataInputStream"
]
}
]
}
] |
derby-DERBY-2824-fe27d9d4
|
DERBY-2824: Improve error reporting, fix whitespace/formatting issues and replace tabs in UTF8Reader. Whitespace changes only.
Patch file: derby-2824-2a-whitespace_changes.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547694 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java",
"hunks": [
{
"added": [
" private final long utfLen; // bytes",
" private long utfCount; // bytes",
" private long readerCharCount; // characters",
" private long maxFieldSize; // characeters",
" private char[] buffer = new char[8 * 1024];",
" private int charactersInBuffer; // within buffer",
" private boolean noMoreReads;",
" // maintain a reference to the parent object so that it can't get",
" private ConnectionChild parent;",
" InputStream in,",
" long maxFieldSize,",
" ConnectionChild parent,",
" Object synchronization)",
" throws IOException, SQLException"
],
"header": "@@ -43,27 +43,27 @@ public final class UTF8Reader extends Reader",
"removed": [
" private final long utfLen; // bytes",
" private long utfCount; // bytes",
" private long readerCharCount; // characters",
" private long maxFieldSize; // characeters",
" private char[] buffer = new char[8 * 1024];",
" private int charactersInBuffer; // within buffer",
" private boolean noMoreReads;",
" // maintain a reference to the parent object so that it can't get ",
" private ConnectionChild parent;",
" InputStream in,",
" long maxFieldSize,",
" ConnectionChild parent,",
" Object synchronization) ",
" throws IOException, SQLException"
]
},
{
"added": [
" InputStream in,",
" long maxFieldSize,",
" long streamSize,",
" ConnectionChild parent,",
" Object synchronization)"
],
"header": "@@ -118,11 +118,11 @@ public final class UTF8Reader extends Reader",
"removed": [
" InputStream in,",
" long maxFieldSize,",
" long streamSize,",
" ConnectionChild parent,",
" Object synchronization)"
]
},
{
"added": [
" \"Number of characters to skip must be positive: \" + len);"
],
"header": "@@ -191,7 +191,7 @@ public final class UTF8Reader extends Reader",
"removed": [
" \"Number of characters to skip must be positive:\" + len);"
]
},
{
"added": [],
"header": "@@ -230,7 +230,6 @@ public final class UTF8Reader extends Reader",
"removed": [
""
]
},
{
"added": [
""
],
"header": "@@ -252,6 +251,7 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [],
"header": "@@ -289,7 +289,6 @@ public final class UTF8Reader extends Reader",
"removed": [
""
]
},
{
"added": [
""
],
"header": "@@ -300,6 +299,7 @@ public final class UTF8Reader extends Reader",
"removed": []
},
{
"added": [],
"header": "@@ -325,7 +325,6 @@ public final class UTF8Reader extends Reader",
"removed": [
" "
]
},
{
"added": [
" switch (c >> 4) {"
],
"header": "@@ -353,7 +352,7 @@ readChars:",
"removed": [
" switch (c >> 4) { "
]
},
{
"added": [
" throw utfFormatException();"
],
"header": "@@ -369,7 +368,7 @@ readChars:",
"removed": [
" throw utfFormatException(); "
]
},
{
"added": [
" throw utfFormatException();"
],
"header": "@@ -396,7 +395,7 @@ readChars:",
"removed": [
" throw utfFormatException(); "
]
},
{
"added": [
" throw utfFormatException();",
" if (utfLen != 0 && utfCount > utfLen)",
" throw utfFormatException(\"utfCount \" + utfCount + \" utfLen \" + utfLen);"
],
"header": "@@ -406,14 +405,14 @@ readChars:",
"removed": [
" throw utfFormatException(); ",
" if (utfLen != 0 && utfCount > utfLen) ",
" throw utfFormatException(\"utfCount \" + utfCount + \" utfLen \" + utfLen); "
]
},
{
"added": [],
"header": "@@ -437,7 +436,6 @@ readChars:",
"removed": [
""
]
}
]
}
] |
derby-DERBY-2827-b0c495f8
|
DERBY-2827: Rename ClobStreamControl to TemporaryClob. Updated all affected classes.
Patch file: derby-2827-1a.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547657 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUtf8Writer.java",
"hunks": [
{
"added": [
" private TemporaryClob control; "
],
"header": "@@ -35,7 +35,7 @@ import org.apache.derby.iapi.services.i18n.MessageService;",
"removed": [
" private ClobStreamControl control; "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java",
"hunks": [
{
"added": [
" this.clob = new TemporaryClob (con.getDBName(), this);"
],
"header": "@@ -87,7 +87,7 @@ final class EmbedClob extends ConnectionChild implements Clob",
"removed": [
" this.clob = new ClobStreamControl (con.getDBName(), this);"
]
},
{
"added": [
" clob = new TemporaryClob(con.getDBName(),"
],
"header": "@@ -116,7 +116,7 @@ final class EmbedClob extends ConnectionChild implements Clob",
"removed": [
" clob = new ClobStreamControl(con.getDBName(),"
]
},
{
"added": [
" this.clob = TemporaryClob.cloneClobContent("
],
"header": "@@ -743,7 +743,7 @@ restartScan:",
"removed": [
" this.clob = ClobStreamControl.cloneClobContent("
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/TemporaryClob.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.jdbc.TemporaryClob"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.jdbc.ClobStreamControl"
]
},
{
"added": [
"final class TemporaryClob implements InternalClob {"
],
"header": "@@ -38,7 +38,7 @@ import org.apache.derby.iapi.util.UTF8Util;",
"removed": [
"final class ClobStreamControl implements InternalClob {"
]
},
{
"added": [
" TemporaryClob newClob = new TemporaryClob(dbName, conChild);"
],
"header": "@@ -73,7 +73,7 @@ final class ClobStreamControl implements InternalClob {",
"removed": [
" ClobStreamControl newClob = new ClobStreamControl(dbName, conChild);"
]
},
{
"added": [
" TemporaryClob newClob = new TemporaryClob(dbName, conChild);",
" * Constructs a <code>TemporaryClob</code> object used to perform",
" * ",
" TemporaryClob (String dbName, ConnectionChild conChild) {"
],
"header": "@@ -94,21 +94,21 @@ final class ClobStreamControl implements InternalClob {",
"removed": [
" ClobStreamControl newClob = new ClobStreamControl(dbName, conChild);",
" * Constructs a <code>ClobStreamControl</code> object used to perform",
" *",
" ClobStreamControl (String dbName, ConnectionChild conChild) {"
]
}
]
}
] |
derby-DERBY-2830-a535ea90
|
DERBY-2830: Rename UpdateableBlobStream to UpdatableBlobStream. Also renames the corresponding test class and all affected classes.
Patch file: derby-2830-1b.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@547715 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedBlob.java",
"hunks": [
{
"added": [
" return new UpdatableBlobStream (this, "
],
"header": "@@ -458,7 +458,7 @@ final class EmbedBlob extends ConnectionChild implements Blob",
"removed": [
" return new UpdateableBlobStream (this, "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/UpdatableBlobStream.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.jdbc.UpdatableBlobStream"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.jdbc.UpdateableBlobStream"
]
},
{
"added": [
" * Updatable blob stream is a wrapper stream over dvd stream",
"class UpdatableBlobStream extends InputStream {"
],
"header": "@@ -30,13 +30,13 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": [
" * Updateable blob stream is a wrapper stream over dvd stream",
"class UpdateableBlobStream extends InputStream {"
]
},
{
"added": [
" * Constructs UpdatableBlobStream using the the InputStream receives as the",
" * ",
" UpdatableBlobStream (EmbedBlob blob, InputStream is) {"
],
"header": "@@ -55,12 +55,13 @@ class UpdateableBlobStream extends InputStream {",
"removed": [
" * Constructs UpdateableBlobStream using the the InputStream receives as the",
" UpdateableBlobStream (EmbedBlob blob, InputStream is) {"
]
},
{
"added": [
" * Construct an <code>UpdatableBlobStream<code> using the ",
" * "
],
"header": "@@ -71,11 +72,11 @@ class UpdateableBlobStream extends InputStream {",
"removed": [
" * Construct an <code>UpdateableBlobStream<code> using the ",
" *"
]
}
]
}
] |
derby-DERBY-2831-62299081
|
DERBY-2831
Forgot to enter a test case during the 551793 checkin. This commit just creates a function in a non-existent database to ensure
we don't throw a null pointer exception.
In addition, this commit also makes sure that we associate the correct collation information with the character string parameters to procedures when the procedure gets created in CreateAliasNode. We already do this for functions, but similar work needs to happen for procedures.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@551942 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CreateAliasNode.java",
"hunks": [
{
"added": [
"\t * or procedure in it's init method, which is called by the parser. But at ",
"\t * that time, we do not have the SchemaDescriptor ready to determine the ",
"\t * collation type. Hence, at the bind time, when we do have the ",
"\t * SchemaDescriptor available, we should go back and fix the ",
"\t * RoutineAliasInfo to have correct collation for its character string ",
"\t * parameters and also fix its return type (for functions) so as to have ",
"\t * correct collation if it is returning character string type. "
],
"header": "@@ -222,12 +222,13 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t * in it's init method, which is called by the parser. But at that time, we",
"\t * do not have the SchemaDescriptor ready to determine the collation",
"\t * type. Hence, at the bind time, when we do have the SchemaDescriptor",
"\t * available, we should go back and fix the RoutineAliasIno to have correct",
"\t * collation for it's character string parameters and also fix it's return",
"\t * type's collation if the return type is a character string."
]
},
{
"added": [
"\t\tTypeId compTypeId;",
"\t\tif (aType != null) //that means we are not dealing with a procedure",
"\t\t{",
"\t\t\tcompTypeId = TypeId.getBuiltInTypeId(aType.getTypeName());",
"\t\t\tif (compTypeId != null && compTypeId.isStringTypeId()) ",
"\t\t\t\treturn true;\t\t\t",
"\t\t}"
],
"header": "@@ -239,13 +240,17 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t\tTypeId compTypeId = TypeId.getBuiltInTypeId(aType.getTypeName());",
"\t\tif (compTypeId != null && compTypeId.isStringTypeId()) ",
"\t\t\treturn true;"
]
},
{
"added": [
"\t * associated with the definition of the user defined function/procedure ",
"\t * should take the collation of the schema in which this user defined ",
"\t * function is getting created.",
"\t * the function/procedure is getting created.",
"\t\t//We could have been called for the return type but for procedures ",
"\t\t//there is no return type and hence we should be careful that we",
"\t\t//don't run into null ptr exception. So before doing anything, check if",
"\t\t//the passed parameter is null and if so, then simply return.",
"\t\tif (changeTD == null) ",
"\t\t\treturn changeTD;"
],
"header": "@@ -264,17 +269,23 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t * associated with the definition of the user defined function should take ",
"\t * the collation of the schema in which this user defined function is ",
"\t * getting created.",
"\t * the function is getting created."
]
},
{
"added": [
"\t\t//Are we dealing with user defined function or procedure?",
"\t\tif (aliasType == AliasInfo.ALIAS_TYPE_FUNCTION_AS_CHAR ||",
"\t\t\t\taliasType == AliasInfo.ALIAS_TYPE_PROCEDURE_AS_CHAR) {",
"\t\t\t//Does the user defined function/procedure have any character ",
"\t\t\t//string types in it's definition"
],
"header": "@@ -307,10 +318,11 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t\t//Are we dealing with user defined function?",
"\t\tif (aliasType == AliasInfo.ALIAS_TYPE_FUNCTION_AS_CHAR) {",
"\t\t\t//Does the user defined function have any character string types",
"\t\t\t//in it's definition"
]
}
]
}
] |
derby-DERBY-285-2abb2b2c
|
Fix for DERBY-285 Network Client should not print non-ascii token separators in message when it cannot connect to the server to retrieve the error message
This patch changes the network server to send the complete message in its own locale, instead of just the message arguments and delimiters, if the exception is severe enough to terminate the connection. This will eliminate the special characters and provide a more readable message.
I also cleaned up the code a bit in this area. I removed the extra logging for the severe exceptions, as these will already be logged as part of the normal Derby error logging.
git-svn-id: https://svn.apache.org/repos/asf/incubator/derby/code/trunk@180107 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [],
"header": "@@ -5086,7 +5086,6 @@ public class DRDAConnThread extends Thread {",
"removed": [
"\t\tString severeExceptionInfo = null;"
]
},
{
"added": [
"\t\t\t"
],
"header": "@@ -5100,6 +5099,7 @@ public class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
"\t\t\tsqlerrmc = buildSqlerrmc(e);",
"\t\t\t",
"\t\t"
],
"header": "@@ -5114,80 +5114,10 @@ public class DRDAConnThread extends Thread {",
"removed": [
"\t\t\tif (e instanceof EmbedSQLException)",
"\t\t\t{",
"\t\t\t\tEmbedSQLException ce = (EmbedSQLException) e;",
"\t\t\t\tboolean severeException = (ce.getErrorCode() >= ExceptionSeverity.SESSION_SEVERITY);",
"\t\t\t\t/* we need messageId to retrieve the localized message, just using",
"\t\t\t\t * sqlstate may not be easy, because it is the messageId that we",
"\t\t\t\t * used as key in the property file, for many sqlstates, they are",
"\t\t\t\t * just the first 5 characters of the corresponding messageId.",
"\t\t\t\t * We append messageId as the last element of sqlerrmc. We can't",
"\t\t\t\t * send messageId in the place of sqlstate because jcc expects only",
"\t\t\t\t * 5 chars for sqlstate.",
"\t\t\t\t */",
"\t\t\t\tsqlerrmc = \"\";",
"\t\t\t\tbyte[] sep = {20};\t// use it as separator of sqlerrmc tokens",
"\t\t\t\tbyte[] errSep = {20, 20, 20}; // mark between exceptions",
"\t\t\t\tString separator = new String(sep);",
"\t\t\t\tString errSeparator = new String(errSep); ",
"\t\t\t\tString dbname = null;",
"\t\t\t\tif (database != null)",
"\t\t\t\t\tdbname = database.dbName;",
"\t\t\t\t",
"\t\t\t\tdo {",
"\t\t\t\t\tString messageId = ce.getMessageId();",
"\t\t\t\t\t",
"\t\t\t\t\t// arguments are variable part of a message",
"\t\t\t\t\tObject[] args = ce.getArguments();",
"\t\t\t\t\tfor (int i = 0; args != null && i < args.length; i++)",
"\t\t\t\t\t\tsqlerrmc += args[i] + separator;",
"\t\t\t\t\t",
"\t\t\t\t\t// Severe exceptions need to be logged in the error log",
"\t\t\t\t\t// also log location and non-localized message will be",
"\t\t\t\t\t// returned to client as appended arguments",
"\t\t\t\t\tif (severeException)\t",
"\t\t\t\t\t{",
"\t\t\t\t\t\tif (severeExceptionInfo == null)",
"\t\t\t\t\t\t\tsevereExceptionInfo = \"\";",
"\t\t\t\t\t\tsevereExceptionInfo += ce.getMessage() + separator;",
"\t\t\t\t\t\tprintln2Log(dbname, session.drdaID, ce.getMessage());",
"\t\t\t\t\t}",
"\t\t\t\t\tsqlerrmc += messageId; \t//append MessageId",
"\t\t\t\t",
"\t\t\t\t\te = ce.getNextException();",
"\t\t\t\t\tif (e != null) {",
"\t\t\t\t\t\tif (e instanceof EmbedSQLException) {",
"\t\t\t\t\t\t\tce = (EmbedSQLException)e;",
"\t\t\t\t\t\t\tsqlerrmc += errSeparator + ce.getSQLState() + \":\";",
"\t\t\t\t\t\t}",
"\t\t\t\t\t\telse {",
"\t\t\t\t\t\t\tsqlerrmc += errSeparator + e.getSQLState() + \":\";",
"\t\t\t\t\t\t\tce = null;",
"\t\t\t\t\t\t}",
"\t\t\t\t\t}",
"\t\t\t\t\telse",
"\t\t\t\t\t\tce = null;",
"\t\t\t\t} while (ce != null);",
"\t\t\t\t\t\t",
"\t\t\t\tif (severeExceptionInfo != null)",
"\t\t\t\t{",
"\t\t\t\t\tsevereExceptionInfo += \"(\" + \"server log:\" +",
" Monitor.getStream().getName() + \")\" ;",
"\t\t\t\t\tsqlerrmc += separator + severeExceptionInfo;",
"\t\t\t\t}",
"\t\t\t}",
"\t\t\telse",
"\t\t\t\tsqlerrmc = e.getMessage();",
"",
"\t\t// Truncate the sqlerrmc to a length that the client can support.",
"\t\tint maxlen = (sqlerrmc == null) ? -1 : Math.min(sqlerrmc.length(),",
"\t\t\t\tappRequester.supportedMessageParamLength());",
"\t\tif ((maxlen >= 0) && (sqlerrmc.length() > maxlen))",
"\t\t// have to truncate so the client can handle it.",
"\t\t\tsqlerrmc = sqlerrmc.substring(0, maxlen);",
""
]
}
]
}
] |
derby-DERBY-2861-7ae131ba
|
DERBY-2861 Thread safety issue in TableDescriptor
Patch derby-2861-2, which solves the issue by making session-local state
in TableDescriptor thread-local.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@670286 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/TableDescriptor.java",
"hunks": [
{
"added": [
"import java.util.WeakHashMap;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derby.iapi.sql.dictionary;",
"removed": []
},
{
"added": [
"",
"\t/**",
"\t * referencedColumnMap is thread local (since DERBY-2861)",
"\t *",
"\t * It contains a weak hash map keyed by the the TableDescriptor",
"\t * and the value is the actual referencedColumnMap bitmap. So,",
"\t * each thread has a weak hash map it uses to find the appropriate",
"\t * referencedColumnMap for 'this' TableDescriptor.",
"\t *",
"\t * Since the hash map is weak, when the TableDescriptor is no",
"\t * longer referenced the hash entry can be garbage collected (it",
"\t * is the *key* of a weak hash map that is weak, not the value).",
"\t */",
"\tprivate static ThreadLocal referencedColumnMap = new ThreadLocal() {",
"\t\t\tprotected Object initialValue() {",
"\t\t\t\t// Key: TableDescriptor",
"\t\t\t\t// Value: FormatableBitSet",
"\t\t\t\treturn new WeakHashMap();",
"\t\t\t}",
"\t\t};",
"",
"\tprivate FormatableBitSet referencedColumnMapGet() {",
"\t\tWeakHashMap map = (WeakHashMap)(referencedColumnMap.get());",
"",
"\t\treturn (FormatableBitSet) (map.get(this));",
"\t}",
"",
"\tprivate void referencedColumnMapPut",
"\t\t(FormatableBitSet newReferencedColumnMap) {",
"",
"\t\tWeakHashMap map = (WeakHashMap)(referencedColumnMap.get());",
"\t\tmap.put(this, newReferencedColumnMap);",
"\t}"
],
"header": "@@ -112,7 +113,39 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\tFormatableBitSet\t\t\t\t\t\t\treferencedColumnMap;"
]
},
{
"added": [
"\t\treturn referencedColumnMapGet();"
],
"header": "@@ -363,7 +396,7 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\treturn referencedColumnMap;"
]
},
{
"added": [
"\t\treferencedColumnMapPut(referencedColumnMap);"
],
"header": "@@ -374,7 +407,7 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\tthis.referencedColumnMap = referencedColumnMap;"
]
},
{
"added": [
"\t\tif (referencedColumnMapGet() == null)",
"\t\t\treturn getColumnDependableFinder",
"\t\t\t\t(StoredFormatIds.COLUMN_DESCRIPTOR_FINDER_V01_ID,",
"\t\t\t\t referencedColumnMapGet().getByteArray());"
],
"header": "@@ -749,11 +782,12 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\tif (referencedColumnMap == null) ",
"\t\t\treturn getColumnDependableFinder(StoredFormatIds.COLUMN_DESCRIPTOR_FINDER_V01_ID,",
"\t\t\t\t\t\t\t\t\t\t\t referencedColumnMap.getByteArray());"
]
},
{
"added": [
"\t\tif (referencedColumnMapGet() == null)",
"",
"\t\t\t\tif (referencedColumnMapGet().isSet(cd.getPosition()))"
],
"header": "@@ -763,16 +797,17 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\tif (referencedColumnMap == null)",
"\t\t\t\tif (referencedColumnMap.isSet(cd.getPosition()))"
]
}
]
}
] |
derby-DERBY-2871-b1d6f8e9
|
DERBY-2871 - revert revision 552621, as it causes an intermittent problem.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@553028 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2871-bb9f97a7
|
DERBY-2871: XATransactionTest gets XaException: Error executing a XAResource.commit(), server returned XAER_PROTO.
Patch contributed by Julius Stroffek
Patch file: derby-2871.v2.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@647078 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAXAProtocol.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.jdbc.ResourceAdapter;",
"import org.apache.derby.iapi.services.monitor.Monitor;",
"import org.apache.derby.iapi.store.access.xa.XAXactId;",
"import org.apache.derby.shared.common.reference.MessageId;"
],
"header": "@@ -20,8 +20,12 @@",
"removed": []
},
{
"added": [
" /**",
" * @return The ResourceAdapter instance for",
" * the underlying database.",
" */",
" ResourceAdapter getResourceAdapter()",
" {",
" return ((XADatabase)connThread.getDatabase()).getResourceAdapter();",
" }",
""
],
"header": "@@ -715,6 +719,15 @@ class DRDAXAProtocol {",
"removed": []
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/XADatabase.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.jdbc.BrokeredConnection;",
"import org.apache.derby.iapi.jdbc.ResourceAdapter;"
],
"header": "@@ -34,9 +34,9 @@ import java.util.Enumeration;",
"removed": [
"import org.apache.derby.impl.drda.DRDAXid;",
"import org.apache.derby.iapi.jdbc.BrokeredConnection;"
]
},
{
"added": [
"\tprivate ResourceAdapter ra;"
],
"header": "@@ -51,6 +51,7 @@ class XADatabase extends Database {",
"removed": []
},
{
"added": [
"\t\t\tra = xaDataSource.getResourceAdapter();"
],
"header": "@@ -81,6 +82,7 @@ class XADatabase extends Database {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/ResourceAdapter.java",
"hunks": [
{
"added": [
"import javax.transaction.xa.XAException;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/ResourceAdapterImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -29,7 +29,6 @@ import org.apache.derby.iapi.services.monitor.Monitor;",
"removed": [
"//import org.apache.derby.iapi.jdbc.XATransactionResource;"
]
},
{
"added": [
"import javax.transaction.xa.XAException;"
],
"header": "@@ -40,6 +39,7 @@ import org.apache.derby.iapi.store.access.xa.XAXactId;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/XATransactionState.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.reference.MessageId;"
],
"header": "@@ -37,6 +37,7 @@ import org.apache.derby.iapi.store.access.xa.XAXactId;",
"removed": []
},
{
"added": [
" /** Indicates whether this transaction is supposed to be rolled back by timeout. */",
" boolean performTimeoutRollback;"
],
"header": "@@ -81,9 +82,8 @@ final class XATransactionState extends ContextImpl {",
"removed": [
" /** Has this transaction been finished (committed",
" * or rolled back)? */",
" boolean isFinished;"
]
},
{
"added": [
" XATransactionState.this.cancel(MessageId.CONN_XA_TRANSACTION_TIMED_OUT);",
" } catch (Throwable th) {",
" Monitor.logThrowable(th);"
],
"header": "@@ -100,9 +100,9 @@ final class XATransactionState extends ContextImpl {",
"removed": [
" XATransactionState.this.cancel();",
" } catch (XAException ex) {",
" Monitor.logThrowable(ex);"
]
},
{
"added": [
"\t\tthis.performTimeoutRollback = false; // there is no transaction yet"
],
"header": "@@ -118,8 +118,7 @@ final class XATransactionState extends ContextImpl {",
"removed": [
" this.isFinished = false;",
""
]
},
{
"added": [
" // Mark the transaction to be rolled back bby timeout",
" performTimeoutRollback = true;"
],
"header": "@@ -309,6 +308,8 @@ final class XATransactionState extends ContextImpl {",
"removed": []
},
{
"added": [
" /** This method cancels timeoutTask and assigns",
" * 'performTimeoutRollback = false'.",
" performTimeoutRollback = false;"
],
"header": "@@ -347,14 +348,14 @@ final class XATransactionState extends ContextImpl {",
"removed": [
" /** This method cancels timeoutTask and marks the transaction",
" * as finished by assigning 'isFinished = true'.",
" isFinished = true;"
]
}
]
}
] |
derby-DERBY-2871-d4cbc63f
|
DERBY-2871; modifying timeout in test XATransactionTest.
Patch contributed by Julius Stroffek
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552621 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2878-47efb9bf
|
DERBY-2878 (partial) Scan protection handle could be cached in BasePage
This patch is the first step towards eliminating the allocation of
protection handles. It adds a method to Page/BasePage called
getProtectionRecordHandle(). This method creates a new record handle
if it hasn't been called before and returns a cached value
otherwise. B2IRowLocking3.lockScan() and lockScanForReclaimSpace()
have been modified to use the new method.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@551933 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2IRowLocking3.java",
"hunks": [
{
"added": [
" current_leaf.getPage().getProtectionRecordHandle();"
],
"header": "@@ -665,8 +665,7 @@ class B2IRowLocking3 implements BTreeLockingPolicy",
"removed": [
" current_leaf.getPage().makeRecordHandle(",
" RecordHandle.RECORD_ID_PROTECTION_HANDLE);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BasePage.java",
"hunks": [
{
"added": [
"\t/**",
"\t * A record handle that, when locked, protects all the record ids on the",
"\t * page.",
"\t * @see RecordHandle#RECORD_ID_PROTECTION_HANDLE",
"\t */",
"\tprivate RecordId protectionHandle;",
""
],
"header": "@@ -105,6 +105,13 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": []
},
{
"added": [
"\t\tprotectionHandle = null;"
],
"header": "@@ -219,6 +226,7 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": []
},
{
"added": [
"\t\t\tSanityManager.ASSERT(protectionHandle == null);"
],
"header": "@@ -260,6 +268,7 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": []
},
{
"added": [
"\t\tprotectionHandle = null;"
],
"header": "@@ -283,6 +292,7 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": []
},
{
"added": [
" /**",
" * Get the record id protection handle for the page.",
" *",
" * @return protection handle",
" * @see RecordHandle#RECORD_ID_PROTECTION_HANDLE",
" */",
" public final RecordHandle getProtectionRecordHandle() {",
" // only allocate a new handle the first time the method is called",
" if (protectionHandle == null) {",
" protectionHandle =",
" new RecordId(getPageId(),",
" RecordHandle.RECORD_ID_PROTECTION_HANDLE);",
" }",
"",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(",
" getPageId().equals(protectionHandle.getPageId()),",
" \"PageKey for cached protection handle doesn't match identity\");",
" }",
"",
" return protectionHandle;",
" }",
""
],
"header": "@@ -310,6 +320,29 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": []
}
]
}
] |
derby-DERBY-2878-bbcc9236
|
DERBY-2878: Scan protection handle could be cached in BasePage
Don't allocate a new RecordHandle and PageKey in unlockScan().
1) Replaced the field current_scan_pageno (a long) in
BTreeRowPosition with current_scan_protectionHandle (a
RecordHandle which is cached in BasePage)
2) Changed the signature of BTreeLockingPolicy.unlockScan() to take
a RecordHandle
3) Removed unused method OpenBTree.makeRecordHandle()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557507 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeLockingPolicy.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.store.raw.RecordHandle;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derby.impl.store.access.btree;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeMaxScan.java",
"hunks": [
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle == null);"
],
"header": "@@ -285,7 +285,7 @@ public class BTreeMaxScan extends BTreeScan",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno == 0);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeRowPosition.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.store.raw.RecordHandle;"
],
"header": "@@ -25,6 +25,7 @@ package org.apache.derby.impl.store.access.btree;",
"removed": []
},
{
"added": [
" public RecordHandle current_scan_protectionHandle;"
],
"header": "@@ -45,7 +46,7 @@ public class BTreeRowPosition extends RowPosition",
"removed": [
" public long current_scan_pageno;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java",
"hunks": [
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle == null);"
],
"header": "@@ -348,7 +348,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno == 0);"
]
},
{
"added": [
" pos.current_scan_protectionHandle =",
" pos.current_leaf.page.getProtectionRecordHandle();"
],
"header": "@@ -481,7 +481,8 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" pos.current_scan_pageno = pos.current_leaf.page.getPageNumber();"
]
},
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle == null);"
],
"header": "@@ -514,7 +515,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno == 0);"
]
},
{
"added": [
" pos.current_scan_protectionHandle =",
" pos.current_leaf.page.getProtectionRecordHandle();"
],
"header": "@@ -639,7 +640,8 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" pos.current_scan_pageno = pos.current_leaf.page.getPageNumber();"
]
},
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle != null);"
],
"header": "@@ -668,7 +670,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno != 0);"
]
},
{
"added": [
"\t\t\tif (pos.current_scan_protectionHandle.getPageNumber() !=",
" pos.current_leaf.page.getPageNumber()) {",
" \"pos.current_scan_protectionHandle = \" +",
" pos.current_scan_protectionHandle +",
" }"
],
"header": "@@ -705,10 +707,13 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
"\t\t\tif (pos.current_scan_pageno != pos.current_leaf.page.getPageNumber())",
" \"pos.current_scan_pageno = \" + pos.current_scan_pageno +"
]
},
{
"added": [
" pos.current_scan_protectionHandle =",
" (pos.current_leaf == null) ?",
" null : pos.current_leaf.page.getProtectionRecordHandle();"
],
"header": "@@ -722,8 +727,9 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" pos.current_scan_pageno = ",
" (pos.next_leaf == null) ? 0 : pos.next_leaf.page.getPageNumber();"
]
},
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle != null);"
],
"header": "@@ -1065,7 +1071,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno != 0);"
]
},
{
"added": [
" SanityManager.ASSERT(pos.current_scan_protectionHandle == null);"
],
"header": "@@ -1078,7 +1084,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" SanityManager.ASSERT(pos.current_scan_pageno == 0);"
]
},
{
"added": [
" pos.current_scan_protectionHandle =",
" pos.current_leaf.page.getProtectionRecordHandle();"
],
"header": "@@ -1136,7 +1142,8 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" pos.current_scan_pageno = pos.current_leaf.page.getPageNumber();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/OpenBTree.java",
"hunks": [
{
"added": [],
"header": "@@ -577,16 +577,6 @@ public class OpenBTree",
"removed": [
" public RecordHandle makeRecordHandle(",
" long page_number,",
" int rec_id)",
" throws StandardException",
" {",
" return(",
" container.makeRecordHandle(",
" page_number, rec_id));",
" }",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2INoLocking.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.store.raw.RecordHandle;"
],
"header": "@@ -28,6 +28,7 @@ import org.apache.derby.iapi.types.RowLocation;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2IRowLocking3.java",
"hunks": [
{
"added": [
" public void unlockScan(RecordHandle scan_lock_rh)"
],
"header": "@@ -861,17 +861,12 @@ class B2IRowLocking3 implements BTreeLockingPolicy",
"removed": [
" public void unlockScan(",
" long page_number)",
" RecordHandle scan_lock_rh = ",
" open_btree.makeRecordHandle(",
" page_number, RecordHandle.RECORD_ID_PROTECTION_HANDLE);",
""
]
}
]
}
] |
derby-DERBY-2878-fabdf938
|
DERBY-2878 (partial) Scan protection handle could be cached in BasePage
Factor out common code in BTreeScan for unlocking the current scan.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552677 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java",
"hunks": [
{
"added": [
" unlockCurrentScan(pos);"
],
"header": "@@ -718,8 +718,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" this.getLockingPolicy().unlockScan(",
" pos.current_leaf.page.getPageNumber());"
]
},
{
"added": [
" unlockCurrentScan(pos);"
],
"header": "@@ -804,11 +803,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" if (pos.current_scan_pageno != 0)",
" {",
" this.getLockingPolicy().unlockScan(pos.current_scan_pageno);",
" pos.current_scan_pageno = 0;",
" }"
]
},
{
"added": [
" unlockCurrentScan(pos);"
],
"header": "@@ -842,11 +837,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" if (pos.current_scan_pageno != 0)",
" {",
" this.getLockingPolicy().unlockScan(pos.current_scan_pageno);",
" pos.current_scan_pageno = 0;",
" }"
]
},
{
"added": [
" /**",
" * Unlock the scan protection row for the current scan.",
" *",
" * @param pos position of the scan",
" */",
" private void unlockCurrentScan(BTreeRowPosition pos) {",
" if (pos.current_scan_pageno != 0L) {",
" getLockingPolicy().unlockScan(pos.current_scan_pageno);",
" pos.current_scan_pageno = 0L;",
" }",
" }",
""
],
"header": "@@ -1153,6 +1144,18 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": []
},
{
"added": [
" unlockCurrentScan(scan_position);"
],
"header": "@@ -2120,12 +2123,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
" if (scan_position.current_scan_pageno != 0)",
" {",
" this.getLockingPolicy().unlockScan(",
" scan_position.current_scan_pageno);",
" scan_position.current_scan_pageno = 0;",
" }"
]
},
{
"added": [
" unlockCurrentScan(scan_position);"
],
"header": "@@ -2344,13 +2342,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
"",
" if (scan_position.current_scan_pageno != 0)",
" {",
" this.getLockingPolicy().unlockScan(",
" scan_position.current_scan_pageno);",
" scan_position.current_scan_pageno = 0;",
" }"
]
},
{
"added": [
" unlockCurrentScan(scan_position);"
],
"header": "@@ -2465,13 +2457,7 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager",
"removed": [
"",
" if (scan_position.current_scan_pageno != 0)",
" {",
" this.getLockingPolicy().unlockScan(",
" scan_position.current_scan_pageno);",
" scan_position.current_scan_pageno = 0;",
" }"
]
}
]
}
] |
derby-DERBY-2879-a9dbccec
|
DERBY-2879
Derby currently requires all the character columns in a table to have the same collation as the collation of the schema in
which the table is getting defined. In order to implement this behavior, CreateTableNode's bind method will check the
collation of its character columns against the schema's collation and, if there is a mismatch, an exception will be thrown.
In addition, this patch does minor cleanup. The string constants for the 2 collation types in Derby were declared in both
StringDataValue.java and Property.java. I have removed them from StringDataValue so we don't have duplicate constants.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552531 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/types/TypeDescriptorImpl.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.Property;"
],
"header": "@@ -28,6 +28,7 @@ import org.apache.derby.catalog.TypeDescriptor;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CreateTableNode.java",
"hunks": [
{
"added": [
"\t\t\tSchemaDescriptor sd = getSchemaDescriptor();",
"\t\t\tint schemaCollationType = sd.getCollationType();",
"\t "
],
"header": "@@ -290,6 +290,9 @@ public class CreateTableNode extends DDLStatementNode",
"removed": []
}
]
}
] |
derby-DERBY-2885-9858f6f4
|
DERBY-2885 (cleanup)
Some more cleanup of EmbedConnection.clearLOBMapping():
1) Store the map in a local variable instead of calling
getlobHMObj() a number of times.
2) Replace two disjoint if statements with if/else-if + THROWASSERT
if none of them matches.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552325 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
"\t\tHashMap map = rootConnection.lobHashMap;",
"\t\tif (map != null) {",
"\t\t\tfor (Iterator it = map.values().iterator(); it.hasNext(); ) {",
"\t\t\t\tObject obj = it.next();",
"\t\t\t\tif (obj instanceof EmbedClob) {",
"\t\t\t\t\t((EmbedClob) obj).free();",
"\t\t\t\t} else if (obj instanceof EmbedBlob) {",
"\t\t\t\t\t((EmbedBlob) obj).free();",
"\t\t\t\t} else if (SanityManager.DEBUG) {",
"\t\t\t\t\tSanityManager.THROWASSERT(\"Unexpected value: \" + obj);",
"\t\t\tmap.clear();"
],
"header": "@@ -2357,20 +2357,19 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\tif (rootConnection.lobHashMap != null) {",
"\t\t\tfor (Iterator e = getlobHMObj().values().iterator();",
"\t\t\t\te.hasNext() ;) {",
"\t\t\t\tObject obj = e.next();",
"\t\t\t\tif (obj instanceof Clob) {",
"\t\t\t\t\tEmbedClob temp = (EmbedClob)obj;",
"\t\t\t\t\ttemp.free();",
"\t\t\t\t}",
"\t\t\t\tif (obj instanceof Blob) {",
"\t\t\t\t\tEmbedBlob temp = (EmbedBlob)obj;",
"\t\t\t\t\ttemp.free();",
"\t\t\tgetlobHMObj().clear();"
]
}
]
}
] |
derby-DERBY-2887-cc30c0c4
|
DERBY-2887: NULLS FIRST/LAST for ORDER BY
This change implements the SQL Standard 10.10 Sort Specification List feature:
<null ordering :== NULLS FIRST | NULLS LAST
The implementation adds a new boolean argument to the DataType.compare()
function to allow control over whether NULL values should be compared
as lower than non-NULL values or as higher. The change also adds code to
the parser to recognize the new syntax, and to pass it to the execution
layer via the compiler data structures.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@567314 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/DataValueDescriptor.java",
"hunks": [
{
"added": [
"\t * that is, it treats SQL null as equal to null and greater than all"
],
"header": "@@ -720,7 +720,7 @@ public interface DataValueDescriptor extends Storable, Orderable",
"removed": [
"\t * that is, it treats SQL null as equal to null and less than all"
]
},
{
"added": [
"\t/**",
"\t * Compare this Orderable with another, with configurable null ordering.",
"\t * This method treats nulls as ordered values, but allows the caller",
" * to specify whether they should be lower than all non-NULL values,",
" * or higher than all non-NULL values.",
"\t *",
"\t * @param other\t\tThe Orderable to compare this one to.",
" * @param nullsOrderedLow True if null should be lower than non-NULL",
"\t *",
"\t * @return <0 - this Orderable is less than other.",
"\t * \t\t\t 0 - this Orderable equals other.",
"\t *\t\t\t>0 - this Orderable is greater than other.",
" *",
" *\t\t\tThe code should not explicitly look for -1, or 1.",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tint compare(DataValueDescriptor other, boolean nullsOrderedLow)",
" throws StandardException;",
""
],
"header": "@@ -735,6 +735,26 @@ public interface DataValueDescriptor extends Storable, Orderable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/IntersectOrExceptNode.java",
"hunks": [
{
"added": [
" private boolean[] intermediateOrderByNullsLow; // TRUE means NULL values should be ordered lower than non-NULL values"
],
"header": "@@ -86,6 +86,7 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" intermediateOrderByNullsLow = new boolean[ intermediateOrderByColumns.length];"
],
"header": "@@ -144,6 +145,7 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" intermediateOrderByNullsLow[intermediateOrderByIdx] = orderByColumn.isNullsOrderedLow();"
],
"header": "@@ -160,6 +162,7 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" intermediateOrderByNullsLow[intermediateOrderByIdx] = false;"
],
"header": "@@ -170,6 +173,7 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" intermediateOrderByNullsLow[i] = false;"
],
"header": "@@ -183,6 +187,7 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" if( intermediateOrderByNullsLow[i])",
" orderByColumn.setNullsOrderedLow();"
],
"header": "@@ -208,6 +213,8 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": []
},
{
"added": [
" * 9) intermediateOrderByColumns saved object index",
" * 10) intermediateOrderByDirection saved object index",
" * 11) intermediateOrderByNullsLow saved object index"
],
"header": "@@ -344,9 +351,9 @@ public class IntersectOrExceptNode extends SetOperatorNode",
"removed": [
" * 9) close method",
" * 10) intermediateOrderByColumns saved object index",
" * 11) intermediateOrderByDirection saved object index"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/OrderByColumn.java",
"hunks": [
{
"added": [
"\tprivate boolean\t\t\tnullsOrderedLow = false;"
],
"header": "@@ -45,6 +45,7 @@ public class OrderByColumn extends OrderedColumn {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java",
"hunks": [
{
"added": [
" int intermediateOrderByDirectionSavedObject,",
" int intermediateOrderByNullsLowSavedObject)"
],
"header": "@@ -1081,7 +1081,8 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": [
" int intermediateOrderByDirectionSavedObject)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/IndexColumnOrder.java",
"hunks": [
{
"added": [
" /**",
" * indicate whether NULL values should sort low.",
" *",
" * nullsOrderedLow is usually false, because generally Derby defaults",
" * to have NULL values compare higher than non-null values, but if",
" * the user specifies an ORDER BY clause with a <null ordering>",
" * specification that indicates that NULL values should be ordered",
" * lower than non-NULL values, then nullsOrderedLow is set to true.",
" */",
" boolean nullsOrderedLow;"
],
"header": "@@ -54,6 +54,16 @@ public class IndexColumnOrder implements ColumnOrdering, Formatable",
"removed": []
},
{
"added": [
" this.nullsOrderedLow = false;",
" this.nullsOrderedLow = false;",
"\t}",
"",
" /**",
" * constructor used by the ORDER BY clause.",
" *",
" * This version of the constructor is used by the compiler when",
" * it processes an ORDER BY clause in a SQL statement. For such",
" * statements, the user gets to control whether NULL values are",
" * ordered as lower than all non-NULL values, or higher than all",
" * non-NULL values.",
" *",
" * @param colNum number of this column",
" * @param ascending whether the ORDER BY is ascending or descending",
" * @param nullsLow whether nulls should be ordered low",
" */",
"\tpublic IndexColumnOrder(int colNum, boolean ascending,",
" boolean nullsLow)",
" {",
"\t\t this.colNum = colNum;",
"\t\t this.ascending = ascending;",
" this.nullsOrderedLow = nullsLow;"
],
"header": "@@ -69,11 +79,34 @@ public class IndexColumnOrder implements ColumnOrdering, Formatable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/SetOpResultSet.java",
"hunks": [
{
"added": [
" private final boolean[] intermediateOrderByNullsLow;"
],
"header": "@@ -63,6 +63,7 @@ class SetOpResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
" int intermediateOrderByDirectionSavedObject,",
" int intermediateOrderByNullsLowSavedObject)"
],
"header": "@@ -78,7 +79,8 @@ class SetOpResultSet extends NoPutResultSetImpl",
"removed": [
" int intermediateOrderByDirectionSavedObject)"
]
},
{
"added": [
" intermediateOrderByNullsLow = (boolean[]) eps.getSavedObject(intermediateOrderByNullsLowSavedObject);"
],
"header": "@@ -92,6 +94,7 @@ class SetOpResultSet extends NoPutResultSetImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/sort/MergeSort.java",
"hunks": [
{
"added": [
"\t/**",
" A lookup table to speed up lookup of nulls-low ordering of a column, ",
"\t**/",
"\tprivate boolean columnOrderingNullsLowMap[];",
""
],
"header": "@@ -122,6 +122,11 @@ final class MergeSort implements Sort",
"removed": []
},
{
"added": [
" boolean nullsLow = this.columnOrderingNullsLowMap[i];",
"\t\t\tif ((r = r1[colid].compare(r2[colid], nullsLow)) "
],
"header": "@@ -488,10 +493,11 @@ final class MergeSort implements Sort",
"removed": [
"\t\t\tif ((r = r1[colid].compare(r2[colid])) "
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/unitTests/store/T_ColumnOrderingImpl.java",
"hunks": [
{
"added": [
"",
"\t/**",
"\t@see ColumnOrdering#getIsNullsOrderedLow",
"\t**/",
"\tpublic boolean getIsNullsOrderedLow()",
"\t{",
"\t\treturn false;",
"\t}"
],
"header": "@@ -59,5 +59,13 @@ public class T_ColumnOrderingImpl implements ColumnOrdering",
"removed": []
}
]
}
] |
derby-DERBY-289-b175fd27
|
DERBY-4856 DERBY-4929 Add thread dump information for error StandardException and SQLException. Due to DERBY-289, ThreadDump.java and ExceptionUtil.java should go to iapi/error for engine. Currently, all thread dump information goes to derby.log
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043290 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/context/ContextManager.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.ExceptionUtil;"
],
"header": "@@ -29,6 +29,7 @@ import org.apache.derby.iapi.error.PassThroughException;",
"removed": []
},
{
"added": [
"import java.sql.SQLException;"
],
"header": "@@ -37,6 +38,7 @@ import org.apache.derby.iapi.services.property.PropertyUtil;",
"removed": []
},
{
"added": [
" int errorSeverity = getErrorSeverity(error);"
],
"header": "@@ -302,9 +304,7 @@ public class ContextManager",
"removed": [
"\t\t\tint errorSeverity = error instanceof StandardException ?",
"\t\t\t\t((StandardException) error).getSeverity() :",
"\t\t\t\tExceptionSeverity.NO_APPLICABLE_SEVERITY;"
]
},
{
"added": [
" if (reportError",
" && errorSeverity >= ExceptionSeverity.SESSION_SEVERITY) {",
" threadDump = ExceptionUtil.dumpThreads();",
" } else {",
" threadDump = null;",
" }"
],
"header": "@@ -331,6 +331,12 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
},
{
"added": [
" if (threadDump != null)",
" errorStream.println(threadDump);"
],
"header": "@@ -401,6 +407,8 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
},
{
"added": [
" ",
" /**",
" * return the severity of the exception. Currently, this method ",
" * does not determine a severity that is not StandardException ",
" * or SQLException.",
" * @param error - Throwable error",
" * ",
" * @return int vendorcode/severity for the Throwable error",
" * - error/exception to extract vendorcode/severity. ",
" * For error that we can not get severity, ",
" * NO_APPLICABLE_SEVERITY will return.",
" */",
" public int getErrorSeverity(Throwable error) {",
" ",
" if (error instanceof StandardException) {",
" return ((StandardException) error).getErrorCode();",
" }",
" ",
" if (error instanceof SQLException) {",
" return ((SQLException) error).getErrorCode();",
" }",
" return ExceptionSeverity.NO_APPLICABLE_SEVERITY;",
" }"
],
"header": "@@ -504,6 +512,29 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
}
]
}
] |
derby-DERBY-2890-debc0fc6
|
DERBY-2890: Simplify handling of maxPos in UpdatableBlobStream and ClobUpdatableReader
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@552702 13f79535-47bb-0310-9956-ffa450edef68
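The simplification rests on using Long.MAX_VALUE as an "unbounded" sentinel for maxPos instead of -1, so the read-length clamp needs no special case. A minimal sketch of that clamp, with a hypothetical helper name:

```java
// Sketch of the maxPos clamp after the change. BoundedRead is a
// hypothetical name; the arithmetic mirrors Math.min(len, maxPos - pos).
public class BoundedRead {

    // How many bytes a read may return: the requested length, clamped
    // to the remaining span before maxPos. With maxPos == Long.MAX_VALUE
    // (no upper bound set), the clamp is a no-op for realistic lengths,
    // so bounded and unbounded streams share one code path.
    public static int actualLength(int requested, long pos, long maxPos) {
        return (int) Math.min(requested, maxPos - pos);
    }
}
```

This is why the if/else blocks testing `maxPos != -1` in the diff collapse to a single Math.min call.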
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUpdatableReader.java",
"hunks": [
{
"added": [
" * Position in Clob where to stop reading unless EOF is reached first."
],
"header": "@@ -56,7 +56,7 @@ final class ClobUpdatableReader extends Reader {",
"removed": [
" * Position in Clob where to stop reading."
]
},
{
"added": [
" //Hence set maxPos to infinity (or as close as we get).",
" this.maxPos = Long.MAX_VALUE;"
],
"header": "@@ -74,8 +74,8 @@ final class ClobUpdatableReader extends Reader {",
"removed": [
" //Hence set maxPos to -1.",
" this.maxPos = -1;"
]
},
{
"added": [
" // Hence set maxPos to infinity (or as close as we get).",
" this.maxPos = Long.MAX_VALUE;"
],
"header": "@@ -90,8 +90,8 @@ final class ClobUpdatableReader extends Reader {",
"removed": [
" // Hence set maxPos to -1.",
" this.maxPos = -1;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/UpdatableBlobStream.java",
"hunks": [
{
"added": [
" * Position in Blob where to stop reading unless EOF is reached first.",
" private final long maxPos;"
],
"header": "@@ -49,9 +49,9 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" * Position in Blob where to stop reading.",
" private long maxPos;"
]
},
{
"added": [
" * @throws IOException if an I/O error occurs",
" UpdatableBlobStream (EmbedBlob blob, InputStream is)",
" throws IOException {",
" // The entire Blob has been requested, hence set length to infinity (or",
" // as close as we get).",
" this(blob, is, 0L, Long.MAX_VALUE);"
],
"header": "@@ -60,15 +60,13 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" UpdatableBlobStream (EmbedBlob blob, InputStream is) {",
" stream = is;",
" this.pos = 0;",
" this.blob = blob;",
" //The subset of the Blob",
" //has not been requested.",
" //Hence set maxPos to -1.",
" this.maxPos = -1;"
]
},
{
"added": [
" throws IOException {",
" this.blob = blob;",
" stream = is;",
" maxPos = pos + len;",
" if (pos > 0) {",
" skip(pos);",
" }"
],
"header": "@@ -83,20 +81,18 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" * @throws SQLException",
" throws IOException, SQLException {",
" this(blob, is);",
" //The length requested cannot exceed the length",
" //of the underlying Blob object. Hence chose the",
" //minimum of the length of the underlying Blob",
" //object and requested length.",
" maxPos = Math.min(blob.length(), pos + len);",
" skip(pos);"
]
},
{
"added": [
" //If the current position inside the stream has exceeded maxPos, the",
" //read should return -1 signifying end of stream.",
" if (pos >= maxPos) {"
],
"header": "@@ -158,11 +154,9 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" //If maxPos is not invalid and the current",
" //position inside the stream has exceeded",
" //maxPos the read sould return -1 signifying",
" //end of stream.",
" if (maxPos != -1 && pos >= maxPos) {"
]
},
{
"added": [
" int actualLength = (int) Math.min(len, maxPos - pos);"
],
"header": "@@ -197,22 +191,8 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" int actualLength = 0;",
" ",
" //If maxPos is not invalid then",
" //ensure that the length(len) ",
" //that is requested falls within",
" //the restriction set by maxPos.",
" if(maxPos != -1) {",
" actualLength ",
" = (int )Math.min(len, maxPos - pos);",
" }",
" else {",
" //maxPos has not been set. Make",
" //maxPos the length requested.",
" actualLength = len;",
" }"
]
},
{
"added": [
" int actualLength = (int) Math.min(b.length, maxPos - pos);"
],
"header": "@@ -240,21 +220,7 @@ class UpdatableBlobStream extends InputStream {",
"removed": [
" int actualLength = 0;",
" //If maxPos is not invalid",
" //then ensure that the length",
" //(len of the byte array b) ",
" //falls within the restriction ",
" //set by maxPos.",
" if(maxPos != -1) {",
" actualLength ",
" = (int )Math.min(b.length, maxPos - pos);",
" }",
" else {",
" //maxPos has not been set. Make",
" //maxPos the length requested.",
" actualLength = b.length;",
" }"
]
}
]
}
] |
derby-DERBY-2891-5b41e458
|
DERBY-2891: Clob.getCharacterStream(long,long) ignores position
parameter for large (>32k) CLOBs
Fixed hard-coded start position in ClobUpdatableReader's constructor.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@553111 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUpdatableReader.java",
"hunks": [
{
"added": [
" // Hence set length to infinity (or as close as we get).",
" this(clob, 0L, Long.MAX_VALUE);"
],
"header": "@@ -87,30 +87,9 @@ final class ClobUpdatableReader extends Reader {",
"removed": [
" this.clob = clob;",
" this.conChild = clob;",
" // Hence set maxPos to infinity (or as close as we get).",
" this.maxPos = Long.MAX_VALUE;",
"",
" InternalClob internalClob = clob.getInternalClob();",
" materialized = internalClob.isWritable(); ",
" if (materialized) {",
" long byteLength = internalClob.getByteLength();",
" this.stream = internalClob.getRawByteStream();",
" init ((LOBInputStream)stream, 0);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(internalClob instanceof StoreStreamClob,",
" \"Wrong type of internal clob representation: \" +",
" internalClob.toString());",
" }",
" // Since this representation is read-only, the stream never has to",
" // update itself, until the Clob representation itself has been",
" // changed. That even will be detected by {@link #updateIfRequired}.",
" this.streamReader = internalClob.getReader(1L);",
" this.pos = 0L;",
" }"
]
}
]
}
] |
derby-DERBY-2892-4015c92f
|
DERBY-2892: Closing a resultset after retrieving a large > 32665 bytes value with Network Server does not release locks.
Fixed the problem in an alternative way to avoid compatibility problems. Instead of explicitly closing the underlying LOB when creating streams etc, closing is now handled by the LOBStateTracker in the same way as if the LOB column had never been accessed by the user. Multiple result set getter methods can (again) be called on a LOB column, except for the getters returning streams.
Patch file: derby-2892-1b-alternative_fix_partial.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@646255 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Cursor.java",
"hunks": [
{
"added": [
" /**",
" * Returns a {@code Blob} object.",
" *",
" * @param column 1-based column index",
" * @param agent associated agent",
" * @param toBePublished whether the Blob will be published to the user",
" * @return A {@linkplain java.sql.Blob Blob} object.",
" * @throws SqlException if getting the {@code Blob} fails",
" */",
" public abstract Blob getBlobColumn_(int column, Agent agent,",
" boolean toBePublished)",
" throws SqlException;",
" /**",
" * Returns a {@code Clob} object.",
" *",
" * @param column 1-based column index",
" * @param agent associated agent",
" * @param toBePublished whether the Clob will be published to the user",
" * @return A {@linkplain java.sql.Clob Clob} object.",
" * @throws SqlException if getting the {@code Clob} fails",
" */",
" public abstract Clob getClobColumn_(int column, Agent agent,",
" boolean toBePublished)",
" throws SqlException;"
],
"header": "@@ -686,9 +686,31 @@ public abstract class Cursor {",
"removed": [
" abstract public Blob getBlobColumn_(int column, Agent agent) throws SqlException;",
" abstract public Clob getClobColumn_(int column, Agent agent) throws SqlException;"
]
},
{
"added": [
" Blob b = getBlobColumn_(column, agent_, false);",
" Clob c = getClobColumn_(column, agent_, false);"
],
"header": "@@ -1004,15 +1026,13 @@ public abstract class Cursor {",
"removed": [
" Blob b = getBlobColumn_(column, agent_);",
" b.free(); // Free resources from underlying Blob",
" Clob c = getClobColumn_(column, agent_);",
" c.free(); // Free resources from underlying Clob"
]
},
{
"added": [
" Blob b = getBlobColumn_(column, agent_, false);"
],
"header": "@@ -1032,9 +1052,8 @@ public abstract class Cursor {",
"removed": [
" Blob b = getBlobColumn_(column, agent_);",
" b.free(); // Free resources from underlying Blob"
]
},
{
"added": [
" Blob b = getBlobColumn_(column, agent_, false);"
],
"header": "@@ -1055,12 +1074,10 @@ public abstract class Cursor {",
"removed": [
" Blob b = getBlobColumn_(column, agent_);",
" // Underlying Blob should be released when stream is closed",
" is.setFreeBlobOnClose();"
]
},
{
"added": [
" Clob c = getClobColumn_(column, agent_, false);"
],
"header": "@@ -1076,12 +1093,10 @@ public abstract class Cursor {",
"removed": [
" Clob c = getClobColumn_(column, agent_);",
" // Underlying Clob should be released when stream is closed",
" is.setFreeClobOnClose();"
]
},
{
"added": [
" Clob c = getClobColumn_(column, agent_, false);"
],
"header": "@@ -1121,9 +1136,8 @@ public abstract class Cursor {",
"removed": [
" Clob c = getClobColumn_(column, agent_);",
" c.free(); // Release resources from underlying Clob"
]
},
{
"added": [
" Clob c = getClobColumn_(column, agent_, false);"
],
"header": "@@ -1172,12 +1186,10 @@ public abstract class Cursor {",
"removed": [
" Clob c = getClobColumn_(column, agent_);",
" // Make sure underlying Blob is released when reader is closed",
" reader.setFreeClobOnClose();"
]
},
{
"added": [
" return getBlobColumn_(column, agent_, true);"
],
"header": "@@ -1222,7 +1234,7 @@ public abstract class Cursor {",
"removed": [
" return getBlobColumn_(column, agent_);"
]
},
{
"added": [
" return getClobColumn_(column, agent_, true);"
],
"header": "@@ -1232,7 +1244,7 @@ public abstract class Cursor {",
"removed": [
" return getClobColumn_(column, agent_);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetCursor.java",
"hunks": [
{
"added": [
" /**",
" * @see org.apache.derby.client.am.Cursor#getBlobColumn_",
" */",
" public Blob getBlobColumn_(int column, Agent agent, boolean toBePublished)",
" throws SqlException {",
" // Only inform the tracker if the Blob is published to the user.",
" if (toBePublished) {",
" netResultSet_.markLOBAsAccessed(column);",
" }"
],
"header": "@@ -1076,9 +1076,15 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" public Blob getBlobColumn_(int column, Agent agent) throws SqlException ",
" {",
" netResultSet_.markLOBAsAccessed(column);"
]
},
{
"added": [
" /**",
" * @see org.apache.derby.client.am.Cursor#getClobColumn_",
" */",
" public Clob getClobColumn_(int column, Agent agent, boolean toBePublished)",
" throws SqlException {",
" // Only inform the tracker if the Clob is published to the user.",
" if (toBePublished) {",
" netResultSet_.markLOBAsAccessed(column);",
" }"
],
"header": "@@ -1112,8 +1118,15 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" public Clob getClobColumn_(int column, Agent agent) throws SqlException {",
" netResultSet_.markLOBAsAccessed(column);"
]
}
]
}
] |
derby-DERBY-2892-50f5a012
|
DERBY-2892: Closing a resultset after retrieving a large > 32665 bytes value with Network Server does not release locks.
Cleanup patch removing code that became unused after the alternative fix went in.
Patch file: derby-2892-2a-alternative_fix_cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@647680 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/BlobLocatorInputStream.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,8 @@",
"removed": [
"import java.sql.SQLException;",
"",
"import org.apache.derby.shared.common.error.ExceptionUtil;",
"import org.apache.derby.shared.common.reference.SQLState;"
]
},
{
"added": [],
"header": "@@ -188,38 +184,6 @@ public class BlobLocatorInputStream extends java.io.InputStream",
"removed": [
" /**",
" * Closes this input stream and releases any system resources associated",
" * with the stream. This will release the underlying Blob value. ",
" * ",
" * @throws java.io.IOException",
" */",
" public void close() throws IOException {",
" try {",
" if (blob != null && freeBlobOnClose) {",
" blob.free();",
" }",
" } catch (SQLException ex) {",
" if (ex.getSQLState().compareTo",
" (ExceptionUtil.getSQLStateFromIdentifier",
" (SQLState.LOB_OBJECT_INVALID)) == 0) {",
" // Blob has already been freed, probably because of autocommit",
" return; // Ignore error",
" }",
"",
" IOException ioEx = new IOException();",
" ioEx.initCause(ex);",
" throw ioEx;",
" }",
" }",
"",
" /**",
" * Tell stream to free the underlying Blob when it is closed.",
" */",
" public void setFreeBlobOnClose() {",
" freeBlobOnClose = true;",
" }",
" "
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ClobLocatorInputStream.java",
"hunks": [
{
"added": [],
"header": "@@ -63,13 +63,6 @@ public class ClobLocatorInputStream extends java.io.InputStream {",
"removed": [
" /**",
" * If true, the underlying Blob will be freed when the underlying stream is",
" * closed. Used to implement correct behavior for streams obtained from",
" * result sets.",
" */",
" private boolean freeClobOnClose = false;",
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ClobLocatorReader.java",
"hunks": [
{
"added": [],
"header": "@@ -69,13 +69,6 @@ public class ClobLocatorReader extends java.io.Reader {",
"removed": [
" /**",
" * If true, the underlying Blob will be freed when the underlying stream is",
" * closed. Used to implement correct behavior for streams obtained from",
" * result sets.",
" */",
" private boolean freeClobOnClose = false;",
""
]
},
{
"added": [],
"header": "@@ -175,30 +168,6 @@ public class ClobLocatorReader extends java.io.Reader {",
"removed": [
" ",
" try {",
" if (clob != null && freeClobOnClose) {",
" clob.free();",
" }",
" } catch (SQLException ex) {",
" if (ex.getSQLState().compareTo",
" (ExceptionUtil.getSQLStateFromIdentifier",
" (SQLState.LOB_OBJECT_INVALID)) == 0) {",
" // Clob has already been freed, probably because of autocommit",
" return; // Ignore error",
" }",
"",
" IOException ioEx = new IOException();",
" ioEx.initCause(ex);",
" throw ioEx;",
" }",
" }",
"",
" /**",
" * Tell stream to free the underlying Clob when it is closed.",
" */",
" public void setFreeClobOnClose() {",
" freeClobOnClose = true;"
]
}
]
}
] |
derby-DERBY-2892-c6ed70e8
|
DERBY-2892 Closing a resultset after retrieving a large > 32665 bytes value with Network Server does not release locks
Fixes getString(), getCharacterStream(), getBytes(), getBinaryStream() so they don't hold locks. Also restricts BLOB columns to a single getXXX call.
Patch contributed by Oystein Grovlen (oystein dot grovlen at sun dot com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642974 13f79535-47bb-0310-9956-ffa450edef68
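The pattern behind this fix — materialize the value, then free the underlying LOB so its locks are released — can be sketched with a stand-in interface. LobLike and materialize are hypothetical names for illustration, not Derby API.

```java
// Sketch of the "materialize, then free" pattern applied by the fix.
public class LobMaterialize {

    // Stand-in for the client-side LOB object; not a Derby class.
    public interface LobLike {
        byte[] getBytes(long pos, int len);
        long length();
        void free();
    }

    // Copy the whole value out, then release the underlying LOB so
    // server-side resources (and the locks backing them) are freed
    // immediately instead of lingering until the result set closes.
    public static byte[] materialize(LobLike b) {
        byte[] bytes = b.getBytes(1, (int) b.length());
        b.free();
        return bytes;
    }
}
```

The stream-returning getters cannot use this shape — the value is consumed lazily — which is why they instead arrange for free() to run when the stream is closed.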
|
[
{
"file": "java/client/org/apache/derby/client/am/Blob.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import java.io.BufferedInputStream;"
]
},
{
"added": [
" java.io.InputStream getBinaryStreamX() throws SqlException {"
],
"header": "@@ -260,7 +259,7 @@ public class Blob extends Lob implements java.sql.Blob {",
"removed": [
" private java.io.InputStream getBinaryStreamX() throws SqlException {"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/BlobLocatorInputStream.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.error.ExceptionUtil;",
"import org.apache.derby.shared.common.reference.SQLState;"
],
"header": "@@ -21,11 +21,12 @@",
"removed": [
"import java.sql.CallableStatement;"
]
},
{
"added": [
"",
" /**",
" * Closes this input stream and releases any system resources associated",
" * with the stream. This will release the underlying Blob value. ",
" * ",
" * @throws java.io.IOException",
" */",
" public void close() throws IOException {",
" try {",
" if (blob != null && freeBlobOnClose) {",
" blob.free();",
" }",
" } catch (SQLException ex) {",
" if (ex.getSQLState().compareTo",
" (ExceptionUtil.getSQLStateFromIdentifier",
" (SQLState.LOB_OBJECT_INVALID)) == 0) {",
" // Blob has already been freed, probably because of autocommit",
" return; // Ignore error",
" }",
"",
" IOException ioEx = new IOException();",
" ioEx.initCause(ex);",
" throw ioEx;",
" }",
" }",
"",
" /**",
" * Tell stream to free the underlying Blob when it is closed.",
" */",
" public void setFreeBlobOnClose() {",
" freeBlobOnClose = true;",
" }"
],
"header": "@@ -186,7 +187,38 @@ public class BlobLocatorInputStream extends java.io.InputStream",
"removed": [
" "
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Clob.java",
"hunks": [
{
"added": [
" java.io.Reader getCharacterStreamX() throws SqlException {"
],
"header": "@@ -354,7 +354,7 @@ public class Clob extends Lob implements java.sql.Clob {",
"removed": [
" private java.io.Reader getCharacterStreamX() throws SqlException {"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ClobLocatorInputStream.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.error.ExceptionUtil;",
"import org.apache.derby.shared.common.reference.SQLState;"
],
"header": "@@ -21,12 +21,13 @@",
"removed": [
"import java.sql.CallableStatement;"
]
},
{
"added": [
" /**",
" * If true, the underlying Blob will be freed when the underlying stream is",
" * closed. Used to implement correct behavior for streams obtained from",
" * result sets.",
" */",
" private boolean freeClobOnClose = false;",
""
],
"header": "@@ -62,6 +63,13 @@ public class ClobLocatorInputStream extends java.io.InputStream {",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ClobLocatorReader.java",
"hunks": [
{
"added": [
"import java.sql.SQLException;",
"import org.apache.derby.shared.common.error.ExceptionUtil;",
"import org.apache.derby.shared.common.reference.SQLState;"
],
"header": "@@ -23,7 +23,10 @@ package org.apache.derby.client.am;",
"removed": []
},
{
"added": [
" /**",
" * If true, the underlying Blob will be freed when the underlying stream is",
" * closed. Used to implement correct behavior for streams obtained from",
" * result sets.",
" */",
" private boolean freeClobOnClose = false;",
""
],
"header": "@@ -66,6 +69,13 @@ public class ClobLocatorReader extends java.io.Reader {",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Cursor.java",
"hunks": [
{
"added": [
"import java.io.BufferedInputStream;",
"import java.io.BufferedReader;"
],
"header": "@@ -23,6 +23,8 @@ package org.apache.derby.client.am;",
"removed": []
},
{
"added": [
" Blob b = getBlobColumn_(column, agent_);",
" tempString = agent_.crossConverters_.",
" getStringFromBytes(b.getBytes(1, (int) b.length()));",
" b.free(); // Free resources from underlying Blob",
" return tempString;",
" tempString = c.getSubString(1, (int) c.length());",
" c.free(); // Free resources from underlying Clob",
" return tempString;"
],
"header": "@@ -979,11 +981,16 @@ public abstract class Cursor {",
"removed": [
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return agent_.crossConverters_.getStringFromBytes(b.getBytes(1, (int) b.length()));",
" return c.getSubString(1, (int) c.length());"
]
},
{
"added": [
" Blob b = getBlobColumn_(column, agent_);",
" byte[] bytes = b.getBytes(1, (int) b.length());",
" b.free(); // Free resources from underlying Blob",
" return bytes;"
],
"header": "@@ -1002,8 +1009,10 @@ public abstract class Cursor {",
"removed": [
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return b.getBytes(1, (int) b.length());"
]
},
{
"added": [
" public final java.io.InputStream getBinaryStream(int column) ",
" throws SqlException ",
" {",
" switch (jdbcTypes_[column - 1]) {",
" Blob b = getBlobColumn_(column, agent_);",
" if (b.isLocator()) {",
" BlobLocatorInputStream is ",
" = new BlobLocatorInputStream(agent_.connection_, b);",
" // Underlying Blob should be released when stream is closed",
" is.setFreeBlobOnClose();",
" return new BufferedInputStream(is);",
" } else {",
" return b.getBinaryStreamX();",
" }",
" public final java.io.InputStream getAsciiStream(int column) ",
" throws SqlException",
" {",
" switch (jdbcTypes_[column - 1]) {",
" if (c.isLocator()) {",
" ClobLocatorInputStream is ",
" = new ClobLocatorInputStream(agent_.connection_, c);",
" // Underlying Clob should be released when stream is closed",
" is.setFreeClobOnClose();",
" return new BufferedInputStream(is);",
" } else {",
" return c.getAsciiStreamX();",
" }"
],
"header": "@@ -1013,32 +1022,47 @@ public abstract class Cursor {",
"removed": [
" public final java.io.InputStream getBinaryStream(int column) throws SqlException {",
" try {",
" switch (jdbcTypes_[column - 1]) {",
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return b.getBinaryStream();",
" }",
" } catch ( SQLException se ) {",
" throw new SqlException(se);",
" public final java.io.InputStream getAsciiStream(int column) throws SqlException {",
" try {",
" switch (jdbcTypes_[column - 1]) {",
" return c.getAsciiStream();"
]
},
{
"added": [
" return getBinaryStream(column);",
" "
],
"header": "@@ -1062,18 +1086,13 @@ public abstract class Cursor {",
"removed": [
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return b.getBinaryStream();",
" }",
" }",
" catch ( SQLException se ) {",
" throw new SqlException(se);",
""
]
},
{
"added": [
" c.free(); // Release resources from underlying Clob"
],
"header": "@@ -1081,6 +1100,7 @@ public abstract class Cursor {",
"removed": []
},
{
"added": [
" return getBinaryStream(column);"
],
"header": "@@ -1114,8 +1134,7 @@ public abstract class Cursor {",
"removed": [
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return b.getBinaryStream();"
]
},
{
"added": [
" public final java.io.Reader getCharacterStream(int column) ",
" throws SqlException ",
" {",
" switch (jdbcTypes_[column - 1]) {",
" if (c.isLocator()) {",
" ClobLocatorReader reader",
" = new ClobLocatorReader(agent_.connection_, c);",
" // Make sure underlying Blob is released when reader is closed",
" reader.setFreeClobOnClose();",
" return new BufferedReader(reader);",
" } else {",
" return c.getCharacterStreamX();",
" }"
],
"header": "@@ -1125,12 +1144,21 @@ public abstract class Cursor {",
"removed": [
" public final java.io.Reader getCharacterStream(int column) throws SqlException {",
" try {",
" switch (jdbcTypes_[column - 1]) {",
" return c.getCharacterStream();"
]
},
{
"added": [
" return new java.io.InputStreamReader(getBinaryStream(column),",
" \"UTF-16BE\");"
],
"header": "@@ -1155,8 +1183,8 @@ public abstract class Cursor {",
"removed": [
" Blob b = (Blob) getBlobColumn_(column, agent_);",
" return new java.io.InputStreamReader(b.getBinaryStream(), \"UTF-16BE\");"
]
}
]
}
] |
derby-DERBY-2893-3ce8a04e
|
DERBY-2893 Fix GrantRevokeTest to use correct user for assertInsertPrivilege
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@555075 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2896-b1f37df6
|
DERBY-2896
Metadata calls getTables and getUDTs were failing when run from a user schema in a territory based collated database.
The reason for it is that these metadata calls were not getting compiled in SYS schema when they were executed from
a user schema. Metadata calls should always compile in SYS schema no matter what the current schema might be. The
reason getTables was not getting compiled in SYS schema was because we were trying to modify it's metadata sql on
the fly and then were compiling that modified sql in whatever the current schema might be. I have changed the
metadata sql for getTables in metadata.properties so that we do not need to modify it on the fly anymore. This will
allow getTables to follow the same codepath as other metadata queries which will also ensure that the sql gets
compiled in SYS schema.
As for getUDTs, it was merely a coding bug that we didn't follow the same logic as other metadata queries for it.
I have changed getUDTs implementation to follow the same codepath as other metadata queries.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557334 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java",
"hunks": [
{
"added": [
"\t\tPreparedStatement s = getPreparedQuery(\"getTables\");",
"\t\ts.setString(1, swapNull(catalog));",
"\t\ts.setString(2, swapNull(schemaPattern));",
"\t\ts.setString(3, swapNull(tableNamePattern));",
"\t\t//IMPORTANT",
"\t\t//Whenever a new table type is added to Derby, the sql for ",
"\t\t//getTables in metadata.properties will have to change and the ",
"\t\t//following if else will need to be modified too. ",
"\t\t//",
"\t\t//The getTables sql in metadata.properties has following clause ",
"\t\t//TABLETYPE IN (?, ?, ?, ?)",
"\t\t//There are 4?s for IN list because Derby supports 4 tables types ",
"\t\t//at the moment which are 'T','S','V' and 'A'.",
"\t\t//Anytime a new table type is added, an additional ? should be",
"\t\t//added to the above clause. In addition, the following code will ",
"\t\t//have to change too because it will need to set value for that ",
"\t\t//additional ?.",
"\t\t//",
"\t\t//Following explains the logic for table types handling.",
"\t\t//If the user has asked for specific table types in getTables,",
"\t\t//then the \"if\" statement below will use those types values",
"\t\t//for ?s. If there are still some ?s in the IN list that are left ",
"\t\t//with unassigned values, then we will set those ? to NULL.",
"\t\t//eg if getTables is called to only look for table types 'S' and ",
"\t\t//'A', then 'S' will be used for first ? in TABLETYPE IN (?, ?, ?, ?)",
"\t\t//'A' will be used for second ? in TABLETYPE IN (?, ?, ?, ?) and",
"\t\t//NULL will be used for third and fourth ?s in ",
"\t\t//TABLETYPE IN (?, ?, ?, ?)",
"\t\t//If the user hasn't asked for any specific table types, then the",
"\t\t//\"else\" statement below will kick in. When the control comes to ",
"\t\t//\"else\" statement, it means that the user wants to see all the",
"\t\t//table types supported by Derby. And hence, we simply set first",
"\t\t//? to 'T', second ? to 'S', third ? to 'V' and fourth ? to 'A'.",
"\t\t//When a new table type is added to Derby in future, we will have",
"\t\t//to do another setString for that in the \"else\" statement for that",
"\t\t//new table type.",
"\t\tif (types != null && types.length >= 1) {",
"\t\t\tint i=0;",
"\t\t\tfinal int numberOfTableTypesInDerby = 4;",
"\t\t\tfor (; i<types.length; i++){",
"\t\t\t\t/*",
"\t\t\t\t * Let's assume for now that the table type first char ",
"\t\t\t\t * corresponds to JBMS table type identifiers.",
"\t\t\t\t * ",
"\t\t\t\t * The reason I have i+4 is because there are already 3 ?s in",
"\t\t\t\t * the getTables sql before the ?s in the IN clause. Hence",
"\t\t\t\t * setString for table types should be done starting 4th ",
"\t\t\t\t * parameter.",
"\t\t\t\t */",
"\t\t\t\ts.setString(i+4, types[i].substring(0, 1));\t\t\t\t\t",
"\t\t\tfor (; i<numberOfTableTypesInDerby; i++) {",
"\t\t\t\ts.setNull(i+4, Types.CHAR);",
"\t\t} else {",
"\t\t\ts.setString(4, \"T\");",
"\t\t\ts.setString(5, \"S\");",
"\t\t\ts.setString(6, \"V\");",
"\t\t\ts.setString(7, \"A\");\t\t\t\t",
"\t\treturn s.executeQuery();"
],
"header": "@@ -1707,55 +1707,67 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\tsynchronized (getConnectionSynchronization()) {",
" setupContextStack();",
"\t\t\tResultSet rs = null;",
"\t\t\ttry {",
"\t\t\t",
"\t\t\tString queryText =",
"\t\t\t\tgetQueryDescriptions(false).getProperty(\"getTables\");",
"",
"\t\t\t/*",
"\t\t\t * The query text is assumed to end with a \"where\" clause, so",
"\t\t\t * that we can safely append",
"\t\t\t * \"and table_Type in ('xxx','yyy','zzz', ...)\" and",
"\t\t\t * have it become part of the where clause.",
"\t\t\t *",
"\t\t\t * Let's assume for now that the table type first char corresponds",
"\t\t\t * to JBMS table type identifiers.",
"\t\t\t */",
"\t\t\tStringBuffer whereClauseTail = new StringBuffer(queryText);",
"",
"\t\t\tif (types != null && types.length >= 1) {",
"\t\t\t\twhereClauseTail.append(\" AND TABLETYPE IN ('\");",
"\t\t\t\twhereClauseTail.append(types[0].substring(0, 1));",
"",
"\t\t\t\tfor (int i=1; i<types.length; i++) {",
"\t\t\t\t\twhereClauseTail.append(\"','\");",
"\t\t\t\t\twhereClauseTail.append(types[i].substring(0, 1));",
"\t\t\t\t}",
"\t\t\t\twhereClauseTail.append(\"')\");",
"\t\t\t// Add the order by clause after the 'in' list.",
"\t\t\twhereClauseTail.append(",
"\t\t\t\t\" ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME\");",
"",
"\t\t\tPreparedStatement s =",
"\t\t\t\tgetEmbedConnection().prepareMetaDataStatement(whereClauseTail.toString());",
"",
"\t\t\ts.setString(1, swapNull(catalog));",
"\t\t\ts.setString(2, swapNull(schemaPattern));",
"\t\t\ts.setString(3, swapNull(tableNamePattern));",
"",
"\t\t\trs = s.executeQuery();",
"\t\t } catch (Throwable t) {",
"\t\t\t\tthrow handleException(t);",
"\t\t\t} finally {",
"\t\t\t restoreContextStack();",
"",
"\t\t\treturn rs;"
]
}
]
}
] |
derby-DERBY-2903-0c8386da
|
DERBY-2903 ; addressing some review comments
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@593643 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2903-d1b8db2c
|
DERBY-2903 ; adding init() to sysinfo.Main.java's call to
LocalizedResource.getInstance() ensures that if sysinfo has a clean
one that does not get affected by any earlier calls to init.
After this, the redirections of system.out work correctly in a loop, so
adjusting the test to do away with the hokey annotating and unraveling of
the output; also now sequence of tests in tools._Suite no longer matters.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@593635 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/impl/tools/sysinfo/Main.java",
"hunks": [
{
"added": [
" LocalizedResource.getInstance().init();"
],
"header": "@@ -93,7 +93,7 @@ public final class Main {",
"removed": [
" LocalizedResource.getInstance();"
]
}
]
}
] |
derby-DERBY-2905-45cb2dfc
|
DERBY-2905 Shutting down embedded Derby will remove AutoloadedDriver. The AutoloadedDriver is not left registered in the DriverManager.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1063996 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/jdbc/AutoloadedDriver.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -34,6 +34,7 @@ import org.apache.derby.iapi.reference.MessageId;",
"removed": []
},
{
"added": [
" // This flag is set if AutoloadedDriver exists",
" private static boolean activeautoloadeddriver = false;",
"",
" //This is the driver that memorizes the autoloadeddriver (DERBY-2905)",
" private static Driver _autoloadedDriver;",
""
],
"header": "@@ -59,6 +60,12 @@ public class AutoloadedDriver implements Driver",
"removed": []
},
{
"added": [
" _autoloadedDriver = new AutoloadedDriver();",
" DriverManager.registerDriver( _autoloadedDriver );",
" activeautoloadeddriver = true;"
],
"header": "@@ -68,7 +75,9 @@ public class AutoloadedDriver implements Driver",
"removed": [
"\t\t\tDriverManager.registerDriver( new AutoloadedDriver() );"
]
},
{
"added": [
"\t\tif ( _engineForcedDown && (_autoloadedDriver == null))"
],
"header": "@@ -180,7 +189,7 @@ public class AutoloadedDriver implements Driver",
"removed": [
"\t\tif ( _engineForcedDown )"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedDataSource.java",
"hunks": [
{
"added": [
"import java.sql.Driver;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derby.jdbc;",
"removed": []
}
]
}
] |
derby-DERBY-2905-57af2a55
|
DERBY-2905 Check in the deregister attribute option for shutdown. After shutdown and reloading the engine via Class.forName(...).newInstance(), AutoloadedDriver will get registered to DriverManager even if AutoloadedDriver40 was the original driver.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1068842 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/jdbc/AutoloadedDriver.java",
"hunks": [
{
"added": [
" //This flag is set is deregister attribute is set by user, ",
" //default is true (DERBY-2905)",
" private static boolean deregister = true;"
],
"header": "@@ -60,12 +60,13 @@ public class AutoloadedDriver implements Driver",
"removed": [
" // This flag is set if AutoloadedDriver exists",
" private static boolean activeautoloadeddriver = false;"
]
},
{
"added": [],
"header": "@@ -93,7 +94,6 @@ public class AutoloadedDriver implements Driver",
"removed": [
" activeautoloadeddriver = true;"
]
},
{
"added": [
" if (_autoloadedDriver == null) {",
" _autoloadedDriver = new AutoloadedDriver();",
" DriverManager.registerDriver(_autoloadedDriver);",
" }"
],
"header": "@@ -227,8 +227,10 @@ public class AutoloadedDriver implements Driver",
"removed": [
" if (!activeautoloadeddriver)",
" DriverManager.registerDriver(_driverModule);"
]
},
{
"added": [
" // deregister is false if user set deregister=false attribute (DERBY-2905)",
" if (deregister && _autoloadedDriver != null) {"
],
"header": "@@ -244,9 +246,9 @@ public class AutoloadedDriver implements Driver",
"removed": [
" if (activeautoloadeddriver) {",
" activeautoloadeddriver = false;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/InternalDriver.java",
"hunks": [
{
"added": [
" // DERBY-2905, allow users to provide deregister attribute to ",
" // left AutoloadedDriver in DriverManager, default value is true",
" if (finfo.getProperty(Attribute.DEREGISTER_ATTR) != null) {",
" boolean deregister = Boolean.valueOf(",
" finfo.getProperty(Attribute.DEREGISTER_ATTR))",
" .booleanValue();",
" AutoloadedDriver.setDeregister(deregister);",
" }",
""
],
"header": "@@ -223,6 +223,15 @@ public abstract class InternalDriver implements ModuleControl {",
"removed": []
}
]
}
] |
derby-DERBY-2905-90e70fe0
|
DERBY-2905 (partial) Add test fixtures to test that after an embedded shutdown no
Derby driver is left registered and that a subsequent load of the embedded driver succeeds.
The assert for no driver left registered is disabled until DERBY-2905 is fixed.
Inspired by Ramin Moazeni's patch v3 attached to the Jira issue.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@572822 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2907-442f6967
|
DERBY-2907 Close Statement objects opened by utility methods in BaseJDBCTestCase at tearDown time.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@553994 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2910-d9b61925
|
DERBY-2910 SimpleStringOperatorNode in its bindExpression method generates a character string CAST if required but does not set the correct collation.
This change makes the implicit casts in SimpleStringOperatorNode, ConcatenationOperatorNode, and TernaryOperatorNode use the current schema collation. Since the SQL Spec has no specific rules for implicit casts, I matched the explicit cast behavior described in 6.1.2 (10).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@570546 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2911-086dee28
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Addressed some review comments:
- Split calls to CacheEntry.lockWhenIdentityIsSet() into to separate steps
(lock and wait) to make it easier to follow the code.
- Created named constants for some of the heuristics in ClockPolicy
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636670 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" /**",
" * How large part of the clock to look at before giving up in",
" * {@code rotateClock()}.",
" */",
" private static final float MAX_ROTATION = 0.2f;",
"",
" /**",
" * How large part of the clock to look at before giving up finding",
" * an evictable entry in {@code shrinkMe()}.",
" */",
" private static final float PART_OF_CLOCK_FOR_SHRINK = 0.1f;",
""
],
"header": "@@ -76,6 +76,18 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" Holder h = rotateClock(entry, size >= maxSize);"
],
"header": "@@ -161,7 +173,7 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" Holder h = rotateClock(entry, (float) 0.2, size >= maxSize);"
]
},
{
"added": [
" * search stops when a reusable entry is found, or when more than a certain",
" * percentage of the entries have been visited. If there are",
" private Holder rotateClock(CacheEntry entry, boolean allowEvictions)"
],
"header": "@@ -359,21 +371,18 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * search stops when a reusable entry is found, or when as many entries as",
" * specified by <code>partOfClock</code> have been checked. If there are",
" * @param partOfClock how large part of the clock to look at before we give",
" * up",
" private Holder rotateClock(CacheEntry entry, float partOfClock,",
" boolean allowEvictions)"
]
},
{
"added": [
" (int) (clock.size() * MAX_ROTATION));"
],
"header": "@@ -384,7 +393,7 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" (int) (clock.size() * partOfClock));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" // Found an entry in the cache, lock it.",
" entry.lock();",
" // If someone else is setting the identity of the Cacheable",
" // in this entry, we'll need to wait for them to complete.",
" entry.waitUntilIdentityIsSet();"
],
"header": "@@ -117,9 +117,11 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // Found an entry in the cache. Lock it, but wait until its",
" // identity has been set.",
" entry.lockWhenIdentityIsSet();"
]
},
{
"added": [
" entry.lock();",
" // If the identity of the cacheable is being set, we need to wait",
" // for it to complete so that we don't return a cacheable that",
" // isn't fully initialized.",
" entry.waitUntilIdentityIsSet();"
],
"header": "@@ -324,8 +326,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.lockWhenIdentityIsSet();"
]
}
]
}
] |
derby-DERBY-2911-0b57378c
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Clean-up to address review comments:
- Make ReplacementPolicy.insertEntry() void since the return value isn't used
- Simplify handling of small caches in ClockPolicy.rotateClock()
- Factor out common code in rotateClock() and shrinkMe()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@636247 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" /**",
" * The minimum number of items to check before we decide to give up",
" * looking for evictable entries when rotating the clock.",
" */",
" private static final int MIN_ITEMS_TO_CHECK = 20;",
""
],
"header": "@@ -70,6 +70,12 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
},
{
"added": [
" public void insertEntry(CacheEntry entry) throws StandardException {"
],
"header": "@@ -122,10 +128,9 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * @return callback object used by the cache manager",
" public Callback insertEntry(CacheEntry entry) throws StandardException {"
]
},
{
"added": [
" clock.add(new Holder(entry));",
" return;"
],
"header": "@@ -134,7 +139,8 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" return grow(entry);"
]
},
{
"added": [
" if (h == null) {",
" // didn't find a victim, so we need to grow",
" synchronized (clock) {",
" clock.add(new Holder(entry));",
" }"
],
"header": "@@ -156,13 +162,12 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" if (h != null) {",
" return h;",
" }",
" // didn't find a victim, so we need to grow",
" synchronized (clock) {",
" return grow(entry);"
]
},
{
"added": [],
"header": "@@ -348,19 +353,6 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" /**",
" * Increase the size of the clock by one and return a new holder. The",
" * caller must be synchronized on <code>clock</code>.",
" *",
" * @param entry the entry to insert into the clock",
" * @return a new holder which wraps the entry",
" */",
" private Holder grow(CacheEntry entry) {",
" Holder h = new Holder(entry);",
" clock.add(h);",
" return h;",
" }",
""
]
},
{
"added": [
" // Calculate how many items we need to check before we give up",
" // finding an evictable one. If we don't allow evictions, none should",
" // be checked (however, we may search for unused entries in the loop",
" // below).",
" int itemsToCheck = 0;",
" itemsToCheck = Math.max(MIN_ITEMS_TO_CHECK,",
" (int) (clock.size() * partOfClock));"
],
"header": "@@ -384,25 +376,16 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // calculate how many items to check",
" int itemsToCheck;",
" final int size;",
" size = clock.size();",
" if (size < 20) {",
" // if we have a very small cache, allow two rounds before",
" // giving up",
" itemsToCheck = size * 2;",
" } else {",
" // otherwise, just check a fraction of the clock",
" itemsToCheck = (int) (size * partOfClock);",
" }",
" } else {",
" // we don't allow evictions, so we shouldn't check any items unless",
" // there are unused ones",
" itemsToCheck = 0;"
]
},
{
"added": [
" if (!isEvictable(e, h, true)) {"
],
"header": "@@ -432,32 +415,7 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" if (h.getEntry() != e) {",
" // Someone else evicted this entry before we obtained the",
" // lock. Move on to the next entry.",
" continue;",
" }",
"",
" if (e.isKept()) {",
" // The entry is in use. Move on to the next entry.",
" continue;",
" }",
"",
" if (SanityManager.DEBUG) {",
" // At this point the entry must be valid. If it's not, it's",
" // either removed (in which case we shouldn't get here), or",
" // it is setting it's identity (in which case it is kept,",
" // and we shouldn't get here).",
" SanityManager.ASSERT(e.isValid(),",
" \"Holder contains invalid entry\");",
" SanityManager.ASSERT(!h.isEvicted(),",
" \"Trying to reuse an evicted holder\");",
" }",
"",
" if (h.recentlyUsed) {",
" // The object has been used recently. Clear the",
" // recentlyUsed flag and move on to the next entry.",
" h.recentlyUsed = false;"
]
},
{
"added": [
" /**",
" * Check if an entry can be evicted. Only entries that still are present in",
" * the cache, are not kept and not recently used, can be evicted. This",
" * method does not check whether the {@code Cacheable} contained in the",
" * entry is dirty, so it may be necessary to clean it before an eviction",
" * can take place even if the method returns {@code true}. The caller must",
" * hold the lock on the entry before calling this method.",
" *",
" * @param e the entry to check",
" * @param h the holder which holds the entry",
" * @param clearRecentlyUsedFlag tells whether or not the recently used flag",
" * should be cleared on the entry ({@code true} only when called as part of",
" * a normal clock rotation)",
" * @return whether or not this entry can be evicted (provided that its",
" * {@code Cacheable} is cleaned first)",
" */",
" private boolean isEvictable(CacheEntry e, Holder h,",
" boolean clearRecentlyUsedFlag) {",
"",
" if (h.getEntry() != e) {",
" // Someone else evicted this entry before we obtained the",
" // lock, so we can't evict it.",
" return false;",
" }",
"",
" if (e.isKept()) {",
" // The entry is in use and cannot be evicted.",
" return false;",
" }",
"",
" if (SanityManager.DEBUG) {",
" // At this point the entry must be valid. If it's not, it's either",
" // removed (in which case getEntry() != e and we shouldn't get",
" // here), or it is setting its identity (in which case it is kept",
" // and we shouldn't get here).",
" SanityManager.ASSERT(e.isValid(), \"Holder contains invalid entry\");",
" SanityManager.ASSERT(!h.isEvicted(), \"Holder is evicted\");",
" }",
"",
" if (h.recentlyUsed) {",
" // The object has been used recently, so it cannot be evicted.",
" if (clearRecentlyUsedFlag) {",
" h.recentlyUsed = false;",
" }",
" return false;",
" }",
"",
" return true;",
" }",
""
],
"header": "@@ -511,6 +469,56 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ReplacementPolicy.java",
"hunks": [
{
"added": [
" * The entry will be associated with a {@code Callback} object that it can",
" * use to communicate back to the replacement policy events (for instance,",
" * that it has been accessed or become invalid).",
" *",
" * @see CacheEntry#setCallback(ReplacementPolicy.Callback)",
" void insertEntry(CacheEntry entry) throws StandardException;"
],
"header": "@@ -37,14 +37,17 @@ interface ReplacementPolicy {",
"removed": [
" * @return a callback object that can be used to notify the replacement",
" * algorithm about operations performed on the cached object",
" Callback insertEntry(CacheEntry entry) throws StandardException;"
]
}
]
}
] |
derby-DERBY-2911-106ea47b
|
Checked in the performance tests used in DERBY-1961 and DERBY-2911.
The following command prints information on how to run the tests:
java org.apache.derbyTesting.perf.clients.Runner
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@619404 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/WisconsinFiller.java",
"hunks": [
{
"added": [
"/*",
"",
"Derby - Class org.apache.derbyTesting.perf.clients.WisconsinFiller",
"",
"Licensed to the Apache Software Foundation (ASF) under one or more",
"contributor license agreements. See the NOTICE file distributed with",
"this work for additional information regarding copyright ownership.",
"The ASF licenses this file to You under the Apache License, Version 2.0",
"(the \"License\"); you may not use this file except in compliance with",
"the License. You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
"Unless required by applicable law or agreed to in writing, software",
"distributed under the License is distributed on an \"AS IS\" BASIS,",
"WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
"See the License for the specific language governing permissions and",
"limitations under the License.",
"",
"*/",
"",
"package org.apache.derbyTesting.perf.clients;",
"",
"import java.sql.Connection;",
"import java.sql.SQLException;",
"import java.sql.Savepoint;",
"import java.sql.Statement;",
"import org.apache.derbyTesting.functionTests.tests.lang.wisconsin;",
"",
"/**",
" * Class which creates and populates the tables used by",
" * {@code IndexJoinClient}. These are the same tables as the ones used by the",
" * functional Wisconsin test found in the lang suite.",
" */",
"public class WisconsinFiller implements DBFiller {",
"",
" public void fill(Connection c) throws SQLException {",
" c.setAutoCommit(false);",
"",
" dropTable(c, \"TENKTUP1\");",
" dropTable(c, \"TENKTUP2\");",
" dropTable(c, \"ONEKTUP\");",
" dropTable(c, \"BPRIME\");",
"",
" wisconsin.createTables(c, false);",
"",
" c.commit();",
" }",
"",
" /**",
" * Helper method which drops a table if it exists. Nothing happens if",
" * the table doesn't exist.",
" *",
" * @param c the connection to use",
" * @param table the table to drop",
" * @throws SQLException if an unexpected database error occurs",
" */",
" static void dropTable(Connection c, String table) throws SQLException {",
" // Create a savepoint that we can roll back to if drop table fails.",
" // This is not needed by Derby, but some databases (e.g., PostgreSQL)",
" // don't allow more operations in a transaction if a statement fails,",
" // and we want to be able to run these tests against other databases",
" // than Derby.",
" Savepoint sp = c.setSavepoint();",
" Statement stmt = c.createStatement();",
" try {",
" stmt.executeUpdate(\"DROP TABLE \" + table);",
" } catch (SQLException e) {",
" // OK to fail if table doesn't exist, roll back to savepoint",
" c.rollback(sp);",
" }",
" stmt.close();",
" c.releaseSavepoint(sp);",
" }",
"}"
],
"header": "@@ -0,0 +1,75 @@",
"removed": []
}
]
}
] |
derby-DERBY-2911-29141b85
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Improved comments as suggested in the review.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@635577 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/BackgroundCleaner.java",
"hunks": [
{
"added": [
" * A background cleaner that {@code ConcurrentCache} can use to clean {@code",
" * Cacheable}s asynchronously in a background instead of synchronously in the",
" * user threads. It is normally used by the replacement algorithm in order to",
" * make dirty {@code Cacheable}s clean and evictable in the future. When the"
],
"header": "@@ -29,8 +29,10 @@ import org.apache.derby.iapi.services.daemon.DaemonService;",
"removed": [
" * A background cleaner which can be used by <code>ConcurrentCache</code> so",
" * that it doesn't have to wait for clean operations to finish. When the"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" // Successfully scheduled the clean operation. We can't",
" // evict it until the clean operation has finished. Since",
" // we'd like to be as responsive as possible, move on to",
" // the next entry instead of waiting for the clean",
" // operation to finish."
],
"header": "@@ -474,8 +474,11 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // Successfully scheduled the clean operation. Move on to",
" // the next entry."
]
},
{
"added": [
"",
" // If no one has touched the entry while we were cleaning it, we",
" // could reuse it at this point. The old buffer manager (Clock)",
" // would however under high load normally move on to the next",
" // entry in the clock instead of reusing the one it recently",
" // cleaned. Some of the performance tests performed as part of",
" // DERBY-2911 indicated that not reusing the entry that was just",
" // cleaned made the replacement algorithm more efficient. For now",
" // we try to stay as close to the old buffer manager as possible",
" // and don't reuse the entry immediately."
],
"header": "@@ -493,6 +496,16 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
}
]
}
] |
derby-DERBY-2911-2d09c33b
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Removed ClockPolicy.trimMe() since it adds complexity without adding
any obvious benefit.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642755 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" * Tells whether there currently is a thread in the {@code doShrink()}",
" * method. If this variable is {@code true} a call to {@code doShrink()}",
" * will be a no-op."
],
"header": "@@ -115,9 +115,9 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * Tells whether there currently is a thread in the <code>doShrink()</code>",
" * or <code>trimToSize()</code> methods. If this variable is",
" * <code>true</code> a call to any one of those methods will be a no-op."
]
},
{
"added": [
" // not enough. If we manage to change isShrinking atomically from false",
" // to true, no one else is currently inside shrinkMe(), and others will",
" // be blocked from entering it until we reset isShrinking to false.",
" shrinkMe();",
" // allow others to call shrinkMe()"
],
"header": "@@ -549,31 +549,14 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // not enough.",
" if (shrinkMe()) {",
" // the clock shrunk, try to trim it too",
" trimMe();",
" }",
" } finally {",
" isShrinking.set(false);",
" }",
" }",
" }",
"",
" /**",
" * Try to reduce the size of the clock as much as possible by removing",
" * invalid entries. In most cases, this method will do nothing.",
" *",
" * @see #trimMe()",
" */",
" public void trimToSize() {",
" // ignore this request if we're already performing trim or shrink",
" if (isShrinking.compareAndSet(false, true)) {",
" try {",
" trimMe();"
]
},
{
"added": [
" * by a single thread at a time.",
" private void shrinkMe() {"
],
"header": "@@ -581,13 +564,9 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * by a single thread at a time, and should not be called concurrently",
" * with <code>trimMe()</code>.",
" *",
" * @return {@code true} if the clock shrunk as a result of calling this",
" * method",
" private boolean shrinkMe() {"
]
},
{
"added": [],
"header": "@@ -604,8 +583,6 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" boolean shrunk = false;",
""
]
},
{
"added": [],
"header": "@@ -641,7 +618,6 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" shrunk = true;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [],
"header": "@@ -571,7 +571,6 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" boolean shrunk = false;"
]
},
{
"added": [],
"header": "@@ -582,16 +581,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" shrunk = true;",
" if (shrunk) {",
" replacementPolicy.trimToSize();",
" }"
]
},
{
"added": [],
"header": "@@ -637,7 +632,6 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" boolean shrunk = false;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ReplacementPolicy.java",
"hunks": [
{
"added": [],
"header": "@@ -55,15 +55,6 @@ interface ReplacementPolicy {",
"removed": [
" /**",
" * Try to reduce the size of the cache as much as possible by removing",
" * invalid entries. Depending on the underlying data structure, this might",
" * be a very expensive operation. The implementations are therefore allowed",
" * to ignore calls to this method when they think the cost outweighs the",
" * benefit.",
" */",
" void trimToSize();",
""
]
}
]
}
] |
derby-DERBY-2911-3141a3ba
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Improved comments as suggested in the review.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@635556 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" * Evict an entry to make room for a new entry that is being inserted into",
" * the cache. Clear the identity of its {@code Cacheable} and set it to",
" * {@code null}. When this method is called, the caller has already chosen",
" * the {@code Cacheable} for reuse. Therefore, this method won't call",
" * {@code CacheEntry.free()} as that would make the {@code Cacheable} free",
" * for reuse by other entries as well.",
" *",
" * <p>",
" *",
" * The caller must have locked the entry that is about to be evicted."
],
"header": "@@ -168,11 +168,16 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * Remove an entry from the cache. Clear the identity of its",
" * <code>Cacheable</code> and set it to null. This method is called when",
" * the replacement algorithm needs to evict an entry from the cache in",
" * order to make room for a new entry. The caller must have locked the",
" * entry that is about to be evicted."
]
},
{
"added": [
" // Use get() instead of getEntry() so that we don't insert an empty",
" // entry if the requested object isn't there."
],
"header": "@@ -310,8 +315,8 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // We don't want to insert it if it's not there, so there's no need to",
" // use getEntry()."
]
},
{
"added": [
" // The entry must be present and kept when this method is called, so we",
" // don't need the complexity of getEntry() to ensure that the entry is",
" // not added to or removed from the cache before we have locked",
" // it. Just call get() which is cheaper."
],
"header": "@@ -392,7 +397,10 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // The entry must be present, so we don't need to call getEntry()."
]
},
{
"added": [
"",
" // The entry must be present and kept when this method is called, so we",
" // don't need the complexity of getEntry() to ensure that the entry is",
" // not added to or removed from the cache before we have locked",
" // it. Just call get() which is cheaper.",
""
],
"header": "@@ -416,9 +424,14 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // The entry must be present, so we don't need to call getEntry()."
]
}
]
}
] |
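The reworded comments in the commit above distinguish a plain `get()` look-up, which never inserts, from a `getEntry()`-style look-up that creates an empty entry on a miss. A minimal sketch of that distinction, using `ConcurrentHashMap` as a hypothetical stand-in (class and method names here are illustrative, not Derby's actual `ConcurrentCache`):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in: find() mirrors get() (no insertion on a miss),
// while findOrInsert() mirrors the getEntry() path, which inserts a fresh
// empty entry when the key is absent.
public class LookupSketch {
    private final ConcurrentHashMap<String, StringBuilder> cache =
            new ConcurrentHashMap<>();

    // Like get(): never inserts, so a miss leaves the map unchanged.
    StringBuilder find(String key) {
        return cache.get(key);
    }

    // Like getEntry(): a miss atomically inserts a fresh, empty entry.
    StringBuilder findOrInsert(String key) {
        return cache.computeIfAbsent(key, k -> new StringBuilder());
    }

    public static void main(String[] args) {
        LookupSketch s = new LookupSketch();
        System.out.println(s.find("a"));    // null, nothing inserted
        System.out.println(s.cache.size()); // still 0
        s.findOrInsert("a");
        System.out.println(s.cache.size()); // 1
    }
}
```

Using the cheaper non-inserting look-up where the caller only wants to test for presence avoids polluting the cache with empty entries, which is the point the reworded comments make.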
derby-DERBY-2911-45c4ca4a
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Release the ReentrantLock on the CacheEntry while the identity of the
Cacheable is being set. Otherwise, we may run into a deadlock if the
Cacheable's setIdentity() or createIdentity() method accesses the cache.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@603491 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/CacheEntry.java",
"hunks": [
{
"added": [
" * <dt>Uninitialized</dt> <dd>The entry object has just been constructed, but",
" * has not yet been initialized. In this state, <code>isValid()</code> returns",
" * <code>false</code>, whereas <code>isKept()</code> returns <code>true</code>",
" * in order to prevent removal of the entry until it has been initialized.",
" * When the entry is in this state, calls to",
" * <code>lockWhenIdentityIsSet()</code> will block until",
" * <code>settingIdentityComplete()</code> has been called.</dd>"
],
"header": "@@ -39,13 +39,13 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
" * <dt>Uninitialized</dt> <dd>The entry object has just been constructed. In",
" * this state, <code>isValid()</code> and <code>isKept()</code> return",
" * <code>false</code>, and <code>getCacheable()</code> returns",
" * <code>null</code>. As long as the entry is in this state, the reference to",
" * the object should not be made available to other threads than the one that",
" * created it, since there is no way for other threads to see the difference",
" * between an uninitialized entry and a removed entry.</dd>"
]
},
{
"added": [
" /**",
" * Condition variable used to notify a thread that the setting of this",
" * entry's identity is complete. This variable is non-null when the object",
" * is created, and will be set to null when the identity has been set.",
" * @see #settingIdentityComplete()",
" */",
" private Condition settingIdentity = mutex.newCondition();",
""
],
"header": "@@ -93,6 +93,14 @@ final class CacheEntry {",
"removed": []
},
{
"added": [
" /**",
" * Block until this entry's cacheable has been initialized (that is, until",
" * <code>settingIdentityComplete()</code> has been called on this object)",
" * and the current thread is granted exclusive access to the entry.",
" */",
" void lockWhenIdentityIsSet() {",
" lock();",
" while (settingIdentity != null) {",
" settingIdentity.awaitUninterruptibly();",
" }",
" }",
""
],
"header": "@@ -109,6 +117,18 @@ final class CacheEntry {",
"removed": []
},
{
"added": [
" /**",
" * Notify this entry that the initialization of its cacheable has been",
" * completed. This method should be called after",
" * <code>Cacheable.setIdentity()</code> or",
" * <code>Cacheable.createIdentity()</code> has been called.",
" */",
" void settingIdentityComplete() {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(mutex.isHeldByCurrentThread());",
" }",
" settingIdentity.signalAll();",
" settingIdentity = null;",
" }",
""
],
"header": "@@ -116,6 +136,20 @@ final class CacheEntry {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" if (e.isKept()) {",
" // The entry is in use. Move on to the next entry.",
" continue;",
" }",
"",
" // At this point the entry must be valid. If it's not, it's",
" // either removed (in which case we shouldn't get here), or",
" // it is setting it's identity (in which case it is kept,",
" // and we shouldn't get here)."
],
"header": "@@ -436,20 +436,22 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // At this point the entry must be valid. Otherwise, it",
" // would have been removed from the Holder.",
" if (e.isKept()) {",
" // The entry is in use. Move on to the next entry.",
" continue;",
" }",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" * not kept. If another thread is currently setting the identity of this",
" * entry, this method will block until the identity has been set."
],
"header": "@@ -105,7 +105,8 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * not kept."
]
},
{
"added": [
" // Found an entry in the cache. Lock it, but wait until its",
" // identity has been set.",
" entry.lockWhenIdentityIsSet();"
],
"header": "@@ -114,9 +115,9 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // Found an entry in the cache. Lock it and validate that it's",
" // still there.",
" entry.lock();"
]
},
{
"added": [],
"header": "@@ -146,22 +147,6 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" /**",
" * Find a free cacheable and give the specified entry a pointer to it. If",
" * a free cacheable cannot be found, allocate a new one. The entry must be",
" * locked by the current thread.",
" *",
" * @param entry the entry for which a <code>Cacheable</code> is needed",
" * @exception StandardException if an error occurs during the search for",
" * a free cacheable",
" */",
" private void findFreeCacheable(CacheEntry entry) throws StandardException {",
" replacementPolicy.insertEntry(entry);",
" if (!entry.isValid()) {",
" entry.setCacheable(holderFactory.newCacheable(this));",
" }",
" }",
""
]
},
{
"added": [
" * Find or create an object in the cache. If the object is not presently",
" * in the cache, it will be added to the cache.",
" * @param key the identity of the object to find or create",
" * @param create whether or not the object should be created",
" * @param createParameter used as argument to <code>createIdentity()</code>",
" * when <code>create</code> is <code>true</code>",
" * @return the cached object, or <code>null</code> if it cannot be found",
" * @throws StandardException if an error happens when accessing the object",
" private Cacheable findOrCreateObject(Object key, boolean create,",
" Object createParameter)",
"",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(createParameter == null || create,",
" \"createParameter should be null when create is false\");",
" }",
"",
" if (stopped) {",
" return null;",
" }",
"",
" // A free cacheable which we'll initialize if we don't find the object",
" // in the cache.",
" Cacheable free;",
"",
" CacheEntry entry = getEntry(key);",
" try {",
" Cacheable item = entry.getCacheable();",
" if (item != null) {",
" if (create) {",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
" entry.keep(true);",
" return item;",
" }",
"",
" // not currently in the cache",
" try {",
" replacementPolicy.insertEntry(entry);",
" } catch (StandardException se) {",
" removeEntry(key);",
" throw se;",
" }",
"",
" free = entry.getCacheable();",
" if (free == null) {",
" // We didn't get a reusable cacheable. Create a new one.",
" free = holderFactory.newCacheable(this);",
" }",
"",
" entry.keep(true);",
"",
" } finally {",
" entry.unlock();",
" }",
"",
" // Set the identity in a try/finally so that we can remove the entry",
" // if the operation fails. We have released the lock on the entry so",
" // that we don't run into deadlocks if the user code (setIdentity() or",
" // createIdentity()) reenters the cache.",
" c = free.createIdentity(key, createParameter);",
" c = free.setIdentity(key);",
" entry.lock();",
" try {",
" // Notify the entry that setIdentity() or createIdentity() has",
" // finished.",
" entry.settingIdentityComplete();",
" if (c == null) {",
" // Setting identity failed, or the object was not found.",
" removeEntry(key);",
" } else {",
" // Successfully set the identity.",
" entry.setCacheable(c);",
" }",
" } finally {",
" entry.unlock();"
],
"header": "@@ -196,52 +181,94 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * Initialize an entry by finding a free <code>Cacheable</code> and setting",
" * its identity. If the identity is successfully set, the entry is kept and",
" * the <code>Cacheable</code> is inserted into the entry and returned.",
" * Otherwise, the entry is removed from the cache and <code>null</code>",
" * is returned.",
" * @param entry the entry to initialize",
" * @param key the identity to set",
" * @param createParameter parameter to <code>createIdentity()</code>",
" * (ignored if <code>create</code> is <code>false</code>)",
" * @param create if <code>true</code>, create new identity with",
" * <code>Cacheable.createIdentity()</code>; otherwise, set identity with",
" * <code>Cacheable.setIdentity()</code>",
" * @return a <code>Cacheable</code> if the identity could be set,",
" * <code>null</code> otherwise",
" * @exception StandardException if an error occured while searching for a",
" * free <code>Cacheable</code> or while setting the identity",
" * @see Cacheable#setIdentity(Object)",
" * @see Cacheable#createIdentity(Object,Object)",
" private Cacheable initIdentity(CacheEntry entry,",
" Object key, Object createParameter, boolean create)",
" findFreeCacheable(entry);",
" c = entry.getCacheable().createIdentity(key, createParameter);",
" c = entry.getCacheable().setIdentity(key);",
" if (c == null) {",
" // Either an exception was thrown, or setIdentity() or",
" // createIdentity() returned null. In either case, the entry is",
" // invalid and must be removed.",
" removeEntry(key);",
" // If we successfully set the identity, insert the cacheable and mark",
" // the entry as kept.",
" if (c != null) {",
" entry.setCacheable(c);",
" entry.keep(true);",
" }"
]
},
{
"added": [
" return findOrCreateObject(key, false, null);"
],
"header": "@@ -256,23 +283,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = getEntry(key);",
" try {",
" Cacheable item = entry.getCacheable();",
" if (item != null) {",
" entry.keep(true);",
" return item;",
" }",
" // not currently in the cache",
" return initIdentity(entry, key, null, false);",
" } finally {",
" entry.unlock();",
" }"
]
},
{
"added": [
"",
" // Lock the entry, but wait until its identity has been set.",
" entry.lockWhenIdentityIsSet();"
],
"header": "@@ -296,7 +307,9 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.lock();"
]
},
{
"added": [
" return findOrCreateObject(key, true, createParameter);"
],
"header": "@@ -325,21 +338,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = getEntry(key);",
" try {",
" if (entry.isValid()) {",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
" return initIdentity(entry, key, createParameter, true);",
" } finally {",
" entry.unlock();",
" }"
]
}
]
}
] |
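The commit above avoids deadlock by releasing the entry lock while `setIdentity()`/`createIdentity()` run, and by making other readers block on a condition variable until `settingIdentityComplete()` is called. A simplified sketch of that wait/signal pattern, assuming a toy `EntrySketch` class (not Derby's actual `CacheEntry`):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class IdentityWaitDemo {

    // Simplified stand-in for the patched CacheEntry: the Condition is
    // non-null until the identity has been set, and readers wait on it.
    static class EntrySketch {
        private final ReentrantLock mutex = new ReentrantLock();
        private Condition settingIdentity = mutex.newCondition();
        private Object identity;

        // Block until the identity is set, then hold the entry lock.
        void lockWhenIdentityIsSet() {
            mutex.lock();
            while (settingIdentity != null) {
                settingIdentity.awaitUninterruptibly();
            }
        }

        void unlock() {
            mutex.unlock();
        }

        // Called after the (possibly reentrant) identity-setting work has
        // finished outside the lock; wakes up all waiting readers.
        void setIdentityAndComplete(Object id) {
            mutex.lock();
            try {
                identity = id;
                settingIdentity.signalAll();
                settingIdentity = null; // later callers skip the wait loop
            } finally {
                mutex.unlock();
            }
        }

        Object getIdentity() {
            return identity;
        }
    }

    // One thread sets the identity; the other waits for it.
    static String demo() throws InterruptedException {
        final EntrySketch e = new EntrySketch();
        Thread setter = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(50); // simulate slow setIdentity()
                } catch (InterruptedException ignored) {
                }
                e.setIdentityAndComplete("page-42");
            }
        });
        setter.start();
        e.lockWhenIdentityIsSet(); // blocks until the identity is set
        try {
            setter.join();
            return (String) e.getIdentity();
        } finally {
            e.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("identity=" + demo());
    }
}
```

Because the identity-setting work runs without the entry lock held, user code that reenters the cache from `setIdentity()` cannot deadlock against readers of the same entry, which is the scenario the commit fixes.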
derby-DERBY-2911-651fa21e
|
DERBY-2911 (partial) Implemented background cleaner for ConcurrentCache
Created a background cleaner which enables user threads to request
that the cleaning of a cached object happen asynchronously in Derby's
service daemon.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@596265 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" // This variable will hold a dirty cacheable that should be cleaned",
" // after the try/finally block.",
" final Cacheable dirty;",
""
],
"header": "@@ -355,6 +355,10 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" /** The maximum size (number of elements) for this cache. */",
" private final int maxSize;"
],
"header": "@@ -57,6 +57,8 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" /**",
" * Background cleaner which can be used to clean cached objects in a",
" * separate thread to avoid blocking the user threads.",
" */",
" private BackgroundCleaner cleaner;",
""
],
"header": "@@ -69,6 +71,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" this.maxSize = maxSize;"
],
"header": "@@ -81,6 +89,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" entry.keep(true);"
],
"header": "@@ -222,7 +231,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.keep();"
]
},
{
"added": [
" entry.keep(true);"
],
"header": "@@ -247,7 +256,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.keep();"
]
},
{
"added": [
" entry.keep(true);"
],
"header": "@@ -284,7 +293,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.keep();"
]
},
{
"added": [
" final Cacheable dirtyObject;",
" if (!entry.isValid()) {",
" // no need to clean an invalid entry",
" continue;",
" }",
" if (partialKey != null && !partialKey.match(c.getIdentity())) {",
" // don't clean objects that don't match the partial key",
" continue;",
" }",
" if (!c.isDirty()) {",
" // already clean",
" continue;",
"",
" // Increment the keep count for this entry to prevent others",
" // from removing it. Then release the lock on the entry to",
" // avoid blocking others when the object is cleaned.",
" entry.keep(false);",
" dirtyObject = c;",
"",
" } finally {",
" entry.unlock();",
" }",
"",
" // Clean the object and decrement the keep count.",
" cleanAndUnkeepEntry(entry, dirtyObject);",
" }",
" }",
"",
" /**",
" * Clean an entry in the cache.",
" *",
" * @param entry the entry to clean",
" * @exception StandardException if an error occurs while cleaning",
" */",
" void cleanEntry(CacheEntry entry) throws StandardException {",
" // Fetch the cacheable while having exclusive access to the entry.",
" // Release the lock before cleaning to avoid blocking others.",
" Cacheable item;",
" entry.lock();",
" try {",
" item = entry.getCacheable();",
" if (item == null) {",
" // nothing to do",
" return;",
" }",
" entry.keep(false);",
" } finally {",
" entry.unlock();",
" }",
" cleanAndUnkeepEntry(entry, item);",
" }",
"",
" /**",
" * Clean an entry in the cache and decrement its keep count. The entry must",
" * be kept before this method is called, and it must contain the specified",
" * <code>Cacheable</code>.",
" *",
" * @param entry the entry to clean",
" * @param item the cached object contained in the entry",
" * @exception StandardException if an error occurs while cleaning",
" */",
" void cleanAndUnkeepEntry(CacheEntry entry, Cacheable item)",
" throws StandardException {",
" try {",
" // Clean the cacheable while we're not holding",
" // the lock on its entry.",
" item.clean(false);",
" } finally {",
" // Re-obtain the lock on the entry, and reduce the keep count",
" // since the entry should not be kept by the cleaner any longer.",
" entry.lock();",
" try {",
" if (SanityManager.DEBUG) {",
" // Since the entry is kept, the Cacheable shouldn't",
" // have changed.",
" SanityManager.ASSERT(entry.getCacheable() == item,",
" \"CacheEntry didn't contain the expected Cacheable\");",
" }",
" entry.unkeep();"
],
"header": "@@ -398,14 +407,89 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" if (c != null && c.isDirty() &&",
" (partialKey == null ||",
" partialKey.match(c.getIdentity()))) {",
" c.clean(false);"
]
},
{
"added": [
" if (cleaner != null) {",
" cleaner.unsubscribe();",
" }",
" /**",
" * Specify a daemon service that can be used to perform operations in",
" * the background. Callers must provide enough synchronization so that",
" * they have exclusive access to the cache when this method is called.",
" *",
" * @param daemon the daemon service to use",
" */",
" if (cleaner != null) {",
" cleaner.unsubscribe();",
" }",
" // Create a background cleaner that can queue up 1/10 of the elements",
" // in the cache.",
" cleaner = new BackgroundCleaner(this, daemon, Math.max(maxSize/10, 1));",
" }",
"",
" BackgroundCleaner getBackgroundCleaner() {",
" return cleaner;"
],
"header": "@@ -438,14 +522,32 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // TODO - unsubscribe background writer",
" // TODO"
]
}
]
}
] |
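The `cleanAndUnkeepEntry()` logic added in the commit above pins the entry with a keep count, runs the potentially slow `clean()` without holding the entry lock, and then unpins it under the lock again. A minimal single-threaded sketch of that pattern, with simplified stand-in classes rather than Derby's actual `ConcurrentCache`/`Cacheable`:

```java
import java.util.concurrent.locks.ReentrantLock;

public class CleanerSketch {

    // Toy entry: keepCount models CacheEntry.keep()/unkeep(), and clean()
    // models Cacheable.clean(), which may involve slow I/O.
    static class Entry {
        final ReentrantLock lock = new ReentrantLock();
        int keepCount;
        boolean dirty = true;

        void keep()   { keepCount++; }
        void unkeep() { keepCount--; }
        void clean()  { dirty = false; } // slow work in the real cache
    }

    static void cleanEntry(Entry entry) {
        entry.lock.lock();
        try {
            if (!entry.dirty) {
                return;       // already clean, nothing to do
            }
            entry.keep();     // pin so nobody evicts it while we clean
        } finally {
            entry.lock.unlock();
        }
        // The expensive clean() runs with the entry lock released, so
        // other threads are not blocked on this entry meanwhile.
        try {
            entry.clean();
        } finally {
            entry.lock.lock();
            try {
                entry.unkeep(); // release the pin under the lock
            } finally {
                entry.lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        Entry e = new Entry();
        cleanEntry(e);
        System.out.println("dirty=" + e.dirty + " keepCount=" + e.keepCount);
    }
}
```

The keep count is what makes it safe to drop the lock during `clean()`: a kept entry cannot be removed or reused, so the cleaner can re-lock afterwards and still find the same `Cacheable` in place.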
derby-DERBY-2911-6a317f01
|
DERBY-2911 (partial) Implement a buffer manager using
java.util.concurrent classes
Changed the replacement algorithm so that the cache will never grow if
there is an unused entry that can be reused.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@585060 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
"import java.util.concurrent.atomic.AtomicInteger;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
},
{
"added": [
" /**",
" * The number of free entries. This is the number of objects that have been",
" * removed from the cache and whose entries are free to be reused without",
" * eviction.",
" */",
" private final AtomicInteger freeEntries = new AtomicInteger();",
""
],
"header": "@@ -87,6 +88,13 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
"",
" final boolean isFull;",
" if (freeEntries.get() == 0) {",
" // We have not reached the maximum size yet, and there's no",
" // free entry to reuse. Make room by growing.",
" return grow(entry);",
" }",
" // The cache is not full, but there are free entries that can",
" // be reused.",
" isFull = false;",
" } else {",
" // The cache is full, so we'll need to rotate the clock hand",
" // and evict an object.",
" isFull = true;",
" Holder h = rotateClock(entry, (float) 0.2, isFull);"
],
"header": "@@ -108,16 +116,27 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // TODO - check whether there are free entries that could be",
" // used instead of growing",
" return grow(entry);",
" Holder h = rotateClock(entry, (float) 0.2);"
]
},
{
"added": [
" private class Holder implements Callback {"
],
"header": "@@ -135,7 +154,7 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" private static class Holder implements Callback {"
]
},
{
"added": [
" // let others know that a free entry is available",
" int free = freeEntries.incrementAndGet();",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(",
" free > 0,",
" \"freeEntries should be greater than 0, but is \" + free);",
" }"
],
"header": "@@ -183,6 +202,13 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" int free = freeEntries.decrementAndGet();",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(",
" free >= 0, \"freeEntries is negative: \" + free);",
" }"
],
"header": "@@ -197,6 +223,11 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" * Rotate the clock in order to find a free space for a new entry. If",
" * <code>allowEvictions</code> is <code>true</code>, an not recently used",
" * object might be evicted to make room for the new entry. Otherwise, only",
" * unused entries are searched for. When evictions are allowed, entries are",
" * marked as not recently used when the clock hand sweeps over them. The",
" * search stops when a reusable entry is found, or when as many entries as",
" * specified by <code>partOfClock</code> have been checked. If there are",
" * free (unused) entries, the search will continue until a reusable entry",
" * is found, regardless of how many entries that need to be checked.",
" * @param allowEvictions tells whether evictions are allowed (normally",
" * <code>true</code> if the cache is full and <code>false</code> otherwise)",
" private Holder rotateClock(CacheEntry entry, float partOfClock,",
" boolean allowEvictions)",
" if (allowEvictions) {",
" final int size;",
" synchronized (clock) {",
" size = clock.size();",
" }",
" if (size < 20) {",
" itemsToCheck = size * 2;",
" itemsToCheck = (int) (size * partOfClock);",
" } else {",
" // we don't allow evictions, so we shouldn't check any items unless",
" // there are unused ones",
" itemsToCheck = 0;",
" // Check up to itemsToCheck entries before giving up, but don't give up",
" // if we know there are unused entries.",
" while (itemsToCheck-- > 0 || freeEntries.get() > 0) {"
],
"header": "@@ -260,34 +291,52 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * Rotate the clock in order to find a free space for a new entry, or a",
" * <em>not recently used</em> entry that we can evict to make free",
" * space. Entries that we move past are marked as recently used.",
" private Holder rotateClock(CacheEntry entry, float partOfClock)",
" synchronized (clock) {",
" itemsToCheck = clock.size();",
" if (itemsToCheck < 20) {",
" itemsToCheck *= 2;",
" itemsToCheck *= partOfClock;",
" while (itemsToCheck-- > 0) {"
]
},
{
"added": [
" if (!allowEvictions) {",
" // Evictions are not allowed, so we can't reuse this entry.",
" continue;",
" }",
""
],
"header": "@@ -301,6 +350,11 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
}
]
}
] |
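The commit above tracks the number of unused entries in an `AtomicInteger` so the cache grows only when no free entry can be reused, and evicts only when it is full. A small single-threaded sketch of that three-way decision, using a hypothetical simplified policy class (not Derby's `ClockPolicy`; real synchronization around `size` is elided):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative decision logic: grow only when no free entry exists,
// reuse a free entry when one does, evict only when the cache is full.
public class FreeEntriesSketch {
    // Number of entries whose objects were removed and can be reused.
    private final AtomicInteger freeEntries = new AtomicInteger();
    private int size;            // current number of entries
    private final int maxSize;   // capacity of the cache

    FreeEntriesSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    // Called when a cached object is removed and its entry becomes free.
    void entryFreed() {
        freeEntries.incrementAndGet();
    }

    // Returns which action an insertion would take.
    String insertDecision() {
        if (size < maxSize) {
            if (freeEntries.get() == 0) {
                size++;          // no free entry to reuse: grow
                return "grow";
            }
            freeEntries.decrementAndGet();
            return "reuse";      // not full, but a free entry exists
        }
        return "evict";          // full: rotate the clock hand and evict
    }

    public static void main(String[] args) {
        FreeEntriesSketch c = new FreeEntriesSketch(2);
        System.out.println(c.insertDecision()); // grow
        c.entryFreed();
        System.out.println(c.insertDecision()); // reuse
        System.out.println(c.insertDecision()); // grow
        System.out.println(c.insertDecision()); // evict
    }
}
```

Counting free entries cheaply with an `AtomicInteger` lets the hot insertion path make this decision without locking the whole clock structure.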
derby-DERBY-2911-721f8959
|
DERBY-2911 (partial) Removed unused methods from the CacheManager and
CacheFactory interfaces
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@571055 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/cache/CacheManager.java",
"hunks": [
{
"added": [],
"header": "@@ -31,21 +31,6 @@ import java.util.Collection;",
"removed": [
" /**",
" * @return the current maximum size of the cache.",
" */",
"\tpublic long getMaximumSize();",
"",
" /**",
" * Change the maximum size of the cache. If the size is decreased then cache entries",
" * will be thrown out.",
" *",
" * @param newSize the new maximum cache size",
" *",
" * @exception StandardException Standard Derby error policy",
" */",
"\tpublic void resize( long newSize) throws StandardException;",
""
]
},
{
"added": [],
"header": "@@ -112,24 +97,6 @@ public interface CacheManager {",
"removed": [
" /**",
" * Determine whether a key is in the cache.",
" *",
" * <b>WARNING:</b> This method does not keep a lock on the entry or the cache, so",
" * the return value could be made incorrect by the time that this method returns.",
" * Therefore this method should only be used for statistical purposes.",
" */",
" public boolean containsKey( Object key);",
" ",
" /**",
" * Mark a set of entries as having been used. Normally this is done as a side effect",
" * of find() or findCached. If the entry has been replaced then this method",
" * does nothing.",
" *",
" * @param keys the key of the used entry.",
" */",
" public void setUsed( Object[] keys);",
" "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/RAMTransaction.java",
"hunks": [
{
"added": [],
"header": "@@ -432,13 +432,6 @@ public class RAMTransaction",
"removed": [
"\t/**",
"\t\tGet cache statistics for the specified cache",
"\t*/",
"\tpublic long[] getCacheStats(String cacheName) {",
"\t\treturn getRawStoreXact().getCacheStats(cacheName);",
"\t}",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/xact/Xact.java",
"hunks": [
{
"added": [],
"header": "@@ -328,20 +328,6 @@ public class Xact extends RawTransaction implements Limit {",
"removed": [
"\t/**",
"\t\tGet cache statistics for the specified cache",
"\t*/",
"\tpublic long[] getCacheStats(String cacheName) {",
"\t\treturn getDataFactory().getCacheStats(cacheName);",
"\t}",
"",
"\t/**",
"\t\tReset the cache statistics for the specified cache",
"\t*/",
"\tpublic void resetCacheStats(String cacheName) {",
"\t\tgetDataFactory().resetCacheStats(cacheName);",
"\t}",
""
]
}
]
}
] |
derby-DERBY-2911-73a34db6
|
DERBY-2911 (partial) Added a partial implementation of a buffer
manager using the java.util.concurrent utilities. Only basic
operations to fetch objects from the cache are implemented. For the
other methods there are stubs only. The code is currently disabled.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@572645 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCacheFactory.java",
"hunks": [
{
"added": [
"/*",
"",
" Derby - Class org.apache.derby.impl.services.cache.ConcurrentCacheFactory",
"",
" Licensed to the Apache Software Foundation (ASF) under one or more",
" contributor license agreements. See the NOTICE file distributed with",
" this work for additional information regarding copyright ownership.",
" The ASF licenses this file to you under the Apache License, Version 2.0",
" (the \"License\"); you may not use this file except in compliance with",
" the License. You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
"",
" */",
"",
"package org.apache.derby.impl.services.cache;",
"",
"import org.apache.derby.iapi.services.cache.CacheFactory;",
"import org.apache.derby.iapi.services.cache.CacheManager;",
"import org.apache.derby.iapi.services.cache.CacheableFactory;",
"",
"/**",
" * Factory class which creates cache manager instances based on the",
" * <code>ConcurrentCache</code> implementation.",
" */",
"public class ConcurrentCacheFactory implements CacheFactory {",
" /**",
" * Create a new <code>ConcurrentCache</code> instance.",
" *",
" * @param holderFactory factory which creates <code>Cacheable</code>s",
" * @param name name of the cache",
" * @param initialSize initial size of the cache (number of objects)",
" * @param maximumSize maximum size of the cache (number of objects)",
" * @return a <code>ConcurrentCache</code> instance",
" */",
" public CacheManager newCacheManager(CacheableFactory holderFactory,",
" String name,",
" int initialSize, int maximumSize) {",
" return new ConcurrentCache(holderFactory, name);",
" }",
"}"
],
"header": "@@ -0,0 +1,47 @@",
"removed": []
}
]
}
] |
derby-DERBY-2911-7a816cbd
|
DERBY-2911 (partial) Implement a buffer manager using java.util.concurrent classes
Added replacement algorithm in the new buffer manager.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@580252 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/CacheEntry.java",
"hunks": [
{
"added": [
"",
"",
""
],
"header": "@@ -76,13 +76,16 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
},
{
"added": [
" /**",
" * Callback object used to notify the replacement algorithm about events on",
" * the cached objects (like accesses and requests for removal).",
" */",
" private ReplacementPolicy.Callback callback;",
""
],
"header": "@@ -90,6 +93,12 @@ final class CacheEntry {",
"removed": []
},
{
"added": [
" callback.access();"
],
"header": "@@ -116,6 +125,7 @@ final class CacheEntry {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" /** Replacement policy to be used for this cache. */",
" private final ReplacementPolicy replacementPolicy;"
],
"header": "@@ -57,6 +57,8 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" * @param maxSize maximum number of elements in the cache",
" ConcurrentCache(CacheableFactory holderFactory, String name, int maxSize) {",
" replacementPolicy = new ClockPolicy(this, maxSize);"
],
"header": "@@ -72,9 +74,11 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" ConcurrentCache(CacheableFactory holderFactory, String name) {"
]
},
{
"added": [
" * @exception StandardException if an error occurs during the search for",
" * a free cacheable",
" private void findFreeCacheable(CacheEntry entry) throws StandardException {",
" replacementPolicy.insertEntry(entry);",
" if (!entry.isValid()) {",
" entry.setCacheable(holderFactory.newCacheable(this));",
" }",
" * if the entry is present in the cache and locked by the current thread."
],
"header": "@@ -130,17 +134,20 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" private void findFreeCacheable(CacheEntry entry) {",
" // TODO - When the replacement algorithm has been implemented, we",
" // should reuse a cacheable if possible.",
" entry.setCacheable(holderFactory.newCacheable(this));",
" * if the entry is locked by the current thread."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ReplacementPolicy.java",
"hunks": [
{
"added": [
"/*",
"",
" Derby - Class org.apache.derby.impl.services.cache.ReplacementPolicy",
"",
" Licensed to the Apache Software Foundation (ASF) under one or more",
" contributor license agreements. See the NOTICE file distributed with",
" this work for additional information regarding copyright ownership.",
" The ASF licenses this file to you under the Apache License, Version 2.0",
" (the \"License\"); you may not use this file except in compliance with",
" the License. You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
"",
" */",
"",
"package org.apache.derby.impl.services.cache;",
"",
"import org.apache.derby.iapi.error.StandardException;",
"",
"/**",
" * Interface that must be implemented by classes that provide a replacement",
" * algorithm for <code>ConcurrentCache</code>.",
" */",
"interface ReplacementPolicy {",
" /**",
" * Insert an entry into the <code>ReplacementPolicy</code>'s data",
" * structure, possibly evicting another entry. The entry should be",
" * uninitialized when the method is called (that is, its",
" * <code>Cacheable</code> should be <code>null</code>), and it should be",
" * locked. When the method returns, the entry may have been initialized",
" * with a <code>Cacheable</code> which is ready to be reused. It is also",
" * possible that the <code>Cacheable</code> is still <code>null</code> when",
" * the method returns, in which case the caller must allocate one itself.",
" *",
" * @param entry the entry to insert",
" * @return a callback object that can be used to notify the replacement",
" * algorithm about operations performed on the cached object",
" * @exception StandardException if an error occurs while inserting the",
" * entry",
" */",
" Callback insertEntry(CacheEntry entry) throws StandardException;",
"",
" /**",
" * The interface for the callback objects that <code>ConcurrentCache</code>",
" * uses to notify the replacement algorithm about events such as look-ups",
" * and removals. Each <code>Callback</code> object is associated with a",
" * single entry in the cache.",
" */",
" interface Callback {",
" /**",
" * Notify the replacement algorithm that the cache entry has been",
" * accessed. The replacement algorithm can use this information to",
" * collect statistics about access frequency which can be used to",
" * determine the order of evictions.",
" *",
" * <p>",
" *",
" * The entry associated with the callback object must be locked by the",
" * current thread.",
" */",
" void access();",
"",
" /**",
" * Notify the replacement algorithm that the entry associated with this",
" * callback object has been removed, and the callback object and the",
" * <code>Cacheable</code> can be reused.",
" *",
" * <p>",
" *",
" * The entry associated with the callback object must be locked by the",
" * current thread.",
" */",
" void free();",
" }",
"}"
],
"header": "@@ -0,0 +1,81 @@",
"removed": []
}
]
}
] |
derby-DERBY-2911-a0ab25a8
|
DERBY-2911 (partial) Updated class javadoc for CacheEntry with
information about its different states and how to lock more than one
entry without risking deadlocks.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@575607 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/CacheEntry.java",
"hunks": [
{
"added": [
" * this class, except <code>lock()</code>, it must first have called",
" * <code>lock()</code> to ensure exclusive access to the entry.",
" *",
" * <p>",
" *",
" * When no thread holds the lock on the entry, it must be in one of the",
" * following states:",
" *",
" * <dl>",
" *",
" * <dt>Uninitialized</dt> <dd>The entry object has just been constructed. In",
" * this state, <code>isValid()</code> and <code>isKept()</code> return",
" * <code>false</code>, and <code>getCacheable()</code> returns",
" * <code>null</code>. As long as the entry is in this state, the reference to",
" * the object should not be made available to other threads than the one that",
" * created it, since there is no way for other threads to see the difference",
" * between an uninitialized entry and a removed entry.</dd>",
" *",
" * <dt>Unkept</dt> <dd>In this state, the entry object contains a reference to",
" * a <code>Cacheable</code> and the keep count is zero. <code>isValid()</code>",
" * returns <code>true</code> and <code>isKept()</code> returns",
" * <code>false</code> in this state. <code>getCacheable()</code> returns a",
" * non-null value.<dd>",
" *",
" * <dt>Kept</dt> <dd>Same as the unkept state, except that the keep count is",
" * positive and <code>isKept()</code> returns <code>true</code>.</dd>",
" *",
" * <dt>Removed</dt> <dd>The entry has been removed from the cache. In this",
" * state, <code>isValid()</code> and <code>isKept()</code> return",
" * <code>false</code>, and <code>getCacheable()</code> returns",
" * <code>null</code>. When an entry has entered the removed state, it cannot be",
" * transitioned back to any of the other states.</dd>",
" *",
" * </dl>",
" *",
" * <p>",
" *",
" * To prevent deadlocks, each thread should normally lock only one entry at a",
" * time. In some cases it is legitimate to hold the lock on two entries, for",
" * instance if an entry must be evicted to make room for a new entry. If this",
" * is the case, exactly one of the two entries must be in the uninitialized",
" * state, and the uninitialized entry must be locked before the lock on the",
" * other entry can be requested."
],
"header": "@@ -29,9 +29,49 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
" * this class, except <code>lock()</code>, it must have called",
" * <code>lock()</code> to ensure exclusive access to the entry. No thread",
" * should ever lock more than one entry in order to prevent deadlocks."
]
}
]
}
] |
derby-DERBY-2911-a8544646
|
DERBY-2911 (partial) Implemented CacheManager.values() for ConcurrentCache
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@572693 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
"import java.util.ArrayList;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
" /**",
" * Return a collection view of all the <code>Cacheable</code>s in the",
" * cache. There is no guarantee that the objects in the collection can be",
" * accessed in a thread-safe manner once this method has returned, so it",
" * should only be used for diagnostic purposes. (Currently, it is only used",
" * by the <code>StatementCache</code> VTI.)",
" *",
" * @return a collection view of the objects in the cache",
" */",
" public Collection<Cacheable> values() {",
" ArrayList<Cacheable> values = new ArrayList<Cacheable>();",
" for (CacheEntry entry : cache.values()) {",
" entry.lock();",
" try {",
" Cacheable c = entry.getCacheable();",
" if (c != null) {",
" values.add(c);",
" }",
" } finally {",
" entry.unlock();",
" }",
" }",
" return values;"
],
"header": "@@ -401,8 +402,28 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" public Collection values() {",
" // TODO",
" return null;"
]
}
]
}
] |
derby-DERBY-2911-b5ca246a
|
DERBY-2911 (partial)
Improved some javadoc comments in the CacheManager interface and made
them match what's actually implemented.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@573196 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/cache/CacheManager.java",
"hunks": [
{
"added": [
"\t\tRelease a <code>Cacheable</code> object previously found with",
"\t\t<code>find()</code> or <code>findCached()</code>, or created with",
"\t\t<code>create()</code>, and which is still kept by the caller.",
"\t\t@param entry the cached object to release"
],
"header": "@@ -141,9 +141,12 @@ public interface CacheManager {",
"removed": [
"\t\tRelease a Cacheable object previously found with find() or findCached()."
]
},
{
"added": [
"\t\tThe object must previously have been found with <code>find()</code> or",
"\t\t<code>findCached()</code>, or created with <code>create()</code>, and",
"\t\tit must still be kept by the caller."
],
"header": "@@ -152,7 +155,9 @@ public interface CacheManager {",
"removed": [
"\t\tThe object must have previously been found with find() or findCached()."
]
},
{
"added": [
"\t\t@param entry the object to remove from the cache",
""
],
"header": "@@ -162,6 +167,8 @@ public interface CacheManager {",
"removed": []
},
{
"added": [
"\t\tany more valid references on a <code>find()</code>,",
"\t\t<code>findCached()</code> or <code>create()</code> call,"
],
"header": "@@ -212,7 +219,8 @@ public interface CacheManager {",
"removed": [
"\t\tany more valid references on a find() or findCached() call,"
]
},
{
"added": [
"\t * <p>",
"\t * This method should only be used for diagnostic purposes.",
"\t *"
],
"header": "@@ -253,6 +261,9 @@ public interface CacheManager {",
"removed": []
}
]
}
] |
derby-DERBY-2911-ce4ee235
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
Minor change in ClockPolicy.shrinkMe() to make it easier to read.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@642752 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
" // Fetch the next holder from the clock.",
" h = clock.get(pos);",
" // The index of the holder we're looking at. Since no one else than",
" // us can remove elements from the clock while we're in this",
" // method, and new elements will be added at the end of the list,",
" // the index for a holder does not change until we remove it.",
" final int index = pos;",
"",
" // Let pos point at the index of the holder we'll look at in the",
" // next iteration.",
" pos++;",
""
],
"header": "@@ -611,21 +611,25 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" // The index of the holder we're looking at. Since no one else than",
" // us can remove elements from the clock while we're in this",
" // method, and new elements will be added at the end of the list,",
" // the index for a holder does not change until we remove it.",
" final int index;",
"",
" index = pos++;",
" h = clock.get(index);"
]
}
]
}
] |
derby-DERBY-2911-d8bdc4cb
|
DERBY-2911 (partial) Implement a buffer manager using java.util.concurrent classes
Add functionality to shrink the cache when it has grown bigger than
its specified maximum size.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@601680 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/BackgroundCleaner.java",
"hunks": [
{
"added": [
" /**",
" * Flag which tells whether the cleaner should try to shrink the cache",
" * the next time it wakes up.",
" */",
" private volatile boolean shrink;",
""
],
"header": "@@ -54,6 +54,12 @@ final class BackgroundCleaner implements Serviceable {",
"removed": []
},
{
"added": [
" boolean scheduleClean(CacheEntry entry) {"
],
"header": "@@ -83,7 +89,7 @@ final class BackgroundCleaner implements Serviceable {",
"removed": [
" boolean scheduleWork(CacheEntry entry) {"
]
},
{
"added": [
" /**",
" * Request that the cleaner tries to shrink the cache the next time it",
" * wakes up.",
" */",
" void scheduleShrink() {",
" shrink = true;",
" requestService();",
" }",
""
],
"header": "@@ -91,6 +97,15 @@ final class BackgroundCleaner implements Serviceable {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ClockPolicy.java",
"hunks": [
{
"added": [
"import java.util.concurrent.atomic.AtomicBoolean;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
},
{
"added": [
" /**",
" * Tells whether there currently is a thread in the <code>doShrink()</code>",
" * or <code>trimToSize()</code> methods. If this variable is",
" * <code>true</code> a call to any one of those methods will be a no-op.",
" */",
" private final AtomicBoolean isShrinking = new AtomicBoolean();",
""
],
"header": "@@ -95,6 +96,13 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" final int size;",
" size = clock.size();",
" if (size < maxSize) {",
" }",
" }",
"",
" if (size > maxSize) {",
" // Maximum size is exceeded. Shrink the clock in the background",
" // cleaner, if we have one; otherwise, shrink it in the current",
" // thread.",
" BackgroundCleaner cleaner = cacheManager.getBackgroundCleaner();",
" if (cleaner != null) {",
" cleaner.scheduleShrink();",
" doShrink();",
" // Rotate the clock hand (look at up to 20% of the cache) and try to",
" // find free space for the entry. Only allow evictions if the cache",
" // has reached its maximum size. Otherwise, we only look for invalid",
" // entries and rather grow the cache than evict valid entries.",
" Holder h = rotateClock(entry, (float) 0.2, size >= maxSize);"
],
"header": "@@ -117,26 +125,35 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" final boolean isFull;",
" if (clock.size() < maxSize) {",
" // The cache is not full, but there are free entries that can",
" // be reused.",
" isFull = false;",
" // The cache is full, so we'll need to rotate the clock hand",
" // and evict an object.",
" isFull = true;",
" // rotate clock hand (look at up to 20% of the cache)",
" Holder h = rotateClock(entry, (float) 0.2, isFull);"
]
},
{
"added": [
" /**",
" * Flag which tells whether this holder has been evicted from the",
" * clock. If it has been evicted, it can't be reused when a new entry",
" * is inserted. Only the owner of this holder's monitor is allowed to",
" * access this variable.",
" */",
" private boolean evicted;",
""
],
"header": "@@ -181,6 +198,14 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" * specified entry, <code>false</code> if someone else has taken it or",
" * the holder has been evicted from the clock",
" if (entry == null && !evicted) {"
],
"header": "@@ -218,10 +243,11 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" * specified entry, <code>false</code> if someone else has taken it",
" if (entry == null) {"
]
},
{
"added": [
"",
" /**",
" * Evict this holder from the clock if it is not associated with an",
" * entry.",
" *",
" * @return <code>true</code> if the holder was successfully evicted,",
" * <code>false</code> otherwise",
" */",
" synchronized boolean evictIfFree() {",
" if (entry == null && !evicted) {",
" int free = freeEntries.decrementAndGet();",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(",
" free >= 0, \"freeEntries is negative: \" + free);",
" }",
" evicted = true;",
" return true;",
" }",
" return false;",
" }",
"",
" /**",
" * Mark this holder as evicted from the clock, effectively preventing",
" * reuse of the holder. Calling thread must have locked the holder's",
" * entry.",
" */",
" synchronized void setEvicted() {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(!evicted, \"Already evicted\");",
" }",
" evicted = true;",
" entry = null;",
" }",
"",
" /**",
" * Check whether this holder has been evicted from the clock.",
" *",
" * @return <code>true</code> if it has been evicted, <code>false</code>",
" * otherwise",
" */",
" synchronized boolean isEvicted() {",
" return evicted;",
" }"
],
"header": "@@ -260,6 +286,49 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": []
},
{
"added": [
" SanityManager.ASSERT(e.isValid(),",
" \"Holder contains invalid entry\");",
" SanityManager.ASSERT(!h.isEvicted(),",
" \"Trying to reuse an evicted holder\");"
],
"header": "@@ -370,7 +439,10 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" SanityManager.ASSERT(e.isValid());"
]
},
{
"added": [
" if (cleaner != null && cleaner.scheduleClean(e)) {"
],
"header": "@@ -397,7 +469,7 @@ final class ClockPolicy implements ReplacementPolicy {",
"removed": [
" if (cleaner != null && cleaner.scheduleWork(e)) {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" /**",
" * Return the <code>ReplacementPolicy</code> instance for this cache.",
" *",
" * @return replacement policy",
" */",
" ReplacementPolicy getReplacementPolicy() {",
" return replacementPolicy;",
" }",
""
],
"header": "@@ -92,6 +92,15 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" boolean shrunk = false;"
],
"header": "@@ -500,6 +509,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" shrunk = true;",
" if (shrunk) {",
" replacementPolicy.trimToSize();",
" }"
],
"header": "@@ -510,12 +520,16 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" boolean shrunk = false;"
],
"header": "@@ -561,6 +575,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/cache/ReplacementPolicy.java",
"hunks": [
{
"added": [
" /**",
" * Try to shrink the cache if it has exceeded its maximum size. It is not",
" * guaranteed that the cache will actually shrink.",
" */",
" void doShrink();",
"",
" /**",
" * Try to reduce the size of the cache as much as possible by removing",
" * invalid entries. Depending on the underlying data structure, this might",
" * be a very expensive operation. The implementations are therefore allowed",
" * to ignore calls to this method when they think the cost outweighs the",
" * benefit.",
" */",
" void trimToSize();",
""
],
"header": "@@ -46,6 +46,21 @@ interface ReplacementPolicy {",
"removed": []
}
]
}
] |
derby-DERBY-2911-dbc7584c
|
DERBY-2911 (partial) Implement a buffer manager using java.util.concurrent classes
Changed signatures of ConcurrentCache.findFreeCacheable() and ConcurrentCache.removeEntry().
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@577224 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" * Find a free cacheable and give the specified entry a pointer to it. If",
" * a free cacheable cannot be found, allocate a new one. The entry must be",
" * locked by the current thread.",
" * @param entry the entry for which a <code>Cacheable</code> is needed",
" private void findFreeCacheable(CacheEntry entry) {",
" entry.setCacheable(holderFactory.newCacheable(this));"
],
"header": "@@ -125,15 +125,16 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * Find a free cacheable. If a free one cannot be found, allocate a new",
" * one.",
" * @return a cacheable with no identity",
" private Cacheable findFreeCacheable() {",
" return holderFactory.newCacheable(this);"
]
},
{
"added": [
" * @param key the identity of the entry to remove",
" private void removeEntry(Object key) {",
" CacheEntry entry = cache.remove(key);",
" entry.getCacheable().clearIdentity();"
],
"header": "@@ -141,13 +142,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * @param entry the entry to remove from the cache",
" private void removeEntry(CacheEntry entry) {",
" Cacheable c = entry.getCacheable();",
" cache.remove(c.getIdentity());",
" c.clearIdentity();"
]
},
{
"added": [
" findFreeCacheable(entry);",
" item = entry.getCacheable().setIdentity(key);",
" removeEntry(key);"
],
"header": "@@ -173,17 +173,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" Cacheable free = findFreeCacheable();",
" item = free.setIdentity(key);",
" cache.remove(key);",
"",
" // TODO - When the replacement algorithm has been",
" // implemented, the cacheable (free) should be returned to",
" // the free list.",
""
]
},
{
"added": [
" findFreeCacheable(entry);",
" Cacheable c =",
" entry.getCacheable().createIdentity(key, createParameter);",
" // Could not create an object with that identity. Remove the",
" // entry from the cache.",
" removeEntry(key);"
],
"header": "@@ -258,14 +253,16 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" Cacheable free = findFreeCacheable();",
" Cacheable c = free.createIdentity(key, createParameter);",
" // TODO - When replacement policy is implemented, return the",
" // cacheable (free) to the free list"
]
},
{
"added": [
" Object key = item.getIdentity();",
" CacheEntry entry = cache.get(key);"
],
"header": "@@ -305,7 +302,8 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" CacheEntry entry = cache.get(item.getIdentity());"
]
},
{
"added": [
" removeEntry(key);"
],
"header": "@@ -313,7 +311,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" removeEntry(entry);"
]
},
{
"added": [
" removeEntry(c.getIdentity());"
],
"header": "@@ -373,7 +371,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" removeEntry(entry);"
]
},
{
"added": [
" removeEntry(c.getIdentity());"
],
"header": "@@ -424,7 +422,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" removeEntry(entry);"
]
}
]
}
] |
derby-DERBY-2911-df17cb04
|
DERBY-2911 (partial) Implement a buffer manager using java.util.concurrent classes
Remove entries from the cache if a failure occurs while setting the
identity of the Cacheable.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@578006 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" Cacheable c = entry.getCacheable();",
" if (c != null && c.getIdentity() != null) {",
" // The cacheable should not have an identity when it has been",
" // removed.",
" entry.getCacheable().clearIdentity();",
" }",
" /**",
" * Initialize an entry by finding a free <code>Cacheable</code> and setting",
" * its identity. If the identity is successfully set, the entry is kept and",
" * the <code>Cacheable</code> is inserted into the entry and returned.",
" * Otherwise, the entry is removed from the cache and <code>null</code>",
" * is returned.",
" *",
" * @param entry the entry to initialize",
" * @param key the identity to set",
" * @param createParameter parameter to <code>createIdentity()</code>",
" * (ignored if <code>create</code> is <code>false</code>)",
" * @param create if <code>true</code>, create new identity with",
" * <code>Cacheable.createIdentity()</code>; otherwise, set identity with",
" * <code>Cacheable.setIdentity()</code>",
" * @return a <code>Cacheable</code> if the identity could be set,",
" * <code>null</code> otherwise",
" * @exception StandardException if an error occured while searching for a",
" * free <code>Cacheable</code> or while setting the identity",
" * @see Cacheable#setIdentity(Object)",
" * @see Cacheable#createIdentity(Object,Object)",
" */",
" private Cacheable initIdentity(CacheEntry entry,",
" Object key, Object createParameter, boolean create)",
" throws StandardException {",
" Cacheable c = null;",
" try {",
" findFreeCacheable(entry);",
" if (create) {",
" c = entry.getCacheable().createIdentity(key, createParameter);",
" } else {",
" c = entry.getCacheable().setIdentity(key);",
" }",
" } finally {",
" if (c == null) {",
" // Either an exception was thrown, or setIdentity() or",
" // createIdentity() returned null. In either case, the entry is",
" // invalid and must be removed.",
" removeEntry(key);",
" }",
" }",
"",
" // If we successfully set the identity, insert the cacheable and mark",
" // the entry as kept.",
" if (c != null) {",
" entry.setCacheable(c);",
" entry.keep();",
" }",
" return c;",
" }",
""
],
"header": "@@ -146,12 +146,67 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" entry.getCacheable().clearIdentity();"
]
},
{
"added": [
" if (item != null) {",
" entry.keep();",
" return item;",
" // not currently in the cache",
" return initIdentity(entry, key, null, false);"
],
"header": "@@ -171,22 +226,12 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" if (item == null) {",
" // not currently in the cache",
" findFreeCacheable(entry);",
" item = entry.getCacheable().setIdentity(key);",
" if (item == null) {",
" // Could not find an object with that identity. Remove its",
" // entry from the cache and return null.",
" removeEntry(key);",
" return null;",
" }",
" entry.setCacheable(item);",
" // increase keep count to prevent others from removing the entry",
" // while it's not locked",
" entry.keep();",
" return item;"
]
},
{
"added": [
" return initIdentity(entry, key, createParameter, true);"
],
"header": "@@ -253,18 +298,7 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" findFreeCacheable(entry);",
" Cacheable c =",
" entry.getCacheable().createIdentity(key, createParameter);",
" if (c != null) {",
" entry.setCacheable(c);",
" entry.keep();",
" } else {",
" // Could not create an object with that identity. Remove the",
" // entry from the cache.",
" removeEntry(key);",
" }",
" return c;"
]
}
]
}
] |
derby-DERBY-2911-e4501308
|
DERBY-2911 (partial) Implemented CacheManager.shutdown() in ConcurrentCache
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@572953 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" /**",
" * Flag that indicates whether this cache instance has been shut down. When",
" * it has been stopped, <code>find()</code>, <code>findCached()</code> and",
" * <code>create()</code> will return <code>null</code>. The flag is",
" * declared <code>volatile</code> so that no synchronization is needed when",
" * it is accessed by concurrent threads.",
" */",
" private volatile boolean stopped;",
""
],
"header": "@@ -58,6 +58,15 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
""
],
"header": "@@ -154,6 +163,11 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
""
],
"header": "@@ -192,6 +206,11 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
""
],
"header": "@@ -228,6 +247,11 @@ final class ConcurrentCache implements CacheManager {",
"removed": []
},
{
"added": [
" /**",
" * Shut down the cache.",
" */",
" // TODO - unsubscribe background writer",
" stopped = true;",
" cleanAll();",
" ageOut();"
],
"header": "@@ -358,8 +382,14 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" // TODO"
]
}
]
}
] |
derby-DERBY-2911-f30ee415
|
DERBY-2911: Implement a buffer manager using java.util.concurrent classes
DERBY-3493: stress.multi times out waiting on testers with blocked testers waiting on the same statement
Changed ConcurrentCache.create() to match Clock.create() more closely.
The patch basically makes ConcurrentCache.create() use
ConcurrentHashMap.get() directly instead of going through
ConcurrentCache.getEntry(), which may block until the identity has
been set by another thread. Then create() fails immediately if the
object already exists in the cache, also if another thread is in the
process of inserting the object into the cache. Since this introduced
yet another difference between find() and create() in
findOrCreateObject(), I also followed Øystein's suggestion from his
review of DERBY-2911 and split findOrCreateObject() into a number of
smaller methods, which I think makes the code easier to follow.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@635183 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/cache/ConcurrentCache.java",
"hunks": [
{
"added": [
" * Insert a {@code CacheEntry} into a free slot in the {@code",
" * ReplacementPolicy}'s internal data structure, and return a {@code",
" * Cacheable} that the caller can reuse. The entry must have been locked",
" * before this method is called.",
" * @param key the identity of the object being inserted",
" * @param entry the entry that is being inserted",
" * @return a {@code Cacheable} object that the caller can reuse",
" * @throws StandardException if an error occurs while inserting the entry",
" * or while allocating a new {@code Cacheable}",
" private Cacheable insertIntoFreeSlot(Object key, CacheEntry entry)",
" try {",
" replacementPolicy.insertEntry(entry);",
" } catch (StandardException se) {",
" // Failed to insert the entry into the replacement policy. Make",
" // sure that it's also removed from the hash table.",
" removeEntry(key);",
" throw se;",
" Cacheable free = entry.getCacheable();",
"",
" if (free == null) {",
" // We didn't get a reusable cacheable. Create a new one.",
" free = holderFactory.newCacheable(this);",
" entry.keep(true);",
" return free;",
" }",
"",
" /**",
" * Complete the setting of the identity. This includes notifying the",
" * threads that are waiting for the setting of the identity to complete,",
" * so that they can wake up and continue. If setting the identity failed,",
" * the entry will be removed from the cache.",
" *",
" * @param key the identity of the object being inserted",
" * @param entry the entry which is going to hold the cached object",
" * @param item a {@code Cacheable} object with the identity set (if",
" * the identity was successfully set), or {@code null} if setting the",
" * identity failed",
" */",
" private void settingIdentityComplete(",
" Object key, CacheEntry entry, Cacheable item) {",
" entry.lock();",
" entry.settingIdentityComplete();",
" entry.setCacheable(item);",
" } else {"
],
"header": "@@ -183,95 +183,66 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" * Find or create an object in the cache. If the object is not presently",
" * in the cache, it will be added to the cache.",
" * @param key the identity of the object to find or create",
" * @param create whether or not the object should be created",
" * @param createParameter used as argument to <code>createIdentity()</code>",
" * when <code>create</code> is <code>true</code>",
" * @return the cached object, or <code>null</code> if it cannot be found",
" * @throws StandardException if an error happens when accessing the object",
" private Cacheable findOrCreateObject(Object key, boolean create,",
" Object createParameter)",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(createParameter == null || create,",
" \"createParameter should be null when create is false\");",
" if (stopped) {",
" return null;",
" // A free cacheable which we'll initialize if we don't find the object",
" // in the cache.",
" Cacheable free;",
" CacheEntry entry = getEntry(key);",
" Cacheable item = entry.getCacheable();",
" if (create) {",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
" entry.keep(true);",
" return item;",
" }",
"",
" // not currently in the cache",
" try {",
" replacementPolicy.insertEntry(entry);",
" } catch (StandardException se) {",
" throw se;",
" }",
"",
" free = entry.getCacheable();",
" if (free == null) {",
" // We didn't get a reusable cacheable. Create a new one.",
" free = holderFactory.newCacheable(this);",
"",
" entry.keep(true);",
"",
"",
" // Set the identity in a try/finally so that we can remove the entry",
" // if the operation fails. We have released the lock on the entry so",
" // that we don't run into deadlocks if the user code (setIdentity() or",
" // createIdentity()) reenters the cache.",
" Cacheable c = null;",
" try {",
" if (create) {",
" c = free.createIdentity(key, createParameter);",
" } else {",
" c = free.setIdentity(key);",
" }",
" } finally {",
" entry.lock();",
" try {",
" // Notify the entry that setIdentity() or createIdentity() has",
" // finished.",
" entry.settingIdentityComplete();",
" if (c == null) {",
" // Setting identity failed, or the object was not found.",
" removeEntry(key);",
" } else {",
" // Successfully set the identity.",
" entry.setCacheable(c);",
" }",
" } finally {",
" entry.unlock();",
" }",
" }",
"",
" return c;"
]
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = getEntry(key);",
"",
" Cacheable item;",
" try {",
" item = entry.getCacheable();",
" if (item != null) {",
" // The object is already cached. Increase the use count and",
" // return it.",
" entry.keep(true);",
" return item;",
" } else {",
" // The object is not cached. Insert the entry into a free",
" // slot and retrieve a reusable Cacheable.",
" item = insertIntoFreeSlot(key, entry);",
" }",
" } finally {",
" entry.unlock();",
" }",
"",
" // Set the identity without holding the lock on the entry. If we",
" // hold the lock, we may run into a deadlock if the user code in",
" // setIdentity() re-enters the buffer manager.",
" Cacheable itemWithIdentity = null;",
" try {",
" itemWithIdentity = item.setIdentity(key);",
" } finally {",
" // Always invoke settingIdentityComplete(), also on error,",
" // otherwise other threads may wait forever. If setIdentity()",
" // fails, itemWithIdentity is going to be null.",
" settingIdentityComplete(key, entry, itemWithIdentity);",
" }",
"",
" return itemWithIdentity;"
],
"header": "@@ -285,7 +256,44 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" return findOrCreateObject(key, false, null);"
]
},
{
"added": [
"",
" if (stopped) {",
" return null;",
" }",
"",
" CacheEntry entry = new CacheEntry();",
" entry.lock();",
"",
" if (cache.putIfAbsent(key, entry) != null) {",
" // We can't create the object if it's already in the cache.",
" throw StandardException.newException(",
" SQLState.OBJECT_EXISTS_IN_CACHE, name, key);",
" }",
"",
" Cacheable item;",
" try {",
" item = insertIntoFreeSlot(key, entry);",
" } finally {",
" entry.unlock();",
" }",
"",
" // Create the identity without holding the lock on the entry.",
" // Otherwise, we may run into a deadlock if the user code in",
" // createIdentity() re-enters the buffer manager.",
" Cacheable itemWithIdentity = null;",
" try {",
" itemWithIdentity = item.createIdentity(key, createParameter);",
" } finally {",
" // Always invoke settingIdentityComplete(), also on error,",
" // otherwise other threads may wait forever. If createIdentity()",
" // fails, itemWithIdentity is going to be null.",
" settingIdentityComplete(key, entry, itemWithIdentity);",
" }",
"",
" return itemWithIdentity;"
],
"header": "@@ -340,7 +348,41 @@ final class ConcurrentCache implements CacheManager {",
"removed": [
" return findOrCreateObject(key, true, createParameter);"
]
}
]
}
] |
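The create path in the diff above relies on `ConcurrentHashMap.putIfAbsent()` to claim a cache slot atomically and fail if the key is already present. A minimal sketch of that create-or-fail shape, assuming a plain string-keyed map (class and method names here are illustrative, not Derby's actual `ConcurrentCache` API):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CreateOrFailCache {
    private final ConcurrentHashMap<String, Object> cache =
            new ConcurrentHashMap<>();

    /**
     * Atomically claims the slot for {@code key}. putIfAbsent() returns
     * the previously mapped value when one exists, so a concurrent (or
     * earlier) insert is detected without any external locking.
     */
    public Object create(String key, Object value) {
        Object existing = cache.putIfAbsent(key, value);
        if (existing != null) {
            throw new IllegalStateException("object already in cache: " + key);
        }
        return value;
    }

    public Object find(String key) {
        return cache.get(key);
    }
}
```

The point of the pattern is that the existence check and the insert are a single atomic map operation; a separate `containsKey()` followed by `put()` would leave a race window between the two calls.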
derby-DERBY-2915-acaa764f
|
DERBY-2915 - make test NoConnetionAfterHardUpgrade accept XSLAP or XSLAN errors.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@555441 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2920-4ecfb0ef
|
DERBY-2920: Share code between readExternal() and readExternalFromArray()
Provide a default readExternalFromArray() method that simply forwards
calls to readExternal() in the DataType class. Remove the method from
sub-classes where the readExternal() and readExternalFromArray() methods
are identical.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1407432 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/BinaryDecimal.java",
"hunks": [
{
"added": [],
"header": "@@ -30,7 +30,6 @@ import java.sql.Types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/DataType.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"import java.io.IOException;"
],
"header": "@@ -23,10 +23,11 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.i18n.MessageService;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLBinary.java",
"hunks": [
{
"added": [],
"header": "@@ -27,8 +27,6 @@ import org.apache.derby.iapi.reference.ContextId;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLBoolean.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"",
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
},
{
"added": [],
"header": "@@ -32,11 +29,6 @@ import org.apache.derby.iapi.services.io.StoredFormatIds;",
"removed": [
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLDate.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLDecimal.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
},
{
"added": [],
"header": "@@ -33,11 +31,9 @@ import org.apache.derby.iapi.services.io.Storable;",
"removed": [
"import org.apache.derby.iapi.services.info.JVMInfo;",
"import java.lang.Math;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLDouble.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
},
{
"added": [],
"header": "@@ -32,16 +30,8 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.TypeId;",
"",
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLInteger.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.BooleanDataValue;"
]
},
{
"added": [],
"header": "@@ -37,9 +31,6 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLLongint.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.BooleanDataValue;"
]
},
{
"added": [],
"header": "@@ -37,9 +31,6 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLReal.java",
"hunks": [
{
"added": [],
"header": "@@ -23,22 +23,12 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.StringDataValue;",
"import org.apache.derby.iapi.types.TypeId;",
"",
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLRef.java",
"hunks": [
{
"added": [],
"header": "@@ -21,28 +21,14 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.DataTypeDescriptor;",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.reference.SQLState;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"",
"import org.apache.derby.iapi.types.RowLocation;",
"import org.apache.derby.iapi.types.Orderable;",
"",
"import org.apache.derby.iapi.types.DataType;",
"import org.apache.derby.iapi.types.RefDataValue;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLSmallint.java",
"hunks": [
{
"added": [],
"header": "@@ -23,13 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.BooleanDataValue;",
""
]
},
{
"added": [],
"header": "@@ -38,9 +31,6 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": [
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLTime.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLTimestamp.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLTinyint.java",
"hunks": [
{
"added": [],
"header": "@@ -23,24 +23,14 @@ package org.apache.derby.iapi.types;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.TypeId;",
"",
"import org.apache.derby.iapi.types.NumberDataType;",
"import org.apache.derby.iapi.types.SQLBoolean;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/UserType.java",
"hunks": [
{
"added": [],
"header": "@@ -25,20 +25,12 @@ import org.apache.derby.catalog.TypeDescriptor;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.TypeId;",
"",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.UserDataValue;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/StorableFormatId.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.impl.store.access;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTree.java",
"hunks": [
{
"added": [],
"header": "@@ -23,7 +23,6 @@ package org.apache.derby.impl.store.access.btree;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2I.java",
"hunks": [
{
"added": [],
"header": "@@ -28,8 +28,6 @@ import java.util.Properties;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
},
{
"added": [
" public void readExternal(ObjectInput in)"
],
"header": "@@ -1137,7 +1135,7 @@ public class B2I extends BTree",
"removed": [
"\tprivate final void localReadExternal(ObjectInput in)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/heap/Heap.java",
"hunks": [
{
"added": [],
"header": "@@ -29,7 +29,6 @@ import java.util.Properties;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;"
]
},
{
"added": [
" public void readExternal(ObjectInput in)"
],
"header": "@@ -1188,7 +1187,7 @@ public class Heap",
"removed": [
"\tprivate final void localReadExternal(ObjectInput in)"
]
},
{
"added": [],
"header": "@@ -1238,16 +1237,4 @@ public class Heap",
"removed": [
"",
"\tpublic void readExternal(ObjectInput in)",
"\t\tthrows IOException, ClassNotFoundException",
"\t{",
" localReadExternal(in);",
" }",
"",
"\tpublic void readExternalFromArray(ArrayInputStream in)",
"\t\tthrows IOException, ClassNotFoundException",
"\t{",
" localReadExternal(in);",
" }"
]
}
]
}
] |
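The refactoring in this commit is a forwarding default in the base class: when a subclass's `readExternal()` and `readExternalFromArray()` were byte-for-byte identical, the array variant can be deleted and the base-class default forwards the call. A hedged sketch of the pattern, with simplified signatures (Derby's real methods take an `ArrayInputStream` and live in the `DataType` hierarchy):

```java
import java.io.IOException;
import java.io.ObjectInput;

abstract class BaseType {
    public abstract void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException;

    // Default implementation: forward the array-based variant to the
    // generic one, so subclasses implement only one method unless they
    // have a genuinely faster array-backed path to offer.
    public void readExternalFromArray(ObjectInput in)
            throws IOException, ClassNotFoundException {
        readExternal(in);
    }
}

class IntType extends BaseType {
    int value;

    // Only the generic path is implemented; the array path is inherited.
    public void readExternal(ObjectInput in) throws IOException {
        value = in.readInt();
    }
}
```

Subclasses that do have a cheaper array-backed decode simply override `readExternalFromArray()`; everyone else sheds a duplicated method body.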
derby-DERBY-2920-5baee936
|
DERBY-2920: Remove unused readExternal() methods
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1411164 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableArrayHolder.java",
"hunks": [
{
"added": [],
"header": "@@ -21,13 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
"import org.apache.derby.iapi.services.io.FormatIdUtil;",
"import org.apache.derby.iapi.services.io.ArrayUtil;",
"import org.apache.derby.iapi.services.io.Formatable;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableHashtable.java",
"hunks": [
{
"added": [],
"header": "@@ -21,11 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
"import org.apache.derby.iapi.services.io.FormatIdUtil;",
"import org.apache.derby.iapi.services.io.Formatable;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableIntHolder.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.services.io.Formatable;",
"import org.apache.derby.iapi.services.io.FormatIdUtil;",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableLongHolder.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.services.io.FormatIdUtil;",
"import org.apache.derby.iapi.services.io.Formatable;",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableProperties.java",
"hunks": [
{
"added": [],
"header": "@@ -21,12 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
"",
"import org.apache.derby.iapi.services.io.FormatIdUtil;",
"import org.apache.derby.iapi.services.io.Formatable;",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2IUndo.java",
"hunks": [
{
"added": [],
"header": "@@ -23,8 +23,6 @@ package org.apache.derby.impl.store.access.btree.index;",
"removed": [
"import org.apache.derby.iapi.services.io.ArrayInputStream;",
""
]
},
{
"added": [],
"header": "@@ -322,9 +320,4 @@ public class B2IUndo implements LogicalUndo, Formatable",
"removed": [
"\tpublic void readExternal(ArrayInputStream in)",
"\t\tthrows IOException, ClassNotFoundException",
"\t{",
" return;",
"\t}"
]
}
]
}
] |
derby-DERBY-2931-bdd93c5c
|
DERBY-2931
In soft upgrade mode the format id of the new heap format was being written
out along with the old format metadata. On reboot the system would try to
read the new format and fail. The problem was that the wrong format id was
associated with the soft upgrade version of the heap conglomerate class
(Heap_v10_2.java). The code change simply changes that format id from
StoredFormatIds.ACCESS_HEAP_V3_ID to StoredFormatIds.ACCESS_HEAP_V2_ID. The
rest of the changes are comments in the code and updates to the upgrade test
suite. The new upgrade-suite tests reproduced the problem before the fix on
both the 10.3 branch and trunk, and passed once the fix was applied. Adding
those tests also caused some previously passing tests in the upgrade suite
to fail, and those cases were fixed by this change as well.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@555778 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/heap/Heap_v10_2.java",
"hunks": [
{
"added": [
" * conglomerate object. Access contains no \"directory\" of ",
" * conglomerate information. In order to bootstrap opening a file",
" * it encodes the factory that can open the conglomerate in the ",
" * conglomerate id itself. There exists a single HeapFactory which",
" * must be able to read all heap format id's. ",
" *",
" * This format was used for all Derby database Heap's in version",
" * 10.2 and previous versions.",
" * as the first field of the conglomerate itself. A bootstrap",
" * problem exists as we don't know the format id of the heap ",
" * until we are in the \"middle\" of reading the Heap. Thus the",
" * base Heap implementation must be able to read and write ",
" * all formats based on the reading the ",
" * \"format_of_this_conglomerate\". ",
" *",
" * soft upgrade to ACCESS_HEAP_V3_ID:",
" * read:",
" * old format is readable by current Heap implementation,",
" * with automatic in memory creation of default collation",
" * id needed by new format. No code other than",
" * readExternal and writeExternal need know about old format.",
" * write:",
" * will never write out new format id in soft upgrade mode.",
" * Code in readExternal and writeExternal handles writing",
" * correct version. Code in the factory handles making",
" * sure new conglomerates use the Heap_v10_2 class",
" * that will write out old format info.",
" * hard upgrade to ACCESS_HEAP_V3_ID:",
" * read:",
" * old format is readable by current Heap implementation,",
" * with automatic in memory creation of default collation",
" * id needed by new format.",
" * write:",
" * Only \"lazy\" upgrade will happen. New format will only",
" * get written for new conglomerate created after the ",
" * upgrade. Old conglomerates continue to be handled the",
" * same as soft upgrade."
],
"header": "@@ -35,18 +35,46 @@ import java.lang.ClassNotFoundException;",
"removed": [
" * conglomerate object. The Heap conglomerate object is stored in",
" * a field of a row in the Conglomerate directory.",
" * as a separate column in the conglomerate directory. To read",
" * A conglomerate object from disk and upgrade it to the current",
" * version do the following:",
" * format_id = get format id from a separate column",
" * Upgradable conglom_obj = instantiate empty obj(format_id)",
" * read in conglom_obj from disk",
" * conglom = conglom_obj.upgradeToCurrent();"
]
}
]
}
] |
derby-DERBY-2931-db85ee3a
|
DERBY-2931 move test case to BasicSetup
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@555806 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2933-b672fd0e
|
DERBY-2933 (partial) When network server disconnects due to an I/O Exception it does not always log the exception that caused the error
Committing a change for IOExceptions during writeScalarStream(). There may also be exceptions during disconnect of a session when the server shuts down; those are left unlogged for now.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@562524 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
"\t markCommunicationsFailure(null,arg1,arg2,arg3, arg4);",
" ",
" ",
" /**",
" * Indicate a communications failure. Log to derby.log",
" * ",
" * @param e - Source exception that was thrown",
" * @param arg1 - info about the communications failure",
" * @param arg2 - info about the communications failure",
" * @param arg3 - info about the communications failure",
" * @param arg4 - info about the communications failure",
" *",
" * @exception DRDAProtocolException disconnect exception always thrown",
" */",
" protected void markCommunicationsFailure(Exception e, String arg1, String arg2, String arg3,",
" String arg4) throws DRDAProtocolException",
" {",
" String dbname = null;",
" ",
" if (database != null)",
" {",
" dbname = database.dbName;",
" }",
" if (e != null) {",
" println2Log(dbname,session.drdaID, e.getMessage());",
" server.consoleExceptionPrintTrace(e);",
" }",
" ",
" Object[] oa = {arg1,arg2,arg3,arg4};",
" throw DRDAProtocolException.newDisconnectException(this,oa);",
" }",
""
],
"header": "@@ -450,10 +450,40 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\tObject[] oa = {arg1,arg2,arg3,arg4};",
"\t\tthrow DRDAProtocolException.newDisconnectException(this,oa);"
]
}
]
}
] |
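The fix in the diff above threads the causing exception into `markCommunicationsFailure()` so the root cause is logged before the disconnect exception is raised. A minimal, hypothetical sketch of that log-then-rethrow shape (the logger and exception types here are stand-ins, not Derby's `DRDAProtocolException` machinery):

```java
public class FailureReporter {
    private final StringBuilder log = new StringBuilder();

    // Stand-in for println2Log(): appends a line to an in-memory log.
    void println2Log(String msg) {
        log.append(msg).append('\n');
    }

    String logContents() {
        return log.toString();
    }

    /**
     * Logs the source exception (when one is supplied) before signalling
     * the communications failure, so the underlying I/O error is not
     * lost when the session is torn down.
     */
    void markCommunicationsFailure(Exception e, String detail) {
        if (e != null) {
            println2Log(e.getMessage());
        }
        throw new IllegalStateException("disconnect: " + detail, e);
    }
}
```

Keeping a null-tolerant `e` parameter lets the old no-exception call sites delegate to the new method unchanged, which is exactly how the diff wires the two overloads together.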
derby-DERBY-2935-24d0b901
|
DERBY-2935: DDMReader.readLengthAndCodePoint() decodes long integer incorrectly
Use long arithmetic instead of int arithmetic.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557513 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMReader.java",
"hunks": [
{
"added": [
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 56) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 48) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 40) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 32) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 0);",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 40) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 32) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 0);",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xFFL) << 0);"
],
"header": "@@ -530,32 +530,32 @@ class DDMReader",
"removed": [
"\t\t\t\t\t((buffer[pos++] & 0xff) << 56) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 48) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 40) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 32) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 0);",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 40) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 32) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 0);",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 24) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 16) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 8) +",
"\t\t\t\t\t((buffer[pos++] & 0xff) << 0);"
]
}
]
}
] |
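The bug this commit fixes is a classic Java pitfall: `(b & 0xff)` produces an `int`, so shifting it by 56 actually shifts by 56 mod 32, and bit 31 sign-extends when the sum widens to `long`. Masking with `0xFFL` promotes each byte to `long` before the shift. A self-contained demonstration of the difference (an illustrative sketch, not the DDMReader code itself):

```java
public class LongDecode {
    // Buggy variant: int arithmetic. Shift distances >= 32 wrap around
    // (56 behaves as 24), and a set bit 31 sign-extends on widening.
    static long decodeInt(byte[] b) {
        return ((b[0] & 0xff) << 56) + ((b[1] & 0xff) << 48)
             + ((b[2] & 0xff) << 40) + ((b[3] & 0xff) << 32)
             + ((b[4] & 0xff) << 24) + ((b[5] & 0xff) << 16)
             + ((b[6] & 0xff) << 8)  + (b[7] & 0xff);
    }

    // Fixed variant: the 0xFFL mask makes every term a long, so all
    // eight shifts are true 64-bit shifts and no sign extension occurs.
    static long decodeLong(byte[] b) {
        return ((b[0] & 0xFFL) << 56) + ((b[1] & 0xFFL) << 48)
             + ((b[2] & 0xFFL) << 40) + ((b[3] & 0xFFL) << 32)
             + ((b[4] & 0xFFL) << 24) + ((b[5] & 0xFFL) << 16)
             + ((b[6] & 0xFFL) << 8)  + (b[7] & 0xFFL);
    }

    public static void main(String[] args) {
        // Big-endian encoding of 0x0000000080000000 (2147483648).
        byte[] b = {0, 0, 0, 0, (byte) 0x80, 0, 0, 0};
        System.out.println(decodeLong(b)); // 2147483648
        System.out.println(decodeInt(b));  // -2147483648 (sign-extended)
    }
}
```

Any length with bit 31 set, or with significant bytes above the low four, is decoded wrong by the `int` version, which is why the fix touches every shifted term rather than just the high ones.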
derby-DERBY-2936-298ff5e4
|
DERBY-2936 (partial) Use java.nio.ByteBuffer for buffering in DDMWriter
Wrap the byte array in a java.nio.ByteBuffer and use the utility
methods for encoding primitive types.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@556583 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;"
],
"header": "@@ -33,6 +33,7 @@ import org.apache.derby.iapi.reference.Property;",
"removed": []
},
{
"added": [
"\t/**",
"\t * Output buffer.",
"\t * @see #bytes",
"\t */",
"\t/**",
"\t * Wrapper around the output buffer (<code>bytes</code>) which enables the",
"\t * use of utility methods for easy encoding of primitive values and",
"\t * strings. Changes to the output buffer are visible in the wrapper, and",
"\t * vice versa.",
"\t */",
"\tprivate ByteBuffer buffer;"
],
"header": "@@ -57,11 +58,18 @@ class DDMWriter",
"removed": [
"\t// output buffer",
"",
"\t// offset into output buffer",
"\tprivate int offset;"
]
},
{
"added": [
"\t\tthis.buffer = ByteBuffer.wrap(bytes);"
],
"header": "@@ -115,6 +123,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\tthis.buffer = ByteBuffer.wrap(bytes);"
],
"header": "@@ -128,6 +137,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\tbuffer.clear();"
],
"header": "@@ -144,7 +154,7 @@ class DDMWriter",
"removed": [
"\t\toffset = 0;"
]
},
{
"added": [
"\t\tint length = buffer.position() - start;"
],
"header": "@@ -321,7 +331,7 @@ class DDMWriter",
"removed": [
"\t\tint length = offset - start;"
]
},
{
"added": [
"\t\tfinal int offset = buffer.position();",
"\t\t// move past the length bytes before writing the code point",
"\t\tbuffer.position(offset + 2);",
"\t\tbuffer.putShort((short) codePoint);"
],
"header": "@@ -338,12 +348,12 @@ class DDMWriter",
"removed": [
"\t\toffset += 2; // move past the length bytes before writing the code point",
"\t\tbytes[offset] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) (codePoint & 0xff);",
"\t\toffset += 2;"
]
},
{
"added": [
"\t\tbuffer.position(markStack[top--]);"
],
"header": "@@ -352,7 +362,7 @@ class DDMWriter",
"removed": [
"\t\toffset = markStack[top--];"
]
},
{
"added": [
"\t\tbuffer.clear();"
],
"header": "@@ -361,7 +371,7 @@ class DDMWriter",
"removed": [
"\t\toffset = 0;"
]
},
{
"added": [
"\t\tfinal int offset = buffer.position();"
],
"header": "@@ -378,6 +388,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\t\tbuffer.position(offset + extendedLengthByteCount);"
],
"header": "@@ -412,7 +423,7 @@ class DDMWriter",
"removed": [
"\t\t\toffset += extendedLengthByteCount;"
]
},
{
"added": [
" return buffer.position() - dssLengthLocation;"
],
"header": "@@ -439,7 +450,7 @@ class DDMWriter",
"removed": [
" return offset - dssLengthLocation;"
]
},
{
"added": [
" buffer.position(dssLengthLocation + value);"
],
"header": "@@ -451,7 +462,7 @@ class DDMWriter",
"removed": [
" offset = dssLengthLocation + value;"
]
},
{
"added": [
"\t\tbuffer.put((byte) value);"
],
"header": "@@ -473,7 +484,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset++] = (byte) (value & 0xff);"
]
},
{
"added": [
"\t\tbuffer.putShort((short) value);"
],
"header": "@@ -485,9 +496,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) ((value >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) (value & 0xff);",
"\t\toffset += 2;"
]
},
{
"added": [
"\t\tbuffer.putInt(value);"
],
"header": "@@ -498,11 +507,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) ((value >>> 24) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((value >>> 16) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((value >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (value & 0xff);",
"\t\toffset += 4;"
]
},
{
"added": [
"\t\tbuffer.put(buf, start, length);"
],
"header": "@@ -538,8 +543,7 @@ class DDMWriter",
"removed": [
"\t\tSystem.arraycopy(buf,start,bytes,offset,length);",
"\t\toffset += length;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.putShort((short) value);"
],
"header": "@@ -579,11 +583,8 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) (codePoint & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((value >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (value & 0xff);",
"\t\toffset += 4;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) 0x0005);",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.put((byte) value);"
],
"header": "@@ -595,12 +596,9 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = 0x00;",
"\t\tbytes[offset + 1] = 0x05;",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\tbytes[offset + 4] = (byte) (value & 0xff);",
"\t\toffset += 5;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) 0x0006);",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.putShort((short) value);",
"\t\tbuffer.putShort((short) value);"
],
"header": "@@ -612,21 +610,15 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = 0x00;",
"\t\tbytes[offset + 1] = 0x06;",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\tbytes[offset + 4] = (byte) ((value >>> 8) & 0xff);",
"\t\tbytes[offset + 5] = (byte) (value & 0xff);",
"\t\toffset += 6;",
"\t\tbytes[offset] = (byte) ((value >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) (value & 0xff);",
"\t\toffset += 2;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) length);",
"\t\tbuffer.putShort((short) codePoint);"
],
"header": "@@ -638,11 +630,8 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) ((length >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) (length & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset += 4;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) length);",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.put(buf, 0, length);"
],
"header": "@@ -662,12 +651,9 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) (((length+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((length+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\tSystem.arraycopy(buf,0,bytes,offset + 4, length);",
"\t\toffset += length + 4;"
]
},
{
"added": [
"\t\t int spareBufferLength = buffer.remaining();",
"",
"\t\t\tfinal int offset = buffer.position();",
"\t\t\ttotalBytesRead += bytesRead;",
"\t\t\tbuffer.position(offset + bytesRead);"
],
"header": "@@ -699,21 +685,22 @@ class DDMWriter",
"removed": [
"\t\t int spareBufferLength = bytes.length - offset;",
"\t\t ",
"\t\t\t\t\ttotalBytesRead += bytesRead;",
"\t\t\t\t\toffset += bytesRead;"
]
},
{
"added": [
" ensureLength( DEFAULT_BUFFER_SIZE - buffer.position() );"
],
"header": "@@ -782,7 +769,7 @@ class DDMWriter",
"removed": [
" ensureLength( DEFAULT_BUFFER_SIZE - offset );"
]
},
{
"added": [
"\t\treturn buffer.position() != 0;"
],
"header": "@@ -825,7 +812,7 @@ class DDMWriter",
"removed": [
"\t\treturn offset != 0;"
]
},
{
"added": [
"\t\t\tdssLengthLocation = buffer.position();",
"\t\t\tbuffer.putShort((short) 0xFFFF);"
],
"header": "@@ -856,9 +843,8 @@ class DDMWriter",
"removed": [
"\t\t\tdssLengthLocation = offset;",
"\t\t\tbytes[offset++] = (byte) (0xff);",
"\t\t\tbytes[offset++] = (byte) (0xff);"
]
},
{
"added": [
" buffer.put((byte) (length >>> shiftSize));"
],
"header": "@@ -873,10 +859,9 @@ class DDMWriter",
"removed": [
" bytes[offset + i] = (byte) ((length >>> shiftSize) & 0xff);",
"\toffset += extendedLengthByteCount;"
]
},
{
"added": [
"\tbuffer.putShort((short) length);",
"\tbuffer.putShort((short) codePoint);"
],
"header": "@@ -888,11 +873,8 @@ class DDMWriter",
"removed": [
" bytes[offset] = (byte) ((length >>> 8) & 0xff);",
" bytes[offset + 1] = (byte) (length & 0xff);",
" bytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes[offset + 3] = (byte) (codePoint & 0xff);",
"\toffset +=4;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) (dataLength + 4));",
"\t\tbuffer.putShort((short) codePoint);"
],
"header": "@@ -904,11 +886,8 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) (((dataLength+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((dataLength+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset += 4;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) (stringLength + 4));",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.position(",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position()));"
],
"header": "@@ -922,11 +901,10 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) (((stringLength+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((stringLength+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset = ccsidManager.convertFromUCS2 (string, bytes, offset + 4);"
]
},
{
"added": [
"\t\tbuffer.putShort((short) (paddedLength + 4));",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tfinal int offset =",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position());",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
],
"header": "@@ -942,13 +920,13 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) (((paddedLength+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((paddedLength+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset = ccsidManager.convertFromUCS2 (string, bytes, offset + 4);",
"\t\tArrays.fill(bytes,offset, offset + fillLength,ccsidManager.space);",
"\t\toffset += fillLength;"
]
},
{
"added": [
"\t\tfinal int offset =",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position());",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
],
"header": "@@ -964,9 +942,11 @@ class DDMWriter",
"removed": [
"\t\toffset = ccsidManager.convertFromUCS2 (string, bytes, offset);",
"\t\tArrays.fill(bytes,offset, offset + fillLength,ccsidManager.space);",
"\t\toffset += fillLength;"
]
},
{
"added": [
"\t\tbuffer.put(drdaString.getBytes(), 0, stringLength);",
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
],
"header": "@@ -981,10 +961,11 @@ class DDMWriter",
"removed": [
"\t\tSystem.arraycopy(drdaString.getBytes(), 0, bytes, offset, stringLength);",
"\t\toffset += stringLength;",
"\t\tArrays.fill(bytes, offset, offset + fillLength, ccsidManager.space);",
"\t\toffset += fillLength;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) (paddedLength + 4));",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.put(buf);",
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + (paddedLength - buf.length);",
"\t\tArrays.fill(bytes, offset, end, padByte);",
"\t\tbuffer.position(end);"
],
"header": "@@ -997,18 +978,14 @@ class DDMWriter",
"removed": [
"\t\tint bufLength = buf.length;",
"\t\tbytes[offset] = (byte) (((paddedLength+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((paddedLength+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset += 4;",
"\t\tSystem.arraycopy(buf,0,bytes,offset,bufLength);",
"\t\toffset += bufLength;",
"\t\tint fillLength = paddedLength - bufLength;",
"\t\tArrays.fill(bytes,offset,offset + fillLength,padByte);",
"\t\toffset += fillLength;"
]
},
{
"added": [
"\t\tbuffer.put(buf);",
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + (paddedLength - buf.length);",
"\t\tArrays.fill(bytes, offset, end, padByte);",
"\t\tbuffer.position(end);"
],
"header": "@@ -1020,13 +997,12 @@ class DDMWriter",
"removed": [
"\t\tint bufLength = buf.length;",
"\t\tint fillLength = paddedLength - bufLength;",
"\t\tSystem.arraycopy(buf,0,bytes,offset,bufLength);",
"\t\toffset +=bufLength;",
"\t\tArrays.fill(bytes,offset,offset + fillLength,padByte);",
"\t\toffset += fillLength;"
]
},
{
"added": [
"\t\tensureLength(buf.length + 4);",
"\t\tbuffer.putShort((short) (buf.length + 4));",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.put(buf);"
],
"header": "@@ -1037,14 +1013,10 @@ class DDMWriter",
"removed": [
"\t\tint bufLength = buf.length;",
"\t\tensureLength (bufLength + 4);",
"\t\tbytes[offset] = (byte) (((bufLength+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((bufLength+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\tSystem.arraycopy(buf,0,bytes,offset + 4,bufLength);",
"\t\toffset += bufLength + 4;"
]
},
{
"added": [
"\t\tbuffer.putShort((short) (numBytes + 4));",
"\t\tbuffer.putShort((short) codePoint);",
"\t\tbuffer.put(buf, start, length);"
],
"header": "@@ -1066,13 +1038,9 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] = (byte) (((numBytes+4) >>> 8) & 0xff);",
"\t\tbytes[offset + 1] = (byte) ((numBytes+4) & 0xff);",
"\t\tbytes[offset + 2] = (byte) ((codePoint >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (codePoint & 0xff);",
"\t\toffset += 4;",
"\t\tSystem.arraycopy(buf,start,bytes,offset,numBytes);",
"\t\toffset += numBytes;"
]
},
{
"added": [
"\t\tbuffer.putLong(v);"
],
"header": "@@ -1116,15 +1084,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[offset] =\t(byte) ((v >>> 56) & 0xff);",
"\t\tbytes[offset + 1] =\t(byte) ((v >>> 48) & 0xff);",
"\t\tbytes[offset + 2] =\t(byte) ((v >>> 40) & 0xff);",
"\t\tbytes[offset + 3] =\t(byte) ((v >>> 32) & 0xff);",
"\t\tbytes[offset + 4] =\t(byte) ((v >>> 24) & 0xff);",
"\t\tbytes[offset + 5] =\t(byte) ((v >>> 16) & 0xff);",
"\t\tbytes[offset + 6] =\t(byte) ((v >>> 8) & 0xff);",
"\t\tbytes[offset + 7] =\t(byte) ((v >>> 0) & 0xff);",
"\t\toffset += 8;"
]
},
{
"added": [
"\t\tensureLength(length);",
"\t\tbuffer.position(buffer.position() + length);"
],
"header": "@@ -1159,9 +1119,9 @@ class DDMWriter",
"removed": [
"\t\tensureLength (offset + length);",
"\t\toffset += length;"
]
},
{
"added": [
"\t\twriteByte(v ? 1 : 0);"
],
"header": "@@ -1171,8 +1131,7 @@ class DDMWriter",
"removed": [
"\t\tensureLength (1);",
"\t\tbytes[offset++] = (byte) ((v ? 1 : 0) & 0xff);"
]
},
{
"added": [
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + length;",
"\t\tArrays.fill(bytes, offset, end, val);",
"\t\tbuffer.position(end);"
],
"header": "@@ -1299,8 +1258,10 @@ class DDMWriter",
"removed": [
"\t\tArrays.fill(bytes,offset, offset + length,val);",
"\t\toffset += length;"
]
},
{
"added": [
"\t\tfinal int offset = buffer.position();"
],
"header": "@@ -1324,6 +1285,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\tdssLengthLocation = buffer.position();",
"\t\tbuffer.position(dssLengthLocation + 2);",
"\t\tbuffer.put((byte) 0xD0);",
"\t\tbuffer.put((byte) (dssType | DssConstants.DSSCHAIN_SAME_ID));",
"\t\tbuffer.putShort((short) correlationID);"
],
"header": "@@ -1370,34 +1332,31 @@ class DDMWriter",
"removed": [
"\t\tdssLengthLocation = offset;",
"\t\toffset += 2;",
"\t\tbytes[offset] = (byte) 0xD0;",
"\t\tbytes[offset + 1] = (byte) dssType;",
"\t\tbytes[offset + 1] |= DssConstants.DSSCHAIN_SAME_ID;",
"\t\tbytes[offset + 2] = (byte) ((correlationID >>> 8) & 0xff);",
"\t\tbytes[offset + 3] = (byte) (correlationID & 0xff);",
"\t\toffset += 4;"
]
},
{
"added": [
"\t\t// initial position in the byte buffer",
"\t\tfinal int offset = buffer.position();",
""
],
"header": "@@ -1411,6 +1370,9 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\t\tbuffer.position(offset + shiftSize);"
],
"header": "@@ -1442,7 +1404,7 @@ class DDMWriter",
"removed": [
"\t\t\toffset += shiftSize;"
]
},
{
"added": [
"\t\tbuffer.putShort(dssLengthLocation, (short) totalSize);"
],
"header": "@@ -1544,8 +1506,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[dssLengthLocation] = (byte) ((totalSize >>> 8) & 0xff);",
"\t\tbytes[dssLengthLocation + 1] = (byte) (totalSize & 0xff);"
]
},
{
"added": [
"\t\tif (buffer.remaining() < length) {",
"\t\t\tint newLength =",
"\t\t\t\tMath.max(buffer.capacity() << 1, buffer.position() + length);",
"\t\t\t// copy the old buffer into a new one",
"\t\t\tbuffer.flip();",
"\t\t\tbuffer = ByteBuffer.allocate(newLength).put(buffer);",
"\t\t\t// update the reference to the new backing array",
"\t\t\tbytes = buffer.array();"
],
"header": "@@ -1593,15 +1554,18 @@ class DDMWriter",
"removed": [
"\t\tlength += offset;",
"\t\tif (length > bytes.length) {",
"\t\t\tbyte newBytes[] = new byte[Math.max (bytes.length << 1, length)];",
"\t\t\tSystem.arraycopy (bytes, 0, newBytes, 0, offset);",
"\t\t\tbytes = newBytes;"
]
},
{
"added": [
" final int offset = buffer.position();",
""
],
"header": "@@ -1671,6 +1635,8 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\tfinal int offset = buffer.position();"
],
"header": "@@ -1788,6 +1754,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\treturn buffer.position();"
],
"header": "@@ -1920,7 +1887,7 @@ class DDMWriter",
"removed": [
"\t\treturn offset;"
]
},
{
"added": [
"\t\tbuffer.position(mark);"
],
"header": "@@ -1939,7 +1906,7 @@ class DDMWriter",
"removed": [
"\t\toffset = mark;"
]
}
]
}
] |
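The hunks in the record above replace Derby's hand-rolled big-endian byte packing (shift-and-mask into `bytes[offset]`) with `ByteBuffer`'s relative put methods, which are big-endian by default. A minimal sketch of that equivalence; the class name is illustrative, not Derby's:

```java
import java.nio.ByteBuffer;

public class BigEndianSketch {
    public static void main(String[] args) {
        // The diff replaces manual big-endian packing such as
        //   bytes[offset]     = (byte) ((v >>> 8) & 0xff);
        //   bytes[offset + 1] = (byte) (v & 0xff);
        // with ByteBuffer.putShort()/putLong(), big-endian by default.
        ByteBuffer buf = ByteBuffer.allocate(10);
        buf.putShort((short) 0x1234);     // length/codepoint-style field
        buf.putLong(0x0102030405060708L); // writeLong-style field

        byte[] b = buf.array();
        System.out.printf("%02X %02X%n", b[0], b[1]); // 12 34
        System.out.println(b[2] == 0x01 && b[9] == 0x08); // true
    }
}
```

The relative puts also advance the buffer position, which is what lets the patch drop the manual `offset += n` bookkeeping seen in the removed lines.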
derby-DERBY-2936-9f8e1ebc
|
DERBY-2936: Use java.nio.ByteBuffer for buffering in DDMWriter
Use java.nio.charset.CharsetEncoder instead of String.getBytes() to
encode strings.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557506 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.impl.drda;",
"removed": [
"import java.io.UnsupportedEncodingException;"
]
},
{
"added": [
"import java.nio.CharBuffer;",
"import java.nio.charset.CharsetEncoder;",
"import java.nio.charset.CoderResult;",
"import java.nio.charset.CodingErrorAction;"
],
"header": "@@ -34,6 +33,10 @@ import org.apache.derby.iapi.services.property.PropertyUtil;",
"removed": []
},
{
"added": [],
"header": "@@ -52,12 +55,6 @@ class DDMWriter",
"removed": [
"",
"\tstatic final BigDecimal ZERO = BigDecimal.valueOf(0L);",
" ",
"\tprivate static final byte MULTI_BYTE_MASK = (byte) 0xC0;",
"\tprivate static final byte CONTINUATION_BYTE = (byte) 0x80;",
""
]
},
{
"added": [
"\t/** Encoder which encodes strings with the server's default encoding. */",
"\tprivate final CharsetEncoder encoder;"
],
"header": "@@ -119,20 +116,8 @@ class DDMWriter",
"removed": [
"\t// Constructors",
"\tDDMWriter (int minSize, CcsidManager ccsidManager, DRDAConnThread agent, DssTrace dssTrace)",
"\t{",
"\t\tthis.bytes = new byte[minSize];",
"\t\tthis.buffer = ByteBuffer.wrap(bytes);",
"\t\tthis.ccsidManager = ccsidManager;",
"\t\tthis.agent = agent;",
"\t\tthis.prevHdrLocation = -1;",
"\t\tthis.previousCorrId = DssConstants.CORRELATION_ID_UNKNOWN;",
"\t\tthis.previousChainByte = DssConstants.DSS_NOCHAIN;",
"\t\tthis.isContinuationDss = false;",
"\t\tthis.lastDSSBeforeMark = -1;",
"\t\treset(dssTrace);",
"\t}"
]
},
{
"added": [
"\t\t// create an encoder which inserts the charset's default replacement",
"\t\t// character for characters it can't encode",
"\t\tencoder = NetworkServerControlImpl.DEFAULT_CHARSET.newEncoder()",
"\t\t\t.onMalformedInput(CodingErrorAction.REPLACE)",
"\t\t\t.onUnmappableCharacter(CodingErrorAction.REPLACE);"
],
"header": "@@ -146,6 +131,11 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t/**",
"\t * Find the maximum number of bytes needed to represent the string in the",
"\t * default encoding.",
"\t *",
"\t * @param s the string to encode",
"\t * @return an upper limit for the number of bytes needed to encode the",
"\t * string",
"\t */",
"\tprivate int maxEncodedLength(String s) {",
"\t\t// maxBytesPerChar() returns a float, which can only hold 24 bits of an",
"\t\t// integer. Therefore, promote the float to a double so that all bits",
"\t\t// are preserved in the intermediate result.",
"\t\treturn (int) (s.length() * (double) encoder.maxBytesPerChar());",
"\t}"
],
"header": "@@ -1146,6 +1136,20 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\t// Position on which to write the length of the string (in bytes). The",
"\t\t// actual writing of the length is delayed until we have encoded the",
"\t\t// string.",
"\t\tfinal int lengthPos = buffer.position();",
"\t\t// Position on which to start writing the string (right after length,",
"\t\t// which is 2 bytes long).",
"\t\tfinal int stringPos = lengthPos + 2;",
"\t\t// don't send more than LONGVARCHAR_MAX_LEN bytes",
"\t\tfinal int maxStrLen =",
"\t\t\tMath.min(maxEncodedLength(s), FdocaConstants.LONGVARCHAR_MAX_LEN);",
"",
"\t\tensureLength(2 + maxStrLen);",
"",
"\t\t// limit the writable area of the output buffer",
"\t\tbuffer.position(stringPos);",
"\t\tbuffer.limit(stringPos + maxStrLen);",
"",
"\t\t// encode the string",
"\t\tCharBuffer input = CharBuffer.wrap(s);",
"\t\tCoderResult res = encoder.encode(input, buffer, true);",
"\t\tif (SanityManager.DEBUG) {",
"\t\t\t// UNDERFLOW is returned if the entire string was encoded, OVERFLOW",
"\t\t\t// is returned if the string was truncated at LONGVARCHAR_MAX_LEN",
"\t\t\tSanityManager.ASSERT(",
"\t\t\t\tres == CoderResult.UNDERFLOW || res == CoderResult.OVERFLOW,",
"\t\t\t\t\"Unexpected coder result: \" + res);",
"\t\t}",
"\t\t// write the length in bytes",
"\t\tbuffer.putShort(lengthPos, (short) (maxStrLen - buffer.remaining()));",
"\t\t// remove the limit on the output buffer",
"\t\tbuffer.limit(buffer.capacity());"
],
"header": "@@ -1156,54 +1160,39 @@ class DDMWriter",
"removed": [
"\t\ttry {",
"\t\t\tbyte [] byteval = s.getBytes(NetworkServerControlImpl.DEFAULT_ENCODING);",
"\t\t\tint origLen = byteval.length;",
"\t\t\tint writeLen =",
"\t\t\t\tjava.lang.Math.min(FdocaConstants.LONGVARCHAR_MAX_LEN,",
"\t\t\t\t\t\t\t\t origLen);",
"\t\t\t/*",
"\t\t\tNeed to make sure we truncate on character boundaries.",
"\t\t\tWe are assuming",
"\t\t\thttp://www.sun.com/developers/gadc/technicalpublications/articles/utf8.html",
"\t\t\tTo find the beginning of a multibyte character:",
"\t\t\t1) Does the current byte start with the bit pattern 10xxxxxx?",
"\t\t\t2) If yes, move left and go to step #1.",
"\t\t\t3) Finished",
"\t\t\tWe assume that NetworkServerControlImpl.DEFAULT_ENCODING remains UTF-8",
"\t\t\t*/",
"",
"\t\t\tif (SanityManager.DEBUG)",
"\t\t\t{",
"\t\t\t\tif (!(NetworkServerControlImpl.DEFAULT_ENCODING.equals(\"UTF8\")))",
"\t\t\t\t\tSanityManager.THROWASSERT(\"Encoding assumed to be UTF8, but is actually\" + NetworkServerControlImpl.DEFAULT_ENCODING);",
"\t\t\t}",
"\t\t\tif (writeLen != origLen) {",
"\t\t\t\t//find the first byte of the multibyte char in case",
"\t\t\t\t//the last byte is part of a multibyte char",
"\t\t\t\twhile (isContinuationChar (byteval [writeLen])) {",
"\t\t\t\t\twriteLen--;",
"\t\t\t\t}",
"\t\t\t\t//",
"\t\t\t\t// Now byteval[ writeLen ] is either a standalone 1-byte char",
"\t\t\t\t// or the first byte of a multi-byte character. That means that",
"\t\t\t\t// byteval[ writeLen -1 ] is the last (perhaps only) byte of the",
"\t\t\t\t// previous character.",
"\t\t\t\t//",
"\t\t\t}",
" ",
"\t\t\twriteShort(writeLen);",
"\t\t\twriteBytes(byteval,writeLen);",
"\t\t}",
"\t\tcatch (UnsupportedEncodingException e) {",
"\t\t\t//this should never happen",
"\t\t\tagent.agentError(\"Encoding \" + NetworkServerControlImpl.DEFAULT_ENCODING + \" not supported\");",
"\t\t}",
"\t}",
"\tprivate boolean isContinuationChar( byte b ) { ",
"\t\treturn ( (b & MULTI_BYTE_MASK) == CONTINUATION_BYTE );"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"import java.nio.charset.Charset;"
],
"header": "@@ -40,6 +40,7 @@ import javax.net.ssl.SSLSocket;",
"removed": []
},
{
"added": [
"\tfinal static Charset DEFAULT_CHARSET = Charset.forName(DEFAULT_ENCODING);"
],
"header": "@@ -169,6 +170,7 @@ public final class NetworkServerControlImpl {",
"removed": []
}
]
}
] |
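The record above swaps `String.getBytes()` plus manual UTF-8 continuation-byte scanning for a `CharsetEncoder` that writes straight into the output `ByteBuffer`, truncating on character boundaries via the buffer's limit. A minimal sketch of that pattern, assuming UTF-8 as the encoding (as the removed code asserted); buffer sizes and names here are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;

public class EncodeSketch {
    public static void main(String[] args) {
        // An encoder that substitutes the charset's replacement character
        // instead of throwing, matching the diff's configuration.
        CharsetEncoder encoder = Charset.forName("UTF-8").newEncoder()
            .onMalformedInput(CodingErrorAction.REPLACE)
            .onUnmappableCharacter(CodingErrorAction.REPLACE);

        String s = "hello";
        // Upper bound on encoded size, as maxEncodedLength() computes it;
        // the float is promoted to double to keep all integer bits.
        int maxLen = (int) (s.length() * (double) encoder.maxBytesPerChar());
        ByteBuffer out = ByteBuffer.allocate(maxLen);

        CoderResult res = encoder.encode(CharBuffer.wrap(s), out, true);
        // UNDERFLOW: the whole string fit; OVERFLOW: truncated at the limit.
        System.out.println(res.isUnderflow());        // true
        System.out.println(maxLen - out.remaining()); // 5 bytes written
    }
}
```

Because the encoder never splits a character across the buffer limit, the explicit `isContinuationChar` backtracking in the removed code becomes unnecessary.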
derby-DERBY-2936-a0adefe5
|
DERBY-2936: Use java.nio.ByteBuffer for buffering in DDMWriter
Removed the bytes field from DDMWriter. Also
- updated comments which contained references to the old field
- made endDdm() use ByteBuffer.put(byte[],int,int) instead of
  System.arraycopy()
- made writeScalarStream() use buffer.array() instead of the old
  field
- removed unused variables and narrowed the scope of others in
  writeScalarStream()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@569661 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [],
"header": "@@ -57,14 +57,6 @@ class DDMWriter",
"removed": [
"\t * @see #bytes",
"\t */",
"\tprivate byte[] bytes;",
"\t/**",
"\t * Wrapper around the output buffer (<code>bytes</code>) which enables the",
"\t * use of utility methods for easy encoding of primitive values and",
"\t * strings. Changes to the output buffer are visible in the wrapper, and",
"\t * vice versa."
]
},
{
"added": [
"\t// Location of the start of the header"
],
"header": "@@ -97,7 +89,7 @@ class DDMWriter",
"removed": [
"\t// Location within the \"bytes\" array of the start of the header"
]
},
{
"added": [
"\t// location within the buffer of the start of the header"
],
"header": "@@ -112,7 +104,7 @@ class DDMWriter",
"removed": [
"\t// location within the \"bytes\" array of the start of the header"
]
},
{
"added": [
"\t\tthis.buffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE);"
],
"header": "@@ -121,8 +113,7 @@ class DDMWriter",
"removed": [
"\t\tthis.bytes = new byte[DEFAULT_BUFFER_SIZE];",
"\t\tthis.buffer = ByteBuffer.wrap(bytes);"
]
},
{
"added": [
"\t\tint length = buffer.position() - lengthLocation;"
],
"header": "@@ -393,8 +384,7 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset = buffer.position();",
"\t\tint length = offset - lengthLocation;"
]
},
{
"added": [
"\t\t\t// the extended length should be written right after the length and",
"\t\t\t// the codepoint (2+2 bytes)",
"\t\t\tfinal int extendedLengthLocation = lengthLocation + 4;",
"",
"\t\t\tbuffer.position(extendedLengthLocation + extendedLengthByteCount);",
"\t\t\tbuffer.put(buffer.array(), extendedLengthLocation, extendedLength);"
],
"header": "@@ -409,14 +399,14 @@ class DDMWriter",
"removed": [
"\t\t\tint extendedLengthLocation = lengthLocation + 4;",
"\t\t\tSystem.arraycopy (bytes,",
"\t\t\t\t\t extendedLengthLocation,",
"\t\t\t\t\t bytes,",
"\t\t\t\t\t extendedLengthLocation + extendedLengthByteCount,",
"\t\t\t\t\t extendedLength);"
]
},
{
"added": [],
"header": "@@ -426,9 +416,6 @@ class DDMWriter",
"removed": [
"\t\t\t// adjust the offset to account for the shift and insert",
"\t\t\tbuffer.position(offset + extendedLengthByteCount);",
""
]
},
{
"added": [],
"header": "@@ -673,9 +660,6 @@ class DDMWriter",
"removed": [
"\t\tint bytesRead = 0;",
"\t\tint totalBytesRead = 0;",
""
]
},
{
"added": [
"\t\t\t// read as many bytes as possible directly into the backing array",
"\t\t\tfinal int bytesRead =",
"\t\t\t\tin.read(buffer.array(), offset,",
"\t\t\t\t\t\tMath.min(spareDssLength, buffer.remaining()));",
"",
"\t\t\t// update the buffer position",
""
],
"header": "@@ -685,24 +669,22 @@ class DDMWriter",
"removed": [
"\t\t int spareBufferLength = buffer.remaining();",
"\t\t ",
"\t\t bytesRead = in.read(bytes,",
"\t\t\t\t\toffset,",
"\t\t\t\t\tMath.min(spareDssLength,",
"\t\t\t\t\t\t spareBufferLength));",
"\t\t ",
"\t\t\ttotalBytesRead += bytesRead;",
"\t\t spareBufferLength -= bytesRead;"
]
},
{
"added": [],
"header": "@@ -1498,8 +1480,6 @@ class DDMWriter",
"removed": [
"\t\t\t// update the reference to the new backing array",
"\t\t\tbytes = buffer.array();"
]
}
]
}
] |
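The writeScalarStream() hunks above read from an `InputStream` directly into the heap buffer's backing array and then advance the buffer position by the bytes actually read. A minimal sketch of that idiom, assuming a heap (array-backed) buffer; the class and method names are illustrative, not Derby's:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class ReadIntoBuffer {
    // Read from the stream directly into the ByteBuffer's backing array,
    // capped by both the caller's limit (e.g. spareDssLength) and the
    // buffer's remaining space, then advance the position to match.
    static int readInto(InputStream in, ByteBuffer buffer, int max)
            throws IOException {
        int n = in.read(buffer.array(), buffer.position(),
                        Math.min(max, buffer.remaining()));
        if (n > 0) {
            buffer.position(buffer.position() + n);
        }
        return n; // -1 on end of stream, as per InputStream.read
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8);
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});
        int n = readInto(in, buf, 3); // capped at 3 bytes
        System.out.println(n);              // 3
        System.out.println(buf.position()); // 3
    }
}
```

This only works for array-backed buffers (`buffer.array()` throws for direct buffers), which holds here since DDMWriter allocates with `ByteBuffer.allocate()`.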
derby-DERBY-2936-a1f00681
|
DERBY-2936 (partial) Use java.nio.ByteBuffer for buffering in DDMWriter
Description of the patch:
* replaces all occurrences of bytes[xxx] with absolute buffer.get/put methods
* replaces calls to Arrays.fill() + buffer.position() with calls to
the existing padBytes() method
* makes CcsidManager.convertFromUCS2() take a ByteBuffer instead of
byte array + offset
* removes the original writeBigDecimal() method and renames
bigDecimalToPackedDecimalBytes() to writeBigDecimal()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@568039 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/CcsidManager.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
""
],
"header": "@@ -20,6 +20,8 @@",
"removed": []
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [
"import java.math.BigDecimal;"
],
"header": "@@ -25,6 +25,7 @@ import java.io.BufferedOutputStream;",
"removed": []
},
{
"added": [
"\t\t\tbyte b = (byte) (buffer.get(dssLengthLocation) | 0x80);",
"\t\t\tbuffer.put(dssLengthLocation, b);"
],
"header": "@@ -217,7 +218,8 @@ class DDMWriter",
"removed": [
"\t\t\tbytes[dssLengthLocation] |= 0x80;"
]
},
{
"added": [
"\t\toverrideChainByte(dssLengthLocation + 3, chainByte);",
" /**",
" * Override the default chaining byte with the chaining byte that is passed",
" * in.",
" *",
" * @param pos the position on which the chaining byte is located",
" * @param chainByte the chaining byte that overrides the default",
" */",
" private void overrideChainByte(int pos, byte chainByte) {",
" byte b = buffer.get(pos);",
" b &= 0x0F; // Zero out default",
" b |= chainByte;",
" buffer.put(pos, b);",
" }",
""
],
"header": "@@ -248,12 +250,25 @@ class DDMWriter",
"removed": [
"\t\tbytes[dssLengthLocation + 3] &= 0x0F;\t// Zero out default",
"\t\tbytes[dssLengthLocation + 3] |= chainByte;"
]
},
{
"added": [
"\t\tbuffer.position(start);",
"\t\tbuffer.get(temp);"
],
"header": "@@ -322,7 +337,8 @@ class DDMWriter",
"removed": [
"\t\tSystem.arraycopy(bytes,start,temp,0,length);"
]
},
{
"added": [
"\t\t\t// write the extended length (a variable number of bytes in",
"\t\t\t// big-endian order)",
"\t\t\tfor (int pos = extendedLengthLocation + extendedLengthByteCount - 1;",
"\t\t\t\t pos >= extendedLengthLocation; pos--) {",
"\t\t\t\tbuffer.put(pos, (byte) extendedLength);",
"\t\t\t\textendedLength >>= 8;"
],
"header": "@@ -402,13 +418,12 @@ class DDMWriter",
"removed": [
"\t\t\t// write the extended length",
"\t\t\tint shiftSize = (extendedLengthByteCount -1) * 8;",
"\t\t\tfor (int i = 0; i < extendedLengthByteCount; i++)",
"\t\t\t{",
"\t\t\t\tbytes[extendedLengthLocation++] =",
"\t\t\t\t\t(byte) ((extendedLength >>> shiftSize ) & 0xff);",
"\t\t\t\tshiftSize -= 8;"
]
},
{
"added": [
"\t\tbuffer.putShort(lengthLocation, (short) length);"
],
"header": "@@ -423,9 +438,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[lengthLocation] = (byte) ((length >>> 8) & 0xff);",
"\t\tbytes[lengthLocation+1] = (byte) (length & 0xff);",
""
]
},
{
"added": [
"\t\tbuffer.putShort(dssLengthLocation, (short) 0xFFFF);"
],
"header": "@@ -727,8 +740,7 @@ class DDMWriter",
"removed": [
" \t\tbytes[dssLengthLocation] = (byte) 0xFF;",
" \t\tbytes[dssLengthLocation + 1] = (byte) 0xFF;"
]
},
{
"added": [
"\t\tbuffer.put(dssLengthLocation + 3, (byte) dssType);"
],
"header": "@@ -737,7 +749,7 @@ class DDMWriter",
"removed": [
"\t\tbytes[dssLengthLocation + 3] = (byte) (dssType & 0xff);"
]
},
{
"added": [
"\t\tccsidManager.convertFromUCS2(string, buffer);"
],
"header": "@@ -880,8 +892,7 @@ class DDMWriter",
"removed": [
"\t\tbuffer.position(",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position()));"
]
},
{
"added": [
"\t\tccsidManager.convertFromUCS2(string, buffer);",
"\t\tpadBytes(ccsidManager.space, fillLength);"
],
"header": "@@ -899,11 +910,8 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset =",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position());",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
]
},
{
"added": [
"\t\tccsidManager.convertFromUCS2(string, buffer);",
"\t\tpadBytes(ccsidManager.space, fillLength);"
],
"header": "@@ -919,11 +927,8 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset =",
"\t\t\tccsidManager.convertFromUCS2(string, bytes, buffer.position());",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
]
},
{
"added": [
"\t\tpadBytes(ccsidManager.space, fillLength);"
],
"header": "@@ -939,10 +944,7 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + fillLength;",
"\t\tArrays.fill(bytes, offset, end, ccsidManager.space);",
"\t\tbuffer.position(end);"
]
},
{
"added": [
"\t\tpadBytes(padByte, paddedLength - buf.length);"
],
"header": "@@ -959,10 +961,7 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + (paddedLength - buf.length);",
"\t\tArrays.fill(bytes, offset, end, padByte);",
"\t\tbuffer.position(end);"
]
},
{
"added": [
"\t\tpadBytes(padByte, paddedLength - buf.length);"
],
"header": "@@ -976,10 +975,7 @@ class DDMWriter",
"removed": [
"\t\tfinal int offset = buffer.position();",
"\t\tfinal int end = offset + (paddedLength - buf.length);",
"\t\tArrays.fill(bytes, offset, end, padByte);",
"\t\tbuffer.position(end);"
]
},
{
"added": [],
"header": "@@ -1084,23 +1080,6 @@ class DDMWriter",
"removed": [
"\t/**",
"\t * Write big decimal to buffer",
"\t *",
"\t * @param v value to write",
"\t * @param precision Precison of decimal or numeric type",
"\t * @param scale declared scale",
"\t * @exception SQLException thrown if number of digits > 31",
"\t */",
"\tprotected void writeBigDecimal (java.math.BigDecimal v, int precision, int scale)",
"\t\tthrows SQLException",
"\t{",
"\t\tint length = precision / 2 + 1;",
"\t\tensureLength(length);",
"\t\tbigDecimalToPackedDecimalBytes (v,precision, scale);",
"\t\tbuffer.position(buffer.position() + length);",
"\t}",
""
]
},
{
"added": [
"\t\tArrays.fill(buffer.array(), offset, end, val);"
],
"header": "@@ -1210,7 +1189,7 @@ class DDMWriter",
"removed": [
"\t\tArrays.fill(bytes, offset, end, val);"
]
},
{
"added": [
"\t\tfinal byte[] bytes = buffer.array();"
],
"header": "@@ -1235,6 +1214,7 @@ class DDMWriter",
"removed": []
},
{
"added": [
"",
"\t\t\t// We're going to access the buffer with absolute positions, so",
"\t\t\t// just move the current position pointer right away to where it's",
"\t\t\t// supposed to be after we have finished the shifting."
],
"header": "@@ -1354,6 +1334,10 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\t\t\t// perform the shift directly on the backing array",
"\t\t\t\tfinal byte[] bytes = buffer.array();"
],
"header": "@@ -1413,6 +1397,8 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\t\t\tbuffer.putShort(dataByte + shiftSize - 1,",
"\t\t\t\t\t\t\t\t(short) twoByteContDssHeader);"
],
"header": "@@ -1434,10 +1420,8 @@ class DDMWriter",
"removed": [
"\t\t\t\tbytes[dataByte + shiftSize - 1] = (byte)",
"\t\t\t\t\t((twoByteContDssHeader >>> 8) & 0xff);",
"\t\t\t\tbytes[dataByte + shiftSize] = (byte)",
"\t\t\t\t\t(twoByteContDssHeader & 0xff);"
]
},
{
"added": [
"\t * @param scale declared scale",
"\tvoid writeBigDecimal(BigDecimal b, int precision, int scale)",
" final int encodedLength = precision / 2 + 1;",
" ensureLength(encodedLength);",
"",
" // The bytes are processed from right to left. Therefore, save starting",
" // offset and use absolute positioning.",
" final int offset = buffer.position();",
" // Move current position to the end of the encoded decimal.",
" buffer.position(offset + encodedLength);",
""
],
"header": "@@ -1525,14 +1509,22 @@ class DDMWriter",
"removed": [
"\t * @return length written.",
"\tprivate int bigDecimalToPackedDecimalBytes (java.math.BigDecimal b,",
"\t\t\t\t\t\t\t\t\t\t\t\tint precision, int scale)"
]
},
{
"added": [
" byte signByte = (byte) ((b.signum() >= 0) ? 12 : 13);"
],
"header": "@@ -1585,7 +1577,7 @@ class DDMWriter",
"removed": [
" final int offset = buffer.position();"
]
},
{
"added": [
" if (bigIndex >= 0) {",
" // process the last nybble together with the sign nybble.",
" signByte |= (unscaledStr.charAt(bigIndex) - zeroBase) << 4;",
" buffer.put(offset + (packedIndex+1)/2, signByte);"
],
"header": "@@ -1594,17 +1586,11 @@ class DDMWriter",
"removed": [
" if (bigIndex < 0) {",
" // all digits are discarded, so only process the sign nybble.",
" bytes[offset+(packedIndex+1)/2] =",
" (byte) ( (b.signum()>=0)?12:13 ); // sign nybble",
" }",
" else {",
" // process the last nybble together with the sign nybble.",
" bytes[offset+(packedIndex+1)/2] =",
" (byte) ( ( (unscaledStr.charAt(bigIndex)-zeroBase) << 4 ) + // last nybble",
" ( (b.signum()>=0)?12:13 ) ); // sign nybble"
]
},
{
"added": [
" buffer.put(offset + (packedIndex+1)/2, signByte);",
" buffer.put(offset + (packedIndex+1)/2, (byte) 0);",
" byte bt = (byte)",
" ((unscaledStr.charAt(bigPrecision - 1) - zeroBase) << 4);",
" buffer.put(offset + (packedIndex+1)/2, bt);"
],
"header": "@@ -1616,16 +1602,15 @@ class DDMWriter",
"removed": [
" bytes[offset+(packedIndex+1)/2] =",
" (byte) ( (b.signum()>=0)?12:13 ); // sign nybble",
" bytes[offset+(packedIndex+1)/2] = (byte) 0;",
" bytes[offset+(packedIndex+1)/2] =",
" (byte) ( (unscaledStr.charAt(bigPrecision-1)-zeroBase) << 4 ); // high nybble",
""
]
},
{
"added": [
" byte bt = (byte)",
" (((unscaledStr.charAt(bigIndex)-zeroBase) << 4) | // high nybble",
" (unscaledStr.charAt(bigIndex+1)-zeroBase)); // low nybble",
" buffer.put(offset + (packedIndex+1)/2, bt);",
" buffer.put(offset + (packedIndex+1)/2,",
" (byte) (unscaledStr.charAt(0) - zeroBase));",
" packedIndex-=2;",
" buffer.put(offset + (packedIndex+1)/2, (byte) 0);"
],
"header": "@@ -1636,24 +1621,23 @@ class DDMWriter",
"removed": [
" bytes[offset+(packedIndex+1)/2] =",
" (byte) ( ( (unscaledStr.charAt(bigIndex)-zeroBase) << 4 ) + // high nybble",
" ( unscaledStr.charAt(bigIndex+1)-zeroBase ) ); // low nybble",
" bytes[offset+(packedIndex+1)/2] =",
" (byte) (unscaledStr.charAt(0) - zeroBase);",
" packedIndex-=2;",
" bytes[offset+(packedIndex+1)/2] = (byte) 0;",
"",
" return declaredPrecision/2 + 1;"
]
},
{
"added": [
"\tfinal byte[] bytes = buffer.array();",
"\tfinal int length = buffer.position();",
" socketOutputStream.write(bytes, 0, length);"
],
"header": "@@ -1704,9 +1688,10 @@ class DDMWriter",
"removed": [
"\tfinal int offset = buffer.position();",
" socketOutputStream.write (bytes, 0, offset);"
]
},
{
"added": [
"\t\t\t length,"
],
"header": "@@ -1714,7 +1699,7 @@ class DDMWriter",
"removed": [
"\t\t\t offset,"
]
},
{
"added": [
"\t\tint len = (buffer != null) ? buffer.capacity() : 0;",
"\t\ts += indent + \"byte buffer length = \" + len + \"\\n\";"
],
"header": "@@ -1727,10 +1712,8 @@ class DDMWriter",
"removed": [
"\t\tint byteslen = 0;",
"\t\tif ( bytes != null)",
"\t\t\tbyteslen = bytes.length;",
"\t\ts += indent + \"byte array length = \" + bytes.length + \"\\n\";"
]
},
{
"added": [
"\t\t\toverrideChainByte(prevHdrLocation + 3, currChainByte);"
],
"header": "@@ -1796,8 +1779,7 @@ class DDMWriter",
"removed": [
"\t\t\tbytes[prevHdrLocation + 3] &= 0x0F;\t// Zero out old chain value.",
"\t\t\tbytes[prevHdrLocation + 3] |= currChainByte;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/EbcdicCcsidManager.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
""
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/TestProto.java",
"hunks": [
{
"added": [
"\t\treturn ccsidManager.convertFromUCS2(str);"
],
"header": "@@ -953,9 +953,7 @@ public class TestProto {",
"removed": [
"\t\tbyte [] buf = new byte[str.length()];",
"\t\tccsidManager.convertFromUCS2(str, buf, 0);",
"\t\treturn buf;"
]
}
]
}
] |
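The writeBigDecimal hunks above encode decimals in packed-decimal form: two digits per byte, with the final low nybble holding the sign (12, i.e. 0xC, for non-negative; 13, i.e. 0xD, for negative). A minimal illustration of that layout for small unscaled values; it ignores the precision/scale handling Derby performs, and the names are illustrative:

```java
public class PackedDecimal {
    // Pack a digit string into packed-decimal bytes: two digits per byte,
    // sign nybble (0xC positive, 0xD negative) in the last low nybble.
    static byte[] pack(String digits, boolean negative) {
        // Pad to an odd digit count so digits + sign fill whole bytes.
        if (digits.length() % 2 == 0) {
            digits = "0" + digits;
        }
        byte[] out = new byte[digits.length() / 2 + 1];
        int i = 0;
        for (; i < digits.length() - 1; i += 2) {
            out[i / 2] = (byte) (((digits.charAt(i) - '0') << 4)
                                 | (digits.charAt(i + 1) - '0'));
        }
        // Last digit shares its byte with the sign nybble.
        out[out.length - 1] = (byte) (((digits.charAt(i) - '0') << 4)
                                      | (negative ? 13 : 12));
        return out;
    }

    public static void main(String[] args) {
        byte[] b = pack("12345", false);
        System.out.printf("%02X %02X %02X%n", b[0], b[1], b[2]); // 12 34 5C
    }
}
```

Because the digits are laid down right to left relative to the sign, the diff saves the starting offset and uses absolute `buffer.put(index, byte)` calls rather than relative puts.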
derby-DERBY-2937-8fe95616
|
DERBY-2937 Add a test case for the error seen in this bug, and ensure that when a column
reference is created for an aggregate node, it points to its source.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@561860 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2937-d1fbe3c3
|
While tracing the types of various nodes created by the failing query for DERBY-2937, I saw
that some nodes had types of CHAR(0), which makes no sense. These were NULL constant nodes
for which no type attributes were being passed in. Change QueryTreeNode.getNullNode() to
take a DataTypeDescriptor as a parameter and set the type of the constant node to that type.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@561806 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/AggregateNode.java",
"hunks": [
{
"added": [
"\t\tDataTypeDescriptor compType =",
" DataTypeDescriptor.getSQLDataTypeDescriptor(className);",
"\t\tConstantNode nullNode = getNullNode(compType);"
],
"header": "@@ -498,18 +498,14 @@ public class AggregateNode extends UnaryOperatorNode",
"removed": [
"\t\tTypeId compTypeId = TypeId.getSQLTypeForJavaType(className);",
"\t\tConstantNode nullNode = getNullNode(",
"\t\t\t\tcompTypeId,",
"\t\t\t\tgetContextManager(),",
"\t\t\t\tgetTypeServices().getCollationType(),",
"\t\t\t\tgetTypeServices().getCollationDerivation()",
"\t\t\t\t); // no params"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumn.java",
"hunks": [
{
"added": [
"\t\texpression = getNullNode(getTypeServices());"
],
"header": "@@ -319,9 +319,7 @@ public class ResultColumn extends ValueNode",
"removed": [
"\t\texpression = getNullNode(getTypeId(), ",
"\t\t\t\t\t\t\t\t\tgetContextManager(), getTypeServices().getCollationType(),",
"\t\t\t\t\t\t\t\t\tgetTypeServices().getCollationDerivation());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultSetNode.java",
"hunks": [
{
"added": [
" getNullNode(colType),"
],
"header": "@@ -1184,12 +1184,7 @@ public abstract class ResultSetNode extends QueryTreeNode",
"removed": [
" getNullNode(",
" colType.getTypeId(),",
" getContextManager(), ",
"\t\t\t\t\t\tcolType.getCollationType(),",
" colType.getCollationDerivation()",
" ),"
]
}
]
}
] |
derby-DERBY-2939-10b43853
|
DERBY-2939: Log file is flushed every time a log buffer gets full
Contributed by Jørgen Løland
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@579511 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/LogAccessFile.java",
"hunks": [
{
"added": [
" In case of a large log record that does not fit into a buffer, the",
" checksum is written to the byte[] allocated for the big log",
" record. "
],
"header": "@@ -68,12 +68,9 @@ import org.apache.derby.iapi.store.raw.RawStoreFactory;",
"removed": [
" In case of a large log record that does not fit into a bufffer, it needs to ",
" be written directly to the disk instead of going through the log buffers. ",
" In this case the log record write gets broken into three parts:",
" 1) Write checksum log record and LOG RECORD HEADER (length + instant) ",
" 2) Write the log record. ",
" 3) Write the trailing length of the log record. "
]
},
{
"added": [],
"header": "@@ -114,7 +111,6 @@ public class LogAccessFile",
"removed": [
"\tprivate boolean directWrite = false; //true when log is written directly to file."
]
},
{
"added": [
"\t\t\t * this space when buffer is switched"
],
"header": "@@ -197,7 +193,7 @@ public class LogAccessFile",
"removed": [
"\t\t\t * this space when buffer is switched or while doing direct write to the log file."
]
},
{
"added": [],
"header": "@@ -209,9 +205,6 @@ public class LogAccessFile",
"removed": [
"\tprivate byte[] db = new byte[LOG_RECORD_TRAILER_SIZE]; ",
"",
""
]
},
{
"added": [
" if (total_log_record_length <= currentBuffer.bytes_free) {",
" int newpos = appendLogRecordToBuffer(currentBuffer.buffer,",
" currentBuffer.position,",
" length, ",
" instant, ",
" data, ",
" data_offset,",
" optional_data,",
" optional_data_offset,",
" optional_data_length);",
" currentBuffer.position = newpos;",
" if (SanityManager.DEBUG) {",
" int normalizedPosition = currentBuffer.position;",
" if (writeChecksum) {",
" normalizedPosition -= checksumLogRecordSize;",
" }",
" SanityManager.ASSERT(",
" currentBuffer.bytes_free + normalizedPosition ==",
" currentBuffer.length,",
" \"free_bytes and position do not add up to the total \" +",
" \"length of the buffer\");",
" }",
" } else {",
" /* The current log record will never fit in a single",
" * buffer. The reason is that reserveSpaceForChecksum is",
" * always called before writeLogRecord (see",
" * LogToFile#appendLogRecord). When we reach this point,",
" * reserveSpaceForChecksum has already found out that the",
" * previous buffer did not have enough free bytes to store",
" * this log record, and therefore switched to a fresh",
" * buffer. Hence, currentBuffer is empty now, and",
" * switching to the next free buffer will not help. Since",
" * there is no way for this log record to fit into a",
" * buffer, it is written to a new, big enough, byte[] and",
" * then written to log file instead of writing it to",
" * buffer.",
" */",
"",
" // allocate a byte[] that is big enough to contain the",
" // giant log record:",
" int bigBufferLength =",
" checksumLogRecordSize + total_log_record_length;",
" byte[] bigbuffer = new byte[bigBufferLength];",
" appendLogRecordToBuffer(bigbuffer, checksumLogRecordSize,",
" length, ",
" instant, ",
" data, ",
" data_offset,",
" optional_data,",
" optional_data_offset,",
" optional_data_length);",
"",
" // write checksum to bigbuffer",
" if(writeChecksum) {",
" checksumLogOperation.reset();",
" checksumLogOperation.update(bigbuffer, checksumLogRecordSize,",
" total_log_record_length);",
"",
" writeChecksumLogRecord(bigbuffer);",
" }",
" // flush all buffers before writing the bigbuffer to the",
" // log file.",
" flushLogAccessFile();",
" // Note:No Special Synchronization required here , There",
" // will be nothing to write by flushDirtyBuffers that can",
" // run in parallel to the threads that is executing this",
" // code. Above flush call should have written all the",
" // buffers and NO new log will get added until the",
" // following direct log to file call finishes.",
"\t\t\t// write the log record directly to the log file.",
" writeToLog(bigbuffer, 0, bigBufferLength);",
" }",
" }",
" /**",
" * Append a log record to a byte[]. Typically, the byte[] will be",
" * currentBuffer, but if a log record that is too big to fit in a",
" * buffer is added, buff will be a newly allocated byte[].",
" *",
" * @param buff The byte[] the log record is appended to",
" * @param pos The position in buff where the method will start to",
" * append to",
" * @param length (data + optional_data) length bytes to write",
" * @param instant the log address of this log record.",
" * @param data \"from\" array to copy \"data\" portion of rec",
" * @param data_offset offset in \"data\" to start copying from.",
" * @param optional_data \"from\" array to copy \"optional data\" from",
" * @param optional_data_offset offset in \"optional_data\" to start copy from",
" * @param optional_data_length length of optional data to copy.",
" *",
" * @see writeLogRecord",
" */",
" private int appendLogRecordToBuffer(byte[] buff, int pos,",
" int length,",
" long instant,",
" byte[] data,",
" int data_offset,",
" byte[] optional_data,",
" int optional_data_offset,",
" int optional_data_length) {",
"",
" pos = writeInt(length, buff, pos);",
" pos = writeLong(instant, buff, pos);",
"",
" int data_length = length - optional_data_length;",
" System.arraycopy(data, data_offset,",
" buff, pos,",
" data_length);",
" pos += data_length;",
"",
" if (optional_data_length != 0) {",
" System.arraycopy(optional_data, optional_data_offset, ",
" buff, pos, ",
" optional_data_length);",
" pos += optional_data_length;",
" }",
" pos = writeInt(length, buff, pos);",
" return pos;"
],
"header": "@@ -258,114 +251,131 @@ public class LogAccessFile",
"removed": [
"\t\tif (total_log_record_length <= currentBuffer.bytes_free)",
" {",
" byte[] b = currentBuffer.buffer;",
" int p = currentBuffer.position;",
"",
" // writeInt(length)",
"\t\t\tp = writeInt(length, b, p);",
" ",
" // writeLong(instant)",
"\t\t\tp = writeLong(instant, b , p);",
"",
" // write(data, data_offset, length - optional_data_length)",
" int transfer_length = (length - optional_data_length);",
"\t\t\tSystem.arraycopy(data, data_offset, b, p, transfer_length);",
"",
" p += transfer_length;",
"",
" if (optional_data_length != 0)",
" {",
" // write(",
" // optional_data, optional_data_offset, optional_data_length);",
"",
" System.arraycopy(",
" optional_data, optional_data_offset, ",
" b, p, ",
" optional_data_length);",
"",
" p += optional_data_length;",
" }",
"",
" // writeInt(length)",
"\t\t\tp = writeInt(length, b, p);",
" ",
"\t\t\tcurrentBuffer.position = p;",
"\t\t}",
" else",
" {",
"\t\t\t",
"\t\t\t/** Because current log record will never fit in a single buffer",
"\t\t\t * a direct write to the log file is required instead of ",
"\t\t\t * writing the log record through the log bufffers. ",
"\t\t\t */",
"\t\t\tdirectWrite = true;",
"",
"\t\t\tbyte[] b = currentBuffer.buffer;",
" int p = currentBuffer.position;",
"",
" // writeInt(length)",
"\t\t\tp = writeInt(length , b, p);",
" ",
" // writeLong(instant)",
"\t\t\tp = writeLong(instant, b, p);",
"",
"\t\t\tcurrentBuffer.position = p;",
"\t\t\tcurrentBuffer.bytes_free -= LOG_RECORD_HEADER_SIZE;",
"",
"\t\t\t/** using a seperate small buffer to write the traling length",
"\t\t\t * instead of the log buffer because data portion will be ",
"\t\t\t * written directly to log file after the log buffer is ",
"\t\t\t * flushed and the trailing length should be written after that. ",
"\t\t\t */",
"",
"\t\t\t// writeInt(length)",
"\t\t\twriteInt(length , db, 0);",
"\t\t\tif(writeChecksum)",
"\t\t\t{",
"\t\t\t\tchecksumLogOperation.reset();",
"\t\t\t\tchecksumLogOperation.update(b, checksumLogRecordSize, p - checksumLogRecordSize);",
"\t\t\t\tchecksumLogOperation.update(data, data_offset, length - optional_data_length);",
"\t\t\t\tif (optional_data_length != 0)",
"\t\t\t\t{",
"\t\t\t\t\tchecksumLogOperation.update(optional_data, optional_data_offset, optional_data_length);\t",
"\t\t\t\t}",
"\t\t\t\t// update the checksum to include the trailing length.",
"\t\t\t\tchecksumLogOperation.update(db, 0, LOG_RECORD_TRAILER_SIZE);",
"\t\t\t",
"\t\t\t\t// write checksum log record to the log buffer ",
"\t\t\t\twriteChecksumLogRecord();",
"\t\t\t}",
"\t\t\t",
"\t\t\t",
"\t\t\t// now do the writes directly to the log file. ",
"\t\t\t// flush all buffers before wrting directly to the log file. ",
"\t\t\tflushLogAccessFile();",
"\t\t\t// Note:No Special Synchronization required here , ",
"\t\t\t// There will be nothing to write by flushDirtyBuffers that can run",
"\t\t\t// in parallel to the threads that is executing this code. Above",
"\t\t\t// flush call should have written all the buffers and NO new log will ",
"\t\t\t// get added until the following direct log to file call finishes. ",
"\t\t\t// write the rest of the log directltly to the log file. ",
" writeToLog(data, data_offset, length - optional_data_length);",
" if (optional_data_length != 0)",
" {",
" writeToLog(",
" optional_data, optional_data_offset, optional_data_length);",
" }",
"\t\t\t// write the trailing length ",
"\t\t\twriteToLog(db,0, 4);",
"\t\t\tdirectWrite = false;",
"\t\t}"
]
},
{
"added": [
"\t\t\tif(writeChecksum)",
"\t\t\t\twriteChecksumLogRecord(currentBuffer.buffer);"
],
"header": "@@ -571,11 +581,11 @@ public class LogAccessFile",
"removed": [
"\t\t\tif(writeChecksum && !directWrite)",
"\t\t\t\twriteChecksumLogRecord();"
]
},
{
"added": [
"\t/**",
"\t * Generate the checkum log record and write it into the log",
"\t * buffer. The checksum applies to all bytes from this checksum",
"\t * log record to the next one. ",
" * @param buffer The byte[] the checksum is written to. The",
" * checksum is always written at the beginning of buffer.",
"\tprivate void writeChecksumLogRecord(byte[] buffer)",
"\t\tthrows IOException, StandardException{",
"\t\tp = writeInt(checksumLength, buffer, p);",
"\t\tp = writeLong(checksumInstant, buffer, p);",
"\t\tlogOutputBuffer.setData(buffer);"
],
"header": "@@ -803,23 +813,26 @@ public class LogAccessFile",
"removed": [
"\t/*",
"\t * generate the checkum log record and write it into the log buffer.",
"\tprivate void writeChecksumLogRecord() throws IOException, StandardException",
"\t{",
"\t\tbyte[] b = currentBuffer.buffer;",
"\t\tp = writeInt(checksumLength, b , p);",
"\t\tp = writeLong(checksumInstant, b , p);",
"\t\tlogOutputBuffer.setData(b);"
]
},
{
"added": [
"\t\t\t\tlogFactory.encrypt(buffer, LOG_RECORD_HEADER_SIZE, checksumLength, ",
"\t\t\t\t\t\t\t\t buffer, LOG_RECORD_HEADER_SIZE);"
],
"header": "@@ -827,8 +840,8 @@ public class LogAccessFile",
"removed": [
"\t\t\t\tlogFactory.encrypt(b, LOG_RECORD_HEADER_SIZE, checksumLength, ",
"\t\t\t\t\t\t\t\t b, LOG_RECORD_HEADER_SIZE);"
]
},
{
"added": [
"\t\tp = writeInt(checksumLength, buffer, p );"
],
"header": "@@ -839,7 +852,7 @@ public class LogAccessFile",
"removed": [
"\t\tp = writeInt(checksumLength, b, p );"
]
}
]
}
] |
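The DERBY-2939 entry above reworks LogAccessFile so that a log record too big for any buffer is assembled in a single large byte[] using the same on-disk layout as the normal buffers (leading int length, long instant, payload, trailing int length), instead of being split across direct file writes. A minimal sketch of that length-prefixed layout; the class and method names here are illustrative, not Derby's actual code:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the record layout the patch's
// appendLogRecordToBuffer writes: int length, long instant,
// payload bytes, then the length again as a trailer so the log
// can also be scanned backwards.
public class LogRecordSketch {
    static int append(byte[] buf, int pos, long instant, byte[] data) {
        ByteBuffer bb = ByteBuffer.wrap(buf, pos, buf.length - pos);
        bb.putInt(data.length);   // leading length
        bb.putLong(instant);      // log address of this record
        bb.put(data);             // record payload
        bb.putInt(data.length);   // trailing length (for backward scans)
        return bb.position();     // new append position in buf
    }

    public static void main(String[] args) {
        byte[] payload = {1, 2, 3};
        // header (4 + 8 bytes) + payload + trailer (4 bytes)
        byte[] buf = new byte[16 + payload.length];
        int newPos = append(buf, 0, 42L, payload);
        System.out.println(newPos); // 19
    }
}
```

Because every record carries this fixed overhead, the patch can size the "big" buffer exactly as checksum-record size plus total record length before appending.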
derby-DERBY-2946-ea5ca3f8
|
DERBY-2946
The character string literals take their collation from the current compilation schema. Derby's metadata queries have lots of comparisons where a character string literal is compared with a character string column from the SYS schema. The character string columns from the SYS schema have the collation UCS_BASIC. If the metadata queries get run with a user schema as the current compilation schema, then the character string constants in the metadata queries will get a territory-based collation, and this mismatch between the collation of the character string constants and the character string columns will cause the metadata queries to fail. This situation can arise in the current soft-upgrade code. In soft-upgrade mode, we do not ensure that the current compilation schema is the SYS schema. A simple change in GenericLanguageConnectionContext (GLCC) takes care of that problem. In GLCC, with this checkin, we check if the query being executed is a metadata query and, if so, set the current compilation schema to the SYS schema for that metadata query's execution. This takes care of the soft-upgrade problem. Outside of soft-upgrade mode, we do not have problems with metadata queries because during a normal run/hard upgrade, we go to SYSSTATEMENTS to run metadata queries, and that code path ensures that the current compilation schema is the SYS schema.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@572880 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2949-db8af5c2
|
DERBY-2949: AssertionFailedError in testStalePlansOnLargeTable
Use explicit checkpoints to make the test independent of the timing of
the implicit checkpoints.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043389 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2951-197b3d3c
|
This commit has 2 simple fixes (DERBY-2951, which gives an assert failure, and DERBY-2959, "The table will have collation type UCS_BASIC which is different than the collation of the schema TERRITORY_BASED hence this operation is not supported.").
The failure in DERBY-2951 is because, in store, we were not using the correct format id, and hence collation information was not getting written out to and read from disk. Added a test case for this in CollationTest.
The failure in DERBY-2959 was because we were comparing collation types for non-character types. Collation is only applicable to character types, and hence we should check for character types before comparing the collation info. Added a test case for this one too.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2955-27791253
|
DERBY-2955
We used to set the collation type of character string columns in the generate phase rather than the bind phase of create table. But this causes a problem with the following query:
CREATE TABLE STAFF9 (EMPNAME CHAR(20),
CONSTRAINT STAFF9_EMPNAME CHECK (EMPNAME NOT LIKE 'T%'))
For the query above, when run in a territory-based db, we need to have the correct collation set in the bind phase of create table so that when LIKE is handled in LikeEscapeOperatorNode, we have the correct collation set for EMPNAME; otherwise it will throw an exception because 'T%' has territory-based collation and EMPNAME has the default collation of UCS_BASIC. The change in this commit ensures that character string columns get their collation set early on, in the bind phase, so that when the bind code for LIKE kicks in, we have the correct collation information.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557886 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CreateTableNode.java",
"hunks": [
{
"added": [
"\t\t} else {",
"\t\t\t//Set the collation type and collation derivation of all the ",
"\t\t\t//character type columns. Their collation type will be same as the ",
"\t\t\t//collation of the schema they belong to. Their collation ",
"\t\t\t//derivation will be \"implicit\". ",
"\t\t\t//Earlier we did this in makeConstantAction but that is little too ",
"\t\t\t//late (DERBY-2955)",
"\t\t\t//eg ",
"\t\t\t//CREATE TABLE STAFF9 (EMPNAME CHAR(20),",
"\t\t\t// CONSTRAINT STAFF9_EMPNAME CHECK (EMPNAME NOT LIKE 'T%'))",
"\t\t\t//For the query above, when run in a territory based db, we need ",
"\t\t\t//to have the correct collation set in bind phase of create table ",
"\t\t\t//so that when LIKE is handled in LikeEscapeOperatorNode, we have ",
"\t\t\t//the correct collation set for EMPNAME otherwise it will throw an ",
"\t\t\t//exception for 'T%' having collation of territory based and ",
"\t\t\t//EMPNAME having the default collation of UCS_BASIC",
"\t\t\ttableElementList.setCollationTypesOnCharacterStringColumns(",
"\t\t\t\t\tgetSchemaDescriptor());"
],
"header": "@@ -348,6 +348,24 @@ public class CreateTableNode extends DDLStatementNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/TableElementList.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.StringDataValue;"
],
"header": "@@ -29,6 +29,7 @@ import org.apache.derby.iapi.sql.compile.CompilerContext;",
"removed": []
}
]
}
] |
derby-DERBY-2959-197b3d3c
|
This commit has 2 simple fixes (DERBY-2951, which gives an assert failure, and DERBY-2959, "The table will have collation type UCS_BASIC which is different than the collation of the schema TERRITORY_BASED hence this operation is not supported.").
The failure in DERBY-2951 is because, in store, we were not using the correct format id, and hence collation information was not getting written out to and read from disk. Added a test case for this in CollationTest.
The failure in DERBY-2959 was because we were comparing collation types for non-character types. Collation is only applicable to character types, and hence we should check for character types before comparing the collation info. Added a test case for this one too.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@557693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
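The DERBY-2959 fix above boils down to a guard: collation metadata is only meaningful for character string types, so two type descriptors should be compared on collation only when the type is a character type. A minimal sketch of that rule, with hypothetical names (this is not Derby's actual descriptor API):

```java
// Hypothetical sketch of the DERBY-2959 rule: collation mismatch only
// matters for character string types; for INT, DATE, etc. it is ignored.
public class CollationCheckSketch {
    static boolean collationsCompatible(boolean isCharacterType,
                                        int collationA, int collationB) {
        if (!isCharacterType) {
            return true;    // collation is irrelevant for non-char types
        }
        return collationA == collationB;
    }

    public static void main(String[] args) {
        final int UCS_BASIC = 0, TERRITORY_BASED = 1;
        // An INT column: differing stored collation ids must not matter.
        System.out.println(collationsCompatible(false, UCS_BASIC, TERRITORY_BASED));
        // A CHAR column: the mismatch is a real incompatibility.
        System.out.println(collationsCompatible(true, UCS_BASIC, TERRITORY_BASED));
    }
}
```

The bug was effectively the missing `isCharacterType` check: comparing collation ids unconditionally made unrelated types look incompatible.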
derby-DERBY-2961-27f9fd60
|
DERBY-2961
This commit fixes the ASSERT failure thrown by the SELECT statement in following query
CREATE TABLE T_MAIN1 (ID INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, V XML);
INSERT INTO T_MAIN1(V) VALUES NULL;
SELECT ID, XMLSERIALIZE(V AS CLOB), XMLSERIALIZE(V AS CLOB) FROM T_MAIN1 ORDER BY 1;
The SELECT statement was resulting in an assert failure because the StringDataValue generated for V AS CLOB was not taking the collation type into consideration, i.e. it was always generating a collation-insensitive StringDataValue. I have fixed that problem by passing the current compilation schema's collation type to SqlXmlExecutor, which then gets used in determining whether, for instance, we should generate SQLChar vs CollatorSQLChar. This collation information is required only for character string types.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@567735 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/XML.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.conn.ConnectionUtil;"
],
"header": "@@ -31,6 +31,7 @@ import org.apache.derby.iapi.services.io.Storable;",
"removed": []
},
{
"added": [
"import java.text.RuleBasedCollator;"
],
"header": "@@ -41,6 +42,7 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": []
},
{
"added": [
" int targetType, int targetWidth, int targetCollationType) ",
" throws StandardException"
],
"header": "@@ -674,7 +676,8 @@ public class XML",
"removed": [
" int targetType, int targetWidth) throws StandardException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/SqlXmlExecutor.java",
"hunks": [
{
"added": [
" // Target type, target width and target collation type that ",
" // were specified for an XMLSERIALIZE operator.",
" private int targetCollationType;"
],
"header": "@@ -126,10 +126,11 @@ public class SqlXmlExecutor {",
"removed": [
" // Target type and target width that were specified",
" // for an XMLSERIALIZE operator."
]
},
{
"added": [
" * @param targetCollationType The collation type of the target type.",
" public SqlXmlExecutor(int targetTypeId, int targetMaxWidth, ",
" \t\tint targetCollationType)",
" this.targetCollationType = targetCollationType;"
],
"header": "@@ -153,11 +154,14 @@ public class SqlXmlExecutor {",
"removed": [
" public SqlXmlExecutor(int targetTypeId, int targetMaxWidth)"
]
}
]
}
] |
derby-DERBY-2962-96c3cce8
|
DERBY-2962 Change functional tests to use casts for System table queries to avoid conversion errors when run with TERRITORY_BASED collation
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@558801 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2966-274e189e
|
DERBY-2966
We need to have the context set up inside the moveToInsertRow code because that code tries to do DTD.getNull, and getNull needs to find the RuleBasedCollator, which is found by relying on the context. Setting up the context fixed the problem.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@559125 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
"\t\t\t\t//we need to set the context because the getNull call below ",
"\t\t\t\t//(if dealing with territory based database) might need to ",
"\t\t\t\t//look up the current context to get the correct ",
"\t\t\t\t//RuleBasedCollator. This RuleBasedCollator will be used to",
"\t\t\t\t//construct a CollatorSQL... type rather than SQL...Char type ",
"\t\t\t\t//when dealing with character string datatypes.",
"\t\t\t\tsetupContextStack();"
],
"header": "@@ -3924,6 +3924,13 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": []
}
]
}
] |
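The DERBY-2967 entry below makes LIKE compare characters via their collation elements when a territory-based RuleBasedCollator is in play. The iterator loop it adds in Like.checkEquality can be sketched self-contained with java.text's real CollationElementIterator API (the wrapper class and method names here are illustrative, not Derby's):

```java
import java.text.CollationElementIterator;
import java.text.Collator;
import java.text.RuleBasedCollator;
import java.util.Locale;

// Illustrative sketch of the DERBY-2967 equality test: two characters
// match under a territory-based collation when their full sequences of
// collation elements match, ending together at NULLORDER.
public class CollationEqualitySketch {
    static boolean sameCollationElements(RuleBasedCollator collator,
                                         char a, char b) {
        CollationElementIterator ia =
            collator.getCollationElementIterator(String.valueOf(a));
        CollationElementIterator ib =
            collator.getCollationElementIterator(String.valueOf(b));
        int ea = ia.next();
        int eb = ib.next();
        while (ea == eb) {
            if (ea == CollationElementIterator.NULLORDER) {
                return true;    // both element sequences exhausted together
            }
            ea = ia.next();
            eb = ib.next();
        }
        return false;           // first mismatching collation element
    }

    public static void main(String[] args) {
        RuleBasedCollator c =
            (RuleBasedCollator) Collator.getInstance(Locale.US);
        System.out.println(sameCollationElements(c, 'a', 'a'));
        System.out.println(sameCollationElements(c, 'a', 'b'));
    }
}
```

With a null collator (UCS_BASIC), the patch falls back to plain `char` equality, which is why the new parameter is simply passed as null by the non-territory callers.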
derby-DERBY-2967-f148f1f5
|
DERBY-2967
Committing the patch (DERBY2967_Oct11_07_diff.txt) attached to DERBY-2967. The implementation of LIKE for UCS_BASIC and territory-based character string types does not differ much (based on the SQL standard, as explained in comments to
this Jira entry). I have been able to change the existing code for LIKE (in Like.java) for UCS_BASIC character strings to support territory-based character strings. The existing method in Like.java now gets a new parameter, a RuleBasedCollator. For UCS_BASIC strings, this will be passed as NULL. We check if the RuleBasedCollator is NULL and, if so, we do a simple one-character equality check for non-metacharacters in the pattern and corresponding characters in the value string. But if the RuleBasedCollator is not NULL, then we use it to get collation element(s) one character at a time for non-metacharacters in the pattern and corresponding characters in the value string, and do the collation element(s) comparison to establish equality.
In addition to the above-mentioned change in Like.java, I have changed the callers of the method in Like.java to pass the correct value for the RuleBasedCollator.
Additionally, I have added a test to CollationTest.java for the code changes. Existing like tests in CollationTest2.java were very useful in the testing of my changes. And lastly, I changed a few of the existing tests to use different character string values so that when we run the full collation tests, we do not see some of the test failures which are genuine because of the nature of their data.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@585261 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/Like.java",
"hunks": [
{
"added": [
"\t ",
"\t This method gets called for UCS_BASIC and territory based character",
"\t string types to look for a pattern in a value string. It also deals",
"\t with escape character if user has provided one.",
"\t "
],
"header": "@@ -53,6 +53,11 @@ public class Like {",
"removed": []
},
{
"added": [
"\t\t@param collator null if we are dealing with UCS_BASIC ",
"\t\t character string types. If not null, then we use it to ",
"\t\t get collation elements for characters in val and ",
"\t\t non-metacharacters in pat to do the comparison."
],
"header": "@@ -60,6 +65,10 @@ public class Like {",
"removed": []
},
{
"added": [
"\t\tint \tescapeLength,",
"\t\tRuleBasedCollator collator",
"\t\treturn like(val, 0, valLength, pat, 0, patLength, escape, ",
"\t\t\t\tescapeLength, collator);"
],
"header": "@@ -72,10 +81,12 @@ public class Like {",
"removed": [
"\t\tint \tescapeLength",
"\t\treturn like(val, 0, valLength, pat, 0, patLength, escape, escapeLength);"
]
},
{
"added": [
"\t/* For character string types with UCS_BASIC and territory based",
"\t * collation. There is a different method for non-national chars */"
],
"header": "@@ -107,7 +118,8 @@ public class Like {",
"removed": [
"\t/* non-national chars */"
]
},
{
"added": [
"\t\tint \tescapeLength,",
"\t\tRuleBasedCollator collator"
],
"header": "@@ -117,7 +129,8 @@ public class Like {",
"removed": [
"\t\tint \tescapeLength"
]
},
{
"added": [
"\t\t\t\tif (checkEquality(val, vLoc, pat, pLoc, collator)) {",
"\t\t\t\t\t",
"\t\t\t\t} else"
],
"header": "@@ -147,18 +160,14 @@ public class Like {",
"removed": [
"\t\t\t\tif (val[vLoc] == pat[pLoc]) ",
"\t\t\t\t{",
"\t",
"\t\t\t\t}",
"\t\t\t\telse ",
"\t\t\t\t{",
"\t\t\t\t}"
]
},
{
"added": [
"\t\t\t\tif (checkEquality(val, vLoc, pat, pLoc, collator)) {"
],
"header": "@@ -174,7 +183,7 @@ public class Like {",
"removed": [
"\t\t\t\tif (val[vLoc] == pat[pLoc]) {"
]
},
{
"added": [
"\t\t\t\t\tBoolean restResult = Like.like(val, vLoc+n, vLoc+n+i, pat,",
"\t\t\t\t\t\t\tpLoc+1, pEnd, escape, escapeLength, collator);"
],
"header": "@@ -233,7 +242,8 @@ public class Like {",
"removed": [
"\t\t\t\t\tBoolean restResult = Like.like(val,vLoc+n,vLoc+n+i,pat,pLoc+1,pEnd,escape,escapeLength);"
]
},
{
"added": [
"\t/**",
"\t * Make sure that the character in val matches the character in pat.",
"\t * If we are dealing with UCS_BASIC character string (ie collator is null)",
"\t * then we can just do simple character equality check. But if we are",
"\t * dealing with territory based character string type, then we need to ",
"\t * convert the character in val and pat into it's collation element(s)",
"\t * and then do collation element equality comparison.",
"\t * ",
"\t * @param val value to compare.",
"\t * @param vLoc character position in val.",
"\t * @param pat pattern to look for in val.",
"\t * @param pLoc character position in pat.",
"\t * @param collator null if we are dealing with UCS_BASIC character string",
"\t * types. If not null, then we use it to get collation elements for ",
"\t * character in val and pat to do the equality comparison.",
"\t * @return",
"\t */",
"\tprivate static boolean checkEquality(char[] val, int vLoc,",
"\t\t\tchar[] pat, int pLoc, RuleBasedCollator collator) {",
"\t\tCollationElementIterator patternIterator;",
"\t\tint curCollationElementInPattern;",
"\t\tCollationElementIterator valueIterator;",
"\t\tint curCollationElementInValue;",
"",
"\t\tif (collator == null) {//dealing with UCS_BASIC character string",
"\t\t\tif (val[vLoc] == pat[pLoc]) ",
"\t\t\t\treturn true;",
"\t\t\telse ",
"\t\t\t\treturn false;",
"\t\t} else {//dealing with territory based character string",
"\t\t\tpatternIterator = collator.getCollationElementIterator(",
"\t\t\t\t\tnew String(pat, pLoc, 1));",
"\t\t\tvalueIterator = collator.getCollationElementIterator(",
"\t\t\t\t\tnew String(val, vLoc, 1));",
"\t\t\tcurCollationElementInPattern = patternIterator.next(); ",
"\t\t\tcurCollationElementInValue = valueIterator.next();",
"\t\t\twhile (curCollationElementInPattern == curCollationElementInValue)",
"\t\t\t{",
"\t\t\t\tif (curCollationElementInPattern == CollationElementIterator.NULLORDER)",
"\t\t\t\t\tbreak;",
"\t\t\t\tcurCollationElementInPattern = patternIterator.next(); ",
"\t\t\t\tcurCollationElementInValue = valueIterator.next(); ",
"\t\t\t}",
"\t\t\t//If the current collation element for the character in pattern ",
"\t\t\t//and value do not match, then we have found a mismatach and it",
"\t\t\t//is time to return FALSE from this method.",
"\t\t\tif (curCollationElementInPattern != curCollationElementInValue)",
"\t\t\t\treturn false;",
"\t\t\telse",
"\t\t\t\treturn true;",
"\t\t}",
"\t\t",
"\t}",
""
],
"header": "@@ -254,6 +264,60 @@ public class Like {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLChar.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\t pattern.getLength(),",
"\t\t\t\t\t\t\t\t null);",
"\t\t\t\t\tgetIntLength(), ",
"\t\t\t\t\tpatternSQLChar.getIntArray(),",
"\t\t\t\t\tpatternSQLChar.getIntLength(),",
"\t\t\t\t\tgetLocaleFinder().getCollator());"
],
"header": "@@ -1691,16 +1691,17 @@ readingLoop:",
"removed": [
"\t\t\t\t\t\t\t\t pattern.getLength());",
"\t\t\t\t\t\t\t\t getIntLength(),",
" \t\t \t\t\t\t\t patternSQLChar.getIntArray(),",
"\t\t\t\t\t\t\t\t patternSQLChar.getIntLength(),",
"\t\t\t\t\t\t\t\t getLocaleFinder().getCollator());"
]
},
{
"added": [
"\t\t\t\t\t\t\t\t escapeLength,",
"\t\t\t\t\t\t\t\t null);"
],
"header": "@@ -1774,7 +1775,8 @@ readingLoop:",
"removed": [
"\t\t\t\t\t\t\t\t escapeLength);"
]
},
{
"added": [
"\t\t\tlikeResult = Like.like(getIntArray(),",
"\t\t\t\t\tgetIntLength(), ",
"\t\t\t\t\tpatternSQLChar.getIntArray(),",
"\t\t\t\t\tpatternSQLChar.getIntLength(),",
"\t\t\t\t\tescapeIntArray,",
"\t\t\t\t\tescapeLength,",
"\t\t\t\t\tgetLocaleFinder().getCollator());"
],
"header": "@@ -1788,13 +1790,13 @@ readingLoop:",
"removed": [
"\t\t\tlikeResult = Like.like(getIntArray(), ",
"\t\t\t\t\t\t\t\t getIntLength(),",
" \t\t \t\t\t\t\t patternSQLChar.getIntArray(),",
"\t\t\t\t\t\t\t\t patternSQLChar.getIntLength(),",
"\t\t\t\t\t\t\t\t escapeIntArray,",
"\t\t\t\t\t\t\t\t escapeLength,",
"\t\t\t\t\t\t\t\t getLocaleFinder().getCollator());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/WorkHorseForCollatorDatatypes.java",
"hunks": [
{
"added": [
"\t\tlikeResult = Like.like(stringData.getCharArray(), ",
"\t\t\t\tstringData.getLength(), ",
"\t\t\t\t((SQLChar)pattern).getCharArray(), ",
"\t\t\t\tpattern.getLength(), ",
"\t\t\t\tnull, ",
"\t\t\t\t0,",
"\t\treturn SQLBoolean.truthValue(stringData ,"
],
"header": "@@ -125,15 +125,15 @@ final class WorkHorseForCollatorDatatypes",
"removed": [
"\t\tCollationElementsInterface patternToCheck = (CollationElementsInterface) pattern;",
"\t\tlikeResult = Like.like(",
"\t\t\t\tgetCollationElementsForString(),",
"\t\t\t\tgetCountOfCollationElements(),",
"\t\t\t\tpatternToCheck.getCollationElementsForString(),",
"\t\t\t\tpatternToCheck.getCountOfCollationElements(),",
"\t\treturn SQLBoolean.truthValue(stringData,"
]
},
{
"added": [],
"header": "@@ -169,7 +169,6 @@ final class WorkHorseForCollatorDatatypes",
"removed": [
"\t\tCollationElementsInterface patternToCheck = (CollationElementsInterface) pattern;"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/unitTests/lang/T_Like.java",
"hunks": [
{
"added": [
"\t\texpect(\"null like null escape null\", Like.like(caNull, 0, caNull, 0, caNull, 0, null), null);",
"\t\texpect(\"null like 'hello' escape null\", Like.like(caNull, 0, caHello, caHello.length, caNull, 0, null), null);",
"\t\texpect(\"'hello' like null escape null\", Like.like(caHello, caHello.length, caNull, 0, caNull, 0, null), null);",
"\t\texpect(\"null like null escape '\\\\'\", Like.like(caNull, 0, caNull, 0, \"\\\\\".toCharArray(), \"\\\\\".toCharArray().length, null), null);",
"\t\texpect(\"null like null escape 'hello'\", Like.like(caNull, 0, caNull, 0, caHello, caHello.length, null), null);",
"\t\texpect(\"null like 'hello\\\\' escape '\\\\'\", Like.like(caNull, 0, \"hello\\\\\".toCharArray(), \"hello\\\\\".toCharArray().length, \"\\\\\".toCharArray(), \"\\\\\".toCharArray().length, null), null);"
],
"header": "@@ -107,15 +107,15 @@ public class T_Like extends T_Generic",
"removed": [
"\t\texpect(\"null like null escape null\", Like.like(caNull, 0, caNull, 0, caNull, 0), null);",
"\t\texpect(\"null like 'hello' escape null\", Like.like(caNull, 0, caHello, caHello.length, caNull, 0), null);",
"\t\texpect(\"'hello' like null escape null\", Like.like(caHello, caHello.length, caNull, 0, caNull, 0), null);",
"\t\texpect(\"null like null escape '\\\\'\", Like.like(caNull, 0, caNull, 0, \"\\\\\".toCharArray(), \"\\\\\".toCharArray().length), null);",
"\t\texpect(\"null like null escape 'hello'\", Like.like(caNull, 0, caNull, 0, caHello, caHello.length), null);",
"\t\texpect(\"null like 'hello\\\\' escape '\\\\'\", Like.like(caNull, 0, \"hello\\\\\".toCharArray(), \"hello\\\\\".toCharArray().length, \"\\\\\".toCharArray(), \"\\\\\".toCharArray().length), null);"
]
},
{
"added": [
"\t\texpect(\"'hello' like 'hello' escape null\", Like.like(caHello, caHello.length, caHello, caHello.length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'h_llo' escape null\", Like.like(caHello, caHello.length, \"h_llo\".toCharArray(), \"h_llo\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'hello' like '_ello' escape null\", Like.like(caHello, caHello.length, \"_ello\".toCharArray(), \"_ello\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'hell_' escape null\", Like.like(caHello, caHello.length, \"hell_\".toCharArray(), \"hell_\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'hello' like '_____' escape null\", Like.like(caHello, caHello.length, \"_____\".toCharArray(), \"_____\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'h___e' escape null\", Like.like(caHello, caHello.length, \"h___o\".toCharArray(), \"h___o\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like 'h' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like '_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_\".toCharArray(), \"_\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like '%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like '_%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_%\".toCharArray(), \"_%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like '%_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%_\".toCharArray(), \"%_\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'h' like '%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'' like '%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'' like '%%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%%\".toCharArray(), \"%%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);",
"\t\texpect(\"'' like '%%%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%%%\".toCharArray(), \"%%%\".toCharArray().length, caNull, 0, null), Boolean.TRUE);"
],
"header": "@@ -124,21 +124,21 @@ public class T_Like extends T_Generic",
"removed": [
"\t\texpect(\"'hello' like 'hello' escape null\", Like.like(caHello, caHello.length, caHello, caHello.length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'h_llo' escape null\", Like.like(caHello, caHello.length, \"h_llo\".toCharArray(), \"h_llo\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'hello' like '_ello' escape null\", Like.like(caHello, caHello.length, \"_ello\".toCharArray(), \"_ello\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'hell_' escape null\", Like.like(caHello, caHello.length, \"hell_\".toCharArray(), \"hell_\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'hello' like '_____' escape null\", Like.like(caHello, caHello.length, \"_____\".toCharArray(), \"_____\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'hello' like 'h___e' escape null\", Like.like(caHello, caHello.length, \"h___o\".toCharArray(), \"h___o\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like 'h' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like '_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_\".toCharArray(), \"_\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like '%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like '_%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_%\".toCharArray(), \"_%\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like '%_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%_\".toCharArray(), \"%_\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'h' like '%' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'' like '%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%\".toCharArray(), \"%\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'' like '%%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%%\".toCharArray(), \"%%\".toCharArray().length, caNull, 0), Boolean.TRUE);",
"\t\texpect(\"'' like '%%%' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"%%%\".toCharArray(), \"%%%\".toCharArray().length, caNull, 0), Boolean.TRUE);"
]
},
{
"added": [
"\t\texpect(\"'hello' like 'hello ' escape null\", Like.like(caHello, caHello.length, \"hello \".toCharArray(), \"hello \".toCharArray().length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'hello ' like 'hello' escape null\", Like.like(\"hello \".toCharArray(), \"hello \".toCharArray().length, caHello, caHello.length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'hellox' escape null\", Like.like(caHello, caHello.length, \"hellox\".toCharArray(), \"hellox\".toCharArray().length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'hellox' like 'hello' escape null\", Like.like(\"hellox\".toCharArray(), \"hellox\".toCharArray().length, caHello, caHello.length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'xhellox' like 'hello' escape null\", Like.like(\"xhellox\".toCharArray(), \"xhellox\".toCharArray().length, caHello, caHello.length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'xhellox' escape null\", Like.like(caHello, caHello.length, \"xhellox\".toCharArray(), \"xhellox\".toCharArray().length, null, 0, null), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'h___' escape null\", Like.like(caHello, caHello.length, \"h___\".toCharArray(), \"h___\".toCharArray().length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'h' like '_%_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_%_\".toCharArray(), \"_%_\".toCharArray().length, caNull, 0, null), Boolean.FALSE);",
"\t\texpect(\"'' like '_' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"_\".toCharArray(), \"_\".toCharArray().length, caNull, 0, null), Boolean.FALSE);"
],
"header": "@@ -146,15 +146,15 @@ public class T_Like extends T_Generic",
"removed": [
"\t\texpect(\"'hello' like 'hello ' escape null\", Like.like(caHello, caHello.length, \"hello \".toCharArray(), \"hello \".toCharArray().length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'hello ' like 'hello' escape null\", Like.like(\"hello \".toCharArray(), \"hello \".toCharArray().length, caHello, caHello.length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'hellox' escape null\", Like.like(caHello, caHello.length, \"hellox\".toCharArray(), \"hellox\".toCharArray().length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'hellox' like 'hello' escape null\", Like.like(\"hellox\".toCharArray(), \"hellox\".toCharArray().length, caHello, caHello.length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'xhellox' like 'hello' escape null\", Like.like(\"xhellox\".toCharArray(), \"xhellox\".toCharArray().length, caHello, caHello.length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'xhellox' escape null\", Like.like(caHello, caHello.length, \"xhellox\".toCharArray(), \"xhellox\".toCharArray().length, null, 0), Boolean.FALSE);",
"\t\texpect(\"'hello' like 'h___' escape null\", Like.like(caHello, caHello.length, \"h___\".toCharArray(), \"h___\".toCharArray().length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'h' like '_%_' escape null\", Like.like(\"h\".toCharArray(), \"h\".toCharArray().length, \"_%_\".toCharArray(), \"_%_\".toCharArray().length, caNull, 0), Boolean.FALSE);",
"\t\texpect(\"'' like '_' escape null\", Like.like(\"\".toCharArray(), \"\".toCharArray().length, \"_\".toCharArray(), \"_\".toCharArray().length, caNull, 0), Boolean.FALSE);"
]
},
{
"added": [
"\t\t\tt=Like.like(caHello, caHello.length, caHello, caHello.length, caHello, caHello.length, null);"
],
"header": "@@ -166,7 +166,7 @@ public class T_Like extends T_Generic",
"removed": [
"\t\t\tt=Like.like(caHello, caHello.length, caHello, caHello.length, caHello, caHello.length);"
]
},
{
"added": [
"\t\t\tt=Like.like(caHello, caHello.length, \"hhh\".toCharArray(), \"hhh\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length, null);"
],
"header": "@@ -181,7 +181,7 @@ public class T_Like extends T_Generic",
"removed": [
"\t\t\tt=Like.like(caHello, caHello.length, \"hhh\".toCharArray(), \"hhh\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length);"
]
},
{
"added": [
"\t\t\tt=Like.like(caHello, caHello.length, \"he%\".toCharArray(), \"he%\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length, null);"
],
"header": "@@ -196,7 +196,7 @@ public class T_Like extends T_Generic",
"removed": [
"\t\t\tt=Like.like(caHello, caHello.length, \"he%\".toCharArray(), \"he%\".toCharArray().length, \"h\".toCharArray(), \"h\".toCharArray().length);"
]
}
]
}
] |
derby-DERBY-2972-72abc725
|
DERBY-2972 Update or select with function in the where clause causes with TERRITORY_BASED collation fails with ERROR 42818: Comparisons between 'VARCHAR' and 'VARCHAR' are not supported.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@574590 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/JavaToSQLValueNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.StringDataValue;"
],
"header": "@@ -23,6 +23,7 @@ package\torg.apache.derby.impl.sql.compile;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/JavaValueNode.java",
"hunks": [
{
"added": [
" // * Collation type of schema where method is defined. ",
"\tprivate int collationType;",
""
],
"header": "@@ -74,6 +74,9 @@ abstract class JavaValueNode extends QueryTreeNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/NewInvocationNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.DataTypeDescriptor;",
"import org.apache.derby.iapi.types.StringDataValue;"
],
"header": "@@ -37,6 +37,8 @@ import org.apache.derby.iapi.services.i18n.MessageService;",
"removed": []
},
{
"added": [
"import org.apache.derby.catalog.TypeDescriptor;"
],
"header": "@@ -45,6 +47,7 @@ import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/StaticMethodCallNode.java",
"hunks": [
{
"added": [
"\t\t\t\t\t// DERBY-2972 Match the collation of the RoutineAliasInfo\t\t",
"\t\t\t\t\treturnValueDtd.setCollationType(returnType.getCollationType());",
" returnValueDtd.setCollationDerivation(StringDataValue.COLLATION_DERIVATION_IMPLICIT);"
],
"header": "@@ -278,8 +278,9 @@ public class StaticMethodCallNode extends MethodCallNode",
"removed": [
"",
""
]
}
]
}
] |
derby-DERBY-2973-93b320dc
|
DERBY-2973
ALTER TABLE MODIFY COLUMN should maintain the collation info when the column being altered is character string type. The changes for this went into as a new method in ModifyColumnNode which gets called during the bind phase.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@560289 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ModifyColumnNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.StringDataValue;"
],
"header": "@@ -37,6 +37,7 @@ import org.apache.derby.iapi.sql.dictionary.ConstraintDescriptor;",
"removed": []
}
]
}
] |
derby-DERBY-2979-cd2b901b
|
DERBY-2979; fix IllegalArgumentException in system test mailjdbc.
Patch contributed by Manjula Kutty
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@585297 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/system/mailjdbc/utils/DbTasks.java",
"hunks": [
{
"added": [
"\t\t\tint attach_id = 0;",
"\t\t\tif((count - 1) <= 0)",
"\t\t\t\tattach_id = 0;",
"\t\t\telse",
"\t\t\t\t attach_id = Rn.nextInt(count - 1);"
],
"header": "@@ -193,7 +193,11 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tint attach_id = Rn.nextInt(count - 1);"
]
},
{
"added": [
"\t\tint id = 0;",
"\t\tint for_id = 0;"
],
"header": "@@ -261,6 +265,8 @@ public class DbTasks extends Thread {",
"removed": []
},
{
"added": [
"\t\t\tif((id_count -1) <= 0 )",
"\t\t\t\tid = id_count;",
"\t\t\telse",
"\t\t\t\tid = Rn.nextInt(id_count - 1);",
"\t\t\t\tif((id_count -1) <= 0 )",
"\t\t\t\t\tfor_id = id_count;",
"\t\t\t\telse",
"\t\t\t\t\tfor_id = Rn.nextInt(id_count - 1);"
],
"header": "@@ -272,12 +278,18 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\tint id = Rn.nextInt(id_count - 1);",
"\t\t\t\tint for_id = Rn.nextInt(id_count - 1);"
]
},
{
"added": [
"\t\t\t\tint message_id = 0;",
"\t\t\t\tif (count == 0)",
"\t\t\t\t\tmessage_id = 0;",
"\t\t\t\telse",
"\t\t\t\t message_id = Rn.nextInt(count - 1);"
],
"header": "@@ -375,9 +387,13 @@ public class DbTasks extends Thread {",
"removed": [
"\t\t\t\tint message_id = Rn.nextInt(count - 1);"
]
},
{
"added": [],
"header": "@@ -423,7 +439,6 @@ public class DbTasks extends Thread {",
"removed": [
"\t\tSystem.out.println(\"num: \" + num);"
]
}
]
}
] |
derby-DERBY-298-2fb95565
|
fix for DERBY-298. committing change for Øystein Grøvlen
The attached patch fixes the bug by setting the logEnd after recovery to the beginning of the new empty log file instead of the end of the previous file.
The patch contains changes to the following files:
M java/engine/org/apache/derby/impl/store/raw/log/FileLogger.java
- At the end of the redo scan, if the scan stopped in a file succeeding the file of the last log record, update logEnd to this position.
- Change assert to allow logEnd to be in a newer file than that of the last log record.
M java/engine/org/apache/derby/impl/store/raw/log/Scan.java
- Introduced new variable newFileStart which will only have a valid LogInstant value when the scan is at the header of the file.
- When a new file is entered, set newFileStart to the first possible LogInstant of this file (end of header).
- When a log record is encountered, set newFileStart to INVALID_LOG_INSTANT.
- Changed getLogRecordEnd() to return newFileStart if that is valid (i.e., scan is at the start of a file)
- Removed comment about not starting to write to the new empty log file, since that is not true anymore.
A java/testing/org/apache/derbyTesting/functionTests/tests/store/RecoveryAfterBackup_app.properties
- Test properties
M java/testing/org/apache/derbyTesting/functionTests/tests/store/copyfiles.ant
- Added new property files
A java/testing/org/apache/derbyTesting/functionTests/tests/store/RecoveryAfterBackupSetup_app.properties
- Test properties.
- useextdirs=true needed so the backup is placed somewhere the next test can find it.
A java/testing/org/apache/derbyTesting/functionTests/tests/store/RecoveryAfterBackup.java
- Test that is supposed to be run after RecoveryAfterBackupSetup.java.
- Does recovery, updates the database, shutdowns the database, and does roll-forward restore.
- Checks that updates made after recovery is reflected in the database after roll-forward restore.
A java/testing/org/apache/derbyTesting/functionTests/tests/store/RecoveryAfterBackupSetup.java
- Test that does the preparation for the RecoveryAfterBackup test.
- Inserts a few records, makes a backup, and stops without shutting down.
M java/testing/org/apache/derbyTesting/functionTests/harness/RunTest.java
- For tests where the database is not deleted at the end of the test, do not delete the external directories either.
- This is necessary to be able to access the backup in succeeding tests.
A java/testing/org/apache/derbyTesting/functionTests/master/RecoveryAfterBackupSetup.out
- Test output
A java/testing/org/apache/derbyTesting/functionTests/master/RecoveryAfterBackup.out
- Test output
MM java/testing/org/apache/derbyTesting/functionTests/suites/storerecovery.runall
- Added tests to storerecovery suite.
- Changed property eol-style.
The recently attached patch (derby-298a.diff) addresses Suresh's
review comments. The only major change from the previous patch is in
java/engine/org/apache/derby/impl/store/raw/log/Scan.java. The changes
to this file compared to the current head of trunk are:
- When a new log file is entered, check that the header of this
file refers to the end of the last log record of the previous log
file. If not, stop the scan.
- If the header was consistent, update knowGoodLogEnd to the first
possible LogInstant of this file (end of header).
- close() no longer reset knownGoodLogEnd since it is needed by
FileLogger after the scan is closed.
- Changed comment of getLogRecordEnd() to reflect that it can be
used after the scan is closed, and that it at that time may
return the start of an empty log file.
- Removed comment about not starting to write to the new empty log
file, since that is not true anymore.
In addition, the property files for the tests have been updated so
they are run without the security manager.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@367352 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/FileLogger.java",
"hunks": [
{
"added": [
"\t\t\t} // while redoScan.getNextRecord() != null",
"",
" // If the scan ended in an empty file, update logEnd to reflect that",
" // in order to avoid to continue logging to an older file",
" long end = redoScan.getLogRecordEnd(); ",
" if (end != LogCounter.INVALID_LOG_INSTANT",
" && (LogCounter.getLogFileNumber(logEnd) ",
" < LogCounter.getLogFileNumber(end))) {",
" logEnd = end;",
" }"
],
"header": "@@ -1508,7 +1508,16 @@ public class FileLogger implements Logger {",
"removed": [
"\t\t\t}"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/Scan.java",
"hunks": [
{
"added": [
"\t\t\t\t// scan is position just past the log header",
"\t\t\t\trecordStartPosition = scan.getFilePointer();",
"",
" // Verify that the header of the new log file refers",
" // to the end of the log record of the previous file",
" // (Rest of header has been verified by getLogFileAtBeginning)",
"\t\t\t\tscan.seek(LogToFile",
" .LOG_FILE_HEADER_PREVIOUS_LOG_INSTANT_OFFSET);",
" long previousLogInstant = scan.readLong();",
" if (previousLogInstant != knownGoodLogEnd) {",
" // If there is a mismatch, something is wrong and",
" // we return null to stop the scan. The same",
" // behavior occurs when getLogFileAtBeginning",
" // detects an error in the other fields of the header.",
" if (SanityManager.DEBUG) {",
" if (SanityManager.DEBUG_ON(LogToFile.DBG_FLAG)) {",
" SanityManager.DEBUG(LogToFile.DBG_FLAG, ",
" \"log file \" ",
" + currentLogFileNumber ",
" + \": previous log record: \"",
" + previousLogInstant",
" + \" known previous log record: \"",
" + knownGoodLogEnd);",
" }",
" }",
" return null;",
"\t\t\t\t}",
"",
"",
"\t\t\t\tscan.seek(recordStartPosition);",
""
],
"header": "@@ -706,6 +706,37 @@ public class Scan implements StreamLogScan {",
"removed": []
},
{
"added": [
" // Advance knownGoodLogEnd to make sure that if this",
" // log file is the last log file and empty, logging",
" // continues in this file, not the old file.",
" knownGoodLogEnd = LogCounter.makeLogInstantAsLong",
" (currentLogFileNumber, recordStartPosition);"
],
"header": "@@ -716,8 +747,11 @@ public class Scan implements StreamLogScan {",
"removed": [
"\t\t\t\t// scan is position just past the log header",
"\t\t\t\trecordStartPosition = scan.getFilePointer();"
]
},
{
"added": [],
"header": "@@ -734,14 +768,6 @@ public class Scan implements StreamLogScan {",
"removed": [
"\t\t\t\t\t// ideally, we would want to start writing on this new",
"\t\t\t\t\t// empty log file, but the scan is closed and there is",
"\t\t\t\t\t// no way to tell the difference between an empty log",
"\t\t\t\t\t// file and a log file which is not there. We will be",
"\t\t\t\t\t// writing to the end of the previous log file instead",
"\t\t\t\t\t// but when we next switch the log, the empty log file",
"\t\t\t\t\t// will be written over.",
""
]
},
{
"added": [
""
],
"header": "@@ -755,7 +781,7 @@ public class Scan implements StreamLogScan {",
"removed": [
"\t\t\t"
]
},
{
"added": [
"",
""
],
"header": "@@ -945,8 +971,8 @@ public class Scan implements StreamLogScan {",
"removed": [
"\t\t\t",
"\t\t\t"
]
},
{
"added": [
"\t\tLogFile in the form of a log instant.",
" After the scan has been closed, the end of the last log record will be",
" returned except when the scan ended in an empty log file. In that",
" case, the start of this empty log file will be returned. (This is",
" done to make sure new log records are inserted into the newest log",
" file.)"
],
"header": "@@ -1173,7 +1199,12 @@ public class Scan implements StreamLogScan {",
"removed": [
"\t\tLogFile in the form of a log instant"
]
}
]
}
] |
derby-DERBY-2983-768e56f2
|
DERBY-2983: Add missing FUNCTION_TYPE column to the ResultSet returned by DatabaseMetaData.getFunctions().
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@563172 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2986-4c2072eb
|
DERBY-2986: Add regression test case to lang/CaseExpressionTest.java.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@566235 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2986-5e32892e
|
DERBY-2986: Fix performance regression for queries involving CASE statements
that have multiple WHEN clauses.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@566217 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ConditionalNode.java",
"hunks": [
{
"added": [
"\t\tFromList fromList, SubqueryList subqueryList, Vector aggregateVector)",
"\t\tthrows StandardException"
],
"header": "@@ -206,7 +206,8 @@ public class ConditionalNode extends ValueNode",
"removed": [
"\t\tFromList fromList) throws StandardException"
]
},
{
"added": [
"\t\t\t\tfromList, subqueryList, aggregateVector);",
"\t\t\t\tfromList, subqueryList, aggregateVector);"
],
"header": "@@ -214,11 +215,11 @@ public class ConditionalNode extends ValueNode",
"removed": [
"\t\t\t\tfromList, new SubqueryList(), new Vector());",
"\t\t\t\tfromList, new SubqueryList(), new Vector());"
]
},
{
"added": [
"\t\t\t\tfindType(((ConditionalNode)thenNode).thenElseList, fromList,",
"\t\t\t\t\tsubqueryList, aggregateVector);"
],
"header": "@@ -257,7 +258,8 @@ public class ConditionalNode extends ValueNode",
"removed": [
"\t\t\t\tfindType(((ConditionalNode)thenNode).thenElseList, fromList);"
]
},
{
"added": [
"\t\t\t\tfindType(((ConditionalNode)elseNode).thenElseList, fromList,",
"\t\t\t\t\tsubqueryList, aggregateVector);"
],
"header": "@@ -266,7 +268,8 @@ public class ConditionalNode extends ValueNode",
"removed": [
"\t\t\t\tfindType(((ConditionalNode)elseNode).thenElseList, fromList);"
]
},
{
"added": [
"",
"\t\t\tthenElseList.bindExpression(fromList,",
"\t\t\t\tsubqueryList,",
"\t\t\t\taggregateVector);",
"",
"\t\t\t/* Following call to \"findType()\" will indirectly bind the",
"\t\t\t * expressions in the thenElseList, so no need to call",
"\t\t\t * \"thenElseList.bindExpression(...)\" after we do this.",
"\t\t\t * DERBY-2986.",
"\t\t\t */",
"\t\t\trecastNullNodes(thenElseList,",
"\t\t\t\tfindType(thenElseList, fromList, subqueryList, aggregateVector));"
],
"header": "@@ -386,15 +389,22 @@ public class ConditionalNode extends ValueNode",
"removed": [
"\t\t\trecastNullNodes(thenElseList, findType(thenElseList, fromList));",
"\t\tthenElseList.bindExpression(fromList,",
"\t\t\tsubqueryList,",
"\t\t\taggregateVector);",
""
]
}
]
}
] |
derby-DERBY-2991-152b9a7b
|
DERBY-2991: Index split deadlock
Added comment to IndexSplitDeadlockTest describing missing test cases
for BTreeScan.reposition().
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@747371 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2991-3811800e
|
DERBY-2991: Use setAutoCommit() helper method in IndexSplitDeadlockTest
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@746965 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-2991-8654374d
|
DERBY-4177: Javadoc for BTreeLockingPolicy should not mention "scan lock"
The fix for DERBY-2991 removed the concept of a "scan lock" and
RECORD_ID_PROTECTION_HANDLE, so the javadoc for the BTreeLockingPolicy
class hierarchy should not mention them anymore.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@775937 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeLockingPolicy.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.impl.store.access.btree;",
"removed": [
"import org.apache.derby.iapi.store.raw.RecordHandle;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2IRowLocking3.java",
"hunks": [
{
"added": [
" // If we need to release the latches while searching left,",
" // a new key may have appeared in the range that we've already",
" // searched, or the tree may have been rearranged, so the"
],
"header": "@@ -818,14 +818,9 @@ class B2IRowLocking3 implements BTreeLockingPolicy",
"removed": [
" // RESOLVE RLL (mikem) - do I need to do the ",
" // RECORD_ID_PROTECTION_HANDLE lock.",
" // First guarantee that record id's will not move off this",
" // current page while searching for previous key, by getting",
" // the RECORD_ID_PROTECTION_HANDLE lock on the current page.",
" // Since we have a latch on the cur",
"",
" // RESOLVE RLL (mikem) - NO RECORD_ID PROTECTION IN EFFECT."
]
}
]
}
] |