Columns: id (string, length 22-25); commit_message (string, length 137-6.96k); diffs (list, length 0-63)
derby-DERBY-3812-2301e09b
DERBY-3812 - fix test regression in NetworkServerMBeanTest.testAttributeDrdaStreamOutBufferSize caused by checkin of converted test OutBufferStreamTest (DERBY-3796). Patch contributed by Erlend Birkenes git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@682531 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3818-f6967015
DERBY-3818 (partial): Whitespace and formatting changes only. Patch file: n/a git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@707103 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3819-05ef8bdc
DERBY-3819: 'Expected Table Scan ResultSet for T3' in test_predicatePushdown(....PredicatePushdownTest) causing a number of asserts to be skipped when using a 64-bit platform. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@807733 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3823-7b429a31
DERBY-3823 NullPointerException in stress.multi test Adding a test case showing that in case of a network server, an open result set's metadata can get changed underneath it but the change is not reflected in the metadata. The test creates a table with one of its columns as varchar(5). It inserts 1000 rows and then opens a result set on that table with the varchar column as one of the selected columns. The test verifies that the result set's metadata at this point shows the length of the column as 5. Next, while the result set is still open, the test does an ALTER TABLE to increase the varchar column's length to 8. In embedded mode, this fails because of the open result set. In network server mode, because of prefetching of rows, the ALTER TABLE is allowed, but when the test gets the result set's metadata again and checks the length of the varchar column, it still shows the length to be 5 rather than 8. There are a couple of other JIRAs related to network server prefetching, namely DERBY-3839 and DERBY-4373. Once DERBY-3823 is fixed, we should see the change reflected in the result set's metadata, and a fix for DERBY-3823 will cause the test added here to fail. Right now, the new test accepts the incorrect metadata length obtained through the result set's metadata after ALTER TABLE has been performed in network server mode. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1182570 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3823-e8fb6d60
DERBY-3823 NullPointerException in stress.multi test This patch fixes a race condition in EmbedPreparedStatement#getMetaData: if we are trying to retrieve the metadata for a prepared statement while it is being recompiled there is a time window during which the activation class is null. The existing code could therefore cause an NPE. The new code plugs the race condition. This NPE led to intermittent errors in stressTestMulti. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1183192 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.loader.GeneratedClass;" ], "header": "@@ -64,6 +64,7 @@ import java.sql.Types;", "removed": [] }, { "added": [ "", " GeneratedClass currAc = null;", " ResultDescription resd = null;", "", " synchronized(execp) {", " // DERBY-3823 Some other thread may be repreparing", " do {", " while (!execp.upToDate()) {", " execp.rePrepare(lcc);", " }", "", " currAc = execp.getActivationClass();", " resd = execp.getResultDescription();", " } while (currAc == null);", " }", "", " if (gcDuringGetMetaData == null ||", " !gcDuringGetMetaData.equals(currAc.getName())) {", " rMetaData = null;", " gcDuringGetMetaData = currAc.getName();", " }", "", " if (rMetaData == null && resd != null) {", " // Internally, the result description has information", " // which is used for insert, update and delete statements", " // Externally, we decided that statements which don't", " // produce result sets such as insert, update and delete", " // should not return ResultSetMetaData. This is enforced", " // here", " String statementType = resd.getStatementType();", " if (statementType.equals(\"INSERT\") ||", " statementType.equals(\"UPDATE\") ||", " statementType.equals(\"DELETE\"))", " rMetaData = null;", " else", " rMetaData = newEmbedResultSetMetaData(resd);", " }", "" ], "header": "@@ -1077,43 +1078,48 @@ public abstract class EmbedPreparedStatement", "removed": [ "\t\t\t\t//bug 4579 - if the statement is invalid, regenerate the metadata info", "\t\t\t\tif (preparedStatement.isValid() == false)", "\t\t\t\t{", "\t\t\t\t\t//need to revalidate the statement here, otherwise getResultDescription would", "\t\t\t\t\t//still have info from previous valid statement", "\t\t\t\t\tpreparedStatement.rePrepare(lcc);", "\t\t\t\t\trMetaData = null;", "\t\t\t\t}", "\t\t\t\tif (gcDuringGetMetaData == null || gcDuringGetMetaData.equals(execp.getActivationClass().getName()) == false)", "\t\t\t\t{", "\t\t\t\t\trMetaData = null;", "\t\t\t\t\tgcDuringGetMetaData = execp.getActivationClass().getName();", "\t\t\t\t}", "\t\t\t\tif (rMetaData == null)", "\t\t\t\t{", "\t\t\t\t\tResultDescription resd = preparedStatement.getResultDescription();", "\t\t\t\t\tif (resd != null)", "\t\t\t\t\t{", "\t\t\t\t\t\t// Internally, the result description has information", "\t\t\t\t\t\t// which is used for insert, update and delete statements", "\t\t\t\t\t\t// Externally, we decided that statements which don't", "\t\t\t\t\t\t// produce result sets such as insert, update and delete", "\t\t\t\t\t\t// should not return ResultSetMetaData. This is enforced", "\t\t\t\t\t\t// here", "\t\t\t\t\t\tString statementType = resd.getStatementType();", "\t\t\t\t\t\tif (statementType.equals(\"INSERT\") ||", "\t\t\t\t\t\t\t\tstatementType.equals(\"UPDATE\") ||", "\t\t\t\t\t\t\t\tstatementType.equals(\"DELETE\"))", "\t\t\t\t\t\t\trMetaData = null;", "\t\t\t\t\t\telse", "\t\t\t\t \t\trMetaData = newEmbedResultSetMetaData(resd);", "\t\t\t\t\t}", "\t\t\t\t}" ] } ] } ]
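The DERBY-3823 fix above replaces a check-then-act sequence with a retry loop held under the statement's monitor, so a concurrent re-prepare can never hand the caller a null activation class. A minimal sketch of that pattern; the `Exec` type and its members here are illustrative stand-ins for Derby's prepared-statement internals, not Derby's API:

```java
// Sketch of the DERBY-3823 pattern: loop until the statement is up to
// date AND the activation class is non-null, all under one lock.
public class ReprepareSketch {

    // Hypothetical stand-in for the shared prepared-statement state.
    static class Exec {
        private boolean upToDate = false;
        private String activationClass = null;

        boolean upToDate() { return upToDate; }
        void rePrepare() { upToDate = true; activationClass = "ac1"; }
        String getActivationClass() { return activationClass; }
    }

    static String currentActivationClassName(Exec execp) {
        String currAc;
        synchronized (execp) {
            // Some other thread may be re-preparing: retry until stable.
            do {
                while (!execp.upToDate()) {
                    execp.rePrepare();
                }
                currAc = execp.getActivationClass();
            } while (currAc == null);
        }
        return currAc;
    }

    public static void main(String[] args) {
        System.out.println(currentActivationClassName(new Exec()));
    }
}
```

The key design point mirrored from the patch is that both the up-to-date check and the activation-class read happen inside the same synchronized region, closing the window the old code left open.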
derby-DERBY-3825-925b07bc
DERBY-3825: StoreStreamClob.getReader(charPos) performs poorly. Made the repositioning logic easier to read. Patch file: derby-3825-3a-simplification.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@710033 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [ " if (requestedCharPos <= readerCharCount - charactersInBuffer) {", " // The stream must be reset, because the requested position is", " // before the current lower buffer boundary.", " resetUTF8Reader();", " }", "", " long currentCharPos =", " readerCharCount - charactersInBuffer + readPositionInBuffer;", " long difference = (requestedCharPos - 1) - currentCharPos;", "", " if (difference <= 0) {", " // Move back in the buffer.", " readPositionInBuffer += difference;", " // Skip forward.", " persistentSkip(difference);" ], "header": "@@ -621,25 +621,22 @@ readChars:", "removed": [ " // See if we can continue reading, or do nothing at all, to get to the", " // right position.", " if (requestedCharPos > readerCharCount) {", " // The second part corrects for the internal buffer position.", " long toSkip = (requestedCharPos - readerCharCount) +", " (charactersInBuffer - readPositionInBuffer) -1;", " persistentSkip(toSkip);", " // See if the requested position is within the current buffer.", " long lowerBufferBorder = readerCharCount - charactersInBuffer;", " if (requestedCharPos <= lowerBufferBorder) {", " // Have to reset and start from scratch.", " resetUTF8Reader();", " persistentSkip(requestedCharPos -1);", " } else {", " // We have the requested position in the buffer already.", " readPositionInBuffer =", " (int)(requestedCharPos - lowerBufferBorder -1);", " }" ] } ] } ]
derby-DERBY-3825-ab2037f8
DERBY-3825: StoreStreamClob.getReader(charPos) performs poorly. Added repositioning functionality to UTF8Reader - ordered after increasing cost: a) Reposition within current character buffer (small hops forwards and potentially backwards - in range 1 char to 8K chars) b) Forward stream from current position (hops forwards) c) Reset stream and skip data (hops backwards) In addition the new method getInternalReader was added to InternalClob. It takes advantage of the repositioning capability of UTF8Reader. Clob.getSubString uses the new functionality when the Clob is represented by a StoreStreamClob. Previously only mechanism c) was used. Patch file: derby-3825-2b-internalReader_repositioning.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704547 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java", "hunks": [ { "added": [ " Reader reader;", " try {", " reader = this.clob.getInternalReader(pos);", " } catch (EOFException eofe) {", " throw Util.generateCsSQLException(", " SQLState.BLOB_POSITION_TOO_LARGE, new Long(pos), eofe);", " }" ], "header": "@@ -217,8 +217,14 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " Reader reader = this.clob.getReader(pos);" ] }, { "added": [], "header": "@@ -236,9 +242,6 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " } catch (EOFException eofe) {", " throw Util.generateCsSQLException(", " SQLState.BLOB_POSITION_TOO_LARGE, eofe);" ] }, { "added": [ " Reader reader = this.clob.getInternalReader(start);" ], "header": "@@ -342,7 +345,7 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " Reader reader = this.clob.getReader(start);" ] }, { "added": [ " reader = this.clob.getInternalReader(", " newStart);" ], "header": "@@ -381,7 +384,8 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " reader = this.clob.getReader(newStart);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java", "hunks": [ { "added": [ "import java.io.FilterReader;" ], "header": "@@ -24,6 +24,7 @@ package org.apache.derby.impl.jdbc;", "removed": [] }, { "added": [ " /**", " * Shared internal reader, closed when the Clob is released.", " * This is a performance optimization, and the stream is shared between", " * \"one time\" operations, for instance {@code getSubString} calls. Often a", " * subset, or the whole, of the Clob is read subsequently and then this", " * optimization avoids repositioning costs (the store does not support", " * random access for LOBs).", " * <b>NOTE</b>: Do not publish this reader to the end-user.", " */", " private UTF8Reader internalReader;", " /** The internal reader wrapped so that it cannot be closed. */", " private FilterReader unclosableInternalReader;" ], "header": "@@ -68,7 +69,18 @@ final class StoreStreamClob", "removed": [ "" ] }, { "added": [ " if (this.internalReader != null) {", " this.internalReader.close();", " }" ], "header": "@@ -104,6 +116,9 @@ final class StoreStreamClob", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [ " /**", " * Resets the reader.", " * <p>", " * This method is used internally to achieve better performance.", " * @see #reposition(long)", " *", " * @throws IOException if resetting or reading from the stream fails", " * @throws StandardException if resetting the stream fails", " */", " private void resetUTF8Reader()", " throws IOException, StandardException {", " // 2L to skip the length encoding bytes.", " this.positionedIn.reposition(2L);", " this.rawStreamPos = this.positionedIn.getPosition();", " this.in = this.positionedIn;", " this.readerCharCount = this.utfCount = 0L;", " this.charactersInBuffer = this.readPositionInBuffer = 0;", " }", "", " /**", " * Repositions the stream so that the next character read will be the", " * character at the requested position.", " * <p>", " * There are three types of repositioning, ordered after increasing cost:", " * <ol> <li>Reposition within current character buffer (small hops forwards", " * and potentially backwards - in range 1 char to", " * {@code MAXIMUM_BUFFER_SIZE} chars)</li>", " * <li>Forward stream from current position (hops forwards)</li>", " * <li>Reset stream and skip data (hops backwards)</li>", " * </ol>", " *", " * @param requestedCharPos 1-based requested character position", " * @throws IOException if resetting or reading from the stream fails", " * @throws StandardException if resetting the stream fails", " */", " void reposition(long requestedCharPos)", " throws IOException, StandardException {", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(this.positionedIn != null);", " SanityManager.ASSERT(requestedCharPos > 0);", " }", " // See if we can continue reading, or do nothing at all, to get to the", " // right position.", " if (requestedCharPos > readerCharCount) {", " // The second part corrects for the internal buffer position.", " long toSkip = (requestedCharPos - readerCharCount) +", " (charactersInBuffer - readPositionInBuffer) -1;", " persistentSkip(toSkip);", " } else {", " // See if the requested position is within the current buffer.", " long lowerBufferBorder = readerCharCount - charactersInBuffer;", " if (requestedCharPos <= lowerBufferBorder) {", " // Have to reset and start from scratch.", " resetUTF8Reader();", " persistentSkip(requestedCharPos -1);", " } else {", " // We have the requested position in the buffer already.", " readPositionInBuffer =", " (int)(requestedCharPos - lowerBufferBorder -1);", " }", " }", " }", "" ], "header": "@@ -580,6 +580,69 @@ readChars:", "removed": [] } ] }, { "file": "java/testing/org/apache/derby/impl/jdbc/_Suite.java", "hunks": [ { "added": [ " suite.addTest(UTF8ReaderTest.suite());" ], "header": "@@ -46,6 +46,7 @@ public class _Suite", "removed": [] } ] } ]
derby-DERBY-3831-a90874a5
DERBY-3831; junit.RuntimeStatisticsParser doesn't fully distinguish names of table or index. patch contributed by Junjie Peng git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@685733 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/RuntimeStatisticsParser.java", "hunks": [ { "added": [ " tableName + \" \")!= -1);" ], "header": "@@ -168,7 +168,7 @@ public class RuntimeStatisticsParser {", "removed": [ " tableName)!= -1);" ] }, { "added": [ " tableName + \" using index \" + indexName + \" \")!= -1);" ], "header": "@@ -180,7 +180,7 @@ public class RuntimeStatisticsParser {", "removed": [ " tableName + \" using index \" + indexName)!= -1);" ] }, { "added": [ " tableName + \" \")!= -1);" ], "header": "@@ -189,7 +189,7 @@ public class RuntimeStatisticsParser {", "removed": [ " tableName)!= -1);" ] }, { "added": [ " tableName + \":\")!= -1); ", " ", " tableName + \" \")!= -1);" ], "header": "@@ -217,18 +217,16 @@ public class RuntimeStatisticsParser {", "removed": [ " tableName)!= -1);", " ", " ", " ", " tableName)!= -1);" ] } ] } ]
derby-DERBY-3840-f23a20f8
DERBY-3840 The test code executes java processes by just executing java instead of using a full path. This may cause the wrong java to be picked up git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@693522 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java", "hunks": [ { "added": [ " al.add( BaseTestCase.getJavaExecutableName() );" ], "header": "@@ -249,7 +249,7 @@ final public class NetworkServerTestSetup extends BaseTestSetup {", "removed": [ " al.add( \"java\" );" ] } ] } ]
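The DERBY-3840 fix swaps the bare "java" literal for `BaseTestCase.getJavaExecutableName()`. A sketch of the usual way such a helper resolves the launcher of the currently running JVM; this is an assumption about the approach, not Derby's exact code:

```java
import java.io.File;

// Resolve a full path to the java launcher of the running JVM,
// instead of relying on whatever "java" is first on the PATH.
public class JavaExecutable {

    static String getJavaExecutableName() {
        return System.getProperty("java.home")
                + File.separator + "bin"
                + File.separator + "java";
    }

    public static void main(String[] args) {
        System.out.println(getJavaExecutableName());
    }
}
```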
derby-DERBY-3844-b61d6349
DERBY-3844: ASSERT failure in BasePage.unlatch() when running LobStreamsTest Disallows calling getBlob or getClob more than once on a given LOB column (on the same result set row). See release note for details, this change may break existing applications. Patch file: derby-3844-1c-disallow_multiple_accesses.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@911793 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/client/org/apache/derby/client/am/ResultSet.java", "hunks": [ { "added": [ "import java.util.Arrays;" ], "header": "@@ -25,9 +25,9 @@ import java.io.IOException;", "removed": [ "import org.apache.derby.shared.common.i18n.MessageUtil;" ] }, { "added": [ "", " /**", " * Indicates which columns have been fetched as a stream or as a LOB for a", " * row. Created on-demand by a getXXXStream or a get[BC]lob call. Note that", " * we only track columns that can be accessed as a stream or a LOB object.", " */", " private boolean[] columnUsedFlags_;" ], "header": "@@ -204,8 +204,13 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " ", " private boolean[] streamUsedFlags_;" ] }, { "added": [ "", " unuseStreamsAndLOBs();" ], "header": "@@ -293,8 +298,8 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\t", "\tunuseStreams();" ] }, { "added": [ " useStreamOrLOB(column);" ], "header": "@@ -1119,7 +1124,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " useStream(column);" ] }, { "added": [ " useStreamOrLOB(column);" ], "header": "@@ -1150,7 +1155,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " useStream(column);" ] }, { "added": [ " useStreamOrLOB(column);" ], "header": "@@ -1201,7 +1206,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " useStream(column);" ] }, { "added": [ " useStreamOrLOB(column);" ], "header": "@@ -1232,6 +1237,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [] }, { "added": [ " useStreamOrLOB(column);" ], "header": "@@ -1261,6 +1267,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [] }, { "added": [ " resetRowsetFlags();", " unuseStreamsAndLOBs();" ], "header": "@@ -2096,8 +2103,8 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\tresetRowsetFlags();", "\tunuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2137,7 +2144,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " unuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2189,7 +2196,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ " unuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2244,7 +2251,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\tunuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2361,7 +2368,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\tunuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2444,8 +2451,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\t", "\tunuseStreams();" ] }, { "added": [ " unuseStreamsAndLOBs();" ], "header": "@@ -2571,8 +2577,7 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\t", "\tunuseStreams();" ] }, { "added": [ "", " unuseStreamsAndLOBs();", "" ], "header": "@@ -3691,9 +3696,9 @@ public abstract class ResultSet implements java.sql.ResultSet,", "removed": [ "\t ", " \t unuseStreams();", "\t " ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java", "hunks": [ { "added": [ " * Indicates which columns have been fetched as a stream or as a LOB for a", " * row. Created on-demand by a getXXXStream or a get[BC]lob call. Note that", " * we only track columns that can be accessed as a stream or a LOB object.", " private boolean[] columnUsedFlags;" ], "header": "@@ -203,10 +203,11 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ " * Indicates which columns have already been fetched", " * as a stream for a row. Created on-demand by a getXXXStream call.", " private boolean[] streamUsedFlags;" ] }, { "added": [ " // Clear the indication of which columns were fetched as streams.", " if (columnUsedFlags != null)", " Arrays.fill(columnUsedFlags, false);" ], "header": "@@ -508,9 +509,9 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t\t// Clear the indication of which columns were fetched as streams.", "\t\t\tif (streamUsedFlags != null)", "\t\t\t Arrays.fill(streamUsedFlags, false);" ] }, { "added": [ " useStreamOrLOB(columnIndex);" ], "header": "@@ -1125,7 +1126,7 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t useStream(columnIndex);" ] }, { "added": [ " useStreamOrLOB(columnIndex);" ], "header": "@@ -1238,7 +1239,7 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t useStream(columnIndex);" ] }, { "added": [ " useStreamOrLOB(columnIndex);", "" ], "header": "@@ -3956,6 +3957,8 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [] }, { "added": [ " useStreamOrLOB(columnIndex);", "" ], "header": "@@ -4007,6 +4010,8 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [] } ] } ]
derby-DERBY-3844-c1832a32
DERBY-3844: ASSERT failure in BasePage.unlatch() when running LobStreamsTest Rewrote test to only call getX once per column per row, as calling for instance getBlob twice on a given column on the same row will be disallowed. Also removed some unused code. Patch file: derby-3844-2a-jdbcdriver_test_rewrite.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@910481 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3845-d8644b2a
DERBY-3845: Problems running org.apache.derbyTesting.system.optimizer.RunOptimizerTest Make sure the database is initialized also when -mode is specified on the command line. Patch contributed by Ole Solberg <Ole.Solberg@Sun.COM>. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@691583 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3850-8106edc9
DERBY-3850: Remove unneeded workarounds for DERBY-177 and DERBY-3693 Removed the wait parameter from methods called from SPSDescriptor.updateSYSSTATEMENTS() since waiting is prevented by another mechanism now. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@692495 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java", "hunks": [ { "added": [ "\t\tTransactionController\ttc" ], "header": "@@ -1072,15 +1072,13 @@ public interface DataDictionary", "removed": [ "\t * @param wait\t\t\tTo wait for lock or not", "\t\tTransactionController\ttc,", "\t\tboolean\t\t\t\t\twait" ] }, { "added": [], "header": "@@ -1092,10 +1090,7 @@ public interface DataDictionary", "removed": [ "\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be", "\t * when we using a nested user xaction - we want to timeout right away if", "\t * the parent holds the lock. (bug 4821)" ] }, { "added": [], "header": "@@ -1104,7 +1099,6 @@ public interface DataDictionary", "removed": [ "\t\t\tboolean\t\t\t\t\twait," ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/SPSDescriptor.java", "hunks": [ { "added": [], "header": "@@ -1072,10 +1072,7 @@ public class SPSDescriptor extends TupleDescriptor", "removed": [ "\t\tint[] \t\t\t\t\tcolsToUpdate;", "\t\t//bug 4821 - we want to wait for locks if updating sysstatements on parent transaction", "\t\tboolean wait = false;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java", "hunks": [ { "added": [ "\t\tint insertRetCode = ti.insertRow(row, tc);" ], "header": "@@ -1789,24 +1789,13 @@ public final class\tDataDictionaryImpl", "removed": [ "\t{", "\t\taddDescriptor(td, parent, catalogNumber, duplicatesAllowed, tc, true);", "\t}", "", "\t/**", "\t * @inheritDoc", "\t */", "\tpublic void addDescriptor(TupleDescriptor td, TupleDescriptor parent,", "\t\t\t\t\t\t\t int catalogNumber, boolean duplicatesAllowed,", "\t\t\t\t\t\t\t TransactionController tc, boolean wait)", "\t\tthrows StandardException", "\t\tint insertRetCode = ti.insertRow(row, tc, wait);" ] }, { "added": [], "header": "@@ -3377,9 +3366,6 @@ public final class\tDataDictionaryImpl", "removed": [ "\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be", "\t * when we using a nested user xaction - we want to timeout right away if the parent", "\t * holds the lock. (bug 4821)" ] }, { "added": [ "\t\t\t\t\t\t\t\t\t\tTransactionController tc)" ], "header": "@@ -3387,8 +3373,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t\t\t\t\t\tTransactionController tc,", "\t\t\t\t\t\t\t\t\t\tboolean wait)" ] }, { "added": [ "\t\t\t\t\t tc);" ], "header": "@@ -3458,8 +3443,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t tc,", "\t\t\t\t\t wait);" ] }, { "added": [ "\t\tTransactionController\ttc" ], "header": "@@ -3959,8 +3943,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tTransactionController\ttc,", "\t\tboolean wait" ] }, { "added": [ "\t\t\tinsertRetCode = ti.insertRow(row, tc);" ], "header": "@@ -3982,7 +3965,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\tinsertRetCode = ti.insertRow(row, tc, wait);" ] }, { "added": [ "\t\taddSPSParams(descriptor, tc);", "\tprivate void addSPSParams(SPSDescriptor spsd, TransactionController tc)" ], "header": "@@ -3995,14 +3978,14 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\taddSPSParams(descriptor, tc, wait);", "\tprivate void addSPSParams(SPSDescriptor spsd, TransactionController tc, boolean wait)" ] }, { "added": [ "\t\t\t\t\t\t tc);" ], "header": "@@ -4034,7 +4017,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t\t tc, wait);" ] }, { "added": [], "header": "@@ -4079,13 +4062,9 @@ public final class\tDataDictionaryImpl", "removed": [ "\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be", "\t * when we using a nested user xaction - we want to timeout right away if the parent", "\t * holds the lock. (bug 4821)", "\t *" ] }, { "added": [], "header": "@@ -4093,14 +4072,12 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\tboolean\t\t\t\t\twait,", "\t\tDataValueDescriptor\t\t\tcolumnNameOrderable;" ] }, { "added": [ "\t\t\t\t\t tc);" ], "header": "@@ -4148,8 +4125,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t tc,", "\t\t\t\t\t wait);" ] }, { "added": [ "\t\t\taddSPSParams(spsd, tc);" ], "header": "@@ -4180,7 +4156,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\taddSPSParams(spsd, tc, wait);" ] }, { "added": [ "\t\t\t\t\t\t\t\t\t tc);" ], "header": "@@ -4220,8 +4196,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t\t\t\t\t tc,", "\t\t\t\t\t\t\t\t\t wait);" ] }, { "added": [ "\t\tti.insertRow(row, tc);" ], "header": "@@ -5796,7 +5771,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tti.insertRow(row, tc, true);" ] }, { "added": [ "\t\t\t\t\t\t\t\ttc);" ], "header": "@@ -7309,8 +7284,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\t\t\t\t\ttc,", "\t\t\t\t\t\t\t\ttrue);" ] }, { "added": [ "\t\tint insertRetCode = ti.insertRow(row, tc);" ], "header": "@@ -7949,7 +7923,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tint insertRetCode = ti.insertRow(row, tc, true);" ] }, { "added": [ "\t\t\taddSPSDescriptor(spsd, tc);" ], "header": "@@ -9388,7 +9362,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\taddSPSDescriptor(spsd, tc, true);" ] }, { "added": [ " int insertRetCode = ti.insertRow(row, tc);" ], "header": "@@ -11834,7 +11808,7 @@ public final class\tDataDictionaryImpl", "removed": [ " int insertRetCode = ti.insertRow(row, tc, true /* wait */);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/catalog/TabInfoImpl.java", "hunks": [ { "added": [ "\tint insertRow( ExecRow row, TransactionController tc)", "\t\treturn insertRowListImpl(new ExecRow[] {row},tc,notUsed);" ], "header": "@@ -413,19 +413,18 @@ class TabInfoImpl", "removed": [ "\t *\t@param\twait\t\tto wait on lock or quickly TIMEOUT", "\tint insertRow( ExecRow row, TransactionController tc, boolean wait)", "\t\treturn insertRowListImpl(new ExecRow[] {row},tc,notUsed, wait);" ] }, { "added": [ "\t\treturn insertRowListImpl(rowList,tc,notUsed);" ], "header": "@@ -446,7 +445,7 @@ class TabInfoImpl", "removed": [ "\t\treturn insertRowListImpl(rowList,tc,notUsed, true);" ] }, { "added": [ "\tprivate int insertRowListImpl(ExecRow[] rowList, TransactionController tc,", " RowLocation[] rowLocationOut)" ], "header": "@@ -461,12 +460,11 @@ class TabInfoImpl", "removed": [ "\t @param wait to wait on lock or quickly TIMEOUT", "\tprivate int insertRowListImpl(ExecRow[] rowList, TransactionController tc, RowLocation[] rowLocationOut,", "\t\t\t\t\t\t\t\t boolean wait)" ] }, { "added": [ "\t\t\t\tTransactionController.OPENMODE_FORUPDATE," ], "header": "@@ -482,8 +480,7 @@ class TabInfoImpl", "removed": [ "\t\t\t\t(TransactionController.OPENMODE_FORUPDATE |", " ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT))," ] }, { "added": [ "\t\t\t\t\t\tTransactionController.OPENMODE_FORUPDATE," ], "header": "@@ -504,8 +501,7 @@ class TabInfoImpl", "removed": [ "\t\t\t\t\t\t(TransactionController.OPENMODE_FORUPDATE |", " \t\t((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT))," ] } ] }, { "file": "java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java", "hunks": [ { "added": [ "\t\t\tTransactionController tc) throws StandardException {", "\t\t\tboolean recompile, boolean updateSYSCOLUMNS," ], "header": "@@ -450,13 +450,13 @@ public class EmptyDictionary implements DataDictionary, ModuleSupportable {", "removed": [ "\t\t\tTransactionController tc, boolean wait) throws StandardException {", "\t\t\tboolean recompile, boolean updateSYSCOLUMNS, boolean wait," ] }, { "added": [], "header": "@@ -794,12 +794,6 @@ public class EmptyDictionary implements DataDictionary, ModuleSupportable {", "removed": [ "\tpublic void addDescriptor(TupleDescriptor tuple, TupleDescriptor parent,", "\t\t\tint catalogNumber, boolean allowsDuplicates,", "\t\t\tTransactionController tc, boolean wait) throws StandardException {", "\t}", "", "" ] } ] } ]
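The workaround removed in DERBY-3850 combined open-mode flags so that a nested transaction would time out on a lock immediately instead of waiting. The flag arithmetic the old code used, sketched with hypothetical bit values (Derby's real constants live in TransactionController):

```java
// How the removed wait parameter shaped the conglomerate open mode:
// wait=false OR-ed in a LOCK_NOWAIT bit so lock requests failed fast
// rather than blocking on a lock held by the parent transaction.
public class OpenModeSketch {

    // Hypothetical bit values; illustrative only.
    static final int OPENMODE_FORUPDATE = 0x0004;
    static final int OPENMODE_LOCK_NOWAIT = 0x0080;

    static int openMode(boolean wait) {
        return OPENMODE_FORUPDATE | (wait ? 0 : OPENMODE_LOCK_NOWAIT);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(openMode(false)));
    }
}
```

With waiting prevented by another mechanism, every caller passed wait=true, the NOWAIT bit was never set, and the parameter (plus its flag dance) could be deleted outright.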
derby-DERBY-3850-a9215529
DERBY-3850: Remove unneeded workarounds for DERBY-177 and DERBY-3693 Removed the wait parameter from TabInfoImpl.updateRow(). The method only had two callers, both of which called it with wait=true. updateRow() passed the parameter on to openForUpdate() in RowChanger, but that method is sometimes called with wait=false, so the parameter couldn't be removed from that method. Also removed an unused variable and some unused imports. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@695244 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/catalog/TabInfoImpl.java", "hunks": [ { "added": [], "header": "@@ -22,18 +22,14 @@", "removed": [ "import org.apache.derby.iapi.services.context.ContextService;", "import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;", "import org.apache.derby.iapi.sql.execute.ExecutionContext;", "import org.apache.derby.iapi.sql.execute.ExecutionFactory;" ] }, { "added": [], "header": "@@ -46,11 +42,8 @@ import org.apache.derby.iapi.store.access.StaticCompiledOpenConglomInfo;", "removed": [ "import org.apache.derby.iapi.types.DataValueFactory;", "import org.apache.derby.catalog.UUID;", "import java.util.Enumeration;" ] }, { "added": [ "\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc);" ], "header": "@@ -936,7 +929,7 @@ class TabInfoImpl", "removed": [ "\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc, true);" ] }, { "added": [], "header": "@@ -963,46 +956,11 @@ class TabInfoImpl", "removed": [ "\t{", "\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc, true);", "\t}", "", "\t/**", "\t * Updates a set of base rows in a catalog with the same key on an index", "\t * and updates all the corresponding index rows. If parameter wait is true,", "\t * then the caller wants to wait for locks. When using a nested user xaction", "\t * we want to timeout right away if the parent holds the lock.", "\t *", "\t *\t@param\tkey\t\t\tkey row", "\t *\t@param\tnewRows\t\tnew version of the array of rows", "\t *\t@param\tindexNumber\tindex that key operates", "\t *\t@param\tindicesToUpdate\tarray of booleans, one for each index on the catalog.", "\t *\t\t\t\t\t\t\tif a boolean is true, that means we must update the", "\t *\t\t\t\t\t\t\tcorresponding index because changes in the newRow", "\t *\t\t\t\t\t\t\taffect it.", "\t *\t@param colsToUpdate\tarray of ints indicating which columns (1 based)", "\t *\t\t\t\t\t\t\tto update. If null, do all.", "\t *\t@param\ttc\t\t\ttransaction controller", "\t *\t@param wait\t\tIf true, then the caller wants to wait for locks. When", "\t *\t\t\t\t\t\t\tusing a nested user xaction we want to timeout right away", "\t *\t\t\t\t\t\t\tif the parent holds the lock. (bug 4821)", "\t *", "\t * @exception StandardException\t\tThrown on failure", "\t */", "\tprivate void updateRow( ExecIndexRow\t\t\t\tkey,", "\t\t\t\t\t\t ExecRow[]\t\t\t\tnewRows,", "\t\t\t\t\t\t int\t\t\t\t\t\tindexNumber,", "\t\t\t\t\t\t boolean[]\t\t\t\tindicesToUpdate,", "\t\t\t\t\t\t int[]\t\t\t\t\tcolsToUpdate,", "\t\t\t\t\t\t TransactionController\ttc,", "\t\t\t\t\t\t boolean wait)", "\t\tthrows StandardException", "\t\tExecIndexRow\t\t\t\ttemplateRow;" ] }, { "added": [ "\t\trc.openForUpdate(indicesToUpdate, TransactionController.MODE_RECORD, true);", " TransactionController.OPENMODE_FORUPDATE,", "\t\t\tTransactionController.OPENMODE_FORUPDATE," ], "header": "@@ -1014,22 +972,20 @@ class TabInfoImpl", "removed": [ "\t\trc.openForUpdate(indicesToUpdate, TransactionController.MODE_RECORD, wait); ", " (TransactionController.OPENMODE_FORUPDATE |", " ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)),", "\t\t\t(TransactionController.OPENMODE_FORUPDATE |", " ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)), " ] } ] } ]
derby-DERBY-3853-d1661b20
DERBY-3853: Behaviour of setTypeMap() differs between embedded and client Changed the client driver to match the embedded driver. Patch contributed by Yun Lee <yun.lee.bj@gmail.com>. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@764217 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3856-79fec783
DERBY-3856: difference between Embedded vs DerbyNetClient in format of return from timestamp(cast(? as varchar(32))) Stop caching the original input string in the parse methods of SQLDate and SQLTimestamp, and instead generate (and cache) a normalized datetime string on the first call to getString(). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@952581 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SQLDate.java", "hunks": [ { "added": [], "header": "@@ -471,7 +471,6 @@ public final class SQLDate extends DataType", "removed": [ " valueString = parser.getTrimmedString();" ] }, { "added": [], "header": "@@ -507,7 +506,6 @@ public final class SQLDate extends DataType", "removed": [ " valueString = parser.checkEnd();" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLTimestamp.java", "hunks": [ { "added": [], "header": "@@ -519,7 +519,6 @@ public final class SQLTimestamp extends DataType", "removed": [ " valueString = parser.getTrimmedString();" ] } ] } ]
derby-DERBY-3863-40a45842
DERBY-3863: improve test for import export using ij git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@691506 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-387-5bd651f5
DERBY-387: Fix Simple Network Client sample to work correctly by fixing database name. Contributed by Rajesh Kartha git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@375715 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/demo/nserverdemo/SimpleNetworkClientSample.java", "hunks": [ { "added": [ "import java.lang.reflect.InvocationTargetException;", "import java.lang.reflect.Method;", "import java.sql.Connection;", "import java.sql.DriverManager;", "import java.sql.ResultSet;", "import java.sql.SQLException;", "import java.sql.Statement;", "", "import javax.sql.DataSource;" ], "header": "@@ -18,12 +18,16 @@", "removed": [ "import java.sql.*;", "import java.lang.reflect.*;", "import javax.sql.DataSource;", "import java.io.BufferedReader;", "import java.io.InputStreamReader;" ] }, { "added": [ "\tprivate static String DBNAME=\"NSSampleDB\";" ], "header": "@@ -51,7 +55,7 @@ public class SimpleNetworkClientSample", "removed": [ "\tprivate static String DBNAME=\"NSSimpleDB\";" ] }, { "added": [ " private static final String DERBY_CLIENT_URL= \"jdbc:derby://localhost:\"+ NETWORKSERVER_PORT+\"/\"+DBNAME+\";create=true\";" ], "header": "@@ -78,7 +82,7 @@ public static final String DERBY_CLIENT_DRIVER = \"org.apache.derby.jdbc.ClientDr", "removed": [ " private static final String DERBY_CLIENT_URL= \"jdbc:derby://localhost:\"+ NETWORKSERVER_PORT+\"/NSSampledb;create=true\";" ] } ] } ]
derby-DERBY-3870-044afae1
DERBY-3870: Concurrent Inserts of rows with XML data results in an exception Store XML utility instance in the activation instead of in the compiled plan, so that it's never accessed concurrently by multiple threads. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1125305 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SqlXmlUtil.java", "hunks": [ { "added": [], "header": "@@ -23,8 +23,6 @@ package org.apache.derby.iapi.types;", "removed": [ "import org.apache.derby.iapi.services.io.Formatable;", "import org.apache.derby.iapi.services.io.StoredFormatIds;" ] }, { "added": [], "header": "@@ -33,8 +31,6 @@ import java.util.Collections;", "removed": [ "import java.io.ObjectOutput;", "import java.io.ObjectInput;" ] }, { "added": [ "public class SqlXmlUtil" ], "header": "@@ -113,7 +109,7 @@ import javax.xml.transform.stream.StreamResult;", "removed": [ "public class SqlXmlUtil implements Formatable" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/BinaryOperatorNode.java", "hunks": [ { "added": [], "header": "@@ -21,26 +21,18 @@", "removed": [ "import org.apache.derby.iapi.sql.compile.Visitable;", "import org.apache.derby.iapi.sql.dictionary.DataDictionary;", "import org.apache.derby.iapi.services.io.StoredFormatIds;", "import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;", "import org.apache.derby.impl.sql.compile.ActivationClassBuilder;", "import org.apache.derby.iapi.types.StringDataValue;", "import org.apache.derby.iapi.store.access.Qualifier;", "" ] }, { "added": [], "header": "@@ -48,7 +40,6 @@ import org.apache.derby.iapi.reference.SQLState;", "removed": [ "import java.sql.Types;" ] }, { "added": [ "public class BinaryOperatorNode extends OperatorNode" ], "header": "@@ -59,7 +50,7 @@ import java.util.Vector;", "removed": [ "public class BinaryOperatorNode extends ValueNode" ] }, { "added": [ " /** The query expression if the operator is XMLEXISTS or XMLQUERY. */", " private String xmlQuery;" ], "header": "@@ -125,9 +116,8 @@ public class BinaryOperatorNode extends ValueNode", "removed": [ "\t// Class used to compile an XML query expression and/or load/process", "\t// XML-specific objects.", "\tprivate SqlXmlUtil sqlxUtil;" ] }, { "added": [ " xmlQuery = ((CharConstantNode)leftOperand).getString();", "", " // Compile the query expression. The compiled query will not be", " // used, as each activation will need to compile its own version.", " // But we still do this here to get a compile-time error in case", " // the query expression has syntax errors.", " new SqlXmlUtil().compileXQExpr(xmlQuery, operator);" ], "header": "@@ -352,11 +342,13 @@ public class BinaryOperatorNode extends ValueNode", "removed": [ " // compile the query expression.", " sqlxUtil = new SqlXmlUtil();", " sqlxUtil.compileXQExpr(", " ((CharConstantNode)leftOperand).getString(),", " (operatorType == XMLEXISTS_OP ? \"XMLEXISTS\" : \"XMLQUERY\"));" ] }, { "added": [ " pushSqlXmlUtil(acb, mb, xmlQuery, operator);", " mb.pushNewComplete(1);" ], "header": "@@ -518,7 +510,7 @@ public class BinaryOperatorNode extends ValueNode", "removed": [ "\t\t\tmb.pushNewComplete(addXmlOpMethodParams(acb, mb));" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java", "hunks": [ { "added": [], "header": "@@ -23,11 +23,8 @@ package\torg.apache.derby.impl.sql.compile;", "removed": [ "import org.apache.derby.iapi.sql.compile.Visitable;", "import org.apache.derby.iapi.sql.dictionary.DataDictionary;", "" ] }, { "added": [], "header": "@@ -35,15 +32,11 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [ "import org.apache.derby.iapi.services.io.StoredFormatIds;", "import org.apache.derby.iapi.types.StringDataValue;", "import org.apache.derby.iapi.types.SqlXmlUtil;", "import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;" ] }, { "added": [ "public class UnaryOperatorNode extends OperatorNode" ], "header": "@@ -59,7 +52,7 @@ import java.util.Vector;", "removed": [ "public class UnaryOperatorNode extends ValueNode" ] }, { "added": [], "header": "@@ -121,10 +114,6 @@ public class UnaryOperatorNode extends ValueNode", "removed": [ "\t// Class used to hold XML-specific objects required for", "\t// parsing/serializing XML data.", "\tprivate SqlXmlUtil sqlxUtil;", "" ] }, { "added": [], "header": "@@ -383,12 +372,6 @@ public class UnaryOperatorNode extends ValueNode", "removed": [ " // Create a new XML compiler object; the constructor", " // here automatically creates the XML-specific objects ", " // required for parsing/serializing XML, so all we", " // have to do is create an instance.", " sqlxUtil = new SqlXmlUtil();", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/SqlXmlExecutor.java", "hunks": [ { "added": [], "header": "@@ -22,8 +22,6 @@", "removed": [ "import org.apache.derby.iapi.reference.SQLState;", "import org.apache.derby.iapi.sql.Activation;" ] }, { "added": [ " * <p>" ], "header": "@@ -32,6 +30,7 @@ import org.apache.derby.iapi.types.XMLDataValue;", "removed": [] }, { "added": [ " * provided is an already-constructed instance of SqlXmlUtil from the current", " * </p>", " * <p>", " * </p>", " * <pre>", " * </pre>", " * <p>", " * For each activation of the statement, the first time a row is read from", " * xtable, the expression \"/simple\" is compiled and stored in the activation.", " * Then, for each row in xtable, we'll generate the following:", " * </p>", " * <pre>", " * (new SqlXmlExecutor(cachedSqlXmlUtilInstance)).", " * </pre>", " * <p>" ], "header": "@@ -40,33 +39,38 @@ import org.apache.derby.iapi.types.SqlXmlUtil;", "removed": [ " * provided is an id that is used to retrieve an already-constructed", " * (from compilation time) instance of SqlXmlUtil from the current", " * At compilation time we will compile the expression \"/simple\"", " * and store the compiled version of the query into an instance", " * of SqlXmlUtil. Then we will save that instance of SqlXmlUtil", " * as an object in the statement activation, from which we will", " * receive an id that can be used later to retrieve the object", " * (i.e. to retrieve the SqlXmlUtil). Then, for *each* row", " * in xtable, we'll generate the following:", " * (new SqlXmlExecutor(activation, compileTimeObjectId))." ] }, { "added": [ " * </p>", " *", " * <p>" ], "header": "@@ -75,7 +79,9 @@ import org.apache.derby.iapi.types.SqlXmlUtil;", "removed": [ " * " ] }, { "added": [ " * the target result set. By caching the SqlXmlUtil instance in the", " * Activation and access it via this SqlXmlExecutor class, we make", " * create XML-specific objects once per activation, and then", " * class (SqlXmlExecutor) once per row, but this is", " * </p>", " *", " * <p>", " * </p>", " *", " * <p>", " * The next paragraph contains a historical note about why this class is", " * placed in this package. It is no longer true that the class uses the", " * {@code getSavedObject()} method on the Activation, so it should now be", " * safe to move it to the types package.", " * </p>", " * <p><i>" ], "header": "@@ -88,22 +94,32 @@ import org.apache.derby.iapi.types.SqlXmlUtil;", "removed": [ " * the target result set. By using the \"saveObject\" functionality", " * in Activation along with this SqlXmlExecutor class, we make", " * create XML-specific objects once (at compile time), and then", " * class (SqlXmlExecutor) once per row, and yes we have to fetch", " * the appropriate SqlXmlUtil object once per row, but this is", " * " ] }, { "added": [ " * </i></p>", " /** Utility instance that performs the actual XML operations. */", " private final SqlXmlUtil sqlXmlUtil;" ], "header": "@@ -116,15 +132,13 @@ import org.apache.derby.iapi.types.SqlXmlUtil;", "removed": [ " // The activation from which we load the compile-time XML", " // objects (including the compiled XML query expression in", " // case of XMLEXISTS and XMLQUERY).", " private Activation activation;", " private int sqlXUtilId;" ] }, { "added": [ " * @param sqlXmlUtil utility that performs the parsing", " public SqlXmlExecutor(SqlXmlUtil sqlXmlUtil, boolean preserveWS)", " this.sqlXmlUtil = sqlXmlUtil;" ], "header": "@@ -138,15 +152,12 @@ public class SqlXmlExecutor {", "removed": [ " * @param activation Activation from which to retrieve saved objects", " * @param utilId Id by which we find saved objects in activation", " public SqlXmlExecutor(Activation activation, int utilId,", " boolean preserveWS)", " this.activation = activation;", " this.sqlXUtilId = utilId;" ] }, { "added": [ " this.sqlXmlUtil = null;" ], "header": "@@ -159,6 +170,7 @@ public class SqlXmlExecutor {", "removed": [] }, { "added": [ " * @param sqlXmlUtil utility that performs the query", " public SqlXmlExecutor(SqlXmlUtil sqlXmlUtil)", " this.sqlXmlUtil = sqlXmlUtil;" ], "header": "@@ -166,13 +178,11 @@ public class SqlXmlExecutor {", "removed": [ " * @param activation Activation from which to retrieve saved objects", " * @param utilId Id by which we find saved objects in activation", " public SqlXmlExecutor(Activation activation, int utilId)", " this.activation = activation;", " this.sqlXUtilId = utilId;" ] }, { "added": [ " xmlText.getString(), preserveWS, sqlXmlUtil);" ], "header": "@@ -201,7 +211,7 @@ public class SqlXmlExecutor {", "removed": [ " xmlText.getString(), preserveWS, getSqlXmlUtil());" ] }, { "added": [ " return xmlContext.XMLExists(sqlXmlUtil);" ], "header": "@@ -235,7 +245,7 @@ public class SqlXmlExecutor {", "removed": [ " return xmlContext.XMLExists(getSqlXmlUtil());" ] }, { "added": [ " * result of evaluating the query expression against xmlContext." ], "header": "@@ -249,7 +259,7 @@ public class SqlXmlExecutor {", "removed": [ " * result of evaulating the query expression against xmlContext." ] } ] }, { "file": "java/testing/org/apache/derbyTesting/functionTests/suites/XMLSuite.java", "hunks": [ { "added": [ " suite.addTest(org.apache.derbyTesting.functionTests.tests.lang.XMLConcurrencyTest.suite());" ], "header": "@@ -50,8 +50,7 @@ public final class XMLSuite extends BaseTestCase {", "removed": [ " // XMLConcurrencyTest is disabled until DERBY-3870 is fixed.", " // suite.addTest(org.apache.derbyTesting.functionTests.tests.lang.XMLConcurrencyTest.suite());" ] } ] } ]
derby-DERBY-3870-0f64b702
DERBY-5289 Unable to boot 10.5.1.1 database - fails during soft/hard upgrade process for a new version number while trying to drop jdbc metadata Checking in testcase for DERBY-5289. In trunk the DERBY-3870 fix contributed by Knut Anders Hatlen fixed the issue so no code change is needed. Just the portion of DERBY-3870 that is relevant to DERBY-5289 will be backported to the other branches. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1139449 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3870-118ac264
DERBY-3870: Concurrent Inserts of rows with XML data results in an exception Added a test case for the bug (disabled for now). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1101839 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3870-2a09eb57
DERBY-3870: Concurrent Inserts of rows with XML data results in an exception Remove lazy initialization of the field that holds the cached SqlXmlUtil instance. This simplifies the generated byte code. It also removes the need for an explicit syntax check of the XML query during the bind phase, as the earlier initialization ensures that syntax errors will be detected at compile time. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1126358 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/BinaryOperatorNode.java", "hunks": [ { "added": [], "header": "@@ -31,7 +31,6 @@ import org.apache.derby.iapi.services.compiler.LocalField;", "removed": [ "import org.apache.derby.iapi.types.SqlXmlUtil;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/OperatorNode.java", "hunks": [ { "added": [ " * instance will be created and cached in the activation's constructor, so", " * that we don't need to create a new instance for every row." ], "header": "@@ -36,9 +36,8 @@ abstract class OperatorNode extends ValueNode {", "removed": [ " * instance will be created and cached in the activation the first time", " * the code is executed, so that we don't need to create a new instance", " * for every row." ] }, { "added": [ " Modifier.PRIVATE | Modifier.FINAL, SqlXmlUtil.class.getName());", "", " // Add code that creates the SqlXmlUtil instance in the constructor.", " MethodBuilder constructor = acb.getConstructor();", " constructor.pushNewStart(SqlXmlUtil.class.getName());", " constructor.pushNewComplete(0);", " constructor.putField(sqlXmlUtil);", "", " // Compile the query, if one is specified.", " if (xmlQuery == null) {", " // No query. The SqlXmlUtil instance is still on the stack. Pop it", " // to restore the initial state of the stack.", " constructor.pop();", " } else {", " // Compile the query. This will consume the SqlXmlUtil instance", " // and leave the stack in its initial state.", " constructor.push(xmlQuery);", " constructor.push(xmlOpName);", " constructor.callMethod(", " // Read the cached value and push it onto the stack in the method", " // generated for the operator.", " mb.getField(sqlXmlUtil);" ], "header": "@@ -59,37 +58,31 @@ abstract class OperatorNode extends ValueNode {", "removed": [ " Modifier.PRIVATE, SqlXmlUtil.class.getName());", "", " // Read the cached value.", " mb.getField(sqlXmlUtil);", "", " // Check if the cached value is null. If it is, create a new instance.", " // Otherwise, we're happy with the stack as it is (the cached instance", " // will be on top of it), and nothing more is needed.", " mb.dup();", " mb.conditionalIfNull();", "", " // The cached value is null. Pop it from the stack so that we can put", " // a fresh instance there in its place.", " mb.pop();", "", " // Create a new instance and cache it in the field. Its value will be", " // on the top of the stack after this sequence.", " mb.pushNewStart(SqlXmlUtil.class.getName());", " mb.pushNewComplete(0);", " mb.putField(sqlXmlUtil);", "", " // If a query is specified, compile it.", " if (xmlQuery != null) {", " mb.dup();", " mb.push(xmlQuery);", " mb.push(xmlOpName);", " mb.callMethod(", " mb.completeConditional();" ] } ] } ]
derby-DERBY-3870-5fc727ca
DERBY-3870: Concurrent Inserts of rows with XML data results in an exception Allow databases that contain triggers with XML operators to be upgraded from a version without the fix for this issue to a fixed version. It used to fail because it couldn't deserialize the old plan for the trigger. Fix it by not trying to deserialize the plan before invalidating it. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1132546 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java", "hunks": [ { "added": [ " * The returned descriptors don't contain the compiled statement, so it", " * it safe to call this method during upgrade when it isn't known if the", " * saved statement can still be deserialized with the new version." ], "header": "@@ -4580,6 +4580,9 @@ public final class\tDataDictionaryImpl", "removed": [] }, { "added": [ " // DERBY-3870: The compiled plan may not be possible to deserialize", " // during upgrade. Skip the column that contains the compiled plan to", " // prevent deserialization errors when reading the rows. We don't care", " // about the value in that column, since this method is only called", " // when we want to drop or invalidate rows in SYSSTATEMENTS.", " FormatableBitSet cols = new FormatableBitSet(", " ti.getCatalogRowFactory().getHeapColumnCount());", " for (int i = 0; i < cols.size(); i++) {", " if (i + 1 == SYSSTATEMENTSRowFactory.SYSSTATEMENTS_CONSTANTSTATE) {", " cols.clear(i);", " } else {", " cols.set(i);", " }", " }", "", " cols," ], "header": "@@ -4592,7 +4595,23 @@ public final class\tDataDictionaryImpl", "removed": [] }, { "added": [ " null," ], "header": "@@ -4647,6 +4666,7 @@ public final class\tDataDictionaryImpl", "removed": [] }, { "added": [ "\t\tgetDescriptorViaHeap(null, scanQualifier, ti, null, cdl);" ], "header": "@@ -7039,10 +7059,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tgetDescriptorViaHeap(scanQualifier,", "\t\t\t\t\t\t\t\t ti,", "\t\t\t\t\t\t\t\t null,", "\t\t\t\t\t\t\t\t cdl);" ] }, { "added": [ " * @param columns which columns to fetch from the system", " * table, or null to fetch all columns" ], "header": "@@ -9475,6 +9492,8 @@ public final class\tDataDictionaryImpl", "removed": [] }, { "added": [ " FormatableBitSet columns," ], "header": "@@ -9486,6 +9505,7 @@ public final class\tDataDictionaryImpl", "removed": [] } ] } ]
derby-DERBY-3870-d09782a3
DERBY-3870: Concurrent Inserts of rows with XML data results in an exception Remove the unneeded indirection via SqlXmlExecutor. This reduces the amount of code and removes one object allocation per row when using an XML operator. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1127883 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/XML.java", "hunks": [ { "added": [ " * @param stringValue The string value to check." ], "header": "@@ -598,7 +598,7 @@ public class XML", "removed": [ " * @param text The string value to check." ] }, { "added": [ " public XMLDataValue XMLParse(", " StringDataValue stringValue,", " boolean preserveWS,", " SqlXmlUtil sqlxUtil)", " throws StandardException", " if (stringValue.isNull()) {", " setToNull();", " return this;", " }", "", " String text = stringValue.getString();" ], "header": "@@ -609,9 +609,18 @@ public class XML", "removed": [ " public XMLDataValue XMLParse(String text, boolean preserveWS,", " SqlXmlUtil sqlxUtil) throws StandardException" ] }, { "added": [ " * @param result The result of a previous call to this method; null", " * if not called yet." ], "header": "@@ -834,10 +843,10 @@ public class XML", "removed": [ " * @param result The result of a previous call to this method; null", " * if not called yet." ] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/XMLDataValue.java", "hunks": [ { "added": [ " * @param stringValue The string value to check." ], "header": "@@ -30,7 +30,7 @@ public interface XMLDataValue extends DataValueDescriptor", "removed": [ " * @param text The string value to check." ] }, { "added": [ " public XMLDataValue XMLParse(", " StringDataValue stringValue,", " boolean preserveWS,", " SqlXmlUtil sqlxUtil)", " throws StandardException;" ], "header": "@@ -41,8 +41,11 @@ public interface XMLDataValue extends DataValueDescriptor", "removed": [ "\tpublic XMLDataValue XMLParse(String text, boolean preserveWS,", "\t\tSqlXmlUtil sqlxUtil) throws StandardException;" ] }, { "added": [ " * @param result The result of a previous call to this method; null", " * if not called yet." ], "header": "@@ -90,10 +93,10 @@ public interface XMLDataValue extends DataValueDescriptor", "removed": [ " * @param result The result of a previous call to this method; null", " * if not called yet." ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/BinaryOperatorNode.java", "hunks": [ { "added": [ " // The number of arguments to pass to the method that implements the", " // operator, depends on the type of the operator.", " int numArgs;", "" ], "header": "@@ -483,30 +483,15 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [ "\t\tif (xmlGen) {", "\t\t// We create an execution-time object so that we can retrieve", "\t\t// saved objects (esp. our compiled query expression) from", "\t\t// the activation. We do this for two reasons: 1) this level", "\t\t// of indirection allows us to separate the XML data type", "\t\t// from the required XML implementation classes (esp. JAXP", "\t\t// and Xalan classes)--for more on how this works, see the", "\t\t// comments in SqlXmlUtil.java; and 2) we can take", "\t\t// the XML query expression, which we've already compiled,", "\t\t// and pass it to the execution-time object for each row,", "\t\t// which means that we only have to compile the query", "\t\t// expression once per SQL statement (instead of once per", "\t\t// row); see SqlXmlExecutor.java for more.", "\t\t\tmb.pushNewStart(", "\t\t\t\t\"org.apache.derby.impl.sql.execute.SqlXmlExecutor\");", " pushSqlXmlUtil(acb, mb, xmlQuery, operator);", " mb.pushNewComplete(1);", "\t\t}", "" ] }, { "added": [ "", " // We've pushed two arguments", " numArgs = 2;" ], "header": "@@ -544,6 +529,9 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [] }, { "added": [ "\t\t\t** <right expression>.method(sqlXmlUtil)", " if (xmlGen) {", " // Push one argument (the SqlXmlUtil instance)", " numArgs = 1;", " pushSqlXmlUtil(acb, mb, xmlQuery, operator);", " // stack: right,sqlXmlUtil", " } else {", " // Push two arguments (left, right)", " numArgs = 2;", "", " leftOperand.generateExpression(acb, mb);", " mb.cast(leftInterfaceType); // second arg with cast", " // stack: right,right,left", "", " mb.swap();", " // stack: right,left,right", " }" ], "header": "@@ -567,28 +555,33 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [ "\t\t\t** SqlXmlExecutor.method(left, right)\"", "\t\t\t**", "\t\t\t** and we've already pushed the SqlXmlExecutor object to", "\t\t\t** the stack.", "\t\t\tif (!xmlGen) {", "\t\t\t}", "\t\t\t", "\t\t\tleftOperand.generateExpression(acb, mb);", "\t\t\tmb.cast(leftInterfaceType); // second arg with cast", "\t\t\t// stack: right,right,left", "\t\t\tmb.swap();", "\t\t\t// stack: right,left,right\t\t\t" ] }, { "added": [ " // Boolean return types don't need a result field. For other types,", " // allocate an object for re-use to hold the result of the operator.", " LocalField resultField = getTypeId().isBooleanTypeId() ?", " null : acb.newFieldDeclaration(Modifier.PRIVATE, resultTypeName);", " // Push the result field onto the stack, if there is a result field.", "\t\tif (resultField != null) {" ], "header": "@@ -596,15 +589,13 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [ "\t\t// Boolean return types don't need a result field", "\t\tboolean needField = !getTypeId().isBooleanTypeId();", "", "\t\tif (needField) {", "", "\t\t\t/* Allocate an object for re-use to hold the result of the operator */", "\t\t\tLocalField resultField =", "\t\t\t\tacb.newFieldDeclaration(Modifier.PRIVATE, resultTypeName);" ] }, { "added": [ " // Adjust number of arguments for the result field", " numArgs++;", "" ], "header": "@@ -613,6 +604,9 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [] }, { "added": [ " numArgs++;", " }", "", " mb.callMethod(VMOpcode.INVOKEINTERFACE, receiverType,", " methodName, resultTypeName, numArgs);", " // Store the result of the method call, if there is a result field.", " if (resultField != null) {" ], "header": "@@ -623,17 +617,15 @@ public class BinaryOperatorNode extends OperatorNode", "removed": [ "\t\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, receiverType, methodName, resultTypeName, 4);", "\t\t\t}", "\t\t\telse if (xmlGen) {", "\t\t\t// This is for an XMLQUERY operation, so invoke the method", "\t\t\t// on our execution-time object.", "\t\t\t\tmb.callMethod(VMOpcode.INVOKEVIRTUAL, null,", "\t\t\t\t\tmethodName, resultTypeName, 3);", "\t\t\telse", "\t\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, receiverType, methodName, resultTypeName, 3);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java", "hunks": [ { "added": [], "header": "@@ -607,26 +607,6 @@ public class UnaryOperatorNode extends OperatorNode", "removed": [ "\t\t// For XML operator we do some extra work.", "\t\tboolean xmlGen = (operatorType == XMLPARSE_OP) ||", "\t\t\t(operatorType == XMLSERIALIZE_OP);", "", "\t\tif (xmlGen) {", "\t\t// We create an execution-time object from which we call", "\t\t// the necessary methods. We do this for two reasons: 1) this", "\t\t// level of indirection allows us to separate the XML data type", "\t\t// from the required XML implementation classes (esp. JAXP and", "\t\t// Xalan classes)--for more on how this works, see the comments", "\t\t// in SqlXmlUtil.java; and 2) this allows us to create the", "\t\t// required XML objects a single time (which we did at bind time", "\t\t// when we created a new SqlXmlUtil) and then reuse those objects", "\t\t// for each row in the target result set, instead of creating", "\t\t// new objects every time; see SqlXmlUtil.java for more.", "\t\t\tmb.pushNewStart(", "\t\t\t\t\"org.apache.derby.impl.sql.execute.SqlXmlExecutor\");", "\t\t\tmb.pushNewComplete(addXmlOpMethodParams(acb, mb));", "\t\t}", "" ] }, { "added": [ " int numArgs = 1;", "", " // XML operators take extra arguments.", " numArgs += addXmlOpMethodParams(acb, mb, field);", "", " mb.callMethod(VMOpcode.INVOKEINTERFACE, null,", " methodName, resultTypeName, numArgs);" ], "header": "@@ -647,25 +627,13 @@ public class UnaryOperatorNode extends OperatorNode", "removed": [ "\t\t\t/* If we're calling a method on a class (SqlXmlExecutor) instead", "\t\t\t * of calling a method on the operand interface, then we invoke", "\t\t\t * VIRTUAL; we then have 2 args (the operand and the local field)", "\t\t\t * instead of one, i.e:", "\t\t\t *", "\t\t\t * SqlXmlExecutor.method(operand, field)", "\t\t\t *", "\t\t\t * instead of", "\t\t\t *", "\t\t\t * <operand>.method(field).", "\t\t\t */", "\t\t\tif (xmlGen) {", "\t\t\t\tmb.callMethod(VMOpcode.INVOKEVIRTUAL, null,", "\t\t\t\t\tmethodName, resultTypeName, 2);", "\t\t\t}", "\t\t\telse {", "\t\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE,", "\t\t\t\t\t(String) null, methodName, resultTypeName, 1);", "\t\t\t}" ] }, { "added": [ " *", " * @param acb the builder for the class in which the method lives", " * @param resultField the field that contains the previous result", "\t\tMethodBuilder mb, LocalField resultField) throws StandardException" ], "header": "@@ -746,11 +714,14 @@ public class UnaryOperatorNode extends OperatorNode", "removed": [ "\t\tMethodBuilder mb) throws StandardException" ] } ] } ]
derby-DERBY-3870-ed7f8b90
DERBY-6634: Improve test coverage of SqlXmlUtil.java Remove dead code from the time when SqlXmlUtil implemented the Formatable interface (before DERBY-3870). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1605285 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SqlXmlUtil.java", "hunks": [ { "added": [], "header": "@@ -128,12 +128,6 @@ public class SqlXmlUtil", "removed": [ " // Used to recompile the XPath expression when this formatable", " // object is reconstructed. e.g.: SPS ", " private String queryExpr;", " private String opName;", " private boolean recompileQuery;", "" ] }, { "added": [], "header": "@@ -266,10 +260,6 @@ public class SqlXmlUtil", "removed": [ " this.queryExpr = queryExpr;", " this.opName = opName;", " this.recompileQuery = false;", "" ] }, { "added": [], "header": "@@ -552,12 +542,6 @@ public class SqlXmlUtil", "removed": [ " // if this object is in an SPS, we need to recompile the query", " if (recompileQuery)", " {", " \tcompileXQExpr(queryExpr, opName);", " }", "" ] } ] } ]
derby-DERBY-3871-0c5c5aa1
DERBY-3871: EmbedBlob.setBytes returns incorrect insertion count. Made EmbedBlob.setBytes return the number of bytes inserted, instead of returning the Blob position after the insert. Improved some JavaDoc comments. Added regression tests. Patch file: derby-3871-1a_insertion_count.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@701372 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedBlob.java", "hunks": [ { "added": [ " * Writes the given array of bytes to the BLOB value that this Blob object", " * represents, starting at position pos, and returns the number of bytes", " * written.", " *", " * @param pos the position in the BLOB object at which to start writing", " * @param bytes the array of bytes to be written to the BLOB value that this", " * Blob object represents", " * @return The number of bytes written to the BLOB.", " * @throws SQLException if writing the bytes to the BLOB fails", " * @since 1.4", "\t */", " /**", " * Writes all or part of the given array of byte array to the BLOB value", " * that this Blob object represents and returns the number of bytes written.", " * Writing starts at position pos in the BLOB value; len bytes from the", " * given byte array are written.", " *", " * @param pos the position in the BLOB object at which to start writing", " * @param bytes the array of bytes to be written to the BLOB value that this", " * Blob object represents", " * @param offset the offset into the byte array at which to start reading", " * the bytes to be written", " * @param len the number of bytes to be written to the BLOB value from the", " * array of bytes bytes", " * @return The number of bytes written to the BLOB.", " * @throws SQLException if writing the bytes to the BLOB fails", " * @throws IndexOutOfBoundsException if {@code len} is larger than", " * {@code bytes.length - offset}", " * @since 1.4", "\t */" ], "header": "@@ -793,39 +793,40 @@ final class EmbedBlob extends ConnectionChild implements Blob, EngineLOB", "removed": [ " * JDBC 3.0", " *", " * Writes the given array of bytes to the BLOB value that this Blob object", " * represents, starting at position pos, and returns the number of bytes written.", " *", " * @param pos - the position in the BLOB object at which to start writing", " * @param bytes - the array of bytes to be written to the BLOB value that 
this", " * Blob object represents", " * @return the number of bytes written", " * @exception SQLException Feature not implemented for now.", "\t*/", " /**", " * JDBC 3.0", " *", " * Writes all or part of the given array of byte array to the BLOB value that", " * this Blob object represents and returns the number of bytes written.", " * Writing starts at position pos in the BLOB value; len bytes from the given", " * byte array are written.", " *", " * @param pos - the position in the BLOB object at which to start writing", " * @param bytes - the array of bytes to be written to the BLOB value that this", " * Blob object represents", " * @param offset - the offset into the array bytes at which to start reading", " * the bytes to be set", " * @param len - the number of bytes to be written to the BLOB value from the", " * array of bytes bytes", " * @return the number of bytes written", " * @exception SQLException Feature not implemented for now.", "\t*/" ] } ] } ]
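The contract this commit restores — `setBytes` returns the number of bytes written, not a position — can be modeled without a database. `ByteBlob` below is a toy in-memory stand-in for `EmbedBlob`, not Derby code; only the return-value semantics mirror the fix.

```java
import java.util.Arrays;

// Model of the contract fixed by DERBY-3871: setBytes(pos, bytes)
// must return the number of bytes written to the BLOB.
public class SetBytesContract {
    static final class ByteBlob {
        byte[] data = new byte[0];

        /** Writes bytes at 1-based pos; returns the count written. */
        int setBytes(long pos, byte[] bytes) {
            int end = (int) (pos - 1) + bytes.length;
            if (end > data.length) {
                data = Arrays.copyOf(data, end);   // grow to fit the write
            }
            System.arraycopy(bytes, 0, data, (int) (pos - 1), bytes.length);
            return bytes.length;   // insertion count, not an end position
        }
    }

    public static void main(String[] args) {
        ByteBlob b = new ByteBlob();
        System.out.println(b.setBytes(1, new byte[] {1, 2, 3}));  // prints "3"
        System.out.println(b.setBytes(3, new byte[] {9, 9}));     // prints "2"
        System.out.println(b.data.length);                        // prints "4"
    }
}
```

The regression tests added by the patch verify the same thing against the real `EmbedBlob`: the value returned equals the length of the slice written, independent of `pos`.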
derby-DERBY-3872-f36770cc
DERBY-3872 The NPE in this jira entry was caused by the missing override of the accept() method in IndexToBaseRowNode. Because of the missing code, the additional layer of VirtualColumn node over ResultColumn was not being added for the where clause in HAVING. Once the accept method was added to IndexToBaseRowNode, the VirtualColumn on top of the ResultColumn got the correct resultset number associated with it, and at code generation time we reference the correct resultset rather than the one associated with the JOIN clause. Thanks a ton to Army and Bryan on this jira entry for their help. I have added a test case for this in lang/GroupByTest.java git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@705037 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/IndexToBaseRowNode.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.compile.Visitable;", "import org.apache.derby.iapi.sql.compile.Visitor;" ], "header": "@@ -27,6 +27,8 @@ import org.apache.derby.iapi.sql.compile.AccessPath;", "removed": [] } ] } ]
derby-DERBY-3875-c33b0cfc
DERBY-3875; closing containers after a problem has been found allows restoreFrom to work correctly. Patch contributions by Jason McLaurin, Kristian Waagan and Myrna van Lunteren. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@703246 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/Utilities.java", "hunks": [ { "added": [ "import java.util.StringTokenizer;" ], "header": "@@ -33,6 +33,7 @@ import java.security.PrivilegedExceptionAction;", "removed": [] }, { "added": [ " ", " /**", " * Splits a string around matches of the given delimiter character.", " * Copied from org.apache.derby.iapi.util.StringUtil", " *", " * Where applicable, this method can be used as a substitute for", " * <code>String.split(String regex)</code>, which is not available", " * on a JSR169/Java ME platform.", " *", " * @param str the string to be split", " * @param delim the delimiter", " * @throws NullPointerException if str is null", " */", " static public String[] split(String str, char delim)", " {", " if (str == null) {", " throw new NullPointerException(\"str can't be null\");", " }", "", " // Note the javadoc on StringTokenizer:", " // StringTokenizer is a legacy class that is retained for", " // compatibility reasons although its use is discouraged in", " // new code.", " // In other words, if StringTokenizer is ever removed from the JDK,", " // we need to have a look at String.split() (or java.util.regex)", " // if it is supported on a JSR169/Java ME platform by then.", " StringTokenizer st = new StringTokenizer(str, String.valueOf(delim));", " int n = st.countTokens();", " String[] s = new String[n];", " for (int i = 0; i < n; i++) {", " s[i] = st.nextToken();", " }", " return s;", " }" ], "header": "@@ -195,5 +196,39 @@ public class Utilities {", "removed": [] } ] } ]
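The `split` helper added by this patch (copied from `org.apache.derby.iapi.util.StringUtil`) tokenizes with `StringTokenizer` so it also works on JSR169/Java ME platforms, where `String.split(String regex)` is unavailable. A self-contained copy of the same logic:

```java
import java.util.StringTokenizer;

// Regex-free split around a single delimiter character, usable where
// String.split(String) is not (JSR169/Java ME). Note that, like the
// original, StringTokenizer skips empty tokens.
public class SplitSketch {
    static String[] split(String str, char delim) {
        if (str == null) {
            throw new NullPointerException("str can't be null");
        }
        StringTokenizer st = new StringTokenizer(str, String.valueOf(delim));
        String[] s = new String[st.countTokens()];
        for (int i = 0; i < s.length; i++) {
            s[i] = st.nextToken();
        }
        return s;
    }

    public static void main(String[] args) {
        String[] parts = split("a:b:c", ':');
        System.out.println(parts.length);   // prints "3"
        System.out.println(parts[1]);       // prints "b"
    }
}
```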
derby-DERBY-3878-42526fd1
DERBY-3878: Replication: stopSlave does not close serversocket when master has crashed Use try/finally to ensure that the sockets are closed when closing the socket streams fails. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@707591 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/replication/net/SocketConnection.java", "hunks": [ { "added": [ " // If the other party has crashed, closing the streams may fail (at", " // least the output stream since its close() method calls flush()).", " // In any case, we want the socket to be closed, so close it in a", " // finally clause. DERBY-3878", " try {", " objInputStream.close();", " objOutputStream.close();", " } finally {", " socket.close();", " }" ], "header": "@@ -110,8 +110,15 @@ public class SocketConnection {", "removed": [ " objInputStream.close();", " objOutputStream.close();", " socket.close();" ] } ] } ]
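The pattern of the fix can be sketched outside Derby. `Resource` below is a hypothetical stand-in for the object streams and the socket; the point is that even when closing a stream throws (e.g. `ObjectOutputStream.close()` flushes to a crashed peer), the `finally` clause still closes the socket.

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch of the DERBY-3878 try/finally pattern for socket teardown.
public class TearDownSketch {
    static final class Resource implements Closeable {
        final boolean failOnClose;
        boolean closed;
        Resource(boolean failOnClose) { this.failOnClose = failOnClose; }
        public void close() throws IOException {
            closed = true;
            if (failOnClose) throw new IOException("peer is gone");
        }
    }

    /** Returns true if the "socket" got closed despite a stream failure. */
    static boolean tearDown(Resource in, Resource out, Resource socket) {
        try {
            try {
                in.close();
                out.close();
            } finally {
                socket.close();   // always runs: this is the fix
            }
        } catch (IOException e) {
            // a stream close failed, but the socket is already closed
        }
        return socket.closed;
    }

    public static void main(String[] args) {
        Resource in = new Resource(false);
        Resource out = new Resource(true);    // simulate a crashed master
        Resource socket = new Resource(false);
        System.out.println(tearDown(in, out, socket));   // prints "true"
    }
}
```

Without the inner `finally`, the exception from `out.close()` would skip `socket.close()` and leak the server socket — exactly the reported stopSlave symptom.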
derby-DERBY-388-851bcbb2
Port fix for DERBY-388 into 10.1 branch. Address intermittent failures when executing trigger statements caused by references to internal SQL formats. Fix originally submitted to 10.0 branch by Army Brown (qozinx@sbcglobal.net) git-svn-id: https://svn.apache.org/repos/asf/incubator/derby/code/trunk@219115 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java", "hunks": [ { "added": [ "\t\t\tif (!spsAction) {", "\t\t\t// only re-prepare if this isn't an SPS for a trigger-action;", "\t\t\t// if it _is_ an SPS for a trigger action, then we can't just", "\t\t\t// do a regular prepare because the statement might contain", "\t\t\t// internal SQL that isn't allowed in other statements (such as a", "\t\t\t// static method call to get the trigger context for retrieval", "\t\t\t// of \"new row\" or \"old row\" values). So in that case we", "\t\t\t// skip the call to 'rePrepare' and if the statement is out", "\t\t\t// of date, we'll get a NEEDS_COMPILE exception when we try", "\t\t\t// to execute. That exception will be caught by the executeSPS()", "\t\t\t// method of the GenericTriggerExecutor class, and at that time", "\t\t\t// the SPS action will be recompiled correctly.", "\t\t\t\trePrepare(lccToUse);", "\t\t\t}" ], "header": "@@ -353,7 +353,20 @@ recompileOutOfDatePlan:", "removed": [ "\t\t\trePrepare(lccToUse);" ] } ] } ]
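The control flow this commit introduces — eager recompile for ordinary statements, lazy recompile-on-exception for trigger-action SPSs — can be sketched as follows. All class and method names here are illustrative stand-ins for `GenericPreparedStatement` and `GenericTriggerExecutor.executeSPS()`, not Derby's actual APIs.

```java
// Sketch of the DERBY-388 flow: skip the eager rePrepare for a
// trigger-action SPS and instead recompile when execution signals
// that the plan is out of date.
public class LazyRecompileSketch {
    static final class NeedsCompileException extends RuntimeException {}

    static final class Plan {
        boolean upToDate;
        int compiles;
        void rePrepare() { upToDate = true; compiles++; }
        String execute() {
            if (!upToDate) throw new NeedsCompileException();
            return "rows";
        }
    }

    static String run(Plan p, boolean spsAction) {
        if (!spsAction) {
            p.rePrepare();   // ordinary statements recompile eagerly
        }
        try {
            return p.execute();
        } catch (NeedsCompileException e) {
            if (!spsAction) throw e;
            // like the trigger executor: recompile the SPS and retry
            p.rePrepare();
            return p.execute();
        }
    }

    public static void main(String[] args) {
        Plan stale = new Plan();                  // out-of-date trigger plan
        System.out.println(run(stale, true));     // prints "rows"
        System.out.println(stale.compiles);       // prints "1"
    }
}
```

Deferring the recompile matters because, per the commit, a regular prepare would reject the internal SQL a trigger action may contain; only the SPS-aware recompile path handles it.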
derby-DERBY-388-8f235488
DERBY-388: Add a test case to cover this patch submitted earlier. Submitted by Army Brown (qozinx@sbcglobal.net) git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@226896 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3880-2e7e8f6d
DERBY-4698 Simple query with HAVING clause crashes with NullPointerException Patch derby-4698-2. The case of column references in HAVING clauses being wrong after JOIN flattening was initially solved by DERBY-3880. That solution was partial in that the fix-up could sometimes happen too late. This patch changes the fix-up of column references in a HAVING clause after join flattening to the same point in time as that of the other column references that need fix-up after the flattening (the rcl, and column references in join predicates and GROUP BY clauses). Thus the fix-up is moved from the modifyaccesspath phase to the preprocess phase. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@956234 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/FromList.java", "hunks": [ { "added": [ " * @param havingClause The HAVING clause, if any", " GroupByList gbl,", " ValueNode havingClause)" ], "header": "@@ -708,13 +708,15 @@ public class FromList extends QueryTreeNodeVector implements OptimizableList", "removed": [ "\t\t\t\t\t\t\t\t GroupByList gbl)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/FromSubquery.java", "hunks": [ { "added": [ " * @param havingClause The HAVING clause, if any" ], "header": "@@ -483,6 +483,7 @@ public class FromSubquery extends FromTable", "removed": [] }, { "added": [ " GroupByList gbl,", " ValueNode havingClause)" ], "header": "@@ -491,7 +492,8 @@ public class FromSubquery extends FromTable", "removed": [ "\t\t\t\t\t\t\tGroupByList gbl)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/FromTable.java", "hunks": [ { "added": [ " * @param havingClause The HAVING clause, if any" ], "header": "@@ -1420,6 +1420,7 @@ abstract class FromTable extends ResultSetNode implements Optimizable", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/JoinNode.java", "hunks": [ { "added": [ " * @param havingClause The HAVING clause, if any" ], "header": "@@ -1414,6 +1414,7 @@ public class JoinNode extends TableOperatorNode", "removed": [] }, { "added": [ " GroupByList gbl,", " ValueNode havingClause)" ], "header": "@@ -1422,7 +1423,8 @@ public class JoinNode extends TableOperatorNode", "removed": [ "\t\t\t\t\t\t\tGroupByList gbl)" ] } ] } ]
derby-DERBY-3880-8d13a1f3
DERBY-3880 NPE on a query with having clause involving a join remap expression for AggregateNode operand if the JoinNode has been flattened. Fix contributed by Army Brown git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@711321 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3882-4880a417
DERBY-3882: Expensive cursor name lookup in network server Statements executed via the network client are always given cursor names on the server, even if the user has not set a cursor name. When a statement has a cursor name, the embedded driver will check on each execution that there is no other statement in the same connection with the same cursor name and an open result set. To perform this check, the list of activations in the connection is traversed and each cursor name is compared with the cursor name of the statement to be executed. If the number of open statements in the connection is high, which is very likely if ClientConnectionPoolDataSource with the JDBC statement cache is used, traversing the list of activations and performing string comparisons may become expensive. This patch attempts to make this operation cheaper without performing a full rewrite and use a new data structure. It exploits that the most common implementations of java.lang.String cache the hash code, so calling hashCode() on the cursor names in the list will simply read the value from an int field after warm-up. By checking if the hash codes of the cursor names match first, we can avoid many of the string comparisons because we know that strings with different hash codes cannot be equal. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@818807 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/conn/GenericLanguageConnectionContext.java", "hunks": [ { "added": [ " int cursorHash = cursorName.hashCode();", "" ], "header": "@@ -876,6 +876,8 @@ public class GenericLanguageConnectionContext", "removed": [] }, { "added": [ " // If the executing cursor has no name, or if the hash code of", " // its name is different from the one we're looking for, it", " // can't possibly match. Since java.lang.String caches the", " // hash code (at least in the most common implementations),", " // checking the hash code is cheaper than comparing the names", " // with java.lang.String.equals(), especially if there are many", " // open statements associated with the connection. See", " // DERBY-3882. Note that we can only use the hash codes to", " // determine that the names don't match. Even if the hash codes", " // are equal, we still need to call equals() to verify that the", " // two names actually are equal.", " if (executingCursorName == null ||", " executingCursorName.hashCode() != cursorHash) {", " continue;", " }", "" ], "header": "@@ -888,6 +890,22 @@ public class GenericLanguageConnectionContext", "removed": [] } ] } ]
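The optimization can be demonstrated outside Derby. The list of cursor names below is an illustrative stand-in for the connection's activation list; the essential trick is that `java.lang.String` caches its hash code, so `hashCode()` is a cheap pre-filter and `equals()` only runs on candidates that could actually match.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the DERBY-3882 scan: compare cached hash codes first,
// then confirm with equals() only when the hashes agree.
public class CursorScanSketch {
    static int matches(List<String> cursorNames, String wanted) {
        int wantedHash = wanted.hashCode();
        int hits = 0;
        for (String name : cursorNames) {
            // Different hash codes => the names cannot be equal.
            if (name == null || name.hashCode() != wantedHash) {
                continue;
            }
            // Equal hash codes only *permit* equality; verify it.
            if (name.equals(wanted)) {
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> open = Arrays.asList("SQLCUR1", "SQLCUR2", null, "SQLCUR2");
        System.out.println(matches(open, "SQLCUR2"));   // prints "2"
    }
}
```

As the commit notes, the hash check is only a negative filter: equal hashes never prove equality, so the `equals()` call cannot be dropped.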
derby-DERBY-3884-ebad814e
DERBY-3877 SQL roles: build support for dblook Patch derby-3877-2, which adds basic support for roles in dblook (but see DERBY-3884), adds test cases, and also enables the old test harness to shutdown the server using credentials when required due to authentication being enabled (needed by modified test for dblook). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@700295 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/functionTests/harness/NetServer.java", "hunks": [ { "added": [ "\tString appsRequiredPassword;" ], "header": "@@ -43,6 +43,7 @@ public class NetServer", "removed": [] }, { "added": [ "" ], "header": "@@ -89,6 +90,7 @@ public class NetServer", "removed": [] }, { "added": [ " public NetServer(File homeDir, String jvmName, String clPath,", "\t\t\t\t\t String javaCmd, String jvmflags, String framework,", "\t\t\t\t\t boolean startServer, String appsRequiredPassword)" ], "header": "@@ -120,8 +122,9 @@ public class NetServer", "removed": [ " public NetServer(File homeDir, String jvmName, String clPath, String ", " \t javaCmd, String jvmflags, String framework, boolean startServer)" ] }, { "added": [ "", "\t // if authentication is required to shutdown server we need password", "\t // for user APP (the dbo).", " \tthis.appsRequiredPassword = appsRequiredPassword;" ], "header": "@@ -130,6 +133,10 @@ public class NetServer", "removed": [] }, { "added": [ "", "\t\tif (appsRequiredPassword != null) {", "\t\t\tString[] modifiedStopCmd = new String[stopcmd1.length + 4];", "\t\t\tSystem.arraycopy(stopcmd1, 0, modifiedStopCmd, 0, stopcmd1.length);", "\t\t\tmodifiedStopCmd[stopcmd1.length] = \"-user\";", "\t\t\tmodifiedStopCmd[stopcmd1.length + 1] = \"app\";", "\t\t\tmodifiedStopCmd[stopcmd1.length + 2] = \"-password\";", "\t\t\tmodifiedStopCmd[stopcmd1.length + 3] = appsRequiredPassword;", "\t\t\tstopcmd1 = modifiedStopCmd;", "\t\t}", "", "" ], "header": "@@ -285,7 +292,18 @@ public class NetServer", "removed": [ "\t\t" ] } ] }, { "file": "java/tools/org/apache/derby/impl/tools/dblook/DB_GrantRevoke.java", "hunks": [ { "added": [ " Derby - Class org.apache.derby.impl.tools.dblook.DB_GrantRevoke" ], "header": "@@ -1,6 +1,6 @@", "removed": [ " Derby - Class org.apache.derby.impl.tools.dblook.DB_Alias" ] }, { "added": [ "\t\t\tString authName = dblook.addQuotes", "\t\t\t\t(dblook.expandDoubleQuotes(rs.getString(1)));", "\t\t\tString 
schemaName = dblook.addQuotes", "\t\t\t\t(dblook.expandDoubleQuotes(rs.getString(2)));", "\t\t\tString tableName = dblook.addQuotes", "\t\t\t\t(dblook.expandDoubleQuotes(rs.getString(3)));" ], "header": "@@ -83,9 +83,12 @@ public class DB_GrantRevoke {", "removed": [ "\t\t\tString authName = rs.getString(1);", "\t\t\tString schemaName = dblook.addQuotes(dblook.expandDoubleQuotes(rs.getString(2)));", "\t\t\tString tableName = dblook.addQuotes(dblook.expandDoubleQuotes(rs.getString(3)));" ] }, { "added": [ "\t\t\tString authName = dblook.addQuotes", "\t\t\t\t(dblook.expandDoubleQuotes(rs.getString(1)));" ], "header": "@@ -175,7 +178,8 @@ public class DB_GrantRevoke {", "removed": [ "\t\t\tString authName = rs.getString(1);" ] } ] }, { "file": "java/tools/org/apache/derby/tools/dblook.java", "hunks": [ { "added": [ "import org.apache.derby.impl.tools.dblook.DB_Roles;" ], "header": "@@ -48,6 +48,7 @@ import org.apache.derby.impl.tools.dblook.DB_Schema;", "removed": [] }, { "added": [ "\t\t\tDB_Roles.doRoles(this.conn);" ], "header": "@@ -539,6 +540,7 @@ public final class dblook {", "removed": [] } ] } ]
derby-DERBY-3886-7cbf0216
DERBY-3886 SQL roles: ij show enabled and settable roles Adds a VTI table function SYSCS_DIAG.ENABLED_ROLES and two new ij commands; "show settable_roles" and "show enabled_roles", plus new tests. Also tweaks ScriptTestCase to be able to run scripts that need SQL authorization. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@702266 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3887-7f4445db
DERBY-3887 Embedded Derby fails under JBoss because of JMX-related conflicts Backed out JMX related changes for DERBY-3745. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@784831 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/services/jmx/JMXManagementService.java", "hunks": [ { "added": [], "header": "@@ -163,43 +163,7 @@ public final class JMXManagementService implements ManagementService, ModuleCont", "removed": [ " //DERBY-3745 We want to avoid the timer leaking class loaders, so we make", " // sure the context class loader is null before we start the MBean", " // server which will create threads that we want to have a null context", " // class loader", " boolean hasGetClassLoaderPerms=false;", " ClassLoader savecl = null;", " try {", " savecl = (ClassLoader)AccessController.doPrivileged(", " new PrivilegedAction<ClassLoader>() {", " public ClassLoader run() {", " return Thread.currentThread().getContextClassLoader();", " }", " });", " hasGetClassLoaderPerms = true;", " } catch (SecurityException se) {", " // ignore security exception. Earlier versions of Derby, before the ", " // DERBY-3745 fix did not require getClassloader permissions.", " // We may leak class loaders if we are not able to get this, but ", " // cannot just fail. ", " }", " if (hasGetClassLoaderPerms)", " try {", " AccessController.doPrivileged(", " new PrivilegedAction<Object>() {", " public Object run() {", " Thread.", " currentThread().setContextClassLoader(null);", " return null;", " }", " });", " } catch (SecurityException se1) {", " // ignore security exception. 
Earlier versions of Derby, before the ", " // DERBY-3745 fix did not require setContextClassloader permissions.", " // We may leak class loaders if we are not able to set this, but ", " // cannot just fail.", " }" ] }, { "added": [], "header": "@@ -216,22 +180,6 @@ public final class JMXManagementService implements ManagementService, ModuleCont", "removed": [ " if (hasGetClassLoaderPerms)", " try {", " final ClassLoader tmpsavecl = savecl;", " AccessController.doPrivileged(", " new PrivilegedAction<Object>() {", " public Object run() {", " Thread.currentThread().setContextClassLoader(tmpsavecl);", " return null;", " }", " });", " } catch (SecurityException se) {", " // ignore security exception. Earlier versions of Derby, before the ", " // DERBY-3745 fix did not require setContextClassloader permissions.", " // We may leak class loaders if we are not able to set this, but ", " // cannot just fail.", " }" ] } ] } ]
derby-DERBY-3888-b5392dbe
DERBY-3888: ALTER TABLE ... ADD COLUMN cannot add identity columns Enable support for adding identity columns with ALTER TABLE. It is only enabled if identity columns are backed by sequences. That is, the database format has to be 10.11 or higher. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1626141 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java", "hunks": [ { "added": [], "header": "@@ -24,7 +24,6 @@ package org.apache.derby.impl.sql.execute;", "removed": [ "import org.apache.derby.catalog.DefaultInfo;" ] }, { "added": [], "header": "@@ -82,7 +81,6 @@ import org.apache.derby.iapi.types.DataTypeDescriptor;", "removed": [ "import org.apache.derby.iapi.util.StringUtil;" ] }, { "added": [ " columnInfo[ix].autoincInc,", " columnInfo[ix].autoinc_create_or_modify_Start_Increment" ], "header": "@@ -1227,7 +1225,8 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ " columnInfo[ix].autoincInc" ] }, { "added": [ " if (columnDescriptor.isAutoincrement())", " //", " // Create a sequence generator for the auto-increment column.", " // See DERBY-6542.", " //", " CreateSequenceConstantAction csca =", " CreateTableConstantAction.makeCSCA(", " columnInfo[ ix],", " TableDescriptor.makeSequenceName(td.getUUID()));", " csca.executeConstantAction(activation);", " if (columnDescriptor.isAutoincrement() ||", " columnDescriptor.hasNonNullDefault())" ], "header": "@@ -1236,15 +1235,22 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ " if (SanityManager.DEBUG)", " // support for adding identity columns was removed before Derby", " // was open-sourced", "\t\t\tSanityManager.ASSERT( !columnDescriptor.isAutoincrement(), \"unexpected attempt to add an identity column\" );", "\t\tif (columnDescriptor.hasNonNullDefault())" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/junit/JDBC.java", "hunks": [ { "added": [ " String... expectedColNames) throws SQLException" ], "header": "@@ -830,7 +830,7 @@ public class JDBC {", "removed": [ " String [] expectedColNames) throws SQLException" ] } ] } ]
derby-DERBY-3889-2fb5c8dd
DERBY-3889: LOBStreamControl.truncate() doesn't delete temporary files Created helper method for closing and deleting temporary files to make it less likely that some of the required operations are forgotten. Also moved the lobFile field into the LOBFile class to reduce the risk of getting inconsistencies between the fields tmpFile and lobFile. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704010 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/LOBFile.java", "hunks": [ { "added": [ " /** The temporary file where the contents of the LOB should be stored. */", " private final StorageFile storageFile;", "", " /** An object giving random access to {@link #storageFile}. */", "" ], "header": "@@ -32,7 +32,12 @@ import org.apache.derby.io.StorageRandomAccessFile;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/LOBStreamControl.java", "hunks": [ { "added": [], "header": "@@ -55,7 +55,6 @@ import org.apache.derby.shared.common.error.ExceptionUtil;", "removed": [ " private StorageFile lobFile;" ] }, { "added": [ " StorageFile lobFile =" ], "header": "@@ -97,7 +96,7 @@ class LOBStreamControl {", "removed": [ " lobFile =" ] }, { "added": [ " releaseTempFile(tmpFile);" ], "header": "@@ -352,8 +351,7 @@ class LOBStreamControl {", "removed": [ " tmpFile.close();", " conn.removeLobFile(tmpFile);" ] }, { "added": [ " releaseTempFile(tmpFile);", "", " /**", " * Close and release all resources held by a temporary file. The file will", " * also be deleted from the file system and removed from the list of", " * {@code LOBFile}s in {@code EmbedConnection}.", " *", " * @param file the temporary file", " * @throws IOException if the file cannot be closed or deleted", " */", " private void releaseTempFile(LOBFile file) throws IOException {", " file.close();", " conn.removeLobFile(file);", " deleteFile(file.getStorageFile());", " }" ], "header": "@@ -436,12 +434,24 @@ class LOBStreamControl {", "removed": [ " tmpFile.close();", " deleteFile(lobFile);", " conn.removeLobFile(tmpFile);" ] }, { "added": [], "header": "@@ -486,7 +496,6 @@ class LOBStreamControl {", "removed": [ " StorageFile oldStoreFile = lobFile;" ] }, { "added": [ " releaseTempFile(oldFile);" ], "header": "@@ -510,9 +519,7 @@ class LOBStreamControl {", "removed": [ " oldFile.close();", " conn.removeLobFile(oldFile);", " deleteFile(oldStoreFile);" ] } ] } ]
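The shape of the `releaseTempFile` refactoring — close, unregister, delete in one helper so no call site can forget a step — can be sketched with hypothetical stand-ins. `TempFile` and the registry set below are toy models, not Derby's `LOBFile`/`EmbedConnection`.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the DERBY-3889 fix: a single release helper bundles all
// three cleanup steps that individual call sites used to repeat
// (and, in truncate(), partially forget).
public class TempFileSketch {
    static final class TempFile {
        boolean closed, deleted;
        void close()  { closed = true; }
        void delete() { deleted = true; }
    }

    private final Set<TempFile> registry = new HashSet<>();

    TempFile create() {
        TempFile f = new TempFile();
        registry.add(f);   // model of conn.addLobFile(...)
        return f;
    }

    /** All cleanup in one place; callers cannot skip a step. */
    void release(TempFile f) {
        f.close();
        registry.remove(f);   // model of conn.removeLobFile(...)
        f.delete();           // the step truncate() was missing
    }

    public static void main(String[] args) {
        TempFileSketch ctl = new TempFileSketch();
        TempFile f = ctl.create();
        ctl.release(f);
        System.out.println(f.closed && f.deleted && ctl.registry.isEmpty()); // prints "true"
    }
}
```

The same motive drove moving the `lobFile` field into `LOBFile`: fewer places where related state can drift apart.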
derby-DERBY-389-c62aaa25
DERBY-389 Fix a hang accessing the statement cache in network server stress test. The change was to remove the synchronization on the statement cache from the removeStatement method of GenericLanguageConnectionContext. It is not needed because the statement cache handles its own synchronization. Also changes the javadoc to remove some incorrect info about temp tables. git-svn-id: https://svn.apache.org/repos/asf/incubator/derby/code/trunk@201792 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/conn/GenericLanguageConnectionContext.java", "hunks": [ { "added": [ "\t* This method will remove a statement from the statement cache.", "\t* It will be called, for example, if there is an exception preparing", "\t* the statement.", "\t*", "\t* @param statement Statement to remove", "\t* @exception StandardException thrown if lookup goes wrong.", "\t*/\t", " " ], "header": "@@ -766,25 +766,22 @@ public class GenericLanguageConnectionContext", "removed": [ "\t * This method will get called if the statement is referencing tables in SESSION schema.", "\t * We do not want to cache such statements because multiple connections can have", "\t * different definition of the same table name and hence compiled plan for one connection", "\t * may not make sense for some other connection. Because of this, remove the statement from the cache", "\t *", "\t * @exception StandardException thrown if lookup goes wrong.", "\t */", "", "\t\tsynchronized (statementCache) {", "", "\t\t}" ] } ] } ]
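The reasoning behind dropping the outer `synchronized (statementCache)` block can be illustrated with a toy cache. `Cache` below is a hypothetical stand-in for Derby's statement cache: because every public method locks internally, callers gain nothing from an extra monitor, and holding one invites lock-ordering deadlocks like the reported hang.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the DERBY-389 principle: a cache that handles its own
// synchronization should be called without external locking.
public class SelfSyncCacheSketch {
    static final class Cache {
        private final Map<String, Object> map = new HashMap<>();
        // All public methods lock internally on the Cache instance.
        synchronized void put(String k, Object v) { map.put(k, v); }
        synchronized Object remove(String k)      { return map.remove(k); }
        synchronized int size()                   { return map.size(); }
    }

    public static void main(String[] args) {
        Cache cache = new Cache();
        cache.put("select * from t", new Object());
        cache.remove("select * from t");   // no outer synchronized block needed
        System.out.println(cache.size());  // prints "0"
    }
}
```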
derby-DERBY-389-df5e1dd0
DERBY-389 Fix nullSQLText.java for J2ME Attaching a patch for the test jdbcapi/nullSQLText.java. This test fails in J2ME because it uses a stored procedure with server-side JDBC. As I understand, the purpose of the test is only to test any stored procedure call. So I replaced the stored procedure in this test with a procedure in org.apache.derbyTesting.functionTests.util.ProcedureTest. Also changed the master files. With this patch, I have run jdbcapi/nullSQLText.java in embedded, client and jcc frameworks. Also run this test in CDC/FP. This patch changes only tests. Please review/commit this patch. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@330705 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3890-9cc8ad0a
DERBY-3890: Replication: NPE for startSlave of encrypted database Removes NPE for replication of encrypted databases by setting RawStoreFactory in LogFactory before calling SlaveFactory#startSlave. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@708510 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java", "hunks": [ { "added": [ " // RawStoreFactory is used by LogFactory.recover() and by", " // SlaveFactory.startSlave (for the SlaveFactory case, it is", " // only used if the database is encrypted)", " logFactory.setRawStoreFactory(this);", "" ], "header": "@@ -314,6 +314,11 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java", "hunks": [ { "added": [ "\t// use this only when in slave mode or after recovery is finished" ], "header": "@@ -409,7 +409,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ "\t\t\t\t\t\t\t\t// use this only after recovery is finished" ] }, { "added": [ "\t/**", "\t\tMake log factory aware of which raw store factory it belongs to", "\t*/", "\tpublic void setRawStoreFactory(RawStoreFactory rsf) {", "\t\trawStoreFactory = rsf;", "\t}", "" ], "header": "@@ -643,6 +643,13 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [] }, { "added": [], "header": "@@ -666,20 +673,17 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ " RawStoreFactory rsf, ", "\t\t\tSanityManager.ASSERT(rsf != null, \"raw store factory == null\");", "\t\trawStoreFactory = rsf;" ] }, { "added": [ " rawStoreFactory," ], "header": "@@ -889,7 +893,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ " rsf," ] }, { "added": [ "\t\t\t\t\ttf.rollbackAllTransactions(recoveryTransaction, rawStoreFactory);" ], "header": "@@ -1210,7 +1214,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ "\t\t\t\t\ttf.rollbackAllTransactions(recoveryTransaction, rsf);" ] }, { "added": [ " tf.handlePreparedXacts(rawStoreFactory);" ], "header": "@@ -1249,7 +1253,7 @@ 
public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ " tf.handlePreparedXacts(rsf);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/log/ReadOnly.java", "hunks": [ { "added": [ "\t/** Not applicable in readonly databases */", "\tpublic void setRawStoreFactory(RawStoreFactory rsf) {", "\t}", "", "\tpublic void recover(DataFactory dataFactory," ], "header": "@@ -76,12 +76,15 @@ public class ReadOnly implements LogFactory, ModuleSupportable {", "removed": [ "\tpublic void recover(RawStoreFactory rawStoreFactory,", "\t\t\t\t\t\tDataFactory dataFactory," ] } ] } ]
derby-DERBY-3897-e3883f5f
DERBY-3897 SQLSessionContext not correctly initialized in some non-method call nested contexts Patch derby-3897-3, which sets up the SQL session context correctly also for substatements. See javadoc for LanguageConnectionContext#setupSubStatementSessionContext for an enumeration of these cases. Also adds test cases for this. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@703295 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/PreparedStatement.java", "hunks": [ { "added": [ "\t * Execute the PreparedStatement and return results, used for top level", "\t * statements (not substatements) in a connection." ], "header": "@@ -93,7 +93,8 @@ public interface PreparedStatement", "removed": [ "\t * Execute the PreparedStatement and return results." ] }, { "added": [], "header": "@@ -101,9 +102,6 @@ public interface PreparedStatement", "removed": [ " \t * @param rollbackParentContext True if 1) the statement context is", "\t * NOT a top-level context, AND 2) in the event of a statement-level", "\t *\t exception, the parent context needs to be rolled back, too." ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/conn/LanguageConnectionContext.java", "hunks": [ { "added": [ "\t * the SQLSessionContext stack to the initial default," ], "header": "@@ -471,7 +471,7 @@ public interface LanguageConnectionContext extends Context {", "removed": [ "\t * the SQLSessionContext stack to the initial default, presumably" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java", "hunks": [ { "added": [ "\t\t\t\t\tps.executeSubStatement(activation, act, true, 0L);" ], "header": "@@ -3627,7 +3627,7 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ " ps.execute(act, true, 0L); " ] }, { "added": [ " org.apache.derby.iapi.sql.ResultSet rs =", "\t\t\t\tps.executeSubStatement(activation, act, true, 0L);" ], "header": "@@ -3702,7 +3702,8 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ " org.apache.derby.iapi.sql.ResultSet rs = ps.execute(act, true, 0L);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/GenericActivationHolder.java", "hunks": [ { "added": [ "final public class GenericActivationHolder implements Activation", "\tpublic BaseActivation\t\t\tac;" ], "header": "@@ -97,9 +97,9 @@ import java.util.Hashtable;", "removed": [ "final class GenericActivationHolder 
implements Activation", "\tBaseActivation\t\t\tac;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java", "hunks": [ { "added": [ "\t\tActivation parentAct = null;", "\t\t\t// If not null, parentAct represents one of 1) the activation of a", "\t\t\t// calling statement and this activation corresponds to a statement", "\t\t\t// inside a stored procedure or function, and 2) the activation of", "\t\t\t// a statement that performs a substatement, e.g. trigger body", "\t\t\t// execution.", "\t\t\tparentAct = stmctx.getActivation();", "\t\tac.setParentActivation(parentAct);", "\t/**", "\t * @see PreparedStatement#executeSubStatement(LanguageConnectionContext, boolean, long)", "\t */", " public ResultSet executeSubStatement(LanguageConnectionContext lcc,", "\t\t\t\t\t\t\t\t\t\t boolean rollbackParentContext,", "\t\t\t\t\t\t\t\t\t\t long timeoutMillis)", "\t\tActivation parent = lcc.getLastActivation();", "\t\tlcc.setupSubStatementSessionContext(parent);", "\t\treturn executeStmt(a, rollbackParentContext, timeoutMillis);", "\t}", "", "\t/**", "\t * @see PreparedStatement#executeSubStatement(Activation, Activation, boolean, long)", "\t */", " public ResultSet executeSubStatement(Activation parent,", "\t\t\t\t\t\t\t\t\t\t Activation activation,", "\t\t\t\t\t\t\t\t\t\t boolean rollbackParentContext,", "\t\t\t\t\t\t\t\t\t\t long timeoutMillis)", "\t\tthrows StandardException", "\t{", "\t\tparent.getLanguageConnectionContext().", "\t\t\tsetupSubStatementSessionContext(parent);", "\t\treturn executeStmt(activation, rollbackParentContext, timeoutMillis);", "", "\t/**", "\t * @see PreparedStatement#execute", "\t */", "\tpublic ResultSet execute(Activation activation,", "\t\t\t\t\t\t\t long timeoutMillis)", "\t\t\tthrows StandardException", "\t{", "\t\treturn executeStmt(activation, false, timeoutMillis);", "\t}", "", "" ], "header": "@@ -240,31 +240,64 @@ public class GenericPreparedStatement", "removed": [ "\t\tActivation callingAct = null;", 
"\t\t\t// if not null, callingAct represents the activation of", "\t\t\t// a calling statement and this activation corresponds to", "\t\t\t// a statement inside a stored procedure or function", "\t\t\tcallingAct = stmctx.getActivation();", "\t\tac.setCallActivation(callingAct);", " public ResultSet execute(LanguageConnectionContext lcc,", " boolean rollbackParentContext,", " long timeoutMillis)", "\t\treturn execute(a, rollbackParentContext, timeoutMillis);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/conn/GenericLanguageConnectionContext.java", "hunks": [ { "added": [ "\t\treturn getCurrentSQLSessionContext(a).getDefaultSchema();" ], "header": "@@ -1844,8 +1844,7 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\treturn getCurrentSQLSessionContext(a.getCallActivation()).", "\t\t\tgetDefaultSchema();" ] }, { "added": [ "\t\tgetCurrentSQLSessionContext(a).setDefaultSchema(sd);" ], "header": "@@ -1908,13 +1907,11 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\tActivation caller = a.getCallActivation();", "", "\t\tgetCurrentSQLSessionContext(caller).setDefaultSchema(sd);" ] }, { "added": [ "\t\tActivation parent = activation.getParentActivation();", "\t\twhile (parent != null) {", "\t\t\tSQLSessionContext ssc = parent.getSQLSessionContextForChildren();" ], "header": "@@ -1925,12 +1922,12 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\tActivation caller = activation.getCallActivation();", "\t\twhile (caller != null) {", "\t\t\tSQLSessionContext ssc = caller.getNestedSQLSessionContext();" ] }, { "added": [ "\t\t\tparent = parent.getParentActivation();" ], "header": "@@ -1940,7 +1937,7 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\t\tcaller = caller.getCallActivation();" ] }, { "added": [ "\t\tgetCurrentSQLSessionContext(a).setRole(role);" ], "header": "@@ -3265,8 +3262,7 @@ public class GenericLanguageConnectionContext", "removed": [ 
"\t\tgetCurrentSQLSessionContext(a.getCallActivation()).", "\t\t\tsetRole(role);" ] }, { "added": [ "\t\treturn getCurrentSQLSessionContext(a).getRole();" ], "header": "@@ -3274,8 +3270,7 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\treturn getCurrentSQLSessionContext(a.getCallActivation()).", "\t\t\tgetRole();" ] }, { "added": [ "\t\tString role = getCurrentSQLSessionContext(a).getRole();" ], "header": "@@ -3285,8 +3280,7 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\tString role = getCurrentSQLSessionContext(a.getCallActivation()).", "\t\t\tgetRole();" ] }, { "added": [ "\t * Return the current SQL session context of the activation", "\t * @param activation the activation", "\tprivate SQLSessionContext getCurrentSQLSessionContext(Activation activation) {", "\t\tActivation parent = activation.getParentActivation();", "", "\t\tif (parent == null ) {", "\t\t\t// inside a nested connection (stored procedure/function), or when", "\t\t\t// executing a substatement the SQL session context is maintained", "\t\t\t// in the activation of the parent", "\t\t\tcurr = parent.getSQLSessionContextForChildren();" ], "header": "@@ -3335,22 +3329,23 @@ public class GenericLanguageConnectionContext", "removed": [ "\t * Return the current SQL session context based on caller", "\t * @param caller the activation of the caller, if any, of the", "\t * current activation", "\tprivate SQLSessionContext getCurrentSQLSessionContext(Activation caller) {", "\t\tif (caller == null ) {", "\t\t\t// inside a nested SQL session context (stored", "\t\t\t// procedure/function), the SQL session context is", "\t\t\t// maintained in the activation of the caller", "\t\t\tcurr = caller.getNestedSQLSessionContext();" ] }, { "added": [ "\t\tsetupSessionContextMinion(a, true);", "\t}", "", "\tprivate void setupSessionContextMinion(Activation a,", "\t\t\t\t\t\t\t\t\t\t\t\t boolean push) {", "\t\tSQLSessionContext sc = a.setupSQLSessionContextForChildren(push);" ], 
"header": "@@ -3386,7 +3381,12 @@ public class GenericLanguageConnectionContext", "removed": [ "\t\tSQLSessionContext sc = a.getNestedSQLSessionContext();" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java", "hunks": [ { "added": [ "\t\tResultSet rs = ps.executeSubStatement(lcc, true, 0L);" ], "header": "@@ -3101,7 +3101,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\tResultSet rs = ps.execute(lcc, true, 0L);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/BaseActivation.java", "hunks": [ { "added": [ "\t * The 'parentActivation' of an activation of a statement executing in", "\t * A non-null 'parentActivation' represents the activation of the calling", "\t * statement (if we are in a nested connection of a stored routine), or the", "\t * activation of the parent statement (if we are executing a substatement)", "\t * 'parentActivation' is set when this activation is created (@see", "\t * by code generated for the call, after parameters are evaluated", "\t * or just substatement execution starts.", "\t * @see org.apache.derby.impl.sql.GenericPreparedStatement#executeSubStatement", "\tprivate Activation parentActivation;", "\t * The SQL session context to be used inside a nested connection in a", "\t * stored routine or in a substatement. 
In the latter case, it is an alias", "\t * to the superstatement's session context.", "\tprivate SQLSessionContext sqlSessionContextForChildren;" ], "header": "@@ -173,37 +173,31 @@ public abstract class BaseActivation implements CursorActivation, GeneratedByteC", "removed": [ "\t * The 'callActivation' of an activation of a statement executing in", "\t * A non-null 'callActivation' represents the activation of the", "\t * calling statement.", "\t * That is, if we are executing an SQL statement ('this'", "\t * activation) inside a stored procedure or function in a nested", "\t * connection, then 'callActivation' will be non-null.", "\t *", "\t * 'callActivation' is set when this activation is created (@see", "\t * by code generated for the call, after parsameters are evaluated", "\tprivate Activation callActivation;", "\t * The SQL session context of a call is kept here. Also, @see", "\t * BaseActivation#callActivation.", "", "\t * A nested execution maintains its session context,", "\t * nestedSQLSessionContext, in the activation of the calling", "\t * statement's activation ('this'). While not inside a stored", "\t * procedure or function, SQL session state state is held by the", "\t * LanguageConnectionContext.", "\tprivate SQLSessionContext nestedSQLSessionContext;" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/junit/JDBC.java", "hunks": [ { "added": [ " * be null to indicate SQL NULL. The comparision is made" ], "header": "@@ -764,7 +764,7 @@ public class JDBC {", "removed": [ " * be null to indicate SQL NULL. The comparision is make" ] } ] } ]
derby-DERBY-3898-24400cd2
DERBY-3898: Blob.setBytes differs between embedded and client driver when the specified length is invalid Added fix and test case for a remaining corner case: When the sum of offset and length is greater than Integer.MAX_VALUE, the client driver silently ignores the error whereas the embedded driver fails with an IndexOutOfBoundsException. The unexpected results are caused by a check for offset + len > bytes.length where offset+len overflows and evaluates to a negative value. The fix changes this condition to the equivalent len > bytes.length - offset which won't overflow (because both bytes.length and offset are known to be non-negative at this point in the code, and subtracting one non-negative int from another is guaranteed to result in a value in the range [-Integer.MAX_VALUE, Integer.MAX_VALUE]). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@986345 13f79535-47bb-0310-9956-ffa450edef68
[]
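The overflow described in this commit message is easy to reproduce outside Derby. The sketch below (class and method names are hypothetical, not Derby's) contrasts the two forms of the bounds check: with a large `len`, `offset + len` wraps to a negative int and lets an invalid length slip through, while the equivalent `len > bytes.length - offset` cannot overflow because both operands are known non-negative ints.

```java
// Hypothetical demonstration class, not Derby code.
public class BoundsCheck {

    /** Buggy form: offset + len may overflow int and become negative. */
    public static boolean unsafeRejects(byte[] bytes, int offset, int len) {
        return offset + len > bytes.length;
    }

    /** Safe form: bytes.length - offset stays within int range. */
    public static boolean safeRejects(byte[] bytes, int offset, int len) {
        return len > bytes.length - offset;
    }

    public static void main(String[] args) {
        byte[] data = new byte[10];
        // 5 + Integer.MAX_VALUE wraps around to a negative value.
        if (unsafeRejects(data, 5, Integer.MAX_VALUE)) {
            throw new AssertionError("expected overflow to fool the check");
        }
        if (!safeRejects(data, 5, Integer.MAX_VALUE)) {
            throw new AssertionError("expected safe check to reject");
        }
        System.out.println("safe check rejects the invalid length");
    }
}
```

Running this shows the unsafe check silently accepting an offset/length pair that the safe check correctly rejects, which matches the embedded/client divergence the fix removes.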
derby-DERBY-390-c47d471f
DERBY-390: Patch to handle case-sensitive SQL identifiers correctly. Import/export procedure parameters for table names, schema names, column names should be passed in the case-sensitive form if they are quoted identifiers and in upper case if they are not quoted SQL identifiers. Import/export will generate insert/select statements with quoted table names, schema names and column names to be executed on the database after this patch. committed on behalf of: Suresh Thalamati git-svn-id: https://svn.apache.org/repos/asf/incubator/derby/code/trunk@208770 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/load/ColumnInfo.java", "hunks": [ { "added": [ "", "\t\tthis.schemaName = sName;", "\t\tthis.tableName = tName;", "\t\t\t//eg: C2 , C1 , C3" ], "header": "@@ -80,13 +80,14 @@ class ColumnInfo {", "removed": [ "\t\tthis.schemaName = (sName !=null ? sName.toUpperCase(java.util.Locale.ENGLISH):sName);", "\t\tthis.tableName = (tName !=null ? tName.toUpperCase(java.util.Locale.ENGLISH):tName);", "\t\t\t//eg: c2 , c1 , c3" ] }, { "added": [ "\t\t\t\t\t\t\t\t\t columnPattern);" ], "header": "@@ -152,7 +153,7 @@ class ColumnInfo {", "removed": [ "\t\t\t\t\t\t\t\t\t (columnPattern !=null ? columnPattern.toUpperCase(java.util.Locale.ENGLISH):columnPattern));" ] }, { "added": [ "\t/* returns comma seperated column Names delimited by quotes for the insert ", " * statement", "\t * eg: \"C1\", \"C2\" , \"C3\" , \"C4\" " ], "header": "@@ -306,8 +307,9 @@ class ColumnInfo {", "removed": [ "\t/* returns comma seperated column Names for insert statement", "\t * eg: c1, c2 , c3 , c4 " ] }, { "added": [ "\t\t\t// column names can be SQL reserved words, so it ", "\t\t\t// is necessary delimit them using quotes for insert to work correctly. ", "\t\t\tsb.append(\"\\\"\");", "\t\t\tsb.append(\"\\\"\");" ], "header": "@@ -319,7 +321,11 @@ class ColumnInfo {", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/load/ExportResultSetForObject.java", "hunks": [ { "added": [ "import java.sql.DatabaseMetaData;", "import java.sql.SQLException;", " private Connection con;", " private String selectQuery;", " private ResultSet rs;", " private int columnCount;", " private String columnNames[];", " private String columnTypes[];", " private int columnLengths[];", "", " private Statement expStmt = null; ", " private String schemaName;", " private String tableName;", "", "\t/* set up the connection and table/view name or the select query", "\t * to make the result set, whose data is exported. 
", "\t **/", "\t\t\t\t\t\t\t\t\tString tableName, String selectQuery ", "\t\tif( selectQuery == null)", "\t\t{", "\t\t\tthis.schemaName = schemaName;", "\t\t\tthis.tableName = tableName;", "\t\t\t", "\t\t\t// delimit schema Name and table Name using quotes because", "\t\t\t// they can be case-sensitive names or SQL reserved words. Export", "\t\t\t// procedures are expected to be called with case-senisitive names. ", "\t\t\t// undelimited names are passed in upper case, because that is", "\t\t\t// the form database stores them. ", "\t\t\t", "\t\t\tthis.selectQuery = \"select * from \" + ", "\t\t\t\t(schemaName == null ? \"\\\"\" + tableName + \"\\\"\" : ", "\t\t\t\t \"\\\"\" + schemaName + \"\\\"\" + \".\" + \"\\\"\" + tableName + \"\\\"\"); ", "\t\t}", " else", "\t\t{", "\t\t\tthis.selectQuery = selectQuery;", "\t\t}", " public ResultSet getResultSet() throws SQLException {", " rs = null;", " //execute the select query and keep it's meta data info ready", " expStmt = con.createStatement();", " rs = expStmt.executeQuery(selectQuery);", " getMetaDataInfo();", " return rs;", "", "", " public int getColumnCount() {", " return columnCount;", " }", "", " public String[] getColumnDefinition() {", " return columnNames;", " }", "", " public String[] getColumnTypes() {", " return columnTypes;", " }", "", " public int[] getColumnLengths() {", " return columnLengths;", " }", "", " //if the entity to be exported has non-sql types in it, an exception will be thrown", " private void getMetaDataInfo() throws SQLException {", " ResultSetMetaData metaData = rs.getMetaData();", " columnCount = metaData.getColumnCount();", " int numColumns = columnCount;", " columnNames = new String[numColumns];", " columnTypes = new String[numColumns];", " columnLengths = new int[numColumns];", "", " for (int i=0; i<numColumns; i++) {", " int jdbcTypeId = metaData.getColumnType(i+1);", " columnNames[i] = metaData.getColumnName(i+1);", " columnTypes[i] = metaData.getColumnTypeName(i+1);", " 
if(!ColumnInfo.importExportSupportedType(jdbcTypeId))", " {", " throw LoadError.nonSupportedTypeColumn(", " columnNames[i], columnTypes[i]); ", " }", " ", " columnLengths[i] = metaData.getColumnDisplaySize(i+1);", " }" ], "header": "@@ -24,93 +24,104 @@ import java.sql.Connection;", "removed": [ " private Connection con;", " private String entityName;", " private String selectStatement;", " private ResultSet rs;", " private int columnCount;", " private String columnNames[];", " private String columnTypes[];", " private int columnLengths[];", "", "\tprivate Statement expStmt = null; ", "", "\t//uses the passed connection and table/view name to make the resultset on", "\t//that entity.", "\t\t\t\t\t\t\t\t\tString tableName, String selectStatement ", "\t\tif( selectStatement == null)", "\t\t\tthis.entityName = (schemaName == null ? tableName : schemaName + \".\" + tableName); ", "\t\tthis.selectStatement = selectStatement;", " public ResultSet getResultSet() throws Exception {", " rs = null;", " String queryString = getQuery();", " //execute select on passed enitity and keep it's meta data info ready", " Statement expStmt = con.createStatement();", " rs = expStmt.executeQuery(queryString);", " getMetaDataInfo();", " return rs;", " }", "", " public String getQuery(){", "\t if(selectStatement != null)", "\t\t return selectStatement;", "\t else", "\t {", "\t\t selectStatement = \"select * from \" + entityName;", "\t\t return selectStatement;", "\t }", " }", "", " public int getColumnCount() {", " return columnCount;", " }", "", " public String[] getColumnDefinition() {", " return columnNames;", " }", "", " public String[] getColumnTypes() {", " return columnTypes;", " }", "", " public int[] getColumnLengths() {", " return columnLengths;", " }", "", " //if the entity to be exported has non-sql types in it, an exception will be thrown", " private void getMetaDataInfo() throws Exception {", " ResultSetMetaData metaData = rs.getMetaData();", " columnCount = 
metaData.getColumnCount();", "\t int numColumns = columnCount;", " columnNames = new String[numColumns];", "\tcolumnTypes = new String[numColumns];", " columnLengths = new int[numColumns];", " for (int i=0; i<numColumns; i++) {", "\t int jdbcTypeId = metaData.getColumnType(i+1);", "\t columnNames[i] = metaData.getColumnName(i+1);", "\t columnTypes[i] = metaData.getColumnTypeName(i+1);", "\t if(!ColumnInfo.importExportSupportedType(jdbcTypeId))", "\t {", "\t\t throw LoadError.nonSupportedTypeColumn(columnNames[i],", "\t\t\t\t\t\t\t\t\t\t\t\t columnTypes[i]); ", " \t ", " columnLengths[i] = metaData.getColumnDisplaySize(i+1);", " }" ] } ] }, { "file": "java/engine/org/apache/derby/impl/load/Import.java", "hunks": [ { "added": [ "\t", "\t\tif (tableName == null)" ], "header": "@@ -160,9 +160,9 @@ public class Import extends ImportAbstract{", "removed": [ "\t\tString entityName = (schemaName == null ? tableName : schemaName + \".\" + tableName); ", "\t\tif (entityName == null)" ] } ] } ]
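The delimiting scheme this patch introduces can be sketched as follows. The helper names `quoteIdentifier` and `qualifiedName` are hypothetical, not Derby's, and the doubling of embedded quotes is standard SQL delimited-identifier syntax assumed here for completeness; the schema-null handling mirrors the patched select-query construction in ExportResultSetForObject.

```java
// Hypothetical helpers, not part of Derby's import/export code.
public class IdentifierQuoting {

    /** Wraps an identifier in double quotes so case-sensitive names and
     *  SQL reserved words survive in generated statements. */
    public static String quoteIdentifier(String name) {
        return "\"" + name.replace("\"", "\"\"") + "\"";
    }

    /** Builds a delimited schema-qualified name; schema may be null. */
    public static String qualifiedName(String schema, String table) {
        return (schema == null)
                ? quoteIdentifier(table)
                : quoteIdentifier(schema) + "." + quoteIdentifier(table);
    }

    public static void main(String[] args) {
        // A mixed-case schema and a reserved-word table name both work.
        System.out.println("select * from " + qualifiedName("App", "Order"));
    }
}
```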
derby-DERBY-3902-47509e86
DERBY-3902; adjust orphaned message strings; - move J107, J108, J109 to common MessageId.java - remove references to 08000.S.1 and XJ102 from message.xml & mes*.properties git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704290 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/client/org/apache/derby/client/am/SqlException.java", "hunks": [ { "added": [ "import org.apache.derby.shared.common.reference.MessageId;" ], "header": "@@ -26,6 +26,7 @@ import java.util.TreeMap;", "removed": [] }, { "added": [], "header": "@@ -89,10 +90,6 @@ public class SqlException extends Exception implements Diagnosable {", "removed": [ " // Constants for message ids used in text we print out -- not used", " // in SqlExceptions", " public static final String BATCH_POSITION_ID = \"J107\";", " " ] } ] }, { "file": "java/client/org/apache/derby/client/am/Version.java", "hunks": [ { "added": [ "import org.apache.derby.shared.common.reference.MessageId;" ], "header": "@@ -22,16 +22,13 @@", "removed": [ " // Constants for internationalized message ids", " private static String SECURITY_MANAGER_NO_ACCESS_ID = \"J108\";", " private static String UNKNOWN_HOST_ID = \"J109\";", " " ] }, { "added": [ " msgutil.getTextMessage(MessageId.SECURITY_MANAGER_NO_ACCESS_ID, property));" ], "header": "@@ -152,7 +149,7 @@ public abstract class Version {", "removed": [ " msgutil.getTextMessage(SECURITY_MANAGER_NO_ACCESS_ID, property));" ] } ] } ]
derby-DERBY-3902-648e3486
DERBY-3902; correct messages.xml and translated files for SQLState 2003.S.4 (was accidentally listed as 2004.S.4 - see also comment in DERBY-1567). Also fix up MessageBundleTest to ignore two sqlstates that are not exposed to the user and so do not need message text. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@703259 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3904-57191b46
DERBY-3904: NPE on left join with aggregate The issue involves a very special optimization that is performed for MIN and MAX queries in which we may be able to use an index to go directly to the lowest/highest value of the desired column. For example, in the query SELECT MAX(d1) FROM t1 if there is an index on d1, we can use that index to retrieve the max value very rapidly. In order to incorporate this optimization, the following conditions must be met: - No group by - One of: - min/max(ColumnReference) is the only aggregate && source is ordered on the ColumnReference - min/max(ConstantNode) The optimization of the other way around (min with desc index or max with asc index) has the same restrictions with the additional temporary restriction of no qualifications at all (because we don't have true backward scans). The source of the data must also be "simple" (not a result of a join), and the NullPointerException occurred during the code that tried to establish the above conditions because it wasn't thorough enough in excluding the join case. In the query: SELECT MAX( T1.D1 ) AS D FROM T1 LEFT JOIN T2 ON T1.D1 = T2.D2 WHERE T2.D2 IS NULL the code in GroupByNode.considerPostOptimizeOptimizations was trying to traverse the AccessPathImpl to find the index scan information, but for this LEFT JOIN case there is an AccessPathImpl but no index scan information, because there is a join being performed, not an index scan. The solution is to examine the AccessPathImpl more carefully, and only search the index scan information if an index scan is actually present. Also added a few tests, including an enhancement to the test library's RuntimeStatisticsParser so that it can determine if a Last Key Index Scan is being performed by the query. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@708002 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/RuntimeStatisticsParser.java", "hunks": [ { "added": [ "\tprivate final boolean lastKeyIndexScan;" ], "header": "@@ -31,6 +31,7 @@ public class RuntimeStatisticsParser {", "removed": [] }, { "added": [ " lastKeyIndexScan = (rts.indexOf(\"Last Key Index Scan ResultSet\") >= 0);" ], "header": "@@ -65,6 +66,7 @@ public class RuntimeStatisticsParser {", "removed": [] }, { "added": [ " /**", " * Return whether or not a last key index scan result set was used", "\t * in the query. A last key index scan is a special optimization for", "\t * MIN and MAX queries against an indexed column (SELECT MAX(ID) FROM T).", " */", " public boolean usedLastKeyIndexScan() {", " return lastKeyIndexScan;", " }", "" ], "header": "@@ -202,6 +204,15 @@ public class RuntimeStatisticsParser {", "removed": [] } ] } ]
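The test-library enhancement in this commit boils down to a substring check on the runtime statistics text, as the diff hunk shows. A standalone sketch (the class name is hypothetical; the detection string is taken from the diff):

```java
// Standalone sketch of the RuntimeStatisticsParser addition.
public class LastKeyScanCheck {

    /** Returns true if the runtime statistics text reports a Last Key
     *  Index Scan, Derby's special optimization for MIN/MAX queries
     *  against an indexed column (e.g. SELECT MAX(ID) FROM T). */
    public static boolean usedLastKeyIndexScan(String runtimeStatistics) {
        return runtimeStatistics.indexOf("Last Key Index Scan ResultSet") >= 0;
    }

    public static void main(String[] args) {
        String stats = "Last Key Index Scan ResultSet using index T1_D1";
        System.out.println(usedLastKeyIndexScan(stats)); // prints true
    }
}
```

In the fixed LEFT JOIN case above, a test would expect this check to return false, since a join rather than an index scan is performed.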
derby-DERBY-3905-d7731394
DERBY-3905 Failed tests should save the database off to the fail directory git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704964 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3907-51c45b8b
DERBY-3907: Save useful length information for Clobs in store. Cleanup of ReaderToUTF8Stream, which has to deal with the header in the streams being passed in to Derby. Changes: o Simplified constructors. o Added JavaDoc and comments. o Removed unused imports. o Removed instance variable maximumLength. o Added more information to the error messages for truncation. o Added CHAR as a truncatable string data type. o Removed "throws IOException" from close. Updated the test to pass in a valid type name to the constructor. Patch file: derby-3907-3b-readertoutf8stream_cleanup.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@732676 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/ReaderToUTF8Stream.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.sanity.SanityManager;", " * Converts the characters served by a {@code java.io.Reader} to a stream", " * returning the data in the on-disk modified UTF-8 encoded representation used", " * by Derby.", " * <p>", " * Length validation is performed. If required and allowed by the target column", " * type, truncation of blanks will also be performed.", " */" ], "header": "@@ -25,17 +25,20 @@ import java.io.InputStream;", "removed": [ "import java.io.UTFDataFormatException;", "import org.apache.derby.iapi.types.TypeId;", "\tConverts a java.io.Reader to the on-disk UTF8 format used by Derby", " for character types.", "*/" ] }, { "added": [ " /**", " * Size of buffer to hold the data read from stream and converted to the", " * modified UTF-8 format.", " */", " private final static int BUFSIZE = 32768;", " private byte[] buffer = new byte[BUFSIZE];", " private int blen = -1;", " /** Tells if the stream content is/was larger than the buffer size. */", " /**", " * Number of characters to truncate from this stream.", " * The SQL standard allows for truncation of trailing spaces for CLOB,", " * VARCHAR and CHAR. If zero, no characters are truncated, unless the", " * stream length execeeds the maximum length of the column we are inserting", " * into.", " * If positive, length of the expected final value, after truncation if any,", " * in characters. If negative, the maximum length allowed in the column we", " * are inserting into. 
A negative value means we are working with a stream", " * of unknown length, inserted through one of the JDBC 4.0 \"lengthless", " * override\" methods.", " */" ], "header": "@@ -44,33 +47,36 @@ public final class ReaderToUTF8Stream", "removed": [ "\tprivate byte[] buffer;", "\tprivate int blen;", " // buffer to hold the data read from stream ", " // and converted to UTF8 format", " private final static int BUFSIZE = 32768;", " /** Number of characters to truncate from this stream", " The SQL standard allows for truncation of trailing spaces ", " for clobs,varchar,char.", " If zero, no characters are truncated.", " * Length of the final value, after truncation if any,", " * in characters.", " this stream needs to fit into a column of colWidth", " if truncation error happens ,then the error message includes ", " information about the column width.", " */", " /** The maximum allowed length of the stream. */", " private final int maximumLength;" ] }, { "added": [ " * @param valueLength the expected length of the reader in characters", " * (positive), or the inverse (maxColWidth * -1) of the maximum column", " * width if the expected stream length is unknown" ], "header": "@@ -83,7 +89,9 @@ public final class ReaderToUTF8Stream", "removed": [ " * @param valueLength the length of the reader in characters" ] }, { "added": [ " if (SanityManager.DEBUG) {", " // Check the type name", " // The national types (i.e. 
NVARCHAR) are not used/supported.", " SanityManager.ASSERT(typeName != null && (", " typeName.equals(TypeId.CHAR_NAME) ||", " typeName.equals(TypeId.VARCHAR_NAME) ||", " typeName.equals(TypeId.CLOB_NAME)) ||", " typeName.equals(TypeId.LONGVARCHAR_NAME));", " }", " * Creates a UTF-8 stream for an application reader whose length isn't", " * known at insertion time.", " * <p>", " * The application reader is coming in through one of the \"lengthless", " * overrides\" added in JDBC 4.0, for instance", " * {@link java.sql.PreparedStatement#setCharacterStream(int,Reader)}.", " * A limit is placed on the length of the application reader. If the reader", " * exceeds the maximum length, truncation of trailing blanks is attempted.", " * If truncation fails, an exception is thrown.", " * the reader, typically the maximum field size", " * @throws IllegalArgumentException if maximum length is negative", " this(appReader, -1 * maximumLength, 0, typeName);", " * Reads a byte from the stream.", " * <p>", " * Characters read from the source stream are converted to the UTF-8 Derby", " * specific encoding.", " *", " * @return The byte read, or {@code -1} if the end-of-stream is reached.", " * @throws EOFException if the end-of-stream has already been reached or", " * the stream has been closed", " * @throws IOException if reading from the source stream fails" ], "header": "@@ -93,52 +101,58 @@ public final class ReaderToUTF8Stream", "removed": [ " buffer = new byte[BUFSIZE];", " blen = -1; ", " this.maximumLength = -1;", " * Create a UTF-8 stream for a length less application reader.", " *", " * A limit is placed on the length of the reader. If the reader exceeds", " * the maximum length, truncation of trailing blanks is attempted. 
If", " * truncation fails, an exception is thrown.", " * the reader", " * @throws IllegalArgumentException if maximum length is negative, or type", " * name is <code>null<code>", " if (typeName == null) {", " throw new IllegalArgumentException(\"Type name cannot be null\");", " }", " this.reader = new LimitReader(appReader);", " buffer = new byte[BUFSIZE];", " blen = -1;", " this.maximumLength = maximumLength;", " this.typeName = typeName;", " this.charsToTruncate = -1;", " this.valueLength = -1;", " * read from stream; characters converted to utf-8 derby specific encoding.", " * If stream has been read, and eof reached, in that case any subsequent", " * read will throw an EOFException" ] }, { "added": [ " /**", " * Reads up to {@code len} bytes from the stream.", " * <p>", " * Characters read from the source stream are converted to the UTF-8 Derby", " * specific encoding.", " *", " * @return The number of bytes read, or {@code -1} if the end-of-stream is", " * reached.", " * @throws EOFException if the end-of-stream has already been reached or", " * the stream has been closed", " * @throws IOException if reading from the source stream fails", " * @see java.io.InputStream#read(byte[],int,int)", " */" ], "header": "@@ -175,6 +189,19 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " /**", " * Fills the internal buffer with data read from the source stream.", " * <p>", " * The characters read from the source are converted to the modified UTF-8", " * encoding, used as the on-disk format by Derby.", " *", " * @param startingOffset offset at which to start filling the buffer, used", " * to avoid overwriting the stream header data on the first iteration", " * @throws DerbyIOException if the source stream has an invalid length", " * (different than specified), or if truncation of blanks fails", " * @throws IOException if reading from the source stream fails", " */" ], "header": "@@ -230,6 +257,18 @@ public final class ReaderToUTF8Stream", 
"removed": [] }, { "added": [ " SQLState.LANG_STRING_TRUNCATION,", " typeName,", " \"<stream-value>\", // Don't show the whole value.", " String.valueOf(Math.abs(valueLength)))," ], "header": "@@ -322,7 +361,10 @@ public final class ReaderToUTF8Stream", "removed": [ " SQLState.LANG_STRING_TRUNCATION)," ] }, { "added": [ " } else if (typeName.equals(TypeId.CHAR_NAME)) {", " return true;" ], "header": "@@ -354,6 +396,8 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " \"<stream-value>\", // Don't show the whole value.", " String.valueOf(Math.abs(valueLength)))," ], "header": "@@ -374,8 +418,8 @@ public final class ReaderToUTF8Stream", "removed": [ " \"XXXX\", ", " String.valueOf(valueLength))," ] }, { "added": [ " public void close() {" ], "header": "@@ -384,8 +428,7 @@ public final class ReaderToUTF8Stream", "removed": [ "\tpublic void close() throws IOException", "\t{" ] }, { "added": [ " * the stream.", " * <p>", " * Note, it is not exactly per {@code java.io.InputStream#available()}." ], "header": "@@ -395,8 +438,9 @@ public final class ReaderToUTF8Stream", "removed": [ " * the stream ", " * Note, it is not exactly per java.io.InputStream#available()" ] } ] } ]
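The truncation rule this cleanup deals with — trailing blanks may be silently dropped for CHAR, VARCHAR and CLOB, but dropping any non-blank character is an error — can be sketched on a plain string. This is a hypothetical helper, not ReaderToUTF8Stream's actual code, which performs the same check on a streamed, UTF-8-encoded buffer.

```java
// Hypothetical illustration of blank truncation, not Derby code.
public class BlankTruncation {

    /** Drops trailing spaces so the value fits in maxWidth characters;
     *  throws if non-blank characters would have to be dropped. */
    public static String truncate(String value, int maxWidth) {
        int len = value.length();
        while (len > maxWidth && value.charAt(len - 1) == ' ') {
            len--;
        }
        if (len > maxWidth) {
            // The cleanup added the type name and column width to the
            // real error message; this mirrors that idea.
            throw new IllegalArgumentException(
                "string truncation: value exceeds width " + maxWidth);
        }
        return value.substring(0, len);
    }

    public static void main(String[] args) {
        System.out.println("[" + truncate("abc   ", 4) + "]");
    }
}
```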
derby-DERBY-3907-52625a50
DERBY-3907: Save useful length information for Clobs in store. Enabled the new header format for Clobs. Description: o ClobStreamHeaderGenerator Enabled the callback mechanism to inform the DVD about whether the database is accessed in soft upgrade mode or not. o SQLChar Added method writeClobUTF, which writes a Clob to the on-disk format. Added method readExternalClobFromArray. o SQLClob Added a variable to tell if the database is accessed in soft upgrade mode or not. It is used to reduce object creation (header generators) and to avoid consulting the data dictionary as much. It requires that the DVDs are reused, and I'm sure this can be further optimized. Implemented getLength, which returns the length of the Clob in one of three ways: 1) Delegate to SQLChar if the Clob has been materialized. 2) Read length from stream header if present. 3) Decode the whole stream. Updated getStreamWithDescriptor to deal with both header formats. Made restoreToNull nullify the character stream descriptor. Implemented writeExternal. Implemented getStreamHeaderGenerator, which will return one of two shared generator instances if it is known whether the database is accessed in soft upgrade mode or not. If unknown, a new generator instance is created, which will determine the mode when the header is asked for. Implemented investigateHeader, which decodes a stream header. Implemented readExternal. Implemented readExternalFromArray. Implemented utility method rewindStream, which resets the stream and skips the number of characters specified. Added a utility class for holding header information (currently only length). o StreamHeaderHolder Deleted the class, it is no longer used. NOTE: Databases created with this revision (or later) containing Clobs, cannot be accessed by earlier trunk revisions. Patch file: derby-3907-7a3-use_new_header_format.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@738408 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ " /**", " * Writes the header and the user data for a CLOB to the destination stream.", " *", " * @param out destination stream", " * @throws IOException if writing to the destination stream fails", " */", " protected final void writeClobUTF(ObjectOutput out)", " throws IOException {", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(!isNull());", " SanityManager.ASSERT(stream == null, \"Stream not null!\");", " }", " boolean isRaw = rawLength >= 0;", " // Assume isRaw, update afterwards if required.", " int strLen = rawLength;", " if (!isRaw) {", " strLen = value.length();", " }", " // Generate the header and invoke the encoding routine.", " StreamHeaderGenerator header = getStreamHeaderGenerator();", " int toEncodeLen = header.expectsCharCount() ? strLen : -1;", " header.generateInto(out, toEncodeLen);", " writeUTF(out, strLen, isRaw);", " header.writeEOF(out, toEncodeLen);", " }", "" ], "header": "@@ -899,6 +899,32 @@ public class SQLChar", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.io.ArrayInputStream;", "import org.apache.derby.iapi.util.UTF8Util;", "import java.io.ObjectInput;", "import java.io.ObjectOutput;" ], "header": "@@ -25,13 +25,17 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [] }, { "added": [ " /** The maximum number of bytes used by the stream header. */", " private static final int MAX_STREAM_HEADER_LENGTH = 5;", "", " /** The header generator used for 10.4 (or older) databases. */", " private static final StreamHeaderGenerator TEN_FOUR_CLOB_HEADER_GENERATOR =", " new ClobStreamHeaderGenerator(true);" ], "header": "@@ -49,15 +53,12 @@ public class SQLClob", "removed": [ " /**", " * Static stream header holder with the header used for a 10.5", " * stream with unknown char length. 
This header will be used with 10.5, and", " * possibly later databases. The expected EOF marker is '0xE0 0x00 0x00'.", " */", " protected static final StreamHeaderHolder UNKNOWN_LEN_10_5_HEADER_HOLDER =", " new StreamHeaderHolder(", " new byte[] {0x00, 0x00, (byte)0xF0, 0x00, 0x00},", " new byte[] {24, 16, -1, 8, 0}, true, true);" ] }, { "added": [ " * <em>Note</em>: Always check if {@code stream} is non-null before using", " * the information stored in the descriptor internally.", " /** Tells if the database is being accessed in soft upgrade mode. */", " private Boolean inSoftUpgradeMode = null;", "" ], "header": "@@ -67,9 +68,14 @@ public class SQLClob", "removed": [] }, { "added": [ " // TODO: Should this be rewritten to clone the stream instead of", " // materializing the value if possible?" ], "header": "@@ -90,6 +96,8 @@ public class SQLClob", "removed": [] }, { "added": [ " /**", " * Returns the character length of this Clob.", " * <p>", " * If the value is stored as a stream, the stream header will be read. If", " * the stream header doesn't contain the stream length, the whole stream", " * will be decoded to determine the length.", " *", " * @return The character length of this Clob.", " * @throws StandardException if obtaining the length fails", " */", " public int getLength() throws StandardException {", " if (stream == null) {", " return super.getLength();", " }", " // The Clob is represented as a stream.", " // Make sure we have a stream descriptor.", " boolean repositionStream = (csd != null);", " if (csd == null) {", " getStreamWithDescriptor();", " // We know the stream is at the first char position here.", " }", " if (csd.getCharLength() != 0) {", " return (int)csd.getCharLength();", " }", " // We now know that the Clob is represented as a stream, but not if the", " // length is unknown or actually zero. 
Check.", " if (SanityManager.DEBUG) {", " // The stream isn't expecetd to be position aware here.", " SanityManager.ASSERT(!csd.isPositionAware());", " }", " long charLength = 0;", " try {", " if (repositionStream) {", " rewindStream(csd.getDataOffset());", " }", " charLength = UTF8Util.skipUntilEOF(stream);", " // We just drained the whole stream. Reset it.", " rewindStream(0);", " } catch (IOException ioe) {", " throwStreamingIOException(ioe);", " }", " // Update the descriptor in two ways;", " // (1) Set the char length, whether it is zero or not.", " // (2) Set the current byte pos to zero.", " csd = new CharacterStreamDescriptor.Builder().copyState(csd).", " charLength(charLength).curBytePos(0).", " curCharPos(CharacterStreamDescriptor.BEFORE_FIRST).build();", " return (int)charLength;", " }", "" ], "header": "@@ -193,6 +201,56 @@ public class SQLClob", "removed": [] }, { "added": [ " * <p>", " * When this method returns, the stream is positioned on the first", " * character position, such that the next read will return the first", " * character in the stream." ], "header": "@@ -227,6 +285,10 @@ public class SQLClob", "removed": [] }, { "added": [ " // Assume new header format, adjust later if necessary.", " byte[] header = new byte[MAX_STREAM_HEADER_LENGTH];", " HeaderInfo hdrInfo = investigateHeader(header, read);", " if (read > hdrInfo.headerLength()) {", " // We have read too much. 
Reset the stream.", " ((Resetable)stream).resetStream();", " read = 0;", " curCharPos(read == 0 ?", " CharacterStreamDescriptor.BEFORE_FIRST : 1).", " curBytePos(read).", " dataOffset(hdrInfo.headerLength()).", " byteLength(hdrInfo.byteLength()).", " charLength(hdrInfo.charLength()).build();" ], "header": "@@ -265,31 +327,24 @@ public class SQLClob", "removed": [ " // NOTE: For now, just read the old header format.", " final int dataOffset = 2;", " byte[] header = new byte[dataOffset];", " if (read != dataOffset) {", " String hdr = \"[\";", " for (int i=0; i < read; i++) {", " hdr += Integer.toHexString(header[i] & 0xff);", " }", " throw new IOException(\"Invalid stream header length \" +", " read + \", got \" + hdr + \"]\");", " }", "", " // Note that we add the two bytes holding the header *ONLY* if", " // we know how long the user data is.", " long utflen = ((header[0] & 0xff) << 8) | ((header[1] & 0xff));", " if (utflen > 0) {", " utflen += dataOffset;", "", " curCharPos(1).curBytePos(dataOffset).", " dataOffset(dataOffset).byteLength(utflen).build();" ] }, { "added": [ " public final void restoreToNull() {", " this.csd = null;", " super.restoreToNull();", " }", "" ], "header": "@@ -399,6 +454,11 @@ public class SQLClob", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/StreamHeaderHolder.java", "hunks": [ { "added": [], "header": "@@ -1,161 +0,0 @@", "removed": [ "/*", "", " Derby - Class org.apache.derby.iapi.types.StreamHeaderHolder", "", " Licensed to the Apache Software Foundation (ASF) under one or more", " contributor license agreements. See the NOTICE file distributed with", " this work for additional information regarding copyright ownership.", " The ASF licenses this file to you under the Apache License, Version 2.0", " (the \"License\"); you may not use this file except in compliance with", " the License. 
You may obtain a copy of the License at", "", " http://www.apache.org/licenses/LICENSE-2.0", "", " Unless required by applicable law or agreed to in writing, software", " distributed under the License is distributed on an \"AS IS\" BASIS,", " WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.", " See the License for the specific language governing permissions and", " limitations under the License.", "", " */", "package org.apache.derby.iapi.types;", "", "import org.apache.derby.iapi.services.sanity.SanityManager;", "", "/**", " * A holder class for a stream header.", " * <p>", " * A stream header is used to store meta information about stream, typically", " * length information.", " */", "//@Immutable", "public final class StreamHeaderHolder {", "", " /** The header bytes. */", " private final byte[] hdr;", " /**", " * Describes if and how the header can be updated with a new length.", " * <p>", " * If {@code null}, updating the length is not allowed, and an exception", " * will be thrown if the update method is called. If allowed, the update", " * is described by the numbers of bits to right-shift at each position of", " * the header. Positions with a \"negative shift\" are skipped. 
Example:", " * <pre>", " * current hdr shift updated hdr", " * 0x00 24 (byte)(length >>> 24)", " * 0x00 16 (byte)(length >>> 16)", " * 0xF0 -1 0xF0", " * 0x00 8 (byte)(length >>> 8)", " * 0x00 0 (byte)(length >>> 0)", " * </pre>", " * <p>", " * Needless to say, this mechanism is rather simple, but sufficient for the", " * current header formats.", " */", " private final byte[] shifts;", " /**", " * Tells if the header encodes the character or byte length of the stream.", " */", " private final boolean lengthIsCharCount;", " /**", " * Whether a Derby-specific end-of-stream marker is required or not.", " * It is expected that the same EOF marker is used for all headers:", " * {@code 0xE0 0x00 0x00}.", " */", " private final boolean writeEOF;", "", " /**", " * Creates a new stream header holder object.", " *", " * @param hdr the stream header bytes", " * @param shifts describes how to update the header with a new length, or", " * {@code null} if updating the header is forbidden", " * @param lengthIsCharCount whether the length is in characters", " * ({@code true}) or bytes ({@code false})", " * @param writeEOF whether a Derby-specific EOF marker is required", " */", " public StreamHeaderHolder(byte[] hdr, byte[] shifts,", " boolean lengthIsCharCount, boolean writeEOF) {", " this.hdr = hdr;", " this.shifts = shifts;", " this.lengthIsCharCount = lengthIsCharCount;", " this.writeEOF = writeEOF;", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(shifts == null || hdr.length == shifts.length);", " }", " }", "", " /**", " * Copies the header bytes into the specified buffer at the given offset.", " *", " * @param buf target byte array", " * @param offset offset in the target byte array", " * @return The number of bytes written (the header length).", " */", " public int copyInto(byte[] buf, int offset) {", " System.arraycopy(hdr, 0, buf, offset, hdr.length);", " return hdr.length;", " }", "", " /**", " * Returns the header length.", " *", " * @return The header length in 
bytes.", " */", " public int headerLength() {", " return hdr.length;", " }", "", " /**", " * Tells if the header encodes the character or the byte length of the", " * stream.", " *", " * @return {@code true} if the character length is expected, {@code false}", " * if the byte length is expected.", " */", " public boolean expectsCharLength() {", " return lengthIsCharCount;", " }", "", " /**", " * Tells if a Derby-specific end-of-stream marker should be appended to the", " * stream associated with this header.", " *", " * @return {@code true} if EOF marker required, {@code false} if not.", " */", " public boolean writeEOF() {", " return writeEOF;", " }", "", " /**", " * Creates a new holder object with a header updated for the new length.", " * <p>", " * <em>NOTE</em>: This method does not update the header in the stream", " * itself. It must be updated explicitly using {@linkplain #copyInto}.", " *<p>", " * <em>Implementation note</em>: This update mechanism is very simple and", " * may not be sufficient for later header formats. It is based purely on", " * shifting of the bits in the new length.", " *", " * @param length the new length to encode into the header", " * @param writeEOF whether the new header requires an EOF marker or not", " * @return A new stream header holder for the new length.", " * @throws IllegalStateException if updating the header is disallowed", " */", " public StreamHeaderHolder updateLength(int length, boolean writeEOF) {", " if (shifts == null) {", " throw new IllegalStateException(", " \"Updating the header has been disallowed\");", " }", " byte[] newHdr = new byte[hdr.length];", " for (int i=0; i < hdr.length; i++) {", " if (shifts[i] >= 0) {", " newHdr[i] = (byte)(length >>> shifts[i]);", " } else {", " newHdr[i] = hdr[i];", " }", " }", " return new StreamHeaderHolder(", " newHdr, shifts, lengthIsCharCount, writeEOF);", " }", "}" ] } ] } ]
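The StreamHeaderHolder class deleted in this commit re-encoded a new length into a header byte array purely by bit-shifting, skipping positions marked with a negative shift. A minimal standalone sketch of that mechanism (names are illustrative; this is not the removed Derby class itself):

```java
// Minimal sketch of the shift-based header update described in the removed
// StreamHeaderHolder: each header byte is either a shifted slice of the new
// length (shift >= 0) or kept verbatim (shift < 0).
public class HeaderUpdateSketch {

    /** Re-encode {@code length} into a copy of {@code hdr} using {@code shifts}. */
    static byte[] updateLength(byte[] hdr, byte[] shifts, int length) {
        byte[] newHdr = new byte[hdr.length];
        for (int i = 0; i < hdr.length; i++) {
            newHdr[i] = shifts[i] >= 0 ? (byte) (length >>> shifts[i]) : hdr[i];
        }
        return newHdr;
    }

    public static void main(String[] args) {
        // The 10.5 Clob header layout from the commit: four char-count bytes
        // around a fixed 0xF0 magic byte (shift -1 means "leave unchanged").
        byte[] hdr = {0x00, 0x00, (byte) 0xF0, 0x00, 0x00};
        byte[] shifts = {24, 16, -1, 8, 0};
        byte[] updated = updateLength(hdr, shifts, 0x01020304);
        for (byte b : updated) {
            System.out.printf("%02x ", b & 0xff); // prints: 01 02 f0 03 04
        }
        System.out.println();
    }
}
```

As the removed class's own JavaDoc notes, this is a deliberately simple scheme that only works while header layouts stay this regular.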
derby-DERBY-3907-6f4c92af
DERBY-3907 (partial): Save useful length information for Clobs in store. Started using the new framework for handling stream headers for string data values. The behavior regarding stream headers is kept unchanged, but the code is now ready to deal with multiple stream header formats. Short description: * EmbedResultSet & EmbedPreparedStatement Adjusted code to use the new interface method and pass in the correct class to the ReaderToUTF8Stream constructor. Note the special case of telling the DVD/generator if the database being accessed is in soft upgrade mode in EmbedResultSet. * ArrayInputStream The stream header is no longer read inside readDerbyUTF. * ReaderToUTF8Stream Adjusted code to use the new StreamHeaderGenerator interface, and made the stream count the number of characters encountered. If possible, the header is updated when the stream has been drained. * StringDataValue Added methods getStreamHeaderGenerator and setSoftUpgradeMode. * SQLChar Refactoring in preparation for handling multiple stream header formats. Pulled common code out into writeUTF. The header generator is now responsible for writing both the header bytes and an EOF marker if required. Made a second readExternal method, which does not read the stream header; reading the header must now be done outside of this method, and any length information is passed in as arguments. Implemented the new methods in StringDataValue. * SQLClob Adjusted a single call to ReaderToUTF8Stream. * UTF8UtilTest Adjusted code invoking the new ReaderToUTF8Stream constructor. Patch file: derby-3907-7a2-use_new_framework.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@736636 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/services/io/ArrayInputStream.java", "hunks": [ { "added": [], "header": "@@ -23,7 +23,6 @@ package org.apache.derby.iapi.services.io;", "removed": [ "import java.io.ObjectInput;" ] }, { "added": [ " * <p>", " * The stream must be positioned on the first user byte when this method", " * is invoked." ], "header": "@@ -373,6 +372,9 @@ public final class ArrayInputStream extends InputStream implements LimitObjectIn", "removed": [] }, { "added": [ " * @param utflen the byte length of the value, or {@code 0} if unknown", " public final int readDerbyUTF(char[][] rawData_array, int utflen)" ], "header": "@@ -385,10 +387,11 @@ public final class ArrayInputStream extends InputStream implements LimitObjectIn", "removed": [ " public final int readDerbyUTF(char[][] rawData_array) " ] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/ReaderToUTF8Stream.java", "hunks": [ { "added": [ " /** Constant indicating the first iteration of {@code fillBuffer}. */", " private final static int FIRST_READ = Integer.MIN_VALUE;" ], "header": "@@ -47,6 +47,8 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " * The generator for the stream header to use for this stream.", " private final StreamHeaderGenerator hdrGen;", " /** The length of the header. */", " private int headerLength;", "" ], "header": "@@ -59,19 +61,13 @@ public final class ReaderToUTF8Stream", "removed": [ " * The stream header to use for this stream.", " * <p>", " * The holder object is immutable, and the header should not have to be", " * changed, but we may replace it as an optimizataion. If the length of", " * the stream is unknown at the start of the insertion and the whole stream", " * content fits into the buffer, the header is updated with the length", " * after the source stream has been drained. 
This means that even though", " * the object is immutable and the reference final, another header may be", " * written to the stream.", " private final StreamHeaderHolder header;", " " ] }, { "added": [ " /** The number of chars encoded. */", " private int charCount;" ], "header": "@@ -92,6 +88,8 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " * @param headerGenerator the stream header generator", " StreamHeaderGenerator headerGenerator) {", " this.hdrGen = headerGenerator;" ], "header": "@@ -107,18 +105,19 @@ public final class ReaderToUTF8Stream", "removed": [ " StreamHeaderHolder headerHolder) {", " this.header = headerHolder;" ] }, { "added": [ " * @param headerGenerator the stream header generator", " StreamHeaderGenerator headerGenerator) {", " this(appReader, -1 * maximumLength, 0, typeName, headerGenerator);" ], "header": "@@ -145,13 +144,14 @@ public final class ReaderToUTF8Stream", "removed": [ " StreamHeaderHolder headerHolder) {", " this(appReader, -1 * maximumLength, 0, typeName, headerHolder);" ] }, { "added": [ " fillBuffer(FIRST_READ);" ], "header": "@@ -183,7 +183,7 @@ public final class ReaderToUTF8Stream", "removed": [ " fillBuffer(header.copyInto(buffer, 0));" ] }, { "added": [ " fillBuffer(FIRST_READ);" ], "header": "@@ -230,7 +230,7 @@ public final class ReaderToUTF8Stream", "removed": [ " fillBuffer(header.copyInto(buffer, 0));" ] }, { "added": [ " if (startingOffset == FIRST_READ) {", " // Generate the header. 
Provide the char length only if the header", " // encodes a char count and we actually know the char count.", " if (hdrGen.expectsCharCount() && valueLength >= 0) {", " headerLength = hdrGen.generateInto(buffer, 0, valueLength);", " } else {", " headerLength = hdrGen.generateInto(buffer, 0, -1);", " }", " // Make startingOffset point at the first byte after the header.", " startingOffset = headerLength;", " }" ], "header": "@@ -287,6 +287,17 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " charCount++; // Increment the character count." ], "header": "@@ -301,6 +312,7 @@ public final class ReaderToUTF8Stream", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ " * Stream header generator for CHAR, VARCHAR and LONG VARCHAR. Currently,", " * only one header format is used for these data types.", " protected static final StreamHeaderGenerator CHAR_HEADER_GENERATOR =", " new CharStreamHeaderGenerator();" ], "header": "@@ -145,15 +145,11 @@ public class SQLChar", "removed": [ " * Static stream header holder with the header used for a 10.4 (and earlier)", " * stream with unknown byte length. This header will be used with 10.4 or", " * earlier databases, and sometimes also in newer databases for the other", " * string data types beside of Clob. 
The expected EOF marker is", " * '0xE0 0x00 0x00'.", " protected static final StreamHeaderHolder UNKNOWN_LEN_10_4_HEADER_HOLDER =", " new StreamHeaderHolder(", " new byte[] {0x00, 0x00}, new byte[] {8, 0}, false, true);" ] }, { "added": [ " Writes a non-Clob data value to the modified UTF-8 format used by Derby.", "" ], "header": "@@ -765,6 +761,8 @@ public class SQLChar", "removed": [] }, { "added": [ " StreamHeaderGenerator header = getStreamHeaderGenerator();", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(!header.expectsCharCount());", " // Generate the header, write it to the destination stream, write the", " // user data and finally write an EOF-marker is required.", " header.generateInto(out, utflen);", " writeUTF(out, strlen, isRaw);", " header.writeEOF(out, utflen);", " }", " /**", " * Writes the user data value to a stream in the modified UTF-8 format.", " *", " * @param out destination stream", " * @param strLen string length of the value", " * @param isRaw {@code true} if the source is {@code rawData}, {@code false}", " * if the source is {@code value}", " * @throws IOException if writing to the destination stream fails", " */", " private final void writeUTF(ObjectOutput out, int strLen,", " final boolean isRaw)", " throws IOException {", " // Copy the source reference into a local variable (optimization).", " final char[] data = isRaw ? rawData : null;", " final String lvalue = isRaw ? 
null : value;", "", " // Iterate through the value and write it as modified UTF-8.", " for (int i = 0 ; i < strLen ; i++) {" ], "header": "@@ -853,18 +851,35 @@ public class SQLChar", "removed": [ " boolean isLongUTF = false;", " // for length than 64K, see format description above", " if (utflen > 65535)", " {", " isLongUTF = true;", " utflen = 0;", " out.write((utflen >>> 8) & 0xFF);", " out.write((utflen >>> 0) & 0xFF);", " for (int i = 0 ; i < strlen ; i++)", " {" ] }, { "added": [], "header": "@@ -882,15 +897,6 @@ public class SQLChar", "removed": [ "", " if (isLongUTF)", " {", " // write the following 3 bytes to terminate the string:", " // (11100000, 00000000, 00000000)", " out.write(0xE0);", " out.write(0);", " out.write(0);", " }" ] }, { "added": [ " resetForMaterialization();", " int utfLen = (((in.read() & 0xFF) << 8) | (in.read() & 0xFF));", " if (rawData == null || rawData.length < utfLen) {", " // This array may be as much as three times too big. This happens", " // if the content is only 3-byte characters (i.e. 
CJK).", " // TODO: Decide if a threshold should be introduced, where the", " // content is copied to a smaller array if the number of", " // unused array positions exceeds the threshold.", " rawData = new char[utfLen];", " }", " rawLength = in.readDerbyUTF(arg_passer, utfLen);", " }", " char[][] arg_passer = new char[1][];", " /**", " * Resets state after materializing value from an array.", " */", " private void resetForMaterialization() {", " // Read the stored length in the stream header.", " readExternal(in, utflen, 0);", " }", " /**", " * Restores the data value from the source stream, materializing the value", " * in memory.", " *", " * @param in the source stream", " * @param utflen the byte length, or {@code 0} if unknown", " * @param knownStrLen the char length, or {@code 0} if unknown", " * @throws UTFDataFormatException if an encoding error is detected", " * @throws IOException if reading the stream fails", " */", " protected void readExternal(ObjectInput in, int utflen,", " final int knownStrLen)", " throws IOException {" ], "header": "@@ -920,26 +926,52 @@ public class SQLChar", "removed": [ " rawLength = in.readDerbyUTF(arg_passer);", "", " // restoreToNull();", "", " char[][] arg_passer = new char[1][];", " // if in.available() blocked at 0, use this default string size ", "" ] }, { "added": [ " resetForMaterialization();", " while (((strlen < knownStrLen) || (knownStrLen == 0)) &&", " ((count < utflen) || (utflen == 0)))" ], "header": "@@ -974,13 +1006,13 @@ public class SQLChar", "removed": [ " restoreToNull();", "", " while ( ((count < utflen) || (utflen == 0)))" ] }, { "added": [ " throw new UTFDataFormatException(", " \"Invalid code point: \" + Integer.toHexString(c));" ], "header": "@@ -1106,8 +1138,8 @@ readingLoop:", "removed": [ "", " throw new UTFDataFormatException();" ] }, { "added": [], "header": "@@ -1116,8 +1148,6 @@ readingLoop:", "removed": [ " ", " cKey = null;" ] } ] }, { "file": 
"java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.dictionary.DataDictionary;" ], "header": "@@ -77,6 +77,7 @@ import java.util.Arrays;", "removed": [] }, { "added": [ " // In the case of updatable result sets, we cannot guarantee that a", " // context is pushed when the header needs to be generated. To fix", " // this, tell the DVD/generator whether we are running in soft", " // upgrade mode or not.", " dvd.setSoftUpgradeMode(Boolean.valueOf(", " !getEmbedConnection().getDatabase().getDataDictionary().", " checkVersion(DataDictionary.DD_VERSION_CURRENT, null)));" ], "header": "@@ -2926,6 +2927,13 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [] } ] } ]
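The code this commit removes from SQLChar (visible in the hunks above) implemented the pre-10.5 framing: a 2-byte big-endian byte count, written as zero and terminated by a Derby-specific 0xE0 0x00 0x00 end-of-stream marker when the count does not fit in 16 bits. An illustrative standalone sketch of that framing, not Derby's actual SQLChar code:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch of the legacy (10.4 and earlier) string framing that this commit's
// refactoring moves into the StreamHeaderGenerator implementations.
public class LegacyHeaderSketch {

    static byte[] frame(byte[] utf8Data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int utflen = utf8Data.length;
        boolean isLongUTF = utflen > 65535;
        if (isLongUTF) {
            utflen = 0; // too large for the 2-byte header: encode as "unknown"
        }
        out.write((utflen >>> 8) & 0xFF); // high length byte
        out.write(utflen & 0xFF);         // low length byte
        out.write(utf8Data, 0, utf8Data.length);
        if (isLongUTF) {
            // Derby-specific EOF marker terminates unknown-length values:
            // 11100000 00000000 00000000
            out.write(0xE0);
            out.write(0x00);
            out.write(0x00);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] small = frame("hi".getBytes(StandardCharsets.UTF_8));
        System.out.println(small.length); // 2-byte header + 2 data bytes -> 4
        byte[] big = frame(new byte[70000]);
        System.out.println(big.length);   // header + data + 3-byte EOF -> 70005
    }
}
```

The zero-length header is why readers of this format could not tell "empty" from "longer than 64 KB" without scanning for the EOF marker, which is the limitation the DERBY-3907 header work addresses for Clobs.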
derby-DERBY-3907-7672693c
DERBY-3907: Save useful length information for Clobs in store. These changes make Derby start using the stream descriptor object to correctly handle streams from store. This is done to be able to use several header formats, where a varying number of bytes is used. Description of changes: o EmbedClob Changed constructor to take a StringDataValue instead of a DataValueDescriptor. Updated call to the StoreStreamClob constructor. o EmbedResultSet Started using the getStreamWithDescriptor method and updated invocations of the UTF8Reader constructor. o StoreStreamClob Added a CharacterStreamDescriptor (CSD), and made the constructor take one as an argument. Adapted the class to use a CSD. o UTF8Reader Updated some comments. Fixed a bug where the header length wasn't added to the byte length of the stream, and updated the class appropriately (adjusted utfCount, fixed the reset routine). Made sure the header bytes are skipped (either by skipping them in the constructor or by adjusting the position on the next reposition). o ResultSetStreamTest Added a test for maxFieldSize, where truncation has to happen. o Various tests Adjusted tests to run with the new implementation. Patch file: derby-3907-5a-use_getStreamWithDescriptor.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@734630 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;", "import org.apache.derby.iapi.types.StringDataValue;" ], "header": "@@ -25,15 +25,11 @@ package org.apache.derby.impl.jdbc;", "removed": [ "import org.apache.derby.iapi.types.DataValueDescriptor;", "import org.apache.derby.impl.jdbc.ConnectionChild;", "import org.apache.derby.impl.jdbc.EmbedConnection;", "import org.apache.derby.impl.jdbc.Util;", "import org.apache.derby.impl.jdbc.ReaderToAscii;", "import java.io.InputStream;" ] }, { "added": [ " * @param dvd string data value descriptor providing the Clob source", " protected EmbedClob(EmbedConnection con, StringDataValue dvd)" ], "header": "@@ -101,11 +97,11 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " * @param dvd data value descriptor providing the Clob source", " protected EmbedClob(EmbedConnection con, DataValueDescriptor dvd)" ] }, { "added": [ " CharacterStreamDescriptor csd = dvd.getStreamWithDescriptor();", " if (csd == null) {" ], "header": "@@ -115,9 +111,9 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB", "removed": [ " InputStream storeStream = dvd.getStream();", " if (storeStream == null) {" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java", "hunks": [ { "added": [ "", "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;" ], "header": "@@ -75,6 +75,8 @@ import java.net.URL;", "removed": [] }, { "added": [ " StringDataValue dvd = (StringDataValue)getColumn(columnIndex);", " CharacterStreamDescriptor csd = dvd.getStreamWithDescriptor();", " if (csd == null) {" ], "header": "@@ -1123,18 +1125,16 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t\tDataValueDescriptor dvd = getColumn(columnIndex);", "\t\t\tStreamStorable ss = (StreamStorable) dvd;", "", "\t\t\tInputStream stream = ss.returnStream();", 
"\t\t\tif (stream == null) {" ] }, { "added": [ " // See if we have to enforce a max field size.", " if (lmfs > 0) {", " csd = new CharacterStreamDescriptor.Builder().copyState(csd).", " maxCharLength(lmfs).build();", " }", " java.io.Reader ret = new UTF8Reader(csd, this, syncLock);" ], "header": "@@ -1146,7 +1146,12 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t\tjava.io.Reader ret = new UTF8Reader(stream, lmfs, this, syncLock);" ] }, { "added": [ "\t\t\t\tStringDataValue dvd = (StringDataValue)getColumn(columnIndex);", "\t\t\t\t// since a Clob may keep a pointer to a long column in the" ], "header": "@@ -4020,13 +4025,13 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [ "\t\t\t\tDataValueDescriptor dvd = getColumn(columnIndex);", "\t\t\t\t// since a blob may keep a pointer to a long column in the" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.sanity.SanityManager;" ], "header": "@@ -34,8 +34,8 @@ import java.sql.SQLException;", "removed": [ "import org.apache.derby.iapi.types.TypeId;" ] }, { "added": [ " * <ol> <li>The first few bytes are used for length encoding. Currently the", " * number of bytes is either 2 or 5." ], "header": "@@ -44,10 +44,8 @@ import org.apache.derby.iapi.util.UTF8Util;", "removed": [ " * <ol> <li>The first two bytes are used for length encoding. Note that due to", " * the inadequate max number of this format, it is always ignored. This", " * is also true if there actually is a length encoded there. The two", " * bytes are excluded from the length of the stream." ] }, { "added": [ " /** The descriptor used to describe the underlying source stream. 
*/", " private CharacterStreamDescriptor csd;" ], "header": "@@ -66,14 +64,8 @@ final class StoreStreamClob", "removed": [ " /**", " * The cached length of the store stream in number of characters.", " * A value of {@code 0} means the length is unknown, and zero is an invalid", " * length for a store stream Clob. It is set to zero because that is the", " * value encoded as length in the store stream (on disk format) when the", " * length is unknown or cannot be represented.", " */", " private long cachedCharLength = 0;" ] }, { "added": [ " * The stream used as a source for this Clob has to implement the interface", " * {@code Resetable}, as the stream interface from store only allows for", " * movement forwards. If the stream has been advanced too far with regards", " * to the user request, the stream must be reset and we start from the", " * beginning.", " * @param csd descriptor for the source stream, including a reference to it", " public StoreStreamClob(CharacterStreamDescriptor csd,", " ConnectionChild conChild)", " if (SanityManager.DEBUG) {", " // We create a position aware stream below, the stream is not", " // supposed to be a position aware stream already!", " SanityManager.ASSERT(!csd.isPositionAware());", " }", " this.positionedStoreStream = ", " new PositionedStoreStream(csd.getStream());" ], "header": "@@ -94,28 +86,26 @@ final class StoreStreamClob", "removed": [ " * Note that the stream passed in have to fulfill certain requirements,", " * which are not currently totally enforced by Java (the language).", " * @param stream the stream containing the Clob value. This stream is", " * expected to implement {@link Resetable} and to be a", " * {@link org.apache.derby.iapi.services.io.FormatIdInputStream} with", " * an ${link org.apache.derby.impl.store.raw.data.OverflowInputStream}", " * inside. 
However, the available interfaces does not guarantee this.", " * See the class JavaDoc for more information about this stream.", " * @throws StandardException if initializing the store stream fails", " * @throws NullPointerException if <code>stream</code> or", " * <code>conChild</code> is null", " * @throws ClassCastException if <code>stream</code> is not an instance", " * of <code>Resetable</code>", " * @see org.apache.derby.iapi.services.io.FormatIdInputStream", " * @see org.apache.derby.impl.store.raw.data.OverflowInputStream", " public StoreStreamClob(InputStream stream, ConnectionChild conChild)", " this.positionedStoreStream = new PositionedStoreStream(stream);" ] }, { "added": [ " if (SanityManager.DEBUG) {", " // Creating the positioned stream should reset the stream.", " SanityManager.ASSERT(positionedStoreStream.getPosition() == 0);", " }", " this.csd = new CharacterStreamDescriptor.Builder().copyState(csd).", " stream(positionedStoreStream). // Replace with positioned stream", " positionAware(true). 
// Update description", " curBytePos(0L).", " curCharPos(CharacterStreamDescriptor.BEFORE_FIRST).", " build();" ], "header": "@@ -129,6 +119,16 @@ final class StoreStreamClob", "removed": [] }, { "added": [ " if (this.csd.getCharLength() == 0) {", " long charLength = 0;", " charLength = UTF8Util.skipUntilEOF(" ], "header": "@@ -154,12 +154,13 @@ final class StoreStreamClob", "removed": [ " if (this.cachedCharLength == 0) {", " this.cachedCharLength = UTF8Util.skipUntilEOF(" ] }, { "added": [ " // Update the stream descriptor.", " this.csd = new CharacterStreamDescriptor.Builder().", " copyState(this.csd).charLength(charLength).build();", " return this.csd.getCharLength();" ], "header": "@@ -167,8 +168,11 @@ final class StoreStreamClob", "removed": [ " return this.cachedCharLength;" ] }, { "added": [ " this.positionedStoreStream.reposition(this.csd.getDataOffset());" ], "header": "@@ -188,7 +192,7 @@ final class StoreStreamClob", "removed": [ " this.positionedStoreStream.reposition(2L);" ] }, { "added": [], "header": "@@ -213,15 +217,6 @@ final class StoreStreamClob", "removed": [ " // Describe the stream to allow the reader to configure itself.", " CharacterStreamDescriptor csd =", " new CharacterStreamDescriptor.Builder().", " stream(positionedStoreStream).bufferable(false).", " positionAware(true).dataOffset(2L). // TODO", " curCharPos(CharacterStreamDescriptor.BEFORE_FIRST).", " maxCharLength(TypeId.CLOB_MAXWIDTH).", " charLength(cachedCharLength). // 0 means unknown.", " build();" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [ " /** Stream that can reposition itself on request (may be {@code null}). */", " /**", " * Store the last visited position in the store stream, if it is capable of", " * repositioning itself ({@code positionedIn != null}).", " */", " /** Number of bytes read from the stream, including any header bytes. 
*/" ], "header": "@@ -62,11 +62,14 @@ public final class UTF8Reader extends Reader", "removed": [ " /** Stream that can reposition itself on request. */", " /** Store last visited position in the store stream. */", " /** Number of bytes read from the stream. */" ] }, { "added": [ " positionAware(positionedIn != null).", " byteLength(utfLen == 0 ? 0 : utfLen +2). // Add header bytes", " utfCount = 2;", " Object sync)", " throws IOException {" ], "header": "@@ -170,13 +173,16 @@ public final class UTF8Reader extends Reader", "removed": [ " positionAware(positionedIn != null).byteLength(utfLen).", " Object sync) {" ] }, { "added": [ " if (csd.isPositionAware()) {", " // Check and save the stream state.", " if (SanityManager.DEBUG) {", " this.rawStreamPos = positionedIn.getPosition();", " // Make sure we start at the first data byte, not in the header.", " // The position will be changed on the next buffer fill.", " if (rawStreamPos < csd.getDataOffset()) {", " rawStreamPos = csd.getDataOffset();", " }", " } else {", " // Skip the header if required.", " if (csd.getCurBytePos() < csd.getDataOffset()) {", " csd.getStream().skip(csd.getDataOffset() - csd.getCurBytePos());", " }" ], "header": "@@ -186,17 +192,23 @@ public final class UTF8Reader extends Reader", "removed": [ " // Check and save the stream state.", " if (SanityManager.DEBUG) { ", " if (csd.isPositionAware()) {", " }", " this.rawStreamPos = positionedIn.getPosition();", " // Make sure we start at the first data byte, not in the header.", " if (rawStreamPos < csd.getDataOffset()) {", " rawStreamPos = csd.getDataOffset();" ] }, { "added": [ " // Add the header portion to the utfCount.", " utfCount = csd.getDataOffset();" ], "header": "@@ -205,6 +217,8 @@ public final class UTF8Reader extends Reader", "removed": [] }, { "added": [ " final long utfLen = csd.getByteLength();", " final long maxFieldSize = csd.getMaxCharLength();" ], "header": "@@ -462,8 +476,8 @@ public final class UTF8Reader extends Reader", 
"removed": [ " long utfLen = csd.getByteLength();", " long maxFieldSize = csd.getMaxCharLength();" ] } ] } ]
derby-DERBY-3907-7af67265
DERBY-3907 (partial): Save useful length information for Clobs in store. Added the framework required to handle multiple stream header formats for stream sources. Note that handling of the new header format is not yet added, so the code should behave as before, using only the old header format. A description of the changes: o EmbedResultSet and EmbedPreparedStatement Started using the new ReaderToUTF8Stream constructor, where the stream header is passed in. Also started to treat the DataValueDescriptor as a StringDataValue, which should always be the case at this point in the code. o ReaderToUTF8Stream Added field 'header', which holds a StreamHeaderHolder coming from a StringDataValue object. Updated the constructors with a new argument. The first execution of fillBuffer now uses the header holder to obtain the header length, and the header holder object is consulted when checking if the header can be updated with the length after the application stream has been drained. Note that updating the header with a character count is not yet supported. o StringDataValue Added new method generateStreamHeader. o SQLChar Implemented generateStreamHeader, which always returns a header for a stream with unknown length (see the constant). o SQLClob Added a constant for a 10.5 stream header holder representing a stream with unknown character length. Also updated the use of the ReaderToUTF8Stream constructor. o StreamHeaderHolder (new file) Holder object for a stream header, containing the header itself and the following additional information: "instructions" on how to update the header with a new length, whether the length is expected to be in number of bytes or characters, and whether an EOF marker is expected to be appended to the stream. o UTF8UtilTest Updated usage of the ReaderToUTF8Stream constructor, and replaced the hardcoded byte count to skip with a call to the header holder object. 
o jdbc4.ClobTest Added some simple tests inserting and fetching Clobs to test the basics of stream header handling. o StreamTruncationTest (new file) New test testing truncation of string data values when they are inserted as streams. Patch file: derby-3907-2c-header_write_preparation.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@734065 13f79535-47bb-0310-9956-ffa450edef68
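To make the role of the holder object concrete, here is a minimal sketch of what a StreamHeaderHolder-style class might look like. The class name, fields, and method names are illustrative reconstructions from the description above and from the methods the diff calls (headerLength, expectsCharLength, writeEOF, copyInto) — not Derby's actual source.

```java
// Illustrative sketch of an immutable holder for a pre-generated stream
// header. Names and layout are assumptions based on the commit message,
// not Derby's real StreamHeaderHolder implementation.
final class StreamHeaderHolderSketch {
    private final byte[] header;       // the raw header bytes
    private final boolean charLength;  // header encodes a char count, not a byte count
    private final boolean writeEOF;    // must an EOF marker follow the data?

    StreamHeaderHolderSketch(byte[] header, boolean charLength, boolean writeEOF) {
        this.header = header.clone();  // defensive copy keeps the holder immutable
        this.charLength = charLength;
        this.writeEOF = writeEOF;
    }

    int headerLength() { return header.length; }
    boolean expectsCharLength() { return charLength; }
    boolean writeEOF() { return writeEOF; }

    /** Copies the header into buf at offset; returns the first free index. */
    int copyInto(byte[] buf, int offset) {
        System.arraycopy(header, 0, buf, offset, header.length);
        return offset + header.length;
    }
}
```

With a holder like this, fillBuffer can start writing user data at the index returned by copyInto, which matches the `fillBuffer(header.copyInto(buffer, 0))` calls visible in the diff hunks above.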
[ { "file": "java/engine/org/apache/derby/iapi/types/ReaderToUTF8Stream.java", "hunks": [ { "added": [ " /**", " * The stream header to use for this stream.", " * <p>", " * The holder object is immutable, and the header should not have to be", " * changed, but we may replace it as an optimizataion. If the length of", " * the stream is unknown at the start of the insertion and the whole stream", " * content fits into the buffer, the header is updated with the length", " * after the source stream has been drained. This means that even though", " * the object is immutable and the reference final, another header may be", " * written to the stream.", " * @see #checkSufficientData()", " */", " private final StreamHeaderHolder header;" ], "header": "@@ -58,6 +58,19 @@ public final class ReaderToUTF8Stream", "removed": [] }, { "added": [ " String typeName,", " StreamHeaderHolder headerHolder) {", " this.header = headerHolder;" ], "header": "@@ -98,12 +111,14 @@ public final class ReaderToUTF8Stream", "removed": [ " String typeName) {" ] }, { "added": [ " String typeName,", " StreamHeaderHolder headerHolder) {", " this(appReader, -1 * maximumLength, 0, typeName, headerHolder);" ], "header": "@@ -134,8 +149,9 @@ public final class ReaderToUTF8Stream", "removed": [ " String typeName) {", " this(appReader, -1 * maximumLength, 0, typeName);" ] }, { "added": [ " fillBuffer(header.copyInto(buffer, 0));" ], "header": "@@ -167,7 +183,7 @@ public final class ReaderToUTF8Stream", "removed": [ "\t\t\tfillBuffer(2);" ] }, { "added": [ " fillBuffer(header.copyInto(buffer, 0));" ], "header": "@@ -214,7 +230,7 @@ public final class ReaderToUTF8Stream", "removed": [ "\t\t\tfillBuffer(2);" ] }, { "added": [ " // can put the correct length into the stream.", " if (!multipleBuffer) {", " StreamHeaderHolder tmpHeader = header;", " if (header.expectsCharLength()) {", " if (SanityManager.DEBUG) {", " SanityManager.THROWASSERT(\"Header update with character \" +", " \"length is not yet 
supported\");", " }", " } else {", " int utflen = blen - header.headerLength(); // Length in bytes", " tmpHeader = header.updateLength(utflen, false);", " // Update the header we have already written to our buffer,", " // still at postition zero.", " tmpHeader.copyInto(buffer, 0);", " if (SanityManager.DEBUG) {", " // Check that we didn't overwrite any of the user data.", " SanityManager.ASSERT(", " header.headerLength() == tmpHeader.headerLength());", " }", " }", " // The if below is temporary, it won't be necessary when support", " // for writing the new header has been added.", " if (tmpHeader.writeEOF()) {", " // Write the end-of-stream marker.", " buffer[blen++] = (byte) 0xE0;", " buffer[blen++] = (byte) 0x00;", " buffer[blen++] = (byte) 0x00;", " }", " } else if (header.writeEOF()) {", " // Write the end-of-stream marker.", " buffer[blen++] = (byte) 0xE0;", " buffer[blen++] = (byte) 0x00;", " buffer[blen++] = (byte) 0x00;", " }", " }" ], "header": "@@ -369,23 +385,42 @@ public final class ReaderToUTF8Stream", "removed": [ "\t\t", "\t\t// can put the correct length into the stream.", "\t\tif (!multipleBuffer)", "\t\t{", "\t\t\tint utflen = blen - 2;", "\t\t\tbuffer[0] = (byte) ((utflen >>> 8) & 0xFF);", "\t\t\tbuffer[1] = (byte) ((utflen >>> 0) & 0xFF);", "", "\t\t}", "\t\telse", "\t\t{", "\t\t\tbuffer[blen++] = (byte) 0xE0;", "\t\t\tbuffer[blen++] = (byte) 0x00;", "\t\t\tbuffer[blen++] = (byte) 0x00;", "\t\t}", "\t}" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ " /**", " * Static stream header holder with the header used for a 10.4 (and earlier)", " * stream with unknown byte length. This header will be used with 10.4 or", " * earlier databases, and sometimes also in newer databases for the other", " * string data types beside of Clob. 
The expected EOF marker is", " * '0xE0 0x00 0x00'.", " */", " protected static final StreamHeaderHolder UNKNOWN_LEN_10_4_HEADER_HOLDER =", " new StreamHeaderHolder(", " new byte[] {0x00, 0x00}, new byte[] {8, 0}, false, true);", "" ], "header": "@@ -144,6 +144,17 @@ public class SQLChar", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLClob.java", "hunks": [ { "added": [ "", " /**", " * Static stream header holder with the header used for a 10.5", " * stream with unknown char length. This header will be used with 10.5, and", " * possibly later databases. The expected EOF marker is '0xE0 0x00 0x00'.", " */", " protected static final StreamHeaderHolder UNKNOWN_LEN_10_5_HEADER_HOLDER =", " new StreamHeaderHolder(", " new byte[] {0x00, 0x00, (byte)0xF0, 0x00, 0x00},", " new byte[] {24, 16, -1, 8, 0}, true, true);", "" ], "header": "@@ -47,6 +47,17 @@ import java.util.Calendar;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java", "hunks": [ { "added": [], "header": "@@ -25,11 +25,8 @@ import org.apache.derby.iapi.services.sanity.SanityManager;", "removed": [ "import org.apache.derby.iapi.sql.dictionary.DataDictionary;", "import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;", "import org.apache.derby.iapi.sql.ResultSet;" ] }, { "added": [], "header": "@@ -40,8 +37,6 @@ import org.apache.derby.iapi.types.ReaderToUTF8Stream;", "removed": [ "import org.apache.derby.iapi.services.io.LimitReader;", "" ] }, { "added": [ "import org.apache.derby.iapi.types.StringDataValue;" ], "header": "@@ -63,15 +58,13 @@ import java.sql.Clob;", "removed": [ "import java.io.DataInputStream;", "import java.io.IOException;", "import java.io.EOFException;" ] }, { "added": [ " final StringDataValue dvd = (StringDataValue)", " getParms().getParameter(parameterIndex -1);" ], "header": "@@ -740,7 +733,8 @@ public abstract class EmbedPreparedStatement", "removed": [ " ParameterValueSet pvs = getParms();" ] }, { 
"added": [ " truncationLength, getParameterSQLType(parameterIndex),", " dvd.generateStreamHeader(length));", " getParameterSQLType(parameterIndex),", " dvd.generateStreamHeader(-1));" ], "header": "@@ -787,12 +781,14 @@ public abstract class EmbedPreparedStatement", "removed": [ " truncationLength, getParameterSQLType(parameterIndex));", " getParameterSQLType(parameterIndex));" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.types.StringDataValue;" ], "header": "@@ -75,6 +75,7 @@ import java.net.URL;", "removed": [] }, { "added": [ " final StringDataValue dvd = (StringDataValue)", " getDVDforColumnToBeUpdated(columnIndex, updateMethodName);" ], "header": "@@ -2918,6 +2919,8 @@ public abstract class EmbedResultSet extends ConnectionChild", "removed": [] } ] } ]
derby-DERBY-3907-cf0fdc4f
DERBY-3907 (partial): Save useful length information for Clobs in store. Added StringDataValue.getStreamWithDescriptor(). It is intended to be used when getting a stream from a StringDataValue to be used with a Clob object, or with streaming of string data values in general. The DVD is responsible for returning a correct descriptor for the raw stream. The descriptor is in turn used by other classes to correctly configure themselves with respect to data offsets, buffering, repositioning and so on. Changes: o CharacterStreamDescriptor Added a toString method and more verbose assert-messages. o StringDataValue Added method 'CharacterStreamDescriptor getStreamWithDescriptor()'. o SQLChar Made setStream non-final so it can be overridden in SQLClob. Added default implementation of getStreamWithDescriptor that always returns null. This means that all non-Clob string data types will be handled as strings instead of streams in situations where a stream is requested through getStreamWithDescriptor. This might be changed. Made throwStreamingIOException protected to access it from SQLClob. o SQLClob Implemented getStreamWithDescriptor, handling the old 2-byte format only. Overrode setStream to discard the stream descriptor when a new stream is set for the DVD. Patch file: derby-3907-4a-add_getStreamWithDescriptor.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@734148 13f79535-47bb-0310-9956-ffa450edef68
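The "old 2-byte format" that getStreamWithDescriptor handles is a big-endian unsigned byte count, with zero meaning "length unknown" (such streams end with the 0xE0 0x00 0x00 end-of-stream marker instead). A self-contained sketch of that decoding step follows; the class and method names are mine, but the bit manipulation mirrors the SQLClob hunk in this patch.

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch of decoding the pre-10.5 CLOB stream header: two bytes holding
// a big-endian UTF-8 byte count, where 0 means "unknown length".
// Class/method names are illustrative, not Derby's API.
final class OldClobHeader {
    static final int DATA_OFFSET = 2; // user data starts after the header bytes

    /** Returns the total byte length (header + data), or 0 if unknown. */
    static long readByteLength(InputStream in) throws IOException {
        byte[] header = new byte[DATA_OFFSET];
        int read = in.read(header);
        if (read != DATA_OFFSET) {
            throw new IOException("Invalid stream header length " + read);
        }
        long utflen = ((header[0] & 0xff) << 8) | (header[1] & 0xff);
        // Add the two header bytes only when the data length is known,
        // matching the logic in SQLClob.getStreamWithDescriptor.
        return utflen > 0 ? utflen + DATA_OFFSET : 0;
    }
}
```

The returned value is what the descriptor's byteLength field carries, while dataOffset stays fixed at 2 for this format.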
[ { "file": "java/engine/org/apache/derby/iapi/jdbc/CharacterStreamDescriptor.java", "hunks": [ { "added": [ " SanityManager.ASSERT(curBytePos >= 0, \"Negative curBytePos\");", " curCharPos == BEFORE_FIRST, \"Invalid curCharPos \" +", " \"(BEFORE_FIRST=\" + BEFORE_FIRST + \"), \" + toString());", " SanityManager.ASSERT(byteLength >= 0, \"Negative byteLength\");", " SanityManager.ASSERT(charLength >= 0, \"Negative charLength\");", " SanityManager.ASSERT(dataOffset >= 0, \"Negative dataOffset\");", " SanityManager.ASSERT(maxCharLength >= 0, \"Negative max length\");" ], "header": "@@ -375,13 +375,14 @@ public class CharacterStreamDescriptor {", "removed": [ " SanityManager.ASSERT(curBytePos >= 0);", " curCharPos == BEFORE_FIRST);", " SanityManager.ASSERT(byteLength >= 0);", " SanityManager.ASSERT(charLength >= 0);", " SanityManager.ASSERT(dataOffset >= 0);", " SanityManager.ASSERT(maxCharLength >= 0);" ] }, { "added": [ " SanityManager.ASSERT(curCharPos == BEFORE_FIRST,", " \"curCharPos in header, \" + toString());", " SanityManager.ASSERT(byteLength - dataOffset >= charLength,", " \"Less than one byte per char, \" + toString());", " SanityManager.ASSERT(stream instanceof PositionedStream,", " \"Stream not a positioned stream, \" + toString());" ], "header": "@@ -395,16 +396,19 @@ public class CharacterStreamDescriptor {", "removed": [ " SanityManager.ASSERT(curCharPos == BEFORE_FIRST);", " SanityManager.ASSERT(byteLength - dataOffset >= charLength);", " SanityManager.ASSERT(stream instanceof PositionedStream);" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;" ], "header": "@@ -40,6 +40,7 @@ import org.apache.derby.iapi.types.ConcatableDataValue;", "removed": [] }, { "added": [], "header": "@@ -66,7 +67,6 @@ import java.sql.PreparedStatement;", "removed": [ "import java.text.CollationElementIterator;" ] }, { "added": [ " public void setStream(InputStream 
newStream) {" ], "header": "@@ -531,8 +531,7 @@ public class SQLChar", "removed": [ " public final void setStream(InputStream newStream)", " {" ] }, { "added": [ "", " /**", " * Returns a descriptor for the input stream for this character data value.", " *", " * @return Unless the method is overridden, {@code null} is returned.", " * @throws StandardException if obtaining the descriptor fails", " * @see SQLClob#getStreamWithDescriptor()", " */", " public CharacterStreamDescriptor getStreamWithDescriptor()", " throws StandardException {", " // For now return null for all non-Clob types.", " // TODO: Is this what we want, or do we want to treat some of the other", " // string types as streams as well?", " return null;", " }", "" ], "header": "@@ -560,6 +559,22 @@ public class SQLChar", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;", "", "import java.io.IOException;", "import java.io.InputStream;" ], "header": "@@ -21,16 +21,16 @@", "removed": [ "import org.apache.derby.iapi.types.DataValueDescriptor;", "import org.apache.derby.iapi.types.TypeId;", "import org.apache.derby.iapi.reference.SQLState;", "import java.sql.Blob;" ] }, { "added": [ " /**", " * The descriptor for the stream. If there is no stream this should be", " * {@code null}, which is also true if the descriptor hasen't been", " * constructed yet.", " */", " private CharacterStreamDescriptor csd;", "" ], "header": "@@ -58,6 +58,13 @@ public class SQLClob", "removed": [] }, { "added": [ " /**", " * Returns a descriptor for the input stream for this CLOB value.", " * <p>", " * The descriptor contains information about header data, current positions,", " * length, whether the stream should be buffered or not, and if the stream", " * is capable of repositioning itself.", " *", " * @return A descriptor for the stream, which includes a reference to the", " * stream itself. 
If the value cannot be represented as a stream,", " * {@code null} is returned instead of a decsriptor.", " * @throws StandardException if obtaining the descriptor fails", " */", " public CharacterStreamDescriptor getStreamWithDescriptor()", " throws StandardException {", " if (stream == null) {", " // Lazily reset the descriptor here, to avoid further changes in", " // {@code SQLChar}.", " csd = null;", " return null;", " }", " // NOTE: Getting down here several times is potentially dangerous.", " // When the stream is published, we can't assume we know the position", " // any more. The best we can do, which may hurt performance to some", " // degree in some non-recommended use-cases, is to reset the stream if", " // possible.", " if (csd != null) {", " if (stream instanceof Resetable) {", " try {", " ((Resetable)stream).resetStream();", " } catch (IOException ioe) {", " throwStreamingIOException(ioe);", " }", " } else {", " if (SanityManager.DEBUG) {", " SanityManager.THROWASSERT(\"Unable to reset stream when \" +", " \"fetched the second time: \" + stream.getClass());", " }", " }", " }", "", " if (csd == null) {", " // First time, read the header format of the stream.", " // NOTE: For now, just read the old header format.", " try {", " final int dataOffset = 2;", " byte[] header = new byte[dataOffset];", " int read = stream.read(header);", " if (read != dataOffset) {", " String hdr = \"[\";", " for (int i=0; i < read; i++) {", " hdr += Integer.toHexString(header[i] & 0xff);", " }", " throw new IOException(\"Invalid stream header length \" +", " read + \", got \" + hdr + \"]\");", " }", "", " // Note that we add the two bytes holding the header *ONLY* if", " // we know how long the user data is.", " long utflen = ((header[0] & 0xff) << 8) | ((header[1] & 0xff));", " if (utflen > 0) {", " utflen += dataOffset;", " }", "", " csd = new CharacterStreamDescriptor.Builder().stream(stream).", " bufferable(false).positionAware(false).", " 
curCharPos(1).curBytePos(dataOffset).", " dataOffset(dataOffset).byteLength(utflen).build();", " } catch (IOException ioe) {", " throwStreamingIOException(ioe);", " }", " }", " return this.csd;", " }", "" ], "header": "@@ -209,6 +216,80 @@ public class SQLClob", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/StringDataValue.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;" ], "header": "@@ -24,6 +24,7 @@ package org.apache.derby.iapi.types;", "removed": [] }, { "added": [ "", " /**", " * Returns a descriptor for the input stream for this data value.", " * <p>", " * The descriptor contains information about header data, current positions,", " * length, whether the stream should be buffered or not, and if the stream", " * is capable of repositioning itself.", " *", " * @return A descriptor for the stream, which includes a reference to the", " * stream itself, or {@code null} if the value cannot be represented", " * as a stream.", " * @throws StandardException if obtaining the descriptor fails", " */", " public CharacterStreamDescriptor getStreamWithDescriptor()", " throws StandardException;" ], "header": "@@ -207,4 +208,19 @@ public interface StringDataValue extends ConcatableDataValue", "removed": [] } ] } ]
derby-DERBY-3907-dfdebd5e
DERBY-3907 (partial): Save useful length information for Clobs in store. Added an interface for a stream header generator object, and two implementations; one for Clob and one for the non-Clob string types (CHAR, VARCHAR and LONG VARCHAR). To support pre 10.5 databases in soft upgrade mode, the Clob stream header generator can generate both the new header format and the old one. Note that the Clob header generator depends on either knowing up front if it is being used in soft upgrade mode, or to be able to determine this through a database context. Patch file: derby-3907-7a1-write_new_header_format.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@736612 13f79535-47bb-0310-9956-ffa450edef68
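As a concrete illustration of such a generator, here is a sketch for the old (pre-10.5) format: a two-byte big-endian byte count where 0x0000 means "unknown length", with the Derby end-of-stream marker appended only when the length is unknown. The class and the "negative length means unknown" convention are hypothetical; only the byte layout and the DERBY_EOF_MARKER bytes come from the patch.

```java
// Hypothetical generator for the classic two-byte stream header.
// Convention assumed here (not taken from Derby): a negative valueLength
// means the length is unknown, so 0x0000 is written and the EOF marker
// must terminate the stream.
final class OldFormatHeaderGenerator {
    static final byte[] DERBY_EOF_MARKER = {(byte) 0xE0, 0x00, 0x00};

    /** Writes the two-byte header into buf at offset; returns bytes written. */
    int generateInto(byte[] buf, int offset, long valueLength) {
        int len = valueLength > 0 ? (int) valueLength : 0; // 0 encodes "unknown"
        buf[offset] = (byte) ((len >>> 8) & 0xFF);
        buf[offset + 1] = (byte) (len & 0xFF);
        return 2;
    }

    /** Appends the EOF marker only when the length was unknown. */
    int writeEOF(byte[] buf, int offset, long valueLength) {
        if (valueLength >= 0) {
            return 0; // known length: the header already bounds the data
        }
        System.arraycopy(DERBY_EOF_MARKER, 0, buf, offset, DERBY_EOF_MARKER.length);
        return DERBY_EOF_MARKER.length;
    }
}
```

A 10.5 generator would differ mainly in header width, in encoding a character count rather than a byte count, and in when it chooses to append the marker.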
[ { "file": "java/engine/org/apache/derby/iapi/types/StreamHeaderGenerator.java", "hunks": [ { "added": [ "/*", "", " Derby - Class org.apache.derby.iapi.types.StreamHeaderGenerator", "", " Licensed to the Apache Software Foundation (ASF) under one or more", " contributor license agreements. See the NOTICE file distributed with", " this work for additional information regarding copyright ownership.", " The ASF licenses this file to You under the Apache License, Version 2.0", " (the \"License\"); you may not use this file except in compliance with", " the License. You may obtain a copy of the License at", "", " http://www.apache.org/licenses/LICENSE-2.0", "", " Unless required by applicable law or agreed to in writing, software", " distributed under the License is distributed on an \"AS IS\" BASIS,", " WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.", " See the License for the specific language governing permissions and", " limitations under the License.", "", "*/", "package org.apache.derby.iapi.types;", "", "import java.io.IOException;", "import java.io.ObjectOutput;", "", "/**", " * Generates stream headers encoding the length of the stream.", " */", "public interface StreamHeaderGenerator {", "", " /** The Derby-specific end-of-stream marker. 
*/", " byte[] DERBY_EOF_MARKER = new byte[] {(byte)0xE0, 0x00, 0x00};", "", " /**", " * Tells if the header encodes a character or byte count.", " *", " * @return {@code true} if the character count is encoded into the header,", " * {@code false} if the byte count is encoded into the header.", " */", " boolean expectsCharCount();", "", " /**", " * Generates the header for the specified length and writes it into the", " * provided buffer, starting at the specified offset.", " *", " * @param buf the buffer to write into", " * @param offset starting offset in the buffer", " * @param valueLength the length of the stream, can be in either bytes or", " * characters depending on the header format", " * @return The number of bytes written into the buffer.", " */", " int generateInto(byte[] buf, int offset, long valueLength);", "", " /**", " * Generates the header for the specified length and writes it into the", " * destination stream.", " *", " * @param out the destination stream", " * @param valueLength the length of the stream, can be in either bytes or", " * characters depending on the header format", " * @return The number of bytes written to the destination stream.", " * @throws IOException if writing to the destination stream fails", " */", " int generateInto(ObjectOutput out, long valueLength) throws IOException;", "", " /**", " * Writes a Derby-specific end-of-stream marker to the buffer for a stream", " * of the specified length, if required.", " *", " * @param buffer the buffer to write into", " * @param offset starting offset in the buffer", " * @param valueLength the length of the stream, can be in either bytes or", " * characters depending on the header format", " * @return Number of bytes written (zero or more).", " */", " int writeEOF(byte[] buffer, int offset, long valueLength);", "", " /**", " * Writes a Derby-specific end-of-stream marker to the destination stream", " * for the specified length, if required.", " *", " * @param out the destination 
stream", " * @param valueLength the length of the stream, can be in either bytes or", " * characters depending on the header format", " * @return Number of bytes written (zero or more).", " * @throws IOException if writing to the destination stream fails", " */", " int writeEOF(ObjectOutput out, long valueLength) throws IOException;", "}" ], "header": "@@ -0,0 +1,89 @@", "removed": [] } ] } ]
derby-DERBY-3909-db8b20bd
DERBY-3909: Race condition in NetXAResource.removeXaresFromSameRMchain() Removed the code that had race conditions since the data structures it touched weren't actually used for anything. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704904 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/client/org/apache/derby/client/net/NetConnection.java", "hunks": [ { "added": [], "header": "@@ -1753,9 +1753,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {", "removed": [ " if (xares_ != null) {", " xares_.removeXaresFromSameRMchain();", " }" ] }, { "added": [], "header": "@@ -1769,9 +1766,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {", "removed": [ " if (xares_ != null) {", " xares_.removeXaresFromSameRMchain();", " }" ] }, { "added": [], "header": "@@ -1785,9 +1779,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {", "removed": [ " if (xares_ != null) {", " xares_.removeXaresFromSameRMchain();", " }" ] } ] }, { "file": "java/client/org/apache/derby/client/net/NetXAResource.java", "hunks": [ { "added": [], "header": "@@ -45,7 +45,6 @@ import java.util.Collections;", "removed": [ "import java.util.Vector;" ] }, { "added": [], "header": "@@ -83,16 +82,6 @@ public class NetXAResource implements XAResource {", "removed": [ " public int nextElement = 0;", "", " // XAResources with same RM group list", " protected static Vector xaResourceSameRMGroup_ = new Vector();", " protected int sameRMGroupIndex_ = 0;", " protected NetXAResource nextSameRM_ = null;", " protected boolean ignoreMe_ = false;", "", "", "" ] }, { "added": [], "header": "@@ -145,9 +134,6 @@ public class NetXAResource implements XAResource {", "removed": [ "", " // add this new XAResource to the list of other XAResources for the Same RM", " initForReuse();" ] }, { "added": [], "header": "@@ -450,7 +436,6 @@ public class NetXAResource implements XAResource {", "removed": [ " nextElement = 0;" ] }, { "added": [], "header": "@@ -979,87 +964,4 @@ public class NetXAResource implements XAResource {", "removed": [ "", " protected void removeXaresFromSameRMchain() {", " // check all NetXAResources on the same RM for the NetXAResource to remove", " try {", " this.ignoreMe_ = true; // use the ignoreMe_ 
flag to indicate the", " // XAResource to remove", " NetXAResource prevXAResource = null;", " NetXAResource currXAResource;", " synchronized (xaResourceSameRMGroup_) { // make sure no one changes this vector list", " currXAResource = (NetXAResource) xaResourceSameRMGroup_.elementAt(sameRMGroupIndex_);", " while (currXAResource != null) { // is this the XAResource to remove?", " if (currXAResource.ignoreMe_) { // this NetXAResource is the one to remove", " if (prevXAResource != null) { // this XAResource is not first in chain, just move next to prev", " prevXAResource.nextSameRM_ = currXAResource.nextSameRM_;", " } else { // this XAResource is first in chain, just move next to root", " xaResourceSameRMGroup_.set(sameRMGroupIndex_,", " currXAResource.nextSameRM_);", " }", " return;", " }", " // this is not the NetXAResource to remove, try the next one", " prevXAResource = currXAResource;", " currXAResource = currXAResource.nextSameRM_;", " }", " }", " } finally {", " this.ignoreMe_ = false;", " }", " }", "", "", " public void initForReuse() {", " // add this new XAResource to the list of other XAResources for the Same RM", " // first find out if there are any other XAResources for the same RM", " // then check to make sure it is not already in the chain", " synchronized (xaResourceSameRMGroup_) { // make sure no one changes this vector list", " int groupCount = xaResourceSameRMGroup_.size();", " int index = 0;", " int firstFreeElement = -1;", " NetXAResource xaResourceGroup = null;", "", " for (; index < groupCount; ++index) { // check if this group is the same RM", " xaResourceGroup = (NetXAResource) xaResourceSameRMGroup_.elementAt(index);", " if (xaResourceGroup == null) { // this is a free element, save its index if first found", " if (firstFreeElement == -1) { // first free element, save index", " firstFreeElement = index;", " }", " continue; // go to next element", " }", " try {", " if (xaResourceGroup.isSameRM(this)) { // it is the same RM add this XAResource 
to the chain if not there", " NetXAResource nextXares = (NetXAResource)", " xaResourceSameRMGroup_.elementAt(sameRMGroupIndex_);", " while (nextXares != null) { // is this NetXAResource the one we are trying to add?", " if (nextXares.equals(this)) { // the XAResource to be added already is in chain, don't add", " break;", " }", " // Xid was not on that NetXAResource, try the next one", " nextXares = nextXares.nextSameRM_;", " }", "", " if (nextXares == null) { // XAResource to be added is not in the chain already, add it", " // add it at the head of the chain", " sameRMGroupIndex_ = index;", " this.nextSameRM_ = xaResourceGroup.nextSameRM_;", " xaResourceGroup.nextSameRM_ = this;", " }", " return; // done", " }", " } catch (XAException xae) {", " }", " }", "", " // no other same RM was found, add this as first of new group", " if (firstFreeElement == -1) { // no free element found, add new element to end", " xaResourceSameRMGroup_.add(this);", " sameRMGroupIndex_ = groupCount;", " } else { // use first free element found", " xaResourceSameRMGroup_.setElementAt(this, firstFreeElement);", " sameRMGroupIndex_ = firstFreeElement;", " }", " }", " }" ] } ] } ]
derby-DERBY-3917-52d76403
DERBY-3917; skip fixture testCurrentRoleInWeirdContexts and 1 test case in fixture testDefaultCurrentRole in lang.RolesConferredPrivilegesTest with JSR169. Patch contributed by Dag H. Wanvik. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@713533 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3922-d4f93087
DERBY-3922: Support for adding generated columns via ALTER TABLE. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@709152 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java", "hunks": [ { "added": [ "import java.util.HashSet;" ], "header": "@@ -81,6 +81,7 @@ import java.lang.reflect.Modifier;", "removed": [] }, { "added": [ " int count = updateColumnList.size();", " HashSet updatedColumns = new HashSet();", "\t\tfor (int ix = 0; ix < count; ix++)", "\t\t\tString name = ((ResultColumn)updateColumnList.elementAt( ix )).getName();", "", " updatedColumns.add( name );" ], "header": "@@ -1109,17 +1110,18 @@ public final class UpdateNode extends DMLModStatementNode", "removed": [ "\t\tFormatableBitSet\t columnMap = new FormatableBitSet(columnCount + 1);", "\t\tint[]\tchangedColumnIds = updateColumnList.sortMe();", "", "\t\tfor (int ix = 0; ix < changedColumnIds.length; ix++)", "\t\t\tcolumnMap.set(changedColumnIds[ix]);" ] }, { "added": [ " // handle the case of setting a generated column to the DEFAULT", " // literal", " if ( updatedColumns.contains( gc.getColumnName() ) ) { affectedGeneratedColumns.add( tableID, gc ); }", "", " ColumnDescriptor mentionedColumn = baseTable.getColumnDescriptor( mentionedColumns[ mcIdx ] );", " String mentionedColumnName = mentionedColumn.getColumnName();", " if ( updatedColumns.contains( mentionedColumnName ) )" ], "header": "@@ -1129,13 +1131,18 @@ public final class UpdateNode extends DMLModStatementNode", "removed": [ " int mentionedColumnID = mentionedColumns[ mcIdx ];", " if ( columnMap.isSet( mentionedColumnID ) )" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java", "hunks": [ { "added": [ "import org.apache.derby.catalog.DefaultInfo;" ], "header": "@@ -28,6 +28,7 @@ import java.util.Iterator;", "removed": [] }, { "added": [ "\t\t\tupdateNewColumnToDefault(activation, columnDescriptor, lcc);" ], "header": "@@ -1339,10 +1340,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\tupdateNewColumnToDefault(activation,", 
"\t\t\t\t\t\t\t\tcolumnInfo[ix].name,", "\t\t\t\t\t\t\t\tcolumnInfo[ix].defaultInfo.getDefaultText(),", "\t\t\t\t\t\t\t\tlcc);" ] }, { "added": [ "\t * @param columnDescriptor catalog descriptor for the column" ], "header": "@@ -3068,8 +3066,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t * @param columnName\t\tcolumn name", "\t * @param defaultText\t\tdefault text" ] } ] } ]
derby-DERBY-3925-fc061da9
DERBY-3925 - testMetaDataQueryRunInSYScompilationSchema(.....upgradeTests.Changes10_4) fails on CVM/phoneME Use 'territory=en' instead of 'territory=no' if the 'no' locale is not available. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@823555 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3926-200a5ea7
DERBY-3926 Following is the patch description for DERBY-3926. The problem with the trunk codeline is that when the optimizer goes through optimizables in a join order, it only looks at those optimizables individually to decide whether sorting can be avoided on them or not. That approach leaves out a few queries which require sorting but do not get sorted. The decision for avoiding sorting should also consider the relationship between the optimizables in a given join order. The following query demonstrates the trunk problem: SELECT table1.id, table2.value, table3.value FROM --DERBY-PROPERTIES joinOrder=FIXED table3 -- DERBY-PROPERTIES index=nonUniqueOnValue_Table3 , table2 -- DERBY-PROPERTIES index=nonUniqueOnValue_Table2 , table1 WHERE table1.id=table2.id AND table2.name='PageSequenceId' AND table1.id=table3.id AND table3.name='PostComponentId' AND table3.value='21857' ORDER BY table2.value; In the query above, when the optimizer is considering the [table3, table2, -1] join order, it determines that sorting can be avoided on this join order because the order by column table2.value is already covered by the index nonUniqueOnValue_Table2. It does not see that the outermost optimizable table3 will qualify more than one row and hence it will be a multi-row resultset, and for each one of those rows, we will be doing a scan into table2. In other words, there will be multiple scans into table2 (and the rows returned by each one of those scans will be ordered on table2.value) but the collective rows from those multiple scans are not necessarily going to be ordered on table2.value. This patch is attempting to fix that problem. Currently, in trunk, a column is marked always ordered during query processing when the optimizer finds that there is a constant comparison predicate on the order by column. If the column does not have a constant predicate (as in our example above), we next see if we are using an index which will provide the required ordering on the column (which is true in our case.
The required ordering on table2.value is provided by the index nonUniqueOnValue_Table2). But as we can see in the query above, this index coverage is not enough to say that sorting is not needed. We need to add 2 more conditions before we can decide to avoid the sorting. One of those cases is 1) if the order by column does not belong to the outermost optimizable, then check if the order by column's optimizable is a one-row resultset. If yes, then it will be safe for the optimizer to avoid the sorting. The second case to consider is 2) if the order by column does not belong to the outermost optimizable, then check if the order by column's optimizable is a multi-row resultset BUT all the outer optimizables are one-row resultsets. If either of these 2 additional conditions is satisfied, then the optimizer can choose to avoid the sorting. Otherwise sorting should be added to the query plan. The example query above does not satisfy the 2 additional checks and hence sorting should be done as part of the query plan. The changes for the 1) check above have gone into OrderByList.sortRequired(RowOrdering, JBitSet, OptimizableList). The implementation of this change just required us to check that the outer optimizables are one row since the order by column's optimizable is not one row. If the outer optimizables are all one-row, then we say that sorting can be avoided. Otherwise sorting is required. The changes for the 2) check above have gone into FromBaseTable.nextAccessPath(Optimizer optimizer, OptimizablePredicateList predList, RowOrdering rowOrdering). The implementation of this change requires us to see if the order by column is involved in an equijoin with an outer optimizable's indexed column. If yes, then we know that since the outer optimizable is ordered, the rows qualified via the equijoin will also be ordered and hence sorting can be avoided. But if this is not true, then we can't rely on the outer optimizables' rows to be ordered on the order by column.
To avoid sorting, we need to identify this case 2) as another case when the column can be marked as always ordered, and that is when there is an equijoin predicate on the order by column with some other column which is already known to be always ordered. Taking the query from wisconsin as an example will explain this behavior: select * from --DERBY-PROPERTIES joinOrder=FIXED TENKTUP1 -- DERBY-PROPERTIES index=TK1UNIQUE1 , TENKTUP2 -- DERBY-PROPERTIES index=TK2UNIQUE1 where TENKTUP1.unique1 = TENKTUP2.unique1 order by TENKTUP1.unique1, TENKTUP2.unique1; For the above query, as per the current trunk codeline, none of the order by columns are marked as always ordered because there is no constant comparison predicate on them. But, for the given join order, with TENKTUP1 as the outermost resultset and with the index TK1UNIQUE1, we know that the current row ordering at this point is going to ensure that rows from TENKTUP1 are ordered on UNIQUE1. Next, when we process TENKTUP2 in the 2nd join order position, we find that there is no constant predicate on TENKTUP2.unique1 and hence we conclude that the rows from TENKTUP2 are not going to be ordered and we decide to force a sort node on the top of the query. But in reality, even though the outer optimizable is not a single-row resultset, it is ordered on TENKTUP1.unique1 and hence all those rows from the outer optimizable are going to be ordered on TENKTUP1.unique1, and the inner optimizable has an equality join on TENKTUP1.unique1 using the order by column TENKTUP2.unique1. What that translates to is that even if there will be multiple scans into TENKTUP2, the rows qualified are going to be all ordered because of the equijoin between the outer and inner optimizables on the order by columns. So, with my latest patch, I have expanded the notion of always ordered columns to include both constant comparison predicates AND an ordered column that has an equijoin with an outer optimizable's ordered column.
I think this patch is also improving the existing queries to include a better path than what it was picking up before. Following is an example of one such query from wisconsin. select * from TENKTUP1, TENKTUP2 where TENKTUP1.unique1 = TENKTUP2.unique1 and TENKTUP2.unique1 < 100 order by TENKTUP1.unique1; For this query, the trunk currently decides to use TENKTUP1 as the outermost optimizable using the TK1UNIQUE1 index and then those rows are filtered using TENKTUP2.unique1 < 100. Each of the 2 tables involved in the query has 10000 rows. So we are going through 10000 qualified indexed rows from TENKTUP1 and then applying TENKTUP2.unique1 < 100 on them. With the attached patch, we use TENKTUP2 as the outermost optimizable with the index TK2UNIQUE1 and only get the indexed rows which satisfy TENKTUP2.unique1 < 100 and then on them, we use the equality join to fetch qualified rows from TENKTUP1. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@783168 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/compile/RequiredRowOrdering.java", "hunks": [ { "added": [ "import org.apache.derby.impl.sql.compile.PredicateList;" ], "header": "@@ -24,6 +24,7 @@ package org.apache.derby.iapi.sql.compile;", "removed": [] }, { "added": [ "\t * @param optimizableList\tThe current join order being considered by ", "\t * the optimizer. We need to look into this to determine if the outer", "\t * optimizables are single row resultset if the order by column is", "\t * on an inner optimizable and that inner optimizable is not a one", "\t * row resultset. DERBY-3926" ], "header": "@@ -43,6 +44,11 @@ public interface RequiredRowOrdering", "removed": [] }, { "added": [ "\tint sortRequired(RowOrdering rowOrdering, OptimizableList optimizableList) ", "\tthrows StandardException;" ], "header": "@@ -52,7 +58,8 @@ public interface RequiredRowOrdering", "removed": [ "\tint sortRequired(RowOrdering rowOrdering) throws StandardException;" ] }, { "added": [ "\t * @param optimizableList\tThe current join order being considered by ", "\t * the optimizer. We need to look into this to determine if the outer", "\t * optimizables are single row resultset if the order by column is", "\t * on an inner optimizable and that inner optimizable is not a one", "\t * row resultset. 
DERBY-3926" ], "header": "@@ -63,6 +70,11 @@ public interface RequiredRowOrdering", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java", "hunks": [ { "added": [ "\t\t\t\t\t//Check if the order by column has equijoin on another ", "\t\t\t\t\t//column which is already identified as an ordered column", "\t\t\t\t\tif (doesOrderByColumnHaveEquiJoin(", "\t\t\t\t\t\t\tirg, predList, rowOrdering))", "\t\t\t\t\t\trowOrdering.columnAlwaysOrdered(this, ", "\t\t\t\t\t\t\t\tbaseColumnPositions[i]);", "" ], "header": "@@ -462,6 +462,13 @@ public class FromBaseTable extends FromTable", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/OptimizerImpl.java", "hunks": [ { "added": [ "\t\t\t\t\tif (requiredRowOrdering.sortRequired(", "\t\t\t\t\t\t\tbestRowOrdering, optimizableList) == ", "\t\t\t\t\t\t\t\tRequiredRowOrdering.NOTHING_REQUIRED)" ], "header": "@@ -1789,8 +1789,9 @@ public class OptimizerImpl implements Optimizer", "removed": [ "\t\t\t\t\tif (requiredRowOrdering.sortRequired(bestRowOrdering) ==", "\t\t\t\t\t\t\t\t\tRequiredRowOrdering.NOTHING_REQUIRED)" ] }, { "added": [ "\t\t\t\t\t\t\t\t\t\t\t\t\t\tassignedTableMap,", "\t\t\t\t\t\t\t\t\t\t\t\t\t\toptimizableList)", "\t\t\t\t\t\t\t\t\t\t==RequiredRowOrdering.NOTHING_REQUIRED)" ], "header": "@@ -2246,8 +2247,9 @@ public class OptimizerImpl implements Optimizer", "removed": [ "\t\t\t\t\t\t\t\t\t\t\t\t\t\tassignedTableMap)", "\t\t\t\t\t\t\t\t\t\t== RequiredRowOrdering.NOTHING_REQUIRED)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/OrderByList.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.compile.Optimizable;", "import org.apache.derby.iapi.sql.compile.OptimizableList;" ], "header": "@@ -22,7 +22,9 @@", "removed": [] }, { "added": [ "\tpublic int sortRequired(RowOrdering rowOrdering,", "\t\t\tOptimizableList optimizableList) throws StandardException", "\t\treturn sortRequired(rowOrdering, (JBitSet) null, 
optimizableList);" ], "header": "@@ -437,9 +439,10 @@ public class OrderByList extends OrderedColumnList", "removed": [ "\tpublic int sortRequired(RowOrdering rowOrdering) throws StandardException", "\t\treturn sortRequired(rowOrdering, (JBitSet) null);" ] }, { "added": [ "\tpublic int sortRequired(RowOrdering rowOrdering, ", "\t\t\tJBitSet tableMap,", "\t\t\tOptimizableList optimizableList)" ], "header": "@@ -447,7 +450,9 @@ public class OrderByList extends OrderedColumnList", "removed": [ "\tpublic int sortRequired(RowOrdering rowOrdering, JBitSet tableMap)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/RowOrderingImpl.java", "hunks": [ { "added": [ "\t** result set are ordered. Another instance of always ordered is when", "\t** the column is involved in an equijoin with an optimizable which is ", "\t** always ordered on the column on which the equijoin is happening." ], "header": "@@ -39,7 +39,9 @@ class RowOrderingImpl implements RowOrdering {", "removed": [ "\t** result set are ordered." ] } ] } ]
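The sort-avoidance rule described in the DERBY-3926 commit message above can be sketched as follows. This is a hypothetical simplification, not Derby's actual OrderByList/OptimizerImpl code; the class and field names are invented for illustration, and only the two cases named in the message are modeled.

```java
import java.util.List;

class SortAvoidanceSketch {
    /** Minimal stand-in for an optimizable in a join order. */
    static class Optimizable {
        final boolean oneRowResultSet;
        Optimizable(boolean oneRow) { this.oneRowResultSet = oneRow; }
    }

    /**
     * True if a sort is still required when the ORDER BY column belongs to
     * the optimizable at position orderByPos in the join order.
     */
    static boolean sortRequired(List<Optimizable> joinOrder, int orderByPos) {
        // Case 1): the ORDER BY column's own optimizable is a one-row
        // result set, so index ordering alone is enough.
        if (joinOrder.get(orderByPos).oneRowResultSet) {
            return false;
        }
        // Case 2): the inner optimizable is multi-row, so its index ordering
        // survives only if every outer optimizable is a one-row result set.
        for (int i = 0; i < orderByPos; i++) {
            if (!joinOrder.get(i).oneRowResultSet) {
                return true; // a multi-row outer scan breaks the ordering
            }
        }
        return false;
    }
}
```

With a multi-row table3 outermost and the ORDER BY column on table2, as in the example query, this rule reports that a sort is required.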
derby-DERBY-3926-397f968f
DERBY-4331 Fixes a number of sort avoidance bugs that were introduced by the fix for DERBY-3926. This check-in backs out the equi-join part of the DERBY-3926 fix. The changes for this were isolated and were the only changes to FromBaseTable.java. Backing out only this part of the DERBY-3926 check-in fixes the new problems identified in DERBY-4331, and continues to fix the problem queries in DERBY-3926. Knowledge of an equijoin is no longer used as a factor for sort avoidance. Also included is an update to the wisconsin tests. A number of diffs resulted from a different join order to maintain a sort avoidance plan. 2 queries identified in DERBY-4339 no longer use sort avoidance. The new test cases reported as part of DERBY-4331 were added to the OrderByAndSortAvoidance test. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@801481 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java", "hunks": [ { "added": [], "header": "@@ -462,13 +462,6 @@ public class FromBaseTable extends FromTable", "removed": [ "\t\t\t\t\t//Check if the order by column has equijoin on another ", "\t\t\t\t\t//column which is already identified as an ordered column", "\t\t\t\t\tif (doesOrderByColumnHaveEquiJoin(", "\t\t\t\t\t\t\tirg, predList, rowOrdering))", "\t\t\t\t\t\trowOrdering.columnAlwaysOrdered(this, ", "\t\t\t\t\t\t\t\tbaseColumnPositions[i]);", "" ] } ] } ]
derby-DERBY-3926-8f4810aa
DERBY-3926 Tars Joris contributed a new reproducible script (test-script.zip) which has been granted to the ASF for inclusion. I changed the JUnit test which I added last week to be based on this new script. The changes involved were data changes only. The actual test cases in the JUnit test did not require any changes. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@769147 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3931-31887565
DERBY-3931: Make generated columns tests independent so that they will run smoothly on platforms which run the test cases in reverse alphabetical order. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@709161 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3932-9a9b9326
DERBY-3932: Add basic tests of permissions on generated columns. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@713158 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3934-4987173e
DERBY-3934: Removed unused method 'readUnsignedShort' from UTF8Reader. Patch file: derby-3934-6a-UTF8Reader_remove_method.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@742380 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [], "header": "@@ -592,25 +592,6 @@ readChars:", "removed": [ " /**", " * Decode the length encoded in the stream.", " * ", " * This method came from {@link java.io.DataInputStream}", " * ", " * @return The number of bytes in the stream, or <code>0</code> if the", " * length is unknown and the end of stream must be marked by the", " * Derby-specific end of stream marker.", " */", " private final int readUnsignedShort() throws IOException {", " int ch1 = in.read();", " int ch2 = in.read();", " if ((ch1 | ch2) < 0)", " throw new EOFException(\"Reached EOF when reading\" +", " \"encoded length bytes\");", "", " return (ch1 << 8) + (ch2 << 0);", " }", "" ] } ] } ]
derby-DERBY-3934-71dca8c7
DERBY-3934: Improve performance of reading modified Clobs. There are two major changes with this patch: a) The implementation of getInternalReader improves the performance of getSubString significantly. This benefits the embedded driver too, but it is crucial for the performance of all read operations on Clob from the client driver. Again, the mechanism used to get better performance is to keep an internal reader around to avoid repositioning on every request. b) Added caching and updating of the Clob character length. Also added some tests for verifying that the Clob length is handled correctly. Patch file: derby-3934-4a-getinternalreader_cachedlength.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@726683 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/TemporaryClob.java", "hunks": [ { "added": [ "import java.io.FilterReader;" ], "header": "@@ -24,6 +24,7 @@ package org.apache.derby.impl.jdbc;", "removed": [] }, { "added": [ "import org.apache.derby.iapi.types.TypeId;" ], "header": "@@ -31,6 +32,7 @@ import java.io.Writer;", "removed": [] }, { "added": [ " /**", " * Cached character length of the Clob.", " * <p>", " * A value of {@code 0} is interpreted as unknown length, even though it is", " * a valid value. If the length is requested and the value is zero, an", " * attempt to obtain the length is made by draining the source.", " */", " private long cachedCharLength;", " /**", " * Shared internal reader, closed when the Clob is released.", " * This is a performance optimization, and the stream is shared between", " * \"one time\" operations, for instance {@code getSubString} calls. Often a", " * subset, or the whole, of the Clob is read subsequently and then this", " * optimization avoids repositioning costs (the store does not support", " * random access for LOBs).", " * <b>NOTE</b>: Do not publish this reader to the end-user.", " */", " private UTF8Reader internalReader;", " /** The internal reader wrapped so that it cannot be closed. 
*/", " private FilterReader unclosableInternalReader;" ], "header": "@@ -54,6 +56,26 @@ final class TemporaryClob implements InternalClob {", "removed": [] }, { "added": [ " if (internalReader != null) {", " internalReader.close();", " internalReader = null;", " unclosableInternalReader = null;", " }" ], "header": "@@ -126,6 +148,11 @@ final class TemporaryClob implements InternalClob {", "removed": [] }, { "added": [ " // getCSD obtains a descriptor for the stream to allow the reader", " // to configure itself.", " Reader isr = new UTF8Reader(getCSD(), conChild,", " conChild.getConnectionSynchronization());" ], "header": "@@ -241,15 +268,10 @@ final class TemporaryClob implements InternalClob {", "removed": [ " // Describe the stream to allow the reader to configure itself.", " CharacterStreamDescriptor csd = new CharacterStreamDescriptor.Builder().", " stream(this.bytes.getInputStream(0)).", " positionAware(true).", " bufferable(this.bytes.getLength() > 4096). // Cache if on disk.", " byteLength(this.bytes.getLength()).", " build();", " Reader isr = new UTF8Reader(", " csd, conChild, conChild.getConnectionSynchronization());" ] }, { "added": [ " if (this.internalReader == null) {", " // getCSD obtains a descriptor for the stream to allow the reader", " // to configure itself.", " this.internalReader = new UTF8Reader(getCSD(), conChild,", " conChild.getConnectionSynchronization());", " this.unclosableInternalReader =", " new FilterReader(this.internalReader) {", " public void close() {", " // Do nothing.", " // Stream will be closed when the Clob is released.", " }", " };", " }", " try {", " this.internalReader.reposition(characterPosition);", " } catch (StandardException se) {", " throw Util.generateCsSQLException(se);", " }", " return this.unclosableInternalReader;" ], "header": "@@ -270,8 +292,25 @@ final class TemporaryClob implements InternalClob {", "removed": [ " // TODO: See if we can optimize for a shared internal reader.", " return 
getReader(characterPosition);" ] }, { "added": [ " if (cachedCharLength == 0) {", " cachedCharLength = UTF8Util.skipUntilEOF(", " new BufferedInputStream(getRawByteStream()));", " }", " return cachedCharLength;" ], "header": "@@ -282,8 +321,11 @@ final class TemporaryClob implements InternalClob {", "removed": [ " return", " UTF8Util.skipUntilEOF(new BufferedInputStream(getRawByteStream()));" ] }, { "added": [ " long prevLength = cachedCharLength;", " updateInternalState(insertionPoint);" ], "header": "@@ -314,6 +356,8 @@ final class TemporaryClob implements InternalClob {", "removed": [] }, { "added": [ " // Update the length if we know the previous length.", " if (prevLength != 0) {", " long newLength = (insertionPoint -1) + str.length();", " if (newLength > prevLength) {", " cachedCharLength = newLength; // The Clob grew.", " } else {", " // We only wrote over existing characters, length unchanged.", " cachedCharLength = prevLength;", " }", " }" ], "header": "@@ -344,6 +388,16 @@ final class TemporaryClob implements InternalClob {", "removed": [] }, { "added": [ " public synchronized boolean isReleased() {" ], "header": "@@ -353,7 +407,7 @@ final class TemporaryClob implements InternalClob {", "removed": [ " public boolean isReleased() {" ] }, { "added": [ " // Reset the internal state, and then update the length.", " updateInternalState(newCharLength);", " cachedCharLength = newCharLength;" ], "header": "@@ -380,10 +434,9 @@ final class TemporaryClob implements InternalClob {", "removed": [ " if (newCharLength <= this.posCache.getCharPos()) {", " // Reset the cache if last cached position has been cut away.", " this.posCache.reset();", " }" ] } ] } ]
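The length-caching rule added to TemporaryClob in the record above (a cached value of 0 means "unknown"; a write only grows the length if it extends past the old end) can be restated as a tiny stand-alone sketch. The class and method names here are invented for illustration and are not Derby's actual API.

```java
class ClobLengthCache {
    private long cachedCharLength; // 0 means "unknown", as in the patch

    /** Record that {@code len} characters were written at 1-based {@code pos}. */
    void noteWrite(long pos, long len) {
        long prevLength = cachedCharLength;
        if (prevLength != 0) {
            long newLength = (pos - 1) + len;
            // The Clob grew only if the write extended past the old end;
            // overwriting existing characters leaves the length unchanged.
            cachedCharLength = (newLength > prevLength) ? newLength : prevLength;
        }
    }

    /** Cached length; 0 means the length must be computed by draining. */
    long length() { return cachedCharLength; }

    void setKnownLength(long len) { cachedCharLength = len; }
}
```

For example, writing 3 characters at position 5 into a 10-character Clob leaves the cached length at 10, while writing 5 characters at position 9 grows it to 13.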
derby-DERBY-3934-910b77f0
DERBY-3934 (partial): Improve performance of reading modified Clobs. The following files are touched (all in derby.impl.jdbc): *** EmbedClob. Updated call to ClobUpdatableReader. The change of the position argument is intentional. *** TemporaryClob Replaced the ClobUpdatableReader returned by getReader with a UTF8Reader. Internal handling of TemporaryClob should deal with changing contents specifically, or create a ClobUpdatableReader where required. Note also the use of the new CharacterStreamDescriptor class. This piece of code will probably be changed later on, when there is more information about the stream available. For instance, caching byte/char positions allows us to skip directly to the byte position through the underlying file API. This way, we don't have to decode all the raw bytes to skip the correct number of chars. *** ClobUpdatableReader More or less rewritten. It now uses the new methods exposed by InternalClob to detect changes in the underlying Clob content. Note that this class doesn't handle repositioning, only detection of changes and forwarding of read/skip calls. Note the lazy initialization of the underlying reader. WARNING: There is one thing missing, which is proper synchronization. Access to the store will be synchronized in other locations, but this class is not thread-safe. I haven't decided yet whether to synchronize on the reader object or the root connection. I think the latter is the best choice. Does anyone know anything about the cost of taking locks on the same object multiple times? *** StoreStreamClob Replaced the old UTF8Reader constructor with the new one. Again, this code needs to be updated when more information about the stream is available. This is to allow UTF8Reader to perform better. *** UTF8Reader Added a new constructor, using the new CharacterStreamDescriptor class. Removed one constructor. Retrofitted the second old constructor to use CharacterStreamDescriptor. This will be removed when the calling code has been updated.
The old method calculating the buffer size will also be removed. Stopped referencing PositionedStoreStream, using the PositionedStream interface instead. This allows the positioning logic to be used for both store streams and LOBInputStream streams. The reader has been prepared to be able to deal with multiple data offsets, i.e. handling several store stream formats. For instance, the current implementation has an offset of two bytes, whereas the planned new one will have an offset of at least five bytes. LOBInputStream has an offset of zero bytes (no header information). From now on, position-aware streams are not closed as early as before, because we might have to go backwards in the stream. Streams that can only move forwards are closed as soon as possible (as before). Note that this patch doesn't fix the most serious performance issue. This will be done in a follow-up patch by implementing getInternalReader in TemporaryClob. Patch file: derby-3934-3a-clobupdreader_utf8reader.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@724294 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;" ], "header": "@@ -32,6 +32,7 @@ import java.io.Writer;", "removed": [] }, { "added": [ " // Describe the stream to allow the reader to configure itself.", " CharacterStreamDescriptor csd =", " new CharacterStreamDescriptor.Builder().", " stream(positionedStoreStream).bufferable(false).", " positionAware(true).dataOffset(2L). // TODO", " curCharPos(CharacterStreamDescriptor.BEFORE_FIRST).", " maxCharLength(TypeId.CLOB_MAXWIDTH).", " charLength(cachedCharLength). // 0 means unknown.", " build();", " Reader reader = new UTF8Reader(", " csd, this.conChild, this.synchronizationObject);" ], "header": "@@ -212,9 +213,17 @@ final class StoreStreamClob", "removed": [ " Reader reader = new UTF8Reader(this.positionedStoreStream,", " TypeId.CLOB_MAXWIDTH, this.conChild,", " this.synchronizationObject);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/TemporaryClob.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;" ], "header": "@@ -30,6 +30,7 @@ import java.io.Reader;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;", "import org.apache.derby.iapi.types.PositionedStream;" ], "header": "@@ -30,7 +30,9 @@ import java.io.EOFException;", "removed": [] }, { "added": [ " /** Stream that can reposition itself on request. */", " private final PositionedStream positionedIn;" ], "header": "@@ -60,28 +62,14 @@ public final class UTF8Reader extends Reader", "removed": [ " /** Store stream that can reposition itself on request. 
*/", " private final PositionedStoreStream positionedIn;", " /**", " * The expected number of bytes in the stream, if known.", " * <p>", " * A value of <code>0<code> means the length is unknown, and that the end", " * of the stream is marked with a Derby-specific end of stream marker.", " */", " private final long utfLen; // bytes", " /** ", " * The maximum number of characters allowed for the column", " * represented by the passed stream.", " * <p>", " * A value of <code>0</code> means there is no associated maximum length.", " */", " private final long maxFieldSize; // characters" ] }, { "added": [ " * Descriptor containing information about the stream.", " * Except for the current positions, the information in this object is", " * considered permanent and valid for the life-time of the stream.", " */", " private final CharacterStreamDescriptor csd;", "", " /**", " * TODO: This constructor will be removed! Is is currently retrofitted to", " * use a CharacterStreamDescriptor.", " *" ], "header": "@@ -102,6 +90,16 @@ public final class UTF8Reader extends Reader", "removed": [] }, { "added": [ " long utfLen = 0;" ], "header": "@@ -127,8 +125,8 @@ public final class UTF8Reader extends Reader", "removed": [ " this.maxFieldSize = maxFieldSize;" ] }, { "added": [ " ((Resetable)this.positionedIn).resetStream();", " utfLen = readUnsignedShort();" ], "header": "@@ -142,14 +140,14 @@ public final class UTF8Reader extends Reader", "removed": [ " this.positionedIn.resetStream();", " this.utfLen = readUnsignedShort();" ] }, { "added": [ " this.csd = new CharacterStreamDescriptor.Builder().", " bufferable(positionedIn == null).", " positionAware(positionedIn != null).byteLength(utfLen).", " dataOffset(2).curBytePos(2).stream(in).", " build();", " public UTF8Reader(CharacterStreamDescriptor csd, ConnectionChild conChild,", " Object sync) {", " super(sync);", " this.csd = csd;", " this.positionedIn =", " (csd.isPositionAware() ? 
csd.getPositionedStream() : null);", " this.parent = conChild;", "", " int buffersize = calculateBufferSize(csd);", " this.buffer = new char[buffersize];", "", " // Check and save the stream state.", " if (SanityManager.DEBUG) { ", " if (csd.isPositionAware()) {", " SanityManager.ASSERT(", " csd.getCurBytePos() == positionedIn.getPosition());", " }", " }", " this.rawStreamPos = positionedIn.getPosition();", " // Make sure we start at the first data byte, not in the header.", " if (rawStreamPos < csd.getDataOffset()) {", " rawStreamPos = csd.getDataOffset();", " }", " // Buffer stream for improved performance, if appropriate.", " if (csd.isBufferable()) {", " this.in = new BufferedInputStream(csd.getStream(), buffersize);", " } else {", " this.in = csd.getStream();" ], "header": "@@ -170,45 +168,43 @@ public final class UTF8Reader extends Reader", "removed": [ " /**", " * Constructs a <code>UTF8Reader</code> using a stream.", " * <p>", " * This consturctor accepts the stream size as parameter and doesn't", " * attempt to read the length from the stream.", " *", " * @param in the underlying stream", " * @param maxFieldSize the maximum allowed length for the associated column", " * @param streamSize size of the underlying stream in bytes", " * @param parent the connection child this stream is associated with", " * @param synchronization object to synchronize on", " */", " public UTF8Reader(", " InputStream in,", " long maxFieldSize,", " long streamSize,", " ConnectionChild parent,", " Object synchronization) {", " super(synchronization);", " this.maxFieldSize = maxFieldSize;", " this.parent = parent;", " this.utfLen = streamSize;", " this.positionedIn = null;", " if (SanityManager.DEBUG) {", " // Do not allow the inputstream here to be a Resetable, as this", " // means (currently, not by design...) 
that the length is encoded in", " // the stream and we can't pass that out as data to the user.", " SanityManager.ASSERT(!(in instanceof Resetable));", " int bufferSize = calculateBufferSize(streamSize, maxFieldSize);", " this.buffer = new char[bufferSize];", " // Buffer this for improved performance.", " // Note that the stream buffers bytes, whereas the internal buffer", " // buffers characters. In worst case, the stream buffer must be filled", " // three times to fill the internal character buffer.", " this.in = new BufferedInputStream(in, bufferSize);" ] }, { "added": [ " // Keep track of how much we are allowed to read.", " long utfLen = csd.getByteLength();", " long maxFieldSize = csd.getMaxCharLength();" ], "header": "@@ -465,6 +461,9 @@ public final class UTF8Reader extends Reader", "removed": [] }, { "added": [ " // Close the stream if it cannot be reset.", " if (!csd.isPositionAware()) {", " closeIn();", " }" ], "header": "@@ -475,7 +474,10 @@ readChars:", "removed": [ " closeIn();" ] }, { "added": [ " // Close the stream if it cannot be reset.", " if (!csd.isPositionAware()) {", " closeIn();", " }" ], "header": "@@ -528,7 +530,10 @@ readChars:", "removed": [ " closeIn();" ] }, { "added": [ " // Close the stream if it cannot be reset.", " if (!csd.isPositionAware()) {", " closeIn();", " }" ], "header": "@@ -570,7 +575,10 @@ readChars:", "removed": [ " closeIn();" ] }, { "added": [ " // Skip the length encoding bytes.", " this.positionedIn.reposition(csd.getDataOffset());", " // If bufferable, discard buffered stream and create a new one.", " if (csd.isBufferable()) {", " this.in = new BufferedInputStream(csd.getStream(), buffer.length);", " }" ], "header": "@@ -591,10 +599,13 @@ readChars:", "removed": [ " // 2L to skip the length encoding bytes.", " this.positionedIn.reposition(2L);", " this.in = this.positionedIn;" ] }, { "added": [ " * TODO: Remove this when CSD is fully integrated.", " *" ], "header": "@@ -660,6 +671,8 @@ readChars:", "removed": [] }, 
{ "added": [ " /**", " * Calculates an optimized buffer size.", " * <p>", " * The maximum size allowed is returned if the specified values don't give", " * enough information to say a smaller buffer size is preferable.", " *", " * @param csd stream descriptor", " * @return An (sub)optimal buffer size.", " */", " private final int calculateBufferSize(CharacterStreamDescriptor csd) {", " // Using the maximum buffer size will be optimal,", " // unless the data is smaller than the maximum buffer.", " int bufferSize = MAXIMUM_BUFFER_SIZE;", " long knownLength = csd.getCharLength();", " long maxCharLength = csd.getMaxCharLength();", " if (knownLength < 1) {", " // Unknown char length, use byte count instead (might be zero too).", " knownLength = csd.getByteLength();", " }", " if (knownLength > 0 && knownLength < bufferSize) {", " bufferSize = (int)knownLength;", " }", " if (maxCharLength > 0 && maxCharLength < bufferSize) {", " bufferSize = (int)maxCharLength;", " }", " return bufferSize;", " }", "" ], "header": "@@ -680,6 +693,34 @@ readChars:", "removed": [] } ] } ]
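The calculateBufferSize logic added to UTF8Reader in the diff above can be restated as a stand-alone method. The logic follows the added hunk; the MAXIMUM_BUFFER_SIZE value and the wrapper class name are assumptions made here for illustration, since the constant's value is not shown in the diff.

```java
class BufferSizing {
    static final int MAXIMUM_BUFFER_SIZE = 8 * 1024; // assumed cap, not from the diff

    /**
     * Pick the maximum buffer size unless the known character length, the
     * byte length, or the column's maximum character length says the data
     * is smaller. Zero means "unknown" for every argument.
     */
    static int calculateBufferSize(long charLength, long byteLength,
                                   long maxCharLength) {
        int bufferSize = MAXIMUM_BUFFER_SIZE;
        long knownLength = charLength;
        if (knownLength < 1) {
            // Unknown char length, use byte count instead (might be zero too).
            knownLength = byteLength;
        }
        if (knownLength > 0 && knownLength < bufferSize) {
            bufferSize = (int) knownLength;
        }
        if (maxCharLength > 0 && maxCharLength < bufferSize) {
            bufferSize = (int) maxCharLength;
        }
        return bufferSize;
    }
}
```

For a 100-character Clob this yields a 100-character buffer instead of the maximum, which is the point of the optimization: small values get small buffers.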
derby-DERBY-3934-9b9c25ad
DERBY-3934: Added two tests for Clob modifications (character replacement). Patch file: derby-3934-1a-clob_replace_test.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@720767 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3934-a694c19f
DERBY-3934: Improve performance of reading modified Clobs. Removed deprecated constructor and adjusted calling code. Patch file: derby-3934-5a-UTF8Reader_cleanup.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@736000 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement20.java", "hunks": [ { "added": [], "header": "@@ -35,7 +35,6 @@ import java.sql.Blob;", "removed": [ "import java.net.URL;" ] }, { "added": [ "import org.apache.derby.iapi.jdbc.CharacterStreamDescriptor;", "import org.apache.derby.iapi.types.StringDataValue;" ], "header": "@@ -49,11 +48,12 @@ import java.util.Calendar;", "removed": [ "import org.apache.derby.iapi.sql.conn.StatementContext;" ] }, { "added": [ " StringDataValue param = (StringDataValue)" ], "header": "@@ -1124,7 +1124,7 @@ public abstract class EmbedCallableStatement20", "removed": [ " DataValueDescriptor param = " ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/UTF8Reader.java", "hunks": [ { "added": [], "header": "@@ -33,7 +33,6 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [ "import org.apache.derby.iapi.types.Resetable;" ] }, { "added": [ " * Constructs a reader on top of the source UTF-8 encoded stream.", " * @param csd a description of and reference to the source stream", " * @param conChild the parent object / connection child", " * @param sync synchronization object used when accessing the underlying", " * data stream" ], "header": "@@ -100,86 +99,14 @@ public final class UTF8Reader extends Reader", "removed": [ " * TODO: This constructor will be removed! 
Is is currently retrofitted to", " * use a CharacterStreamDescriptor.", " * Constructs a reader and consumes the encoded length bytes from the", " * stream.", " * <p>", " * The encoded length bytes either state the number of bytes in the stream,", " * or it is <code>0</code> which informs us the length is unknown or could", " * not be represented and that we have to look for the Derby-specific", " * end of stream marker.", " * ", " * @param in the underlying stream", " * @param maxFieldSize the maximum allowed column length in characters", " * @param parent the parent object / connection child", " * @param synchronization synchronization object used when accessing the", " * underlying data stream", " * ", " * @throws SQLException if setting up or restoring the context stack fails", " public UTF8Reader(", " InputStream in,", " long maxFieldSize,", " ConnectionChild parent,", " Object synchronization)", " throws IOException, SQLException", " {", " super(synchronization);", " this.parent = parent;", " long utfLen = 0;", "", " parent.setupContextStack();", " try {", " synchronized (lock) { // Synchronize access to store.", " this.in = in; // Note the possible reassignment below.", " if (in instanceof PositionedStoreStream) {", " this.positionedIn = (PositionedStoreStream)in;", " // This stream is already buffered, and buffering it again", " // this high up complicates the handling a lot. Must", " // implement a special buffered reader to buffer again.", " // Note that buffering this UTF8Reader again, does not", " // cause any trouble...", " try {", " ((Resetable)this.positionedIn).resetStream();", " } catch (StandardException se) {", " throw Util.newIOException(se);", " }", " } else {", " this.positionedIn = null;", " }", " utfLen = readUnsignedShort();", " // Even if we are reading the encoded length, the stream may", " // not be a positioned stream. 
This is currently true when a", " // stream is passed in after a ResultSet.getXXXStream method.", " if (this.positionedIn != null) {", " this.rawStreamPos = this.positionedIn.getPosition();", " }", " } // End synchronized block", " } finally {", " parent.restoreContextStack();", " }", " // Setup buffering.", " int bufferSize = calculateBufferSize(utfLen, maxFieldSize);", " this.buffer = new char[bufferSize];", " if (this.positionedIn == null) {", " // Buffer this for improved performance.", " // Note that the stream buffers bytes, whereas the internal buffer", " // buffers characters. In worst case, the stream buffer must be", " // filled three times to fill the internal character buffer.", " this.in = new BufferedInputStream(in, bufferSize);", " }", " this.csd = new CharacterStreamDescriptor.Builder().", " bufferable(positionedIn == null).", " positionAware(positionedIn != null).", " byteLength(utfLen == 0 ? 0 : utfLen +2). // Add header bytes", " dataOffset(2).curBytePos(2).stream(in).", " build();", " utfCount = 2;", " }", "" ] }, { "added": [], "header": "@@ -684,29 +611,6 @@ readChars:", "removed": [ " /**", " * TODO: Remove this when CSD is fully integrated.", " *", " * Calculates an optimized buffer size.", " * <p>", " * The maximum size allowed is returned if the specified values don't give", " * enough information to say a smaller buffer size is preferable.", " *", " * @param encodedSize data length in bytes", " * @param maxFieldSize maximum data length in bytes", " * @return An (sub)optimal buffer size.", " */", " private final int calculateBufferSize(long encodedSize, long maxFieldSize) {", " int bufferSize = MAXIMUM_BUFFER_SIZE;", " if (encodedSize > 0 && encodedSize < bufferSize) {", " bufferSize = (int)encodedSize;", " }", " if (maxFieldSize > 0 && maxFieldSize < bufferSize) {", " bufferSize = (int)maxFieldSize;", " }", " return bufferSize;", " }", "" ] } ] } ]
derby-DERBY-3934-d9319b8e
DERBY-3934: Improve performance of reading modified Clobs. Added two new methods to InternalClob; - getUpdateCount - isReleased These methods can be used to detect if the contents have been changed, for instance by streams reading from the internal Clob representation. Patch file: derby-3934-2a-intclob_new_methods.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@721162 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/InternalClob.java", "hunks": [ { "added": [ " /**", " * Returns the update count of the Clob.", " * <p>", " * The update count is increased each time a modification of the Clob", " * content is made.", " *", " * @return Update count, starting at zero.", " */", " long getUpdateCount();", "" ], "header": "@@ -109,6 +109,16 @@ interface InternalClob {", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java", "hunks": [ { "added": [ " /**", " * Returns the update count of this Clob.", " * <p>", " * Always returns zero, as this Clob cannot be updated.", " *", " * @return Zero (read-only Clob).", " */", " public long getUpdateCount() {", " return 0L;", " }", "" ], "header": "@@ -249,6 +249,17 @@ final class StoreStreamClob", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/TemporaryClob.java", "hunks": [ { "added": [ " /**", " * Returns the update count of this Clob.", " *", " * @return Update count.", " */", " public long getUpdateCount() {", " return bytes.getUpdateCount();", " }", "" ], "header": "@@ -195,6 +195,15 @@ final class TemporaryClob implements InternalClob {", "removed": [] }, { "added": [ " /**", " * Tells if this Clob has been released.", " *", " * @return {@code true} if released, {@code false} if not.", " */", " public boolean isReleased() {", " return released;", " }", "" ], "header": "@@ -331,6 +340,15 @@ final class TemporaryClob implements InternalClob {", "removed": [] } ] } ]
derby-DERBY-3941-081a08c2
DERBY-3941: Clean up import statements in StoredPage Contributed by Yun Lee. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@765943 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java", "hunks": [ { "added": [ "import java.io.ByteArrayInputStream;", "import java.io.ByteArrayOutputStream;", "import java.io.EOFException;", "import java.io.IOException;", "import java.io.InputStream;", "import java.io.ObjectInput;", "import java.io.OutputStream;", "import java.util.Arrays;", "import java.util.zip.CRC32;", "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.iapi.reference.SQLState;", "import org.apache.derby.iapi.services.io.ArrayInputStream;", "import org.apache.derby.iapi.services.io.ArrayOutputStream;", "import org.apache.derby.iapi.services.io.CompressedNumber;", "import org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream;", "import org.apache.derby.iapi.services.io.ErrorObjectInput;", "import org.apache.derby.iapi.services.io.FormatIdUtil;", "import org.apache.derby.iapi.services.io.FormatableBitSet;", "import org.apache.derby.iapi.services.io.LimitObjectInput;", "import org.apache.derby.iapi.store.access.conglomerate.LogicalUndo;" ], "header": "@@ -21,31 +21,34 @@", "removed": [ "import org.apache.derby.iapi.reference.SQLState;", "", "import org.apache.derby.impl.store.raw.data.BasePage;", "", "import org.apache.derby.impl.store.raw.data.LongColumnException;", "import org.apache.derby.impl.store.raw.data.OverflowInputStream;", "import org.apache.derby.impl.store.raw.data.PageVersion;", "import org.apache.derby.impl.store.raw.data.RecordId;", "import org.apache.derby.impl.store.raw.data.RawField;", "import org.apache.derby.impl.store.raw.data.ReclaimSpace;", "import org.apache.derby.impl.store.raw.data.StoredFieldHeader;", "import org.apache.derby.impl.store.raw.data.StoredRecordHeader;", "import org.apache.derby.iapi.services.io.FormatIdUtil;", "import org.apache.derby.iapi.services.io.TypedFormat;", "", "import org.apache.derby.iapi.store.access.conglomerate.LogicalUndo;", "" ] }, { "added": [], "header": "@@ -55,40 +58,8 @@ 
import org.apache.derby.iapi.store.raw.RawStoreFactory;", "removed": [ "", "import org.apache.derby.iapi.error.StandardException;", "", "", "import org.apache.derby.iapi.types.Orderable;", "", "import org.apache.derby.iapi.services.io.ArrayInputStream;", "import org.apache.derby.iapi.services.io.ArrayOutputStream;", "import org.apache.derby.iapi.services.io.FormatableBitSet;", "import org.apache.derby.iapi.services.io.CompressedNumber;", "import org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream;", "import org.apache.derby.iapi.services.io.DynamicByteArrayOutputStream;", "import org.apache.derby.iapi.services.io.LimitObjectInput;", "import org.apache.derby.iapi.services.io.ErrorObjectInput;", "", "", "import java.util.Arrays;", "import java.util.zip.CRC32;", "", "import java.io.IOException;", "import java.io.EOFException;", "import java.io.Externalizable;", "import java.io.InvalidClassException;", "", "import java.io.ObjectOutput;", "import java.io.ObjectInput;", "import java.io.DataInput;", "import java.io.DataOutput;", "import java.io.InputStream;", "import java.io.OutputStream;", "import java.io.ByteArrayInputStream;", "import java.io.ByteArrayOutputStream;" ] } ] } ]
derby-DERBY-3941-a5d378d2
DERBY-3941: Unsafe use of DataInput.skipBytes() Replaced calls to DataInput.skipBytes() with new utility method DataInputUtil.skipFully(). Patch contributed by Yun Lee. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@766163 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/services/classfile/ClassInvestigator.java", "hunks": [ { "added": [ "import java.io.IOException;", "import java.util.Collections;", "", "import org.apache.derby.iapi.services.io.DataInputUtil;" ], "header": "@@ -22,20 +22,15 @@", "removed": [ "", "import java.io.IOException;", "import java.util.Vector;", "", "import org.apache.derby.iapi.services.classfile.VMDescriptor;", "import org.apache.derby.iapi.services.classfile.VMDescriptor;", "", "import java.util.Enumeration;", "import java.util.Collections;" ] }, { "added": [ "\t\tDataInputUtil.skipFully(ci, 4);// puts us at code_length", "\t\tDataInputUtil.skipFully(ci, len);// puts us at exception_table_length", "\t\t\tDataInputUtil.skipFully(ci, 8 * count);" ], "header": "@@ -305,13 +300,12 @@ public class ClassInvestigator extends ClassHolder {", "removed": [ "", "\t\tci.skipBytes(4); // puts us at code_length", "\t\tci.skipBytes(len); // puts us at exception_table_length", "\t\t\tci.skipBytes(8 * count);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/StoredFieldHeader.java", "hunks": [ { "added": [ "import java.io.IOException;", "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.iapi.services.io.DataInputUtil;", "import org.apache.derby.iapi.services.sanity.SanityManager;" ], "header": "@@ -20,19 +20,17 @@", "removed": [ "import org.apache.derby.iapi.store.raw.RecordHandle;", "import org.apache.derby.iapi.services.sanity.SanityManager;", "", "import java.io.IOException;", "", "import java.io.InputStream;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.io.DataInputUtil;" ], "header": "@@ -36,6 +36,7 @@ import org.apache.derby.iapi.reference.SQLState;", "removed": [] }, { "added": [ " DataInputUtil.skipFully(lrdi, unread);" ], "header": "@@ -4662,7 +4663,7 @@ public class StoredPage extends CachedPage", "removed": [ 
" lrdi.skipBytes(unread);" ] }, { "added": [ " DataInputUtil.skipFully(lrdi, unread);" ], "header": "@@ -4711,7 +4712,7 @@ public class StoredPage extends CachedPage", "removed": [ " lrdi.skipBytes(unread);" ] }, { "added": [ "\t\t\t\t\t\t\tDataInputUtil.skipFully(dataIn, unread);" ], "header": "@@ -5258,7 +5259,7 @@ public class StoredPage extends CachedPage", "removed": [ "\t\t\t\t\t\t\tdataIn.skipBytes(unread);" ] }, { "added": [ "\t\t\t\t\tDataInputUtil.skipFully(dataIn, unread);" ], "header": "@@ -5315,7 +5316,7 @@ public class StoredPage extends CachedPage", "removed": [ "\t\t\t\t\tdataIn.skipBytes(unread);" ] }, { "added": [ " DataInputUtil.skipFully(dataIn, unread);" ], "header": "@@ -5561,7 +5562,7 @@ public class StoredPage extends CachedPage", "removed": [ " dataIn.skipBytes(unread);" ] }, { "added": [ " DataInputUtil.skipFully(dataIn, unread);" ], "header": "@@ -5626,7 +5627,7 @@ public class StoredPage extends CachedPage", "removed": [ " dataIn.skipBytes(unread);" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/unitTests/junit/_Suite.java", "hunks": [ { "added": [ " suite.addTest(DataInputUtilTest.suite());" ], "header": "@@ -58,6 +58,7 @@ public class _Suite extends BaseTestCase {", "removed": [] } ] } ]
derby-DERBY-3944-06ac9fb4
DERBY-3944: Always compile CHECK constraints in the schema of the target table. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@964402 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java", "hunks": [ { "added": [ " CompilerContext compilerContext = getCompilerContext();", " ", "\t\tcompilerContext.pushCurrentPrivType( Authorizer.NULL_PRIV);" ], "header": "@@ -627,8 +627,10 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [ "\t\tgetCompilerContext().pushCurrentPrivType( Authorizer.NULL_PRIV);" ] } ] } ]
derby-DERBY-3945-01aa1762
DERBY-3945: Resolve unqualified function names in generation clauses against the current schema from DDL rather than DML time. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@719123 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/catalog/types/DefaultInfoImpl.java", "hunks": [ { "added": [ " private String originalCurrentSchema;" ], "header": "@@ -56,6 +56,7 @@ public class DefaultInfoImpl implements DefaultInfo, Formatable", "removed": [] }, { "added": [ " String[] referencedColumnNames,", " String originalCurrentSchema" ], "header": "@@ -86,7 +87,8 @@ public class DefaultInfoImpl implements DefaultInfo, Formatable", "removed": [ " String[] referencedColumnNames" ] }, { "added": [ " this.originalCurrentSchema = originalCurrentSchema;" ], "header": "@@ -94,6 +96,7 @@ public class DefaultInfoImpl implements DefaultInfo, Formatable", "removed": [] }, { "added": [ "\t/**", "\t * @see DefaultInfo#getOriginalCurrentSchema", "\t */", "\tpublic String getOriginalCurrentSchema()", "\t{", "\t\treturn originalCurrentSchema;", "\t}", "" ], "header": "@@ -112,6 +115,14 @@ public class DefaultInfoImpl implements DefaultInfo, Formatable", "removed": [] }, { "added": [ " originalCurrentSchema = (String) in.readObject();" ], "header": "@@ -146,6 +157,7 @@ public class DefaultInfoImpl implements DefaultInfo, Formatable", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java", "hunks": [ { "added": [ "import org.apache.derby.catalog.DefaultInfo;" ], "header": "@@ -27,6 +27,7 @@ import java.util.Hashtable;", "removed": [] }, { "added": [ "\t\tCompilerContext \t\t\tcompilerContext = getCompilerContext();" ], "header": "@@ -496,6 +497,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [] }, { "added": [ " DefaultInfo di = colDesc.getDefaultInfo();", " ValueNode generationClause = parseGenerationClause( di.getDefaultText(), targetTableDescriptor );" ], "header": "@@ -514,7 +516,8 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [ " ValueNode generationClause = parseGenerationClause( colDesc.getDefaultInfo().getDefaultText(), targetTableDescriptor );" ] } ] } ]
derby-DERBY-3947-e594ab0b
DERBY-3947: Cannot insert 994 char string into indexed column A table created with "CREATE TABLE t (x varchar(1000) primary key)" could encounter problems when a particularly long value of "x" was inserted, because the index that was automatically created to support the PRIMARY KEY constraint was created with a small page size. Such an insert statement would get an error like: "Limitation: Record of a btree secondary index cannot be updated or inserted due to lack of space on the page." This change enhances TableElementList so that, when creating an index for a constraint, it now checks the approximate length of the columns in the index, and, if they are sufficiently long, automatically chooses a larger default page size for the index conglomerate. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@886162 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/TableElementList.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.reference.Property;", "import org.apache.derby.iapi.services.property.PropertyUtil;" ], "header": "@@ -25,6 +25,8 @@ import org.apache.derby.iapi.services.io.FormatableBitSet;", "removed": [] }, { "added": [ "import java.util.Properties;" ], "header": "@@ -61,6 +63,7 @@ import org.apache.derby.catalog.UUID;", "removed": [] } ] } ]
derby-DERBY-3948-6a17f800
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements Reject references to generated columns in the NEW transition variables of BEFORE triggers, as required by the SQL standard. See also DERBY-3948. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1527489 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3948-e7354480
DERBY-3948: Forbid references to generated columns in the NEW variable of BEFORE triggers. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@718707 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/CreateTriggerNode.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.dictionary.ColumnDescriptorList;" ], "header": "@@ -34,6 +34,7 @@ import org.apache.derby.iapi.sql.compile.CompilerContext;", "removed": [] }, { "added": [ " // the actions of before triggers may not reference generated columns", " if ( isBefore ) { forbidActionsOnGenCols(); }", " " ], "header": "@@ -369,6 +370,9 @@ public class CreateTriggerNode extends DDLStatementNode", "removed": [] } ] } ]
derby-DERBY-395-22ccbb42
DERBY-395 Server-side "trace on" and "trace off" commands do not appear to be working correctly. Contributed by Bryan Pendleton Attached is a proposed fix. derbyall passed. I put a small comment in the code. I didn't add any new tests, which is unfortunate, but I didn't have any brilliant inside about an easy way to add such tests. Testing with the server tracing is already somewhat of a PITA because the server tracing interacts poorly with things like the security manager. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@373291 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/impl/drda/ClientThread.java", "hunks": [ { "added": [], "header": "@@ -30,8 +30,6 @@ final class ClientThread extends Thread {", "removed": [ "\tprivate String traceDir;", "\tprivate boolean traceAll;" ] }, { "added": [], "header": "@@ -42,8 +40,6 @@ final class ClientThread extends Thread {", "removed": [ "\t\t\ttraceDir=parent.getTraceDirectory();", "\t\t\ttraceAll=parent.getTraceAll();" ] }, { "added": [ "\t\t\t\t// Note that we always re-fetch the tracing", "\t\t\t\t// configuration from the parent, because it", "\t\t\t\t// may have changed (there are administrative", "\t\t\t\t// commands which allow dynamic tracing", "\t\t\t\t// reconfiguration).", "\t\t\t\t\tparent.getTraceDirectory(),", "\t\t\t\t\tparent.getTraceAll());" ], "header": "@@ -87,8 +83,14 @@ final class ClientThread extends Thread {", "removed": [ "\t\t\t\t\ttraceDir, traceAll);" ] } ] } ]
derby-DERBY-3950-97a8b1c7
DERBY-3950: Prevent driving SELECTs from overriding the values of generated columns. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@718381 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumn.java", "hunks": [ { "added": [ " private boolean wasDefault;" ], "header": "@@ -88,6 +88,7 @@ public class ResultColumn extends ValueNode", "removed": [] }, { "added": [ "\t/**", "\t * Returns TRUE if the ResultColumn used to stand in for a DEFAULT keyword in", "\t * an insert/update statement.", "\t */", "\tpublic boolean wasDefaultColumn()", "\t{", "\t\treturn wasDefault;", "\t}", "", "\tpublic void setWasDefaultColumn(boolean value)", "\t{", "\t\twasDefault = value;", "\t}", "" ], "header": "@@ -207,6 +208,20 @@ public class ResultColumn extends ValueNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java", "hunks": [ { "added": [], "header": "@@ -3919,7 +3919,6 @@ public class ResultColumnList extends QueryTreeNodeVector", "removed": [ "" ] }, { "added": [ " rc.setWasDefaultColumn( true );", " rc.setDefaultColumn(false);" ], "header": "@@ -3927,8 +3926,9 @@ public class ResultColumnList extends QueryTreeNodeVector", "removed": [ "\t\t\t\trc.setDefaultColumn(false);" ] }, { "added": [ "\t * check if any autoincrement or generated columns exist in the result column list.", "\t * of a generated or autoincrement column.", "\tpublic void forbidOverrides(ResultColumnList sourceRSRCL)" ], "header": "@@ -4056,13 +4056,13 @@ public class ResultColumnList extends QueryTreeNodeVector", "removed": [ "\t * check if any autoincrement columns exist in the result column list.", "\t * of an autoincrement column.", "\tpublic void checkAutoincrement(ResultColumnList sourceRSRCL)" ] } ] } ]
derby-DERBY-3955-1b72b60b
DERBY-3955; test lang/selectivity.sql can be revived committing patch 3 - which adds the remaining test cases from selectivity.sql to SelectivityTest.java git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1560247 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3956-f9cb8886
DERBY-3956: Remove method TemplateRow.checkPartialColumnTypes Deleted the method, which always returned true and was only called from withing debug blocks (sane builds). Patch file: derby-3956-1a.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@719008 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java", "hunks": [ { "added": [], "header": "@@ -1541,10 +1541,6 @@ public abstract class BTreeScan extends OpenBTree implements ScanManager", "removed": [ " ", " TemplateRow.checkPartialColumnTypes(", " this.getConglomerate().format_ids, ", " init_scanColumnList, (int []) null, row);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/conglomerate/TemplateRow.java", "hunks": [ { "added": [], "header": "@@ -21,8 +21,6 @@", "removed": [ "import org.apache.derby.iapi.reference.SQLState;", "" ] }, { "added": [], "header": "@@ -319,29 +317,6 @@ public final class TemplateRow", "removed": [ " return(ret_val);", "\t}", "", " /**", " * Check that columns in the row conform to a set of format id's, ", " * both in number and type.", " *", "\t * @return boolean indicating if template matches format id's", " *", " * @param format_ids array of format ids which are the types of cols in row", " * @param row the array of columns that make up the row.", " *", "\t * @exception StandardException Standard exception policy.", " **/", "\tstatic public boolean checkPartialColumnTypes(", " int[] format_ids, ", " FormatableBitSet validColumns,", " int[] fieldStates,", " DataValueDescriptor[] row)", "\t\tthrows StandardException", "\t{", " boolean ret_val = true;", "" ] } ] } ]
derby-DERBY-3964-03972928
DERBY-3964: Fix NPE in evaluation of generated columns while processing an ON DELETE SET NULL referential action. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@722623 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java", "hunks": [ { "added": [], "header": "@@ -575,7 +575,6 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [ " " ] }, { "added": [ " * @param isUpdate true if this is for an UPDATE statement" ], "header": "@@ -1582,6 +1581,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [] }, { "added": [ " boolean isUpdate," ], "header": "@@ -1591,6 +1591,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [] }, { "added": [ "\t\t\tMethodBuilder\tuserExprFun = generateGenerationClauses( rcl, resultSetNumber, isUpdate, ecb);" ], "header": "@@ -1637,7 +1638,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [ "\t\t\tMethodBuilder\tuserExprFun = generateGenerationClauses( rcl, resultSetNumber, ecb);" ] }, { "added": [ " * @param isUpdate true if this is for an UPDATE statement" ], "header": "@@ -1651,6 +1652,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [] }, { "added": [ " boolean isUpdate," ], "header": "@@ -1658,6 +1660,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [] }, { "added": [ "" ], "header": "@@ -1666,7 +1669,7 @@ abstract class DMLModStatementNode extends DMLStatementNode", "removed": [ "\t\t" ] } ] } ]
derby-DERBY-3966-99494f14
DERBY-3966: Make 1.4 JDK optional when building Derby. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@726092 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyPreBuild/PropertySetter.java", "hunks": [ { "added": [ " if ( j14lib != null ) { setClasspathFromLib(J14CLASSPATH, j14lib, true ); }", " if ( j15lib != null ) { setClasspathFromLib(J15CLASSPATH, j15lib, true ); }" ], "header": "@@ -204,8 +204,8 @@ public class PropertySetter extends Task", "removed": [ " if ( j14lib != null ) { setClasspathFromLib(J14CLASSPATH, j14lib ); }", " if ( j15lib != null ) { setClasspathFromLib(J15CLASSPATH, j15lib ); }" ] }, { "added": [ " //", " // We now allow J14CLASSPATH to not be set. If a 1.4 JDK can't be found,", " // then the calling script will set J14CLASSPATH, based on J15CLASSPATH.", " //", "" ], "header": "@@ -236,8 +236,12 @@ public class PropertySetter extends Task", "removed": [ " requireProperty( J14CLASSPATH );" ] }, { "added": [ " { default_j14lib = searchForJreLib(jdkParents, seed14, false ); }", " { default_j15lib = searchForJreLib(jdkParents, seed15, true ); }" ], "header": "@@ -313,10 +317,10 @@ public class PropertySetter extends Task", "removed": [ " { default_j14lib = searchForJreLib(jdkParents, seed14); }", " { default_j15lib = searchForJreLib(jdkParents, seed15); }" ] }, { "added": [ " private String searchForJreLib(List<File> parents, String seed, boolean squawkIfEmpty) {", " String jreLib = getJreLib(parent, seed, squawkIfEmpty);" ], "header": "@@ -328,9 +332,9 @@ public class PropertySetter extends Task", "removed": [ " private String searchForJreLib(List<File> parents, String seed) {", " String jreLib = getJreLib(parent, seed);" ] }, { "added": [ " private String getJreLib( File jdkParentDirectory, String jdkName, boolean squawkIfEmpty )" ], "header": "@@ -407,7 +411,7 @@ public class PropertySetter extends Task", "removed": [ " private String getJreLib( File jdkParentDirectory, String jdkName )" ] }, { "added": [ " if ( squawkIfEmpty )", " { echo( \"Directory '\" + jdkParentDirectory.getAbsolutePath() + \"' does not have any child directories containing the string 
'\" + jdkName + \"'.\" ); }", " " ], "header": "@@ -417,7 +421,9 @@ public class PropertySetter extends Task", "removed": [ " echo( \"Directory '\" + jdkParentDirectory.getAbsolutePath() + \"' does not have any child directories containing the string '\" + jdkName + \"'.\" );" ] }, { "added": [ " setClasspathFromLib( J14CLASSPATH, j14lib, false );", " setClasspathFromLib( J15CLASSPATH, j15lib, true );" ], "header": "@@ -458,8 +464,8 @@ public class PropertySetter extends Task", "removed": [ " setClasspathFromLib( J14CLASSPATH, j14lib );", " setClasspathFromLib( J15CLASSPATH, j15lib );" ] }, { "added": [ " private void setClasspathFromLib( String classpathProperty, String libraryDirectory, boolean squawkIfEmpty )" ], "header": "@@ -469,7 +475,7 @@ public class PropertySetter extends Task", "removed": [ " private void setClasspathFromLib( String classpathProperty, String libraryDirectory )" ] }, { "added": [ " String jars = listJars( libraryDirectory, squawkIfEmpty );", " if ( squawkIfEmpty && (jars == null) )", " if ( jars != null ) { setProperty( classpathProperty, jars ); }" ], "header": "@@ -477,14 +483,14 @@ public class PropertySetter extends Task", "removed": [ " String jars = listJars( libraryDirectory );", " if ( jars == null )", " setProperty( classpathProperty, jars );" ] }, { "added": [ " private String listJars( String dirName, boolean squawkIfEmpty )" ], "header": "@@ -495,7 +501,7 @@ public class PropertySetter extends Task", "removed": [ " private String listJars( String dirName )" ] } ] } ]
derby-DERBY-3969-b8b524c0
DERBY-3969: Fix NPEs when declaring constraints on generated columns without explicit datatypes. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@723184 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java", "hunks": [ { "added": [ "\t{", " return getResultColumn( columnName, true );", "\t}", "", "\t/**", "\t * Get a ResultColumn that matches the specified columnName. If requested", "\t * to, mark the column as referenced.", "\t *", "\t * @param columnName\tThe ResultColumn to get from the list", "\t * @param markIfReferenced True if we should mark this column as referenced.", "\t *", "\t * @return\tthe column that matches that name.", "\t */", "", "\tpublic ResultColumn getResultColumn(String columnName, boolean markIfReferenced )" ], "header": "@@ -277,6 +277,21 @@ public class ResultColumnList extends QueryTreeNodeVector", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/TableElementList.java", "hunks": [ { "added": [ " // validation of primary key nullability moved to validatePrimaryKeyNullability()." ], "header": "@@ -344,21 +344,11 @@ public class TableElementList extends QueryTreeNodeVector", "removed": [ "", " if (td == null)", " {", " // in CREATE TABLE so set PRIMARY KEY columns to NOT NULL", " setColumnListToNotNull(cdn);", " }", " else", " {", " // in ALTER TABLE so raise error if any columns are nullable", " checkForNullColumns(cdn, td);", " }" ] }, { "added": [ " /**", "\t * Validate nullability of primary keys. This logic was moved out of the main validate", "\t * method so that it can be called after binding generation clauses. We need", "\t * to perform the nullability checks later on because the datatype may be", "\t * omitted on the generation clause--we can't set/vet the nullability of the", "\t * datatype until we determine what the datatype is.", "\t */", " public void validatePrimaryKeyNullability()", " throws StandardException", " {", "\t\tint\t\t\tsize = size();", "\t\tfor (int index = 0; index < size; index++)", "\t\t{", "\t\t\tTableElementNode tableElement = (TableElementNode) elementAt(index);", "", "\t\t\tif (! 
(tableElement.hasConstraint()))", "\t\t\t{", "\t\t\t\tcontinue;", "\t\t\t}", " ", "\t\t\tConstraintDefinitionNode cdn = (ConstraintDefinitionNode) tableElement;", "", " if (cdn.hasPrimaryKeyConstraint())", " {", " if (td == null)", " {", " // in CREATE TABLE so set PRIMARY KEY columns to NOT NULL", " setColumnListToNotNull(cdn);", " }", " else", " {", " // in ALTER TABLE so raise error if any columns are nullable", " checkForNullColumns(cdn, td);", " }", " }", " }", " }", " " ], "header": "@@ -386,6 +376,44 @@ public class TableElementList extends QueryTreeNodeVector", "removed": [] }, { "added": [ " ResultColumnList tableColumns = table.getResultColumns();" ], "header": "@@ -721,6 +749,7 @@ public class TableElementList extends QueryTreeNodeVector", "removed": [] } ] } ]
derby-DERBY-3970-dd2650ff
DERBY-3970: PositionedStoreStream doesn't initialize itself properly. Makes PositionedStoreStream initialize itself properly by calling initStream and resetStream on the underlying Resetable. Patch file: derby-3970-1a-PositionedStoreStream_init.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@722812 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedBlob.java", "hunks": [ { "added": [ " myStream = new PositionedStoreStream(dvdStream);" ], "header": "@@ -195,18 +195,8 @@ final class EmbedBlob extends ConnectionChild implements Blob, EngineLOB", "removed": [ " myStream = new PositionedStoreStream(dvdStream);", " try {", " myStream.initStream();", " } catch (StandardException se) {", " if (se.getMessageId().equals(SQLState.DATA_CONTAINER_CLOSED)) {", " throw StandardException", " .newException(SQLState.BLOB_ACCESSED_AFTER_COMMIT);", " } else {", " throw se;", " }", " }" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/PositionedStoreStream.java", "hunks": [ { "added": [ " * <p>", " * Upon creation, the underlying stream is initiated and reset to make", " * sure the states of the streams are in sync with each other.", " public PositionedStoreStream(InputStream in)", " throws IOException, StandardException {", " // We need to know the stream is in a consistent state.", " ((Resetable)in).initStream();", " ((Resetable)in).resetStream();" ], "header": "@@ -76,11 +76,18 @@ public class PositionedStoreStream", "removed": [ " public PositionedStoreStream(InputStream in) {" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/StoreStreamClob.java", "hunks": [ { "added": [ " try {", " this.positionedStoreStream = new PositionedStoreStream(stream);", " } catch (StandardException se) {", " if (se.getMessageId().equals(SQLState.DATA_CONTAINER_CLOSED)) {", " throw StandardException", " .newException(SQLState.BLOB_ACCESSED_AFTER_COMMIT);", " } else {", " throw se;", " }", " } catch (IOException ioe) {", " throw StandardException.newException(", " SQLState.LANG_STREAMING_COLUMN_I_O_EXCEPTION, ioe, \"CLOB\");", " }" ], "header": "@@ -113,10 +113,21 @@ final class StoreStreamClob", "removed": [ " this.positionedStoreStream = new PositionedStoreStream(stream);", " this.positionedStoreStream.initStream();" ] } ] } ]
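The pattern in the DERBY-3970 patch — bringing the wrapped stream to a known state inside the constructor so the wrapper's cached position can be trusted — can be sketched outside Derby. Everything below (the `Resetable` stand-in, `PositionedStream`, `ResetableBytes`) is illustrative and is not Derby's actual API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PositionedStream {
    // Stand-in for Derby's store-level Resetable contract (illustrative).
    public interface Resetable {
        void initStream() throws IOException;
        void resetStream() throws IOException;
    }

    // A resettable byte source for demonstration purposes.
    public static class ResetableBytes extends ByteArrayInputStream
            implements Resetable {
        public boolean initialized;
        public ResetableBytes(byte[] buf) { super(buf); }
        public void initStream() { initialized = true; }
        public void resetStream() { reset(); }
    }

    private final InputStream src;
    private long pos;

    // The idea of the fix: initialize and reset the underlying resettable
    // stream at construction, so the wrapper's position (0) really matches
    // where the source stands. The cast mirrors Derby's requirement that
    // the wrapped stream implement Resetable.
    public PositionedStream(InputStream src) throws IOException {
        ((Resetable) src).initStream();
        ((Resetable) src).resetStream();
        this.src = src;
        this.pos = 0L;
    }

    public int read() throws IOException {
        int b = src.read();
        if (b != -1) {
            pos++;
        }
        return b;
    }

    public long position() { return pos; }
}
```

Because initStream()/resetStream() run before any read, the wrapper never starts out disagreeing with its source about where position 0 is — which is the inconsistency this patch removed.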
derby-DERBY-3972-7b9c4ca8
DERBY-3972; patch 3 modifies 2 tests to optionally take a property for a different initial context factory than sun's git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@741227 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3975-92941262
DERBY-3975: SELECT DISTINCT may return duplicates with territory-based collation Made the implementation of hashCode() in the collation-sensitive subclasses of SQLChar consistent with the collation-sensitive implementations of equals(), compareTo() and stringCompare(). Also extended CollationTest with tests for SELECT DISTINCT on strings that contain different characters but are considered equal in French locale. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@728822 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(!(this instanceof CollationElementsInterface),", " \"SQLChar.hashCode() does not work with collation\");", " }", "" ], "header": "@@ -2655,6 +2655,11 @@ readingLoop:", "removed": [] } ] } ]
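The contract this fix restores — hashCode() must agree with a collation-sensitive equals() — can be modeled with java.text.Collator instead of Derby's internal types. The class below is a hypothetical sketch, not Derby's implementation; the key point is deriving the hash from the collation key rather than the raw characters:

```java
import java.text.CollationKey;
import java.text.Collator;

public class CollationSensitiveValue {
    private final Collator collator;
    private final String value;

    public CollationSensitiveValue(String value, Collator collator) {
        this.value = value;
        this.collator = collator;
    }

    // equals() is collation-sensitive: two different character sequences
    // may still compare equal under the collator.
    @Override public boolean equals(Object o) {
        return o instanceof CollationSensitiveValue
            && collator.compare(value, ((CollationSensitiveValue) o).value) == 0;
    }

    // hashCode() must agree with equals(), so it is derived from the
    // collation key, which reflects the collator's strength setting.
    @Override public int hashCode() {
        CollationKey key = collator.getCollationKey(value);
        return key.hashCode();
    }
}
```

With a PRIMARY-strength French collator, "peche" and "p\u00EAche" compare equal and now also land in the same hash bucket — which is what makes hash-based duplicate elimination (SELECT DISTINCT) correct.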
derby-DERBY-3977-4d20e641
DERBY-3977: Clob.truncate with a value greater than the Clob length raises different exceptions in embedded and client driver. Changed the exception thrown in the embedded driver when calling Clob.truncate with a length greater than the Clob length from XJ076 to XJ079 (see also release note). The client and the embedded driver behavior is now consistent. Updated tests. Patch file: derby-3977-1a-change_emb_exception.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@726695 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3978-a63282c4
DERBY-3978: Clob.truncate(long) in the client driver doesn't update the cached Clob length. Make the client driver update the length of the Clob when it has been truncated. Added a few tests. Patch file: derby-3978-1a-regression_tests.diff, derby-3978-2a-update_length.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@724657 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3980-11850ac0
DERBY-3980 Conflicting select then update with REPEATABLE_READ gives lock timeout instead of deadlock Javadoc changes only, plus a new test for the issue that cannot be enabled until we have a fix. This check-in does not fix the issue in any way. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@726121 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/services/locks/Deadlock.java", "hunks": [ { "added": [ "\t * Walk through the graph of all locks and search for cycles among", "\t * the waiting lock requests which would indicate a deadlock. A simple", "\t * deadlock cycle is where the granted locks of waiting compatibility", "\t * space A is blocking compatibility space B and space B holds locks causing", "\t * space A to wait.", "\t * <p>", "\t * Would be nice to get a better high level description of deadlock", "\t * search.", "\t * <p> " ], "header": "@@ -48,6 +48,15 @@ class Deadlock {", "removed": [] } ] } ]
derby-DERBY-3980-e0699eac
DERBY-3980: Conflicting select then update with REPEATABLE_READ gives lock timeout instead of deadlock DERBY-5073: Derby deadlocks without recourse on simultaneous correlated subqueries Added more comments describing the deadlock detection algorithm. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1084561 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/services/locks/Deadlock.java", "hunks": [ { "added": [ " * <p>", " * Code to support deadlock detection.", " * </p>", " *", " * <p>", " * This class implements deadlock detection by searching for cycles in the", " * wait graph. If a cycle is found, it means that (at least) two transactions", " * are blocked by each other, and one of them must be aborted to allow the", " * other one to continue.", " * </p>", " *", " * <p>", " * The wait graph is obtained by asking the {@code LockSet} instance to", " * provide a map representing all wait relations, see {@link #getWaiters}.", " * The map consists of two distinct sets of (key, value) pairs:", " * </p>", " *", " * <ol>", " * <li>(space, lock) pairs, where {@code space} is the compatibility space", " * of a waiting transaction and {@code lock} is the {@code ActiveLock}", " * instance on which the transaction is waiting</li>", " * <li>(lock, prevLock) pairs, where {@code lock} is an {@code ActiveLock} and", " * {@code prevLock} is the {@code ActiveLock} or {@code LockControl} for the", " * first waiter in the queue behind {@code lock}</li>", " * </ol>", " *", " * <p>", " * The search is performed as a depth-first search starting from the lock", " * request of a waiter that has been awoken for deadlock detection (either", " * because {@code derby.locks.deadlockTimeout} has expired or because some", " * other waiter had picked it as a victim in order to break a deadlock).", " * From this lock request, the wait graph is traversed by checking which", " * transactions have already been granted a lock on the object, and who they", " * are waiting for.", " * </p>", " *", " * <p>", " * The state of the search is maintained by pushing compatibility spaces", " * (representing waiting transactions) and granted locks onto a stack. 
When a", " * dead end is found (that is, a transaction that holds locks without waiting", " * for any other transaction), the stack is popped and the search continues", " * down a different path. This continues until a cycle is found or the stack is", " * empty. Detection of cycles happens when pushing a new compatibility space", " * onto the stack. If the same space already exists on the stack, it means the", " * graph has a cycle and we have a deadlock.", " * </p>", " *", " * <p>", " * When a deadlock is found, one of the waiters in the deadlock cycle is awoken", " * and it will terminate itself, unless it finds that the deadlock has been", " * broken in the meantime, for example because one of the involved waiters", " * has timed out.", " * </p>", " */", " * <p>", " * </p>", "\t *", " * <p>", " * </p>", " *", "\t * to satisfy the synchronization requirements of", " * </p>" ], "header": "@@ -38,33 +38,88 @@ import java.util.Stack;", "removed": [ "\tCode to support deadlock detection.", "*/", "\t * <BR>", "\t * <p>", "\t * Would be nice to get a better high level description of deadlock", "\t * search.", "\t * to satisfy the syncronization requirements of" ] }, { "added": [ " // All paths from the initial waiting lock request have been", " // examined without finding a deadlock. We're done.", " // All granted locks in this lock control have been examined.", "", " // Pick one of the granted lock for examination. rollback()", " // expects us to have examined the last one in the list, so", " // always pick that one." ], "header": "@@ -107,16 +162,22 @@ class Deadlock {", "removed": [ "\t\t\t\t// all done" ] }, { "added": [ " // Oops... The space has been examined once before, so", " // we have what appears to be a cycle in the wait graph.", " // In most cases this means we have a deadlock.", " //", " // However, in some cases, the cycle in the graph may be", " // an illusion. 
For example, we could have a situation", " // here like this:", " //", " // In this case it's not necessarily a deadlock. If the", " // Lockable returns true from its lockerAlwaysCompatible()", " // method, which means that lock requests within the same", " // compatibility space never conflict with each other,", " // T1 is only waiting for T2 to release its shared lock.", " // T2 isn't waiting for anyone, so there is no deadlock.", " //", " // This is only true if T1 is the first one waiting for", " // a lock on the object. If there are other waiters in", " // between, we have a deadlock regardless of what", " // lockerAlwaysCompatible() returns. Take for example this", " // similar scenario, where T3 is also waiting:", " //", " // Granted T1{S}, T2{S}", " // Waiting T3{X}", " // Waiting T1{X} - deadlock checking on this", " //", " // Here, T1 is stuck behind T3, and T3 is waiting for T1,", " // so we have a deadlock.", " // The two identical compatibility spaces were right", " // next to each other on the stack. This means we have", " // the first scenario described above, with the first", " // waiter already having a lock on the object. It is a" ], "header": "@@ -135,22 +196,45 @@ outer:\tfor (;;) {", "removed": [ "", "\t\t\t\t\t// We could be seeing a situation here like", "\t\t\t\t\t// In this case it's not a deadlock, although it", "\t\t\t\t\t// depends on the locking policy of the Lockable. E.g.", "\t\t\t\t\t// Granted T1(latch)", "\t\t\t\t\t// Waiting T1(latch)", "\t\t\t\t\t// is a deadlock.", "\t\t\t\t\t//" ] }, { "added": [ " // So it wasn't an illusion after all. Pick a victim.", "", " // Otherwise... 
The space hasn't been examined yet, so put it", " // on the stack and start examining it.", " // Who is this space waiting for?", " // The space isn't waiting for anyone, so we're at the" ], "header": "@@ -163,14 +247,20 @@ inner:\t\tfor (;;) {", "removed": [] }, { "added": [ " // Push all the granted locks on this object onto the", " // stack, and go ahead examining them one by one.", " // Set up the next space for examination.", " // Now, there is a possibility that we're not actually", " // waiting behind the other other waiter. Take for", " // example this scenario:", " //", " // Granted T1{X}", " // Waiting T2{S}", " // Waiting T3{S} - deadlock checking on this", " //", " // Here, T3 isn't blocked by T2. As soon as T1 releases", " // its X lock on the object, both T2 and T3 will be", " // granted an S lock. And if T1 also turns out to be", " // blocked by T3 and we have a deadlock, aborting T2", " // won't resolve the deadlock, so it's not actually", " // part of the deadlock. If we have this scenario, we", " // just skip past T2's space and consider T3 to be", " // waiting on T1 directly.", "", " // We're behind another waiter with a compatible", " // lock request. Skip it since we're not really", " // blocked by it.", " // We are really blocked by the other waiter. Go", " // ahead and investigate its compatibility space." ], "header": "@@ -196,25 +286,44 @@ inner:\t\tfor (;;) {", "removed": [ "", " // We're behind another waiter in the queue, but we", " // request compatible locks, so we'll get the lock", " // too once it gets it. Since we're not actually", " // blocked by the waiter, skip it and see what's", " // blocking it instead." ] }, { "added": [ " /**", " * Backtrack in the depth-first search through the wait graph. Expect", " * the top of the stack to hold the compatibility space we've just", " * investigated. 
Pop the stack until the most recently examined granted", " * lock has been removed.", " *", " * @param chain the stack representing the state of the search", " */" ], "header": "@@ -225,6 +334,14 @@ inner:\t\tfor (;;) {", "removed": [] }, { "added": [ " /**", " * Get all the waiters in a {@code LockTable}. The waiters are returned", " * as pairs (space, lock) mapping waiting compatibility spaces to the", " * lock request in which they are blocked, and (lock, prevLock) linking", " * a lock request with the lock request that's behind it in the queue of", " * waiters.", " *", " * @param set the lock table", " * @return all waiters in the lock table", " * @see LockControl#addWaiters(java.util.Map)", " */", " /**", " * Handle a deadlock when it has been detected. Find out if the waiter", " * that started looking for the deadlock is involved in it. If it isn't,", " * pick a victim among the waiters that are involved.", " *", " * @return {@code null} if the waiter that started looking for the deadlock", " * isn't involved in the deadlock (in which case another victim will have", " * been picked and awoken), or an array describing the deadlock otherwise", " */" ], "header": "@@ -237,12 +354,32 @@ inner:\t\tfor (;;) {", "removed": [] }, { "added": [ " /**", " * Build an exception that describes a deadlock.", " *", " * @param factory the lock factory requesting the exception", " * @param data an array with information about who's involved in", " * a deadlock (as returned by {@link #handle})", " * @return a deadlock exception", " */" ], "header": "@@ -291,6 +428,14 @@ inner:\t\tfor (;;) {", "removed": [] } ] } ]
derby-DERBY-3981-1aa5b64d
DERBY-3981: Improve distribution of hash codes in SQLBinary and SQLChar git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@731929 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/SQLChar.java", "hunks": [ { "added": [ " /**", " * The pad character (space).", " */", " private static final char PAD = '\\u0020';", "" ], "header": "@@ -112,6 +112,11 @@ public class SQLChar", "removed": [] }, { "added": [ " * o Calculate the hash code based on the characters from the", " * start up to the first non-blank character from the right.", " int lastNonPadChar = lvalue.length() - 1;", " while (lastNonPadChar >= 0 && lvalue.charAt(lastNonPadChar) == PAD) {", " lastNonPadChar--;", " // Build the hash code. It should be identical to what we get from", " // lvalue.substring(0, lastNonPadChar+1).hashCode(), but it should be", " // cheaper this way since we don't allocate a new string.", " int hashcode = 0;", " for (int i = 0; i <= lastNonPadChar; i++) {", " hashcode = hashcode * 31 + lvalue.charAt(i);" ], "header": "@@ -2681,27 +2686,25 @@ readingLoop:", "removed": [ " * o Add up the characters from that character to the 1st in", " * the string and return that as the hash code.", " int index;", " int hashcode = 0;", " for (index = lvalue.length() - 1; ", " index >= 0 && lvalue.charAt(index) == ' '; ", " index--)", " {", " ;", " // Build the hash code", " for ( ; index >= 0; index--)", " {", " hashcode += lvalue.charAt(index);" ] } ] } ]
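The new hash in the diff above is a 31-based polynomial over the characters up to the last non-blank, replacing a plain character sum. A standalone sketch of the same arithmetic (not Derby's class, just the computation):

```java
public class PaddedStringHash {
    private static final char PAD = '\u0020';

    // Polynomial hash over the characters up to the last non-pad character.
    // Equivalent to s.substring(0, lastNonPad + 1).hashCode(), but without
    // allocating the intermediate String.
    public static int hash(String s) {
        int last = s.length() - 1;
        while (last >= 0 && s.charAt(last) == PAD) {
            last--;
        }
        int h = 0;
        for (int i = 0; i <= last; i++) {
            h = h * 31 + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        // Trailing blanks are ignored, matching CHAR padding semantics.
        System.out.println(hash("abc") == hash("abc   "));   // true
        // Matches String.hashCode() on the trimmed value.
        System.out.println(hash("abc") == "abc".hashCode()); // true
        // Unlike a plain character sum, permutations hash differently.
        System.out.println(hash("ab") == hash("ba"));        // false
    }
}
```

Summing characters makes all permutations of the same characters collide — every CHAR(5) value drawn from a small alphabet can land in one bucket, the worst case DERBY-3981's performance test exercises — while multiplying by 31 per position breaks that symmetry.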
derby-DERBY-3981-dd584838
DERBY-3981: Improve distribution of hash codes in SQLBinary and SQLChar Added a performance test which shows the worst-case behaviour of the current hash function. The test performs SELECT DISTINCT on a CHAR(5) column and on a CHAR(5) FOR BIT DATA column. The current hash function maps all the rows in the test to the same bucket and the hash table used for elimination of duplicates becomes a linked list. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@730689 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3982-daa4827a
DERBY-3982: Commit Ole's DERBY-3982_p2_diff.txt patch, which makes it easy to specify patch releases as starting points for the upgrade tests. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@728024 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3988-636e8e5f
DERBY-3988: Second attempt to always build the JDBC4 support when compiling Derby. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@728693 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyPreBuild/PropertySetter.java", "hunks": [ { "added": [ " private static final String J16LIB = \"j16lib\";", " private static final String J16CLASSPATH = \"java16compile.classpath\";" ], "header": "@@ -96,6 +96,8 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ "", " private static final String JAVA_5 = \"1.5\";" ], "header": "@@ -108,7 +110,8 @@ public class PropertySetter extends Task", "removed": [ " " ] }, { "added": [ " //", " // Check for settings which are known to cause problems.", " //", " checkForProblematicSettings();", " " ], "header": "@@ -192,6 +195,11 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " else if ( usingIBMjdk( jdkVendor ) ) { setForIbmJDKs(); }" ], "header": "@@ -219,7 +227,7 @@ public class PropertySetter extends Task", "removed": [ " else if ( JDK_IBM.equals( jdkVendor ) ) { setForIbmJDKs(); }" ] } ] } ]
derby-DERBY-3989-a8132ce4
DERBY-3989 / DERBY-4699 Made PropertySetter ignore Java 6 libraries if a Java 5 compiler is used. If j16lib is specified explicitly in such an environment, the build will be aborted (an error message will be displayed to the user). Patch file: derby-3989-02-aa-dontUseJava6LibsWithJava5Compiler.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@954421 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyPreBuild/PropertySetter.java", "hunks": [ { "added": [ " if ( j14lib != null ) {", " debug(\"'j14lib' explicitly set to '\" + j14lib + \"'\");", " setClasspathFromLib(J14CLASSPATH, j14lib, true );", " }", " if ( j15lib != null ) {", " debug(\"'j15lib' explicitly set to '\" + j15lib + \"'\");", " setClasspathFromLib(J15CLASSPATH, j15lib, true );", " }", " if ( j16lib != null ) {", " debug(\"'j16lib' explicitly set to '\" + j16lib + \"'\");", " setClasspathFromLib(J16CLASSPATH, j16lib, true );", " }" ], "header": "@@ -274,9 +274,18 @@ public class PropertySetter extends Task", "removed": [ " if ( j14lib != null ) { setClasspathFromLib(J14CLASSPATH, j14lib, true ); }", " if ( j15lib != null ) { setClasspathFromLib(J15CLASSPATH, j15lib, true ); }", " if ( j16lib != null ) { setClasspathFromLib(J16CLASSPATH, j16lib, true ); }" ] }, { "added": [ " debug(\"\\nSelecting JDK candidates:\");" ], "header": "@@ -405,6 +414,7 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " debug(\"\\nLocating JDKs:\");" ], "header": "@@ -577,6 +587,7 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " debug(\"Candidate JDK for specification version \" +" ], "header": "@@ -733,7 +744,7 @@ public class PropertySetter extends Task", "removed": [ " debug(\"Chosen JDK for specification version \" +" ] }, { "added": [ " \"\\nThe build raises version mismatch errors when using a \" +", " \"Java 5 compiler with Java 6 libraries.\\n\" +", " \"Please either use a Java 6 (or later) compiler or do not \" +", " \"set the '\" + J16CLASSPATH + \"' and '\" + J16LIB +", " \"' variables.\\n\"" ], "header": "@@ -998,8 +1009,11 @@ public class PropertySetter extends Task", "removed": [ " \"\\nThe build raises version mismatch errors when using the IBM Java 5 compiler with Java 6 libraries.\\n\" +", " \"Please either use a Java 6 (or later) compiler or do not set the '\" + J16CLASSPATH + \"' and '\" + J16LIB + \"' 
variables.\\n\"" ] }, { "added": [ " // A Java 5 compiler raises version mismatch errors when used", " // with Java 6 libraries.", " return ( javaVersion.startsWith( JAVA_5 ) &&", " J16CLASSPATH.equals( property ) );" ], "header": "@@ -1013,13 +1027,13 @@ public class PropertySetter extends Task", "removed": [ " // The IBM Java 5 compiler raises version mismatch errors when used", " // with the IBM Java 6 libraries.", " String jdkVendor = getProperty( JDK_VENDOR );", " return ( usingIBMjdk( jdkVendor ) && javaVersion.startsWith( JAVA_5 ) && J16CLASSPATH.equals( property ) );" ] } ] } ]
derby-DERBY-3989-d6b04208
DERBY-3989: Allow the build to succeed on machines which have a Java 6 environment but not a Java 5 environment. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@734242 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyPreBuild/PropertySetter.java", "hunks": [ { "added": [ " * <li>java16compile.classpath</li>" ], "header": "@@ -46,6 +46,7 @@ import org.apache.tools.ant.taskdefs.Property;", "removed": [] }, { "added": [ " * <li>j16lib</li>" ], "header": "@@ -56,6 +57,7 @@ import org.apache.tools.ant.taskdefs.Property;", "removed": [] }, { "added": [ " private static final String PROPERTY_SETTER_DEBUG_FLAG = \"propertySetterDebug\";", "" ], "header": "@@ -113,6 +115,8 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " if ( isSet( PROPERTY_SETTER_DEBUG_FLAG ) )", " {", " echo( \"\\nPropertySetter environment =\\n\\n\" + showEnvironment() + \"\\n\\n\" );", " }", "" ], "header": "@@ -194,6 +198,11 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " if ( isSet( J14CLASSPATH ) && isSet( J15CLASSPATH ) && isSet( J16CLASSPATH ) ) { return; }" ], "header": "@@ -203,7 +212,7 @@ public class PropertySetter extends Task", "removed": [ " if ( isSet( J14CLASSPATH ) && isSet( J15CLASSPATH ) ) { return; }" ] }, { "added": [ " String j16lib = getProperty( J16LIB );", " if ( j16lib != null ) { setClasspathFromLib(J16CLASSPATH, j16lib, true ); }" ], "header": "@@ -211,9 +220,11 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " // Require that at least one of these be set now.", " requireAtLeastOneProperty( J15CLASSPATH, J16CLASSPATH );" ], "header": "@@ -249,8 +260,8 @@ public class PropertySetter extends Task", "removed": [ " // Require that these be set now.", " requireProperty( J15CLASSPATH );" ] }, { "added": [ " defaultSetter( APPLE_JAVA_ROOT + \"/1.4/Classes\", APPLE_JAVA_ROOT + \"/1.5/Classes\", APPLE_JAVA_ROOT + \"/1.6/Classes\" );" ], "header": "@@ -267,7 +278,7 @@ public class PropertySetter extends Task", "removed": [ " defaultSetter( APPLE_JAVA_ROOT + \"/1.4/Classes\", APPLE_JAVA_ROOT + \"/1.5/Classes\" );" ] }, { "added": [ " setForMostJDKs( \"142\", \"50\", \"60\" 
);" ], "header": "@@ -284,7 +295,7 @@ public class PropertySetter extends Task", "removed": [ " setForMostJDKs( \"142\", \"50\" );" ] }, { "added": [ " setForMostJDKs( \"1.4.\", \"1.5.\", \"1.6\" );" ], "header": "@@ -302,7 +313,7 @@ public class PropertySetter extends Task", "removed": [ " setForMostJDKs( \"1.4.\", \"1.5.\" );" ] }, { "added": [ " private void setForMostJDKs( String seed14, String seed15, String seed16 )", "", " String default_j16lib = getProperty( J16LIB );", " { default_j15lib = searchForJreLib(jdkParents, seed15, false ); }", " if ( default_j16lib == null )", " { default_j16lib = searchForJreLib(jdkParents, seed16, false ); }", "", " defaultSetter( default_j14lib, default_j15lib, default_j16lib );" ], "header": "@@ -316,21 +327,25 @@ public class PropertySetter extends Task", "removed": [ " private void setForMostJDKs( String seed14, String seed15)", " ", " { default_j15lib = searchForJreLib(jdkParents, seed15, true ); }", " defaultSetter( default_j14lib, default_j15lib );" ] }, { "added": [ " private void defaultSetter( String default_j14lib, String default_j15lib, String default_j16lib )", " String j16lib = getProperty( J16LIB, default_j16lib );", " setClasspathFromLib( J15CLASSPATH, j15lib, false );", " setClasspathFromLib( J16CLASSPATH, j16lib, false );", " * However, refuse to set certain properties if they will cause problems", " * later on." 
], "header": "@@ -466,20 +481,24 @@ public class PropertySetter extends Task", "removed": [ " private void defaultSetter( String default_j14lib, String default_j15lib )", " setClasspathFromLib( J15CLASSPATH, j15lib, true );" ] }, { "added": [ " // refuse to set certain properties", " if ( shouldNotSet( classpathProperty ) ) { return; }", "" ], "header": "@@ -491,6 +510,9 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " if (", " shouldNotSet( J16CLASSPATH ) &&", " ( isSet( J16CLASSPATH ) || isSet( J16LIB ) )", " )", " \"Please either use a Java 6 (or later) compiler or do not set the '\" + J16CLASSPATH + \"' and '\" + J16LIB + \"' variables.\\n\"", " /**", " * <p>", " * Returns true if the given property should not be set.", " * </p>", " */", " private boolean shouldNotSet( String property )", " {", " //", " // The IBM Java 5 compiler raises version mismatch errors when used", " // with the IBM Java 6 libraries.", " //", " String jdkVendor = getProperty( JDK_VENDOR );", " String javaVersion = getProperty( JAVA_VERSION );", " ", " return ( usingIBMjdk( jdkVendor ) && javaVersion.startsWith( JAVA_5 ) && J16CLASSPATH.equals( property ) );", " }", " " ], "header": "@@ -603,23 +625,37 @@ public class PropertySetter extends Task", "removed": [ " //", " // The IBM Java 5 compiler raises version mismatch errors when used", " // with the IBM Java 6 libraries.", " //", " String jdkVendor = getProperty( JDK_VENDOR );", " String javaVersion = getProperty( JAVA_VERSION );", " if ( usingIBMjdk( jdkVendor ) && javaVersion.startsWith( JAVA_5 ) && isSet( J16CLASSPATH ) )", " \"Please either use a Java 6 (or later) compiler or do not set the '\" + J16CLASSPATH + \"' variable.\\n\"" ] }, { "added": [ " /**", " * <p>", " * Require that at least one of the passed in properties be set.", " * </p>", " */", " private void requireAtLeastOneProperty( String... 
properties )", " throws BuildException", " {", " int count = properties.length;", "", " for ( String property : properties )", " {", " if ( getProperty( property ) != null ) { return; }", " }", "", " throw couldntSetProperty( properties );", " }", "" ], "header": "@@ -692,6 +728,24 @@ public class PropertySetter extends Task", "removed": [] }, { "added": [ " * Object that we couldn't set some properties.", " private BuildException couldntSetProperty( String... properties )", " int count = properties.length;", " ", " buffer.append( \"Don't know how to set \" );", " for ( int i = 0; i < count; i++ )", " {", " if ( i > 0 ) { buffer.append( \", \" ); }", " buffer.append( properties[ i ] );", " }", " buffer.append( showEnvironment() );", " buffer.append( \"\\nPlease consult BUILDING.html for instructions on how to set the compiler-classpath properties.\" );", " ", " return new BuildException( buffer.toString() );", " }", "", " /**", " * <p>", " * Display the environment.", " * </p>", " */", " private String showEnvironment()", " {", " StringBuffer buffer = new StringBuffer();", "", " appendProperty( buffer, J16LIB );", " return buffer.toString();", " }", " " ], "header": "@@ -705,26 +759,47 @@ public class PropertySetter extends Task", "removed": [ " * Object that we couldn't set a property.", " private BuildException couldntSetProperty( String property )", "", " buffer.append( \"Don't know how to set \" + property );", " buffer.append( \"\\nPlease consult BUILDING.html for instructions on how to set the compiler-classpath properties.\" );", " ", " return new BuildException( buffer.toString() );", " }" ] } ] } ]
derby-DERBY-3991-979d9e84
DERBY-3991: Clob.truncate(0) throws exception. Allows a Clob to be truncated to a length of zero characters (empty string). Patch contributed by Yun Lee <yun.lee.bj@gmail.com>. Patch file: derby-3991-3.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@764800 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-3993-8d23f446
DERBY-3993 With IBM 1.6, T_RawStoreFactory fails with "There should be 0 observers, but we still have 1 observers" on Win 2K. The problem will only show up in SANE builds as that is the only time we do the sanity check. Xact.doComplete() is called near the end of the transaction to take care of any cleanup prior to committing or aborting the transaction. It calls notifyObservers(commitOrAbort) and expects on return that each observer has been notified; all the observers are coded to delete themselves from the observer list as part of this process. It then asserts that the list should be empty on return. The problem is that one of the DropOnCommit observers, as part of its processing, manages to add another observer to the list. I am guessing that the problem becomes intermittent because either different JVMs/memory layouts/hash algorithms cause the observer list to be processed in a different order, or different implementations handle adding an observer to the list while it is being scanned differently. There is nothing in the Observable javadoc that guarantees an order or says anything about the expected behavior of notifyObservers() if another observer is added during execution, so I don't think it is a JVM bug. In my case, in order to process a drop of a container marked drop-on-commit, the raw store interface requires it to first be opened. The code adds a TruncateOnCommit as part of this open, as that layer of the code does not know why it is being opened. I believe it is this "new" TruncateOnCommit observer which is left on the observer queue. Adding an extra notify to the drop-on-commit processing seems to fix the unit test; I'll see if that causes any problems in the full set of tests. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082197 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/data/DropOnCommit.java", "hunks": [ { "added": [ "", "\t\t\t\tSanityManager.THROWASSERT(\"still on observer list \" + this);", "\t\tif (arg.equals(RawTransaction.COMMIT) || ", " arg.equals(RawTransaction.ABORT)) {", "", "\t\t\t\t\txact.dropStreamContainer(", " identity.getSegmentId(), identity.getContainerId());", "", "", "", "", " // DERBY-3993", " // make sure any observer that may have been added by either", " // dropContainer() or dropStreamContainer() is also handled.", " // The calling notifyObservers() call from Xact.doComplete()", " // may not \"see\" new observers added during processing of the", " // initial observer list.", " xact.notifyObservers(arg);" ], "header": "@@ -72,25 +72,40 @@ public class DropOnCommit extends ContainerActionOnCommit {", "removed": [ "\t\t\t\tSanityManager.THROWASSERT(\"still on observr list \" + this);", "\t\tif (arg.equals(RawTransaction.COMMIT) || arg.equals(RawTransaction.ABORT)) {", "\t\t\t\t\txact.dropStreamContainer(identity.getSegmentId(), identity.getContainerId());" ] } ] } ]
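The re-notify fix relies on the notification loop being safe to run again for observers registered mid-notification. A minimal model of that pattern, independent of java.util.Observable (the names below are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class CommitNotifier {
    public interface OnCommit {
        void onCommit(CommitNotifier source);
    }

    private final List<OnCommit> observers = new ArrayList<>();

    public void addObserver(OnCommit o) {
        observers.add(o);
    }

    public int pending() {
        return observers.size();
    }

    // Keep notifying until the list drains. A single pass over a snapshot
    // would miss observers added *during* a callback -- the DERBY-3993
    // symptom, where a DropOnCommit callback registers a TruncateOnCommit
    // that the original notifyObservers() pass never sees.
    public void fireCommit() {
        while (!observers.isEmpty()) {
            List<OnCommit> snapshot = new ArrayList<>(observers);
            observers.clear(); // each observer "removes itself" up front
            for (OnCommit o : snapshot) {
                o.onCommit(this);
            }
        }
    }
}
```

After fireCommit() returns, the list is empty even if callbacks added new observers, so a sanity check like Xact.doComplete()'s "there should be 0 observers" assertion would hold.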