| id | commit_message | diffs |
|---|---|---|
derby-DERBY-4923-121a5329
|
DERBY-4923 update of a long row can fail with ERROR nospc: nospc.U exception.
This checkin fixes the problem reproduced by the included new test, which shows
an update of an existing row in the db failing with a nospc.U exception.
The problem was an off-by-one error in checking for enough space to update a row
on an overflow page to just an overflow pointer. The intent of the code is to
always reserve enough space in every overflow row to allow for this update.
In this case there was exactly enough space, but the code mistakenly thought it
needed one more byte.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1535413 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" if (spaceAvailable < OVERFLOW_POINTER_SIZE) ",
" // DERBY-4923 ",
" // The fix for DERBY-4923 was to change the above",
" // check from <= to <. The test case for DERBY-4923",
" // got the system into a state where it needed to",
" // exactly write an overflow field pointer and it",
" // had exactly OVERFLOW_POINTER_SIZE spaceAvailable,",
" // but was off by one in its check.",
" // The system insures all rows on an overflow page",
" // have at least OVERFLOW_POINTER_SIZE, so updating",
" // them should check for exactly OVERFLOW_POINTER_SIZE",
" // not <=.",
""
],
"header": "@@ -4147,8 +4147,20 @@ public class StoredPage extends CachedPage",
"removed": [
" if (spaceAvailable <= OVERFLOW_POINTER_SIZE) "
]
}
]
}
] |
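The boundary condition behind the DERBY-4923 fix above can be sketched as a pair of predicates; note that `OverflowCheck`, its method names, and the constant value 12 are illustrative stand-ins, not Derby's actual API:

```java
// Sketch of the DERBY-4923 boundary fix: the page reserves exactly
// OVERFLOW_POINTER_SIZE bytes per overflow row, so a row with exactly
// that much space CAN be shrunk to an overflow pointer.
public class OverflowCheck {
    static final int OVERFLOW_POINTER_SIZE = 12; // illustrative value

    // Buggy pre-fix check: wrongly rejects the exact-fit case.
    static boolean needsMoreSpaceBuggy(int spaceAvailable) {
        return spaceAvailable <= OVERFLOW_POINTER_SIZE;
    }

    // Fixed check: only strictly less than the pointer size is a shortage.
    static boolean needsMoreSpaceFixed(int spaceAvailable) {
        return spaceAvailable < OVERFLOW_POINTER_SIZE;
    }
}
```

With exactly `OVERFLOW_POINTER_SIZE` bytes free, the buggy check reports a shortage while the fixed check does not, which is precisely the state the DERBY-4923 repro got the system into.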
derby-DERBY-4923-e8e1864a
|
DERBY-6320 Log a page dump to derby.log if ERROR nospc: nospc.U is returned to the user
This patch adds the ability to dump a page in an insane build, and adds two calls to do so in two outstanding nospc error cases. In those two cases a new user-level
error is thrown that nests the nospc.U error, so that we still know the
original stack trace where the lowest error was thrown.
The patch passes all tests, and the specific errors were hand tested: one
using the test case filed in DERBY-4923, and the other by hand-forcing the codepath.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1535075 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.MessageId;"
],
"header": "@@ -32,6 +32,7 @@ import java.util.Arrays;",
"removed": []
},
{
"added": [
"import org.apache.derby.iapi.services.i18n.MessageService;"
],
"header": "@@ -46,6 +47,7 @@ import org.apache.derby.iapi.services.io.FormatableBitSet;",
"removed": []
},
{
"added": [
" String getPageDumpString()",
" {",
" return(",
" MessageService.getTextMessage(",
" MessageId.STORE_PAGE_DUMP,",
" getIdentity(),",
" isOverflowPage,",
" getPageVersion(),",
" slotsInUse,",
" deletedRowCount,",
" getPageStatus(),",
" nextId,",
" firstFreeByte,",
" freeSpace,",
" totalSpace,",
" spareSpace,",
" minimumRecordSize,",
" getPageSize(),",
" pagedataToHexDump(pageData)));",
" }",
""
],
"header": "@@ -8185,6 +8187,27 @@ public class StoredPage extends CachedPage",
"removed": []
},
{
"added": [
" catch (NoSpaceOnPage nsop)",
" {",
" // DERBY-4923",
" //",
" // The actionUpdate() call should not generate a ",
" // NoSpaceOnPage error. ",
"",
" throw StandardException.newException(",
" SQLState.DATA_UNEXPECTED_NO_SPACE_ON_PAGE,",
" nsop,",
" ((PageKey) curPage.getIdentity()).toString(),",
" getPageDumpString(),",
" slot,",
" id,",
" validColumns.toString(),",
" realStartColumn,",
" 0,",
" headRowHandle);",
" }"
],
"header": "@@ -8773,6 +8796,25 @@ slotScan:",
"removed": []
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/SQLState.java",
"hunks": [
{
"added": [
" String DATA_UNEXPECTED_NO_SPACE_ON_PAGE = \"XSDAP.S\";"
],
"header": "@@ -490,6 +490,7 @@ public interface SQLState {",
"removed": []
}
]
}
] |
derby-DERBY-4929-b175fd27
|
DERBY-4856 DERBY-4929 Add thread dump information for StandardException and SQLException errors. Due to DERBY-289, ThreadDump.java and ExceptionUtil.java should go to iapi/error for the engine. Currently, all thread dump information goes to derby.log.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043290 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/context/ContextManager.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.ExceptionUtil;"
],
"header": "@@ -29,6 +29,7 @@ import org.apache.derby.iapi.error.PassThroughException;",
"removed": []
},
{
"added": [
"import java.sql.SQLException;"
],
"header": "@@ -37,6 +38,7 @@ import org.apache.derby.iapi.services.property.PropertyUtil;",
"removed": []
},
{
"added": [
" int errorSeverity = getErrorSeverity(error);"
],
"header": "@@ -302,9 +304,7 @@ public class ContextManager",
"removed": [
"\t\t\tint errorSeverity = error instanceof StandardException ?",
"\t\t\t\t((StandardException) error).getSeverity() :",
"\t\t\t\tExceptionSeverity.NO_APPLICABLE_SEVERITY;"
]
},
{
"added": [
" if (reportError",
" && errorSeverity >= ExceptionSeverity.SESSION_SEVERITY) {",
" threadDump = ExceptionUtil.dumpThreads();",
" } else {",
" threadDump = null;",
" }"
],
"header": "@@ -331,6 +331,12 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
},
{
"added": [
" if (threadDump != null)",
" errorStream.println(threadDump);"
],
"header": "@@ -401,6 +407,8 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
},
{
"added": [
" ",
" /**",
" * return the severity of the exception. Currently, this method ",
" * does not determine a severity that is not StandardException ",
" * or SQLException.",
" * @param error - Throwable error",
" * ",
" * @return int vendorcode/severity for the Throwable error",
" * - error/exception to extract vendorcode/severity. ",
" * For error that we can not get severity, ",
" * NO_APPLICABLE_SEVERITY will return.",
" */",
" public int getErrorSeverity(Throwable error) {",
" ",
" if (error instanceof StandardException) {",
" return ((StandardException) error).getErrorCode();",
" }",
" ",
" if (error instanceof SQLException) {",
" return ((SQLException) error).getErrorCode();",
" }",
" return ExceptionSeverity.NO_APPLICABLE_SEVERITY;",
" }"
],
"header": "@@ -504,6 +512,29 @@ cleanup:\tfor (int index = holder.size() - 1; index >= 0; index--) {",
"removed": []
}
]
}
] |
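The severity-extraction helper added in the DERBY-4929 patch is a simple type dispatch with a fallback. A minimal standalone sketch, using only `SQLException` and a stand-in constant in place of Derby's `StandardException` and `ExceptionSeverity.NO_APPLICABLE_SEVERITY`:

```java
import java.sql.SQLException;

public class SeverityUtil {
    // Stand-in for Derby's ExceptionSeverity.NO_APPLICABLE_SEVERITY.
    static final int NO_APPLICABLE_SEVERITY = 0;

    // Return the vendor code/severity for the error, or the fallback
    // when the exception type carries no severity information.
    static int getErrorSeverity(Throwable error) {
        if (error instanceof SQLException) {
            return ((SQLException) error).getErrorCode();
        }
        return NO_APPLICABLE_SEVERITY;
    }
}
```

The caller can then compare the result against a severity threshold (as the patch does with `SESSION_SEVERITY`) to decide whether a thread dump is worth capturing.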
derby-DERBY-4932-19d913d9
|
DERBY-4932: Remove the version of StringColumnVTI in the demo subtree.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043245 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/demo/vtis/java/org/apache/derbyDemo/vtis/core/XmlVTI.java",
"hunks": [
{
"added": [
"import org.apache.derby.vti.StringColumnVTI;",
" "
],
"header": "@@ -30,6 +30,8 @@ import java.text.ParseException;",
"removed": []
}
]
},
{
"file": "java/demo/vtis/java/org/apache/derbyDemo/vtis/example/ApacheServerLogVTI.java",
"hunks": [
{
"added": [
" // OVERRIDES"
],
"header": "@@ -68,7 +68,7 @@ public class ApacheServerLogVTI extends XmlVTI",
"removed": [
" // ResultSet BEHAVIOR"
]
}
]
},
{
"file": "java/engine/org/apache/derby/vti/StringColumnVTI.java",
"hunks": [
{
"added": [
" ///////////////////////////////////////////////////////////////////////////////////",
" //",
" // ACCESSORS",
" //",
" ///////////////////////////////////////////////////////////////////////////////////",
"",
" /**",
" * <p>",
" * Get the number of columns.",
" * </p>",
" */",
" public int getColumnCount() { return _columnNames.length; }",
"",
" /**",
" * <p>",
" * Get name of a column (1-based indexing).",
" * </p>",
" */",
" public String getColumnName( int columnNumber ) { return _columnNames[ columnNumber - 1 ]; }",
""
],
"header": "@@ -104,6 +104,26 @@ public abstract class StringColumnVTI extends VTITemplate",
"removed": []
}
]
}
] |
derby-DERBY-4932-96e3f0c2
|
DERBY-4932: Move StringColumnVTI out of its testing package into the public api.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043122 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/HarmonySerialClob.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.iapi.types.HarmonySerialClob"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.iapi.types.SQLClob"
]
}
]
},
{
"file": "java/engine/org/apache/derby/vti/StringColumnVTI.java",
"hunks": [
{
"added": [
"Derby - Class org.apache.derby.vti.StringColumnVTI"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
"Derby - Class org.apache.derbyTesting.functionTests.tests.lang.StringColumnVTI"
]
},
{
"added": [
"package org.apache.derby.vti;",
"import java.io.ByteArrayInputStream;",
"import java.io.InputStream;",
"import java.io.UnsupportedEncodingException;",
"import java.sql.Blob;",
"import java.sql.Clob;",
"import java.sql.Date;",
"import java.sql.ResultSetMetaData;",
"import java.sql.SQLException;",
"import java.sql.Time;",
"import java.sql.Timestamp;",
"import org.apache.derby.iapi.types.HarmonySerialBlob;",
"import org.apache.derby.iapi.types.HarmonySerialClob;",
" * This is an abstract table function which assumes that all columns are strings and which"
],
"header": "@@ -19,19 +19,28 @@ limitations under the License.",
"removed": [
"package org.apache.derbyTesting.functionTests.tests.lang;",
"import java.io.*;",
"import java.sql.*;",
"import org.apache.derby.vti.VTITemplate;",
" * This is an abstract VTI which assumes that all columns are strings and which"
]
},
{
"added": [
" * and the following protected method introduced by this class:"
],
"header": "@@ -39,11 +48,10 @@ import org.apache.derby.vti.VTITemplate;",
"removed": [
" * <li>getMetaData()</li>",
" * and the following protected methods introduced by this class:"
]
},
{
"added": [],
"header": "@@ -58,190 +66,6 @@ public abstract class StringColumnVTI extends VTITemplate",
"removed": [
" ///////////////////////////////////////////////////////////////////////////////////",
" //",
" // INNER CLASSES",
" //",
" ///////////////////////////////////////////////////////////////////////////////////",
"",
" /**",
" * <p>",
" * A crude Blob implementation for datatype testing.",
" * </p>",
" */",
" public\tstatic\tfinal\tclass\tSimpleBlob\timplements\tBlob",
" {",
" private\tbyte[]\t_bytes;",
" ",
" public\tSimpleBlob( byte[] bytes )",
" {",
" _bytes = bytes;",
" }",
" ",
" public\tInputStream\tgetBinaryStream()",
" {",
" return new ByteArrayInputStream( _bytes );",
" }",
" ",
" public\tbyte[]\tgetBytes( long position, int length )",
" {",
" byte[] result = new byte[ length ];",
" System.arraycopy( _bytes, ((int) position) - 1, result, 0, length );",
" ",
" return result;",
" }",
" ",
" public\tlong\tlength()",
" {",
" if ( _bytes == null ) { return 0L; }",
" return (long) _bytes.length;",
" }",
" ",
" public\tlong\tposition( Blob pattern, long start ) { return 0L; }",
" public\tlong\tposition( byte[] pattern, long start ) { return 0L; }",
" ",
" public\tboolean\tequals( Object other )",
" {",
" if ( other == null ) { return false; }",
" if ( !( other instanceof Blob ) ) { return false; }",
" ",
" Blob\tthat = (Blob) other;",
" ",
" try {",
" if ( this.length() != that.length() ) { return false; }",
" ",
" InputStream\tthisStream = this.getBinaryStream();",
" InputStream\tthatStream = that.getBinaryStream();",
" ",
" while( true )",
" {",
" int\t\tnextByte = thisStream.read();",
" ",
" if ( nextByte < 0 ) { break; }",
" if ( nextByte != thatStream.read() ) { return false; }",
" }",
" }",
" catch (Exception e)",
" {",
" System.err.println( e.getMessage() );",
" e.printStackTrace();",
" return false;",
" }",
" ",
" return true;",
" }",
" ",
" public int setBytes(long arg0, byte[] arg1) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
" ",
" public int setBytes(long arg0, byte[] arg1, int arg2, int arg3) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
"",
" public OutputStream setBinaryStream(long arg0) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
"",
" public void truncate(long arg0) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
" }",
" ",
" /**",
" * <p>",
" * A crude Clob implementation.",
" * </p>",
" */",
" public\tstatic\tfinal\tclass\tSimpleClob\timplements\tClob",
" {",
" private\tString\t_contents;",
"",
" public\tSimpleClob( String contents )",
" {",
" _contents = contents;",
" }",
" ",
" public\tInputStream\tgetAsciiStream()",
" {",
" try {",
" return new ByteArrayInputStream( _contents.getBytes( \"UTF-8\" ) );",
" }",
" catch (Exception e) { return null; }",
" }",
" ",
" public\tReader\tgetCharacterStream()",
" {",
" return new CharArrayReader( _contents.toCharArray() );",
" }",
" ",
" public\tString\tgetSubString( long position, int length )",
" {",
" return _contents.substring( ((int) position) - 1, length );",
" }",
"\t\t",
" public\tlong\tlength()",
" {",
" if ( _contents == null ) { return 0L; }",
" return (long) _contents.length();",
" }",
" ",
" public\tlong\tposition( Clob searchstr, long start ) { return 0L; }",
" public\tlong\tposition( String searchstr, long start ) { return 0L; }",
" ",
" public\tboolean\tequals( Object other )",
" {",
" if ( other == null ) { return false; }",
" if ( !( other instanceof Clob ) ) { return false; }",
" ",
" Clob\tthat = (Clob) other;",
" ",
" try {",
" if ( this.length() != that.length() ) { return false; }",
" ",
" InputStream\tthisStream = this.getAsciiStream();",
" InputStream\tthatStream = that.getAsciiStream();",
" ",
" while( true )",
" {",
" int\t\tnextByte = thisStream.read();",
" ",
" if ( nextByte < 0 ) { break; }",
" if ( nextByte != thatStream.read() ) { return false; }",
" }",
" }",
" catch (Exception e)",
" {",
" System.err.println( e.getMessage() );",
" e.printStackTrace();",
" return false;",
" }",
" ",
" return true;",
" }",
" ",
" public int setString(long arg0, String arg1) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
" ",
" public int setString(long arg0, String arg1, int arg2, int arg3) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
"",
" public OutputStream setAsciiStream(long arg0) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
"",
" public Writer setCharacterStream(long arg0) throws SQLException {",
" throw new SQLException(\"not implemented\");",
" }",
"",
" public void truncate(long arg0) throws SQLException {",
" throw new SQLException(\"not implemented\"); ",
" }",
"",
" }",
" "
]
},
{
"added": [
" /** This method returns null. Derby does not look at the metadata returned by the table function. */",
" public ResultSetMetaData getMetaData() throws SQLException { return null; }",
" "
],
"header": "@@ -286,6 +110,9 @@ public abstract class StringColumnVTI extends VTITemplate",
"removed": []
},
{
"added": [
" else { return new HarmonySerialBlob( getBytes( columnIndex ) ); }"
],
"header": "@@ -473,7 +300,7 @@ public abstract class StringColumnVTI extends VTITemplate",
"removed": [
" else { return new SimpleBlob( getBytes( columnIndex ) ); }"
]
}
]
}
] |
derby-DERBY-4933-a720ce8d
|
DERBY-4933: Use framework helper methods to check result sets in DatabaseMetaDataTest
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1043039 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4946-433d0845
|
DERBY-4946: Derby 10.7 DatabaseMetaData.getTypeInfo() should not return BOOLEAN for a soft upgraded database
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1051026 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java",
"hunks": [
{
"added": [
"\t\treturn getTypeInfoMinion(\"getTypeInfo\");"
],
"header": "@@ -2693,7 +2693,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\treturn getSimpleQuery(\"getTypeInfo\");"
]
}
]
}
] |
derby-DERBY-4947-3c295362
|
DERBY-4947: Missing/broken synchronization in BasicDependencyManager.getDependents()
Fixed unsynchronized access to a shared structure. The references to the
elements in the list are now copied to a new list.
Patch file: derby-4947-1a-sync_fix.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1051271 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/depend/BasicDependencyManager.java",
"hunks": [
{
"added": [
" if (list.isEmpty()) {"
],
"header": "@@ -318,8 +318,7 @@ public class BasicDependencyManager implements DependencyManager {",
"removed": [
"\t\tif (list == null)",
"\t\t{"
]
},
{
"added": [
" //@GuardedBy(\"this\")",
" private void clearProviderDependency(UUID p, Dependency d) {"
],
"header": "@@ -976,7 +975,8 @@ public class BasicDependencyManager implements DependencyManager {",
"removed": [
"\tprotected void clearProviderDependency(UUID p, Dependency d) {"
]
},
{
"added": [
" * @return A list of dependents (possibly empty).",
" List deps = new ArrayList();",
" List memDeps = (List) providers.get(p.getObjectID());",
" if (memDeps != null) {",
" deps.addAll(memDeps);",
" }"
],
"header": "@@ -1108,14 +1108,17 @@ public class BasicDependencyManager implements DependencyManager {",
"removed": [
" * @return {@code null} or a list of dependents (possibly empty).",
" List deps;",
" deps = (List) providers.get(p.getObjectID());"
]
},
{
"added": [
" deps.addAll(storedList);"
],
"header": "@@ -1127,16 +1130,7 @@ public class BasicDependencyManager implements DependencyManager {",
"removed": [
" if (deps == null) {",
" deps = storedList;",
" } else {",
" // We can't modify the list we got from 'providers', create a",
" // new one to merge the two lists.",
" List merged = new ArrayList(deps.size() + storedList.size());",
" merged.addAll(deps);",
" merged.addAll(storedList);",
" deps = merged;",
" }"
]
}
]
}
] |
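The DERBY-4947 fix above replaces handing out a reference to a shared list with copying its elements while the lock is held. A minimal sketch of that defensive-copy pattern, with hypothetical class and field names standing in for `BasicDependencyManager` internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DependencyTable {
    // Shared structure, guarded by "this".
    private final Map<String, List<String>> providers = new HashMap<>();

    synchronized void addDependent(String provider, String dependent) {
        providers.computeIfAbsent(provider, k -> new ArrayList<>())
                 .add(dependent);
    }

    // Copy the entries into a fresh list under the lock, so callers can
    // iterate (or even mutate) the result without holding the lock and
    // without touching the shared in-memory list.
    synchronized List<String> getDependents(String provider) {
        List<String> deps = new ArrayList<>();
        List<String> memDeps = providers.get(provider);
        if (memDeps != null) {
            deps.addAll(memDeps);
        }
        return deps;
    }
}
```

Because the returned list is private to the caller, mutating it cannot corrupt the shared map, which is exactly the race the patch closes.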
derby-DERBY-4949-6ef238b8
|
DERBY-4949: Fix coercion error messages in network driver.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1052350 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Cursor.java",
"hunks": [
{
"added": [
" throw coercionError( \"boolean\", column );"
],
"header": "@@ -792,8 +792,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"boolean\");"
]
},
{
"added": [
" throw coercionError( \"byte\", column );"
],
"header": "@@ -821,8 +820,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"byte\");"
]
},
{
"added": [
" throw coercionError( \"short\", column );"
],
"header": "@@ -849,8 +847,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"short\");"
]
},
{
"added": [
" throw coercionError( \"int\", column );"
],
"header": "@@ -877,8 +874,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"int\");"
]
},
{
"added": [
" throw coercionError( \"long\", column );"
],
"header": "@@ -905,8 +901,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"long\");"
]
},
{
"added": [
" throw coercionError( \"float\", column );"
],
"header": "@@ -933,8 +928,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"float\");"
]
},
{
"added": [
" throw coercionError( \"double\", column );"
],
"header": "@@ -963,8 +957,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"double\");"
]
},
{
"added": [
" throw coercionError( \"java.math.BigDecimal\", column );"
],
"header": "@@ -994,8 +987,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.math.BigDecimal\");"
]
},
{
"added": [
" throw coercionError( \"java.sql.Date\", column );"
],
"header": "@@ -1013,8 +1005,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.sql.Date\");"
]
},
{
"added": [
" throw coercionError( \"java.sql.Time\", column );"
],
"header": "@@ -1032,8 +1023,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.sql.Time\");"
]
},
{
"added": [
" throw coercionError( \"java.sql.Timestamp\", column );"
],
"header": "@@ -1054,8 +1044,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.sql.Timestamp\");"
]
},
{
"added": [
" throw coercionError( \"String\", column );"
],
"header": "@@ -1117,8 +1106,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"String\");"
]
},
{
"added": [
" throw coercionError( \"byte[]\", column );"
],
"header": "@@ -1138,8 +1126,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"byte[]\");"
]
},
{
"added": [
" throw coercionError( \"java.io.InputStream\", column );"
],
"header": "@@ -1165,8 +1152,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.io.InputStream\");"
]
},
{
"added": [
" throw coercionError( \"java.io.InputStream\", column );"
],
"header": "@@ -1208,8 +1194,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.io.InputStream\");"
]
},
{
"added": [
" throw coercionError( \"UnicodeStream\", column );"
],
"header": "@@ -1255,8 +1240,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"UnicodeStream\");"
]
},
{
"added": [
" throw coercionError( \"java.io.Reader\", column );"
],
"header": "@@ -1308,8 +1292,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.io.Reader\");"
]
},
{
"added": [
" throw coercionError( \"java.sql.Blob\", column );"
],
"header": "@@ -1318,8 +1301,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.sql.Blob\");"
]
},
{
"added": [
" throw coercionError( \"java.sql.Clob\", column );"
],
"header": "@@ -1328,8 +1310,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"java.sql.Clob\");"
]
},
{
"added": [
" throw coercionError( \"Object\", column );"
],
"header": "@@ -1383,8 +1364,7 @@ public abstract class Cursor {",
"removed": [
" throw new ColumnTypeConversionException(agent_.logWriter_,",
" \"java.sql.Types \" + jdbcTypes_[column -1], \"Object\");"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/SqlException.java",
"hunks": [
{
"added": [
" ColumnTypeConversionException(LogWriter logWriter, String targetType,",
" String sourceType) {",
" targetType, sourceType);"
],
"header": "@@ -519,11 +519,11 @@ public class SqlException extends Exception implements Diagnosable {",
"removed": [
" ColumnTypeConversionException(LogWriter logWriter, String sourceType,",
" String targetType) {",
" sourceType, targetType);"
]
}
]
}
] |
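The DERBY-4949 cleanup above collapses twenty nearly identical two-line `throw new ColumnTypeConversionException(...)` sites into one `coercionError(targetType, column)` helper. The shape of that refactoring can be sketched like this, with `IllegalArgumentException` standing in for Derby's client exception type:

```java
public class CursorSketch {
    // Per-column JDBC type codes, as the client cursor tracks them.
    static final int[] jdbcTypes_ = { java.sql.Types.VARCHAR };

    // One helper builds the conversion error, so every getter throws it
    // the same way: throw coercionError("int", column);
    static IllegalArgumentException coercionError(String targetType,
                                                  int column) {
        return new IllegalArgumentException(
            "Cannot convert java.sql.Types " + jdbcTypes_[column - 1]
            + " to " + targetType);
    }
}
```

Centralizing the message construction also made it possible to fix the swapped source/target arguments in one place, as the `SqlException.java` hunk in this commit shows.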
derby-DERBY-4951-14478629
|
DERBY-4951: Revamp tests of binary data types to fix string encoding problems.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1054734 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4955-eea0d50c
|
DERBY-4955 Prepare Derby to run with Compact Profiles (JEP 161)
Patch derby-5955-embed-restructure-followup: some whitespace changes
plus a missed fix to EmbeddedDataSource40 which was in the original
proof-of-concept patch but fell through the cracks in the committed
patch derby-5955-embed-restructure-04.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1427047 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedDataSource.java",
"hunks": [
{
"added": [
" public final Reference getReference() throws NamingException",
" // These fields will be set by the JNDI server when it decides to",
" // materialize a data source.",
" Reference ref = new Reference(",
" this.getClass().getName(),",
" \"org.apache.derby.jdbc.ReferenceableDataSource\",",
" null);",
" return ref;"
],
"header": "@@ -225,18 +225,18 @@ public class EmbeddedDataSource extends ReferenceableDataSource",
"removed": [
" public final Reference getReference() throws NamingException",
" // These fields will be set by the JNDI server when it decides to",
" // materialize a data source.",
" Reference ref = new Reference(",
" this.getClass().getName(),",
" \"org.apache.derby.jdbc.ReferenceableDataSource\",",
" null);",
" return ref;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedDataSource40.java",
"hunks": [
{
"added": [
"public class EmbeddedDataSource40 extends EmbeddedDataSource",
" implements javax.sql.DataSource /* compile-time check for 4.1 extension */",
"{",
" private static final long serialVersionUID = 4472591890758954803L;"
],
"header": "@@ -176,7 +176,10 @@ import org.apache.derby.impl.jdbc.Util;",
"removed": [
"public class EmbeddedDataSource40 extends EmbeddedDataSource {"
]
}
]
}
] |
derby-DERBY-4958-ffac5c3d
|
DERBY-4958: Allow wrapper types as output args in the Java signatures of procedures.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1055181 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4959-f1ec775b
|
DERBY-4959: All null LOBs as procedure output args.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1055676 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4960-01a4f9bd
|
DERBY-4960 Race condition in FileContainer#allocCache when reopening RAFContainer after interrupt
Patch derby-4960-2. When reopening the container after an interrupt we
now call "reopenContainer" instead of "openContainer".
"reopenContainer" is a new variant of "openContainer" that skips
reading the header, which is safe since the header has not changed.
This sidesteps the race situation.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1056591 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java",
"hunks": [
{
"added": [
" private boolean reopen;"
],
"header": "@@ -94,6 +94,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": []
},
{
"added": [
" throws StandardException {",
" return openContainerMinion(newIdentity, false);",
" }",
"",
" synchronized boolean reopenContainer(ContainerKey newIdentity)",
" throws StandardException {",
" return openContainerMinion(newIdentity, true);",
" }",
"",
" private boolean openContainerMinion(",
" ContainerKey newIdentity,",
" boolean doReopen) throws StandardException",
" reopen = doReopen;"
],
"header": "@@ -901,9 +902,21 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" throws StandardException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java",
"hunks": [
{
"added": [
" reopenContainer(currentIdentity);"
],
"header": "@@ -857,7 +857,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" openContainer(currentIdentity);"
]
},
{
"added": [
" private void readFull(ByteBuffer dstBuffer,",
" FileChannel srcChannel,",
" long position)"
],
"header": "@@ -1097,9 +1097,9 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" private final void readFull(ByteBuffer dstBuffer,",
" FileChannel srcChannel,",
" long position)"
]
},
{
"added": [
" private void writeFull(ByteBuffer srcBuffer,",
" FileChannel dstChannel,",
" long position)"
],
"header": "@@ -1134,9 +1134,9 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" private final void writeFull(ByteBuffer srcBuffer,",
" FileChannel dstChannel,",
" long position)"
]
}
]
}
] |
derby-DERBY-4963-a552fe6e
|
DERBY-4963 Revert to FileDescriptor#sync from FileChannel#force to improve interrupt resilience
Patch derby-4963-2 removes DirRandomAccessFile4 and with it the use of
FileChannel#force. It also removes the metadata boolean argument (no
longer used) from StorageRandomAccessFile#sync.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1057702 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BaseDataFileFactory.java",
"hunks": [
{
"added": [
" fileLockOnDB.sync();"
],
"header": "@@ -1899,7 +1899,7 @@ public class BaseDataFileFactory",
"removed": [
" fileLockOnDB.sync( false);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java",
"hunks": [
{
"added": [
" fileData.sync();"
],
"header": "@@ -364,7 +364,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" fileData.sync( false);"
]
},
{
"added": [
" fileData.sync();"
],
"header": "@@ -640,7 +640,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" fileData.sync(false);"
]
},
{
"added": [
" file.sync();"
],
"header": "@@ -758,7 +758,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" file.sync(false);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/unitTests/store/T_RecoverBadLog.java",
"hunks": [
{
"added": [
"\t\t\tlog.sync();"
],
"header": "@@ -1818,7 +1818,7 @@ public class T_RecoverBadLog extends T_Generic {",
"removed": [
"\t\t\tlog.sync(false);"
]
}
]
}
] |
derby-DERBY-4964-a220692e
|
DERBY-4964: Client driver fails to convert string to boolean with setObject(col, str, Types.BIT)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1059888 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/CrossConverters.java",
"hunks": [
{
"added": [
"import java.util.Locale;"
],
"header": "@@ -25,6 +25,7 @@ import java.sql.Date;",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(source != 0);",
""
],
"header": "@@ -113,6 +114,10 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(source != 0);",
""
],
"header": "@@ -147,6 +152,10 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(source != 0);",
""
],
"header": "@@ -237,6 +246,10 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(source != 0);",
""
],
"header": "@@ -279,6 +292,10 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(source != 0);",
""
],
"header": "@@ -354,6 +371,10 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" return Boolean.valueOf(",
" java.math.BigDecimal.valueOf(0L).compareTo(source) != 0);",
""
],
"header": "@@ -417,6 +438,11 @@ final class CrossConverters {",
"removed": []
},
{
"added": [
" case Types.BIT:",
" case Types.BOOLEAN:",
" {",
" String cleanSource = source.trim().toUpperCase(Locale.ENGLISH);",
" if (cleanSource.equals(\"UNKNOWN\")) {",
" return null;",
" } else if (cleanSource.equals(\"TRUE\")) {",
" return Boolean.TRUE;",
" } else if (cleanSource.equals(\"FALSE\")) {",
" return Boolean.FALSE;",
" } else {",
" throw new SqlException(agent_.logWriter_,",
" new ClientMessageId(SQLState.LANG_FORMAT_EXCEPTION),",
" Types.getTypeString(targetDriverType));",
" }",
" }",
""
],
"header": "@@ -543,6 +569,23 @@ final class CrossConverters {",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Types.java",
"hunks": [
{
"added": [
" public final static int BIT = java.sql.Types.BIT; // -7;"
],
"header": "@@ -29,8 +29,7 @@ import org.apache.derby.iapi.reference.JDBC40Translation;",
"removed": [
" // Not currently supported as a DERBY column type. Mapped to SMALLINT.",
" // public final static int BIT = java.sql.Types.BIT; // -7;"
]
}
]
}
] |
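The CrossConverters hunks above add the same string-to-boolean rule in several places: trim the source, upper-case it in the English locale, then map TRUE/FALSE/UNKNOWN. A standalone sketch of that rule follows; the class name and the IllegalArgumentException standing in for Derby's SqlException/LANG_FORMAT_EXCEPTION are assumptions of this sketch, not Derby code:

```java
import java.util.Locale;

public class BooleanCrossConvertSketch {
    /**
     * Mirrors the conversion rule in the patch: the source string is trimmed
     * and upper-cased with the English locale, "TRUE" and "FALSE" map to the
     * Boolean constants, "UNKNOWN" maps to null (SQL unknown), and anything
     * else is rejected.
     */
    static Boolean toBoolean(String source) {
        String cleanSource = source.trim().toUpperCase(Locale.ENGLISH);
        if (cleanSource.equals("UNKNOWN")) {
            return null;
        } else if (cleanSource.equals("TRUE")) {
            return Boolean.TRUE;
        } else if (cleanSource.equals("FALSE")) {
            return Boolean.FALSE;
        } else {
            // Derby raises a format exception here; a plain runtime
            // exception stands in for it in this sketch.
            throw new IllegalArgumentException("invalid boolean: " + source);
        }
    }

    public static void main(String[] args) {
        System.out.println(toBoolean(" true "));  // prints "true"
        System.out.println(toBoolean("False"));   // prints "false"
    }
}
```

The fixed Locale matters: upper-casing with the default locale could mangle the comparison in locales with unusual case rules.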
derby-DERBY-4965-d326b7dd
|
DERBY-4965: Boolean to char conversion results in integer
Made JDBC level conversion from boolean to character types result in
"true" and "false" instead of "1" and "0".
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1062743 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/DatabaseMetaData.java",
"hunks": [
{
"added": [
" /**",
" * True if the server supports transport of boolean parameter values as",
" * booleans. If false, boolean values used as parameters in prepared",
" * statements will be transported as smallints to preserve backwards",
" * compatibility. See DERBY-4965.",
" */",
" private boolean supportsBooleanParameterTransport_;",
""
],
"header": "@@ -96,6 +96,14 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": []
},
{
"added": [
"",
" supportsBooleanParameterTransport_ =",
" productLevel_.greaterThanOrEqualTo(10, 8, 0);"
],
"header": "@@ -2332,6 +2340,9 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetStatementRequest.java",
"hunks": [
{
"added": [
" case DRDAConstants.DRDA_TYPE_NBOOLEAN:",
" write1Byte(((Short) inputs[i]).shortValue());",
" break;"
],
"header": "@@ -721,6 +721,9 @@ public class NetStatementRequest extends NetPackageRequest implements StatementR",
"removed": []
},
{
"added": [
" case java.sql.Types.BIT:",
" if ( netAgent_.netConnection_.databaseMetaData_.",
" serverSupportsBooleanParameterTransport() )"
],
"header": "@@ -1217,8 +1220,10 @@ public class NetStatementRequest extends NetPackageRequest implements StatementR",
"removed": [
" if ( netAgent_.netConnection_.serverSupportsBooleanValues() )"
]
}
]
}
] |
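The compatibility gate above boils down to a three-part version comparison (`productLevel_.greaterThanOrEqualTo(10, 8, 0)`): booleans are only transported natively when the server reports at least that level. A minimal sketch of such a comparison, with hypothetical class and field names:

```java
public class ProductLevelSketch {
    final int major, minor, point;

    ProductLevelSketch(int major, int minor, int point) {
        this.major = major;
        this.minor = minor;
        this.point = point;
    }

    /**
     * Lexicographic compare on (major, minor, point); in the patch the
     * client only sends booleans natively when this returns true for
     * (10, 8, 0), falling back to smallint transport otherwise.
     */
    boolean greaterThanOrEqualTo(int maj, int min, int pt) {
        if (major != maj) return major > maj;
        if (minor != min) return minor > min;
        return point >= pt;
    }

    public static void main(String[] args) {
        // A 10.7 server predates native boolean transport; 10.8 supports it.
        System.out.println(new ProductLevelSketch(10, 7, 1)
                .greaterThanOrEqualTo(10, 8, 0)); // prints "false"
        System.out.println(new ProductLevelSketch(10, 8, 0)
                .greaterThanOrEqualTo(10, 8, 0)); // prints "true"
    }
}
```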
derby-DERBY-4967-482ff80f
|
DERBY-4967 Handle interrupt received while waiting for database lock
Patch derby-4967-locking-4 which makes the existing test
LockInterruptTest assert that the interrupt flag is set when we see
08000 (CONN_INTERRUPT) - in accordance with the behavior we expect
after DERBY-4741. The assert is skipped on Solaris/Sun Java <= 1.6
unless the flag -XX:-UseVMInterruptibleIO is used.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1060832 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4970-0252fa4f
|
DERBY-4970: ClassCastException from getBlob()/getClob() in EmbedCallableStatement
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1058478 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement.java",
"hunks": [
{
"added": [
" Object o = getObject(parameterIndex);",
" if (o == null || o instanceof Blob) {",
" return (Blob) o;",
" }",
" throw newSQLException(SQLState.LANG_DATA_TYPE_GET_MISMATCH,",
" Blob.class.getName(),",
" Util.typeName(getParameterJDBCType(parameterIndex)));"
],
"header": "@@ -574,16 +574,13 @@ public abstract class EmbedCallableStatement extends EmbedPreparedStatement",
"removed": [
"\t\tcheckStatus();",
"\t\ttry {",
"\t\t\tDataValueDescriptor param = getParms().getParameterForGet(parameterIndex-1);",
"\t\t\tBlob v = (Blob) param.getObject();",
"\t\t\twasNull = (v == null);",
"\t\t\treturn v;",
"\t\t} catch (StandardException e)",
"\t\t{",
"\t\t\tthrow EmbedResultSet.noStateChangeException(e);",
"\t\t}"
]
}
]
}
] |
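The fix replaces a blind `(Blob) param.getObject()` cast with an instanceof test that raises a type-mismatch error instead of letting a ClassCastException escape from the driver. The same defensive pattern can be sketched generically; the names and the IllegalStateException standing in for Derby's LANG_DATA_TYPE_GET_MISMATCH are illustrative only:

```java
public class SafeCastSketch {
    /**
     * Returns the value as the requested type: null passes through, the
     * right runtime type is cast, and a wrong runtime type yields a
     * descriptive exception rather than a bare ClassCastException.
     */
    static <T> T getAs(Class<T> type, Object o) {
        if (o == null || type.isInstance(o)) {
            return type.cast(o);
        }
        // Derby throws a type-mismatch SQLException here.
        throw new IllegalStateException("cannot return "
                + o.getClass().getName() + " as " + type.getName());
    }

    public static void main(String[] args) {
        System.out.println(getAs(String.class, "abc")); // prints "abc"
        System.out.println(getAs(String.class, null));  // prints "null"
    }
}
```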
derby-DERBY-4973-d1ba0d0c
|
DERBY-4973 NullPointerException in updatelocks.sql encryption tests on IBM 1.6
Change Xact.getContextId() to just read xc value once to avoid possible NPE with lock table query.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1062096 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/xact/Xact.java",
"hunks": [
{
"added": [
" /**",
" * Get my transaction context Id",
" */",
" public final String getContextId() {",
" //DERBY-4973. Make a copy of xc so we are working on a stable ",
" // copy, especially for the lock table VTI. If we don't, there may",
" // be a chance for a NullPointerException if close() is called ",
" //by another thread after the check but before the dereference.",
" XactContext tempxc = xc;",
" return (tempxc == null) ? null : tempxc.getIdName();",
" }"
],
"header": "@@ -624,14 +624,17 @@ public class Xact extends RawTransaction implements Limit, LockOwner {",
"removed": [
"\t/**",
"\t\tGet my transaction context Id",
"\t*/",
"\tpublic final String getContextId() ",
"\t{",
"\t\treturn (xc == null) ? null : xc.getIdName();",
"\t}",
""
]
}
]
}
] |
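The fix is the read-once idiom for a shared field that another thread may null out: copy the field into a local variable once, then both test and dereference only the local. A self-contained sketch of the idiom (all names here are hypothetical, not Derby's Xact/XactContext):

```java
public class ReadOnceSketch {
    static class Context {
        final String idName;
        Context(String idName) { this.idName = idName; }
        String getIdName() { return idName; }
    }

    // Field that a concurrent close() may set to null at any time.
    private volatile Context xc = new Context("T1");

    /**
     * Without the local copy, xc could become null between the null check
     * and the dereference, producing the NullPointerException seen in
     * DERBY-4973. One read into a local makes check and use consistent.
     */
    public String getContextId() {
        Context tempxc = xc; // single read of the shared field
        return (tempxc == null) ? null : tempxc.getIdName();
    }

    public void close() { xc = null; }

    public static void main(String[] args) {
        ReadOnceSketch t = new ReadOnceSketch();
        System.out.println(t.getContextId()); // prints "T1"
        t.close();
        System.out.println(t.getContextId()); // prints "null"
    }
}
```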
derby-DERBY-4974-391d4a65
|
DERBY-4974 InterruptResilienceTest fails on Solaris with Sun VMs prior to 1.6
Patch DERBY-4974, which:
a) skips the tests in InterruptResilienceTest if running with
interruptible IO on Solaris
b) improves the check for interruptible IO to work even if
system/derby.log doesn't yet exist (which would be the case if the
test is run stand-alone).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1061988 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-498-4b9c0f51
|
DERBY-498 - Result set holdability defined inside stored procedures is ignored by server/client
The attached patch 'derby-498.diff' changes the network server to use the statement holdability set within stored procedures. The patch does the following:
1. For callable statements, the execute method in DRDAStatement gets holdability from the statement that produced the resultset.
2. Added getResultSetHoldability method which takes a resultset and returns holdability.
3. execute method passes this holdability to addResultSet method, which sets DRDAResultSet.withHoldCursor with this value.
4. writeOPNQRYRM method in DRDAConnThread is changed to use the holdability of the current DRDAResultSet for setting SQLCSRHLD.
5. Added tests in lang/holdCursorJava.java. Created a new master file for DerbyNetClient.
Ran derbyall on WinXP Sun jdk1.4.2. No failures. However, in a previous run of derbyall I got failures in a few encryption tests. The failures did not seem related to my change. So I ran the encryption suites again and they passed. Then ran derbyall again and all tests passed.
Also attaching an additional patch "xa_proc_test.diff" for xa tests. It does the following:
1. Adds procedure test to jdbcapi/xaSimplePositive.sql.
2. Updates master files.
Contributed by Deepa Remesh dremesh@gmail.com
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@326718 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAStatement.java",
"hunks": [
{
"added": [
"\t",
"\t/**",
"\t *",
"\t * get resultSetHoldability with reflection. ",
"\t * We need to use reflection so we can use hold cursors with 1.3.1. ",
"\t * And also since our statement might be a BrokeredStatement.",
"\t * ",
"\t * @param rs ResultSet ",
"\t * @return the resultSet holdability for the prepared statement",
"\t *",
"\t */",
"\tprotected int getResultSetHoldability(ResultSet rs) throws SQLException",
"\t{",
"\t\tStatement rsstmt = null;",
"\t\tint holdValue = -1;",
"",
"\t\tif (rs != null)",
"\t\t\trsstmt = rs.getStatement();",
"\t\telse",
"\t\t\trsstmt = getPreparedStatement();",
"\t\t\t\t",
"\t\tClass[] getResultSetHoldabilityParam = {};",
"\t\ttry {",
"\t\t\tMethod sh =",
"\t\t\t\trsstmt.getClass().getMethod(\"getResultSetHoldability\", getResultSetHoldabilityParam);",
"\t\t\tholdValue = ((Integer) sh.invoke(rsstmt,null)).intValue();",
"\t\t}",
"\t\tcatch (Exception e) {",
"\t\t\thandleReflectionException(e);",
"\t\t}",
"\t\treturn holdValue;",
"\t}\t"
],
"header": "@@ -273,6 +273,38 @@ class DRDAStatement",
"removed": []
},
{
"added": [
"\t\t\t\t//For callable statement, get holdability of statement generating the result set",
"\t\t\t\tif(isCallable)",
"\t\t\t\t\taddResultSet(rs,getResultSetHoldability(rs));",
"\t\t\t\telse",
"\t\t\t\t\taddResultSet(rs,withHoldCursor);"
],
"header": "@@ -540,7 +572,11 @@ class DRDAStatement",
"removed": [
"\t\t\t\taddResultSet(rs);"
]
},
{
"added": [
"\t",
"\t/**",
"\t * Gets the current DRDA ResultSet",
"\t * ",
"\t * @return DRDAResultSet",
"\t */",
"\tprotected DRDAResultSet getCurrentDrdaResultSet()",
"\t{",
"\t\treturn currentDrdaRs ;",
"\t}"
],
"header": "@@ -706,6 +742,16 @@ class DRDAStatement",
"removed": []
},
{
"added": [
"\t * @param holdValue - Holdability of the ResultSet ",
"\tprotected String addResultSet(ResultSet value, int holdValue) throws SQLException"
],
"header": "@@ -785,11 +831,12 @@ class DRDAStatement",
"removed": [
"\tprotected String addResultSet(ResultSet value) throws SQLException"
]
}
]
}
] |
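The patch calls getResultSetHoldability reflectively so the class still loads on JDK 1.3.1, where the method does not exist, and so it works through a BrokeredStatement. The lookup-and-invoke shape can be sketched against a hypothetical target class rather than a real JDBC Statement:

```java
import java.lang.reflect.Method;

public class ReflectiveHoldabilitySketch {
    // Stand-in for a Statement implementation that has the newer method.
    public static class FakeStatement {
        public int getResultSetHoldability() { return 1; }
    }

    /**
     * Looks up a zero-argument method by name at runtime and invokes it,
     * returning -1 when the method is missing; the original code instead
     * routes reflection failures through handleReflectionException.
     */
    static int holdability(Object target) {
        try {
            Method m = target.getClass().getMethod("getResultSetHoldability");
            return ((Integer) m.invoke(target)).intValue();
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(holdability(new FakeStatement())); // prints 1
        System.out.println(holdability("no such method"));    // prints -1
    }
}
```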
derby-DERBY-4982-dede9bfc
|
DERBY-4982 Retrying after interrupts in store pops a bug in derbyall/storeall/storeunit/T_RawStoreFactory in some cases
Patch derby-4982c. Makes the sane and insane behavior of double latch attempt detection the same: throw DATA_DOUBLE_LATCH_INTERNAL_ERROR.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1063960 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/unitTests/store/T_Util.java",
"hunks": [
{
"added": [
" if (!\"XSDAO\".equals(se.getSQLState())) {"
],
"header": "@@ -1142,48 +1142,15 @@ public class T_Util",
"removed": [
"\t\t// we expect to hang in getPage() so make sure we are interrupted",
"\t\tfinal Thread me = Thread.currentThread();",
"\t\tRunnable r = new Runnable() {",
"\t\t\t\tpublic void run() {",
"\t\t\t\t\ttry {",
"\t\t\t\t\t\tThread.sleep(2000);",
"\t\t\t\t\t} catch (InterruptedException e) { }",
"\t\t\t\t\tme.interrupt();",
"\t\t\t\t}",
"\t\t\t};",
"\t\tThread interrupter = new Thread(r);",
"\t\tif (!SanityManager.DEBUG) {",
"\t\t\t// don't run the interrupter thread in sane builds, since getPage()",
"\t\t\t// will throw an assert error instead of hanging (DERBY-2635)",
"\t\t\tinterrupter.start();",
"\t\t}",
"\t\t\t// expect thread interrupted exception in insane builds",
"\t\t\tif (SanityManager.DEBUG || !se.getMessageId().equals(\"08000\")) {",
"\t\t} catch (RuntimeException e) {",
"\t\t\t// When running in sane mode, an AssertFailure will be thrown if we",
"\t\t\t// try to double latch a page. The AssertFailure class is not",
"\t\t\t// available in insane jars, so we cannot reference the class",
"\t\t\t// directly.",
"\t\t\tif (!(SanityManager.DEBUG &&",
"\t\t\t\t e.getClass().getName().endsWith(\".sanity.AssertFailure\") &&",
"\t\t\t\t e.getMessage().endsWith(\"Attempted to latch page twice\"))) {",
"\t\t\t\tthrow e;",
"\t\t\t}",
"",
"\t\ttry {",
"\t\t\tif (interrupter.isAlive()) {",
"\t\t\t\tinterrupter.join();",
"\t\t\t}",
"\t\t} catch (InterruptedException ie) { }"
]
}
]
}
] |
derby-DERBY-4984-1bee891c
|
DERBY-5079 " DERBY-4984 caused a regression which will not allow users to drop a table if the table was involved in a trigger action rebind during ALTER TABLE DROP COLUMN
Adding some commented-out test cases to show the problem with drop table after ALTER TABLE DROP COLUMN and some combination of triggers. This is caused because the changes for DERBY-4984 used an incorrect current dependent for the dependency system before doing a recompile of trigger action sql. Work is being done to use the correct dependent and recreate the dependencies in SYSDEPENDS correctly after a trigger action recompile is done following an ALTER TABLE DROP COLUMN.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076387 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4984-bf58aa63
|
DERBY-5079 (DERBY-4984 caused a regression which will not allow users to drop a table if the table was involved in a trigger action rebind during ALTER TABLE DROP COLUMN)
Trigger action sql should be rebound using the statement and not the trigger table. Also, dependency between the trigger table and trigger action SPS should be established.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1078693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4985-414ba6f7
|
DERBY-4985 BootLockTest can fail with ERROR XCY03: Required property 'derby.serviceProtocol' has not been set with slow configurations
Use file created by BootLockMinion to coordinate the processes.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1063809 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4987-e9f4ad9a
|
DERBY-4987 BootLockTest can hang reading spawned process output
The test will now only attempt to get the error output after BootLockMinion has exited, and it also has a max timeout of ten minutes, which won't slow down normal runs of the test but will hopefully be long enough to avoid the risk of hanging again.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1065061 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-4988-69dbd137
|
DERBY-4988 ALTER TABLE DROP COLUMN should make use of information in SYSTRIGGERS to detect column used through REFERENCING clause to find trigger dependencies
At the time of ALTER TABLE DROP COLUMN, Derby looks for trigger dependencies by looking for the column being dropped in the trigger column list, but that is not enough. The SQL standard requires that the column should not be part of an explicit trigger column list or a triggered action column set.
Starting with Derby 10.7, we have kept track of trigger action columns which are referenced through the REFERENCING clause. This commit makes use of that additional info to take a step forward towards meeting the SQL standard. It still does not recognize trigger action columns that are not part of the REFERENCING clause. That work can go separately.
I have added an upgrade test to make sure that compatibility does not break between Derby releases prior to 10.7 and forward.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1066290 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t\t//Now go through each trigger on this table and see if the column ",
"\t\t//being dropped is part of it's trigger columns or trigger action ",
"\t\t//columns which are used through REFERENCING clause",
"\t\t\t//If we find that the trigger is dependent on the column being ",
"\t\t\t//dropped because column is part of trigger columns list, then",
"\t\t\t//we will give a warning or drop the trigger based on whether",
"\t\t\t//ALTER TABLE DROP COLUMN is RESTRICT or CASCADE. In such a",
"\t\t\t//case, no need to check if the trigger action columns referenced",
"\t\t\t//through REFERENCING clause also used the column being dropped.",
"\t\t\tboolean triggerDroppedAlready = false;",
"",
"\t\t\tif (referencedCols != null) {",
"\t\t\t\tint refColLen = referencedCols.length, j;",
"\t\t\t\tboolean changed = false;",
"\t\t\t\tfor (j = 0; j < refColLen; j++)",
"\t\t\t\t{",
"\t\t\t\t\tif (referencedCols[j] > droppedColumnPosition)",
"\t {",
"\t\t\t\t\t\t//Trigger is not defined on the column being dropped",
"\t\t\t\t\t\t//but the column position of trigger column is changing",
"\t\t\t\t\t\t//because the position of the column being dropped is",
"\t\t\t\t\t\t//before the the trigger column",
"\t\t\t\t\t\tchanged = true;",
"\t }",
"\t\t\t\t\telse if (referencedCols[j] == droppedColumnPosition)",
"\t\t\t\t\t{",
"\t\t\t\t\t\t//the trigger is defined on the column being dropped",
"\t\t\t\t\t\tif (cascade)",
"\t\t\t\t\t\t{",
"\t trd.drop(lcc);",
"\t triggerDroppedAlready = true;",
"\t\t\t\t\t\t\tactivation.addWarning(",
"\t\t\t\t\t\t\t\tStandardException.newWarning(",
"\t SQLState.LANG_TRIGGER_DROPPED, ",
"\t trd.getName(), td.getName()));",
"\t\t\t\t\t\t}",
"\t\t\t\t\t\telse",
"\t\t\t\t\t\t{\t// we'd better give an error if don't drop it,",
"\t\t\t\t\t\t\t// otherwsie there would be unexpected behaviors",
"\t\t\t\t\t\t\tthrow StandardException.newException(",
"\t SQLState.LANG_PROVIDER_HAS_DEPENDENT_OBJECT,",
"\t dm.getActionString(DependencyManager.DROP_COLUMN),",
"\t columnName, \"TRIGGER\",",
"\t trd.getName() );",
"\t\t\t\t\t\t}",
"\t\t\t\t\t\tbreak;",
"\t\t\t\t\t}",
"\t\t\t\t}",
"",
"\t\t\t\t// change triggers to refer to columns in new positions",
"\t\t\t\tif (j == refColLen && changed)",
"\t\t\t\t{",
"\t\t\t\t\tdd.dropTriggerDescriptor(trd, tc);",
"\t\t\t\t\tfor (j = 0; j < refColLen; j++)",
"\t\t\t\t\t{",
"\t\t\t\t\t\tif (referencedCols[j] > droppedColumnPosition)",
"\t\t\t\t\t\t\treferencedCols[j]--;",
"\t\t\t\t\t}",
"\t\t\t\t\tdd.addDescriptor(trd, sd,",
"\t\t\t\t\t\t\t\t\t DataDictionary.SYSTRIGGERS_CATALOG_NUM,",
"\t\t\t\t\t\t\t\t\t false, tc);",
"\t\t\t\t}",
"\t\t\t}",
"",
"\t\t\t//If the trigger under consideration already got dropped through ",
"\t\t\t//the referencedCols loop above, then move to next trigger",
"\t\t\tif (triggerDroppedAlready) continue;",
"\t\t\t",
"\t\t\t//None of the triggers use column being dropped as a trigger ",
"\t\t\t//column. Check if the column being dropped is getting used ",
"\t\t\t//inside the trigger action through REFERENCING clause.",
"\t\t\tint[] referencedColsInTriggerAction = trd.getReferencedColsInTriggerAction();",
"\t\t\tif (referencedColsInTriggerAction == null)",
"",
"\t\t\tint refColInTriggerActionLen = referencedColsInTriggerAction.length, j;",
"\t\t\tboolean changedColPositionInTriggerAction = false;",
"\t\t\tfor (j = 0; j < refColInTriggerActionLen; j++)",
"\t\t\t\tif (referencedColsInTriggerAction[j] > droppedColumnPosition)",
"\t\t\t\t{",
"\t\t\t\t\tchangedColPositionInTriggerAction = true;",
"\t\t\t\t}",
"\t\t\t\telse if (referencedColsInTriggerAction[j] == droppedColumnPosition)"
],
"header": "@@ -1329,24 +1329,96 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\t// need to deal with triggers if has referencedColumns",
"\t\t\tif (referencedCols == null)",
"\t\t\tint refColLen = referencedCols.length, j;",
"\t\t\tboolean changed = false;",
"\t\t\tfor (j = 0; j < refColLen; j++)",
"\t\t\t\tif (referencedCols[j] > droppedColumnPosition)",
" {",
"\t\t\t\t\tchanged = true;",
" }",
"\t\t\t\telse if (referencedCols[j] == droppedColumnPosition)"
]
},
{
"added": [
"\t\t\t\t\t\t// otherwise there would be unexpected behaviors"
],
"header": "@@ -1358,7 +1430,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\t\t\t\t\t// otherwsie there would be unexpected behaviors"
]
}
]
}
] |
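Both loops in the hunk apply one rule to an array of 1-based column positions: a reference equal to the dropped position means the trigger depends on the dropped column, and references past it must shift down by one because later columns slide into earlier slots. A standalone sketch of that rule (method and class names are invented for illustration):

```java
import java.util.Arrays;

public class DropColumnShiftSketch {
    /**
     * Returns null if the dropped position is referenced, signalling that
     * the caller must drop the trigger (CASCADE) or raise an error
     * (RESTRICT); otherwise returns the positions with every entry after
     * the dropped column decremented by one.
     */
    static int[] adjustForDrop(int[] referencedCols, int droppedColumnPosition) {
        int[] adjusted = referencedCols.clone();
        for (int j = 0; j < adjusted.length; j++) {
            if (adjusted[j] == droppedColumnPosition) {
                return null; // trigger is defined on the dropped column
            } else if (adjusted[j] > droppedColumnPosition) {
                adjusted[j]--; // column slid down one position
            }
        }
        return adjusted;
    }

    public static void main(String[] args) {
        // Dropping column 2: column 4 becomes column 3, column 1 is untouched.
        System.out.println(Arrays.toString(adjustForDrop(new int[] {1, 4}, 2)));
        System.out.println(adjustForDrop(new int[] {1, 2}, 2)); // prints "null"
    }
}
```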
derby-DERBY-4997-22db8062
|
DERBY-4997 SysinfoTest version output filtering is fragile with new java versions
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1067250 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5001-7711d18e
|
DERBY-5001 Intermittent bug in InterruptResilienceTest
Fix to allow for lock interrupts during writing in the MT RAF test fixture.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1066911 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5002-cf3b595e
|
DERBY-5002 In case of checksum error, ensure the correct error is reported to the user.
Changed the order of sanity page checking to make sure that if there is a
checksum error on reading the page from disk, then it is the error reported.
Before this change, certain page inconsistencies would be found before doing
the checksum check and would report different kinds of errors depending on
where the corruption happened on the page. The main error case checksums
try to catch is a partially written page: because a Derby page is made up
of multiple OS/drive blocks, some blocks can make it to disk before others,
and in the case of a hardware crash an incomplete page may be written. In this
case, the current Derby implementation cannot recover from the log, as it needs
a valid page to look at in order to apply log records. The db must be
recovered from a Derby backup in this case.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1067357 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [],
"header": "@@ -774,18 +774,6 @@ public class StoredPage extends CachedPage",
"removed": [
" try ",
" {",
" readPageHeader();",
" initSlotTable(newIdentity);",
" }",
" catch (IOException ioe) ",
" {",
" // i/o methods on the byte array have thrown an IOException",
" throw dataFactory.markCorrupt(",
" StandardException.newException(",
" SQLState.DATA_CORRUPT_PAGE, ioe, newIdentity));",
" }"
]
},
{
"added": [
" try ",
" {",
" readPageHeader();",
" initSlotTable(newIdentity);",
" }",
" catch (IOException ioe) ",
" {",
" // i/o methods on the byte array have thrown an IOException",
" throw dataFactory.markCorrupt(",
" StandardException.newException(",
" SQLState.DATA_CORRUPT_PAGE, ioe, newIdentity));",
" }"
],
"header": "@@ -849,8 +837,19 @@ public class StoredPage extends CachedPage",
"removed": [
" "
]
},
{
"added": [
" if (SanityManager.DEBUG) ",
" {",
" if (getRecordOffset(slot) <= 0)",
" {",
" SanityManager.DEBUG_PRINT(\"DEBUG_TRACE\",",
" \"getTotalSpace failed with getRecordOffset(\" + ",
" slot + \") = \" +",
" getRecordOffset(slot) + \" must be greater than 0.\" +",
" \"page dump = \\n\" +",
" toUncheckedString());",
" SanityManager.THROWASSERT(",
" \"bad record offset found in getTotalSpace()\");",
" }",
" }",
""
],
"header": "@@ -1092,6 +1091,21 @@ public class StoredPage extends CachedPage",
"removed": []
},
{
"added": [
" if (getRecordOffset(slot) <= 0)",
" {",
" SanityManager.DEBUG_PRINT(\"DEBUG_TRACE\",",
" \"getRecordPortionLength failed with getRecordOffset(\" + ",
" slot + \") = \" +",
" getRecordOffset(slot) + \" must be greater than 0.\" +",
" \"page dump = \\n\" +",
" toUncheckedString());",
" SanityManager.THROWASSERT(",
" \"bad record offset found in getRecordPortionLength()\");",
" }"
],
"header": "@@ -2042,7 +2056,17 @@ public class StoredPage extends CachedPage",
"removed": [
" SanityManager.ASSERT(getRecordOffset(slot) != 0);"
]
},
{
"added": [
" if (getRecordOffset(slot) <= 0)",
" {",
" SanityManager.DEBUG_PRINT(\"DEBUG_TRACE\",",
" \"getReservedCount failed with getRecordOffset(\" + ",
" slot + \") = \" +",
" getRecordOffset(slot) + \" must be greater than 0.\" +",
" \"page dump = \\n\" +",
" toUncheckedString());",
" SanityManager.THROWASSERT(",
" \"bad record offset found in getReservedCount\");",
" }"
],
"header": "@@ -2071,7 +2095,17 @@ public class StoredPage extends CachedPage",
"removed": [
" SanityManager.ASSERT(getRecordOffset(slot) != 0);"
]
},
{
"added": [
" public String toUncheckedString()",
" {",
" if (SanityManager.DEBUG)",
" {",
" String str = \"---------------------------------------------------\\n\";",
" str += pageHeaderToString();",
"",
" //if (SanityManager.DEBUG_ON(\"dumpPageImage\"))",
" {",
" str += \"---------------------------------------------------\\n\";",
" str += pagedataToHexDump(pageData);",
" str += \"---------------------------------------------------\\n\";",
" }",
" return str;",
" }",
" else",
" return null;",
" }",
""
],
"header": "@@ -8044,6 +8078,25 @@ public class StoredPage extends CachedPage",
"removed": []
}
]
}
] |
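The new toUncheckedString diagnostic leans on a hex dump of the raw page bytes (pagedataToHexDump). A minimal hex-dump helper in the same spirit; the method name and 16-bytes-per-line layout are choices of this sketch, not Derby's actual formatting:

```java
public class HexDumpSketch {
    /** Renders bytes as two-digit lowercase hex, 16 bytes per line. */
    static String toHexDump(byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < data.length; i++) {
            // Mask to 0..255 so negative bytes still print as two hex digits.
            sb.append(String.format("%02x", data[i] & 0xff));
            sb.append((i % 16 == 15 || i == data.length - 1) ? '\n' : ' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toHexDump(new byte[] {0x00, 0x7f, (byte) 0xff}));
        // prints: 00 7f ff
    }
}
```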
derby-DERBY-5003-87c74010
|
DERBY-5003 NPE in ReplicationRun_Local_3_p5 when stopping slave after purposefully crashing master
Patch to collect diagnostics (sane jars only): derby-5003-diagnostics.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1617484 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java",
"hunks": [
{
"added": [
" catch (NullPointerException e) {",
" if (SanityManager.DEBUG) {",
" SanityManager.DEBUG_PRINT(\"DERBY-5003 [1]:\", this.toString());",
" }",
" throw e;",
" }"
],
"header": "@@ -4109,6 +4109,12 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": []
},
{
"added": [
" catch (NullPointerException e) {",
" if (SanityManager.DEBUG) {",
" SanityManager.DEBUG_PRINT(\"DERBY-5003 [2]\", this.toString());",
" }",
" throw e;",
" }"
],
"header": "@@ -4166,6 +4172,12 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": []
},
{
"added": [
" ",
" @Override",
" @SuppressWarnings(\"StringConcatenationInsideStringBufferAppend\")",
" public String toString() {",
" StringBuilder sb = new StringBuilder();",
" if (SanityManager.DEBUG) { // to reduce footprint in insane code",
" sb.append(\"LogToFile: [\\n\");",
" sb.append(\" logOut=\" + logOut + \"\\n\");",
" sb.append(\" dataDirectory=\" + dataDirectory + \"\\n\");",
" sb.append(\" logStorageFactory=\" + logStorageFactory + \"\\n\");",
" sb.append(\" logBeingFlushed=\" + logBeingFlushed + \"\\n\");",
" sb.append(\" firstLog=\" + firstLog + \"\\n\");",
" sb.append(\" endPosition=\" + endPosition + \"\\n\");",
" sb.append(\" lastFlush=\" + lastFlush + \"\\n\");",
" sb.append(\" logFileNumber=\" + logFileNumber + \"\\n\");",
" sb.append(\" bootTimeLogFileNumber=\" + bootTimeLogFileNumber + \"\\n\");",
" sb.append(\" firstLogFileNumber=\" + firstLogFileNumber + \"\\n\");",
" sb.append(\" maxLogFileNumber=\" + maxLogFileNumber + \"\\n\");",
" sb.append(\" currentCheckpoint=\" + currentCheckpoint + \"\\n\");",
" sb.append(\" checkpointInstant=\" + checkpointInstant + \"\\n\");",
" sb.append(\" currentCheckpoint=\" + currentCheckpoint + \"\\n\");",
" sb.append(\" checkpointDaemon=\" + checkpointDaemon + \"\\n\");",
" sb.append(\" myClientNumber=\" + myClientNumber + \"\\n\");",
" sb.append(\" checkpointDaemonCalled=\" + checkpointDaemonCalled + \"\\n\");",
" sb.append(\" logWrittenFromLastCheckPoint=\" + logWrittenFromLastCheckPoint + \"\\n\");",
" sb.append(\" rawStoreFactory=\" + rawStoreFactory + \"\\n\");",
" sb.append(\" dataFactory=\" + dataFactory + \"\\n\");",
" sb.append(\" ReadOnlyDB=\" + ReadOnlyDB + \"\\n\");",
" sb.append(\" masterFactory=\" + masterFactory + \"\\n\");",
" sb.append(\" inReplicationMasterMode=\" + inReplicationMasterMode + \"\\n\");",
" sb.append(\" inReplicationSlaveMode=\" + inReplicationSlaveMode + \"\\n\");",
" sb.append(\" replicationSlaveException=\" + replicationSlaveException + \"\\n\");",
" sb.append(\" inReplicationSlaveMode=\" + inReplicationSlaveMode + \"\\n\");",
" sb.append(\" replicationSlaveException=\" + replicationSlaveException + \"\\n\");",
" sb.append(\" inReplicationSlavePreMode=\" + inReplicationSlavePreMode + \"\\n\");",
" sb.append(\" replicationSlaveException=\" + replicationSlaveException + \"\\n\");",
" sb.append(\" slaveRecoveryMonitor=\" + slaveRecoveryMonitor + \"\\n\");",
" sb.append(\" allowedToReadFileNumber=\" + allowedToReadFileNumber + \"\\n\");",
" sb.append(\" slaveRecoveryMonitor=\" + slaveRecoveryMonitor + \"\\n\");",
" sb.append(\" keepAllLogs=\" + keepAllLogs + \"\\n\");",
" sb.append(\" databaseEncrypted=\" + databaseEncrypted + \"\\n\");",
" sb.append(\" keepAllLogs=\" + keepAllLogs + \"\\n\");",
" sb.append(\" recoveryNeeded=\" + recoveryNeeded + \"\\n\");",
" sb.append(\" inCheckpoint=\" + inCheckpoint + \"\\n\");",
" sb.append(\" inRedo=\" + inRedo + \"\\n\");",
" sb.append(\" inLogSwitch=\" + inLogSwitch + \"\\n\");",
" sb.append(\" stopped=\" + stopped + \"\\n\");",
" sb.append(\" logDevice=\" + logDevice + \"\\n\");",
" sb.append(\" logNotSynced=\" + logNotSynced + \"\\n\");",
" sb.append(\" logArchived=\" + logArchived + \"\\n\");",
" sb.append(\" logSwitchRequired=\" + logSwitchRequired + \"\\n\");",
" sb.append(\" test_logWritten=\" + test_logWritten + \"\\n\");",
" sb.append(\" test_numRecordToFillLog=\" + test_numRecordToFillLog + \"\\n\");",
" sb.append(\" mon_flushCalls=\" + mon_flushCalls + \"\\n\");",
" sb.append(\" mon_syncCalls=\" + mon_syncCalls + \"\\n\");",
" sb.append(\" mon_numLogFlushWaits=\" + mon_numLogFlushWaits + \"\\n\");",
" sb.append(\" mon_LogSyncStatistics=\" + mon_LogSyncStatistics + \"\\n\");",
" sb.append(\" corrupt=\" + corrupt + \"\\n\");",
" sb.append(\" isFrozen=\" + isFrozen + \"\\n\");",
" sb.append(\" jbmsVersion=\" + jbmsVersion + \"\\n\");",
" sb.append(\" onDiskMajorVersion=\" + onDiskMajorVersion + \"\\n\");",
" sb.append(\" onDiskMinorVersion=\" + onDiskMinorVersion + \"\\n\");",
" sb.append(\" onDiskBeta=\" + onDiskBeta + \"\\n\");",
" sb.append(\" checksum=\" + checksum + \"\\n\");",
" sb.append(\" onDiskBeta=\" + onDiskBeta + \"\\n\");",
" sb.append(\" isWriteSynced=\" + isWriteSynced + \"\\n\");",
" sb.append(\" jvmSyncErrorChecked=\" + jvmSyncErrorChecked + \"\\n\");",
" sb.append(\" logFileToBackup=\" + logFileToBackup + \"\\n\");",
" sb.append(\" backupInProgress=\" + backupInProgress + \"]\\n\");",
" }",
" return sb.toString();",
" }"
],
"header": "@@ -5870,4 +5882,76 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": []
}
]
}
] |
derby-DERBY-501-fade7e97
|
DERBY-501: Client and embedded drivers differ on invoking a procedure
that returns a single Dynamic resultSet using CallableStatement.executeQuery()
This patch modifies EmbedStatement.processDynamicResults() so that it
returns the number of dynamic results instead of a
boolean. EmbedStatement.executeStatement() uses this number to decide
whether an exception is to be raised. With this change, the
executeQuery and executeUpdate parameters are no longer needed in
GenericPreparedStatement.execute().
ProcedureTest.junit is now enabled in derbyall (all frameworks). Seven
of the test cases run in the embedded framework only, but I expect all
of them to succeed with the client driver after DERBY-1314 and
DERBY-1364 have been fixed.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@414795 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/PreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -101,8 +101,6 @@ public interface PreparedStatement",
"removed": [
"\t * @param executeQuery\t\tWhether or not called from a Statement.executeQuery()",
"\t * @param executeUpdate\tWhether or not called from a Statement.executeUpdate()"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
" ps.execute(act, true, 0L); "
],
"header": "@@ -3485,7 +3485,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" ps.execute(act, false, true, true, 0L); "
]
},
{
"added": [
" // Execute the update where current of sql.",
" org.apache.derby.iapi.sql.ResultSet rs = ps.execute(act, true, 0L);"
],
"header": "@@ -3556,7 +3556,8 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" org.apache.derby.iapi.sql.ResultSet rs = ps.execute(act, false, true, true, 0L); //execute the update where current of sql"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedStatement.java",
"hunks": [
{
"added": [
"",
" // The statement returns rows, so calling it with",
" // executeUpdate() is not allowed.",
" if (executeUpdate) {",
" throw StandardException.newException(",
" SQLState.LANG_INVALID_CALL_TO_EXECUTE_UPDATE);",
" }",
""
],
"header": "@@ -1179,14 +1179,20 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" executeQuery,",
" executeUpdate,"
]
},
{
"added": [
" int dynamicResultCount = 0;",
" dynamicResultCount =",
" processDynamicResults(a.getDynamicResults(),",
" a.getMaxDynamicResults());",
"",
" // executeQuery() is not allowed if the statement",
" // doesn't return exactly one ResultSet.",
" if (executeQuery && dynamicResultCount != 1) {",
" throw StandardException.newException(",
" SQLState.LANG_INVALID_CALL_TO_EXECUTE_QUERY);",
" }",
"",
" // executeUpdate() is not allowed if the statement",
" // returns ResultSets.",
" if (executeUpdate && dynamicResultCount > 0) {",
" throw StandardException.newException(",
" SQLState.LANG_INVALID_CALL_TO_EXECUTE_UPDATE);",
" }",
" if (dynamicResultCount == 0) {"
],
"header": "@@ -1217,12 +1223,28 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t\tboolean haveDynamicResults = false;",
"\t\t\t\t\t\thaveDynamicResults = processDynamicResults(a.getDynamicResults(), a.getMaxDynamicResults());",
"\t\t\t\t\tif (!haveDynamicResults) {"
]
},
{
"added": [
" retval = (dynamicResultCount > 0);"
],
"header": "@@ -1240,7 +1262,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\t\t\t\t\tretval = haveDynamicResults;"
]
},
{
"added": [
"",
" /**",
" * Go through a holder of dynamic result sets, remove those that",
" * should not be returned, and sort the result sets according to",
" * their creation.",
" *",
" * @param holder a holder of dynamic result sets",
" * @param maxDynamicResultSets the maximum number of result sets",
" * to be returned",
" * @return the actual number of result sets",
" * @exception SQLException if an error occurs",
" */",
" private int processDynamicResults(java.sql.ResultSet[][] holder,",
" int maxDynamicResultSets)",
" throws SQLException",
" {"
],
"header": "@@ -1446,7 +1468,22 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"\tprivate boolean processDynamicResults(java.sql.ResultSet[][] holder, int maxDynamicResultSets) throws SQLException {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java",
"hunks": [
{
"added": [
"\t\treturn execute(a, rollbackParentContext, timeoutMillis);"
],
"header": "@@ -237,15 +237,13 @@ public class GenericPreparedStatement",
"removed": [
"\t\treturn execute(a, false, false, rollbackParentContext, timeoutMillis);",
"\t * @param\texecuteQuery\t\t\t\tCalled via executeQuery",
"\t * @param\texecuteUpdate\t\t\t\tCalled via executeUpdate"
]
},
{
"added": [],
"header": "@@ -256,8 +254,6 @@ public class GenericPreparedStatement",
"removed": [
" boolean executeQuery,",
" boolean executeUpdate,"
]
}
]
}
] |
derby-DERBY-5012-3f4ebad6
|
DERBY-5012: bad allocation guard in ResultSet#resetUpdatedColumnsForInsert
Reworked fix based on patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1068772 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" for (int i = 0; i < resultSetMetaData_.columns_; i++) {"
],
"header": "@@ -4694,16 +4694,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" if (updatedColumns_ == null) {",
" updatedColumns_ = new Object[resultSetMetaData_.columns_];",
" }",
" if (columnUpdated_ != null) {",
" columnUpdated_ = new boolean[resultSetMetaData_.columns_];",
" }",
" for (int i = 0; i < updatedColumns_.length; i++) {",
" }",
" for (int i = 0; i < columnUpdated_.length; i++) {"
]
}
]
}
] |
derby-DERBY-5014-fff8cb7a
|
DERBY-5014 Tests should restore the timeout values to default after they are done running.
Fixed one test that was not restoring the timeout value.
Contributed by Siddharth Srivastava <akssps011@gmail.com>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1132747 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5015-c3bf8e5b
|
DERBY-5015: Use Arrays.fill() in client/am/ResultSet.java
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1069354 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" Arrays.fill(updatedColumns_, null);",
" Arrays.fill(columnUpdated_, false);"
],
"header": "@@ -4702,14 +4702,10 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" for (int i = 0; i < updatedColumns_.length; i++) {",
" updatedColumns_[i] = null;",
" }",
" for (int i = 0; i < columnUpdated_.length; i++) {",
" columnUpdated_[i] = false;",
" }"
]
},
{
"added": [
" Arrays.fill(rowsetSqlca_, null);"
],
"header": "@@ -5437,9 +5433,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" for (int i = 0; i < rowsetSqlca_.length; i++) {",
" rowsetSqlca_[i] = null;",
" }"
]
}
]
}
] |
derby-DERBY-5017-c7a1d17d
|
DERBY-5017: push code assignments down to where they are used
Patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1071320 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection.java",
"hunks": [
{
"added": [
"\t\t Connection conn = getRealConnection();"
],
"header": "@@ -424,12 +424,9 @@ public abstract class BrokeredConnection implements EngineConnection",
"removed": [
"\t\tClass[] CONN_PARAM = { Integer.TYPE };",
"\t\tObject[] CONN_ARG = { new Integer(stateHoldability)};",
"",
"\t\tConnection conn = getRealConnection();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/DDUtils.java",
"hunks": [
{
"added": [],
"header": "@@ -381,9 +381,6 @@ public\tclass\tDDUtils",
"removed": [
"\t\tString refTableName = refTd.getSchemaName() + \".\" + refTd.getName();",
"",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\t stmtKey = new TableKey(schemaUUID, stmtName);"
],
"header": "@@ -4123,12 +4123,12 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tstmtKey = new TableKey(schemaUUID, stmtName);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DropSchemaNode.java",
"hunks": [
{
"added": [],
"header": "@@ -63,7 +63,6 @@ public class DropSchemaNode extends DDLStatementNode",
"removed": [
" String currentUser = stx.getSQLSessionContext().getCurrentUser();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ExtractOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -95,7 +95,6 @@ public class ExtractOperatorNode extends UnaryOperatorNode {",
"removed": [
"\t\tTypeCompiler tc = operand.getTypeCompiler();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/heap/D_HeapController.java",
"hunks": [
{
"added": [
" String double_str = \"\" + ratio;"
],
"header": "@@ -116,11 +116,11 @@ public class D_HeapController extends DiagnosticableGeneric",
"removed": [
" String double_str = \"\" + ratio;"
]
}
]
}
] |
derby-DERBY-5021-fc54674b
|
DERBY-5021: avoid map lookups in a loop by using entrySet
Patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1085409 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/impl/tools/dblook/DB_Table.java",
"hunks": [
{
"added": [
"import java.util.Map;"
],
"header": "@@ -29,6 +29,7 @@ import java.sql.SQLException;",
"removed": []
},
{
"added": [
"\t\tSet entries = tableIdToNameMap.entrySet();",
"\t\tfor (Iterator itr = entries.iterator(); itr.hasNext(); ) {",
" Map.Entry entry = (Map.Entry)itr.next();",
"\t\t\tString tableId = (String)entry.getKey();",
"\t\t\tString tableName = (String)entry.getValue();"
],
"header": "@@ -76,11 +77,12 @@ public class DB_Table {",
"removed": [
"\t\tSet tableIds = tableIdToNameMap.keySet();",
"\t\tfor (Iterator itr = tableIds.iterator(); itr.hasNext(); ) {",
"\t\t\tString tableId = (String)itr.next();",
"\t\t\tString tableName = (String)(tableIdToNameMap.get(tableId));"
]
}
]
}
] |
derby-DERBY-5022-65014c58
|
DERBY-5022 : override equals correctly
Fix for the .equals() override
Patch contributed by Dave Brosius <dbrosius@apache.org>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1070190 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/FormatableBitSet.java",
"hunks": [
{
"added": [
"\tpublic boolean equals(Object other)",
" if (other instanceof FormatableBitSet) ",
" {",
" FormatableBitSet that = (FormatableBitSet) other;",
"\t\t if (this.getLength() != that.getLength())",
"\t\t {",
"\t\t\t return false;",
"\t\t }",
"\t\t return (this.compare(that) == 0);",
" }",
" return false;"
],
"header": "@@ -373,14 +373,19 @@ public final class FormatableBitSet implements Formatable, Cloneable",
"removed": [
"\tpublic boolean equals(FormatableBitSet other)",
"\t\tif (this.getLength() != other.getLength())",
"\t\t{",
"\t\t\treturn false;",
"\t\t}",
"\t\treturn (this.compare(other) == 0);"
]
}
]
}
] |
derby-DERBY-5025-c287c8c3
|
DERBY-5025: Disable the automatic calculation of statistics when running UpdateStatisticsTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1069890 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5028-f24f53cb
|
DERBY-4463; JMX test in nightly test suite failed with: JMXTest:clientjava.lang.InterruptedException
DERBY-5028; InterruptResilienceTest passes with IBM 1.6 SR9 but creates javacore dumps
Adjusted the skipping of this test with ibm jvms to only skip with 1.5;
Added setting of derby.stream.error.extendedDiagSeverityLevel to 50000 to
prevent unnecessary javacore dump files.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1071545 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5029-c341df03
|
DERBY-5029, DERBY-2095 getParentLogger() won't work after the engine has been shut down once. Change comments and AutoloadTest per Dag's comments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1069981 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/jdbc/AutoloadedDriver.java",
"hunks": [
{
"added": [
" // This is the driver that memorizes the autoloadeddriver (DERBY-2905)",
" // This flag is true unless the deregister attribute has been set to",
" // false by the user (DERBY-2905)"
],
"header": "@@ -61,11 +61,11 @@ public class AutoloadedDriver implements Driver",
"removed": [
" //This is the driver that memorizes the autoloadeddriver (DERBY-2905)",
" //This flag is set is deregister attribute is set by user, ",
" //default is true (DERBY-2905)"
]
},
{
"added": [
" //Support JDBC 4 or higher (DERBY-2905)",
" _autoloadedDriver = makeAutoloadedDriver();"
],
"header": "@@ -228,7 +228,8 @@ public class AutoloadedDriver implements Driver",
"removed": [
" _autoloadedDriver = new AutoloadedDriver();"
]
}
]
}
] |
derby-DERBY-5037-ccdcb3de
|
DERBY-5037: Swallow exceptions encountered by the istat thread while the database is being shutdown.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1071310 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" tryToGatherStats(lcc, td, cds, AS_BACKGROUND_TASK);"
],
"header": "@@ -321,7 +321,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" updateIndexStatsMinion(lcc, td, cds, AS_BACKGROUND_TASK);"
]
},
{
"added": [
" /**",
" * Try to gather statistics. Fail gracefully if we are being shutdown, e.g., the database is killed",
" * while we're busy. See DERBY-5037.",
" *",
" * @param lcc language connection context used to perform the work",
" * @param td the table to update index stats for",
" * @param cds the conglomerates to update statistics for (non-index",
" * conglomerates will be ignored)",
" * @param asBackgroundTask whether the updates are done automatically as",
" * part of a background task or if explicitly invoked by the user",
" * @throws StandardException if something goes wrong",
" */",
" private void tryToGatherStats(LanguageConnectionContext lcc,",
" TableDescriptor td,",
" ConglomerateDescriptor[] cds,",
" boolean asBackgroundTask)",
" throws StandardException",
" {",
" //",
" // Swallow exceptions raised while we are being shutdown.",
" //",
" try {",
" updateIndexStatsMinion( lcc, td, cds, asBackgroundTask );",
" }",
" catch (StandardException se)",
" {",
" if ( !isShuttingDown( lcc ) ) { throw se; }",
" }",
" // to filter assertions raised by debug jars",
" catch (RuntimeException re)",
" {",
" if ( !isShuttingDown( lcc ) ) { throw re; }",
" }",
" }",
" /** Return true if we are being shutdown */",
" private boolean isShuttingDown( LanguageConnectionContext lcc )",
" {",
" if ( daemonStopped ) { return true; }",
" else { return !lcc.getDatabase().isActive(); }",
" }",
" "
],
"header": "@@ -350,6 +350,47 @@ public class IndexStatisticsDaemonImpl",
"removed": []
}
]
}
] |
derby-DERBY-5040-c2c2fb81
|
DERBY-5040: On Windows, cascade of errors after failed test AutomaticIndexStatisticsTest
Rewrote failing test (deleting the database directory fails) to use a separate
database instead of the default db wombat. This should eliminate the cascade
of errors.
Patch file: derby-5040-1a-use_separate_db.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1071783 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5042-74f00971
|
DERBY-5042: ResultSet.updateBoolean() on new BOOLEAN type throws exception
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1074127 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5044-2198fafc
|
DERBY-5379 testDERBY5120NumRowsInSydependsForTrigger - The number of values assigned is not the same as the number of specified or implied columns.
DERBY-5484 Upgradetest fails with upgrade from 10.8.2.2 (7 errors, 1 failure) on trunk
The above 2 jiras are duplicates. The upgrade tests are failing when doing an upgrade from 10.8.2.2 to trunk.
The tests that are failing were written for DERBY-5120, DERBY-5044. Both these bugs got fixed in 10.8.2.2 and higher.
The purpose of these tests is to show that when the tests are done with a release with those fixes missing, we will see the incorrect behavior, but once the database is upgraded to 10.8.2.2 and higher, the tests will start functioning correctly. The problem is that we do not recognize that if the database is created with 10.8.2.2, then we will not see the problem behavior, because 10.8.2.2 already has the required fixes in it for DERBY-5120 and DERBY-5044. I have fixed this by making the upgrade test understand that incorrect behavior would be seen only for releases under 10.8.2.2
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1203252 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5044-4895279b
|
DERBY-5044 ALTER TABLE DROP COLUMN will not detect triggers defined on other tables with their trigger action using the column being dropped
Adding few tests for this jira
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1080707 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5044-9355c131
|
Fixing comments for the tests written for DERBY-5044. The comments don't need to talk in terms of specific Derby release numbers. This will allow us to backport DERBY-5044 changes to 10.8 without having to fix the comments during the backport.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1171227 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5045-328c3cab
|
DERBY-5045: Assert failures in UpdateStatisticsTest
Updated statistics shouldn't be inserted into SYSSTATISTICS if the
index has been dropped.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1080947 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
"",
" // DERBY-5045: When running as a background task, we don't take",
" // intention locks that prevent dropping the table or its indexes.",
" // So there is a possibility that this index was dropped before",
" // we wrote the statistics to the SYSSTATISTICS table. If the table",
" // isn't there anymore, issue a rollback to prevent inserting rows",
" // for non-existent indexes in SYSSTATISTICS.",
" if (asBackgroundTask && cd == null) {",
" log(asBackgroundTask, td,",
" \"rolled back index stats because index has been dropped\");",
" lcc.internalRollback();",
" }"
],
"header": "@@ -559,6 +559,18 @@ public class IndexStatisticsDaemonImpl",
"removed": []
}
]
}
] |
derby-DERBY-5046-afbc4ba9
|
DERBY-6144 nightly regression test failure, intermittent error : testStatisticsCorrectness(org.apache.derbyTesting.functionTests.tests.store.AutomaticIndexStatisticsTest)junit.framework.A
test was asserting that stats had to be created after "now". I think the
nightly was getting a case where the time was the same. DERBY-5046 fixed
a different part to the same test to check for greater than now, so implemented
that same fix at the offending line. Also added some text to be printed with
the variable values if it happens again.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1467011 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5050-b16f7705
|
DERBY-5050: BrokeredConnection could call setHoldability() without using reflection
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1071558 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -32,14 +32,7 @@ import java.sql.SQLWarning;",
"removed": [
"import java.io.ObjectOutput;",
"import java.io.ObjectInput;",
"",
"import java.lang.reflect.*;",
"",
"import org.apache.derby.iapi.error.PublicAPI;",
"import org.apache.derby.iapi.error.StandardException;"
]
},
{
"added": [
" conn.setHoldability(stateHoldability);"
],
"header": "@@ -433,17 +426,7 @@ public abstract class BrokeredConnection implements EngineConnection",
"removed": [
"\t\t\t// jdk13 does not have Connection.setHoldability method and hence using",
"\t\t\t// reflection to cover both jdk13 and higher jdks",
"\t\t\ttry {",
"\t\t Class[] CONN_PARAM = { Integer.TYPE };",
"\t\t Object[] CONN_ARG = { new Integer(stateHoldability)};",
"\t\t\t\tMethod sh = conn.getClass().getMethod(\"setHoldability\", CONN_PARAM);",
"\t\t\t\tsh.invoke(conn, CONN_ARG);",
"\t\t\t} catch( Exception e)",
"\t\t\t{",
"\t\t\t\tthrow PublicAPI.wrapStandardException( StandardException.plainWrapException( e));",
"\t\t\t}"
]
}
]
}
] |
derby-DERBY-506-24c87005
|
DERBY-506
Removing unused class.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@344344 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/QueryTimerTask.java",
"hunks": [
{
"added": [],
"header": "@@ -1,51 +0,0 @@",
"removed": [
"/*",
"",
" Derby - Class org.apache.derby.client.am.QueryTimerTask",
"",
" Copyright (c) 2001, 2005 The Apache Software Foundation or its licensors, where applicable.",
"",
" Licensed under the Apache License, Version 2.0 (the \"License\");",
" you may not use this file except in compliance with the License.",
" You may obtain a copy of the License at",
"",
" http://www.apache.org/licenses/LICENSE-2.0",
"",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
"",
"*/",
"package org.apache.derby.client.am;",
"",
"public class QueryTimerTask extends java.util.TimerTask {",
" private Statement statement_;",
" private java.util.Timer timer_;",
"",
" public QueryTimerTask(Statement statement, java.util.Timer timer) {",
" statement_ = statement;",
" timer_ = timer;",
" }",
"",
" public void run() {",
" timer_.cancel(); // method call on java.util.Timer to kill the timer thread that triggered this task thread",
" try {",
" statement_.cancel(); // jdbc cancel",
" } catch (SqlException e) {",
" SqlWarning warning = new SqlWarning(statement_.agent_.logWriter_,",
" \"An exception occurred while trying to cancel a statement that has timed out.\" +",
" \" See chained SQLException.\");",
" warning.setNextException(e);",
" statement_.accumulateWarning(warning);",
" }",
" boolean notYetRun = this.cancel(); // method call on java.util.TimerTask to kill this task thread.",
" if (notYetRun) {",
" // The following is most likely just a bugcheck - but we'll see.",
" // May be able to remove it later.",
" SqlWarning warning = new SqlWarning(statement_.agent_.logWriter_,",
" \"An unexpected error occurred while trying to cancel a statement that has timed out.\");",
" statement_.accumulateWarning(warning);",
" }",
" }",
"}"
]
}
]
}
] |
derby-DERBY-506-c8ceb5f0
|
DERBY-506
Implements Statement.setQueryTimeout in the client driver by
"piggybacking" an EXCSQLSET command on statement execution.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@344147 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/PreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -1259,15 +1259,6 @@ public class PreparedStatement extends Statement",
"removed": [
" java.util.Timer queryTimer = null;",
" QueryTimerTask queryTimerTask = null;",
" if (timeout_ != 0) {",
" queryTimer = new java.util.Timer(); // A thread that ticks the seconds",
" queryTimerTask = new QueryTimerTask(this, queryTimer);",
" queryTimer.schedule(queryTimerTask, 1000 * timeout_);",
" }",
"",
" try {"
]
},
{
"added": [
" boolean timeoutSent = false;",
" if (doWriteTimeout) {",
" timeoutArrayList.set(0, TIMEOUT_STATEMENT + timeout_);",
" writeSetSpecialRegister(timeoutArrayList);",
" doWriteTimeout = false;",
" timeoutSent = true;",
" }"
],
"header": "@@ -1278,8 +1269,15 @@ public class PreparedStatement extends Statement",
"removed": []
},
{
"added": [
" if (timeoutSent) {",
" readSetSpecialRegister(); // Read response to the EXCSQLSET",
" }",
""
],
"header": "@@ -1369,6 +1367,10 @@ public class PreparedStatement extends Statement",
"removed": []
},
{
"added": [],
"header": "@@ -1450,14 +1452,6 @@ public class PreparedStatement extends Statement",
"removed": [
"",
" } finally {",
" if (timeout_ != 0) { // query timers need to be cancelled.",
" queryTimer.cancel();",
" queryTimerTask.cancel();",
" }",
" }",
""
]
},
{
"added": [
" boolean timeoutSent = false;"
],
"header": "@@ -1476,6 +1470,7 @@ public class PreparedStatement extends Statement",
"removed": []
},
{
"added": [
" if (doWriteTimeout) {",
" timeoutArrayList.set(0, TIMEOUT_STATEMENT + timeout_);",
" writeSetSpecialRegister(timeoutArrayList);",
" doWriteTimeout = false;",
" timeoutSent = true;",
" }",
""
],
"header": "@@ -1510,6 +1505,13 @@ public class PreparedStatement extends Statement",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Statement.java",
"hunks": [
{
"added": [
" protected final static String TIMEOUT_STATEMENT = \"SET STATEMENT_TIMEOUT \";",
" protected java.util.ArrayList timeoutArrayList = new java.util.ArrayList(1);",
" protected boolean doWriteTimeout = false;",
" int timeout_ = 0; // for query timeout in seconds"
],
"header": "@@ -131,7 +131,10 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": [
" int timeout_ = 0; // for query timeout in seconds, multiplied by 1000 when passed to java.util.Timer"
]
},
{
"added": [
" if (timeoutArrayList.size() == 0) {",
" timeoutArrayList.add(null); // Make sure the list's length is 1",
" }"
],
"header": "@@ -204,6 +207,9 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
},
{
"added": [
" doWriteTimeout = false;"
],
"header": "@@ -232,6 +238,7 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
},
{
"added": [
" throw new SqlException(agent_.logWriter_,",
" \"Attempt to set a negative query timeout\",",
" \"XJ074.S\");",
" }",
" if (seconds != timeout_) {",
" timeout_ = seconds;",
" doWriteTimeout = true;"
],
"header": "@@ -536,9 +543,14 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": [
" throw new SqlException(agent_.logWriter_, \"Attempt to set a negative query timeout\");",
" timeout_ = seconds; // java.util.Timer takes milliseconds"
]
},
{
"added": [
" boolean timeoutSent = false;"
],
"header": "@@ -1446,17 +1458,8 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": [
" java.util.Timer queryTimer = null;",
" QueryTimerTask queryTimerTask = null;",
" if (timeout_ != 0) {",
" queryTimer = new java.util.Timer(); // A thread that ticks the seconds",
" queryTimerTask = new QueryTimerTask(this, queryTimer);",
" queryTimer.schedule(queryTimerTask, 1000 * timeout_);",
" }",
" // enclose the processing in a try finally block in order to make sure",
" // the query timeout is cancelled at the end of this method.",
" try {"
]
},
{
"added": [
" if (doWriteTimeout) {",
" timeoutArrayList.set(0, TIMEOUT_STATEMENT + timeout_);",
" writeSetSpecialRegister(timeoutArrayList);",
" doWriteTimeout = false;",
" timeoutSent = true;",
" }"
],
"header": "@@ -1464,6 +1467,12 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
},
{
"added": [
" if (timeoutSent) {",
" readSetSpecialRegister(); // Read response to the EXCSQLSET",
" }",
""
],
"header": "@@ -1555,6 +1564,10 @@ public class Statement implements java.sql.Statement, StatementCallbackInterface",
"removed": []
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
" private final static String TIMEOUT_STATEMENT = \"SET STATEMENT_TIMEOUT \";",
"",
" private int pendingStatementTimeout; // < 0 means no pending timeout to set",
""
],
"header": "@@ -139,6 +139,10 @@ public class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" this.pendingStatementTimeout = -1;"
],
"header": "@@ -171,6 +175,7 @@ public class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" if (pendingStatementTimeout >= 0) {",
" ps.setQueryTimeout(pendingStatementTimeout);",
" pendingStatementTimeout = -1;",
" }"
],
"header": "@@ -706,6 +711,10 @@ public class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" if (pendingStatementTimeout >= 0) {",
" stmt.getPreparedStatement().setQueryTimeout(pendingStatementTimeout);",
" pendingStatementTimeout = -1;",
" }"
],
"header": "@@ -3506,6 +3515,10 @@ public class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" Statement statement = drdaStmt.getStatement();",
" statement.clearWarnings();",
" if (pendingStatementTimeout >= 0) {",
" statement.setQueryTimeout(pendingStatementTimeout);",
" pendingStatementTimeout = -1;",
" }",
"\t\tint updCount = statement.executeUpdate(sqlStmt);"
],
"header": "@@ -4323,8 +4336,13 @@ public class DRDAConnThread extends Thread {",
"removed": [
"\t\tdrdaStmt.getStatement().clearWarnings();",
"\t\tint updCount = drdaStmt.getStatement().executeUpdate(sqlStmt);"
]
}
]
}
] |
derby-DERBY-5060-a32eb9f3
|
DERBY-5060: use collection apis
Patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1073375 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
"\t\t\t\tknownManagers.add(new Integer(manager));"
],
"header": "@@ -1685,7 +1685,7 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\tknownManagers.addElement(new Integer(manager));"
]
},
{
"added": [
"\t\t\t\t\t\terrorManagers.add(new Integer(manager));",
"\t\t\t\t\t\terrorManagersLevel.add(new Integer (managerLevel));",
"\t\t\t\tunknownManagers.add(new Integer(manager));"
],
"header": "@@ -1696,14 +1696,14 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\t\terrorManagers.addElement(new Integer(manager));",
"\t\t\t\t\t\terrorManagersLevel.addElement(new Integer (managerLevel));",
"\t\t\t\tunknownManagers.addElement(new Integer(manager));"
]
},
{
"added": [
"\t\t\t\toa[j++] = errorManagers.get(i);",
"\t\t\t\toa[j++] = errorManagersLevel.get(i);"
],
"header": "@@ -1716,8 +1716,8 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\toa[j++] = errorManagers.elementAt(i);",
"\t\t\t\toa[j++] = errorManagersLevel.elementAt(i);"
]
},
{
"added": [
"\t\t\tmanager = ((Integer)knownManagers.get(i)).intValue();"
],
"header": "@@ -1798,7 +1798,7 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\tmanager = ((Integer)knownManagers.elementAt(i)).intValue();"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\t\t\t\tretval = (Session) runQueue.get(0);",
"\t\t\t\trunQueue.remove(0);"
],
"header": "@@ -1834,8 +1834,8 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\tretval = (Session) runQueue.elementAt(0);",
"\t\t\t\trunQueue.removeElementAt(0);"
]
},
{
"added": [
"\t\t\t\t\tboolean on = isOn((String)commandArgs.get(0));",
"\t\t\t\tString directory = (String) commandArgs.get(0);"
],
"header": "@@ -2246,13 +2246,13 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\t\tboolean on = isOn((String)commandArgs.elementAt(0));",
"\t\t\t\tString directory = (String) commandArgs.elementAt(0);"
]
},
{
"added": [
"\t\t\t\t\tboolean on = isOn((String)commandArgs.get(0));"
],
"header": "@@ -2263,7 +2263,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\t\tboolean on = isOn((String)commandArgs.elementAt(0));"
]
},
{
"added": [
"\t\t\t\t\tmax = Integer.parseInt((String)commandArgs.get(0));",
"\t\t\t\t\t\t{(String)commandArgs.get(0), \"maxthreads\"});"
],
"header": "@@ -2277,10 +2277,10 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\t\tmax = Integer.parseInt((String)commandArgs.elementAt(0));",
"\t\t\t\t\t\t{(String)commandArgs.elementAt(0), \"maxthreads\"});"
]
},
{
"added": [
"\t\t\t\tString timeSliceArg = (String)commandArgs.get(0);",
"\t\t\t\t\t\t{(String)commandArgs.get(0), \"timeslice\"});"
],
"header": "@@ -2294,12 +2294,12 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\tString timeSliceArg = (String)commandArgs.elementAt(0);",
"\t\t\t\t\t\t{(String)commandArgs.elementAt(0), \"timeslice\"});"
]
},
{
"added": [
"\t\t\trunQueue.add(clientSession);"
],
"header": "@@ -2324,7 +2324,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\trunQueue.addElement(clientSession);"
]
},
{
"added": [
"\t\t\t\t\t\tcommandArgs.add(args[i++]);",
"\t\t\t\t\tcommandArgs.add(args[i++]);"
],
"header": "@@ -2349,12 +2349,12 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\t\t\tcommandArgs.addElement(args[i++]);",
"\t\t\t\t\tcommandArgs.addElement(args[i++]);"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/TestProto.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\tmanager.add(new Integer(reader.readNetworkShort()));",
"\t\t\t\t\t\tmanagerLevel.add(new Integer(reader.readNetworkShort()));"
],
"header": "@@ -834,8 +834,8 @@ public class TestProto {",
"removed": [
"\t\t\t\t\t\tmanager.addElement(new Integer(reader.readNetworkShort()));",
"\t\t\t\t\t\tmanagerLevel.addElement(new Integer(reader.readNetworkShort()));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/diag/StatementCache.java",
"hunks": [
{
"added": [
"\t\t\t\tdata.add(ps);"
],
"header": "@@ -87,7 +87,7 @@ public final class StatementCache extends VTITemplate {",
"removed": [
"\t\t\t\tdata.addElement(ps);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/classfile/ClassHolder.java",
"hunks": [
{
"added": [
"\t\tcptEntries.add(item);",
"\t\t\tcptEntries.add(null);"
],
"header": "@@ -421,12 +421,12 @@ public class ClassHolder {",
"removed": [
"\t\tcptEntries.addElement(item);",
"\t\t\tcptEntries.addElement(null);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/classfile/ClassInvestigator.java",
"hunks": [
{
"added": [
" implemented.add(className(interfaces[i]));"
],
"header": "@@ -143,7 +143,7 @@ public class ClassInvestigator extends ClassHolder {",
"removed": [
" implemented.addElement(className(interfaces[i]));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/classfile/MemberTable.java",
"hunks": [
{
"added": [
"\t\tentries.add(item);"
],
"header": "@@ -45,7 +45,7 @@ class MemberTable {",
"removed": [
"\t\tentries.addElement(item);"
]
},
{
"added": [
"\t\treturn (ClassMember) entries.get(mth.index);"
],
"header": "@@ -65,7 +65,7 @@ class MemberTable {",
"removed": [
"\t\treturn (ClassMember) entries.elementAt(mth.index);"
]
},
{
"added": [
"\t\t\t((ClassMember) lentries.get(i)).put(out);"
],
"header": "@@ -73,7 +73,7 @@ class MemberTable {",
"removed": [
"\t\t\t((ClassMember) lentries.elementAt(i)).put(out);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/property/PropertyValidation.java",
"hunks": [
{
"added": [
"\t\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.get(i);"
],
"header": "@@ -57,7 +57,7 @@ public class PropertyValidation implements PropertyFactory",
"removed": [
"\t\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.elementAt(i);"
]
},
{
"added": [
"\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.get(i);"
],
"header": "@@ -100,7 +100,7 @@ public class PropertyValidation implements PropertyFactory",
"removed": [
"\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.elementAt(i);"
]
},
{
"added": [
"\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.get(i);"
],
"header": "@@ -125,7 +125,7 @@ public class PropertyValidation implements PropertyFactory",
"removed": [
"\t\t\t\tPropertySetCallback psc = (PropertySetCallback) notifyOnSet.elementAt(i);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/util/IdUtil.java",
"hunks": [
{
"added": [
"\t\t\tv.add(thisId);"
],
"header": "@@ -129,7 +129,7 @@ public abstract class IdUtil",
"removed": [
"\t\t\tv.addElement(thisId);"
]
},
{
"added": [
"\t\t\t\tv.add(thisQName);"
],
"header": "@@ -421,7 +421,7 @@ public abstract class IdUtil",
"removed": [
"\t\t\t\tv.addElement(thisQName); "
]
},
{
"added": [
"\t\t\t\tv.add(thisId);"
],
"header": "@@ -492,7 +492,7 @@ public abstract class IdUtil",
"removed": [
"\t\t\t\tv.addElement(thisId);"
]
},
{
"added": [
"\t\tfor(int ix=0;ix<l1.length;ix++) if (h.contains(l1[ix])) v.add(l1[ix]);"
],
"header": "@@ -532,7 +532,7 @@ public abstract class IdUtil",
"removed": [
"\t\tfor(int ix=0;ix<l1.length;ix++) if (h.contains(l1[ix])) v.addElement(l1[ix]);"
]
},
{
"added": [
"\t\t\t\tv.add(l[ix]);"
],
"header": "@@ -603,7 +603,7 @@ public abstract class IdUtil",
"removed": [
"\t\t\t\tv.addElement(l[ix]);"
]
},
{
"added": [
"\t\t\t\tv.add(external_a[ix]);"
],
"header": "@@ -626,7 +626,7 @@ public abstract class IdUtil",
"removed": [
"\t\t\t\tv.addElement(external_a[ix]);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedStatement.java",
"hunks": [
{
"added": [
" batchStatements.add(sql);"
],
"header": "@@ -910,7 +910,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" batchStatements.addElement(sql);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCMethod.java",
"hunks": [
{
"added": [
"\t\tthrownExceptions.add(exceptionClass);"
],
"header": "@@ -211,7 +211,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tthrownExceptions.addElement(exceptionClass);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/daemon/BasicDaemon.java",
"hunks": [
{
"added": [
"\t\t\tsubscription.add(clientNumber, clientRecord);"
],
"header": "@@ -147,7 +147,7 @@ public class BasicDaemon implements DaemonService, Runnable",
"removed": [
"\t\t\tsubscription.insertElementAt(clientRecord, clientNumber);"
]
},
{
"added": [
"\t\tsubscription.set(clientNumber, null);"
],
"header": "@@ -177,7 +177,7 @@ public class BasicDaemon implements DaemonService, Runnable",
"removed": [
"\t\tsubscription.setElementAt(null, clientNumber);"
]
},
{
"added": [
"\t\tServiceRecord clientRecord = (ServiceRecord)subscription.get(clientNumber);"
],
"header": "@@ -185,7 +185,7 @@ public class BasicDaemon implements DaemonService, Runnable",
"removed": [
"\t\tServiceRecord clientRecord = (ServiceRecord)subscription.elementAt(clientNumber);"
]
},
{
"added": [
"\t\t\tclientRecord = (ServiceRecord)subscription.get(nextService++);"
],
"header": "@@ -251,7 +251,7 @@ public class BasicDaemon implements DaemonService, Runnable",
"removed": [
"\t\t\tclientRecord = (ServiceRecord)subscription.elementAt(nextService++);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/monitor/BaseMonitor.java",
"hunks": [
{
"added": [
"\t\tservices.add(new TopService(this));\t// first element is always the free-floating service"
],
"header": "@@ -143,7 +143,7 @@ abstract class BaseMonitor",
"removed": [
"\t\tservices.addElement(new TopService(this));\t// first element is always the free-floating service"
]
},
{
"added": [
"\t\t\t\tts = (TopService) services.get(position);"
],
"header": "@@ -184,7 +184,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\tts = (TopService) services.elementAt(position);"
]
},
{
"added": [
"\t\t((TopService) services.get(0)).shutdown();"
],
"header": "@@ -205,7 +205,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t((TopService) services.elementAt(0)).shutdown();"
]
},
{
"added": [
"\t\t\t\t\tboolean found = services.remove(ts);"
],
"header": "@@ -230,7 +230,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\t\tboolean found = services.removeElement(ts);"
]
},
{
"added": [
"\t\t\t\tTopService ts = (TopService) services.get(i);"
],
"header": "@@ -408,7 +408,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\tTopService ts = (TopService) services.elementAt(i);"
]
},
{
"added": [
"\t\t\treturn (TopService) services.get(0);",
"\t\t\tTopService ts = (TopService) services.get(i);"
],
"header": "@@ -539,10 +539,10 @@ abstract class BaseMonitor",
"removed": [
"\t\t\treturn (TopService) services.elementAt(0);",
"\t\t\tTopService ts = (TopService) services.elementAt(i);"
]
},
{
"added": [
"\t\t\tObject instance = newInstance((Class) implementations.get(index));"
],
"header": "@@ -729,7 +729,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\tObject instance = newInstance((Class) implementations.elementAt(index));"
]
},
{
"added": [
"\t\t\t\tClass factoryClass = (Class) implementations.get(i);"
],
"header": "@@ -747,7 +747,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\tClass factoryClass = (Class) implementations.elementAt(i);"
]
},
{
"added": [
"\t\t\t\tts = (TopService) services.get(i);"
],
"header": "@@ -831,7 +831,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\tts = (TopService) services.elementAt(i);"
]
},
{
"added": [
"\t\t\t\t\tts = (TopService) services.get(i);"
],
"header": "@@ -843,7 +843,7 @@ abstract class BaseMonitor",
"removed": [
"\t\t\t\t\tts = (TopService) services.elementAt(i);"
]
},
{
"added": [
"\t\t\t\t\timplementations.add(offset, possibleModule);",
"\t\t\t\t\timplementations.add(possibleModule);"
],
"header": "@@ -1149,13 +1149,13 @@ nextModule:",
"removed": [
"\t\t\t\t\timplementations.insertElementAt(possibleModule, offset);",
"\t\t\t\t\timplementations.addElement(possibleModule);"
]
},
{
"added": [
"\t\t\t\t\tTopService ts2 = (TopService) services.get(i);"
],
"header": "@@ -1715,7 +1715,7 @@ nextModule:",
"removed": [
"\t\t\t\t\tTopService ts2 = (TopService) services.elementAt(i);"
]
},
{
"added": [
"\t\t\t\tservices.add(ts);"
],
"header": "@@ -1748,7 +1748,7 @@ nextModule:",
"removed": [
"\t\t\t\tservices.addElement(ts);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/monitor/TopService.java",
"hunks": [
{
"added": [
" module = (ModuleInstance) moduleInstances.get(i);"
],
"header": "@@ -280,7 +280,7 @@ final class TopService {",
"removed": [
" module = (ModuleInstance) moduleInstances.elementAt(i);"
]
},
{
"added": [
"\t\tmoduleInstances.add(module);",
"\t\t\tmoduleInstances.remove(module);"
],
"header": "@@ -327,12 +327,12 @@ final class TopService {",
"removed": [
"\t\tmoduleInstances.addElement(module);",
"\t\t\tmoduleInstances.removeElement(module);"
]
},
{
"added": [
"\t\tmoduleInstances.remove(module);"
],
"header": "@@ -357,7 +357,7 @@ final class TopService {",
"removed": [
"\t\tmoduleInstances.removeElement(module);"
]
},
{
"added": [
"\t\t\t\tmodule = (ModuleInstance) moduleInstances.get(0);"
],
"header": "@@ -385,7 +385,7 @@ final class TopService {",
"removed": [
"\t\t\t\tmodule = (ModuleInstance) moduleInstances.elementAt(0);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ColumnOrdering.java",
"hunks": [
{
"added": [
"\t\t\tInteger col = (Integer) columns.get(i);",
"\t\t\tInteger tab = (Integer) tables.get(i);"
],
"header": "@@ -83,8 +83,8 @@ class ColumnOrdering {",
"removed": [
"\t\t\tInteger col = (Integer) columns.elementAt(i);",
"\t\t\tInteger tab = (Integer) tables.elementAt(i);"
]
},
{
"added": [
"\t\ttables.add(new Integer(tableNumber));",
"\t\tcolumns.add(new Integer(columnNumber));"
],
"header": "@@ -112,8 +112,8 @@ class ColumnOrdering {",
"removed": [
"\t\ttables.addElement(new Integer(tableNumber));",
"\t\tcolumns.addElement(new Integer(columnNumber));"
]
},
{
"added": [
"\t\t\tInteger tab = (Integer) tables.get(i);",
"\t\t\t\ttables.remove(i);",
"\t\t\t\tcolumns.remove(i);"
],
"header": "@@ -127,11 +127,11 @@ class ColumnOrdering {",
"removed": [
"\t\t\tInteger tab = (Integer) tables.elementAt(i);",
"\t\t\t\ttables.removeElementAt(i);",
"\t\t\t\tcolumns.removeElementAt(i);"
]
},
{
"added": [
"\t\t\tretval.columns.addElement(columns.get(i));",
"\t\t\tretval.tables.addElement(tables.get(i));"
],
"header": "@@ -150,8 +150,8 @@ class ColumnOrdering {",
"removed": [
"\t\t\tretval.columns.addElement(columns.elementAt(i));",
"\t\t\tretval.tables.addElement(tables.elementAt(i));"
]
},
{
"added": [
"\t\t\tInteger tab = (Integer) tables.get(i);"
],
"header": "@@ -163,7 +163,7 @@ class ColumnOrdering {",
"removed": [
"\t\t\tInteger tab = (Integer) tables.elementAt(i);"
]
},
{
"added": [
"\t\t\tInteger tab = (Integer) tables.get(i);"
],
"header": "@@ -178,7 +178,7 @@ class ColumnOrdering {",
"removed": [
"\t\t\tInteger tab = (Integer) tables.elementAt(i);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CompilerContextImpl.java",
"hunks": [
{
"added": [
"\t\tsavedObjects.add(obj);"
],
"header": "@@ -341,7 +341,7 @@ public class CompilerContextImpl extends ContextImpl",
"removed": [
"\t\tsavedObjects.addElement(obj);"
]
},
{
"added": [
"\t\t\tLong conglomId = (Long) storeCostConglomIds.get(i);",
"\t\t\t\treturn (StoreCostController) storeCostControllers.get(i);"
],
"header": "@@ -456,9 +456,9 @@ public class CompilerContextImpl extends ContextImpl",
"removed": [
"\t\t\tLong conglomId = (Long) storeCostConglomIds.elementAt(i);",
"\t\t\t\treturn (StoreCostController) storeCostControllers.elementAt(i);"
]
},
{
"added": [
"\t\tstoreCostControllers.add(storeCostControllers.size(), retval);",
"\t\tstoreCostConglomIds.add(storeCostConglomIds.size(), new Long(conglomerateNumber));"
],
"header": "@@ -468,13 +468,10 @@ public class CompilerContextImpl extends ContextImpl",
"removed": [
"\t\tstoreCostControllers.insertElementAt(retval,",
"\t\t\t\t\t\t\t\t\t\t\tstoreCostControllers.size());",
"\t\tstoreCostConglomIds.insertElementAt(",
"\t\t\t\t\t\t\t\tnew Long(conglomerateNumber),",
"\t\t\t\t\t\t\t\tstoreCostConglomIds.size());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\trefTableNames.add(fktd.getSchemaName() + \".\" + fktd.getName());",
"\t\t\t\t\t\trefActions.add(new Integer(raRules[inner]));"
],
"header": "@@ -953,8 +953,8 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\t\t\t\trefTableNames.addElement(fktd.getSchemaName() + \".\" + fktd.getName());",
"\t\t\t\t\t\trefActions.addElement(new Integer(raRules[inner]));"
]
},
{
"added": [
"\t\t\t\t\t\trefColDescriptors.add(releventColDes);",
"\t\t\t\t\t\trefIndexConglomNum.add(new Long(conglomNumbers[inner]));",
"\t\t\t\t\t\tfkColMap.add(colArray);"
],
"header": "@@ -964,9 +964,9 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\t\t\t\trefColDescriptors.addElement(releventColDes);",
"\t\t\t\t\t\trefIndexConglomNum.addElement(new Long(conglomNumbers[inner]));",
"\t\t\t\t\t\tfkColMap.addElement(colArray);"
]
},
{
"added": [
"\t\t\tfkVector.add(new FKInfo("
],
"header": "@@ -980,7 +980,7 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\tfkVector.addElement(new FKInfo("
]
},
{
"added": [
"\t\t\t\tfkInfo[i] = (FKInfo)fkVector.get(i);"
],
"header": "@@ -1006,7 +1006,7 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\t\tfkInfo[i] = (FKInfo)fkVector.elementAt(i);"
]
},
{
"added": [
"\t\t\t\tfkTableNames[i] = (String)refTableNames.get(i);",
"\t\t\t\tfkRefActions[i] = ((Integer) refActions.get(i)).intValue();",
"\t\t\t\t\t(ColumnDescriptorList)refColDescriptors.get(i);",
"\t\t\t\t\t((Long)refIndexConglomNum.get(i)).longValue();",
"\t\t\t\tfkColArrays[i] = ((int[])fkColMap.get(i));"
],
"header": "@@ -1021,13 +1021,13 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\t\tfkTableNames[i] = (String)refTableNames.elementAt(i);",
"\t\t\t\tfkRefActions[i] = ((Integer) refActions.elementAt(i)).intValue();",
"\t\t\t\t\t(ColumnDescriptorList)refColDescriptors.elementAt(i);",
"\t\t\t\t\t((Long)refIndexConglomNum.elementAt(i)).longValue();",
"\t\t\t\tfkColArrays[i] = ((int[])fkColMap.elementAt(i));"
]
},
{
"added": [
"\t\t\t\t\tconglomVector.add( cd );"
],
"header": "@@ -1759,7 +1759,7 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
"\t\t\t\t\tconglomVector.addElement( cd );"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"\t\t\t\tthis.dependencyMap.clear(((Integer)locations.get(i)).intValue());"
],
"header": "@@ -2159,7 +2159,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\t\tthis.dependencyMap.clear(((Integer)locations.elementAt(i)).intValue());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/GroupByNode.java",
"hunks": [
{
"added": [
"\t\t\taggregate = (AggregateNode) aggregateVector.get(index);"
],
"header": "@@ -659,7 +659,7 @@ public class GroupByNode extends SingleChildResultSetNode",
"removed": [
"\t\t\taggregate = (AggregateNode) aggregateVector.elementAt(index);"
]
},
{
"added": [
"\t\t\t\t\t\t(AggregateNode)aggregateVector.get(i);"
],
"header": "@@ -896,7 +896,7 @@ public class GroupByNode extends SingleChildResultSetNode",
"removed": [
"\t\t\t\t\t\t(AggregateNode)aggregateVector.elementAt(i);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/HashJoinStrategy.java",
"hunks": [
{
"added": [
"\t\t\t\thashKeyVector.add(new Integer(colCtr));"
],
"header": "@@ -651,7 +651,7 @@ public class HashJoinStrategy extends BaseJoinStrategy {",
"removed": [
"\t\t\t\thashKeyVector.addElement(new Integer(colCtr));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/PredicateList.java",
"hunks": [
{
"added": [
"\t\t\t\t\tmovePreds.add(predicate);",
" (Predicate) movePreds.get(mpIndex), 0);"
],
"header": "@@ -1967,13 +1967,13 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\t\t\tmovePreds.addElement(predicate);",
" (Predicate) movePreds.elementAt(mpIndex), 0);"
]
},
{
"added": [
"\t\t\tmaxPreds.clear();"
],
"header": "@@ -3893,7 +3893,7 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\tmaxPreds.removeAllElements();"
]
},
{
"added": [
"\t\t\t\tPredicate p =(Predicate) maxPreds.get(i);"
],
"header": "@@ -3908,7 +3908,7 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\t\tPredicate p =(Predicate) maxPreds.elementAt(i);"
]
},
{
"added": [
"\t\t\t\t((PredicateWrapper)uniquepreds.get(i)).getPredicate();",
"\t\t\tret.add(p);"
],
"header": "@@ -4018,8 +4018,8 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\t\t((PredicateWrapper)uniquepreds.elementAt(i)).getPredicate();",
"\t\t\tret.addElement(p);"
]
},
{
"added": [
"\t\t\tpwList.remove(index);",
"\t\t\treturn (PredicateWrapper)pwList.get(i);"
],
"header": "@@ -4151,13 +4151,13 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\tpwList.removeElementAt(index);",
"\t\t\treturn (PredicateWrapper)pwList.elementAt(i);"
]
},
{
"added": [
"\t\t\tpwList.add(i, pw);"
],
"header": "@@ -4172,7 +4172,7 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\tpwList.insertElementAt(pw, i);"
]
},
{
"added": [
"\t\t\t\tpwList.clear();"
],
"header": "@@ -4203,7 +4203,7 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\t\tpwList.removeAllElements();"
]
},
{
"added": [
"\t\t\t\tpwList.remove(k);"
],
"header": "@@ -4228,7 +4228,7 @@ public class PredicateList extends QueryTreeNodeVector implements OptimizablePre",
"removed": [
"\t\t\t\tpwList.removeElementAt(k);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/RowOrderingImpl.java",
"hunks": [
{
"added": [
"\t\tColumnOrdering co = (ColumnOrdering) ordering.get(orderPosition);"
],
"header": "@@ -101,7 +101,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\tColumnOrdering co = (ColumnOrdering) ordering.elementAt(orderPosition);"
]
},
{
"added": [
"\t\t\tColumnOrdering co = (ColumnOrdering) ordering.get(i);"
],
"header": "@@ -138,7 +138,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\tColumnOrdering co = (ColumnOrdering) ordering.elementAt(i);"
]
},
{
"added": [
"\t\t\t\t\t\t\t(Optimizable) vec.get(i);"
],
"header": "@@ -168,7 +168,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\t\t\t\t(Optimizable) vec.elementAt(i);"
]
},
{
"added": [
"\t\t\tordering.add(currentColumnOrdering);",
"\t\t\t\t(ColumnOrdering) ordering.get(ordering.size() - 1);"
],
"header": "@@ -195,12 +195,12 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\tordering.addElement(currentColumnOrdering);",
"\t\t\t\t(ColumnOrdering) ordering.elementAt(ordering.size() - 1);"
]
},
{
"added": [
"\t\tordering.add(currentColumnOrdering);"
],
"header": "@@ -223,7 +223,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\tordering.addElement(currentColumnOrdering);"
]
},
{
"added": [
"\t\t\t\t\t((ColumnOrdering) ordering.get(0)).hasTable("
],
"header": "@@ -253,7 +253,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\t\t((ColumnOrdering) ordering.elementAt(0)).hasTable("
]
},
{
"added": [
"\t\t\talwaysOrderedOptimizables.add(optimizable);"
],
"header": "@@ -267,7 +267,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\talwaysOrderedOptimizables.addElement(optimizable);"
]
},
{
"added": [
"\t\t\tColumnOrdering ord = (ColumnOrdering) ordering.get(i);",
"\t\t\t\tordering.remove(i);"
],
"header": "@@ -301,10 +301,10 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\tColumnOrdering ord = (ColumnOrdering) ordering.elementAt(i);",
"\t\t\t\tordering.removeElementAt(i);"
]
},
{
"added": [
"\t\t\t\t\t\t\t(Optimizable) vec.get(i);",
"\t\t\t\t\tvec.remove(i);"
],
"header": "@@ -328,13 +328,13 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\t\t\t\t(Optimizable) vec.elementAt(i);",
"\t\t\t\t\tvec.removeElementAt(i);"
]
},
{
"added": [
"\t\tunorderedOptimizables.add(optimizable);"
],
"header": "@@ -343,7 +343,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\tunorderedOptimizables.addElement(optimizable);"
]
},
{
"added": [
"\t\tdest.ordering.clear();",
"\t\tdest.unorderedOptimizables.clear();",
"\t\t\tdest.unorderedOptimizables.add(",
"\t\t\t\t\t\t\t\t\t\t\tunorderedOptimizables.get(i));",
"\t\tdest.alwaysOrderedOptimizables.clear();",
"\t\t\tdest.alwaysOrderedOptimizables.add(",
"\t\t\t\t\t\t\t\t\t\talwaysOrderedOptimizables.get(i));",
"\t\t\tColumnOrdering co = (ColumnOrdering) ordering.get(i);",
"\t\t\tdest.ordering.add(co.cloneMe());"
],
"header": "@@ -359,25 +359,25 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\tdest.ordering.removeAllElements();",
"\t\tdest.unorderedOptimizables.removeAllElements();",
"\t\t\tdest.unorderedOptimizables.addElement(",
"\t\t\t\t\t\t\t\t\t\t\tunorderedOptimizables.elementAt(i));",
"\t\tdest.alwaysOrderedOptimizables.removeAllElements();",
"\t\t\tdest.alwaysOrderedOptimizables.addElement(",
"\t\t\t\t\t\t\t\t\t\talwaysOrderedOptimizables.elementAt(i));",
"\t\t\tColumnOrdering co = (ColumnOrdering) ordering.elementAt(i);",
"\t\t\tdest.ordering.addElement(co.cloneMe());"
]
},
{
"added": [
"\t\tcurrentColumnOrdering = (ColumnOrdering) ordering.get(posn);"
],
"header": "@@ -389,7 +389,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\tcurrentColumnOrdering = (ColumnOrdering) ordering.elementAt(posn);"
]
},
{
"added": [
"\t\t\t\tOptimizable opt = (Optimizable) unorderedOptimizables.get(i);",
"\t\t\t\t\tretval += unorderedOptimizables.get(i).toString();"
],
"header": "@@ -402,14 +402,14 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\tOptimizable opt = (Optimizable) unorderedOptimizables.elementAt(i);",
"\t\t\t\t\tretval += unorderedOptimizables.elementAt(i).toString();"
]
},
{
"added": [
"\t\t\t\tOptimizable opt = (Optimizable) alwaysOrderedOptimizables.get(i);",
"\t\t\t\t\tretval += alwaysOrderedOptimizables.get(i).toString();",
"\t\t\t\tretval += \" ColumnOrdering \" + i + \": \" + ordering.get(i);"
],
"header": "@@ -419,21 +419,21 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\tOptimizable opt = (Optimizable) alwaysOrderedOptimizables.elementAt(i);",
"\t\t\t\t\tretval += alwaysOrderedOptimizables.elementAt(i).toString();",
"\t\t\t\tretval += \" ColumnOrdering \" + i + \": \" + ordering.elementAt(i);"
]
},
{
"added": [
"\t\t\t\t(Optimizable) unorderedOptimizables.get(i);"
],
"header": "@@ -449,7 +449,7 @@ class RowOrderingImpl implements RowOrdering {",
"removed": [
"\t\t\t\t(Optimizable) unorderedOptimizables.elementAt(i);"
]
}
]
}
] |
derby-DERBY-5062-39244bff
|
DERBY-5062 push code assignments down to where they are used
Patch contributed by Dave Brosius <dbrosius@apache.org>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1074227 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/drda/NetServlet.java",
"hunks": [
{
"added": [],
"header": "@@ -85,8 +85,6 @@ public class NetServlet extends HttpServlet {",
"removed": [
"\t\tLocalizedResource langUtil = new LocalizedResource(null,null,SERVLET_PROP_MESSAGES);",
"\t\t\t\t"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/GClass.java",
"hunks": [
{
"added": [],
"header": "@@ -78,8 +78,6 @@ public abstract class GClass implements ClassBuilder {",
"removed": [
"\t\t// find the error stream",
"\t\tHeaderPrintWriter errorStream = Monitor.getStream();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DropSchemaNode.java",
"hunks": [
{
"added": [],
"header": "@@ -60,9 +60,6 @@ public class DropSchemaNode extends DDLStatementNode",
"removed": [
"\t\t",
" LanguageConnectionContext lcc = getLanguageConnectionContext();",
" StatementContext stx = lcc.getStatementContext();"
]
},
{
"added": [
" LanguageConnectionContext lcc = getLanguageConnectionContext();",
" StatementContext stx = lcc.getStatementContext();",
" "
],
"header": "@@ -80,6 +77,9 @@ public class DropSchemaNode extends DDLStatementNode",
"removed": []
}
]
}
] |
derby-DERBY-5063-8e6e96f8
|
DERBY-5063: Embedded driver allows updateBytes() on BOOLEAN column
Make embedded driver throw same exception as the client driver when
updateBytes() is executed on a BOOLEAN column.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1073287 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5067-167094b1
|
DERBY-5067: Performance regression tests should populate tables before creating indexes
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1074449 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/BankAccountFiller.java",
"hunks": [
{
"added": [
" \"(ACCOUNT_ID INT NOT NULL, \" +"
],
"header": "@@ -147,7 +147,7 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" \"(ACCOUNT_ID INT PRIMARY KEY, \" +"
]
},
{
"added": [
" \"(BRANCH_ID INT NOT NULL, \" +"
],
"header": "@@ -156,7 +156,7 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" \"(BRANCH_ID INT PRIMARY KEY, \" +"
]
},
{
"added": [
" \"(TELLER_ID INT NOT NULL, \" +"
],
"header": "@@ -164,7 +164,7 @@ public class BankAccountFiller implements DBFiller {",
"removed": [
" \"(TELLER_ID INT PRIMARY KEY, \" +"
]
},
{
"added": [
" Statement s = c.createStatement();",
""
],
"header": "@@ -192,6 +192,8 @@ public class BankAccountFiller implements DBFiller {",
"removed": []
},
{
"added": [
"",
" s.executeUpdate(\"ALTER TABLE \" + ACCOUNT_TABLE + \" ADD CONSTRAINT \" +",
" ACCOUNT_TABLE + \"_PK PRIMARY KEY (ACCOUNT_ID)\");",
""
],
"header": "@@ -203,6 +205,10 @@ public class BankAccountFiller implements DBFiller {",
"removed": []
},
{
"added": [
"",
" s.executeUpdate(\"ALTER TABLE \" + BRANCH_TABLE + \" ADD CONSTRAINT \" +",
" BRANCH_TABLE + \"_PK PRIMARY KEY (BRANCH_ID)\");",
""
],
"header": "@@ -215,6 +221,10 @@ public class BankAccountFiller implements DBFiller {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/perf/clients/SingleRecordFiller.java",
"hunks": [
{
"added": [
" \"CREATE TABLE \" + tableName + \"(ID INT NOT NULL, \" +"
],
"header": "@@ -116,7 +116,7 @@ public class SingleRecordFiller implements DBFiller {",
"removed": [
" \"CREATE TABLE \" + tableName + \"(ID INT PRIMARY KEY, \" +"
]
},
{
"added": [
" s.executeUpdate(\"ALTER TABLE \" + tableName + \" ADD CONSTRAINT \" +",
" tableName + \"_PK PRIMARY KEY (ID)\");",
""
],
"header": "@@ -172,6 +172,9 @@ public class SingleRecordFiller implements DBFiller {",
"removed": []
}
]
}
] |
derby-DERBY-5068-018948af
|
DERBY-5068: Investigate increased CPU usage on client after introduction of UTF-8 CcsidManager
Make the CcsidManager implementations encode strings directly into the
ByteBuffer instead of going via an intermediate byte array.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1125299 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/CcsidManager.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
"import java.nio.CharBuffer;",
"import org.apache.derby.client.am.Agent;",
"import org.apache.derby.client.am.SqlException;",
""
],
"header": "@@ -21,6 +21,11 @@",
"removed": []
},
{
"added": [],
"header": "@@ -57,26 +62,6 @@ public abstract class CcsidManager {",
"removed": [
"",
" // Convert a Java String into bytes for a particular ccsid.",
" // The String is converted into a buffer provided by the caller.",
" //",
" // @param sourceString A Java String to convert.",
" // @param buffer The buffer to convert the String into.",
" // @param offset Offset in buffer to start putting output.",
" // @return An int containing the buffer offset after conversion.",
" public abstract int convertFromJavaString(String sourceString,",
" byte[] buffer,",
" int offset,",
" org.apache.derby.client.am.Agent agent) throws org.apache.derby.client.am.SqlException;",
"",
" // Convert a byte array representing characters in a particular ccsid into a Java String.",
" //",
" // @param sourceBytes An array of bytes to be converted.",
" // @return String A new Java String Object created after conversion.",
" abstract String convertToJavaString(byte[] sourceBytes);",
"",
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/EbcdicCcsidManager.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
"import java.nio.CharBuffer;",
"import org.apache.derby.client.am.Agent;"
],
"header": "@@ -21,6 +21,9 @@",
"removed": []
},
{
"added": [
" CharBuffer src = CharBuffer.wrap(sourceString);",
" ByteBuffer dest = ByteBuffer.allocate(sourceString.length());",
" startEncoding();",
" encode(src, dest, agent);",
" return dest.array();",
" public void startEncoding() {",
" // We don't have a CharsetEncoder instance to reset, or any other",
" // internal state associated with earlier encode() calls. Do nothing.",
" }",
"",
" public boolean encode(CharBuffer src, ByteBuffer dest, Agent agent)",
" throws SqlException {",
" // Encode as many characters as the destination buffer can hold.",
" int charsToEncode = Math.min(src.remaining(), dest.remaining());",
" for (int i = 0; i < charsToEncode; i++) {",
" char c = src.get();",
" if (c > 0xff) {",
" throw new SqlException(agent.logWriter_,",
" new ClientMessageId(",
" SQLState.CANT_CONVERT_UNICODE_TO_EBCDIC));",
" dest.put((byte) conversionArrayToEbcdic[c]);",
" if (src.remaining() == 0) {",
" // All characters have been encoded. We're done.",
" return true;",
" } else {",
" // We still have more characters to encode, but no room in",
" // destination buffer.",
" return false;"
],
"header": "@@ -127,41 +130,41 @@ public class EbcdicCcsidManager extends CcsidManager {",
"removed": [
" byte[] bytes = new byte[sourceString.length()];",
" convertFromJavaString(sourceString, bytes, 0, agent);",
" return bytes;",
" public int convertFromJavaString(String sourceString,",
" byte[] buffer,",
" int offset,",
" org.apache.derby.client.am.Agent agent) throws SqlException {",
" for (int i = 0; i < sourceString.length(); i++) {",
" char c = sourceString.charAt(i);",
" if (c > 0xff)",
" // buffer[offset++] = (byte) 63;",
" {",
" throw new SqlException(agent.logWriter_, ",
" new ClientMessageId(SQLState.CANT_CONVERT_UNICODE_TO_EBCDIC));",
" buffer[offset++] = (byte) (conversionArrayToEbcdic[c]);",
" ;",
" return offset;",
" }",
" String convertToJavaString(byte[] sourceBytes) {",
" int i = 0;",
" char[] theChars = new char[sourceBytes.length];",
" int num = 0;",
"",
" for (i = 0; i < sourceBytes.length; i++) {",
" num = (sourceBytes[i] < 0) ? (sourceBytes[i] + 256) : sourceBytes[i];",
" theChars[i] = (char) conversionArrayToUCS2[num];",
"",
" return new String(theChars);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
"import java.nio.CharBuffer;"
],
"header": "@@ -20,6 +20,8 @@",
"removed": []
},
{
"added": [
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -38,6 +40,7 @@ import org.apache.derby.jdbc.ClientDriver;",
"removed": []
},
{
"added": [
" private ByteBuffer prddta_;"
],
"header": "@@ -127,8 +130,7 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
"",
" byte[] prddta_;"
]
},
{
"added": [
" prddta_.array(),"
],
"header": "@@ -837,7 +839,7 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" prddta_,"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetPackageRequest.java",
"hunks": [
{
"added": [
" CcsidManager ccsidMgr = netAgent_.getCurrentCcsidManager();",
"",
" byte[] dbnameBytes = ccsidMgr.convertFromJavaString(",
" netAgent_.netConnection_.databaseName_, netAgent_);",
"",
" byte[] collectionToFlowBytes = ccsidMgr.convertFromJavaString(",
" collectionToFlow, netAgent_);",
"",
" byte[] pkgNameBytes = ccsidMgr.convertFromJavaString(",
" section.getPackageName(), netAgent_);",
" dbnameBytes.length,",
" collectionToFlowBytes.length,",
" pkgNameBytes.length,",
" byte padByte = ccsidMgr.space_;",
" writeScalarPaddedBytes(dbnameBytes,",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN, padByte);",
" writeScalarPaddedBytes(collectionToFlowBytes,",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN, padByte);",
" writeScalarPaddedBytes(pkgNameBytes,",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN, padByte);",
" buildSCLDTA(dbnameBytes, NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" buildSCLDTA(collectionToFlowBytes, NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" buildSCLDTA(pkgNameBytes, NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" private void buildSCLDTA(byte[] identifier, int minimumLength)",
" throws SqlException {",
" int length = Math.max(minimumLength, identifier.length);",
" write2Bytes(length);",
" byte padByte = netAgent_.getCurrentCcsidManager().space_;",
" writeScalarPaddedBytes(identifier, length, padByte);"
],
"header": "@@ -57,49 +57,59 @@ public class NetPackageRequest extends NetConnectionRequest {",
"removed": [
" writeScalarPaddedString(netAgent_.netConnection_.databaseName_,",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" writeScalarPaddedString(collectionToFlow,",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" writeScalarPaddedString(section.getPackageName(),",
" NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" buildSCLDTA(netAgent_.netConnection_.databaseName_, NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" buildSCLDTA(collectionToFlow, NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" buildSCLDTA(section.getPackageName(), NetConfiguration.PKG_IDENTIFIER_FIXED_LEN);",
" private void buildSCLDTA(String identifier, int minimumLength) throws SqlException {",
" int length = netAgent_.getCurrentCcsidManager().getByteLength(identifier);",
" ",
" if (length <= minimumLength) {",
" write2Bytes(minimumLength);",
" writeScalarPaddedString(identifier, minimumLength);",
" } else {",
" write2Bytes(length);",
" writeScalarPaddedString(identifier, length);",
" }"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/Request.java",
"hunks": [
{
"added": [
"import java.nio.CharBuffer;"
],
"header": "@@ -35,6 +35,7 @@ import java.io.IOException;",
"removed": []
},
{
"added": [
" // We don't know the length of the string yet, so set it to 0 for now.",
" // Will be updated later.",
" int lengthPos = buffer.position();",
" writeLengthCodePoint(0, codePoint);",
"",
" int stringByteLength = encodeString(string);",
" stringByteLength = byteMinLength;",
" // Update the length field. The length includes two bytes for the",
" // length field itself and two bytes for the codepoint.",
" buffer.putShort(lengthPos, (short) (stringByteLength + 4));",
" }",
" /**",
" * Encode a string and put it into the buffer. A larger buffer will be",
" * allocated if the current buffer is too small to hold the entire string.",
" *",
" * @param string the string to encode",
" * @return the number of bytes in the encoded representation of the string",
" */",
" private int encodeString(String string) throws SqlException {",
" int startPos = buffer.position();",
" CharBuffer src = CharBuffer.wrap(string);",
" CcsidManager ccsidMgr = netAgent_.getCurrentCcsidManager();",
" ccsidMgr.startEncoding();",
" while (!ccsidMgr.encode(src, buffer, netAgent_)) {",
" // The buffer was too small to hold the entire string. Let's",
" // allocate a larger one. We don't know how much more space we",
" // need, so we just tell ensureLength() that we need more than",
" // what we have, until we manage to encode the entire string.",
" // ensureLength() typically doubles the size of the buffer, so",
" // we shouldn't have to call it many times before we get a large",
" // enough buffer.",
" ensureLength(buffer.remaining() + 1);",
" }",
" return buffer.position() - startPos;"
],
"header": "@@ -1106,48 +1107,51 @@ public class Request {",
"removed": [
" int stringByteLength = currentCcsidMgr.getByteLength(string);",
" writeScalarHeader(codePoint, Math.max(byteMinLength, stringByteLength));",
"",
" buffer.position(",
" currentCcsidMgr.convertFromJavaString(",
" string, buffer.array(), buffer.position(), netAgent_));",
"",
" }",
"",
" ",
" // this method inserts ddm character data into the buffer and pad's the",
" // data with the ccsid manager's space character if the character data length",
" // is less than paddedLength.",
" // Not: this method is not to be used for String truncation and the string length",
" // must be <= paddedLength.",
" // This method assumes that the String argument can be",
" // converted by the ccsid manager. This should be fine because usually",
" // there are restrictions on the characters which can be used for ddm",
" // character data. This method also assumes that the string.length() will",
" // be the number of bytes following the conversion.",
" final void writeScalarPaddedString(String string, int paddedLength) throws SqlException {",
" ensureLength(paddedLength);",
" ",
" /* Grab the current CCSID MGR from the NetAgent */ ",
" CcsidManager currentCcsidMgr = netAgent_.getCurrentCcsidManager();",
" ",
" int stringLength = currentCcsidMgr.getByteLength(string);",
" ",
" buffer.position(currentCcsidMgr.convertFromJavaString(",
" string, buffer.array(), buffer.position(), netAgent_));",
" padBytes(currentCcsidMgr.space_, paddedLength - stringLength);"
]
},
{
"added": [
" int savedPos = buffer.position();"
],
"header": "@@ -1228,6 +1232,7 @@ public class Request {",
"removed": []
},
{
"added": [
" buffer.position(passwordStart_);",
" encodeString(mask.toString());",
" } finally {",
" buffer.position(savedPos);"
],
"header": "@@ -1236,14 +1241,16 @@ public class Request {",
"removed": [
" netAgent_.getCurrentCcsidManager().convertFromJavaString(",
" mask.toString(), buffer.array(), passwordStart_, netAgent_);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/Utf8CcsidManager.java",
"hunks": [
{
"added": [
"import java.nio.ByteBuffer;",
"import java.nio.CharBuffer;",
"import java.nio.charset.CharacterCodingException;",
"import java.nio.charset.Charset;",
"import java.nio.charset.CharsetEncoder;",
"import java.nio.charset.CoderResult;"
],
"header": "@@ -22,6 +22,12 @@",
"removed": []
},
{
"added": [
" private final static String UTF8 = \"UTF-8\";",
" private final static Charset UTF8_CHARSET = Charset.forName(UTF8);",
" private final CharsetEncoder encoder = UTF8_CHARSET.newEncoder();",
""
],
"header": "@@ -31,6 +37,10 @@ import org.apache.derby.shared.common.reference.SQLState;",
"removed": []
},
{
"added": [
" throws SqlException {",
" try {",
" ByteBuffer buf = encoder.encode(CharBuffer.wrap(sourceString));",
"",
" if (buf.limit() == buf.capacity()) {",
" // The length of the encoded representation of the string",
" // matches the length of the returned buffer, so just return",
" // the backing array.",
" return buf.array();",
" }",
"",
" // Otherwise, copy the interesting bytes into an array with the",
" // correct length.",
" byte[] bytes = new byte[buf.limit()];",
" buf.get(bytes);",
" return bytes;",
" } catch (CharacterCodingException cce) {",
" throw new SqlException(agent.logWriter_,",
" new ClientMessageId(SQLState.CANT_CONVERT_UNICODE_TO_UTF8),",
" cce);",
" }"
],
"header": "@@ -56,14 +66,27 @@ public class Utf8CcsidManager extends CcsidManager {",
"removed": [
" throws SqlException {",
" byte[] bytes = new byte[getByteLength(sourceString)];",
" convertFromJavaString(sourceString, bytes, 0, agent);",
" return bytes;",
" }",
" ",
" public String convertToJavaString(byte[] sourceBytes) {",
" return convertToJavaString(sourceBytes, 0, sourceBytes.length);"
]
},
{
"added": [
" // Here we'd rather specify the encoding using a Charset object to",
" // avoid the need to handle UnsupportedEncodingException, but that",
" // constructor wasn't introduced until Java 6.",
" return new String(sourceBytes, offset, numToConvert, UTF8);"
],
"header": "@@ -71,7 +94,10 @@ public class Utf8CcsidManager extends CcsidManager {",
"removed": [
" return new String(sourceBytes, offset, numToConvert, \"UTF-8\");"
]
}
]
}
] |
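The encodeString() hunk in the record above loops until the CharsetEncoder has consumed the whole string, enlarging the buffer each time an overflow is reported. A standalone sketch of that grow-on-overflow encoding pattern (the class name and the tiny initial buffer size are illustrative, not Derby's):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;

class GrowingEncoder {
    // Encode a string to UTF-8, starting with a deliberately small buffer
    // and doubling it whenever the encoder reports an overflow -- the same
    // retry-with-more-room idea as the encodeString()/ensureLength() loop.
    static byte[] encode(String s) {
        CharsetEncoder enc = Charset.forName("UTF-8").newEncoder();
        CharBuffer src = CharBuffer.wrap(s);
        ByteBuffer dst = ByteBuffer.allocate(4);
        try {
            while (true) {              // main encoding phase
                CoderResult cr = enc.encode(src, dst, true);
                if (cr.isUnderflow()) {
                    break;              // all input consumed
                } else if (cr.isOverflow()) {
                    dst = grow(dst);    // buffer too small: enlarge and retry
                } else {
                    cr.throwException();
                }
            }
            while (enc.flush(dst).isOverflow()) {
                dst = grow(dst);        // make room for any final bytes
            }
        } catch (CharacterCodingException cce) {
            throw new IllegalArgumentException("unencodable string", cce);
        }
        dst.flip();
        byte[] out = new byte[dst.remaining()];
        dst.get(out);
        return out;
    }

    private static ByteBuffer grow(ByteBuffer old) {
        ByteBuffer bigger = ByteBuffer.allocate(old.capacity() * 2);
        old.flip();                     // switch old buffer to read mode
        bigger.put(old);                // copy bytes written so far
        return bigger;
    }
}
```

Unlike the fixed-size conversion arrays in the removed EBCDIC path, the encoder never needs to know the output length up front; it just keeps asking for more room until the input is drained.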
derby-DERBY-5071-a9c38636
|
DERBY-5071 use string buffers when building strings in loops
Patch contributed by Dave Brosius <dbrosius@apache.org>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1075568 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" StringBuffer updateString = new StringBuffer(64);",
" updateString.append(\"UPDATE \").append(getTableName()).append(\" SET \");",
" updateString.append(\",\");",
" updateString.append(quoteSqlIdentifier(",
" resultSetMetaData_.getColumnName(column))).append(\" = ? \");"
],
"header": "@@ -4558,17 +4558,17 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" String updateString = \"UPDATE \" + getTableName() + \" SET \";",
" updateString += \",\";",
" updateString += quoteSqlIdentifier(",
" resultSetMetaData_.getColumnName(column)) + ",
" \" = ? \";"
]
},
{
"added": [
" updateString.append(\" WHERE CURRENT OF \").append(getServerCursorName());",
" updateString.append(\" FOR ROW ? OF ROWSET\");",
" return updateString.toString();"
],
"header": "@@ -4580,13 +4580,13 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" updateString = updateString + \" WHERE CURRENT OF \" + getServerCursorName();",
" updateString += \" FOR ROW ? OF ROWSET\";",
" return updateString;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetXAResource.java",
"hunks": [
{
"added": [
" StringBuffer xaExceptionText = new StringBuffer(64);"
],
"header": "@@ -746,7 +746,7 @@ public class NetXAResource implements XAResource {",
"removed": [
" String xaExceptionText;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/diag/ErrorLogReader.java",
"hunks": [
{
"added": [
"\t\t\t\tStringBuffer output = new StringBuffer(64);",
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_DRDAID_STRING, drdaidIndex) + 3));"
],
"header": "@@ -253,10 +253,10 @@ public class ErrorLogReader extends VTITemplate",
"removed": [
"\t\t\t\tString output;",
"\t\t\t\t\toutput = line.substring(line.indexOf(END_DRDAID_STRING, drdaidIndex) + 3);"
]
},
{
"added": [
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_DRDAID_STRING, drdaidIndex) + 3));",
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_XID_STRING, drdaidIndex) + 3,",
"\t\t\t\t\t\t\t\t\t\t\tendIndex));"
],
"header": "@@ -265,12 +265,12 @@ public class ErrorLogReader extends VTITemplate",
"removed": [
"\t\t\t\t\toutput = line.substring(line.indexOf(END_DRDAID_STRING, drdaidIndex) + 3);",
"\t\t\t\t\toutput = line.substring(line.indexOf(END_XID_STRING, drdaidIndex) + 3,",
"\t\t\t\t\t\t\t\t\t\t\tendIndex);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/diag/StatementDuration.java",
"hunks": [
{
"added": [
"\t\t\t\tStringBuffer output = new StringBuffer(64);",
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3));"
],
"header": "@@ -265,10 +265,10 @@ public class StatementDuration extends VTITemplate",
"removed": [
"\t\t\t\tString output;",
"\t\t\t\t\toutput = line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3);"
]
},
{
"added": [
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3));",
"\t\t\t\t\toutput.append(line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3,",
"\t\t\t\t\t\t\t\t\t\t\tendIndex));"
],
"header": "@@ -277,12 +277,12 @@ public class StatementDuration extends VTITemplate",
"removed": [
"\t\t\t\t\toutput = line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3);",
"\t\t\t\t\toutput = line.substring(line.indexOf(END_XID_STRING, lccidIndex) + 3,",
"\t\t\t\t\t\t\t\t\t\t\tendIndex);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/TableDescriptor.java",
"hunks": [
{
"added": [
"\t\t\tStringBuffer name = new StringBuffer();",
" name.append(tableName);"
],
"header": "@@ -822,7 +822,8 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\t\tString name = new String(tableName);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/Level2OptimizerImpl.java",
"hunks": [
{
"added": [
"\t\tStringBuffer joinOrderString = new StringBuffer();",
" joinOrderString.append(prefix);",
"\t\t\tjoinOrderString.append(\" \").append(joinOrder[i]);",
"\t\t\tjoinOrderString.append(\" \").append(joinOrderNumber);",
" joinOrderString.append(\" with assignedTableMap = \").append(assignedTableMap).append(\"\\n\\n\");",
" return joinOrderString.toString();"
],
"header": "@@ -530,18 +530,20 @@ public class Level2OptimizerImpl extends OptimizerImpl",
"removed": [
"\t\tString joinOrderString = prefix;",
"\t\t\tjoinOrderString = joinOrderString + \" \" + joinOrder[i];",
"\t\t\tjoinOrderString = joinOrderString + \" \" + joinOrderNumber;",
"\t\treturn joinOrderString + \" with assignedTableMap = \" + assignedTableMap + \"\\n\\n\";"
]
}
]
}
] |
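The pattern applied throughout this patch — replacing repeated String concatenation inside a loop with a single StringBuffer — can be sketched as below (names are illustrative; the Level2OptimizerImpl hunk is the closest real counterpart):

```java
class JoinOrderFormatter {
    // Before the patch, each '+=' in the loop allocated a fresh String and
    // copied all previous characters. A StringBuffer appends into one
    // growable buffer, so the loop does O(n) work instead of O(n^2).
    static String format(String prefix, int[] joinOrder) {
        StringBuffer sb = new StringBuffer(64);
        sb.append(prefix);
        for (int i = 0; i < joinOrder.length; i++) {
            sb.append(' ').append(joinOrder[i]);
        }
        return sb.toString();
    }
}
```

StringBuffer (rather than StringBuilder) matches the codebase's choice at the time; the two are interchangeable here apart from synchronization.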
derby-DERBY-5073-e0699eac
|
DERBY-3980: Conflicting select then update with REPEATABLE_READ gives lock timeout instead of deadlock
DERBY-5073: Derby deadlocks without recourse on simultaneous correlated subqueries
Added more comments describing the deadlock detection algorithm.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1084561 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/Deadlock.java",
"hunks": [
{
"added": [
" * <p>",
" * Code to support deadlock detection.",
" * </p>",
" *",
" * <p>",
" * This class implements deadlock detection by searching for cycles in the",
" * wait graph. If a cycle is found, it means that (at least) two transactions",
" * are blocked by each other, and one of them must be aborted to allow the",
" * other one to continue.",
" * </p>",
" *",
" * <p>",
" * The wait graph is obtained by asking the {@code LockSet} instance to",
" * provide a map representing all wait relations, see {@link #getWaiters}.",
" * The map consists of two distinct sets of (key, value) pairs:",
" * </p>",
" *",
" * <ol>",
" * <li>(space, lock) pairs, where {@code space} is the compatibility space",
" * of a waiting transaction and {@code lock} is the {@code ActiveLock}",
" * instance on which the transaction is waiting</li>",
" * <li>(lock, prevLock) pairs, where {@code lock} is an {@code ActiveLock} and",
" * {@code prevLock} is the {@code ActiveLock} or {@code LockControl} for the",
" * first waiter in the queue behind {@code lock}</li>",
" * </ol>",
" *",
" * <p>",
" * The search is performed as a depth-first search starting from the lock",
" * request of a waiter that has been awoken for deadlock detection (either",
" * because {@code derby.locks.deadlockTimeout} has expired or because some",
" * other waiter had picked it as a victim in order to break a deadlock).",
" * From this lock request, the wait graph is traversed by checking which",
" * transactions have already been granted a lock on the object, and who they",
" * are waiting for.",
" * </p>",
" *",
" * <p>",
" * The state of the search is maintained by pushing compatibility spaces",
" * (representing waiting transactions) and granted locks onto a stack. When a",
" * dead end is found (that is, a transaction that holds locks without waiting",
" * for any other transaction), the stack is popped and the search continues",
" * down a different path. This continues until a cycle is found or the stack is",
" * empty. Detection of cycles happens when pushing a new compatibility space",
" * onto the stack. If the same space already exists on the stack, it means the",
" * graph has a cycle and we have a deadlock.",
" * </p>",
" *",
" * <p>",
" * When a deadlock is found, one of the waiters in the deadlock cycle is awoken",
" * and it will terminate itself, unless it finds that the deadlock has been",
" * broken in the meantime, for example because one of the involved waiters",
" * has timed out.",
" * </p>",
" */",
" * <p>",
" * </p>",
"\t *",
" * <p>",
" * </p>",
" *",
"\t * to satisfy the synchronization requirements of",
" * </p>"
],
"header": "@@ -38,33 +38,88 @@ import java.util.Stack;",
"removed": [
"\tCode to support deadlock detection.",
"*/",
"\t * <BR>",
"\t * <p>",
"\t * Would be nice to get a better high level description of deadlock",
"\t * search.",
"\t * to satisfy the syncronization requirements of"
]
},
{
"added": [
" // All paths from the initial waiting lock request have been",
" // examined without finding a deadlock. We're done.",
" // All granted locks in this lock control have been examined.",
"",
" // Pick one of the granted lock for examination. rollback()",
" // expects us to have examined the last one in the list, so",
" // always pick that one."
],
"header": "@@ -107,16 +162,22 @@ class Deadlock {",
"removed": [
"\t\t\t\t// all done"
]
},
{
"added": [
" // Oops... The space has been examined once before, so",
" // we have what appears to be a cycle in the wait graph.",
" // In most cases this means we have a deadlock.",
" //",
" // However, in some cases, the cycle in the graph may be",
" // an illusion. For example, we could have a situation",
" // here like this:",
" //",
" // In this case it's not necessarily a deadlock. If the",
" // Lockable returns true from its lockerAlwaysCompatible()",
" // method, which means that lock requests within the same",
" // compatibility space never conflict with each other,",
" // T1 is only waiting for T2 to release its shared lock.",
" // T2 isn't waiting for anyone, so there is no deadlock.",
" //",
" // This is only true if T1 is the first one waiting for",
" // a lock on the object. If there are other waiters in",
" // between, we have a deadlock regardless of what",
" // lockerAlwaysCompatible() returns. Take for example this",
" // similar scenario, where T3 is also waiting:",
" //",
" // Granted T1{S}, T2{S}",
" // Waiting T3{X}",
" // Waiting T1{X} - deadlock checking on this",
" //",
" // Here, T1 is stuck behind T3, and T3 is waiting for T1,",
" // so we have a deadlock.",
" // The two identical compatibility spaces were right",
" // next to each other on the stack. This means we have",
" // the first scenario described above, with the first",
" // waiter already having a lock on the object. It is a"
],
"header": "@@ -135,22 +196,45 @@ outer:\tfor (;;) {",
"removed": [
"",
"\t\t\t\t\t// We could be seeing a situation here like",
"\t\t\t\t\t// In this case it's not a deadlock, although it",
"\t\t\t\t\t// depends on the locking policy of the Lockable. E.g.",
"\t\t\t\t\t// Granted T1(latch)",
"\t\t\t\t\t// Waiting T1(latch)",
"\t\t\t\t\t// is a deadlock.",
"\t\t\t\t\t//"
]
},
{
"added": [
" // So it wasn't an illusion after all. Pick a victim.",
"",
" // Otherwise... The space hasn't been examined yet, so put it",
" // on the stack and start examining it.",
" // Who is this space waiting for?",
" // The space isn't waiting for anyone, so we're at the"
],
"header": "@@ -163,14 +247,20 @@ inner:\t\tfor (;;) {",
"removed": []
},
{
"added": [
" // Push all the granted locks on this object onto the",
" // stack, and go ahead examining them one by one.",
" // Set up the next space for examination.",
" // Now, there is a possibility that we're not actually",
" // waiting behind the other other waiter. Take for",
" // example this scenario:",
" //",
" // Granted T1{X}",
" // Waiting T2{S}",
" // Waiting T3{S} - deadlock checking on this",
" //",
" // Here, T3 isn't blocked by T2. As soon as T1 releases",
" // its X lock on the object, both T2 and T3 will be",
" // granted an S lock. And if T1 also turns out to be",
" // blocked by T3 and we have a deadlock, aborting T2",
" // won't resolve the deadlock, so it's not actually",
" // part of the deadlock. If we have this scenario, we",
" // just skip past T2's space and consider T3 to be",
" // waiting on T1 directly.",
"",
" // We're behind another waiter with a compatible",
" // lock request. Skip it since we're not really",
" // blocked by it.",
" // We are really blocked by the other waiter. Go",
" // ahead and investigate its compatibility space."
],
"header": "@@ -196,25 +286,44 @@ inner:\t\tfor (;;) {",
"removed": [
"",
" // We're behind another waiter in the queue, but we",
" // request compatible locks, so we'll get the lock",
" // too once it gets it. Since we're not actually",
" // blocked by the waiter, skip it and see what's",
" // blocking it instead."
]
},
{
"added": [
" /**",
" * Backtrack in the depth-first search through the wait graph. Expect",
" * the top of the stack to hold the compatibility space we've just",
" * investigated. Pop the stack until the most recently examined granted",
" * lock has been removed.",
" *",
" * @param chain the stack representing the state of the search",
" */"
],
"header": "@@ -225,6 +334,14 @@ inner:\t\tfor (;;) {",
"removed": []
},
{
"added": [
" /**",
" * Get all the waiters in a {@code LockTable}. The waiters are returned",
" * as pairs (space, lock) mapping waiting compatibility spaces to the",
" * lock request in which they are blocked, and (lock, prevLock) linking",
" * a lock request with the lock request that's behind it in the queue of",
" * waiters.",
" *",
" * @param set the lock table",
" * @return all waiters in the lock table",
" * @see LockControl#addWaiters(java.util.Map)",
" */",
" /**",
" * Handle a deadlock when it has been detected. Find out if the waiter",
" * that started looking for the deadlock is involved in it. If it isn't,",
" * pick a victim among the waiters that are involved.",
" *",
" * @return {@code null} if the waiter that started looking for the deadlock",
" * isn't involved in the deadlock (in which case another victim will have",
" * been picked and awoken), or an array describing the deadlock otherwise",
" */"
],
"header": "@@ -237,12 +354,32 @@ inner:\t\tfor (;;) {",
"removed": []
},
{
"added": [
" /**",
" * Build an exception that describes a deadlock.",
" *",
" * @param factory the lock factory requesting the exception",
" * @param data an array with information about who's involved in",
" * a deadlock (as returned by {@link #handle})",
" * @return a deadlock exception",
" */"
],
"header": "@@ -291,6 +428,14 @@ inner:\t\tfor (;;) {",
"removed": []
}
]
}
] |
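The comments added in this patch describe a depth-first search for cycles in the wait graph. Stripped of the lock-compatibility subtleties the comments discuss, the core cycle check over a wait-for relation can be sketched as follows (a simplified illustration with one edge per waiter, not Derby's implementation, which walks spaces, ActiveLocks, and waiter queues):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class WaitGraph {
    // Maps a transaction id to the transaction it is blocked by.
    private final Map<String, String> waitsFor = new HashMap<String, String>();

    void addEdge(String waiter, String holder) {
        waitsFor.put(waiter, holder);
    }

    // Walk the wait chain from 'start'. If we revisit a transaction that is
    // already on the current path, the wait graph has a cycle -> deadlock.
    boolean hasDeadlock(String start) {
        Set<String> onPath = new HashSet<String>();
        String current = start;
        while (current != null) {
            if (!onPath.add(current)) {
                return true;    // seen twice: a cycle
            }
            current = waitsFor.get(current);
        }
        return false;           // reached a transaction that isn't waiting
    }
}
```

The real algorithm must additionally skip waiters whose lock requests are compatible with ours (the "illusion" cases in the comments), which is exactly where the DERBY-3980/DERBY-5073 bugs lived.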
derby-DERBY-5076-64bc46b3
|
DERBY-5076: Remove some unnecessary casts in ExceptionFormatter
Based on patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1075842 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ExceptionFormatter.java",
"hunks": [
{
"added": [
" Sqlca sqlca = e.getSqlca();",
" sqlca.returnTokensOnlyInMessageText(returnTokensOnly);",
" if (e.getSqlca() == null) { // Too much has changed, so escape out here.",
" sqlca = e.getSqlca();"
],
"header": "@@ -40,19 +40,19 @@ public class ExceptionFormatter {",
"removed": [
" Sqlca sqlca = ((Diagnosable) e).getSqlca();",
" ((Sqlca) sqlca).returnTokensOnlyInMessageText(returnTokensOnly);",
" if (((Diagnosable) e).getSqlca() == null) { // Too much has changed, so escape out here.",
" sqlca = ((Diagnosable) e).getSqlca();"
]
},
{
"added": [
" sqlca = e.getSqlca();",
" if (sqlca != null) {",
" // JDK stack trace calls e.getMessage(), now that it is finished,",
" // we can reset the state on the sqlca that says return tokens only.",
" sqlca.returnTokensOnlyInMessageText(false);"
],
"header": "@@ -81,13 +81,11 @@ public class ExceptionFormatter {",
"removed": [
" if (e instanceof Diagnosable) {",
" sqlca = (Sqlca) ((Diagnosable) e).getSqlca();",
" if (sqlca != null) {",
" // JDK stack trace calls e.getMessage(), now that it is finished,",
" // we can reset the state on the sqlca that says return tokens only.",
" sqlca.returnTokensOnlyInMessageText(false);",
" }"
]
}
]
}
] |
derby-DERBY-5077-7f7477d1
|
DERBY-5077: [patch] remove non-productive instanceof checks
Removed useless instanceof checks.
Made checkColumnOrdering in MergeSort private and rewrote the comment, and
merged two SanityManager.DEBUG blocks.
Patch provided by Dave Brosius (dbrosius at apache dot org), extended by kristwaa at apache dot org.
Patch file: derby-5077-1a-remove_useless_instanceofs.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1145926 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SubqueryNode.java",
"hunks": [
{
"added": [
" parentComparisonOperator != null;"
],
"header": "@@ -689,7 +689,7 @@ public class SubqueryNode extends ValueNode",
"removed": [
"\t\t\t\t\t parentComparisonOperator instanceof BinaryComparisonOperatorNode;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/sort/MergeSort.java",
"hunks": [
{
"added": [
" /**",
" * Check the column ordering against the template, making sure that each",
" * column is present in the template, is not mentioned more than once, and",
" * that the columns isn't {@code null}.",
" * <p>",
" * Intended to be called as part of a sanity check. All columns are",
" * orderable, since {@code DataValueDescriptor} extends {@code Orderable}.",
" *",
" * @return {@code true} if the ordering is valid, {@code false} if not.",
" */",
" private boolean checkColumnOrdering("
],
"header": "@@ -384,13 +384,17 @@ class MergeSort implements Sort",
"removed": [
"\t/**",
"\tCheck the column ordering against the template, making",
"\tsure that each column is present in the template,",
"\timplements Orderable, and is not mentioned more than",
"\tonce. Intended to be called as part of a sanity check.",
"\t**/",
"\tprotected boolean checkColumnOrdering("
]
},
{
"added": [
" if (columnVal == null)"
],
"header": "@@ -415,7 +419,7 @@ class MergeSort implements Sort",
"removed": [
"\t\t\tif (!(columnVal instanceof Orderable))"
]
},
{
"added": [
" // Make sure the column ordering makes sense"
],
"header": "@@ -518,11 +522,7 @@ class MergeSort implements Sort",
"removed": [
" \t}",
"",
"\t\t// Make sure the column ordering makes sense",
" if (SanityManager.DEBUG)",
" {"
]
}
]
}
] |
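The rewritten checkColumnOrdering() verifies that every ordered column is present in the template, is non-null, and is mentioned at most once. A standalone sketch of that sanity check, operating on plain arrays with illustrative names:

```java
class OrderingCheck {
    // template: the row's column values; ordering: column positions to sort by.
    // Returns false if a position is out of range, refers to a null column,
    // or appears more than once -- mirroring the sanity check's three rules.
    static boolean checkColumnOrdering(Object[] template, int[] ordering) {
        boolean[] seen = new boolean[template.length];
        for (int i = 0; i < ordering.length; i++) {
            int col = ordering[i];
            if (col < 0 || col >= template.length) {
                return false;   // column not present in the template
            }
            if (template[col] == null) {
                return false;   // column value must be non-null
            }
            if (seen[col]) {
                return false;   // column mentioned more than once
            }
            seen[col] = true;
        }
        return true;
    }
}
```

The `instanceof Orderable` test the patch removed was vacuous for the same reason noted in the new javadoc: every DataValueDescriptor already implements Orderable, so only the null check carries information.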
derby-DERBY-5079-1bee891c
|
DERBY-5079 (DERBY-4984 caused a regression which will not allow users to drop a table if the table was involved in a trigger action rebind during ALTER TABLE DROP COLUMN)
Adding some commented-out test cases to show the problem with drop table after ALTER TABLE DROP COLUMN and some combination of triggers. This is caused because the changes for DERBY-4984 used an incorrect current dependent for the dependency system before doing a recompile of trigger action sql. Work is being done to use the correct dependent and recreate the dependencies in SYSDEPENDS correctly after a trigger action recompile is done following an ALTER TABLE DROP COLUMN.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076387 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5079-bf58aa63
|
DERBY-5079 (DERBY-4984 caused a regression which will not allow users to drop a table if the table was involved in a trigger action rebind during ALTER TABLE DROP COLUMN)
Trigger action sql should be rebound using the statement and not the trigger table. Also, dependency between the trigger table and trigger action SPS should be established.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1078693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5080-695493d0
|
DERBY-5080: Utilize JQL to fetch JIRA issue list for release notes generation
Added JQL functionality, allowing the list of JIRA issues for a release to be
obtained without having to create a JIRA filter manually.
Actived by specifying a JIRA filter id of 0 (zero) when running
'ant genrelnotes'.
Patch file: derby-5080-2a-utilize_jql.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1129117 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "tools/release/jirasoap/src/main/java/org/apache/derbyBuild/jirasoap/FilteredIssueLister.java",
"hunks": [
{
"added": [
"\" JIRA id, only digits allowed.\\n\" +",
"\" If '0' (zero), a JQL query will be generated instead of using an\\n\" +",
"\" existing (manually created) JIRA filter.\\n\" +"
],
"header": "@@ -82,7 +82,9 @@ public class FilteredIssueLister {",
"removed": [
"\" JIRA id, only digits allowed\\n\" +"
]
},
{
"added": [
" /** Constant used to choose using JQL over an existing filter. */",
" private static final int GENERATE_JQL = 0;"
],
"header": "@@ -108,6 +110,8 @@ public class FilteredIssueLister {",
"removed": []
},
{
"added": [
" if (filterId == GENERATE_JQL) {",
" issues = execJiraJQLQuery(out, auth, targetVersion);",
" } else {",
" issues = execJiraFilterQuery(out, auth, filterId);",
" log(\"persisting issues (\" + issues.length + \" candidate issues)\");",
" out.write(\"// Candidate issue count: \" + issues.length);"
],
"header": "@@ -358,20 +362,14 @@ public class FilteredIssueLister {",
"removed": [
" out.write(\"// Filter id: \" + filterId + \", user id \" + user);",
" out.newLine();",
" log(\"fetching issues from filter (id = \" + filterId + \")\");",
" try {",
" issues= jiraSoapService.getIssuesFromFilterWithLimit(",
" auth, Long.toString(filterId), 0, 1000);",
" } catch (org.apache.derbyBuild.jirasoap.RemoteException re) {",
" throw new IllegalArgumentException(",
" \"invalid filter id: \" + filterId +",
" \" (\" + re.getFaultString() + \")\");",
" log(\"persisting issues (filter matched \" + issues.length + \" issues)\");",
" out.write(\"// Filter issue count: \" + issues.length);"
]
}
]
},
{
"file": "tools/release/jirasoap/src/main/java/org/apache/derbyBuild/jirasoap/FilteredIssueListerAntWrapper.java",
"hunks": [
{
"added": [
" /** JIRA filter id, or 0 (zero) for JQL. */"
],
"header": "@@ -31,6 +31,7 @@ public class FilteredIssueListerAntWrapper {",
"removed": []
},
{
"added": [
" // NOTE: A filter id of 0 (zero) will be treated specially,",
" // resulting in a JQL query being generated."
],
"header": "@@ -50,6 +51,8 @@ public class FilteredIssueListerAntWrapper {",
"removed": []
},
{
"added": [
" // NOTE: A filter id of 0 (zero) will be treated specially,",
" // resulting in a JQL query being generated."
],
"header": "@@ -67,6 +70,8 @@ public class FilteredIssueListerAntWrapper {",
"removed": []
}
]
}
] |
derby-DERBY-5082-15b7b103
|
DERBY-5082: ShutdownException in ContextManager.checkInterrupt() during shutdown
The fix is two-fold:
a) Avoid destroying the tx, which will fail if the database is [being] shut down.
b) If destroying tx, catch ShutdownException in case someone shuts down the
database under our feet.
Added a break to make the flow clearer (also avoids an unnecessary trace-line).
Patch file: derby-5082-1a-fix.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076462 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" break;"
],
"header": "@@ -741,6 +741,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" if (runningThread == null && daemonLCC != null &&",
" !isShuttingDown(daemonLCC)) {",
" // try/catch as safe-guard against shutdown race condition.",
" try {",
" daemonLCC.getTransactionExecute().destroy();",
" } catch (ShutdownException se) {",
" // Ignore",
" }"
],
"header": "@@ -863,8 +864,14 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" if (runningThread == null && daemonLCC != null) {",
" daemonLCC.getTransactionExecute().destroy();"
]
}
]
}
] |
derby-DERBY-5084-4109a77d
|
DERBY-5084: convert ijConnName.sql to a ScriptTest JUnit test
adding apache headers to two new files.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1097469 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5086-d1ded580
|
DERBY-5086: Disable istat logging by default
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076559 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -435,8 +435,6 @@ public final class\tDataDictionaryImpl",
"removed": [
" /** TODO: Remove this when code goes into production (i.e. a release). */",
" private boolean indexStatsUpdateLoggingExplicitlySet;"
]
},
{
"added": [],
"header": "@@ -649,10 +647,6 @@ public final class\tDataDictionaryImpl",
"removed": [
" // TODO: Remove this when going into production code (i.e. a release).",
" indexStatsUpdateLoggingExplicitlySet =",
" PropertyUtil.getSystemProperty(",
" Property.STORAGE_AUTO_INDEX_STATS_LOGGING) != null;"
]
},
{
"added": [],
"header": "@@ -830,7 +824,6 @@ public final class\tDataDictionaryImpl",
"removed": [
" indexStatsUpdateLoggingExplicitlySet = true;"
]
},
{
"added": [],
"header": "@@ -13759,13 +13752,6 @@ public final class\tDataDictionaryImpl",
"removed": [
" // TODO: Remove this override after initial testing.",
" // Unless logging has been explicitly disabled, turn it on to",
" // make sure we have some information if things go wrong.",
" if (!indexStatsUpdateLoggingExplicitlySet) {",
" indexStatsUpdateLogging = true;",
" }",
""
]
}
]
}
] |
derby-DERBY-5087-299b9e7b
|
DERBY-5087: NPE in istat daemon when encountering critical exception during shutdown
Don't null out the index stats refresher reference in the data dictionary when
stopping the module.
Removed unnecessary variable 'daemonStopped', used existing 'daemonDisabled'
instead.
Patch file: derby-5087-1a-npe_on_shutdown.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076445 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -131,8 +131,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" /** Tells if the daemon has been stopped. */",
" private volatile boolean daemonStopped;"
]
},
{
"added": [
" synchronized (queue) {",
" if (daemonDisabled ){",
" return true;",
" }",
" }",
" return !lcc.getDatabase().isActive();"
],
"header": "@@ -388,8 +386,12 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" if ( daemonStopped ) { return true; }",
" else { return !lcc.getDatabase().isActive(); }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" // Not sure if the reference can be null here, but it may be possible",
" // if multiple threads are competing to boot and shut down the db."
],
"header": "@@ -970,9 +970,10 @@ public final class\tDataDictionaryImpl",
"removed": [
" indexRefresher = null;"
]
}
]
}
] |
derby-DERBY-5088-cd02474f
|
DERBY-5088: ShutdownException raised in istat thread during factory call
Refactored outer-level error handling, and moved context service factory call
inside try/catch.
Patch file: derby-5088-1a-shutdownexception_refactor.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1076802 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" * <p>",
" * This method is run as a background task."
],
"header": "@@ -307,6 +307,8 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" updateIndexStatsMinion(lcc, td, cds, AS_BACKGROUND_TASK);"
],
"header": "@@ -320,7 +322,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" tryToGatherStats(lcc, td, cds, AS_BACKGROUND_TASK);"
]
},
{
"added": [
" private boolean isShuttingDown() {",
" if (daemonDisabled || daemonLCC == null){",
" } else {",
" return !daemonLCC.getDatabase().isActive();",
""
],
"header": "@@ -349,51 +351,17 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" /**",
" * Try to gather statistics. Fail gracefully if we are being shutdown, e.g., the database is killed",
" * while we're busy. See DERBY-5037.",
" *",
" * @param lcc language connection context used to perform the work",
" * @param td the table to update index stats for",
" * @param cds the conglomerates to update statistics for (non-index",
" * conglomerates will be ignored)",
" * @param asBackgroundTask whether the updates are done automatically as",
" * part of a background task or if explicitly invoked by the user",
" * @throws StandardException if something goes wrong",
" */",
" private void tryToGatherStats(LanguageConnectionContext lcc,",
" TableDescriptor td,",
" ConglomerateDescriptor[] cds,",
" boolean asBackgroundTask)",
" throws StandardException",
" {",
" //",
" // Swallow exceptions raised while we are being shutdown.",
" //",
" try {",
" updateIndexStatsMinion( lcc, td, cds, asBackgroundTask );",
" }",
" catch (StandardException se)",
" {",
" if ( !isShuttingDown( lcc ) ) { throw se; }",
" }",
" // to filter assertions raised by debug jars",
" catch (RuntimeException re)",
" {",
" if ( !isShuttingDown( lcc ) ) { throw re; }",
" }",
" }",
" private boolean isShuttingDown( LanguageConnectionContext lcc )",
" {",
" if (daemonDisabled ){",
" return !lcc.getDatabase().isActive();",
" "
]
},
{
"added": [
" ContextService ctxService = null;",
" // Implement the outer-level exception handling here.",
" try {",
" // DERBY-5088: Factory-call may fail.",
" ctxService = ContextService.getFactory();",
" ctxService.setCurrentContextManager(ctxMgr);",
" processingLoop();",
" } catch (ShutdownException se) {",
" // The database is/has been shut down.",
" // Log processing statistics and exit.",
" stop();",
" ctxMgr.cleanupOnError(se, db.isActive());",
" } catch (RuntimeException re) {",
" // DERBY-4037",
" // Extended filtering of runtime exceptions during shutdown:",
" // o assertions raised by debug jars",
" // o runtime exceptions, like NPEs, raised by production jars -",
" // happens because the background thread interacts with store",
" // on a lower level",
" if (!isShuttingDown()) {",
" throw re;",
" }",
" } finally {",
" if (ctxService != null) {",
" ctxService.resetCurrentContextManager(ctxMgr);",
" }",
" runTime += (System.currentTimeMillis() - runStart);",
" }",
" }",
"",
" /**",
" * Main processing loop which will compute statistics until the queue",
" * of scheduled work units has been drained.",
" */",
" private void processingLoop() {"
],
"header": "@@ -699,8 +667,41 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" final ContextService ctxService = ContextService.getFactory();",
" ctxService.setCurrentContextManager(ctxMgr);"
]
},
{
"added": [
" \"failed to initialize index statistics updater\");"
],
"header": "@@ -715,8 +716,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" \"failed to setup index statistics updater\");",
" ctxService.resetCurrentContextManager(ctxMgr);"
]
},
{
"added": [],
"header": "@@ -798,9 +798,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" } catch (ShutdownException se) {",
" stop(); // Call stop to log activity statistics.",
" ctxMgr.cleanupOnError(se, db.isActive());"
]
},
{
"added": [],
"header": "@@ -818,8 +815,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" ctxService.resetCurrentContextManager(ctxMgr);",
" runTime += (System.currentTimeMillis() - runStart);"
]
},
{
"added": [
" !isShuttingDown()) {"
],
"header": "@@ -865,7 +860,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" !isShuttingDown(daemonLCC)) {"
]
},
{
"added": [
" } else if (isShuttingDown() ||",
" se.getSeverity() >= ExceptionSeverity.DATABASE_SEVERITY) {",
" // DERBY-4037: Swallow exceptions raised during shutdown."
],
"header": "@@ -894,7 +889,9 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" } else if (se.getSeverity() >= ExceptionSeverity.DATABASE_SEVERITY) {"
]
},
{
"added": [],
"header": "@@ -903,7 +900,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" stop();"
]
}
]
}
] |
derby-DERBY-5089-3a6d457f
|
DERBY-5089: Improve tracing/logging of runtime exceptions raised in the istat thread
Added tracing/logging of runtime exceptions and checked exceptions.
Removed/added/modified some other traces.
Patch file: derby-5089-1a-improved_tracing.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1078449 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" \"update scheduled\" +",
" : \", reason=[\" + schedulingReason + \"]\") +"
],
"header": "@@ -234,20 +234,16 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(0, \"scheduling \" + td.getQualifiedName() +",
" (schedulingReason == null",
" ? \"\"",
" : \" reason=[\" + schedulingReason + \"]\"));",
" \"update scheduled - \" + td.getUUID() +",
" : \" reason=[\" + schedulingReason + \"]\") +"
]
},
{
"added": [
" String msg = td.getQualifiedName() + \" rejected, \";",
" msg += \"daemon disabled\";",
" msg += \"queue full\";",
" msg += \"duplicate\";",
" trace(1, msg);"
],
"header": "@@ -290,16 +286,18 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(1, \"daemon disabled - work not scheduled\");",
" trace(1, \"queue full - work not scheduled\");",
" trace(1, \"duplicate found - not scheduled\");"
]
},
{
"added": [
" trace(1, \"processing \" + td.getQualifiedName());"
],
"header": "@@ -317,7 +315,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(0, \"generateStatistics::start {\" + td.getQualifiedName() + \"}\");"
]
},
{
"added": [],
"header": "@@ -348,7 +346,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(0, \"generateStatistics::end\");"
]
},
{
"added": [
" \"wrote stats for index \" + ",
" dd.getConglomerateDescriptor(index).getDescriptorName() +",
" \" (\" + index + \"): rows=\" + numRows +",
" \", card=\" + cardToStr(cardinality));"
],
"header": "@@ -557,8 +554,10 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" \"wrote stats for index \" + index + \" (rows=\" + numRows +",
" \", card=\" + cardToStr(cardinality) + \")\");"
]
},
{
"added": [
" trace(1, \"swallowed shutdown exception: \" + extractIstatInfo(se));"
],
"header": "@@ -677,6 +676,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" log(AS_BACKGROUND_TASK, null, re,",
" \"runtime exception during normal operation\");",
" trace(1, \"swallowed runtime exception during shutdown: \" +",
" extractIstatInfo(re));",
" trace(0, \"worker thread exit\");"
],
"header": "@@ -687,13 +687,18 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [],
"header": "@@ -707,7 +712,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(1, \"got database connection\");"
]
},
{
"added": [
" trace(1, \"daemon disabled\");"
],
"header": "@@ -741,6 +745,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [],
"header": "@@ -751,7 +756,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" log(AS_BACKGROUND_TASK, td, \"generating index statistics\");"
]
},
{
"added": [],
"header": "@@ -794,7 +798,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(0, \"run::normal_exit\");"
]
},
{
"added": [
" updateIndexStatsMinion(lcc, td, cds, AS_EXPLICIT_TASK);",
" trace(0, \"explicit run completed\" + (runContext != null",
" : \": \") +"
],
"header": "@@ -833,11 +836,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(0, \"explicit run\" + (runContext != null",
" : \":\") +",
" updateIndexStatsMinion(lcc, td, cds, AS_EXPLICIT_TASK);"
]
},
{
"added": [
" trace(1, \"swallowed exception during shutdown: \" +",
" extractIstatInfo(se));"
],
"header": "@@ -894,6 +897,8 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [],
"header": "@@ -926,7 +931,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" trace(1, \"top level expected exception: \" + extractIstatInfo(se));"
]
},
{
"added": [
" private static String extractIstatInfo(Throwable t) {",
" String istatClass = IndexStatisticsDaemonImpl.class.getName();",
" StackTraceElement[] stack = t.getStackTrace();",
" String trace = \"<no stacktrace>\";",
" String sqlState = \"\";",
" if (ste.getClassName().startsWith(istatClass)) {"
],
"header": "@@ -1080,13 +1084,14 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" private static String extractIstatInfo(StandardException se) {",
" StackTraceElement[] stack = se.getStackTrace();",
" String trace = \"<n/a>\";",
" if (IndexStatisticsDaemonImpl.class.getName().equals(",
" ste.getClassName())) {"
]
},
{
"added": [
" if (t instanceof StandardException) {",
" sqlState = \", SQLSTate=\" + ((StandardException)t).getSQLState();",
" }",
" return \"<\" + t.getClass() + \", msg=\" + t.getMessage() + sqlState +",
" \"> \" + trace;"
],
"header": "@@ -1096,8 +1101,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" return trace + \" got \" + se.getSQLState() +",
" \" (\" + se.getMessage() + \")\";"
]
}
]
}
] |
derby-DERBY-5090-2d1bc8fc
|
DERBY-5090: Retrieving BLOB fields sometimes fails
Clean up use of messages when accessing closed objects (mostly streams):
o remove unused XJ094.S
o rename J104 to OBJECT_CLOSED.
o replace uses of XCL53 with J104.
o remove XCL53
Patch file: derby-5090-3a-change_messages.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1155332 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/ClobUtf8Writer.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.MessageId;"
],
"header": "@@ -23,12 +23,9 @@",
"removed": [
"import java.io.InputStreamReader;",
"import java.io.OutputStream;",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.reference.SQLState;"
]
},
{
"added": [
" MessageService.getTextMessage(MessageId.OBJECT_CLOSED));"
],
"header": "@@ -67,7 +64,7 @@ final class ClobUtf8Writer extends Writer {",
"removed": [
" MessageService.getTextMessage (SQLState.LANG_STREAM_CLOSED));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/LOBInputStream.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.MessageId;"
],
"header": "@@ -26,6 +26,7 @@ import java.io.EOFException;",
"removed": []
},
{
"added": [
" MessageService.getTextMessage(MessageId.OBJECT_CLOSED));"
],
"header": "@@ -119,7 +120,7 @@ public class LOBInputStream",
"removed": [
" MessageService.getTextMessage(SQLState.LANG_STREAM_CLOSED));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/LOBOutputStream.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.MessageId;"
],
"header": "@@ -25,6 +25,7 @@ package org.apache.derby.impl.jdbc;",
"removed": []
},
{
"added": [
" MessageService.getTextMessage(MessageId.OBJECT_CLOSED));"
],
"header": "@@ -63,8 +64,7 @@ public class LOBOutputStream extends OutputStream {",
"removed": [
" MessageService.getTextMessage(",
" SQLState.LANG_STREAM_CLOSED));"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/SQLState.java",
"hunks": [
{
"added": [],
"header": "@@ -1431,9 +1431,6 @@ public interface SQLState {",
"removed": [
" //lob stream error",
" String LANG_STREAM_CLOSED = \"XCL53\";",
""
]
},
{
"added": [],
"header": "@@ -1549,7 +1546,6 @@ public interface SQLState {",
"removed": [
" String OBJECT_ALREADY_CLOSED = \"XJ094.S\";"
]
}
]
}
] |
derby-DERBY-5090-b9960232
|
DERBY-5090: Retrieving BLOB fields sometimes fails
Make Derby close open streams obtained from the result set when the next
get-call is invoked (as dictated by JDBC standard). This was done for some
streams, but not all. What happened was also different for the client and the
embedded driver.
Added CloseFilterInputStream (modeled after existing class in the client).
Removed NewByteArrayInputStream, used Java API class instead.
Patch file: derby-5090-1b-fix.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1142896 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -30,7 +30,6 @@ import java.sql.Time;",
"removed": [
"import org.apache.derby.client.am.SQLExceptionFactory;"
]
},
{
"added": [
" private CloseFilterInputStream currentStream;",
" private Reader currentReader;"
],
"header": "@@ -49,7 +48,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
"\tprivate CloseFilterInputStream is_;"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -439,7 +439,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -576,7 +576,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -610,7 +610,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -644,7 +644,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -678,7 +678,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -712,7 +712,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -746,7 +746,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -780,7 +780,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -815,7 +815,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -847,7 +847,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -877,7 +877,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -920,7 +920,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -964,7 +964,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1056,7 +1056,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1084,7 +1084,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1112,7 +1112,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1143,7 +1143,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1194,7 +1194,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" currentReader = result;"
],
"header": "@@ -1214,6 +1214,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1226,7 +1227,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1256,7 +1257,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1286,7 +1287,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1313,7 +1314,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1340,7 +1341,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
},
{
"added": [
" closeOpenStreams();"
],
"header": "@@ -1374,7 +1375,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" closeCloseFilterInputStream();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/services/io/AccessibleByteArrayOutputStream.java",
"hunks": [
{
"added": [
"import java.io.ByteArrayInputStream;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
"import java.io.ByteArrayInputStream;",
""
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
},
{
"added": [],
"header": "@@ -50,7 +52,6 @@ import org.apache.derby.iapi.sql.ResultDescription;",
"removed": [
"import org.apache.derby.iapi.services.io.NewByteArrayInputStream;"
]
},
{
"added": [
"import org.apache.derby.iapi.services.io.CloseFilterInputStream;"
],
"header": "@@ -77,6 +78,7 @@ import java.util.Arrays;",
"removed": []
},
{
"added": [
" stream = new ByteArrayInputStream(dvd.getBytes());"
],
"header": "@@ -1271,7 +1273,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" stream = new NewByteArrayInputStream(dvd.getBytes());"
]
},
{
"added": [
" // Wrap in a stream throwing exception on invocations when closed.",
" stream = new CloseFilterInputStream(stream);",
" currentStream = stream;"
],
"header": "@@ -1281,7 +1283,9 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\tcurrentStream = stream;"
]
}
]
}
] |
derby-DERBY-5093-5da491e6
|
DERBY-5093: avoid Integer allocations fetching client meta data info
Patch contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1085407 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/DatabaseMetaData.java",
"hunks": [
{
"added": [
" if ((Integer.parseInt(st.nextToken())) == type) {"
],
"header": "@@ -2560,7 +2560,7 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if ((new Integer(st.nextToken())).intValue() == type) {"
]
},
{
"added": [
" if ((Integer.parseInt(stForConc.nextToken())) == type) {",
" if ((Integer.parseInt(stForConc.nextToken())) == concurrency) {"
],
"header": "@@ -2600,9 +2600,9 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if ((new Integer(stForConc.nextToken())).intValue() == type) {",
" if ((new Integer(stForConc.nextToken())).intValue() == concurrency) {"
]
},
{
"added": [
" if ((Integer.parseInt(stForType.nextToken())) == fromType) {",
" if ((Integer.parseInt(st.nextToken())) == toType) {"
],
"header": "@@ -2635,9 +2635,9 @@ public abstract class DatabaseMetaData implements java.sql.DatabaseMetaData {",
"removed": [
" if ((new Integer(stForType.nextToken())).intValue() == fromType) {",
" if ((new Integer(st.nextToken())).intValue() == toType) {"
]
}
]
}
] |
derby-DERBY-5099-ecef2eb1
|
DERBY-5099: PrepareStatementTest depends on ordering of test cases
Merged two test cases that depended on one another so that the order
in which the test cases run doesn't affect the result.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1079336 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5100-a5209eb6
|
DERBY-5100: GetCurrentPropertiesTest depends on implicit ordering of test cases
Made the test ordering explicit.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082226 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java",
"hunks": [
{
"added": [
"import java.util.Collections;",
"import java.util.Comparator;",
"import java.util.Iterator;"
],
"header": "@@ -28,8 +28,11 @@ import java.sql.Connection;",
"removed": []
},
{
"added": [
"",
" /**",
" * A comparator that orders {@code TestCase}s lexicographically by",
" * their names.",
" */",
" private static final Comparator TEST_ORDERER = new Comparator() {",
" public int compare(Object o1, Object o2) {",
" TestCase t1 = (TestCase) o1;",
" TestCase t2 = (TestCase) o2;",
" return t1.getName().compareTo(t2.getName());",
" }",
" };",
"",
" /**",
" * Create a test suite with all the test cases in the specified class. The",
" * test cases should be ordered lexicographically by their names.",
" *",
" * @param testClass the class with the test cases",
" * @return a lexicographically ordered test suite",
" */",
" public static Test orderedSuite(Class testClass) {",
" // Extract all tests from the test class and order them.",
" ArrayList tests = Collections.list(new TestSuite(testClass).tests());",
" Collections.sort(tests, TEST_ORDERER);",
"",
" // Build a new test suite with the tests in lexicographic order.",
" TestSuite suite = new TestSuite(suiteName(testClass));",
" for (Iterator it = tests.iterator(); it.hasNext(); ) {",
" suite.addTest((Test) it.next());",
" }",
"",
" return suite;",
" }"
],
"header": "@@ -413,6 +416,39 @@ public final class TestConfiguration {",
"removed": []
}
]
}
] |
derby-DERBY-5101-84349af5
|
DERBY-5101: TruncateTableTest depends on implicit ordering of test cases
Added a workaround for DERBY-5139 so that the test doesn't fail when testSelfReferencing runs first.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082428 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5102-ad5e6e3d
|
DERBY-5102: GrantRevokeDDLTest depends on implicit ordering of test cases
More cleanup when the test cases complete.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1080557 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5103-8010935f
|
DERBY-5103: ProcedureInTriggerTest depends on implicit ordering of test cases
Change the test cases to be self-contained and not depend on the side effects of the other test cases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1079805 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5104-ac0be7fe
|
DERBY-5104: InterruptResilienceTest fails to remove tables in tearDown()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1079349 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5105-ae72a308
|
DERBY-6003: Create row templates outside of the generated code
Upgrade test fix in preparation for the actual fix for this issue.
Improve SYSCS_INVALIDATE_STORED_STATEMENTS by making it null out the
plans in SYS.SYSSTATEMENTS. Previously, it only marked them as invalid.
Use the improved SYSCS_INVALIDATE_STORED_STATEMENTS to work around
problems in the upgrade tests when downgrading to a version that suffers
from DERBY-4835 or DERBY-5289. Remove the old workarounds for DERBY-4835,
DERBY-5105, DERBY-5263 and DERBY-5289, as they are now handled by the
centralized workaround that uses SYSCS_INVALIDATE_STORED_STATEMENTS.
This change is needed because later patches for this issue will change
the format of many stored plans, so more of the test cases need to work
around the downgrade bugs in some old versions.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1418296 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" * @param recompile Whether to recompile or invalidate"
],
"header": "@@ -4429,6 +4429,7 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
}
] |
derby-DERBY-5105-caf6b950
|
DERBY-5105: NoSuchMethodError in upgrade tests (testTriggerBasic)
Disable the post soft upgrade phase for the versions that suffer from
DERBY-4835.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1081072 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5107-04c0d4e2
|
DERBY-5107: BasicInMemoryDbTest depends on implicit ordering of test cases
Use helper methods that ensure created databases are dropped between each test case, and also load the JDBC driver explicitly in test cases that used to require that an earlier test case had already loaded the driver implicitly.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082282 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5108-764b3a0f
|
DERBY-5108: Intermittent failure in AutomaticIndexStatisticsTest.testShutdownWhileScanningThenDelete on Windows
Adjust the istat log message if the scan is aborted.
Patch file: derby-5108-3a-istat_log_abort.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1133317 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" sb.append('c').append(timings[i][0]).append('=');",
" // Handle corner-case where the scans are aborted due to the",
" // index statistics daemon being shut down under us.",
" if (timings[i][2] == 0) {",
" sb.append(\"ABORTED,\"); ",
" } else {",
" long duration = timings[i][2] - timings[i][1];",
" sb.append(duration).append(\"ms,\");",
" }"
],
"header": "@@ -1032,9 +1032,15 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" long duration = timings[i][2] - timings[i][1];",
" sb.append('c').append(timings[i][0]).append('=').append(duration).",
" append(\"ms,\");"
]
}
]
}
] |
derby-DERBY-5108-a346f0c7
|
DERBY-5108
Changes istat daemon shutdown to check during processing if a shutdown is
in progress and respond to the shutdown immediately. Also changes the
module stop() to wait for worker threads to exit before returning. Waiting
for work to stop allows the subsequent shutdown of the storage system to
properly close its files during a clean shutdown request. Without this
change the system sometimes left files open which the nightly tests uncovered
on windows machines while trying to delete those files.
This change is a slightly modified version of a patch proposed by Knut Anders Hatlen.
Previous to this change the AutomaticIndexStatisticsTest.testShutdownWhileScanningThenDelete
test would fail on my machine consistently in SANE classes mode on a Windows XP laptop.
After this I have only seen one failure in 50 runs. Checking it in as it is definitely
an improvement and want to see if it fixes the errors in the nightlies across
a number of environments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1081677 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
"",
" if (isShuttingDown()) {",
" break;"
],
"header": "@@ -424,12 +424,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" synchronized (queue) {",
" if (daemonDisabled) {",
" break;",
" }"
]
},
{
"added": [
" int rowsFetched = 0;",
" boolean giving_up_on_shutdown = false;",
"",
" // DERBY-5108",
" // Check if daemon has been disabled, and if so stop",
" // scan and exit asap. On shutdown the system will",
" // send interrupts, but the system currently will",
" // recover from these during the scan and allow the",
" // scan to finish. Checking here after each group",
" // I/O that is processed as a convenient point.",
" if (asBackgroundTask) {",
" if (isShuttingDown()) {",
" giving_up_on_shutdown = true;",
" break;",
" }",
" }",
""
],
"header": "@@ -458,9 +457,25 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" int rowsFetched = 0;"
]
},
{
"added": [
"",
"",
" if (giving_up_on_shutdown)",
" break;",
""
],
"header": "@@ -469,7 +484,12 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
""
],
"header": "@@ -478,6 +498,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
""
],
"header": "@@ -502,6 +523,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" Thread threadToWaitFor = null;",
""
],
"header": "@@ -863,6 +885,8 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" threadToWaitFor = runningThread;",
"",
" }",
"",
" // Wait for the currently running thread, if there is one. Must do",
" // this outside of the synchronized block so that we don't deadlock",
" // with the thread.",
" if (threadToWaitFor != null) {",
" try {",
" threadToWaitFor.join();",
" } catch (InterruptedException ie) {",
" // Never mind. The thread will die eventually.",
" }",
""
],
"header": "@@ -886,12 +910,26 @@ public class IndexStatisticsDaemonImpl",
"removed": []
}
]
}
] |
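The DERBY-5108 patch above adds an `isShuttingDown()` check after each group I/O so the background scan gives up promptly on shutdown. A minimal standalone sketch of that cooperative-cancellation pattern (illustrative only, not Derby code; the class and method names here are invented for the example):

```java
// Illustrative sketch: a background scan that checks a shutdown flag
// after each batch, mirroring the isShuttingDown() check the patch adds
// after each group I/O of the index statistics scan.
import java.util.concurrent.atomic.AtomicBoolean;

public class CooperativeScan {
    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    /** Called by the shutdown path; the scan observes it at the next batch. */
    public void requestShutdown() {
        shuttingDown.set(true);
    }

    /** Returns the number of batches actually processed. */
    public int scan(int totalBatches) {
        int processed = 0;
        while (processed < totalBatches) {
            if (shuttingDown.get()) {
                break; // give up promptly instead of finishing the scan
            }
            processed++; // stand-in for one group I/O of the real scan
        }
        return processed;
    }

    public static void main(String[] args) {
        CooperativeScan s = new CooperativeScan();
        System.out.println(s.scan(5)); // prints 5: no shutdown requested
        s.requestShutdown();
        System.out.println(s.scan(5)); // prints 0: flag seen immediately
    }
}
```

The point of checking once per batch (rather than once per row) is that the flag read stays off the hot path while shutdown latency is still bounded by one batch.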
derby-DERBY-5108-ebd44de6
|
DERBY-5108 Intermittent failure in AutomaticIndexStatisticsTest.testShutdownWhileScanningThenDelete on Windows
Follow-up patch DERBY-5108-2 which makes the join with the daemon
retry in the case the thread gets interrupted, also note the fact in
InterruptStatus to follow our new pattern for interrupt handling.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1085027 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" while (true) {",
" try {",
" threadToWaitFor.join();",
" break;",
" } catch (InterruptedException ie) {",
" InterruptStatus.setInterrupted();",
" }",
""
],
"header": "@@ -921,11 +921,15 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" try {",
" threadToWaitFor.join();",
" } catch (InterruptedException ie) {",
" // Never mind. The thread will die eventually."
]
}
]
}
] |
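The follow-up patch above retries `join()` when interrupted and records the interrupt via `InterruptStatus.setInterrupted()`. That pattern can be sketched in plain Java (illustrative only, not Derby code; here the recorded interrupt is stood in for by re-asserting the current thread's interrupt flag once the join completes):

```java
// Illustrative sketch: wait for a thread to die even if we are interrupted,
// then restore the interrupt status so callers can still observe it.
public class RetryJoin {
    public static void joinUninterruptibly(Thread t) {
        boolean interrupted = false;
        while (true) {
            try {
                t.join();
                break;
            } catch (InterruptedException ie) {
                // Remember the interrupt and keep waiting, analogous to
                // InterruptStatus.setInterrupted() in the Derby patch.
                interrupted = true;
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        });
        worker.start();
        joinUninterruptibly(worker);
        System.out.println(worker.isAlive()); // prints false: join completed
    }
}
```

Swallowing the interrupt without recording it would silently discard a shutdown signal; looping without the `break` risks never returning. The retry-plus-restore shape avoids both.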
derby-DERBY-5108-ee17158f
|
DERBY-5108: Intermittent failure in AutomaticIndexStatisticsTest.testShutdownWhileScanningThenDelete on Windows
Shut down the istat daemon thread at an earlier stage when the database is
being shut down (user initiated or due to a severe error). This should avoid
the problem where file containers are reopened by the istat daemon after the
container cache has been shut down.
Patch file: derby-5108-2a-early_istats_shutdown_broad.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1133304 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/db/DatabaseContextImpl.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;"
],
"header": "@@ -24,6 +24,7 @@ package org.apache.derby.impl.db;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" if (!indexStatsUpdateDisabled) {",
" indexStatsUpdateDisabled = true;",
" // NOTE: This will stop the automatic updates of index statistics,",
" // but users can still do this explicitly (i.e. by invoking",
" // the SYSCS_UTIL.SYSCS_UPDATE_STATISTICS system procedure).",
" indexRefresher.stop();",
" }"
],
"header": "@@ -13768,12 +13768,13 @@ public final class\tDataDictionaryImpl",
"removed": [
" indexStatsUpdateDisabled = true;",
" // NOTE: This will stop the automatic updates of index statistics,",
" // but users can still do this explicitly (i.e. by invoking",
" // the SYSCS_UTIL.SYSCS_UPDATE_STATISTICS system procedure).",
" // Set at boot time, we expect it to be non-null.",
" indexRefresher.stop();"
]
}
]
}
] |
derby-DERBY-5111-6048528d
|
DERBY-5111 NullPointerException on unique constraint violation with unique index
Patch derby-5111.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1579766 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/IndexLister.java",
"hunks": [
{
"added": [],
"header": "@@ -100,19 +100,6 @@ public class IndexLister",
"removed": [
" /**",
"\t *\tReturns an array of all the index names on a table.",
"\t *",
"\t *\t@return\tan array of index names",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
" public\tString[]\t\tgetIndexNames()\tthrows StandardException",
"\t{",
"\t\tif ( indexNames == null ) { getAllIndexes(); }",
"\t\treturn\tArrayUtil.copy( indexNames );",
"\t}",
""
]
},
{
"added": [
"\t\tif ( distinctIndexNames == null ) { getAllIndexes(); }",
"\t\treturn\tArrayUtil.copy( distinctIndexNames );"
],
"header": "@@ -153,8 +140,8 @@ public class IndexLister",
"removed": [
"\t\tif ( indexNames == null ) { getAllIndexes(); }",
"\t\treturn\tArrayUtil.copy( indexNames );"
]
}
]
}
] |
derby-DERBY-5111-75227075
|
DERBY-5111 NullPointerException on unique constraint violation with unique index
Patch derby-5111-test, which adds the repro for this issue as a new test case.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1582819 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5112-b26ddb05
|
DERBY-5112: ImportExportTest depends on implicit ordering of test cases
Reset test tables between each test case.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1079693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5113-995d3745
|
DERBY-5113: Intermittent failure in BlobSetMethodsTest on Java 7: Unable to set stream: 'Reached EOF prematurely; expected 1,024, got 0.'
Blob.truncate() should copy bytes from the beginning of the stream.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1081293 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5113-9b3569f8
|
DERBY-5113: Intermittent failure in BlobSetMethodsTest on Java 7: Unable to set stream: 'Reached EOF prematurely; expected 1,024, got 0.'
AccessTest should reset the database properties it modifies.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1081059 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5115-66988078
|
DERBY-5115: NetworkServerControlApiTest depends on implicit ordering of test cases
Made the test ordering explicit.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082233 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5117-03e7a1bc
|
DERBY-5117: ParameterMetaDataJdbc30Test fails with "'DUMMYINT' is not recognized as a function or procedure"
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1079779 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5119-40ca7ba5
|
DERBY-5119 testQualifiers(org.apache.derbyTesting.functionTests.tests.store.AccessTest)java.sql.SQLException: Table/View 'FOO' already exists in Schema 'APP'.
Add drop of table FOO to teardown method so previous failed fixtures won't leave it around to interfere with others.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1081568 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5120-2198fafc
|
DERBY-5379 testDERBY5120NumRowsInSydependsForTrigger - The number of values assigned is not the same as the number of specified or implied columns.
DERBY-5484 Upgradetest fails with upgrade from 10.8.2.2 (7 errors, 1 failure) on trunk
The above 2 jiras are duplicates. The upgrade tests are failing when doing an upgrade from 10.8.2.2 to trunk.
The tests that are failing were written for DERBY-5120, DERBY-5044. Both these bugs got fixed in 10.8.2.2 and higher.
The purpose of these tests is to show that when the tests are done with a release with those fixes missing, we will see the incorrect behavior but once the database is upgraded to 10.8.2.2 and higher, the tests will start functioning correctly. The problem is that we do not recognize that if the database is created with 10.8.2.2, then we will not see the problem behavior because 10.8.2.2 already has the required fixes in it for DERBY-5120 and DERBY-5044. I have fixed this by making the upgrade test understand that incorrect behavior would be seen only for releases under 10.8.2.2
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1203252 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5120-8af5f9e8
|
I am committing a change which includes engine changes and an upgrade test addition. Will add tests in another checkin
Following is a brief description of the engine changes.
For the table being altered, we will go through the dependency system to determine all the triggers that depend on the table being altered (this will include triggers defined directly on the table being altered and the triggers defined on other tables but using the table being altered in their trigger action plan). This is done by first finding all the objects that depend on the table being altered. We are only interested in SPSDescriptors from that list of dependent objects. For each of these dependent SPSDescriptors, we want to find if they are defined for a trigger action SPS. If yes, then the trigger must be dependent on the table being altered. For each of these dependent triggers, we drop their trigger descriptor from the data dictionary, regenerate and rebind its trigger action SPS, and then add the trigger descriptor (with an up-to-date version of the internal representation of the trigger action) back to the data dictionary. During the rebind of the trigger action, we will get an exception if the trigger depends on the column being altered. If so, then if the alter table drop column is being done in restrict mode, we will throw an exception that the column can't be dropped because it has a dependent object. If the drop column was issued in cascade mode, then we will drop the dependent triggers.
As part of this commit, I have removed the code which used to go directly through all the triggers defined on the table being altered, dropping, rebinding and recreating them. This is because the new code going through the dependency system should find all the triggers which would be impacted by drop column, no matter whether the triggers are defined on the table being altered or on other tables but using the table being altered in their trigger action. DERBY-5120 could have prevented us from catching all the triggers defined on the table being altered through the dependency system because of the missing dependency between trigger action sps and trigger table, but that has been fixed in 10.9 and 10.8 so we should be fine. I have run all the existing junit suites and derbyall and they ran fine.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1166313 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"import org.apache.derby.catalog.Dependable;"
],
"header": "@@ -29,6 +29,7 @@ import java.util.Properties;",
"removed": []
},
{
"added": [
"\t\t// By this time, the column has been removed from the table descriptor.",
"\t\t// Now, go through all the triggers and regenerate their trigger action",
"\t\t// SPS and rebind the generated trigger action sql. If the trigger ",
"\t\t// action is using the dropped column, it will get detected here. If ",
"\t\t// not, then we will have generated the internal trigger action sql",
"\t\t// which matches the trigger action sql provided by the user.",
"\t\t//",
"\t\t// eg of positive test case",
"\t\t// create table atdc_16_tab1 (a1 integer, b1 integer, c1 integer);",
"\t\t// create table atdc_16_tab2 (a2 integer, b2 integer, c2 integer);",
"\t\t// create trigger atdc_16_trigger_1 ",
"\t\t// after update of b1 on atdc_16_tab1",
"\t\t// REFERENCING NEW AS newt",
"\t\t// for each row ",
"\t\t// update atdc_16_tab2 set c2 = newt.c1",
"\t\t// The internal representation for the trigger action before the column",
"\t\t// is dropped is as follows",
"\t\t// \t update atdc_16_tab2 set c2 = ",
"\t\t// org.apache.derby.iapi.db.Factory::getTriggerExecutionContext().",
"\t\t// getONewRow().getInt(3)",
"\t\t// After the drop column shown as below",
"\t\t// alter table DERBY4998_SOFT_UPGRADE_RESTRICT drop column c11",
"\t\t// The above internal representation of tigger action sql is not ",
"\t\t// correct anymore because column position of c1 in atdc_16_tab1 has ",
"\t\t// now changed from 3 to 2. Following while loop will regenerate it and",
"\t\t// change it to as follows",
"\t\t// \t update atdc_16_tab2 set c2 = ",
"\t\t// org.apache.derby.iapi.db.Factory::getTriggerExecutionContext().",
"\t\t// getONewRow().getInt(2)",
"\t\t//",
"\t\t// We could not do this before the actual column drop, because the ",
"\t\t// rebind would have still found the column being dropped in the",
"\t\t// table descriptor and hence use of such a column in the trigger",
"\t\t// action rebind would not have been caught.",
"",
"\t\t//For the table on which ALTER TABLE is getting performed, find out",
"\t\t// all the SPSDescriptors that use that table as a provider. We are",
"\t\t// looking for SPSDescriptors that have been created internally for",
"\t\t// trigger action SPSes. Through those SPSDescriptors, we will be",
"\t\t// able to get to the triggers dependent on the table being altered",
"\t\t//Following will get all the dependent objects that are using",
"\t\t// ALTER TABLE table as provider",
"\t\tList depsOnAlterTableList = dd.getProvidersDescriptorList(td.getObjectID().toString());",
"\t\tfor (Iterator depsOnAlterTableIterator = depsOnAlterTableList.listIterator(); ",
"\t\t\tdepsOnAlterTableIterator.hasNext();)",
"\t\t{",
"\t\t\t//Go through all the dependent objects on the table being altered ",
"\t\t\tDependencyDescriptor depOnAlterTableDesc = ",
"\t\t\t\t(DependencyDescriptor) depsOnAlterTableIterator.next();",
"\t\t\tDependableFinder dependent = depOnAlterTableDesc.getDependentFinder();",
"\t\t\t//For the given dependent, we are only interested in it if it is a",
"\t\t\t// stored prepared statement.",
"\t\t\tif (dependent.getSQLObjectType().equals(Dependable.STORED_PREPARED_STATEMENT))",
"\t\t\t\t//Look for all the dependent objects that are using this ",
"\t\t\t\t// stored prepared statement as provider. We are only ",
"\t\t\t\t// interested in dependents that are triggers.",
"\t\t\t\tList depsTrigger = dd.getProvidersDescriptorList(depOnAlterTableDesc.getUUID().toString());",
"\t\t\t\tfor (Iterator depsTriggerIterator = depsTrigger.listIterator();",
"\t\t\t\t\tdepsTriggerIterator.hasNext();)",
"\t\t\t\t\tDependencyDescriptor depsTriggerDesc = ",
"\t\t\t\t\t\t(DependencyDescriptor) depsTriggerIterator.next();",
"\t\t\t\t\tDependableFinder providerIsTrigger = depsTriggerDesc.getDependentFinder();",
"\t\t\t\t\t//For the given dependent, we are only interested in it if",
"\t\t\t\t\t// it is a trigger",
"\t\t\t\t\tif (providerIsTrigger.getSQLObjectType().equals(Dependable.TRIGGER)) {",
"\t\t\t\t\t\t//Drop and recreate the trigger after regenerating ",
"\t\t\t\t\t\t// it's trigger action plan. If the trigger action",
"\t\t\t\t\t\t// depends on the column being dropped, it will be",
"\t\t\t\t\t\t// caught here.",
"\t\t\t\t\t\tTriggerDescriptor trdToBeDropped = dd.getTriggerDescriptor(depsTriggerDesc.getUUID());",
"\t\t\t\t\t\tcolumnDroppedAndTriggerDependencies(trdToBeDropped,",
"\t\t\t\t\t\t\t\tcascade, columnName);",
"\t\t\t\t\t}"
],
"header": "@@ -1672,38 +1673,82 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tList deps = dd.getProvidersDescriptorList(td.getObjectID().toString());",
"\t\tfor (Iterator depsIterator = deps.listIterator(); ",
" depsIterator.hasNext();)",
"\t\t{",
"\t\t\tDependencyDescriptor depDesc = ",
" (DependencyDescriptor) depsIterator.next();",
"",
"\t\t\tDependableFinder finder = depDesc.getProviderFinder();",
"\t\t\tif (finder instanceof DDColumnDependableFinder)",
"\t\t\t\tDDColumnDependableFinder colFinder = ",
" (DDColumnDependableFinder) finder;",
"\t\t\t\tFormatableBitSet oldColumnBitMap = ",
" new FormatableBitSet(colFinder.getColumnBitMap());",
"\t\t\t\tFormatableBitSet newColumnBitMap = ",
" new FormatableBitSet(oldColumnBitMap);",
"\t\t\t\tnewColumnBitMap.clear();",
"\t\t\t\tint bitLen = oldColumnBitMap.getLength();",
"\t\t\t\tfor (int i = 0; i < bitLen; i++)",
"\t\t\t\t\tif (i < droppedColumnPosition && oldColumnBitMap.isSet(i))",
"\t\t\t\t\t\tnewColumnBitMap.set(i);",
"\t\t\t\t\tif (i > droppedColumnPosition && oldColumnBitMap.isSet(i))",
"\t\t\t\t\t\tnewColumnBitMap.set(i - 1);",
"\t\t\t\tif (newColumnBitMap.equals(oldColumnBitMap))",
"\t\t\t\t\tcontinue;",
"\t\t\t\tdd.dropStoredDependency(depDesc, tc);",
"\t\t\t\tcolFinder.setColumnBitMap(newColumnBitMap.getByteArray());",
"\t\t\t\tdd.addDescriptor(depDesc, null,",
"\t\t\t\t\t\t\t\t DataDictionary.SYSDEPENDS_CATALOG_NUM,",
"\t\t\t\t\t\t\t\t true, tc);"
]
}
]
}
] |
derby-DERBY-5120-95971f27
|
DERBY-5120 Row from SYSDEPENDS gets deleted when a table has update triggers defined on it and an upate is made to the table
This commit moves recording of the trigger action sps's dependency on the trigger table from create trigger constant action and alter
table constant action to SPSDescriptor. This central location in SPSDescriptor for recording the dependency will take care of
create trigger, alter table and sps regeneration cases. The checkin also required fixing triggerGeneral.sql because now
that we do not lose the dependency between trigger action sps and trigger table, the change in trigger table always sends
an invalidation signal to its triggers which causes those triggers to recompile when they fire next time. For triggerGeneral
case, the trigger in question ends up being incorrect(because of alter table add column) and thus would cause the test to fail.
I resolved it by fixing the trigger action.
Additionally, I have added upgrade test case which checks how the trigger invalidation signal are missed prior to this fix
thus not catching incorrect triggers. This test has been disabled for 10.5.1.1, 10.5.3.0, 10.6.1.0 and 10.6.2.1 because those
4 releases do not have the DERBY-4835 fix in them. Because of that missing fix, the triggers do not get invalidated as part of
upgrade for those releases and hence the test added by this jira would fail for those 4 releases. To avoid the failure, I have
disabled the test for those 4 releases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1146915 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5121-168be4e5
|
DERBY-1482 / DERBY-5121
Rick Hillegas contributed a very exhaustive trigger test which I am converting to junit and adding to the upgrade suite
// The test exhaustively walks through all subsets and permutations
// of columns for a trigger which inserts into a side table based on
// updates to a master table.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1125453 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5121-56bebbea
|
DERBY-5121 Data corruption when executing an UPDATE trigger
With the earlier checkin for DERBY-5121, DERBY-1482 changes weren't completely backed out on trunk and 10.7. We have backed out
the code for the triggers so that now triggers look for the columns in their actual column positions at execution time. But
DERBY-1482 also made changes to UPDATE code to read only the columns needed by it and the triggers that it is going to fire.
We need to backout the changes to UPDATE code to make sure that it reads all the columns from the trigger table and not do
selective column reading.
Also adding an upgrade case testing the behavior of UPDATE reading correct columns from the trigger table so that trigger
finds the columns it needs.
derbyall and junit suite runs fine with these changes
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1087049 13f79535-47bb-0310-9956-ffa450edef68
|
[] |