| id (stringlengths 22–25) | commit_message (stringlengths 137–6.96k) | diffs (listlengths 0–63) |
|---|---|---|
derby-DERBY-5663-6a072f82
|
DERBY-5663: Getting NPE when trying to set derby.language.logStatementText property to true inside a junit suite.
It is possible that the same instance of the SystemPropertyTestSetup decorator is used more than once. In such a case, nulling out oldValues in the tearDown method can cause a NullPointerException on a subsequent use of the same SystemPropertyTestSetup. The right thing to do is to initialize oldValues to a new Properties object every time SystemPropertyTestSetup.setUp is used. In order to do this, we are removing the initialization of oldValues from the constructor and putting it in the setUp method.
Additionally, we do not want to null out newValues in the tearDown method, because a subsequent use of the same SystemPropertyTestSetup instance would lose the new values requested by the user of the decorator. Because of this, we no longer null out newValues in tearDown.
Existing junit All suite and derbyall ran fine with these changes.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1309244 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SystemPropertyTestSetup.java",
"hunks": [
{
"added": [],
"header": "@@ -51,7 +51,6 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": [
"\t\tthis.oldValues = new Properties();"
]
},
{
"added": [],
"header": "@@ -67,7 +66,6 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": [
"\t\tthis.oldValues = new Properties();"
]
},
{
"added": [
" \t//DERBY-5663 Getting NPE when trying to set ",
" \t// derby.language.logStatementText property to true inside a junit ",
" \t// suite.",
" \t//The same instance of SystemPropertyTestSetup can be used again",
" \t// and hence we want to make sure that oldValues is not null as set",
" \t// in the tearDown() method. If we leave it null, we will run into NPE",
" \t// during the tearDown of SystemPropertyTestSetup during the ",
" \t// decorator's reuse.",
"\t\tthis.oldValues = new Properties();"
],
"header": "@@ -77,6 +75,15 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": []
},
{
"added": [],
"header": "@@ -106,7 +113,6 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": [
" newValues = null;"
]
}
]
}
] |
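The fix above boils down to a lifecycle rule for reusable test decorators: state captured during setup must be re-created on every setUp call, not once in the constructor, while state supplied by the user must survive tearDown. A minimal sketch of that pattern (ReusableSetup is a hypothetical stand-in, not the actual Derby SystemPropertyTestSetup):

```java
import java.util.Properties;

// Sketch of the setUp/tearDown lifecycle discussed in the commit above.
public class ReusableSetup {
    private final Properties newValues;   // kept across reuses (not nulled)
    private Properties oldValues;         // re-created in every setUp()

    public ReusableSetup(Properties newValues) {
        this.newValues = newValues;
        // NOTE: oldValues is intentionally NOT initialized here.
    }

    public void setUp() {
        // Re-initialize on every use, so a reused decorator never sees
        // the null left behind by a previous tearDown().
        oldValues = new Properties();
        for (String key : newValues.stringPropertyNames()) {
            String old = System.getProperty(key);
            if (old != null) {
                oldValues.setProperty(key, old);
            }
            System.setProperty(key, newValues.getProperty(key));
        }
    }

    public void tearDown() {
        // Restore the pre-setUp() values.
        for (String key : newValues.stringPropertyNames()) {
            String old = oldValues.getProperty(key);
            if (old != null) {
                System.setProperty(key, old);
            } else {
                System.clearProperty(key);
            }
        }
        oldValues = null;   // safe: the next setUp() re-creates it
        // newValues is NOT nulled, so the decorator can be reused.
    }
}
```

Reusing the same instance (setUp, tearDown, setUp again) now works, which was exactly the failing sequence in DERBY-5663.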
derby-DERBY-5664-47bae99a
|
DERBY-5664: Include driver tests in jdbcapi suite
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1303693 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5667-085b6e54
|
DERBY-5667; testReadCommitted(org.apache.derbyTesting.functionTests.tests.store.UpdateLocksTest)junit.framework.AssertionFailedError: Missing rows in ResultSet
adding call to wait for post-commit work after deletes.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330066 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5667-b27d8ec1
|
DERBY-5667; testReadCommitted(org.apache.derbyTesting.functionTests.tests.store.UpdateLocksTest)junit.framework.AssertionFailedError: Missing rows in ResultSet
Adding wait_for_post_commit calls after commits following updates;
changing JDBC.assertRSContains() to print out more details if there are missing rows.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1370058 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [
" String message = \"Missing rows in ResultSet; \\n\\t expected rows: \\n\\t\\t\" ",
" + expected + \"\\n\\t actual result: \\n\\t\\t\" + actual;",
" Assert.fail( message );"
],
"header": "@@ -1392,7 +1392,9 @@ public class JDBC {",
"removed": [
" Assert.fail( \"Missing rows in ResultSet\" );"
]
}
]
}
] |
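The diff above replaces a bare "Missing rows in ResultSet" failure with a message that prints both the expected and the actual rows. A simplified sketch of that idea (RsAssert and its method names are hypothetical; the real helper lives in Derby's JDBC test utility class):

```java
import java.util.List;

// Sketch of the more informative assertion failure added in the commit above.
public class RsAssert {
    // Builds the detailed message carrying both expected and actual rows.
    public static String missingRowsMessage(String expected, String actual) {
        return "Missing rows in ResultSet; \n\t expected rows: \n\t\t"
                + expected + "\n\t actual result: \n\t\t" + actual;
    }

    // Fails with the detailed message when any expected row is absent.
    public static void assertContains(List<String> expected,
                                      List<String> actual) {
        if (!actual.containsAll(expected)) {
            throw new AssertionError(
                missingRowsMessage(expected.toString(), actual.toString()));
        }
    }
}
```

With the full row sets in the failure output, an intermittent failure like the one in DERBY-5667 can be diagnosed from the test log alone.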
derby-DERBY-5667-c748453a
|
DERBY-5667; testReadCommitted(org.apache.derbyTesting.functionTests.tests.store.UpdateLocksTest)junit.framework.AssertionFailedError: Missing rows in ResultSet
adjusting the test so every commit following a delete is followed in turn
by a call to wait for post-commit threads to finish.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330482 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-567-6e7bbc8c
|
DERBY-688: Enhancements to XML functionality toward XPath/XQuery support
This revision contains d688_phase1_v3.patch.
This patch was contributed by Army Brown (qozinx@gmail.com).
Attaching a "phase 1" patch, d688_phase1_v1.patch, for this issue that does the following:
1. Reorganizes XML-specific code as follows:
- Moves all code that relies on JAXP and Xalan classes
out of XML.java and into a new class, SqlXmlUtil.java.
See comments at the beginning of SqlXmlUtil for an
explanation of why this was done.
- Creates a new class, SqlXmlExecutor, in the impl.sql.execute
package that serves as the class on which all XML operator
calls are generated. Ex. for XMLEXISTS, instead of
generating:
<xmlOperand>.XMLExists(<query-expr>, xmlOperand)
we now generate:
<SqlXmlExecutor>.XMLExists(<query-expr>, xmlOperand)
Along with making the code cleaner by allowing all
XML operator calls to be defined in the same class,
this new class has other benefits, as well--see
comments at the beginning of SqlXmlExecutor for
more of an explanation.
2. Changes implementation of XPath from XSLT processing to
the low-level Xalan API, which is faster, more flexible,
and better for implementation of the XMLQUERY operator
(the XMLQUERY operator will be coming in subsequent
phases). Note that as part of this change I've removed
the dependency on an explicit declaration of Xerces
as the parser; Derby will now pick up the parser from
the JVM (i.e. this patch resolves DERBY-567).
3. Makes a small change to the XMLEXISTS operator to bring
it more in line with SQL/XML spec. More specifically,
the query expression that is specified must now be a string
literal; parameters and other expressions are not allowed.
4. Updates the XML test and master files (lang/xml_general.sql
and lang/xmlBinding.java) to bring them in sync with the latest
Derby codeline. Since the XML tests are not (yet) run
as part of derbyall, the master files need to be updated
to reflect some client/server changes that have gone into
the codeline for 10.2 (for example, server pre-fetching
behavior).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@429698 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/XML.java",
"hunks": [
{
"added": [],
"header": "@@ -46,28 +46,6 @@ import java.io.ObjectOutput;",
"removed": [
"import org.xml.sax.ErrorHandler;",
"import org.xml.sax.XMLReader;",
"import org.xml.sax.SAXException;",
"import org.xml.sax.SAXParseException;",
"import org.xml.sax.InputSource;",
"",
"import org.xml.sax.helpers.DefaultHandler;",
"import org.xml.sax.helpers.XMLReaderFactory;",
"",
"import javax.xml.transform.Templates;",
"import javax.xml.transform.TransformerFactory;",
"",
"import javax.xml.transform.sax.SAXResult;",
"import javax.xml.transform.sax.TemplatesHandler;",
"import javax.xml.transform.sax.TransformerHandler;",
"",
"// Note that even though the following has a Xalan",
"// package name, it IS part of the JDK 1.4 API, and",
"// thus we can compile it without having Xalan in",
"// our classpath.",
"import org.apache.xalan.processor.TransformerFactoryImpl;",
""
]
},
{
"added": [],
"header": "@@ -94,18 +72,6 @@ public class XML",
"removed": [
" // Parser class to use for parsing XML. We use the",
" // Xerces parser, so (for now) we require that Xerces",
" // be in the user's classpath. Note that we load",
" // the Xerces class dynamically (using the class ",
" // name) so that Derby will build even if Xerces",
" // isn't in the build environment; i.e. Xerces is",
" // only required if XML is actually going to be used",
" // at runtime; it's not required for a successful",
" // build nor for non-XML database use.",
" protected static final String XML_PARSER_CLASS =",
" \"org.apache.xerces.parsers.SAXParser\";",
""
]
},
{
"added": [
" /**",
" Loaded at execution time, this holds XML-related objects",
" that were created once during compilation but can be re-used",
" for each row in the target result set for the current",
" SQL statement. In other words, we create the objects",
" once per SQL statement, instead of once per row. In the",
" case of XMLEXISTS, one of the \"objects\" is the compiled",
" query expression, which means we don't have to compile",
" the expression for each row and thus we save some time.",
" */",
" private SqlXmlUtil sqlxUtil;"
],
"header": "@@ -119,16 +85,17 @@ public class XML",
"removed": [
" // An XML reader for reading and parsing SAX events.",
" protected XMLReader saxReader;",
"",
" // XSLT objects used when performing an XSLT query, which",
" // is the query mechanism for this UTF8-based implementation.",
" private static final String XPATH_PLACEHOLDER = \"XPATH_PLACEHOLDER\";",
" private static final String QUERY_MATCH_STRING = \"MATCH\";",
" private static String xsltStylesheet;",
" private XMLReader xsltReader;",
" private TransformerFactoryImpl saxTFactory;"
]
},
{
"added": [
" * store the _serialized_ version locally and then return",
" * this XMLDataValue.",
" *",
" * @param sqlxUtil Contains SQL/XML objects and util",
" * methods that facilitate execution of XML-related",
" * operations",
" * @return If 'text' constitutes a valid XML document,",
" * it has been stored in this XML value and this XML",
" * value is returned; otherwise, an exception is thrown. ",
" * @exception StandardException Thrown on error.",
" public XMLDataValue XMLParse(String text, boolean preserveWS,",
" SqlXmlUtil sqlxUtil) throws StandardException",
" // Currently the only way a user can view the contents of",
" // an XML value is by explicitly calling XMLSERIALIZE.",
" // So do a serialization now and just store the result,",
" // so that we don't have to re-serialize every time a",
" // call is made to XMLSERIALIZE.",
" text = sqlxUtil.serializeToString(text);"
],
"header": "@@ -430,26 +397,32 @@ public class XML",
"removed": [
" * store the _parsed_ version for subsequent use.",
"\t * If 'text' constitutes a valid XML document,",
" * it has been stored in this XML value and nothing",
" * is returned; otherwise, an exception is thrown.",
" * @exception StandardException Thrown on parse error.",
" public void parseAndLoadXML(String text, boolean preserveWS)",
" throws StandardException",
" // We're just going to use the text exactly as it",
" // is, so we just need to see if it parses. ",
" loadSAXReader();",
" saxReader.parse(",
" new InputSource(new StringReader(text)));"
]
},
{
"added": [
" // Couldn't parse the XML document. Throw a StandardException"
],
"header": "@@ -460,7 +433,7 @@ public class XML",
"removed": [
" // The text isn't a valid XML document. Throw a StandardException"
]
},
{
"added": [
" return this;",
" * Serializes this XML value into a string with a user-specified",
" * character type, and returns that string via the received",
" * StringDataValue (if the received StringDataValue is non-null",
" * and of the correct type; else, a new StringDataValue is",
" * returned).",
" *"
],
"header": "@@ -471,15 +444,17 @@ public class XML",
"removed": [
" return;",
" * Converts this XML value into a string with a user-specified",
" * type, and returns that string via the received StringDataValue",
" * (if the received StringDataValue is non-null; else a new",
" * StringDataValue is returned)."
]
},
{
"added": [
" \"with a non-string target type: \" + targetType);"
],
"header": "@@ -505,7 +480,7 @@ public class XML",
"removed": [
" \"with a non-string target type.\");"
]
},
{
"added": [
" // we already have it as a UTF-8 string, so just use",
" // that.",
" result.setValue(getString());"
],
"header": "@@ -526,8 +501,9 @@ public class XML",
"removed": [
" // we already have it as a string, so just use that.",
" result.setValue(xmlStringValue.getString());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/XMLDataValue.java",
"hunks": [
{
"added": [
" /**",
" * Method to parse an XML string and, if it's valid,",
" * store the _serialized_ version locally and then return",
" * this XMLDataValue.",
" * @param text The string value to check.",
" * @param sqlxUtil Contains SQL/XML objects and util",
" * methods that facilitate execution of XML-related",
" * operations",
" * @return If 'text' constitutes a valid XML document,",
" * it has been stored in this XML value and this XML",
" * value returned; otherwise, an exception is thrown. ",
" * @exception StandardException Thrown on error.",
"\tpublic XMLDataValue XMLParse(String text, boolean preserveWS,",
"\t\tSqlXmlUtil sqlxUtil) throws StandardException;",
" * Serializes this XML value into a string with a user-specified",
" * character type, and returns that string via the received",
" * StringDataValue (if the received StringDataValue is non-null",
" * and of the correct type; else, a new StringDataValue is",
" * returned)."
],
"header": "@@ -24,43 +24,32 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": [
" /*",
" ** NOTE: Officially speaking, the XMLParse operator",
" ** is not defined here; it is instead defined on the",
" ** StringDataValue interface (and implemented in",
" ** SQLChar.java) since it is called with a _String_",
" ** operand, not with an XML operand. That said,",
" ** though, the implemention in SQLChar.java",
" ** really just calls the \"parseAndLoadXML\" method that's",
" ** defined on this interface, so it's this interface",
" ** that really does the work.",
" **",
" ** XMLSerialize and XMLExists, on the other hand,",
" ** are called with XML operands, and thus they",
" ** can just be defined in this interface.",
" */",
"",
" /**",
" * Parse the received string value as XML. If the",
" * parse succeeds, store the string value as the",
" * contents of this XML value. If 'text' constitutes a valid XML document,",
" * it has been stored in this XML value and nothing",
" * is returned; otherwise, an exception is thrown.",
" * @param xmlText The string value to check.",
" * @exception StandardException Thrown on parse error.",
" public void parseAndLoadXML(String xmlText, boolean preserveWS)",
" throws StandardException;",
" * Converts this XML value into a string with a user-specified",
" * type, and returns that string via the received StringDataValue.",
" * (if the received StringDataValue is non-null and of the",
" * correct type; else, a new StringDataValue is returned)."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/BinaryOperatorNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.SqlXmlUtil;"
],
"header": "@@ -34,6 +34,7 @@ import org.apache.derby.impl.sql.compile.ActivationClassBuilder;",
"removed": []
},
{
"added": [
"\t// Class used to compile an XML query expression and/or load/process",
"\t// XML-specific objects.",
"\tprivate SqlXmlUtil sqlxUtil;",
""
],
"header": "@@ -116,6 +117,10 @@ public class BinaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
" // Left operand is query expression and must be a string",
" // literal. SQL/XML spec doesn't allow params nor expressions",
" // 6.17: <XQuery expression> ::= <character string literal> ",
" if (!(leftOperand instanceof CharConstantNode))",
" {",
" throw StandardException.newException(",
" SQLState.LANG_INVALID_XML_QUERY_EXPRESSION);",
" }",
" else {",
" // compile the query expression.",
" sqlxUtil = new SqlXmlUtil();",
" sqlxUtil.compileXQExpr(",
" ((CharConstantNode)leftOperand).getString());"
],
"header": "@@ -340,24 +345,19 @@ public class BinaryOperatorNode extends ValueNode",
"removed": [
" // Left operand is query expression, and must be a string.",
" if (leftOperandType != null) {",
" switch (leftOperandType.getJDBCTypeId())",
" {",
" case Types.CHAR:",
" case Types.VARCHAR:",
" case Types.LONGVARCHAR:",
" case Types.CLOB:",
" break;",
" default:",
" {",
" throw StandardException.newException(",
" SQLState.LANG_BINARY_OPERATOR_NOT_SUPPORTED, ",
" methodName,",
" leftOperandType.getSQLTypeName(),",
" rightOperandType.getSQLTypeName());",
" }",
" }"
]
},
{
"added": [],
"header": "@@ -371,15 +371,6 @@ public class BinaryOperatorNode extends ValueNode",
"removed": [
" // Is there a ? parameter on the left?",
" if (leftOperand.requiresTypeFromContext())",
" {",
" // Set the left operand to be a VARCHAR, which should be",
" // long enough to hold the XPath expression.",
" leftOperand.setType(",
" DataTypeDescriptor.getBuiltInDataTypeDescriptor(Types.VARCHAR));",
" }",
""
]
},
{
"added": [
"\t\t// If we're dealing with XMLEXISTS, there is some",
"\t\t// additional work to be done.",
"\t\tboolean xmlGen = (operatorType == XMLEXISTS_OP);",
"",
"\t\tif (xmlGen) {",
"\t\t// We create an execution-time object so that we can retrieve",
"\t\t// saved objects (esp. our compiled query expression) from",
"\t\t// the activation. We do this for two reasons: 1) this level",
"\t\t// of indirection allows us to separate the XML data type",
"\t\t// from the required XML implementation classes (esp. JAXP",
"\t\t// and Xalan classes)--for more on how this works, see the",
"\t\t// comments in SqlXmlUtil.java; and 2) we can take",
"\t\t// the XML query expression, which we've already compiled,",
"\t\t// and pass it to the execution-time object for each row,",
"\t\t// which means that we only have to compile the query",
"\t\t// expression once per SQL statement (instead of once per",
"\t\t// row); see SqlXmlExecutor.java for more.",
"\t\t\tmb.pushNewStart(",
"\t\t\t\t\"org.apache.derby.impl.sql.execute.SqlXmlExecutor\");",
"\t\t\tmb.pushNewComplete(addXmlOpMethodParams(acb, mb));",
"\t\t}"
],
"header": "@@ -469,6 +460,27 @@ public class BinaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
"\t\t\t**",
"\t\t\t** UNLESS we're generating an XML operator such as XMLEXISTS.",
"\t\t\t** In that case we want to generate",
"\t\t\t** ",
"\t\t\t** SqlXmlExecutor.method(left, right)\"",
"\t\t\t**",
"\t\t\t** and we've already pushed the SqlXmlExecutor object to",
"\t\t\t** the stack.",
"\t\t\tif (!xmlGen) {",
"\t\t\t\tmb.dup();",
"\t\t\t\tmb.cast(rightInterfaceType);",
"\t\t\t\t// stack: right,right",
"\t\t\t}"
],
"header": "@@ -526,15 +538,25 @@ public class BinaryOperatorNode extends ValueNode",
"removed": [
"\t\t\tmb.dup();",
"\t\t\tmb.cast(rightInterfaceType);",
"\t\t\t// stack: right,right"
]
},
{
"added": [
"\t\t\tif (xmlGen) {",
"\t\t\t// This is for an XMLEXISTS operation, so invoke the method",
"\t\t\t// on our execution-time object.",
"\t\t\t\tmb.callMethod(VMOpcode.INVOKEVIRTUAL, null,",
"\t\t\t\t\tmethodName, resultTypeName, 2);",
"\t\t\t}",
"\t\t\telse {",
"\t\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, receiverType,",
"\t\t\t\t\tmethodName, resultTypeName, 2);",
"\t\t\t}"
],
"header": "@@ -604,7 +626,16 @@ public class BinaryOperatorNode extends ValueNode",
"removed": [
"\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, receiverType, methodName, resultTypeName, 2);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnaryOperatorNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.types.SqlXmlUtil;"
],
"header": "@@ -37,6 +37,7 @@ import org.apache.derby.iapi.services.io.StoredFormatIds;",
"removed": []
},
{
"added": [
"\t// Class used to hold XML-specific objects required for",
"\t// parsing/serializing XML data.",
"\tprivate SqlXmlUtil sqlxUtil;",
""
],
"header": "@@ -113,6 +114,10 @@ public class UnaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
" // Create a new XML compiler object; the constructor",
" // here automatically creates the XML-specific objects ",
" // required for parsing/serializing XML, so all we",
" // have to do is create an instance.",
" sqlxUtil = new SqlXmlUtil();",
""
],
"header": "@@ -391,6 +396,12 @@ public class UnaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
"\t\t// For XML operator we do some extra work.",
"\t\tboolean xmlGen = (operatorType == XMLPARSE_OP) ||",
"\t\t\t(operatorType == XMLSERIALIZE_OP);",
"",
"\t\tif (xmlGen) {",
"\t\t// We create an execution-time object from which we call",
"\t\t// the necessary methods. We do this for two reasons: 1) this",
"\t\t// level of indirection allows us to separate the XML data type",
"\t\t// from the required XML implementation classes (esp. JAXP and",
"\t\t// Xalan classes)--for more on how this works, see the comments",
"\t\t// in SqlXmlUtil.java; and 2) this allows us to create the",
"\t\t// required XML objects a single time (which we did at bind time",
"\t\t// when we created a new SqlXmlUtil) and then reuse those objects",
"\t\t// for each row in the target result set, instead of creating",
"\t\t// new objects every time; see SqlXmlUtil.java for more.",
"\t\t\tmb.pushNewStart(",
"\t\t\t\t\"org.apache.derby.impl.sql.execute.SqlXmlExecutor\");",
"\t\t\tmb.pushNewComplete(addXmlOpMethodParams(acb, mb));",
"\t\t}",
""
],
"header": "@@ -631,6 +642,26 @@ public class UnaryOperatorNode extends ValueNode",
"removed": []
},
{
"added": [
"",
"\t\t\t/* If we're calling a method on a class (SqlXmlExecutor) instead",
"\t\t\t * of calling a method on the operand interface, then we invoke",
"\t\t\t * VIRTUAL; we then have 2 args (the operand and the local field)",
"\t\t\t * instead of one, i.e:",
"\t\t\t *",
"\t\t\t * SqlXmlExecutor.method(operand, field)",
"\t\t\t *",
"\t\t\t * instead of",
"\t\t\t *",
"\t\t\t * <operand>.method(field).",
"\t\t\t */",
"\t\t\tif (xmlGen) {",
"\t\t\t\tmb.callMethod(VMOpcode.INVOKEVIRTUAL, null,",
"\t\t\t\t\tmethodName, resultTypeName, 2);",
"\t\t\t}",
"\t\t\telse {",
"\t\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE,",
"\t\t\t\t\t(String) null, methodName, resultTypeName, 1);",
"\t\t\t}"
],
"header": "@@ -650,8 +681,26 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\t\t\tint numParams = 1 + addMethodParams(mb);",
"\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, (String) null, methodName, resultTypeName, numParams);"
]
},
{
"added": [
"\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, (String) null,",
"\t\t\t\tmethodName, resultTypeName, 0);"
],
"header": "@@ -659,8 +708,8 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
"\t\t\tint numParams = addMethodParams(mb);",
"\t\t\tmb.callMethod(VMOpcode.INVOKEINTERFACE, (String) null, methodName, resultTypeName, numParams);"
]
},
{
"added": [
" * Add some additional arguments to our method call for",
" * XML related operations like XMLPARSE and XMLSERIALIZE.",
" protected int addXmlOpMethodParams(ExpressionClassBuilder acb,",
"\t\tMethodBuilder mb) throws StandardException",
" if ((operatorType != XMLPARSE_OP) && (operatorType != XMLSERIALIZE_OP))",
" // nothing to do.",
" return 0;",
" // primitive types. Note: we don't have to save",
" // any objects for XMLSERIALIZE because it doesn't",
" // require any XML-specific objects: it just returns",
" // the serialized version of the XML value, which we",
" // already found when the XML value was created (ex.",
" // as part of the XMLPARSE work)."
],
"header": "@@ -738,25 +787,28 @@ public class UnaryOperatorNode extends ValueNode",
"removed": [
" * This method allows different operators to add",
" * primitive arguments to the generated method call,",
" * if needed.",
" protected int addMethodParams(MethodBuilder mb)",
" if (operatorType == XMLPARSE_OP) {",
" // We push whether or not we want to preserve whitespace.",
" mb.push(((Boolean)additionalArgs[0]).booleanValue());",
" return 1;",
" }",
" // primitive types."
]
}
]
}
] |
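The core idea in point 2 of the commit above is to compile the XPath query expression once per SQL statement and then evaluate the compiled form for each row. A small sketch of that compile-once/evaluate-per-row pattern, using the standard JAXP XPath API rather than the low-level Xalan API the commit actually uses (XmlExistsSketch is illustrative only):

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Sketch of "compile the query expression once per statement, then
// reuse it for every row", the pattern behind SqlXmlUtil/SqlXmlExecutor.
public class XmlExistsSketch {
    private final XPathExpression compiled;   // built once, reused per row

    public XmlExistsSketch(String queryExpr) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        this.compiled = xpath.compile(queryExpr);   // once per statement
    }

    // Called once per row: evaluates without recompiling the expression.
    public boolean xmlExists(String xmlDoc) throws Exception {
        Boolean b = (Boolean) compiled.evaluate(
                new InputSource(new StringReader(xmlDoc)),
                XPathConstants.BOOLEAN);
        return b.booleanValue();
    }
}
```

Because the expression must be known at compile time for this to pay off, the commit's point 3 (requiring a string literal rather than a parameter) follows naturally from this design.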
derby-DERBY-5677-c1192c0b
|
DERBY-5677: ClassNotFoundException when running suites.All without derbynet.jar
Exclude tests that cannot run without derbynet.jar if the network server
classes are not available on the classpath.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1308434 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5678-70bfa379
|
DERBY-5678: LocalizedDisplayScriptTest fails on JVMs that don't support EUC_JP encoding
Skip the test on platforms that don't support EUC_JP.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1308436 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5679-3abd440f
|
DERBY-5679: Fill nonexistent columns with NULLS during update of later column in the row.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330877 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" * COLUMN_CREATE_NULL - the column was recently added.",
" * it doesn't actually exist in the on-disk row yet.",
" * we will need to put a null in it as soon as possible.",
" * see DERBY-5679.",
" protected static final int COLUMN_CREATE_NULL = 3;"
],
"header": "@@ -327,10 +327,15 @@ public class StoredPage extends CachedPage",
"removed": []
},
{
"added": [
" // columns but not providing any value. this can happen",
" // if you are updating a new column after using",
" // ALTER TABLE to add a couple new columns.",
" // see DERBY-5679.",
" COLUMN_CREATE_NULL, overflowThreshold);"
],
"header": "@@ -3968,12 +3973,14 @@ public class StoredPage extends CachedPage",
"removed": [
" // columns but not providing any value, strange ...",
"",
" columnFlag, overflowThreshold);"
]
},
{
"added": [
" if ( (column == null) && (columnFlag != COLUMN_CREATE_NULL))"
],
"header": "@@ -6177,7 +6184,7 @@ public class StoredPage extends CachedPage",
"removed": [
" if (column == null)"
]
}
]
}
] |
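The diff above handles a row whose on-disk image is shorter than the logical row because ALTER TABLE added columns that were never written: an update of a later column must pad the gap with NULLs. A toy illustration of that padding rule on a plain array (RowPad is hypothetical; the real logic lives in StoredPage's column flags):

```java
// Sketch of "fill nonexistent columns with NULLs" (DERBY-5679): when an
// update targets a column index beyond the end of the stored row, the
// intervening slots are padded with nulls before the value is written.
public class RowPad {
    public static Object[] setColumn(Object[] storedRow, int col, Object value) {
        Object[] row = storedRow;
        if (col >= row.length) {
            // copyOf fills the newly created trailing slots with null.
            row = java.util.Arrays.copyOf(storedRow, col + 1);
        }
        row[col] = value;
        return row;
    }
}
```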
derby-DERBY-5679-76b75c3e
|
DERBY-5679: Add more test cases to verify correct behavior of rollback on rows with lots of columns and on long rows.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1331484 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5680-7e333995
|
DERBY-5680: indexStat daemon processing tables over an over even when there are no changes in the tables
Added functionality in the update statistics code to drop statistics considered
disposable; orphaned (i.e. the referenced index doesn't exist), or not
required (the optimizer doesn't need the statistics).
Disposable statistics are only dropped when the istat daemon kicks in, or
SYSCS_UPDATE_STATISTICS is run without specifying an index.
The functionality is not enabled for soft-upgraded databases.
Included a debug property to force the old behavior.
Patch file: derby-5680-1b-remove_disposable_stats.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1340549 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
"import java.util.List;"
],
"header": "@@ -23,6 +23,7 @@ package org.apache.derby.impl.services.daemon;",
"removed": []
},
{
"added": [
" /**",
" * Tells if the user want us to fall back to pre 10.9 behavior.",
" * <p>",
" * This means do not drop any disposable statistics, and do not skip",
" * statistics for single-column primary key indexes.",
" */",
" private static final boolean FORCE_OLD_BEHAVIOR =",
" PropertyUtil.getSystemBoolean(",
" Property.STORAGE_AUTO_INDEX_STATS_DEBUG_FORCE_OLD_BEHAVIOR);",
""
],
"header": "@@ -118,6 +119,16 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" /** Tells if the database is older than 10.9 (for soft upgrade). */",
" private final boolean dbIsPre10_9;"
],
"header": "@@ -133,6 +144,8 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" this.dbIsPre10_9 = checkIfDbIsPre10_9(db);"
],
"header": "@@ -206,6 +219,7 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" /** Tells if the database is older than 10.9. */",
" private boolean checkIfDbIsPre10_9(Database db) {",
" try {",
" // Note the negation.",
" return !db.getDataDictionary().checkVersion(",
" DataDictionary.DD_VERSION_DERBY_10_9, null);",
" } catch (StandardException se) {",
" if (SanityManager.DEBUG) {",
" SanityManager.THROWASSERT(\"dd version check failed\", se);",
" }",
" return true;",
" }",
" }",
""
],
"header": "@@ -224,6 +238,20 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" updateIndexStatsMinion(lcc, td, null, AS_BACKGROUND_TASK);"
],
"header": "@@ -318,8 +346,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" ConglomerateDescriptor[] cds = td.getConglomerateDescriptors();",
" updateIndexStatsMinion(lcc, td, cds, AS_BACKGROUND_TASK);"
]
},
{
"added": [
" * <p>",
" * <strong>API note</strong>: Using {@code null} to update the statistics",
" * for all conglomerates is preferred over explicitly passing an array with",
" * all the conglomerates for the table. Doing so allows for some",
" * optimizations, and will cause a disposable statistics check to be",
" * performed.",
" * conglomerates will be ignored), {@code null} means all indexes"
],
"header": "@@ -364,11 +391,17 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" * conglomerates will be ignored)"
]
},
{
"added": [
" final boolean identifyDisposableStats =",
" (cds == null && !FORCE_OLD_BEHAVIOR && !dbIsPre10_9);",
" // Fetch descriptors if we're updating statistics for all indexes.",
" if (cds == null) {",
" cds = td.getConglomerateDescriptors();",
" }"
],
"header": "@@ -378,6 +411,12 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" // Check for disposable statistics if we have the required information.",
" // Note that the algorithm would drop valid statistics entries if",
" // working on a subset of the table conglomerates/indexes.",
" if (identifyDisposableStats) {",
" List existingStats = td.getStatistics();",
" StatisticsDescriptor[] stats = (StatisticsDescriptor[])",
" existingStats.toArray(",
" new StatisticsDescriptor[existingStats.size()]);",
" // For now we know that disposable stats only exist in two cases,",
" // and that we'll only get one match for both of them per table:",
" // a) orphaned statistics entries (i.e. DERBY-5681)",
" // b) single-column primary keys (TODO: after DERBY-3790 is done)",
" for (int si=0; si < stats.length; si++) {",
" UUID referencedIndex = stats[si].getReferenceID();",
" boolean isValid = false;",
" for (int ci=0; ci < conglomerateNumber.length; ci++) {",
" if (conglomerateNumber[ci] == -1) {",
" continue;",
" }",
" if (referencedIndex.equals(objectUUID[ci])) {",
" isValid = true;",
" break;",
" }",
" }",
" // If the statistics entry is orphaned or not required, drop",
" // the statistics entries for this index. Those we really need",
" // will be rebuilt below. We expect this scenario to be rare,",
" // typically you would only see it on upgrades. On the other",
" // hand, this check is cheap enough such that it is feasible to",
" // do it as part of the stats update to get a \"self healing\"",
" // mechanism in case of another bug like DERBY-5681 in Derby.",
" if (!isValid) {",
" String msg = \"dropping disposable statistics entry \" +",
" stats[si].getUUID() + \" for table \" +",
" stats[si].getTableUUID();",
" logAlways(td, null, msg);",
" trace(1, msg);",
" DataDictionary dd = lcc.getDataDictionary();",
" if (!lcc.dataDictionaryInWriteMode()) {",
" dd.startWriting(lcc);",
" }",
" dd.dropStatisticsDescriptors(",
" td.getUUID(), stats[si].getReferenceID(), tc); ",
" if (asBackgroundTask) {",
" lcc.internalCommit(true);",
" }",
" }",
" }",
" }",
""
],
"header": "@@ -417,6 +456,56 @@ public class IndexStatisticsDaemonImpl",
"removed": []
}
]
}
] |
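The disposable-statistics check added above keeps a statistics entry only if the index it references still exists; everything else is an orphan to be dropped (and rebuilt if actually needed). A minimal sketch of that validity scan, with plain Strings standing in for Derby's UUIDs (DisposableStats is illustrative, not the daemon's real code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the orphan check in the commit above: a stats entry is valid
// only if its referenced index id matches some existing conglomerate.
public class DisposableStats {
    /** Returns the referenced-index ids of stats entries that would be dropped. */
    public static List<String> findOrphans(List<String> statsRefIds,
                                           List<String> existingIndexIds) {
        List<String> orphans = new ArrayList<>();
        for (String ref : statsRefIds) {
            boolean isValid = false;
            for (String idx : existingIndexIds) {
                if (ref.equals(idx)) {
                    isValid = true;
                    break;
                }
            }
            if (!isValid) {
                orphans.add(ref);   // orphaned entry, e.g. DERBY-5681
            }
        }
        return orphans;
    }
}
```

As the commit notes, the scan is cheap relative to the stats update itself, which is what makes it reasonable to run it on every full-table update as a "self healing" pass.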
derby-DERBY-5680-a2f00b4a
|
DERBY-6283 indexStat daemon processing tables over and over even when there are no changes in the tables in soft upgraded database.
Changed system to always drop orphaned stats during update statistics call.
Without this change soft upgraded systems running on 10.8 or higher derby
software, that had an orphaned statistic would spin forever in the index
stat daemon due to the same problem fixed by DERBY-5680 for hard
upgraded databases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1502319 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
"",
" // can only properly identify disposable stats if cds == null, ",
" // which means we are processing all indexes on the conglomerate.",
" final boolean identifyDisposableStats = (cds == null);",
"",
"",
" long[] conglomerateNumber = new long[cds.length];",
" ExecIndexRow[] indexRow = new ExecIndexRow[cds.length];",
""
],
"header": "@@ -414,16 +414,20 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" final boolean identifyDisposableStats =",
" (cds == null && skipDisposableStats);",
" long[] conglomerateNumber = new long[cds.length];",
" ExecIndexRow[] indexRow = new ExecIndexRow[cds.length];",
" UUID[] objectUUID = new UUID[cds.length];"
]
},
{
"added": [
"",
"",
" // create a list of indexes that should have statistics, by looking",
" // at all indexes on the conglomerate, and conditionally skipping",
" // unique single column indexes. This set is the \"non disposable",
" // stat list\".",
" UUID[] non_disposable_objectUUID = new UUID[cds.length];",
""
],
"header": "@@ -434,6 +438,14 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
"",
""
],
"header": "@@ -444,7 +456,9 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" // at this point have found a stat for an existing",
" // index which is not a single column unique index, add it",
" // to the list of \"non disposable stats\"",
" conglomerateNumber[i] = cds[i].getConglomerateNumber();",
" non_disposable_objectUUID[i] = cds[i].getUUID();"
],
"header": "@@ -454,9 +468,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" conglomerateNumber[i] = cds[i].getConglomerateNumber();",
"",
" objectUUID[i] = cds[i].getUUID();"
]
},
{
"added": [
" // Check for and drop disposable statistics if we have the required ",
" // information.",
" //",
" // The above loop has populated \"cds\" with only existing indexes that",
" // are not single column unique.",
"",
"",
" // Note this loop is not controlled by the skipDisposableStats ",
" // flag. The above loop controls if we drop single column unique",
" // index stats or not. In all cases we are going to drop ",
" // stats with no associated index (orphaned stats).",
" ",
"",
"",
" //",
" // This loop looks for statistic entries to delete. It deletes",
" // those entries that don't have a matching conglomerate in the",
" if (referencedIndex.equals(non_disposable_objectUUID[ci])) {"
],
"header": "@@ -468,23 +484,39 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" // Check for disposable statistics if we have the required information.",
" if (referencedIndex.equals(objectUUID[ci])) {"
]
},
{
"added": [
" int sci = 0;"
],
"header": "@@ -518,7 +550,7 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" int sci = 0;"
]
},
{
"added": [
"",
" int numCols = indexRow[indexNumber].nColumns() - 1;",
" long[] cardinality = new long[numCols];",
" KeyComparator cmp = new KeyComparator(indexRow[indexNumber]);"
],
"header": "@@ -535,10 +567,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" int numCols = indexRow[indexNumber].nColumns() - 1;",
" long[] cardinality = new long[numCols];",
" KeyComparator cmp = new KeyComparator(indexRow[indexNumber]);"
]
}
]
}
] |
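The hunks above build a "non disposable stat" list by walking all indexes on the conglomerate and conditionally skipping unique single-column indexes, whose statistics Derby can do without (their cardinality always equals the row count). A hedged sketch of that filter, using a hypothetical minimal index descriptor in place of Derby's much richer ConglomerateDescriptor:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NonDisposableStats {
    // Hypothetical minimal index descriptor for illustration only.
    static final class Index {
        final boolean unique;
        final int columnCount;
        Index(boolean unique, int columnCount) {
            this.unique = unique;
            this.columnCount = columnCount;
        }
    }

    // An index belongs on the "non disposable stat" list unless it is a
    // single-column unique index.
    static boolean needsStatistics(Index idx) {
        return !(idx.unique && idx.columnCount == 1);
    }

    static List<Index> nonDisposable(List<Index> all) {
        List<Index> keep = new ArrayList<>();
        for (Index idx : all) {
            if (needsStatistics(idx)) {
                keep.add(idx);
            }
        }
        return keep;
    }

    public static void main(String[] args) {
        List<Index> all = Arrays.asList(
                new Index(true, 1),   // unique, single column: skipped
                new Index(true, 2),   // unique, multi column: kept
                new Index(false, 1)); // non-unique: kept
        System.out.println(nonDisposable(all).size()); // prints 2
    }
}
```

Note the commit's key point: orphaned stats (those with no associated index at all) are dropped unconditionally, while only the single-column-unique skip is governed by the skipDisposableStats flag.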
derby-DERBY-5681-29a19ff9
|
DERBY-5681 When a foreign key constraint on a table is dropped, the associated statistics row for the conglomerate is not removed
This problem happens because when two constraints share the same backing index, we conditionally dropped the statistics. Instead, this fix makes sure that the statistics are always dropped, even if the underlying backing index is still valid (and hence won't be dropped and recreated) for other constraints. I ran derbyall and the junit suite and they both ran fine with no errors. I have also added a few tests for the issue.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1329359 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5681-a6a07338
|
DERBY-4115 Provide a way to drop statistics information
The details of all the changes in this commit are listed below.
1)Added a new routine SYSCS_DROP_STATISTICS, with public access similar to SYSCS_UPDATE_STATISTICS. This happens in DataDictionaryImpl, where SYSCS_DROP_STATISTICS is added to the list of public access procedures in sysUtilProceduresWithPublicAccess
2)The new stored procedure implementation is similar to update statistics, i.e. we allow the routine to go through ALTER TABLE, where permission/privilege checking and table/schema/index name validations happen automatically, and we implement the routine logic through an extension of the ALTER TABLE syntax. This new ALTER TABLE syntax (same as we did for update statistics) is internal only and won't be available to an end user directly.
3)This commit changes sqlgrammar.jj to recognize the following internal syntaxes for ALTER TABLE
a)ALTER TABLE tablename ALL DROP STATISTICS
The existing(corresponding syntax) for update statistics is as follows
ALTER TABLE tablename ALL UPDATE STATISTICS
b)ALTER TABLE tablename STATISTICS DROP indexname
The existing(corresponding syntax) for update statistics is as follows
ALTER TABLE tablename UPDATE STATISTICS indexname
Notice the two syntaxes for index level statistics are different for drop vs update.(the reason for the syntax difference is explained above)
4)After the statistics are dropped, we send invalidation signal to dependent statements so they would get recompiled when they are executed next time. This will make sure that they pick the correct plan given the statistics for the table.
5)The commit takes care of some of the test failures (expected failures because of the addition of a new system procedure).
6)The commit adds basic upgrade test for the new procedure. This test ensures that drop statistics procedure is available only after hard upgrade.
7)While writing the upgrade tests, I found that a meaningful test for drop statistics could only be written for Derby releases 10.5 and higher. We have found that when constraints end up sharing the same backing index, Derby won't create statistics for them. This is issue DERBY-5702. But if we run update statistics on that constraint, we will be able to get the statistics for such a constraint. Later, when the constraint is dropped, because of DERBY-5681, the statistics row for such a constraint (one that shares its backing index with another constraint) is never dropped. We can use the drop statistics procedure introduced in this jira to take care of such hanging statistics rows. But since the update statistics procedure is only available in 10.5 and higher, I couldn't demonstrate the use of drop statistics to drop hanging statistics rows.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338017 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SystemProcedures.java",
"hunks": [
{
"added": [
"\t * @exception SQLException"
],
"header": "@@ -733,7 +733,7 @@ public class SystemProcedures {",
"removed": [
"\t * @exception StandardException Standard exception policy."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\t\t\"SYSCS_DROP_STATISTICS\", "
],
"header": "@@ -464,6 +464,7 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java",
"hunks": [
{
"added": [
"\t * dropStatistics will indicate that we are here for dropping the",
"\t * statistics. It could be statistics of just one index or all the",
"\t * indexes on a given table. ",
"\t */",
"\tprivate\t boolean\t\t\t\t\t dropStatistics;",
"\t/**",
"\t * The flag dropStatisticsAll will tell if we are going to drop the ",
"\t * statistics of all indexes or just one index on a table. ",
"\t */",
"\tprivate\t boolean\t\t\t\t\t dropStatisticsAll;",
"\t/**",
"\t * If statistic is getting updated/dropped for just one index, then ",
"\t * indexNameForStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated/dropped.",
"\tprivate\tString\t\t\t\tindexNameForStatistics;"
],
"header": "@@ -70,11 +70,22 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t * If statistic is getting updated for just one index, then ",
"\t * indexNameForUpdateStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated.",
"\tprivate\tString\t\t\t\tindexNameForUpdateStatistics;"
]
},
{
"added": [],
"header": "@@ -116,33 +127,6 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"",
"\t/**",
"\t * Initializer for a AlterTableNode for updating the statistics. The user",
"\t * can ask for update statistic of all the indexes or only a specific index",
"\t *",
"\t * @param objectName\t\tThe name of the table whose index(es) will have",
"\t * their statistics updated.",
"\t * @param updateStatisticsAll\tIf true then update the statistics of all ",
"\t * the indexes on the table. If false, then update",
"\t * the statistics of only the index provided as",
"\t * 3rd parameter here",
"\t * @param indexName\t\t\tOnly used if updateStatisticsAll is set to ",
"\t * false. ",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tpublic void init(Object objectName,",
"\t\t\tObject updateStatisticsAll,",
"\t\t\tObject indexName)",
"\tthrows StandardException",
"\t{",
"\t\tinitAndCheck(objectName);",
"\t\tthis.updateStatisticsAll = ((Boolean) updateStatisticsAll).booleanValue();",
"\t\tthis.indexNameForUpdateStatistics = (String)indexName;",
"\t\tschemaDescriptor = getSchemaDescriptor();",
"\t\tupdateStatistics = true;",
"\t}"
]
},
{
"added": [
"\t * Initializer for a AlterTableNode. The parameter values have different",
"\t * meanings based on what kind of ALTER TABLE is taking place. ",
"\t * ",
"\t * @param changeType\t\tADD_TYPE or DROP_TYPE or UPDATE_STATISTICS or",
"\t * or DROP_STATISTICS",
"\t * @param param1 \t\t\tFor ADD_TYPE or DROP_TYPE, param1 gives the",
"\t * elements impacted by ALTER TABLE.",
"\t * For UPDATE_STATISTICS or or DROP_STATISTICS,",
"\t * param1 is boolean - true means update or drop",
"\t * the statistics of all the indexes on the table.",
"\t * False means, update or drop the statistics of",
"\t * only the index name provided by next parameter.",
"\t * @param param2 \t\t\tFor ADD_TYPE or DROP_TYPE, param2 gives the",
"\t * new lock granularity, if any",
"\t * For UPDATE_STATISTICS or DROP_STATISTICS,",
"\t * param2 can be the name of the specific index",
"\t * whose statistics will be dropped/updated. This",
"\t * param is used only if param1 is set to false",
"\t * @param param3\t\t\tFor DROP_TYPE, param3 can indicate if the drop",
"\t * column is CASCADE or RESTRICTED. This param is",
"\t * ignored for all the other changeType.",
"\t\t\t\t\t\t\tObject param1,",
"\t\t\t\t\t\t\tObject param2,",
"\t\t\t\t\t\t\tObject param3 )",
"\t\t",
"\t\tint[]\tct = (int[]) changeType;",
"\t\t",
"\t\t\t\tthis.tableElementList = (TableElementList) param1;",
"\t\t\t\tthis.lockGranularity = ((Character) param2).charValue();",
"\t\t\t\tint[]\tbh = (int[]) param3;",
"\t\t\t\tthis.behavior = bh[0];",
"\t\t\t\tbreak;",
"",
"\t\t case UPDATE_STATISTICS:",
"\t\t\t\tthis.updateStatisticsAll = ((Boolean) param1).booleanValue();",
"\t\t\t\tthis.indexNameForStatistics = (String)param2;",
"\t\t\t\tupdateStatistics = true;",
"\t\t\t\tbreak;",
"\t\t case DROP_STATISTICS:",
"\t\t\t\tthis.dropStatisticsAll = ((Boolean) param1).booleanValue();",
"\t\t\t\tthis.indexNameForStatistics = (String)param2;",
"\t\t\t\tdropStatistics = true;"
],
"header": "@@ -194,39 +178,67 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t * Initializer for a AlterTableNode",
"\t *",
"\t * @param tableElementList\tThe alter table action",
"\t * @param lockGranularity\tThe new lock granularity, if any",
"\t * @param changeType\t\tADD_TYPE or DROP_TYPE",
"\t * @param behavior\t\t\tIf drop column is CASCADE or RESTRICTED",
"",
"\t\t\t\t\t\t\tObject tableElementList,",
"\t\t\t\t\t\t\tObject lockGranularity,",
"\t\t\t\t\t\t\tObject behavior )",
"\t\tthis.tableElementList = (TableElementList) tableElementList;",
"\t\tthis.lockGranularity = ((Character) lockGranularity).charValue();",
"\t\tint[]\tct = (int[]) changeType, bh = (int[]) behavior;",
"\t\tthis.behavior = bh[0];"
]
},
{
"added": [
"\t\t\t\t\"dropStatistics: \" + dropStatistics + \"\\n\" +",
"\t\t\t\t\"dropStatisticsAll: \" + dropStatisticsAll + \"\\n\" +",
"\t\t\t\t\"indexNameForStatistics: \" +",
"\t\t\t\tindexNameForStatistics + \"\\n\";"
],
"header": "@@ -259,8 +271,10 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t\t\t\t\"indexNameForUpdateStatistics: \" +",
"\t\t\t\t indexNameForUpdateStatistics + \"\\n\";"
]
},
{
"added": [
"\t\t//Check if we are in alter table to update/drop the statistics. If yes,",
"\t\t// then check if we are here to update/drop the statistics of a specific",
"\t\t// index. If yes, then verify that the indexname provided is a valid one.",
"\t\tif ((updateStatistics && !updateStatisticsAll) || (dropStatistics && !dropStatisticsAll))",
"\t\t\t\tcd = dd.getConglomerateDescriptor(indexNameForStatistics, schemaDescriptor, false);",
"\t\t\t\t\t\tschemaDescriptor.getSchemaName() + \".\" + indexNameForStatistics);"
],
"header": "@@ -433,20 +447,20 @@ public String statementToString()",
"removed": [
"\t\t//Check if we are in alter table to update the statistics. If yes, then",
"\t\t//check if we are here to update the statistics of a specific index. If",
"\t\t//yes, then verify that the indexname provided is a valid one.",
"\t\tif (updateStatistics && !updateStatisticsAll)",
"\t\t\t\tcd = dd.getConglomerateDescriptor(indexNameForUpdateStatistics, schemaDescriptor, false);",
"\t\t\t\t\t\tschemaDescriptor.getSchemaName() + \".\" + indexNameForUpdateStatistics);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t * dropStatistics will indicate that we are here for dropping the",
"\t * statistics. It could be statistics of just one index or all the",
"\t * indexes on a given table. ",
"\t */",
" private\t boolean\t\t\t\t\t dropStatistics;",
"\t/**",
"\t * The flag dropStatisticsAll will tell if we are going to drop the ",
"\t * statistics of all indexes or just one index on a table. ",
"\t */",
" private\t boolean\t\t\t\t\t dropStatisticsAll;",
"\t/**",
"\t * If statistic is getting updated/dropped for just one index, then ",
"\t * indexNameForStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated/dropped.",
" private\t String\t\t\t\t\t\tindexNameForStatistics;",
""
],
"header": "@@ -131,11 +131,23 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * If statistic is getting updated for just one index, then ",
"\t * indexNameForUpdateStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated.",
" private\t String\t\t\t\t\t\tindexNameForUpdateStatistics;"
]
},
{
"added": [
"\t * @param dropStatistics\t\tTRUE means we are here to drop statistics",
"\t * @param dropStatisticsAll\tTRUE means we are here to drop statistics",
"\t * \tof all the indexes. False means we are here to drop statistics of",
"\t * \tonly one index.",
"\t * @param indexNameForStatistics\tWill name the index whose statistics",
"\t * \twill be updated/dropped. This param is looked at only if ",
"\t * \tupdateStatisticsAll/dropStatisticsAll is set to false and",
"\t * \tupdateStatistics/dropStatistics is set to true."
],
"header": "@@ -196,8 +208,14 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * @param indexNameForUpdateStatistics\tWill name the index whose statistics",
"\t * \twill be updated"
]
},
{
"added": [
" boolean dropStatistics,",
" boolean dropStatisticsAll,",
" String indexNameForStatistics)"
],
"header": "@@ -217,7 +235,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
" String\t indexNameForUpdateStatistics)"
]
},
{
"added": [
"\t\tthis.dropStatistics \t= dropStatistics;",
"\t\tthis.dropStatisticsAll = dropStatisticsAll;",
"\t\tthis.indexNameForStatistics = indexNameForStatistics;"
],
"header": "@@ -236,7 +256,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tthis.indexNameForUpdateStatistics = indexNameForUpdateStatistics;"
]
},
{
"added": [
"",
" if (dropStatistics) {",
" dropStatistics();",
" return;",
"\t\t}"
],
"header": "@@ -330,6 +352,11 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
},
{
"added": [
"\t/**",
"\t * Drop statistics of either all the indexes on the table or only one",
"\t * specific index depending on what user has requested.",
"\t * ",
"\t * @throws StandardException",
"\t */",
" private void dropStatistics()",
" throws StandardException {",
" td = dd.getTableDescriptor(tableId);",
"",
" dd.startWriting(lcc);",
" dm.invalidateFor(td, DependencyManager.UPDATE_STATISTICS, lcc);",
"",
" if (dropStatisticsAll) {",
" dd.dropStatisticsDescriptors(td.getUUID(), null, tc);",
" } else {",
" ConglomerateDescriptor cd = ",
" dd.getConglomerateDescriptor(",
" indexNameForStatistics, sd, false);",
" dd.dropStatisticsDescriptors(td.getUUID(), cd.getUUID(), tc);",
" }",
" }",
""
],
"header": "@@ -649,6 +676,29 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java",
"hunks": [
{
"added": [
"\t * @param dropStatistics\t\tTRUE means we are here to drop statistics",
"\t * @param dropStatisticsAll\tTRUE means we are here to drop statistics",
"\t * \tof all the indexes. False means we are here to drop statistics of",
"\t * \tonly one index.",
"\t * @param indexNameForStatistics\tWill name the index whose statistics",
"\t * \twill be updated/dropped. This param is looked at only if ",
"\t * \tupdateStatisticsAll/dropStatisticsAll is set to false and",
"\t * \tupdateStatistics/dropStatistics is set to true.",
"\t * ."
],
"header": "@@ -137,9 +137,15 @@ public class GenericConstantActionFactory",
"removed": [
"\t * @param indexNameForUpdateStatistics\tWill name the index whose statistics",
"\t * \twill be updated. This param is looked at only if updateStatisticsAll",
"\t * \tis set to false."
]
},
{
"added": [
"\t\tboolean\t\t\t\t\t\tdropStatistics,",
"\t\tboolean\t\t\t\t\t\tdropStatisticsAll,",
"\t\tString\t\t\t\t\t\tindexNameForStatistics"
],
"header": "@@ -161,7 +167,9 @@ public class GenericConstantActionFactory",
"removed": [
"\t\tString\t\t\t\t\t\tindexNameForUpdateStatistics"
]
}
]
}
] |
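The reworked init() in the hunks above keys its behavior off changeType: for the statistics variants, param1 is the "all indexes" boolean and param2 the optional index name. The internal syntaxes listed in the commit message map onto those flags. A hypothetical sketch of that dispatch (the constant values and method are illustrative, not Derby's actual fields):

```java
public class AlterTableDispatch {
    // Illustrative stand-ins for the changeType constants named in the
    // init() javadoc; the real values live in Derby's AlterTableNode.
    static final int ADD_TYPE = 1;
    static final int DROP_TYPE = 2;
    static final int UPDATE_STATISTICS = 3;
    static final int DROP_STATISTICS = 4;

    // param1 for the statistics variants is the all-indexes flag and
    // param2 the optional index name, per the javadoc above.
    static String describe(int changeType, boolean allIndexes, String indexName) {
        switch (changeType) {
            case UPDATE_STATISTICS:
                return allIndexes
                        ? "ALTER TABLE t ALL UPDATE STATISTICS"
                        : "ALTER TABLE t UPDATE STATISTICS " + indexName;
            case DROP_STATISTICS:
                // Note the asymmetric index-level syntax: STATISTICS DROP.
                return allIndexes
                        ? "ALTER TABLE t ALL DROP STATISTICS"
                        : "ALTER TABLE t STATISTICS DROP " + indexName;
            default:
                return "regular ALTER TABLE";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(DROP_STATISTICS, true, null));
        System.out.println(describe(DROP_STATISTICS, false, "idx1"));
    }
}
```

The strings echo the internal-only grammar the commit adds; end users reach it through SYSCS_UTIL.SYSCS_DROP_STATISTICS rather than typing this ALTER TABLE form directly.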
derby-DERBY-5681-b0e73bc6
|
DERBY-5681: When a foreign key constraint on a table is dropped, the associated statistics row for the conglomerate is not removed
Made test less sensitive to statistics created by other tests.
Patch file: derby-5681-3a-test.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1341002 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5681-c1e0f8ee
|
DERBY-4115/DERBY-5681: Provide a way to drop statistics information
Moved upgrade test from BasicSetup to Changes10_9.
Includes some simplifications that could be made because of the move.
Patch file: derby-4115-7a-move_test.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1341059 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5683-7840c516
|
DERBY-5683: BaseJDBCTestCase.getDatabaseProperty() should close resources before returning
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1310413 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5686-15097dd7
|
DERBY-5686; multiple intermittent errors in nightly tests during DriverMgrAuthenticationTest test. reason: An SQL data change is not permitted for a read-only connection, user or database.
updating retry logic in DatabasePropertyTestSetup
also adding code to CleanDatabaseTestSetup to catch any test leaving a
connection in read-only mode and making it fail.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1336349 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/CleanDatabaseTestSetup.java",
"hunks": [
{
"added": [
" // See DERBY-5686 - perhaps there's a test that leaves a ",
" // connection in read-only state - let's check here and ",
" // if there's a conn that's read-only, unset it, and make",
" // the test fail so we find it.",
" boolean ok=true;",
" if (conn.isReadOnly())",
" {",
" conn.setReadOnly(false);",
" ok=false;",
" }"
],
"header": "@@ -150,7 +150,17 @@ public class CleanDatabaseTestSetup extends BaseJDBCTestSetup {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/DatabasePropertyTestSetup.java",
"hunks": [
{
"added": [
" try {",
" conn.close();",
" } catch (SQLException isqle) {",
" if (sqle.getSQLState()==\"25001\")",
" {",
" // the transaction is still active. let's commit what we have.",
" conn.commit();",
" conn.close();",
" } else {",
" System.out.println(\"close failed - see SQLState.\");",
" throw sqle;",
" }",
" }"
],
"header": "@@ -300,7 +300,19 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" conn.close();"
]
},
{
"added": [
" try {",
" conn.close();",
" } catch (SQLException isqle) {",
" if (sqle.getSQLState()==\"25001\")",
" {",
" // the transaction is still active. let's commit what we have.",
" conn.commit();",
" conn.close();",
" } else {",
" System.out.println(\"close failed - see SQLState.\");",
" throw sqle;",
" }",
" }",
""
],
"header": "@@ -369,8 +381,21 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" conn.close();"
]
}
]
}
] |
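The retry pattern added to DatabasePropertyTestSetup above is: attempt conn.close(), and if it fails with SQLState 25001 (a transaction is still active), commit the pending work and close again. A self-contained sketch using a tiny stub interface in place of java.sql.Connection (the stub is an assumption for illustration; the real code works on a live JDBC connection):

```java
import java.sql.SQLException;

public class CloseWithRetry {
    /** Minimal stand-in for the subset of java.sql.Connection used here. */
    interface Conn {
        void close() throws SQLException;
        void commit() throws SQLException;
    }

    /**
     * Close conn; if the close fails because a transaction is still
     * active (SQLState 25001), commit and close again. Any other
     * failure is rethrown.
     */
    static void closeCommittingIfActive(Conn conn) throws SQLException {
        try {
            conn.close();
        } catch (SQLException sqle) {
            if ("25001".equals(sqle.getSQLState())) {
                // The transaction is still active: commit what we have.
                conn.commit();
                conn.close();
            } else {
                throw sqle;
            }
        }
    }
}
```

The sketch compares the SQLState with equals() on the caught exception, which is the safe form of the check the diff performs with == on string literals.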
derby-DERBY-5686-75e97c18
|
DERBY-5686; multiple intermittent errors in nightly tests during DriverMgrAuthenticationTest test. reason: An SQL data change is not permitted for a read-only connection, user or database.
Another attempt to catch the error and print info - now if it happens in
DatabasePropertyTestSetup.setUp.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1332484 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/DatabasePropertyTestSetup.java",
"hunks": [
{
"added": [
" System.out.println(\"Apparently this is a read-only connection in teardown()? Get some data:\");"
],
"header": "@@ -292,7 +292,7 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" System.out.println(\"Apparently this is a read-only connection? Get some data:\");"
]
},
{
"added": [
" {",
" Connection conn = getConnection();",
" try {",
" attemptSetProperties(values, conn);",
" } catch (SQLException sqle) {",
" // To try to prevent the error situation of DERBY-5686, which",
" // cascades to many test failures, catch ERROR 25502, and if it occurs",
" // try to gather some information, close the connection,",
" // and retry the clearing of the properties on a new connection",
" if (sqle.getSQLState().equals(\"25502\")) {",
" // firstly, check on the state of the connection when we",
" // get this error",
" System.out.println(\"Apparently this is a read-only connection? Get some data:\");",
" System.out.println(\"conn.isClosed: \" + conn.isClosed());",
" System.out.println(\"conn.isReadOnly: \" + conn.isReadOnly());",
" System.out.println(\"conn.getHoldability: \" + conn.getHoldability());",
" System.out.println(\"conn.getTransactionIsolation: \" + conn.getTransactionIsolation());",
" System.out.println(\"conn.getAutoCommit: \" + conn.getAutoCommit());",
" // now try to close the connection, then try open a new one, ",
" // and try to executeUpdate again.",
" conn.close();",
" Connection conn2 = getConnection();",
" // check if this second connection is read-only",
" if (conn2.isReadOnly())",
" {",
" System.out.println(\"Sorry, conn2 is also read-only, won't retry\");",
" // give up",
" throw sqle;",
" }",
" else",
" { ",
" // retry",
" System.out.println(\"retrying to set the Properties\");",
" attemptSetProperties(values, conn2);",
" }",
" }",
" }",
" }",
" ",
" private void attemptSetProperties(Properties values, Connection coonn) throws SQLException"
],
"header": "@@ -349,6 +349,46 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": []
}
]
}
] |
derby-DERBY-5686-7aa3f882
|
DERBY-5686: multiple intermittent errors in nightly tests during DriverMgrAuthenticationTest test. reason: An SQL data change is not permitted for a read-only connection, user or database.
Make assertDirectoryDeleted accept if the root directory disappears under it
even if it couldn't delete all the files inside. The prime example is when
db.lck is the only file that can't be deleted. The reason is that Derby hasn't
shut down before the deletion of the database directory starts. Depending on
timing, assertDirectoryDeleted may be able to delete all files (including the
root directory) except the lock file, and the lock file is deleted by Derby
itself.
This patch doesn't fix this JIRA issue, it's a general improvement to the
deletion logic only.
Patch file: DERBY-5686_3.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1334313 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5686-eb472301
|
DERBY-5686; multiple intermittent errors in nightly tests during DriverMgrAuthenticationTest test. reason: An SQL data change is not permitted for a read-only connection, user or database.
implementing a change that waits for the database to shut down completely.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1346174 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/ConnectionPoolDataSourceConnector.java",
"hunks": [
{
"added": [
" getPooledConnection().getConnection();",
" config.waitForShutdownComplete(getDatabaseName());"
],
"header": "@@ -153,7 +153,8 @@ public class ConnectionPoolDataSourceConnector implements Connector {",
"removed": [
" getPooledConnection().getConnection(); "
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/DataSourceConnector.java",
"hunks": [
{
"added": [
"import java.io.File;"
],
"header": "@@ -19,6 +19,7 @@",
"removed": []
},
{
"added": [
" ",
" singleUseDS( makeShutdownDBAttributes( config ) ).getConnection();",
" config.waitForShutdownComplete(getDatabaseName());"
],
"header": "@@ -132,9 +133,11 @@ public class DataSourceConnector implements Connector {",
"removed": [
" singleUseDS( makeShutdownDBAttributes( config ) ).getConnection(); "
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/DriverManagerConnector.java",
"hunks": [
{
"added": [
" config.waitForShutdownComplete(getDatabaseName());"
],
"header": "@@ -126,6 +126,7 @@ public class DriverManagerConnector implements Connector {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java",
"hunks": [
{
"added": [
" private static final int LOCKFILETIMEOUT = 300000; // 5 mins",
""
],
"header": "@@ -73,6 +73,8 @@ public final class TestConfiguration {",
"removed": []
},
{
"added": [
" public void waitForShutdownComplete(String physicalDatabaseName) {",
" String path = getDatabasePath(physicalDatabaseName);",
" boolean lockfilepresent = true;",
" int timeout = LOCKFILETIMEOUT; // 5 mins",
" int totalsleep = 0;",
" File lockfile = new File (path + File.separatorChar + \"db.lck\");",
" File exlockfile = new File (path + File.separatorChar + \"dbex.lck\");",
" while (lockfilepresent) {",
" if (totalsleep >= timeout)",
" {",
" System.out.println(\"TestConfigruation.waitForShutdownComplete: \" +",
" \"been looping waiting for lock files to be deleted for at least 5 minutes, giving up\");",
" break;",
" }",
" if (lockfile.exists() || exlockfile.exists())",
" {",
" // TODO: is it interesting to know whether db.lck or dbex.lck or both is still present?",
" try {",
" System.out.println(\"TestConfiguration.waitForShutdownComplete: \" +",
" \"db*.lck files not deleted after \" + totalsleep + \" ms.\");",
" Thread.sleep(1000);",
" totalsleep=totalsleep+1000;",
" } catch (InterruptedException e) {",
" e.printStackTrace();",
" }",
" }",
" else",
" lockfilepresent=false;",
" }",
" }",
" "
],
"header": "@@ -1746,6 +1748,37 @@ public final class TestConfiguration {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/XADataSourceConnector.java",
"hunks": [
{
"added": [
" .getXAConnection().getConnection(); ",
" config.waitForShutdownComplete(getDatabaseName());",
" public String getDatabaseName() {",
" String databaseName=null;",
" try {",
" // get the physical database name",
" databaseName = (String) JDBCDataSource.getBeanProperty(ds, \"databaseName\");",
" } catch (Exception e) {",
" e.printStackTrace();",
" }",
" return databaseName;",
" }",
" "
],
"header": "@@ -137,13 +137,25 @@ public class XADataSourceConnector implements Connector {",
"removed": [
" .getXAConnection().getConnection(); "
]
}
]
}
] |
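waitForShutdownComplete above polls for the disappearance of Derby's db.lck and dbex.lck files, sleeping a second at a time up to a five-minute cap. A minimal sketch of that loop with a parameterized timeout and without the progress logging (method name and 100 ms poll interval are choices of this sketch, not Derby's test harness):

```java
import java.io.File;

public class ShutdownWait {
    /**
     * Poll until both Derby lock files under dbPath are gone or the
     * timeout expires. Returns true if they disappeared in time,
     * false if we gave up, mirroring the loop above.
     */
    static boolean waitForLockFiles(String dbPath, long timeoutMillis)
            throws InterruptedException {
        File lockFile = new File(dbPath, "db.lck");
        File exLockFile = new File(dbPath, "dbex.lck");
        long waited = 0;
        while (lockFile.exists() || exLockFile.exists()) {
            if (waited >= timeoutMillis) {
                return false; // gave up, like the 5-minute cap above
            }
            Thread.sleep(100);
            waited += 100;
        }
        return true;
    }
}
```

The lock files are removed by Derby itself at the end of shutdown, so their absence is the signal that the engine has fully released the database directory.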
derby-DERBY-5686-f5bfd989
|
DERBY-5686; multiple intermittent errors in nightly tests during DriverMgrAuthenticationTest test. reason: An SQL data change is not permitted for a read-only connection, user or database.
committing patch DERBY_5686_1.diff (after removing some unnecessary imports).
Hopefully this will give us at least some insight into what's happening when
this happens during teardown of the DatabasePropertyTestSetup.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1331601 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/DatabasePropertyTestSetup.java",
"hunks": [
{
"added": [
" clearProperties(conn);",
" // To try to prevent the error situation of DERBY-5686, which",
" // cascades to many test failures, catch ERROR 25502, and if it occurs",
" // try to gather some information, close the connection,",
" // and retry the clearing of the properties on a new connection",
" if (sqle.getSQLState().equals(\"25502\")) {",
" // firstly, check on the state of the connection when we",
" // get this error",
" System.out.println(\"Apparently this is a read-only connection? Get some data:\");",
" System.out.println(\"conn.isClosed: \" + conn.isClosed());",
" System.out.println(\"conn.isReadOnly: \" + conn.isReadOnly());",
" System.out.println(\"conn.getHoldability: \" + conn.getHoldability());",
" System.out.println(\"conn.getTransactionIsolation: \" + conn.getTransactionIsolation());",
" System.out.println(\"conn.getAutoCommit: \" + conn.getAutoCommit());",
" // now try to close the connection, then try open a new one, ",
" // and try to executeUpdate again.",
" conn.close();",
" Connection conn2 = getConnection();",
" // check if this second connection is read-only",
" if (conn2.isReadOnly())",
" {",
" System.out.println(\"Sorry, conn2 is also read-only, won't retry\");",
" // give up",
" throw sqle;",
" }",
" else",
" { ",
" // retry",
" System.out.println(\"retrying clearing the Properties\");",
" clearProperties(conn2);",
" }",
" }",
" else if(!sqle.getSQLState().equals(SQLStateConstants.PROPERTY_UNSUPPORTED_CHANGE))"
],
"header": "@@ -282,24 +282,41 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" conn.setAutoCommit(false);",
" CallableStatement setDBP = conn.prepareCall(",
" \"CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(?, NULL)\");",
" \t// Clear all the system properties set by the new set",
" \t// that will not be reset by the old set. Ignore any ",
" // invalid property values.",
" \tfor (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" \t{",
" \t\tString key = (String) e.nextElement();",
" \t\tif (oldValues.getProperty(key) == null)",
" \t\t{",
" \t\t\tsetDBP.setString(1, key);",
" \t\t\tsetDBP.executeUpdate();",
" \t\t}",
" \t}",
" \tif(!sqle.getSQLState().equals(SQLStateConstants.PROPERTY_UNSUPPORTED_CHANGE))"
]
},
{
"added": [
" private void clearProperties(Connection conn) throws SQLException",
" {",
" conn.setAutoCommit(false);",
" CallableStatement setDBP = conn.prepareCall(",
" \"CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(?, NULL)\");",
" // Clear all the system properties set by the new set",
" // that will not be reset by the old set. Ignore any ",
" // invalid property values.",
" for (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" {",
" String key = (String) e.nextElement();",
" if (oldValues.getProperty(key) == null)",
" {",
" setDBP.setString(1, key);",
" setDBP.executeUpdate();",
" }",
" }",
" }",
""
],
"header": "@@ -312,6 +329,25 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": []
}
]
}
] |
derby-DERBY-5687-c69fcab5
|
DERBY-5687: Adjust the public api javadoc for SequencePreallocator so that it no longer refers to identity columns.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1311310 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SequencePreallocator.java",
"hunks": [
{
"added": [
" * Logic to determine how many values to pre-allocate for a sequence.",
" * By default, Derby boosts concurrency by pre-allocating ranges of numbers for sequences.",
" * Logic in this class is called every time Derby needs to pre-allocate a new range of sequence"
],
"header": "@@ -25,15 +25,15 @@ import java.sql.SQLException;",
"removed": [
" * Logic to determine how many values to pre-allocate for an identity column or sequence.",
" * By default, Derby boosts concurrency by pre-allocating ranges of numbers for identity columns and sequences.",
" * Logic in this class is called every time Derby needs to pre-allocate a new range of identity/sequence"
]
},
{
"added": [
" * that Derby can instantiate them. Derby will instantiate a SequencePreallocator for every sequence."
],
"header": "@@ -44,8 +44,7 @@ import java.sql.SQLException;",
"removed": [
" * that Derby can instantiate them. Derby will instantiate a SequencePreallocator for each identity",
" * column and sequence."
]
},
{
"added": [
" * sequence. Names are case-sensitive, as specified in CREATE SEQUENCE",
" * @param schemaName Name of schema holding the sequence.",
" * @param sequenceName Specific name of the sequence."
],
"header": "@@ -54,12 +53,12 @@ public interface SequencePreallocator",
"removed": [
" * identity column or sequence. Names are case-sensitive, as specified in CREATE SEQUENCE",
" * @param schemaName Name of schema holding the sequence or identity-laden table.",
" * @param sequenceName Specific name of the sequence or identity-laden table."
]
}
]
}
] |
derby-DERBY-5693-5d3b815a
|
DERBY-5693: BUILTIN should say passwords are hashed not encrypted
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1327910 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/authentication/AuthenticationServiceBase.java",
"hunks": [
{
"added": [
" * User passwords are hashed using a message digest algorithm",
" * if they're stored in the database. They are not hashed",
" * The passwords can be hashed using two different schemes:"
],
"header": "@@ -81,13 +81,13 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": [
" * User passwords are encrypted using a message digest algorithm",
" * if they're stored in the database; otherwise they are not encrypted",
" * The passwords can be encrypted using two different schemes:"
]
},
{
"added": [
"\t\t// We do not hash 'derby.user.<userName>' password if"
],
"header": "@@ -493,7 +493,7 @@ public abstract class AuthenticationServiceBase",
"removed": [
"\t\t// We do not encrypt 'derby.user.<userName>' password if"
]
},
{
"added": [
"\t\t// Ok, we can hash this password in the db",
"\t\t\t// hash (digest) the password",
" hashUsingDefaultAlgorithm(userName, userPassword, p);"
],
"header": "@@ -506,16 +506,16 @@ public abstract class AuthenticationServiceBase",
"removed": [
"\t\t// Ok, we can encrypt this password in the db",
"\t\t\t// encrypt (digest) the password",
" encryptUsingDefaultAlgorithm(userName, userPassword, p);"
]
},
{
"added": [
"\t * This method hashes a clear user password using a"
],
"header": "@@ -545,7 +545,7 @@ public abstract class AuthenticationServiceBase",
"removed": [
"\t * This method encrypts a clear user password using a"
]
},
{
"added": [
"\t * @return hashed user password (digest) as a String object",
"\tprotected String hashPasswordSHA1Scheme(String plainTxtUserPassword)"
],
"header": "@@ -560,10 +560,10 @@ public abstract class AuthenticationServiceBase",
"removed": [
"\t * @return encrypted user password (digest) as a String object",
"\tprotected String encryptPasswordSHA1Scheme(String plainTxtUserPassword)"
]
},
{
"added": [
"\t\tbyte[] hashedVal = algorithm.digest();",
" StringUtil.toHexString(hashedVal, 0, hashedVal.length);"
],
"header": "@@ -581,9 +581,9 @@ public abstract class AuthenticationServiceBase",
"removed": [
"\t\tbyte[] encryptVal = algorithm.digest();",
" StringUtil.toHexString(encryptVal,0,encryptVal.length);"
]
},
{
"added": [
" * Hash a password using the default message digest algorithm for this",
" * system before it's stored in the database.",
" * is a non-empty string, the password will be hashed using the",
" * @param user the user whose password to hash",
" String hashUsingDefaultAlgorithm(String user,"
],
"header": "@@ -631,28 +631,28 @@ public abstract class AuthenticationServiceBase",
"removed": [
" * Encrypt a password using the default hash algorithm for this system",
" * before it's stored in the database.",
" * is a non-empty string, the password will be encrypted using the",
" * @param user the user whose password to encrypt",
" String encryptUsingDefaultAlgorithm(String user,"
]
},
{
"added": [
" else { return hashPasswordSHA1Scheme(password); }"
],
"header": "@@ -662,7 +662,7 @@ public abstract class AuthenticationServiceBase",
"removed": [
" else { return encryptPasswordSHA1Scheme(password); }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/authentication/BasicAuthenticationServiceImpl.java",
"hunks": [
{
"added": [
" // hash passed-in password",
" passedUserPassword = hashPasswordUsingStoredAlgorithm("
],
"header": "@@ -195,9 +195,9 @@ public final class BasicAuthenticationServiceImpl",
"removed": [
" // encrypt passed-in password",
" passedUserPassword = encryptPasswordUsingStoredAlgorithm("
]
},
{
"added": [
" hashUsingDefaultAlgorithm("
],
"header": "@@ -235,7 +235,7 @@ public final class BasicAuthenticationServiceImpl",
"removed": [
" encryptUsingDefaultAlgorithm("
]
},
{
"added": [
" * Hash a password using the same algorithm as we used to generate the",
" * @param user the user whose password to hash",
" * @throws StandardException if the password cannot be hashed with the",
" private String hashPasswordUsingStoredAlgorithm(",
" return hashPasswordSHA1Scheme(password);"
],
"header": "@@ -277,23 +277,23 @@ public final class BasicAuthenticationServiceImpl",
"removed": [
" * Encrypt a password using the same algorithm as we used to generate the",
" * @param user the user whose password to encrypt",
" * @throws StandardException if the password cannot be encrypted with the",
" private String encryptPasswordUsingStoredAlgorithm(",
" return encryptPasswordSHA1Scheme(password);"
]
}
]
}
] |
derby-DERBY-5701-673b7a79
|
DERBY-5701: Make UpdatableResultSetTest less hungry on heap space
Close statements earlier.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1329148 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5702-a6a07338
|
DERBY-4115 Provide a way to drop statistics information
The details of all the changes in this commit are listed below.
1)Added a new routine SYSCS_DROP_STATISTICS, with public access similar to SYSCS_UPDATE_STATISTICS. This happens in DataDictionaryImpl, where SYSCS_DROP_STATISTICS is added to the list of public access procedures in sysUtilProceduresWithPublicAccess
2)The new stored procedure implementation is similar to update statistics, i.e., we allow the routine to go through ALTER TABLE, where permission/privilege checking and table/schema/index name validations happen automatically, and we implement the routine logic through an extension of the ALTER TABLE syntax. This new ALTER TABLE syntax (same as we did for update statistics) is internal only and won't be available to an end user directly.
3)This commit changes sqlgrammar.jj to recognize the following internal syntaxes for ALTER TABLE
a)ALTER TABLE tablename ALL DROP STATISTICS
The existing(corresponding syntax) for update statistics is as follows
ALTER TABLE tablename ALL UPDATE STATISTICS
b)ALTER TABLE tablename STATISTICS DROP indexname
The existing(corresponding syntax) for update statistics is as follows
ALTER TABLE tablename UPDATE STATISTICS indexname
Notice the two syntaxes for index-level statistics are different for drop vs update. (The reason for the syntax difference is explained above.)
4)After the statistics are dropped, we send invalidation signal to dependent statements so they would get recompiled when they are executed next time. This will make sure that they pick the correct plan given the statistics for the table.
5)The commit takes care of some of the test failures (expected failures because of the addition of a new system procedure).
6)The commit adds a basic upgrade test for the new procedure. This test ensures that the drop statistics procedure is available only after hard upgrade.
7)While writing the upgrade tests, I found that a meaningful test for drop statistics could only be written for Derby releases 10.5 and higher. We have found that when constraints end up sharing the same backing index, Derby won't create statistics for them. This is issue DERBY-5702. But if we run update statistics on that constraint, we will be able to get the statistics for such a constraint. Later, when the constraint is dropped, because of DERBY-5681, the statistics row for such a constraint (one that shares its backing index with another constraint) is never dropped. We can use the drop statistics procedure introduced in this jira to take care of such hanging statistics rows. But since the update statistics procedure is only available in 10.5 and higher, I couldn't demonstrate use of drop statistics to drop hanging statistics rows.
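As a sketch of how the new routine might be called (the three-parameter shape mirrors SYSCS_UPDATE_STATISTICS; the schema, table, and index names here are hypothetical, and the NULL-means-all-indexes behavior is assumed by analogy with the update routine):

```sql
-- Assumed to mirror SYSCS_UTIL.SYSCS_UPDATE_STATISTICS; names are hypothetical.
-- Drop statistics for one specific index:
CALL SYSCS_UTIL.SYSCS_DROP_STATISTICS('APP', 'MYTABLE', 'MY_IDX');
-- Drop statistics for all indexes on the table:
CALL SYSCS_UTIL.SYSCS_DROP_STATISTICS('APP', 'MYTABLE', NULL);
```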
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338017 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SystemProcedures.java",
"hunks": [
{
"added": [
"\t * @exception SQLException"
],
"header": "@@ -733,7 +733,7 @@ public class SystemProcedures {",
"removed": [
"\t * @exception StandardException Standard exception policy."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\t\t\"SYSCS_DROP_STATISTICS\", "
],
"header": "@@ -464,6 +464,7 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java",
"hunks": [
{
"added": [
"\t * dropStatistics will indicate that we are here for dropping the",
"\t * statistics. It could be statistics of just one index or all the",
"\t * indexes on a given table. ",
"\t */",
"\tprivate\t boolean\t\t\t\t\t dropStatistics;",
"\t/**",
"\t * The flag dropStatisticsAll will tell if we are going to drop the ",
"\t * statistics of all indexes or just one index on a table. ",
"\t */",
"\tprivate\t boolean\t\t\t\t\t dropStatisticsAll;",
"\t/**",
"\t * If statistic is getting updated/dropped for just one index, then ",
"\t * indexNameForStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated/dropped.",
"\tprivate\tString\t\t\t\tindexNameForStatistics;"
],
"header": "@@ -70,11 +70,22 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t * If statistic is getting updated for just one index, then ",
"\t * indexNameForUpdateStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated.",
"\tprivate\tString\t\t\t\tindexNameForUpdateStatistics;"
]
},
{
"added": [],
"header": "@@ -116,33 +127,6 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"",
"\t/**",
"\t * Initializer for a AlterTableNode for updating the statistics. The user",
"\t * can ask for update statistic of all the indexes or only a specific index",
"\t *",
"\t * @param objectName\t\tThe name of the table whose index(es) will have",
"\t * their statistics updated.",
"\t * @param updateStatisticsAll\tIf true then update the statistics of all ",
"\t * the indexes on the table. If false, then update",
"\t * the statistics of only the index provided as",
"\t * 3rd parameter here",
"\t * @param indexName\t\t\tOnly used if updateStatisticsAll is set to ",
"\t * false. ",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tpublic void init(Object objectName,",
"\t\t\tObject updateStatisticsAll,",
"\t\t\tObject indexName)",
"\tthrows StandardException",
"\t{",
"\t\tinitAndCheck(objectName);",
"\t\tthis.updateStatisticsAll = ((Boolean) updateStatisticsAll).booleanValue();",
"\t\tthis.indexNameForUpdateStatistics = (String)indexName;",
"\t\tschemaDescriptor = getSchemaDescriptor();",
"\t\tupdateStatistics = true;",
"\t}"
]
},
{
"added": [
"\t * Initializer for a AlterTableNode. The parameter values have different",
"\t * meanings based on what kind of ALTER TABLE is taking place. ",
"\t * ",
"\t * @param changeType\t\tADD_TYPE or DROP_TYPE or UPDATE_STATISTICS or",
"\t * or DROP_STATISTICS",
"\t * @param param1 \t\t\tFor ADD_TYPE or DROP_TYPE, param1 gives the",
"\t * elements impacted by ALTER TABLE.",
"\t * For UPDATE_STATISTICS or or DROP_STATISTICS,",
"\t * param1 is boolean - true means update or drop",
"\t * the statistics of all the indexes on the table.",
"\t * False means, update or drop the statistics of",
"\t * only the index name provided by next parameter.",
"\t * @param param2 \t\t\tFor ADD_TYPE or DROP_TYPE, param2 gives the",
"\t * new lock granularity, if any",
"\t * For UPDATE_STATISTICS or DROP_STATISTICS,",
"\t * param2 can be the name of the specific index",
"\t * whose statistics will be dropped/updated. This",
"\t * param is used only if param1 is set to false",
"\t * @param param3\t\t\tFor DROP_TYPE, param3 can indicate if the drop",
"\t * column is CASCADE or RESTRICTED. This param is",
"\t * ignored for all the other changeType.",
"\t\t\t\t\t\t\tObject param1,",
"\t\t\t\t\t\t\tObject param2,",
"\t\t\t\t\t\t\tObject param3 )",
"\t\t",
"\t\tint[]\tct = (int[]) changeType;",
"\t\t",
"\t\t\t\tthis.tableElementList = (TableElementList) param1;",
"\t\t\t\tthis.lockGranularity = ((Character) param2).charValue();",
"\t\t\t\tint[]\tbh = (int[]) param3;",
"\t\t\t\tthis.behavior = bh[0];",
"\t\t\t\tbreak;",
"",
"\t\t case UPDATE_STATISTICS:",
"\t\t\t\tthis.updateStatisticsAll = ((Boolean) param1).booleanValue();",
"\t\t\t\tthis.indexNameForStatistics = (String)param2;",
"\t\t\t\tupdateStatistics = true;",
"\t\t\t\tbreak;",
"\t\t case DROP_STATISTICS:",
"\t\t\t\tthis.dropStatisticsAll = ((Boolean) param1).booleanValue();",
"\t\t\t\tthis.indexNameForStatistics = (String)param2;",
"\t\t\t\tdropStatistics = true;"
],
"header": "@@ -194,39 +178,67 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t * Initializer for a AlterTableNode",
"\t *",
"\t * @param tableElementList\tThe alter table action",
"\t * @param lockGranularity\tThe new lock granularity, if any",
"\t * @param changeType\t\tADD_TYPE or DROP_TYPE",
"\t * @param behavior\t\t\tIf drop column is CASCADE or RESTRICTED",
"",
"\t\t\t\t\t\t\tObject tableElementList,",
"\t\t\t\t\t\t\tObject lockGranularity,",
"\t\t\t\t\t\t\tObject behavior )",
"\t\tthis.tableElementList = (TableElementList) tableElementList;",
"\t\tthis.lockGranularity = ((Character) lockGranularity).charValue();",
"\t\tint[]\tct = (int[]) changeType, bh = (int[]) behavior;",
"\t\tthis.behavior = bh[0];"
]
},
{
"added": [
"\t\t\t\t\"dropStatistics: \" + dropStatistics + \"\\n\" +",
"\t\t\t\t\"dropStatisticsAll: \" + dropStatisticsAll + \"\\n\" +",
"\t\t\t\t\"indexNameForStatistics: \" +",
"\t\t\t\tindexNameForStatistics + \"\\n\";"
],
"header": "@@ -259,8 +271,10 @@ public class AlterTableNode extends DDLStatementNode",
"removed": [
"\t\t\t\t\"indexNameForUpdateStatistics: \" +",
"\t\t\t\t indexNameForUpdateStatistics + \"\\n\";"
]
},
{
"added": [
"\t\t//Check if we are in alter table to update/drop the statistics. If yes,",
"\t\t// then check if we are here to update/drop the statistics of a specific",
"\t\t// index. If yes, then verify that the indexname provided is a valid one.",
"\t\tif ((updateStatistics && !updateStatisticsAll) || (dropStatistics && !dropStatisticsAll))",
"\t\t\t\tcd = dd.getConglomerateDescriptor(indexNameForStatistics, schemaDescriptor, false);",
"\t\t\t\t\t\tschemaDescriptor.getSchemaName() + \".\" + indexNameForStatistics);"
],
"header": "@@ -433,20 +447,20 @@ public String statementToString()",
"removed": [
"\t\t//Check if we are in alter table to update the statistics. If yes, then",
"\t\t//check if we are here to update the statistics of a specific index. If",
"\t\t//yes, then verify that the indexname provided is a valid one.",
"\t\tif (updateStatistics && !updateStatisticsAll)",
"\t\t\t\tcd = dd.getConglomerateDescriptor(indexNameForUpdateStatistics, schemaDescriptor, false);",
"\t\t\t\t\t\tschemaDescriptor.getSchemaName() + \".\" + indexNameForUpdateStatistics);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t * dropStatistics will indicate that we are here for dropping the",
"\t * statistics. It could be statistics of just one index or all the",
"\t * indexes on a given table. ",
"\t */",
" private\t boolean\t\t\t\t\t dropStatistics;",
"\t/**",
"\t * The flag dropStatisticsAll will tell if we are going to drop the ",
"\t * statistics of all indexes or just one index on a table. ",
"\t */",
" private\t boolean\t\t\t\t\t dropStatisticsAll;",
"\t/**",
"\t * If statistic is getting updated/dropped for just one index, then ",
"\t * indexNameForStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated/dropped.",
" private\t String\t\t\t\t\t\tindexNameForStatistics;",
""
],
"header": "@@ -131,11 +131,23 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * If statistic is getting updated for just one index, then ",
"\t * indexNameForUpdateStatistics will tell the name of the specific index ",
"\t * whose statistics need to be updated.",
" private\t String\t\t\t\t\t\tindexNameForUpdateStatistics;"
]
},
{
"added": [
"\t * @param dropStatistics\t\tTRUE means we are here to drop statistics",
"\t * @param dropStatisticsAll\tTRUE means we are here to drop statistics",
"\t * \tof all the indexes. False means we are here to drop statistics of",
"\t * \tonly one index.",
"\t * @param indexNameForStatistics\tWill name the index whose statistics",
"\t * \twill be updated/dropped. This param is looked at only if ",
"\t * \tupdateStatisticsAll/dropStatisticsAll is set to false and",
"\t * \tupdateStatistics/dropStatistics is set to true."
],
"header": "@@ -196,8 +208,14 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * @param indexNameForUpdateStatistics\tWill name the index whose statistics",
"\t * \twill be updated"
]
},
{
"added": [
" boolean dropStatistics,",
" boolean dropStatisticsAll,",
" String indexNameForStatistics)"
],
"header": "@@ -217,7 +235,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
" String\t indexNameForUpdateStatistics)"
]
},
{
"added": [
"\t\tthis.dropStatistics \t= dropStatistics;",
"\t\tthis.dropStatisticsAll = dropStatisticsAll;",
"\t\tthis.indexNameForStatistics = indexNameForStatistics;"
],
"header": "@@ -236,7 +256,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tthis.indexNameForUpdateStatistics = indexNameForUpdateStatistics;"
]
},
{
"added": [
"",
" if (dropStatistics) {",
" dropStatistics();",
" return;",
"\t\t}"
],
"header": "@@ -330,6 +352,11 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
},
{
"added": [
"\t/**",
"\t * Drop statistics of either all the indexes on the table or only one",
"\t * specific index depending on what user has requested.",
"\t * ",
"\t * @throws StandardException",
"\t */",
" private void dropStatistics()",
" throws StandardException {",
" td = dd.getTableDescriptor(tableId);",
"",
" dd.startWriting(lcc);",
" dm.invalidateFor(td, DependencyManager.UPDATE_STATISTICS, lcc);",
"",
" if (dropStatisticsAll) {",
" dd.dropStatisticsDescriptors(td.getUUID(), null, tc);",
" } else {",
" ConglomerateDescriptor cd = ",
" dd.getConglomerateDescriptor(",
" indexNameForStatistics, sd, false);",
" dd.dropStatisticsDescriptors(td.getUUID(), cd.getUUID(), tc);",
" }",
" }",
""
],
"header": "@@ -649,6 +676,29 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java",
"hunks": [
{
"added": [
"\t * @param dropStatistics\t\tTRUE means we are here to drop statistics",
"\t * @param dropStatisticsAll\tTRUE means we are here to drop statistics",
"\t * \tof all the indexes. False means we are here to drop statistics of",
"\t * \tonly one index.",
"\t * @param indexNameForStatistics\tWill name the index whose statistics",
"\t * \twill be updated/dropped. This param is looked at only if ",
"\t * \tupdateStatisticsAll/dropStatisticsAll is set to false and",
"\t * \tupdateStatistics/dropStatistics is set to true.",
"\t * ."
],
"header": "@@ -137,9 +137,15 @@ public class GenericConstantActionFactory",
"removed": [
"\t * @param indexNameForUpdateStatistics\tWill name the index whose statistics",
"\t * \twill be updated. This param is looked at only if updateStatisticsAll",
"\t * \tis set to false."
]
},
{
"added": [
"\t\tboolean\t\t\t\t\t\tdropStatistics,",
"\t\tboolean\t\t\t\t\t\tdropStatisticsAll,",
"\t\tString\t\t\t\t\t\tindexNameForStatistics"
],
"header": "@@ -161,7 +167,9 @@ public class GenericConstantActionFactory",
"removed": [
"\t\tString\t\t\t\t\t\tindexNameForUpdateStatistics"
]
}
]
}
] |
derby-DERBY-5704-6e35772a
|
DERBY-5704: Various cleanups in CoalesceTest
Remove the instance variables to make it easier to release resources.
Make sure negative test cases fail if no exception is thrown.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330195 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5705-fcaa724a
|
DERBY-5705: Authorization decorators don't null out connections when done
- Callers that override DatabasePropertyTestSetup.tearDown() with an
empty method are replaced by calls to getNoTeardownInstance().
- Make the tearDown() method in decorators returned by
getNoTeardownInstance() close and null out the reference to the
default connection.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330196 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java",
"hunks": [
{
"added": [
" Test setSQLAuthMode = DatabasePropertyTestSetup.getNoTeardownInstance(",
" test, sqlAuth, true);"
],
"header": "@@ -911,11 +911,8 @@ public final class TestConfiguration {",
"removed": [
" Test setSQLAuthMode = new DatabasePropertyTestSetup(test,",
" sqlAuth, true) {",
" protected void tearDown() {",
" }",
" };"
]
},
{
"added": [
" Test setSQLAuthMode = DatabasePropertyTestSetup.getNoTeardownInstance(",
" test, sqlAuth, true);"
],
"header": "@@ -939,11 +936,8 @@ public final class TestConfiguration {",
"removed": [
" Test setSQLAuthMode = new DatabasePropertyTestSetup(test,",
" sqlAuth, true) {",
" protected void tearDown() { }",
" };",
""
]
}
]
}
] |
derby-DERBY-5706-ce3737f7
|
DERBY-5706: Clean up statements in CreateTableFromQueryTest
Stop storing the statement in an instance variable, so that it is
automatically closed and forgotten by the framework.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1329633 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5707-980db345
|
DERBY-5707: Clean up statements in CharUTF8Test
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1329686 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5708-90b6f2f0
|
DERBY-5708: simpleThread test doesn't release connection
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330197 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-571-0e9ddda2
|
DERBY-571 Fix SYSCS_INPLACE_COMPRESS_TABLE to ignore VTI tables, like it ignores views.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@345595 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/db/OnlineCompress.java",
"hunks": [
{
"added": [
" switch (td.getTableType())",
" /* Skip views and vti tables */",
" case TableDescriptor.VIEW_TYPE:",
" case TableDescriptor.VTI_TYPE:",
" \treturn;",
" // other types give various errors here",
" // DERBY-719,DERBY-720",
" default:",
" \tbreak;"
],
"header": "@@ -291,10 +291,16 @@ public class OnlineCompress",
"removed": [
" /* Skip views */",
" if (td.getTableType() == TableDescriptor.VIEW_TYPE)",
" return;"
]
},
{
"added": [
" switch (td.getTableType())",
" /* Skip views and vti tables */",
" case TableDescriptor.VIEW_TYPE:",
" case TableDescriptor.VTI_TYPE:",
" \tbreak;",
" // other types give various errors here",
" // DERBY-719,DERBY-720",
" default:",
" {"
],
"header": "@@ -476,9 +482,16 @@ public class OnlineCompress",
"removed": [
" /* Skip views */",
" if (td.getTableType() != TableDescriptor.VIEW_TYPE)"
]
},
{
"added": [
" }"
],
"header": "@@ -489,6 +502,7 @@ public class OnlineCompress",
"removed": []
},
{
"added": [
" switch (td.getTableType())",
" /* Skip views and vti tables */",
" case TableDescriptor.VIEW_TYPE:",
" case TableDescriptor.VTI_TYPE:",
" \tbreak;",
" // other types give various errors here",
" // DERBY-719,DERBY-720",
" default:",
" {",
" ConglomerateDescriptor[] conglom_descriptors = "
],
"header": "@@ -527,10 +541,17 @@ public class OnlineCompress",
"removed": [
" /* Skip views */",
" if (td.getTableType() != TableDescriptor.VIEW_TYPE)",
" ConglomerateDescriptor[] conglom_descriptors = "
]
}
]
}
] |
derby-DERBY-571-ddc6f41f
|
DERBY-571 Virtual Table Mapping for no argument Diagnostic tables
Add code to map from a table definition to a VTI constructor at compile
time. Only supported for no-argument virtual tables, and limited
to a table-driven set of diagnostic tables: lock_table, transaction_table,
statement_cache and error_messages. Initial step towards a more
generalized application-defined virtual table solution.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@292876 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/compile/NodeFactory.java",
"hunks": [
{
"added": [
"import java.util.Properties;",
"",
"import org.apache.derby.iapi.sql.dictionary.TableDescriptor;"
],
"header": "@@ -20,9 +20,12 @@",
"removed": []
},
{
"added": [
"import org.apache.derby.impl.sql.compile.ResultColumnList;",
"import org.apache.derby.impl.sql.compile.ResultSetNode;"
],
"header": "@@ -31,6 +34,8 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -25,8 +25,6 @@ import org.apache.derby.iapi.reference.Property;",
"removed": [
"import org.apache.derby.iapi.sql.compile.CompilerContext;",
""
]
},
{
"added": [
"\t\t",
"\t\tif (SchemaDescriptor.STD_SYSTEM_DIAG_SCHEMA_NAME.equals(",
"\t\t\t\tsd.getSchemaName()))",
"\t\t{",
"\t\t\tTableDescriptor td =",
"\t\t\t\tnew TableDescriptor(this, tableName, sd,",
"\t\t\t\t\t\tTableDescriptor.VTI_TYPE,",
"\t\t\t\t\t\tTableDescriptor.DEFAULT_LOCK_GRANULARITY);",
"\t\t\t",
"\t\t\t// ensure a vti class exists",
"\t\t\tif (getVTIClass(td) != null)",
"\t\t\t\treturn td;",
"\t\t\t",
"\t\t\t// otherwise just standard search",
"\t\t}",
"\t\t\t\t"
],
"header": "@@ -1666,6 +1664,22 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/NodeFactoryImpl.java",
"hunks": [
{
"added": [
"import java.util.Vector;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
"import org.apache.derby.iapi.sql.dictionary.TableDescriptor;"
],
"header": "@@ -32,6 +33,7 @@ import org.apache.derby.iapi.sql.compile.Optimizer;",
"removed": []
},
{
"added": [
"\t",
"\tprivate static final Vector emptyVector = new Vector(0);"
],
"header": "@@ -70,6 +72,8 @@ public class NodeFactoryImpl extends NodeFactory implements ModuleControl, Modul",
"removed": []
}
]
}
] |
derby-DERBY-5712-fd2f1f73
|
DERBY-5712: CheckConstraintTest holds on to resources after completion
Use local variables instead of instance variables to prevent references from
being held after the test has finished.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330199 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5713-30158120
|
DERBY-5713: AlterTableTest holds on to resources after completion
Use local variables instead of instance variables to prevent references from
being held after the test has finished.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330200 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5714-082f2ede
|
DERBY-5714: ColumnDefaultsTest holds on to resources after completion
Use local variables instead of instance variables to prevent references from
being held after the test has finished.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330201 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5715-a3b9db04
|
DERBY-5715: InbetweenTest holds on to resources after completion
Use local variables instead of instance variables to prevent objects
from being held after the test completes.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330206 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5716-cbce5601
|
DERBY-5716: TimestampArithTest keeps references to statements after completion
- Add tearDown() method that closes the statements and clears the
references to them
- Make the test data set static so that it is not cloned with one
identical set for every test case
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330207 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5718-6001ab60
|
DERBY-5718: UniqueConstraintSetNullTest calls super.tearDown() too early
Call super.tearDown() last in tearDown(), and use framework helper methods
in setUp() and tearDown() to simplify them and free resources automatically.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330202 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5719-8bc55f33
|
DERBY-5719: UniqueConstraintMultiThreadedTest doesn't call super.tearDown()
- Call super.tearDown() from the override
- Use framework helper methods to simplify setUp() and tearDown()
- Eliminate the shared DataSource instance (use openDefaultConnection()
instead)
- Remove unused imports
- Fix some typos in javadoc comments
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330203 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5729-136610d8
|
DERBY-5729: Replication tests keep references to connections after completion
- close connections in tearDown()
- null out references to connections, and to other objects that are no
longer needed, in tearDown()
- remove the fields masterServer and slaveServer since they are always
null
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1332133 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-573-2dc0b917
|
DERBY-573: Add support for optimizer hints, provided by the user as SQL comments. Also add upgrade infrastructure for 10.2 release and support new optimizer mechanism to work correctly under soft-upgrade for databases at 10.1 level.
I have the patch for optimizer overrides support in Derby. Alongwith the patch, I have attached the updated functional spec to the JIRA entry DERBY-573.
Majority of the changes went into the sqlgrammar.jj because Derby engine already has support for them internally. It is the parser that needs to recognize these overrides and pass it on to through the query nodes. The parser now looks for character sequence -- DERBY-PROPERTIES (case insensitive and space between -- and D is optional) and once it finds that, it looks for propertyName=value pairs on that same comment line in parser's propertyList method. The parser does the basic check to make sure that the same property is not used more than once for a given table. The remaining checks on the properties like checking the existence of user specified index etc are done in the bind phase.
I also changed the metadata.properties file to use --DERBY-PROPERTIES rather than old PROPERTIES clause to supply optimizer overrides. In addition, added \n at the end of the optimier override comment lines to make sure the comment line does not get concatenated with the next line of the sql.
Import.java had to be changed to user --DERBY-PROPERTIES rather than PROPERTIES.
Added a new test optimizerOverrides.sql which runs in both embedded and network server mode.
Rerunning all the tests after syncing the codeline to make sure nothing has broken. An earlier run of the tests before the sync came out clean.
I plan to next work on exposing these overrides through runtime statistics so that user can verify that the optimizer overrides are getting used.
Submitted by Mamta Satoor (msatoor@gmail.com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@356562 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/conn/LanguageConnectionContext.java",
"hunks": [
{
"added": [
" /**"
],
"header": "@@ -863,7 +863,7 @@ public interface LanguageConnectionContext extends Context {",
"removed": [
" /**"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java",
"hunks": [
{
"added": [
"\t/** Special version indicating the database must be upgraded to or created at the current engine level ",
"\t * ",
"\t * DatabaseMetaData will use this to determine if the data dictionary ",
"\t * is at the latest System Catalog version number. A data dictionary version",
"\t * will not be at latest System Catalog version when the database is getting",
"\t * booted in soft upgrade mode. In soft upgrade mode, engine should goto ",
"\t * metadata.properties to get the sql for the metadata calls rather",
"\t * than going to the system tables (and using stored versions of these queries). ",
"\t * This is required because if the metadata sql has changed between the ",
"\t * releases, we want to use the latest metadata sql rather than what is ",
"\t * stored in the system catalogs. Had to introduce this behavior for",
"\t * EmbeddedDatabaseMetaData in 10.2 release where optimizer overrides ",
"\t * syntax was changed. If 10.2 engine in soft upgrade mode for a pre-10.2 ",
"\t * database went to system tables for stored metadata queires, the metadata ",
"\t * calls would fail because 10.2 release doesn't recognize the pre-10.2 ",
"\t * optimizer overrides syntax. To get around this, the 10.2 engine in ",
"\t * soft upgrade mode should get the sql from metata.properties which has ",
"\t * been changed to 10.2 syntax for optimizer overrides. To make this ",
"\t * approach more generic for all soft upgrades, from 10.2 release onwards, ",
"\t * DatabaseMetaData calls will always look at metadata.properties so it ",
"\t * will get the compatible syntax for that release.",
"\t */"
],
"header": "@@ -63,7 +63,28 @@ public interface DataDictionary",
"removed": [
"\t/** Special version indicating the database must be upgraded to or created at the current engine level */"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java",
"hunks": [
{
"added": [
" /**",
" * check if the dictionary is at the same version as the engine. If not, ",
" * then that means stored versions of the JDBC database metadata queries",
" * may not be compatible with this version of the software.",
" * This can happen if we are in soft upgrade mode. Since in soft upgrade ",
" * mode, we can't change these stored metadata queries in a backward ",
" * incompatible way, engine needs to read the metadata sql from ",
" * metadata.properties file rather than rely on system tables.",
" * ",
" * @return true if we are not in soft upgrade mode",
" * @throws SQLException",
" */",
"\tprivate boolean notInSoftUpgradeMode() ",
"\t\tthrows SQLException {",
"\t\tif ( getEmbedConnection().isClosed())",
"\t\t\tthrow Util.noCurrentConnection();",
"",
"\t\tboolean notInSoftUpgradeMode;",
"\t\ttry {",
"\t\t\tnotInSoftUpgradeMode =",
"\t\t\t\tgetLanguageConnectionContext().getDataDictionary().checkVersion(",
"\t\t\t\t\t\tDataDictionary.DD_VERSION_CURRENT,null);",
"\t\t} catch (Throwable t) {",
"\t\t\tthrow handleException(t);",
"\t\t}",
"\t\treturn notInSoftUpgradeMode;",
"\t}",
"\t",
"\t"
],
"header": "@@ -2108,6 +2108,35 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": []
},
{
"added": [
"\t\t\tString table) throws SQLException {"
],
"header": "@@ -2131,7 +2160,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t\t\t\tString table) throws SQLException {"
]
},
{
"added": [
"\t}\t",
"\t/**"
],
"header": "@@ -2161,9 +2190,9 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\t}",
" /**"
]
},
{
"added": [
"\tprivate PreparedStatement getPreparedQueryUsingSystemTables(String nameKey) throws SQLException "
],
"header": "@@ -3083,7 +3112,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild",
"removed": [
"\tprivate PreparedStatement getPreparedQuery(String nameKey) throws SQLException "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/GenericStatement.java",
"hunks": [
{
"added": [
"\t"
],
"header": "@@ -99,7 +99,7 @@ public class GenericStatement",
"removed": [
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DD_Version.java",
"hunks": [
{
"added": [
"\t\tcase DataDictionary.DD_VERSION_DERBY_10_2:",
"\t\t\treturn \"10.2\";"
],
"header": "@@ -145,6 +145,8 @@ public\tclass DD_Version implements\tFormatable",
"removed": []
},
{
"added": [
"\t\t<BR>",
"\t\t<B>Upgrade items for every new release</B>",
"\t\t<UL>",
"\t\t<LI> Drop and recreate the stored versions of the JDBC database metadata queries",
"\t\t</UL>",
"\t\t",
"\t\t"
],
"header": "@@ -284,12 +286,18 @@ public\tclass DD_Version implements\tFormatable",
"removed": [
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\tsoftwareVersion = new DD_Version(this, DataDictionary.DD_VERSION_DERBY_10_2);"
],
"header": "@@ -422,7 +422,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tsoftwareVersion = new DD_Version(this, DataDictionary.DD_VERSION_DERBY_10_1);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/upgradeTests/phaseTester.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.functionTests.tests.jdbcapi.metadata;"
],
"header": "@@ -13,6 +13,7 @@ package org.apache.derbyTesting.upgradeTests;",
"removed": []
},
{
"added": [
"\t\tSystem.out.println(\"jdbc url is \" + url);"
],
"header": "@@ -107,6 +108,7 @@ public class phaseTester {",
"removed": []
},
{
"added": [
"\t\t\t//test the metadata calls at this stages of the db. This is to make",
"\t\t\t//sure that they don't break between these forms of upgrades of a db",
"\t\t\tmetadata metadataTest = new metadata();",
"\t\t\tmetadataTest.con = conn;",
"\t\t\tmetadata.s = conn.createStatement();",
"\t\t\tmetadataTest.runTest();"
],
"header": "@@ -167,6 +169,12 @@ public class phaseTester {",
"removed": []
},
{
"added": [
"\t\t\tconn.createStatement().executeUpdate(\"CREATE TABLE TABLE1(id INT NOT NULL PRIMARY KEY, name varchar(200))\");"
],
"header": "@@ -267,7 +275,7 @@ public class phaseTester {",
"removed": [
"\t\t\tconn.createStatement().executeUpdate(\"CREATE TABLE T1(id INT NOT NULL PRIMARY KEY, name varchar(200))\");"
]
},
{
"added": [
"\t\tps = conn.prepareStatement(\"INSERT INTO TABLE1 VALUES (?, ?)\");"
],
"header": "@@ -286,7 +294,7 @@ public class phaseTester {",
"removed": [
"\t\tps = conn.prepareStatement(\"INSERT INTO T1 VALUES (?, ?)\");"
]
}
]
}
] |
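The DERBY-573 message above describes the parser recognizing a case-insensitive `-- DERBY-PROPERTIES` comment, collecting `propertyName=value` pairs from the rest of the line, and rejecting a property used twice for the same table. A rough standalone sketch of that recognition step — the real work happens in Derby's sqlgrammar.jj, and `OverrideCommentSketch` is an invented name:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of recognizing an optimizer-override comment of
// the form "-- DERBY-PROPERTIES name=value ..." (not Derby's parser).
public class OverrideCommentSketch {
    public static Map<String, String> parse(String commentLine) {
        Map<String, String> props = new LinkedHashMap<>();
        String marker = "DERBY-PROPERTIES";
        // Case-insensitive search for the marker anywhere in the comment.
        int idx = commentLine.toUpperCase().indexOf(marker);
        if (idx < 0) return props;
        String rest = commentLine.substring(idx + marker.length()).trim();
        for (String pair : rest.split("\\s+")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                String key = pair.substring(0, eq);
                // Same property may not appear twice for one table.
                if (props.containsKey(key))
                    throw new IllegalArgumentException("duplicate property: " + key);
                props.put(key, pair.substring(eq + 1));
            }
        }
        return props;
    }
}
```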
derby-DERBY-573-c7248d5e
|
DERBY-573: Enhance RUNSTAT output to show that user-specified optimizer hints are being used.
Here is the checkin message from the contributor:
I have attached a patch named Derby573OptimizerOverridesAndRunTimeStatistics011206.txt to JIRA DERBY-573 (Provide support for optimizer overrides in Derby). This patch enables users to see the optimizer overrides specified in the sql as part of the runtime statistics info. This is achieved by changing the
generator so that these properties get passed from compile time to execute time. This change in the generate phase can be found in FromBaseTable,
BaseJoinStrategy and JoinNode. The changes in the other classes are for returning the correct number of arguments to the scan. That change is in
the getScanArgs method.
In addition, I have changed the existing lang/optimizerOverrides.sql to test this patch. The derbyall suite has run fine on my Windows XP machine with Sun's JDK 1.4.
Submitted by Mamta Satoor (msatoor@google.com)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@369619 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java",
"hunks": [
{
"added": [
"\t\t@param tableName\t\tThe full name of the table ",
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -739,7 +739,8 @@ public interface ResultSetFactory {",
"removed": [
"\t\t@param tableName\t\tThe full name of the table"
]
},
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -777,6 +778,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -807,6 +809,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -833,6 +836,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -886,6 +890,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -920,6 +925,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -976,6 +982,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -1011,6 +1018,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1112,6 +1120,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -1128,6 +1137,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1155,6 +1165,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -1171,6 +1182,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1210,6 +1222,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -1227,6 +1240,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1257,6 +1271,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -1275,6 +1290,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t * @param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1455,6 +1471,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\tString \t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -1481,6 +1498,7 @@ public interface ResultSetFactory {",
"removed": []
},
{
"added": [
"\t\t@param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1530,6 +1548,7 @@ public interface ResultSetFactory {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/BaseJoinStrategy.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.PropertyUtil;",
""
],
"header": "@@ -47,6 +47,8 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"\t\t//User may have supplied optimizer overrides in the sql",
"\t\t//Pass them onto execute phase so it can be shown in ",
"\t\t//run time statistics.",
"\t\tif (tableProperties != null)",
"\t\t\tmb.push(org.apache.derby.iapi.util.PropertyUtil.sortProperties(tableProperties));",
"\t\telse",
"\t\t\tmb.pushNull(\"java.lang.String\");"
],
"header": "@@ -3131,6 +3131,13 @@ public class FromBaseTable extends FromTable",
"removed": []
},
{
"added": [
"\t\t\t\t\tClassName.NoPutResultSet, 14);"
],
"header": "@@ -3141,7 +3148,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\t\t\tClassName.NoPutResultSet, 13);"
]
},
{
"added": [
"\t\t//User may have supplied optimizer overrides in the sql",
"\t\t//Pass them onto execute phase so it can be shown in ",
"\t\t//run time statistics.",
"\t\tif (tableProperties != null)",
"\t\t\tmb.push(org.apache.derby.iapi.util.PropertyUtil.sortProperties(tableProperties));",
"\t\telse",
"\t\t\tmb.pushNull(\"java.lang.String\");"
],
"header": "@@ -3218,6 +3225,13 @@ public class FromBaseTable extends FromTable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/HalfOuterJoinNode.java",
"hunks": [
{
"added": [
"\t\t\t\ttableProperties,",
"\t\t\t\tnull);"
],
"header": "@@ -90,7 +90,8 @@ public class HalfOuterJoinNode extends JoinNode",
"removed": [
"\t\t\t\ttableProperties);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/JoinNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.PropertyUtil;"
],
"header": "@@ -58,6 +58,7 @@ import org.apache.derby.iapi.services.loader.GeneratedMethod;",
"removed": []
},
{
"added": [
"\t//User provided optimizer overrides",
"\tProperties joinOrderStrategyProperties;"
],
"header": "@@ -96,6 +97,8 @@ public class JoinNode extends TableOperatorNode",
"removed": []
},
{
"added": [
"\t * @param joinOrderStrategyProperties\tUser provided optimizer overrides"
],
"header": "@@ -107,6 +110,7 @@ public class JoinNode extends TableOperatorNode",
"removed": []
},
{
"added": [
"\t\t\t\t\tObject tableProperties,",
"\t\t\t\t\tObject joinOrderStrategyProperties)"
],
"header": "@@ -116,7 +120,8 @@ public class JoinNode extends TableOperatorNode",
"removed": [
"\t\t\t\t\tObject tableProperties)"
]
},
{
"added": [
"\t\tthis.joinOrderStrategyProperties = (Properties)joinOrderStrategyProperties;"
],
"header": "@@ -124,6 +129,7 @@ public class JoinNode extends TableOperatorNode",
"removed": []
},
{
"added": [
"\t\t//User may have supplied optimizer overrides in the sql",
"\t\t//Pass them onto execute phase so it can be shown in ",
"\t\t//run time statistics.",
"\t\tif (joinOrderStrategyProperties != null)",
"\t\t\tmb.push(PropertyUtil.sortProperties(joinOrderStrategyProperties));",
"\t\telse",
"\t\t\tmb.pushNull(\"java.lang.String\");",
"\t\t"
],
"header": "@@ -1645,6 +1651,14 @@ public class JoinNode extends TableOperatorNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/BulkTableScanResultSet.java",
"hunks": [
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -92,6 +92,7 @@ public class BulkTableScanResultSet extends TableScanResultSet",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/DependentResultSet.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -104,6 +104,7 @@ public class DependentResultSet extends NoPutResultSetImpl implements CursorResu",
"removed": []
},
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -139,6 +140,7 @@ public class DependentResultSet extends NoPutResultSetImpl implements CursorResu",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/DistinctScanResultSet.java",
"hunks": [
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -82,6 +82,7 @@ public class DistinctScanResultSet extends HashScanResultSet",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -521,6 +521,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -555,6 +556,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -581,6 +583,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -602,6 +605,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -631,6 +635,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -660,6 +665,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -693,6 +699,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -730,6 +737,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -801,6 +809,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t userSuppliedOptimizerOverrides,"
],
"header": "@@ -812,6 +821,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -831,6 +841,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t userSuppliedOptimizerOverrides,"
],
"header": "@@ -842,6 +853,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -863,6 +875,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t userSuppliedOptimizerOverrides,"
],
"header": "@@ -876,6 +889,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -897,6 +911,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t userSuppliedOptimizerOverrides,"
],
"header": "@@ -910,6 +925,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t * @param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -1086,6 +1102,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\tString \t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -1112,6 +1129,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\tuserSuppliedOptimizerOverrides,"
],
"header": "@@ -1128,6 +1146,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -1158,6 +1177,7 @@ public class GenericResultSetFactory implements ResultSetFactory",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/HashLeftOuterJoinResultSet.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -59,6 +59,7 @@ public class HashLeftOuterJoinResultSet extends NestedLoopLeftOuterJoinResultSet",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/HashScanResultSet.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -110,6 +110,7 @@ public class HashScanResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -152,6 +153,7 @@ public class HashScanResultSet extends NoPutResultSetImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/JoinResultSet.java",
"hunks": [
{
"added": [
"\t",
"\tString userSuppliedOptimizerOverrides;"
],
"header": "@@ -65,6 +65,8 @@ public abstract class JoinResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t String userSuppliedOptimizerOverrides,"
],
"header": "@@ -81,6 +83,7 @@ public abstract class JoinResultSet extends NoPutResultSetImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/LastIndexKeyResultSet.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -70,6 +70,7 @@ public class LastIndexKeyResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
"\t * @param userSuppliedOptimizerOverrides\t\tOverrides specified by the user on the sql"
],
"header": "@@ -95,6 +96,7 @@ public class LastIndexKeyResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -119,6 +121,7 @@ public class LastIndexKeyResultSet extends NoPutResultSetImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/RealResultSetStatisticsFactory.java",
"hunks": [
{
"added": [
"\t\t\t\t\ttsrs.userSuppliedOptimizerOverrides,"
],
"header": "@@ -595,6 +595,7 @@ public class RealResultSetStatisticsFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\thlojrs.userSuppliedOptimizerOverrides,"
],
"header": "@@ -688,6 +689,7 @@ public class RealResultSetStatisticsFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\tnllojrs.userSuppliedOptimizerOverrides,"
],
"header": "@@ -714,6 +716,7 @@ public class RealResultSetStatisticsFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\thjrs.userSuppliedOptimizerOverrides,"
],
"header": "@@ -740,6 +743,7 @@ public class RealResultSetStatisticsFactory",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\tnljrs.userSuppliedOptimizerOverrides,"
],
"header": "@@ -766,6 +770,7 @@ public class RealResultSetStatisticsFactory",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/TableScanResultSet.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -95,6 +95,7 @@ public class TableScanResultSet extends NoPutResultSetImpl",
"removed": []
},
{
"added": [
"\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -150,6 +151,7 @@ public class TableScanResultSet extends NoPutResultSetImpl",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealHashJoinStatistics.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -55,6 +55,7 @@ public class RealHashJoinStatistics",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealHashLeftOuterJoinStatistics.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -56,6 +56,7 @@ public class RealHashLeftOuterJoinStatistics",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealJoinResultSetStatistics.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -46,6 +46,7 @@ public abstract class RealJoinResultSetStatistics",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\tdouble optimizerEstimatedCost,",
"\t\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides"
],
"header": "@@ -68,7 +69,8 @@ public abstract class RealJoinResultSetStatistics",
"removed": [
"\t\t\t\t\t\t\t\t\t\tdouble optimizerEstimatedCost"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealNestedLoopJoinStatistics.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -74,6 +74,7 @@ public class RealNestedLoopJoinStatistics",
"removed": []
},
{
"added": [
"\t\t\toptimizerEstimatedCost,",
"\t\t\tuserSuppliedOptimizerOverrides"
],
"header": "@@ -92,7 +93,8 @@ public class RealNestedLoopJoinStatistics",
"removed": [
"\t\t\toptimizerEstimatedCost"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealNestedLoopLeftOuterJoinStatistics.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -65,6 +65,7 @@ public class RealNestedLoopLeftOuterJoinStatistics",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealTableScanStatistics.java",
"hunks": [
{
"added": [
"\tpublic String userSuppliedOptimizerOverrides;"
],
"header": "@@ -53,6 +53,7 @@ public class RealTableScanStatistics",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t\tString userSuppliedOptimizerOverrides,"
],
"header": "@@ -75,6 +76,7 @@ public class RealTableScanStatistics",
"removed": []
},
{
"added": [
"\t\tthis.userSuppliedOptimizerOverrides = userSuppliedOptimizerOverrides;"
],
"header": "@@ -102,6 +104,7 @@ public class RealTableScanStatistics",
"removed": []
},
{
"added": [
"\t\tString header = \"\";",
"\t\tif (userSuppliedOptimizerOverrides != null)",
"\t\t{ ",
"\t\t\theader = ",
"\t\t\t\tindent + MessageService.getTextMessage(SQLState.RTS_USER_SUPPLIED_OPTIMIZER_OVERRIDES_FOR_TABLE,",
"\t\t\t\t\t\ttableName, userSuppliedOptimizerOverrides);",
"\t\t\theader = header + \"\\n\";",
"\t\t}",
"\t\t\theader = header +"
],
"header": "@@ -130,14 +133,21 @@ public class RealTableScanStatistics",
"removed": [
"\t\tString header;",
"\t\t\theader ="
]
}
]
}
] |
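The diff above repeatedly pushes either a sorted string of the user's overrides or a null onto the generated method (`mb.push(PropertyUtil.sortProperties(...))` vs `mb.pushNull("java.lang.String")`), so the runtime-statistics output sees a stable, human-readable form. A minimal sketch of that hand-off; the exact formatting is an assumption, and Derby's own `PropertyUtil.sortProperties` has its own layout:

```java
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

// Sketch: turn user-supplied overrides into a sorted, stable string for
// display, or null when none were supplied (mirrors pushNull in the diff).
public class StatsOverrideSketch {
    public static String sortProperties(Properties props) {
        if (props == null) {
            return null; // no overrides: runtime stats print nothing extra
        }
        StringBuilder sb = new StringBuilder();
        // TreeMap sorts the keys so the output is deterministic.
        for (Map.Entry<Object, Object> e : new TreeMap<>(props).entrySet()) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }
}
```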
derby-DERBY-5733-cc049626
|
DERBY-5733: Source file for OrderByAndSortAvoidance contains characters not available in the C locale
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1332938 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5734-73ff7eb9
|
DERBY-5734: End transaction if CleanDatabaseTestSetup.decorateSQL fails
Make sure the transaction is ended even if decorateSQL, which can be
overridden to perform custom setup tasks, fails.
This patch also nulls out the reference to the default connection
(clearConnection() must be paired with getConnection())
Patch file: derby-5734-1b-end_transaction.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1333360 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/CleanDatabaseTestSetup.java",
"hunks": [
{
"added": [
" try {",
" decorateSQL(s);",
" s.close();",
" conn.commit();",
" } finally {",
" // Make sure we release any locks held by the connection at this",
" // point. Not doing so may cause subsequent tests to fail.",
" try {",
" clearConnection();",
" } catch (SQLException sqle) {",
" // Ignore, but print details in debug mode.",
" if (getTestConfiguration().isVerbose()) {",
" println(\"clearing connection failed: \" + sqle.getMessage());",
" sqle.printStackTrace(System.err);",
" }",
" }",
" }"
],
"header": "@@ -109,11 +109,23 @@ public class CleanDatabaseTestSetup extends BaseJDBCTestSetup {",
"removed": [
" decorateSQL(s);",
"",
" s.close();",
" conn.commit();",
" conn.close();"
]
}
]
}
] |
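The DERBY-5734 diff above wraps the overridable setup step in try/finally so the connection is always cleared even when decorateSQL fails. The core of that pattern, reduced to a self-contained sketch (`decorate`/`run` are illustrative stand-ins for decorateSQL()/clearConnection(), not Derby's methods):

```java
// Sketch of "always clean up, even when setup fails": the finally
// block runs whether or not the overridable setup step threw.
public class CleanupSketch {
    public static boolean cleaned;

    static void decorate(boolean fail) {
        if (fail) throw new RuntimeException("decorate failed");
    }

    public static boolean run(boolean fail) {
        cleaned = false;
        try {
            try {
                decorate(fail); // may be overridden; may throw
            } finally {
                cleaned = true; // release locks/connection regardless
            }
            return true;
        } catch (RuntimeException e) {
            return false; // setup failed, but cleanup still happened
        }
    }
}
```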
derby-DERBY-5737-c44e39a9
|
DERBY-5737: Remove GenericDescriptorList.elements and replace Enumeration usage with Iterator
Removed the elements() methods and the use of Enumeration in preparation for refactoring.
Patch file: derby-5737-1a-iterator.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1333356 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/GenericDescriptorList.java",
"hunks": [
{
"added": [],
"header": "@@ -21,18 +21,6 @@",
"removed": [
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import org.apache.derby.catalog.UUID;",
"",
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"import org.apache.derby.iapi.sql.dictionary.UniqueTupleDescriptor;",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;",
"",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -21,8 +21,8 @@",
"removed": [
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DeleteNode.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -72,9 +72,9 @@ import java.lang.reflect.Modifier;",
"removed": [
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -66,8 +66,8 @@ import org.apache.derby.iapi.util.ReuseFactory;",
"removed": [
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import java.util.Enumeration;"
]
},
{
"added": [
" for (Iterator descIter = tdl.iterator(); descIter.hasNext() ; ) {",
" TriggerDescriptor trd = (TriggerDescriptor)descIter.next();"
],
"header": "@@ -1374,10 +1373,8 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tEnumeration descs = tdl.elements();",
"\t\twhile (descs.hasMoreElements())",
"\t\t{",
"\t\t\tTriggerDescriptor trd = (TriggerDescriptor) descs.nextElement();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/DropTableConstantAction.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -51,7 +51,7 @@ import org.apache.derby.iapi.store.access.TransactionController;",
"removed": [
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/TriggerInfo.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -40,7 +40,7 @@ import org.apache.derby.catalog.UUID;",
"removed": [
"import java.util.Enumeration;"
]
},
{
"added": [
" Iterator descIter = triggers.iterator();",
" triggerArray[i] = (TriggerDescriptor) descIter.next();"
],
"header": "@@ -118,14 +118,14 @@ public final class TriggerInfo implements Formatable",
"removed": [
"\t\tEnumeration descs = triggers.elements();",
"\t\t\ttriggerArray[i] = (TriggerDescriptor) descs.nextElement();"
]
}
]
}
] |
derby-DERBY-5741-33605bd7
|
DERBY-5741: Improve error reporting for missing UserAuthenticators.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1335010 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5746-5db352a4
|
DERBY-5746: Minor refactoring of DataDictionaryImpl.getSetAutoincrementValue
Dropped unused return value from the fetch-call.
Moved the instantiation/declaration/modification of the bit set inside the code
block where it is actually used.
Patch file: derby-5746-1a-minor_refactoring.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338618 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -8843,9 +8843,6 @@ public final class\tDataDictionaryImpl",
"removed": [
"",
"\t\tFormatableBitSet columnToUpdate = new ",
" \t\t\tFormatableBitSet(SYSCOLUMNSRowFactory.SYSCOLUMNS_COLUMN_COUNT);"
]
},
{
"added": [
" // fetch the current value",
" heapCC.fetch(rl, row.getRowArray(), columnToRead, wait);"
],
"header": "@@ -8875,10 +8872,8 @@ public final class\tDataDictionaryImpl",
"removed": [
" boolean baseRowExists = ",
" heapCC.fetch(rl, row.getRowArray(), columnToRead, wait);",
"",
" columnToUpdate.set(columnNum - 1); // current value."
]
},
{
"added": [
" // increment the value",
"",
" // store the new value in SYSCOLUMNS",
" FormatableBitSet columnToUpdate = new FormatableBitSet(",
" SYSCOLUMNSRowFactory.SYSCOLUMNS_COLUMN_COUNT);",
" columnToUpdate.set(columnNum - 1); // current value."
],
"header": "@@ -8886,10 +8881,15 @@ public final class\tDataDictionaryImpl",
"removed": [
" // we increment and store the new value in SYSCOLUMNS"
]
}
]
}
] |
derby-DERBY-5746-94388c5a
|
DERBY-5746: Minor refactoring of DataDictionaryImpl.getSetAutoincrementValue
Add debug asserts for two cases where the return value of
ConglomerateController.fetch is ignored (primarily for consistency with other
cases).
Patch file: derby-5746-2a-assert_on_fetch.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1339986 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" boolean baseRowExists =",
" heapCC.fetch(rl, row.getRowArray(), columnToRead, wait);",
" if (SanityManager.DEBUG) {",
" // We're not prepared for a non-existing base row.",
" SanityManager.ASSERT(baseRowExists, \"base row not found\");",
" }"
],
"header": "@@ -8873,7 +8873,12 @@ public final class\tDataDictionaryImpl",
"removed": [
" heapCC.fetch(rl, row.getRowArray(), columnToRead, wait);"
]
},
{
"added": [
" boolean baseRowExists = heapCC.fetch(",
" rowLocation, row.getRowArray(), columnToUpdate, wait);",
" if (SanityManager.DEBUG) {",
" // We're not prepared for a non-existing base row.",
" SanityManager.ASSERT(baseRowExists, \"base row not found\");",
" }"
],
"header": "@@ -10380,7 +10385,12 @@ public final class\tDataDictionaryImpl",
"removed": [
" heapCC.fetch( rowLocation, row.getRowArray(), columnToUpdate, wait );"
]
}
]
}
] |
derby-DERBY-5749-eff91692
|
DERBY-5749 Implicit cast of variable length values, e.g. as arguments to stored methods and generated columns values, silently truncate if too long
Patches derby-5749b (stored procedures and functions) and
derby-5749-2b (generated columns).
Quote from releaseNote.html attached to the issue:
Summary of Change
SQL now does correct checking of the length of variable strings in
these two cases:
Arguments to stored procedures and functions
Values assigned to generated columns
Previously, if the actual value was longer than the datatype of the
argument or column to which it was assigned, Derby would silently
truncate the value and ignore the truncation. The SQL standard
requires a truncation exception be thrown.
Derby now throws an SQLException with SQL state 22001 in these cases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1339281 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CastNode.java",
"hunks": [
{
"added": [
" /**",
" * Method calls:",
" * Argument type has the same semantics as assignment:",
" * Section 9.2 (Store assignment). There, General Rule",
" * 2.b.v.2 says that the database should raise an exception",
" * if truncation occurs when stuffing a string value into a",
" * VARCHAR, so make sure CAST doesn't issue warning only.",
" */",
" private boolean assignmentSemantics = false;",
""
],
"header": "@@ -92,6 +92,16 @@ public class CastNode extends ValueNode",
"removed": []
},
{
"added": [
" mb.push(!sourceCTI.variableLength() ||",
" isNumber ||",
" assignmentSemantics);"
],
"header": "@@ -981,7 +991,9 @@ public class CastNode extends ValueNode",
"removed": [
"\t\t\tmb.push(!sourceCTI.variableLength() || isNumber);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/StaticMethodCallNode.java",
"hunks": [
{
"added": [],
"header": "@@ -26,7 +26,6 @@ import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"removed": [
"import org.apache.derby.iapi.sql.compile.TypeCompiler;"
]
},
{
"added": [],
"header": "@@ -34,7 +33,6 @@ import org.apache.derby.iapi.types.StringDataValue;",
"removed": [
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;"
]
},
{
"added": [],
"header": "@@ -42,8 +40,6 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": [
"import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;",
"import org.apache.derby.iapi.services.loader.ClassInspector;"
]
},
{
"added": [],
"header": "@@ -54,9 +50,6 @@ import org.apache.derby.iapi.sql.conn.Authorizer;",
"removed": [
"import org.apache.derby.impl.sql.compile.ActivationClassBuilder;",
"",
"import org.apache.derby.catalog.UUID;"
]
}
]
}
] |
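The DERBY-5749 commit above requires a truncation exception (SQLSTATE 22001) instead of silent truncation when a too-long string is assigned. A minimal, hypothetical sketch of that store-assignment rule — this is not Derby's actual CastNode code, and the class and method names are invented for illustration:

```java
// Hypothetical sketch of SQL store-assignment length checking (not Derby's
// actual implementation). SQL's store-assignment rules say that assigning a
// string longer than a VARCHAR(n) target must raise a truncation error
// (SQLSTATE 22001) unless only trailing spaces would be lost.
public class VarcharAssignment {
    public static final String TRUNCATION_SQLSTATE = "22001";

    /** Returns the value, truncated to maxWidth only if just spaces are cut off. */
    public static String assign(String value, int maxWidth) {
        if (value.length() <= maxWidth) {
            return value;
        }
        // Trailing spaces may be silently removed; anything else is an error.
        String overflow = value.substring(maxWidth);
        if (overflow.trim().isEmpty()) {
            return value.substring(0, maxWidth);
        }
        throw new IllegalStateException(
            "String data, right truncation (SQLSTATE " + TRUNCATION_SQLSTATE + ")");
    }
}
```

Before the fix, the error branch behaved like the trailing-space branch: the overflow was cut off regardless of its content.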
derby-DERBY-575-03ea0985
|
DERBY-575: Fix blobclob4BLOB, lobStreams, and ieptests on systems with non-ASCII
native encodings.
Committed for Myrna Van Lunteren <m.v.lunteren@gmail.com>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@397028 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5750-f6e9c212
|
DERBY-5750 Sending an empty string as table name to compress table procedure or empty string as index name to update statistics procedure makes the parser throw an exception.
Committing changes for DERBY-5750 which will provide following functionality
a) if schema name is provided as an empty string, we will throw SQLState.LANG_SCHEMA_DOES_NOT_EXIST
b) if table name is provided as an empty string, we will throw SQLState.LANG_TABLE_NOT_FOUND
c) if index name is provided as an empty string (this is for the update and drop statistics procedures), we will throw
SQLState.LANG_INDEX_NOT_FOUND
d) if schema name is null, we will use the current schema to resolve the table name
e) if table name is null, we will throw SQLState.LANG_TABLE_NOT_FOUND
f) if index name is null, we will drop/update statistics for all the indexes for the given table.
I have added a few test cases for each of these procedures.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1355552 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SystemProcedures.java",
"hunks": [
{
"added": [
" * @param schemaname schema name of the table/index(es) whose ",
" * statistics will be updated. null will mean use",
" * the current schema to resolve the table name.",
" * Empty string for schema name will raise an ",
" * exception.",
" * be updated. A null value or an empty string will",
" * throw table not found exception. Must be non-null.",
" * @param indexname If null, then update the statistics for all the ",
" * indexes for the given table name. If not null and",
" * not empty string, then the user wants to update the",
" * statistics for only the give index name.",
" * Empty string for index name will raise an ",
" * exception."
],
"header": "@@ -724,14 +724,20 @@ public class SystemProcedures {",
"removed": [
" * @param schemaname schema name of the index(es) whose statistics will",
" * be updated. Must be non-null, no default is used.",
" * be updated. Must be non-null.",
" * @param indexname Can be null. If not null or emptry string then the",
" * user wants to update the statistics for only this",
" * index. If null, then update the statistics for all",
" * the indexes for the given table name."
]
},
{
"added": [
" StringBuffer query = new StringBuffer();",
" query.append(\"alter table \");",
" query.append(basicSchemaTableValidation(schemaname,tablename));",
"",
" //Index name can't be empty string",
" if (indexname != null && indexname.length()==0)",
"\t\t\tthrow PublicAPI.wrapStandardException(",
"\t\t\t\t\tStandardException.newException(",
"\t\t\t\t\t\t\tSQLState.LANG_INDEX_NOT_FOUND, ",
"\t\t\t\t\t\t\tindexname));",
"",
" \tquery.append(\" all update statistics \");",
" \tquery.append(\" update statistics \" + IdUtil.normalToDelimited(indexname));",
" PreparedStatement ps = conn.prepareStatement(query.toString());"
],
"header": "@@ -741,16 +747,24 @@ public class SystemProcedures {",
"removed": [
" String escapedSchema = IdUtil.normalToDelimited(schemaname);",
" String escapedTableName = IdUtil.normalToDelimited(tablename);",
" String query = \"alter table \" + escapedSchema + \".\" + escapedTableName;",
" \tquery = query + \" all update statistics \";",
" \tquery = query + \" update statistics \" + IdUtil.normalToDelimited(indexname);",
" PreparedStatement ps = conn.prepareStatement(query);"
]
},
{
"added": [
" * statistics will be dropped. null will mean use",
" * the current schema to resolve the table name.",
" * Empty string for schema name will raise an ",
" * exception.",
" * be dropped. A null value or an empty string will",
" * throw table not found exception. Must be non-null.",
" * @param indexname If null, then drop the statistics for all the ",
" * indexes for the given table name. If not null and",
" * not empty string, then the user wants to drop the",
" * statistics for only the give index name.",
" * Empty string for index name will raise an ",
" * exception.",
"\t * @exception SQLException "
],
"header": "@@ -763,16 +777,21 @@ public class SystemProcedures {",
"removed": [
" * statistics will be dropped. Must be non-null, ",
" * no default is used.",
" * be dropped. Must be non-null.",
" * @param indexname Can be null. If not null or emptry string then the",
" * user wants to drop the statistics for only this",
" * index. If null, then drop the statistics for all",
" * the indexes for the given table name.",
"\t * @exception SQLException"
]
},
{
"added": [
" StringBuffer query = new StringBuffer();",
" query.append(\"alter table \");",
" query.append(basicSchemaTableValidation(schemaname,tablename));",
"",
" //Index name can't be empty string",
" if (indexname != null && indexname.length()==0)",
"\t\t\tthrow PublicAPI.wrapStandardException(",
"\t\t\t\t\tStandardException.newException(",
"\t\t\t\t\t\t\tSQLState.LANG_INDEX_NOT_FOUND, ",
"\t\t\t\t\t\t\tindexname));",
" ",
" \tquery.append(\" all drop statistics \");",
" \tquery.append(\" statistics drop \" + IdUtil.normalToDelimited(indexname));",
" PreparedStatement ps = conn.prepareStatement(query.toString());",
" /**",
" * Do following checks",
" * a)Schema name can't be empty string",
" * b)If schema name is null, then we use current schema",
" * c)Table name can't be null or empty string",
" * ",
" * @param schemaname If schema name is null, then we will use the ",
" * current schema to resolve the table name. Empty",
" * string for schema name will raise an exception.",
" * @param tablename If table name is null or an empty string, we will",
" * throw table not found exception.",
" * @return schemaname.tablename or tablename",
" * @throws SQLException ",
" * a)if schema name is empty string",
" * b)if table name is empty string",
" * c)if table name is null",
" */",
" private static String basicSchemaTableValidation(",
" String schemaname, String tablename) ",
" throws SQLException",
" {",
" //Schema name can't be empty string",
" if (schemaname != null && schemaname.length()==0)",
"\t\t\tthrow PublicAPI.wrapStandardException(",
"\t\t\t\t\tStandardException.newException(",
"\t\t\t\t\t\t\tSQLState.LANG_SCHEMA_DOES_NOT_EXIST, ",
"\t\t\t\t\t\t\tschemaname));",
"",
" //Table name can't be null or empty string",
" if ((tablename==null) || tablename.length()==0)",
"\t\t\tthrow PublicAPI.wrapStandardException(",
"\t\t\t\t\tStandardException.newException(",
"\t\t\t\t\t\t\tSQLState.LANG_TABLE_NOT_FOUND, ",
"\t\t\t\t\t\t\ttablename));",
" \t ",
" return IdUtil.mkQualifiedName(schemaname, tablename);",
" }",
""
],
"header": "@@ -780,22 +799,68 @@ public class SystemProcedures {",
"removed": [
" String escapedSchema = IdUtil.normalToDelimited(schemaname);",
" String escapedTableName = IdUtil.normalToDelimited(tablename);",
" String query = \"alter table \" + escapedSchema + \".\" + escapedTableName;",
" \tquery = query + \" all drop statistics \";",
" \tquery = query + \" statistics drop \" + IdUtil.normalToDelimited(indexname);",
" PreparedStatement ps = conn.prepareStatement(query);"
]
}
]
}
] |
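The validation rules a)-f) in the DERBY-5750 commit above can be sketched in plain Java. The class and method names here are hypothetical stand-ins (modeled loosely on the `basicSchemaTableValidation` helper in the diff), and plain `IllegalArgumentException` stands in for Derby's wrapped `StandardException`s:

```java
// Hypothetical sketch of the empty-string/null argument checks described in
// the commit message; not Derby's actual SystemProcedures code.
public class StatsArgCheck {
    /**
     * Resolves a schema.table pair:
     *  - empty schema name  -> "schema does not exist" error
     *  - null schema name   -> fall back to the current schema
     *  - null/empty table   -> "table not found" error
     */
    public static String qualify(String schema, String table, String currentSchema) {
        if (schema != null && schema.isEmpty()) {
            throw new IllegalArgumentException("Schema '' does not exist");
        }
        if (table == null || table.isEmpty()) {
            throw new IllegalArgumentException("Table '" + table + "' not found");
        }
        String s = (schema == null) ? currentSchema : schema;
        return "\"" + s + "\".\"" + table + "\"";
    }
}
```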
derby-DERBY-5751-0881157e
|
DERBY-5751: Make TriggerTest less hungry on heap space
- Use LoopingAlphabetStream and LoopingAlphabetReader instead of
ByteArrayInputStream and CharArrayReader so that the input data
arrays don't need to be materialized in memory.
- Close statements and result sets earlier to allow gc of old
test data.
- Use shared helper methods BaseTestCase.assertEquals(Reader,Reader)
and BaseTestCase.assertEquals(InputStream,InputStream). These also
ensure that the readers and streams are closed.
- Added new helper methods to ByteAlphabet and CharAlphabet to make
it easier to create alphabets consisting of a single value.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1336527 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/streams/ByteAlphabet.java",
"hunks": [
{
"added": [
" /**",
" * Create an alphabet that consists of a single byte.",
" */",
" public static ByteAlphabet singleByte(byte b) {",
" return new ByteAlphabet(",
" \"Single byte: \" + b,",
" new char[] { (char) (b & 0xff) },",
" \"US-ASCII\");",
" }",
""
],
"header": "@@ -113,6 +113,16 @@ public class ByteAlphabet {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/streams/CharAlphabet.java",
"hunks": [
{
"added": [
" /**",
" * Get an alphabet consisting of a single character.",
" */",
" public static CharAlphabet singleChar(char ch) {",
" return new CharAlphabet(\"Single char: \" + ch, new char[] { ch });",
" }",
""
],
"header": "@@ -92,6 +92,13 @@ public class CharAlphabet {",
"removed": []
}
]
}
] |
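The heap-saving idea in the DERBY-5751 commit above — generating long test data from a tiny alphabet instead of materializing a full array — can be illustrated with a single-character reader. This is a simplified stand-in, not Derby's `LoopingAlphabetReader`:

```java
import java.io.Reader;
import java.util.Arrays;

// Illustrative sketch: a Reader producing an arbitrarily long run of one
// character without holding the data in memory, in the spirit of
// LoopingAlphabetReader with a singleChar alphabet.
public class SingleCharReader extends Reader {
    private final char ch;
    private long remaining;

    public SingleCharReader(char ch, long length) {
        this.ch = ch;
        this.remaining = length;
    }

    @Override
    public int read(char[] buf, int off, int len) {
        if (remaining == 0) {
            return -1; // end of stream
        }
        int n = (int) Math.min(len, remaining);
        Arrays.fill(buf, off, off + n, ch);
        remaining -= n;
        return n;
    }

    @Override
    public void close() {
        // nothing to release
    }
}
```

A 1 GB stream built this way costs only a few bytes of state, which is why the test switched away from `CharArrayReader`.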
derby-DERBY-5752-c2fe2805
|
DERBY-5752: LOBStreamControl should materialize less aggressively
Only materialize LOBs that are smaller than 32 KB in memory.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1447722 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/LOBStreamControl.java",
"hunks": [
{
"added": [
" * size is set to the size of the byte array supplied, but no larger than",
" * MAX_BUF_SIZE. If no initial data is supplied, or if the initial data size",
" * is less than DEFAULT_BUF_SIZE, the buffer size is set to DEFAULT_BUF_SIZE.",
" * initial buffer size, the data is moved into memory."
],
"header": "@@ -44,13 +44,12 @@ import org.apache.derby.shared.common.reference.MessageId;",
"removed": [
" * size is set to the size of the byte array supplied. If no initial data",
" * is supplied or if the initial data size is less than DEFAULT_MAX_BUF_SIZE,",
" * The buffer size is set to DEFAULT_MAX_BUF_SIZE.",
" * initial buffer size (max of DEFAULT_MAX_BUF_SIZE and initial byte array size)",
" * the data moved into memory."
]
},
{
"added": [
" private static final int DEFAULT_BUF_SIZE = 4096;",
" private static final int MAX_BUF_SIZE = 32768;"
],
"header": "@@ -63,7 +62,8 @@ class LOBStreamControl {",
"removed": [
" private static final int DEFAULT_MAX_BUF_SIZE = 4096;"
]
},
{
"added": [
" bufferSize = DEFAULT_BUF_SIZE;"
],
"header": "@@ -73,7 +73,7 @@ class LOBStreamControl {",
"removed": [
" bufferSize = DEFAULT_MAX_BUF_SIZE;"
]
}
]
}
] |
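The buffer policy in the DERBY-5752 diff above can be summarized in a couple of pure functions. The constants mirror `DEFAULT_BUF_SIZE`/`MAX_BUF_SIZE` from the diff, but the class itself is an illustrative sketch, not `LOBStreamControl`:

```java
// Sketch of the materialization policy described in the commit: small LOBs
// live in an in-memory buffer, larger ones spill to disk.
public class LobBufferPolicy {
    static final int DEFAULT_BUF_SIZE = 4096;
    static final int MAX_BUF_SIZE = 32768;

    /**
     * Initial buffer size: the size of the supplied data, but at least
     * DEFAULT_BUF_SIZE and no larger than MAX_BUF_SIZE.
     */
    public static int initialBufferSize(int initialDataLength) {
        return Math.max(DEFAULT_BUF_SIZE,
                        Math.min(initialDataLength, MAX_BUF_SIZE));
    }

    /** Only LOBs smaller than 32 KB are materialized in memory. */
    public static boolean keepInMemory(long length) {
        return length < MAX_BUF_SIZE;
    }
}
```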
derby-DERBY-5755-9f6c6783
|
DERBY-5755: Minor cleanup of DataDictionaryImpl.getRoutineList()
Use java.util.Collections to create empty and single-element lists.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338167 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" // We expect to find just a single function, since we currently",
" // don't support multiple routines with the same name, but use a",
" // list to support future extension.",
" List list = new ArrayList(1);",
""
],
"header": "@@ -7761,12 +7761,15 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tjava.util.List list = new java.util.ArrayList();",
"\t\t"
]
},
{
"added": [
" return ad == null ?",
" Collections.EMPTY_LIST :",
" Collections.singletonList(ad);"
],
"header": "@@ -7819,10 +7822,9 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tif (ad != null) {",
"\t\t\tlist.add(ad);",
"\t\t}",
"\t\treturn list;"
]
}
]
}
] |
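The DERBY-5755 cleanup above swaps a mutable `ArrayList` for the immutable helpers in `java.util.Collections`. A small demonstration of the same pattern (the diff itself uses the pre-generics `Collections.EMPTY_LIST`; the generic `emptyList()` shown here is equivalent):

```java
import java.util.Collections;
import java.util.List;

// Demonstrates the null-descriptor handling pattern from the refactoring:
// empty list when nothing was found, single-element list otherwise.
public class RoutineListDemo {
    public static <T> List<T> asList(T descriptor) {
        return descriptor == null
                ? Collections.<T>emptyList()
                : Collections.singletonList(descriptor);
    }
}
```

Both returned lists are immutable and allocation-cheap, which suits a read-only result that usually holds zero or one element.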
derby-DERBY-5759-5a72b2c2
|
DERBY-5759: Add IndexStatsUtil.release(boolean closeConnection)
Made it possible to clean up the resources used by the utility without
closing the connection.
Patch file: derby-5759-1a-release_with_arg.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338068 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/IndexStatsUtil.java",
"hunks": [
{
"added": [
" * Releases resources and closes the associated connection.",
" release(true);",
" }",
"",
" /**",
" * Releases resources.",
" *",
" * @param closeConnection whether to close the associated connection",
" */",
" public void release(boolean closeConnection) {"
],
"header": "@@ -410,9 +410,18 @@ public class IndexStatsUtil {",
"removed": [
" * Releases resources."
]
},
{
"added": [
" if (closeConnection) {",
" con.close();",
" }"
],
"header": "@@ -431,7 +440,9 @@ public class IndexStatsUtil {",
"removed": [
" con.close();"
]
}
]
}
] |
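The DERBY-5759 patch above uses a common overload-delegation pattern: the existing no-arg `release()` keeps its behavior by delegating to a new parameterized overload. A sketch with boolean flags standing in for the JDBC resources:

```java
// Illustrative sketch of the release()/release(boolean) delegation pattern
// from the patch; the flags stand in for real JDBC statements/connection.
public class ReleasableUtil {
    private boolean connectionClosed = false;
    private boolean statementsClosed = false;

    /** Releases resources and closes the associated connection. */
    public void release() {
        release(true);
    }

    /** Releases resources; optionally closes the associated connection. */
    public void release(boolean closeConnection) {
        statementsClosed = true;
        if (closeConnection) {
            connectionClosed = true;
        }
    }

    public boolean isConnectionClosed() { return connectionClosed; }
    public boolean isStatementsClosed() { return statementsClosed; }
}
```

Existing callers of `release()` are unaffected, while new callers can keep the connection open for reuse.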
derby-DERBY-576-7c812c76
|
DERBY-576 xaHelper in ij creates global id that is not the same across platforms
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@740048 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/impl/tools/ij/xaHelper.java",
"hunks": [
{
"added": [
"",
"import java.io.UnsupportedEncodingException;"
],
"header": "@@ -22,6 +22,8 @@",
"removed": []
},
{
"added": [
"\t\ttry {",
"\t\t\treturn new ijXid(xid, databaseName.getBytes(\"UTF-8\"));",
"\t\t} catch (UnsupportedEncodingException e) {",
"\t\t\t// UTF-8 is a required encoding. We should never get here.",
"\t\t\te.printStackTrace();",
"\t\t\treturn null;",
"\t\t}"
],
"header": "@@ -79,7 +81,13 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\treturn new ijXid(xid, databaseName.getBytes());"
]
}
]
}
] |
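The DERBY-576 fix above matters because `String.getBytes()` with no argument uses the platform default charset, so the same database name could produce different XID bytes on different machines. Pinning the encoding makes the global id portable. A sketch of the fixed conversion (class name hypothetical):

```java
import java.io.UnsupportedEncodingException;

// Sketch of the portable byte conversion from the fix: always encode with
// UTF-8 instead of the platform default charset.
public class XidBytes {
    public static byte[] portableBytes(String databaseName) {
        try {
            return databaseName.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is a required encoding on every JVM; this cannot happen.
            throw new AssertionError(e);
        }
    }
}
```

On modern JDKs, `databaseName.getBytes(StandardCharsets.UTF_8)` achieves the same without the checked exception.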
derby-DERBY-5760-d349a1fb
|
DERBY-5760: Missing argument in some XJ022 errors
Use helper method that sets the argument automatically.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1341350 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedBlob.java",
"hunks": [
{
"added": [
" throws StandardException, SQLException"
],
"header": "@@ -154,7 +154,7 @@ final class EmbedBlob extends ConnectionChild implements Blob, EngineLOB",
"removed": [
" throws StandardException"
]
},
{
"added": [
" throw Util.setStreamFailure(e);"
],
"header": "@@ -189,8 +189,7 @@ final class EmbedBlob extends ConnectionChild implements Blob, EngineLOB",
"removed": [
" throw StandardException.newException (",
" SQLState.SET_STREAM_FAILURE, e);"
]
},
{
"added": [
" throws StandardException, SQLException {"
],
"header": "@@ -210,7 +209,7 @@ final class EmbedBlob extends ConnectionChild implements Blob, EngineLOB",
"removed": [
" throws StandardException {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java",
"hunks": [
{
"added": [
" throws StandardException, SQLException"
],
"header": "@@ -102,7 +102,7 @@ final class EmbedClob extends ConnectionChild implements Clob, EngineLOB",
"removed": [
" throws StandardException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/TemporaryClob.java",
"hunks": [
{
"added": [
" throws IOException, StandardException {"
],
"header": "@@ -179,7 +179,7 @@ final class TemporaryClob implements InternalClob {",
"removed": [
" throws IOException, SQLException, StandardException {"
]
}
]
}
] |
derby-DERBY-5762-5ea170fe
|
DERBY-5762: Normalize casing of username inside NATIVE procedures.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1338760 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SystemProcedures.java",
"hunks": [
{
"added": [
" userName = normalizeUserName( userName );",
" "
],
"header": "@@ -2090,6 +2090,8 @@ public class SystemProcedures {",
"removed": []
},
{
"added": [
" {",
" resetAuthorizationIDPassword( normalizeUserName( userName ), password );",
" }",
"",
" /**",
" * Reset the password for an already normalized authorization id.",
" */",
" private static void resetAuthorizationIDPassword",
" (",
" String userName,",
" String password",
" )",
" throws SQLException"
],
"header": "@@ -2196,6 +2198,19 @@ public class SystemProcedures {",
"removed": []
},
{
"added": [
" "
],
"header": "@@ -2221,7 +2236,7 @@ public class SystemProcedures {",
"removed": [
" "
]
},
{
"added": [
" resetAuthorizationIDPassword( currentUser, password );"
],
"header": "@@ -2233,7 +2248,7 @@ public class SystemProcedures {",
"removed": [
" SYSCS_RESET_PASSWORD( currentUser, password );"
]
},
{
"added": [
" userName = normalizeUserName( userName );",
" "
],
"header": "@@ -2245,6 +2260,8 @@ public class SystemProcedures {",
"removed": []
}
]
}
] |
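The "normalize casing" step in the DERBY-5762 commit above follows the usual SQL identifier rule: an ordinary identifier is case-normalized to upper case, while a delimited identifier (`"..."`) keeps its exact case. A simplified stand-in for Derby's IdUtil-style handling (it does not handle embedded double quotes):

```java
import java.util.Locale;

// Simplified sketch of SQL authorization-id normalization; not Derby's
// actual IdUtil code.
public class UserNameNormalizer {
    public static String normalize(String userName) {
        if (userName.length() >= 2
                && userName.charAt(0) == '"'
                && userName.charAt(userName.length() - 1) == '"') {
            // Delimited identifier: strip the quotes, preserve case.
            return userName.substring(1, userName.length() - 1);
        }
        // Ordinary identifier: case-normalize to upper case.
        return userName.toUpperCase(Locale.ENGLISH);
    }
}
```

Without this step, `fred` and `FRED` would be treated as two different users inside the NATIVE procedures.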
derby-DERBY-5764-720294ef
|
DERBY-5764: Make DatabaseMetaDataTest more robust wrt changes made by other tests
Use schema name in the queries to avoid "pollution" of the system tables from
other tests, specifically when run as part of the upgrade test. The schema name
is set to the user name, so to enable this feature wrap the test in a decorator
that changes the user.
Wrapped a ChangeUserDecorator around the tests from DatabaseMetaDataTest in the
upgrade tests.
Patch file: derby-5764-2a-specify_schema.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1339240 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5764-cde004ca
|
DERBY-5764: Make DatabaseMetaDataTest more robust wrt changes made by other tests
Minor cleanups: removed unused imports, removed final from static method,
renamed method, and converted comment to Javadoc.
Patch file: derby-5764-1a-upgraderun_cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1339007 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5764-fa3a4bc0
|
DERBY-5764: Make DatabaseMetaDataTest more robust wrt changes made by other tests
Added some additional tests to test code paths where schema is set to null.
Added utility method JDBC.assertResultSetContains, used where the query may
return more rows due to data added by other tests but we still want to assert
that a specific subset of rows exists in the result.
Patch file: derby-5764-3b-add_test_case_schema_null.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1351212 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [
" assertRSContains(rs, expectedRows, asTrimmedStrings, true);",
" }",
"",
" /**",
" * Asserts that the {@code ResultSet} contains the rows specified by the",
" * two-dimensional array.",
" * <p>",
" * The order of the rows are ignored, and there may be more rows in the",
" * result set than in the array. All values are compared as trimmed strings.",
" *",
" * @param rs the result set to check",
" * @param expectedRows the rows that must exist in the result set",
" * @throws SQLException if accessing the result set fails",
" */",
" public static void assertResultSetContains(",
" ResultSet rs, Object[][] expectedRows)",
" throws SQLException {",
" assertRSContains(rs, expectedRows, true, false);",
" }",
" /**",
" * Asserts that the {@code ResultSet} contains the rows specified by the",
" * two-dimensional array.",
" *",
" * @param rs the result set to check",
" * @param expectedRows the rows that must exist in the result set",
" * @param asTrimmedStrings whether the objects should be compared as",
" * trimmed strings",
" * @param rowCountsMustMatch whether the number of rows must be the same in",
" * the result set and the array of expected rows",
" * @throws SQLException if accessing the result set fails",
" */",
" private static void assertRSContains(",
" ResultSet rs, Object[][] expectedRows, boolean asTrimmedStrings,",
" boolean rowCountsMustMatch)",
" throws SQLException {",
" if (rowCountsMustMatch) {",
" assertEmpty(rs);",
" }"
],
"header": "@@ -1304,9 +1304,46 @@ public class JDBC {",
"removed": [
" assertEmpty(rs);"
]
},
{
"added": [
" if (rowCountsMustMatch) {",
" Assert.assertEquals(\"Unexpected row count\",",
" expectedRows.length, actual.size());",
" }"
],
"header": "@@ -1346,13 +1383,12 @@ public class JDBC {",
"removed": [
" Assert.assertEquals(\"Unexpected row count\",",
" expectedRows.length, actual.size());",
"",
" actual.removeAll(expected);",
" Assert.assertTrue(\"Extra rows in ResultSet\", actual.isEmpty());"
]
}
]
}
] |
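The containment semantics of `assertResultSetContains` above — every expected row must be present, extra rows from other tests are tolerated — can be shown without a live `java.sql.ResultSet`. A stand-alone sketch over string arrays (names hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the subset check behind assertResultSetContains: expected rows
// must all be found, but additional actual rows are ignored.
public class ContainsCheck {
    public static boolean containsAll(String[][] actualRows, String[][] expectedRows) {
        List<List<String>> actual = new ArrayList<List<String>>();
        for (String[] row : actualRows) {
            actual.add(Arrays.asList(row));
        }
        for (String[] row : expectedRows) {
            // remove() returns false if the expected row is missing; removing
            // the match also handles duplicate expected rows correctly.
            if (!actual.remove(Arrays.asList(row))) {
                return false;
            }
        }
        return true;
    }
}
```

The JUnit helper additionally supports a strict mode (`rowCountsMustMatch`) where leftover actual rows are a failure; that corresponds to also asserting `actual.isEmpty()` at the end.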
derby-DERBY-5770-ebe46425
|
DERBY-5770: Reduce window of opportunity for queries being compiled without statistics on istat update
Moved invalidation to after the new statistics have been written.
Patch file: derby-5770-1a-move_invalidation.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1339999 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java",
"hunks": [
{
"added": [
" if (!lcc.dataDictionaryInWriteMode()) {",
" dd.startWriting(lcc);",
" }",
" boolean conglomerateGone = false; // invalidation control flag"
],
"header": "@@ -556,13 +556,13 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" // Invalidate statments accessing the given table.",
" // Note that due to retry logic, swithcing the data dictionary to",
" // write mode is done inside invalidateStatements.",
" invalidateStatements(lcc, td, asBackgroundTask);"
]
},
{
"added": [
" conglomerateGone = (cd == null);",
" if (!conglomerateGone) {",
" // Invalidate statments accessing the given table.",
" invalidateStatements(lcc, td, asBackgroundTask);",
" }"
],
"header": "@@ -596,8 +596,13 @@ public class IndexStatisticsDaemonImpl",
"removed": []
},
{
"added": [
" if (!lcc.dataDictionaryInWriteMode()) {"
],
"header": "@@ -621,13 +626,11 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" boolean inWrite = false;",
" if (!inWrite) {",
" inWrite = true;"
]
},
{
"added": [],
"header": "@@ -644,7 +647,6 @@ public class IndexStatisticsDaemonImpl",
"removed": [
" inWrite = false;"
]
}
]
}
] |
derby-DERBY-5774-387174c8
|
DERBY-5774: Failures in UpdateStatisticsTest (order-dependent test cases)
Made asserts on statistics specific to the relevant table(s) in testUpdateAndDropStatistics.
Made testDisposableStatsEagerness drop the tables it creates.
Patch file: derby-5774-1a-focused_asserts.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1341019 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5779-34681435
|
DERBY-5779: Prevent VTIS in FROM list subqueries from referencing other elements in the FROM list.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1362159 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromSubquery.java",
"hunks": [
{
"added": [
"import java.util.Enumeration;",
"import java.util.Vector;"
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromVTI.java",
"hunks": [
{
"added": [
"import java.util.ArrayList;"
],
"header": "@@ -29,6 +29,7 @@ import java.sql.PreparedStatement;",
"removed": []
},
{
"added": [
" // If this FromVTI is invoked in a subquery which is invoked in an outer FROM list,",
" // then arguments to this FromVTI may not reference other tables in that FROM list.",
" // See DERBY-5779. Here is an example of a reference we want to forbid:",
" //",
" // select tt.*",
" // from",
" // sys.systables systabs,",
" // ( select * from table (syscs_diag.space_table( systabs.tablename )) as t2 ) tt",
" // where systabs.tabletype = 'T' and systabs.tableid = tt.tableid;",
" //",
" private ArrayList outerFromLists = new ArrayList();",
" "
],
"header": "@@ -117,6 +118,18 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": []
},
{
"added": [
" /**",
" * Add a FromList to the collection of FromLists which bindExpressions() checks",
" * when vetting VTI arguments which reference columns in other tables.",
" * See DERBY-5554 and DERBY-5779.",
" */",
" public void addOuterFromList( FromList fromList )",
" {",
" outerFromLists.add( fromList );",
" }",
""
],
"header": "@@ -337,6 +350,16 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": []
},
{
"added": [
" if ( ref.getCorrelated() ) // the arg refers to a table in an outer query block",
" // If the outer table appears in a FROM list alongside a subquery which",
" // we're inside, then the reference is undefined and illegal. The following query",
" // is an example of this problem. Again, see DERBY-5779.",
" //",
" // select tt.*",
" // from",
" // sys.systables systabs,",
" // ( select * from table (syscs_diag.space_table( systabs.tablename )) as t2 ) tt",
" // where systabs.tabletype = 'T' and systabs.tableid = tt.tableid;",
" //",
" for ( int i = 0; i < outerFromLists.size(); i++ )",
" FromTable fromTable = columnInFromList( (FromList) outerFromLists.get( i ), ref );",
" if ( fromTable != null )",
" illegalReference = true;",
" break;",
" else // the arg refers to a table in this query block",
" {",
" FromTable fromTable = columnInFromList( fromListParam, ref );",
" if ( fromTable != null )",
" {",
" // the only legal kind of reference is a VTI argument which",
" // references a non-VTI/tableFunction table in the current query block",
" if ( !isDerbyStyleTableFunction && !(fromTable instanceof FromVTI) )",
" {",
" illegalReference = false;",
" break;",
" }",
" }",
" }",
""
],
"header": "@@ -888,32 +911,46 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
" int referencedTableNumber = ref.getTableNumber();",
" if ( !ref.getCorrelated() ) // if the arg refers to a table in this query block",
" for ( int i = 0; i < fromListParam.size(); i++ )",
" FromTable fromTable = (FromTable) fromListParam.elementAt( i );",
" if ( referencedTableNumber == fromTable.getTableNumber() )",
" // remember this FromTable so that we can code generate the arg",
" // from actual result columns later on.",
" argSources.put( new Integer( fromTable.getTableNumber() ), fromTable );",
"",
" // the only legal kind of reference is a VTI argument which",
" // references a non-VTI table in the current query block",
" if ( !isDerbyStyleTableFunction && !(fromTable instanceof FromVTI) )",
" {",
" illegalReference = false;",
" break;",
" }",
" "
]
}
]
}
] |
derby-DERBY-5779-8c3c7e88
|
DERBY-5779: Add more tests for illegal joins of VTI/tableFunction args in <joined table> clauses.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1360846 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5779-9495b40d
|
DERBY-5779: Forbid joining to VTI/tableFunction args in <joined table> clauses.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1360736 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5779-a4c1c3a3
|
DERBY-5779: Do not let table function parameters refer to other tables in the same FROM list.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1352631 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-578-fee55d87
|
DERBY-578 - Grouped select from temporary table raises null pointer exception in byte code generator
DERBY-1464 - runtimestatistics can show that an index is being used even when it isn't
Contributed by Manish Khettry
The problem is simple enough: we didn't have a conglomerate name for temporary tables. I fixed the code to behave more like what fillInScanArgs does.
Earlier, we would set the indexName field in DistinctScanResult to the conglomerate name (cd.getName()) used to scan the table. If the conglomerate was the base table itself then this was just plain wrong. The change, for this patch, passes null if no index is being used.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@418672 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"\t",
" /* helper method used by generateMaxSpecialResultSet and",
" * generateDistinctScan to return the name of the index if the ",
" * conglomerate is an index. ",
" * @param cd Conglomerate for which we need to push the index name",
" * @param mb Associated MethodBuilder",
" * @throws StandardException",
" */",
" private void pushIndexName(ConglomerateDescriptor cd, MethodBuilder mb) ",
" throws StandardException",
" {",
" if (cd.isConstraint()) {",
" DataDictionary dd = getDataDictionary();",
" ConstraintDescriptor constraintDesc = ",
" dd.getConstraintDescriptor(tableDescriptor, cd.getUUID());",
" mb.push(constraintDesc.getConstraintName());",
" } else if (cd.isIndex()) {",
" mb.push(cd.getConglomerateName());",
" } else {",
" // If the conglomerate is the base table itself, make sure we push null.",
" // Before the fix for DERBY-578, we would push the base table name ",
" // and this was just plain wrong and would cause statistics information to be incorrect.",
" mb.pushNull(\"java.lang.String\");",
" }",
" }",
"\t",
" private void generateMaxSpecialResultSet"
],
"header": "@@ -3085,8 +3085,33 @@ public class FromBaseTable extends FromTable",
"removed": [
"",
"\tprivate void generateMaxSpecialResultSet"
]
},
{
"added": [
"\t\t**\t\ttableName,",
"\t\t**\t\toptimizeroverride\t\t\t"
],
"header": "@@ -3106,7 +3131,8 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t**\t\ttableName,\t\t\t"
]
},
{
"added": [
" pushIndexName(cd, mb);"
],
"header": "@@ -3132,7 +3158,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\tmb.push(cd.getConglomerateName());"
]
},
{
"added": [
"\t\t**\t\ttableName,",
"\t\t**\t\toptimizeroverride\t\t\t"
],
"header": "@@ -3167,7 +3193,8 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t**\t\ttableName,\t\t\t"
]
}
]
},
{
"file": "java/tools/org/apache/derby/impl/tools/ij/xaHelper.java",
"hunks": [
{
"added": [
" if (fm == null) {",
" return;",
" }"
],
"header": "@@ -64,6 +64,9 @@ class xaHelper implements xaAbstractHelper",
"removed": []
}
]
}
] |
derby-DERBY-5780-ceaf7dfd
|
DERBY-5494 Same value returned by successive calls to a sequence generator flanking an unorderly shutdown.
DERBY-5780 identity column performance has degraded
The previous patch for DERBY-5494 had the unintended effect of forcing a
synchronous write for all nested user transactions at abort time. This
in turn caused identity column inserts to have one synchronous write per
insert as the nested user transaction is destroyed for each insert which
does an abort each time.
To solve this, interfaces were changed so that calling code could set the
default commit sync behavior when the transaction was committed rather than
count on the "type" of transaction. Nested user transactions used for identity
columns have default set to not sync, and the rest of the nested user transactions
default to syncing. Behavior of other types of transactions should not
be affected. User transactions still sync by default and internal and ntt's still
default to not sync.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1344065 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/store/raw/RawStoreFactory.java",
"hunks": [
{
"added": [
" @param flush_log_on_xact_end By default should the transaction ",
" commit and abort be synced to the log. Normal usage should pick true, ",
" unless there is specific performance need and usage works correctly if ",
" a commit can be lost on system crash."
],
"header": "@@ -723,6 +723,10 @@ public interface RawStoreFactory extends Corruptable {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/store/raw/xact/TransactionFactory.java",
"hunks": [
{
"added": [
" @param contextMgr is the context manager to use. It must",
" be the current context manager.",
" @param transName is the transaction name. It will be ",
" displayed in the transactiontable VTI.",
" @param flush_log_on_xact_end By default should the transaction commit",
" and abort be synced to the log. Normal",
" usage should pick true, unless there",
" is specific performance need and usage",
" works correctly if a commit can be ",
" lost on system crash."
],
"header": "@@ -117,10 +117,16 @@ public interface TransactionFactory extends Corruptable {",
"removed": [
" @param contextMgr is the context manager to use. It must be ",
" the current context manager.",
" @param transName is the transaction name. It will be ",
" displayed in the transactiontable VTI."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java",
"hunks": [
{
"added": [
" TransactionController subTransaction = ",
" executionTC.startNestedUserTransaction( true, true );",
""
],
"header": "@@ -272,7 +272,9 @@ public abstract class SequenceUpdater implements Cacheable",
"removed": [
" TransactionController subTransaction = executionTC.startNestedUserTransaction( true );"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/RAMTransaction.java",
"hunks": [
{
"added": [
" *",
" * @param readOnly Is transaction readonly? Only 1 non-read",
" * only nested transaction is allowed per ",
" * transaction.",
" *",
" * @param flush_log_on_xact_end By default should the transaction commit",
" * and abort be synced to the log. Normal",
" * usage should pick true, unless there is",
" * specific performance need and usage ",
" * works correctly if a commit can be lost",
" * on system crash.",
" public TransactionController startNestedUserTransaction(",
" boolean readOnly,",
" boolean flush_log_on_xact_end)"
],
"header": "@@ -2301,12 +2301,25 @@ public class RAMTransaction",
"removed": [
" public TransactionController startNestedUserTransaction(boolean readOnly)"
]
},
{
"added": [
" getLockSpace(), ",
" cm,",
" cm, ",
" AccessFactoryGlobals.NESTED_UPDATE_USER_TRANS,",
" flush_log_on_xact_end));"
],
"header": "@@ -2327,10 +2340,13 @@ public class RAMTransaction",
"removed": [
" getLockSpace(), cm,",
" cm, AccessFactoryGlobals.NESTED_UPDATE_USER_TRANS));"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/xact/Xact.java",
"hunks": [
{
"added": [
" // Whether or not to flush log on commit or abort. ",
" // Current usage:",
" // User transactions default to flush. Internal and nested top",
" // transactions default to not flush. ",
" //",
" // Nested user update transactions are configured when they are created, ",
" // and most default to flush. Nested user update transaction used for",
" // identity column maintenance defaults to not flush to maintain ",
" // backward performance compatibility with previous releases.",
" //",
" // In all cases log will not be flushsed by Xact.prepareCommit()",
" // if commitNoSync() has been called rather than commit.",
" private boolean flush_log_on_xact_end;",
""
],
"header": "@@ -243,6 +243,20 @@ public class Xact extends RawTransaction implements Limit, LockOwner {",
"removed": []
},
{
"added": [
" CompatibilitySpace compatibilitySpace,",
" boolean flush_log_on_xact_end)",
"\t\tthis.xactFactory = xactFactory;",
"\t\tthis.logFactory = logFactory;",
"\t\tthis.dataFactory = dataFactory;",
"\t\tthis.dataValueFactory = dataValueFactory;",
"\t\tthis.readOnly = readOnly;",
"\t\tthis.flush_log_on_xact_end = flush_log_on_xact_end;"
],
"header": "@@ -264,16 +278,18 @@ public class Xact extends RawTransaction implements Limit, LockOwner {",
"removed": [
" CompatibilitySpace compatibilitySpace)",
"\t\tthis.xactFactory = xactFactory;",
"\t\tthis.logFactory = logFactory;",
"\t\tthis.dataFactory = dataFactory;",
"\t\tthis.dataValueFactory = dataValueFactory;",
"\t\tthis.readOnly = readOnly;"
]
},
{
"added": [],
"header": "@@ -298,11 +314,6 @@ public class Xact extends RawTransaction implements Limit, LockOwner {",
"removed": [
"",
" /*",
" System.out.println(\"Xact.constructor: readonly = \" + this.readOnly +",
" \";this = \" + this);",
" */"
]
},
{
"added": [
"\t\t\tif (seenUpdates) ",
" {"
],
"header": "@@ -773,8 +784,8 @@ public class Xact extends RawTransaction implements Limit, LockOwner {",
"removed": [
"\t\t\tif (seenUpdates) {",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/xact/XactFactory.java",
"hunks": [
{
"added": [
" RawStoreFactory rsf, ",
" ContextManager cm,",
" boolean readOnly,",
" CompatibilitySpace compatibilitySpace,",
" String xact_context_id,",
" String transName,",
" boolean excludeMe,",
" boolean flush_log_on_xact_end)"
],
"header": "@@ -312,13 +312,14 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" RawStoreFactory rsf,",
" ContextManager cm,",
" boolean readOnly,",
" CompatibilitySpace compatibilitySpace,",
" String xact_context_id,",
" String transName,",
" boolean excludeMe)"
]
},
{
"added": [
" readOnly, compatibilitySpace, flush_log_on_xact_end);"
],
"header": "@@ -335,7 +336,7 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" readOnly, compatibilitySpace);"
]
},
{
"added": [
" return(",
" startCommonTransaction(",
" rsf, ",
" cm, ",
" false, // user xact always read/write ",
" null, ",
" USER_CONTEXT_ID, ",
" transName, ",
" true, // user xact always excluded during quiesce",
" true)); // user xact default flush on xact end"
],
"header": "@@ -351,8 +352,16 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" return(startCommonTransaction(",
" rsf, cm, false, null, USER_CONTEXT_ID, transName, true));"
]
},
{
"added": [
" return(",
" startCommonTransaction(",
" rsf, ",
" cm, ",
" true, ",
" compatibilitySpace, ",
" NESTED_READONLY_USER_CONTEXT_ID, ",
" transName, ",
" false,",
" true)); // user readonly xact default flush on xact",
" // end, should never have anything to flush.",
" String transName,",
" boolean flush_log_on_xact_end)",
" return(",
" startCommonTransaction(",
" rsf, ",
" cm, ",
" false, ",
" null, ",
" NESTED_UPDATE_USER_CONTEXT_ID, ",
" transName, ",
" true,",
" flush_log_on_xact_end)); // allow caller to choose default ",
" // log log flushing on commit/abort",
" // for internal operations used ",
" // nested user update transaction."
],
"header": "@@ -362,20 +371,39 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" return(startCommonTransaction(",
" rsf, cm, true, compatibilitySpace, ",
" NESTED_READONLY_USER_CONTEXT_ID, transName, false));",
" String transName)",
" return(startCommonTransaction(",
" rsf, cm, false, null, ",
" NESTED_UPDATE_USER_CONTEXT_ID, transName, true));"
]
},
{
"added": [
" rsf, ",
" cm, ",
" false, ",
" null, ",
" USER_CONTEXT_ID, ",
" AccessFactoryGlobals.USER_TRANS_NAME, ",
" true,",
" true); // user xact default flush on xact end"
],
"header": "@@ -395,8 +423,14 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" rsf, cm, false, null, ",
" USER_CONTEXT_ID, AccessFactoryGlobals.USER_TRANS_NAME, true);"
]
},
{
"added": [
" this, logFactory, dataFactory, dataValueFactory, ",
" false, null, false);"
],
"header": "@@ -443,7 +477,8 @@ public class XactFactory implements TransactionFactory, ModuleControl, ModuleSup",
"removed": [
" this, logFactory, dataFactory, dataValueFactory, false, null);"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/unitTests/store/T_AccessFactory.java",
"hunks": [
{
"added": [
" TransactionController child_tc = ",
" tc.startNestedUserTransaction(true, true);"
],
"header": "@@ -2987,7 +2987,8 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" TransactionController child_tc = tc.startNestedUserTransaction(true);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(true, true);",
" child_tc.startNestedUserTransaction(true, true);"
],
"header": "@@ -3033,11 +3034,11 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(true);",
" child_tc.startNestedUserTransaction(true);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(true, true);"
],
"header": "@@ -3074,7 +3075,7 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(true);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(true, true);"
],
"header": "@@ -3132,7 +3133,7 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(true);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(true, true);"
],
"header": "@@ -3183,7 +3184,7 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(true);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(false, true);"
],
"header": "@@ -3213,7 +3214,7 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(false);"
]
},
{
"added": [
" child_tc = tc.startNestedUserTransaction(false, true);"
],
"header": "@@ -3253,7 +3254,7 @@ public class T_AccessFactory extends T_Generic",
"removed": [
" child_tc = tc.startNestedUserTransaction(false);"
]
}
]
}
] |
derby-DERBY-5783-02ac42ce
|
DERBY-5783: Remove duplicated code for starting remote processes in replication tests
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1344190 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-579-1373f5a6
|
Fixes DERBY-579 by not storing the timeout value in the
GenericPreparedStatement class, just passing it through.
SetQueryTimeoutTest updated, now uses a server-side function to delay
queries. This gives predictability and because of that, the running
time of the test has been significantly reduced. Also, the test now
uses the same tables in multiple statements, in order to hit the
statement cache and check that the timeout affects only the right
statements.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@293585 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/PreparedStatement.java",
"hunks": [
{
"added": [
" * @param timeoutMillis timeout value in milliseconds."
],
"header": "@@ -106,6 +106,7 @@ public interface PreparedStatement",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
" org.apache.derby.iapi.sql.ResultSet rs = ps.execute(act, false, true, true, 0L); //execute the update where current of sql"
],
"header": "@@ -3200,8 +3200,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" ps.setQueryTimeout(0L);",
" org.apache.derby.iapi.sql.ResultSet rs = ps.execute(act, false, true, true); //execute the update where current of sql"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -138,7 +138,6 @@ public class GenericPreparedStatement",
"removed": [
" private long timeoutMillis; // Timeout value, in milliseconds."
]
},
{
"added": [],
"header": "@@ -181,7 +180,6 @@ public class GenericPreparedStatement",
"removed": [
" timeoutMillis = 0L; // 0 means no timeout; default."
]
},
{
"added": [
" public ResultSet execute(LanguageConnectionContext lcc,",
" boolean rollbackParentContext,",
" long timeoutMillis)",
"\t\treturn execute(a, false, false, rollbackParentContext, timeoutMillis);"
],
"header": "@@ -238,24 +236,14 @@ public class GenericPreparedStatement",
"removed": [
" /**",
" * Sets a timeout value for execution of this statement.",
" * Will also apply to each row fetch from the ResultSet",
" * produced by this statement.",
" *",
" * @param timeoutMillis Timeout value in milliseconds. 0 means no timeout.",
" */",
" public void setQueryTimeout(long timeoutMillis)",
" {",
" this.timeoutMillis = timeoutMillis;",
" }",
"",
"\tpublic ResultSet execute(LanguageConnectionContext lcc, boolean rollbackParentContext)",
"\t\treturn execute(a, false, false, rollbackParentContext);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t\tResultSet rs = ps.execute(lcc, true, 0L);"
],
"header": "@@ -2056,9 +2056,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
" ps.setQueryTimeout(0L);",
"",
"\t\tResultSet rs = ps.execute(lcc, true);"
]
}
]
}
] |
derby-DERBY-5792-0a8f8408
|
DERBY-5792: Make it possible to turn off encryption on an already encrypted database.
Simplified code removing old container files generated during encryption and
decryption of a database. There were two implementations, I removed one of them
and removed the parameter of EncryptOrDecryptData.removeOldVersionOfContainers
(and calling methods).
Patch file: derby-5792-5b-old_container_removal_cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1394522 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/store/raw/data/DataFactory.java",
"hunks": [
{
"added": [],
"header": "@@ -32,7 +32,6 @@ import org.apache.derby.iapi.store.raw.ContainerHandle;",
"removed": [
"import org.apache.derby.iapi.store.raw.RecordHandle;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [
" dataFactory.removeOldVersionOfContainers();"
],
"header": "@@ -1836,7 +1836,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" dataFactory.removeOldVersionOfContainers(false);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BaseDataFileFactory.java",
"hunks": [
{
"added": [],
"header": "@@ -198,9 +198,6 @@ public class BaseDataFileFactory",
"removed": [
" private EncryptOrDecryptData containerEncrypter;",
"",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/EncryptOrDecryptData.java",
"hunks": [
{
"added": [],
"header": "@@ -63,9 +63,6 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" private StorageFile[] oldFiles;",
" private int noOldFiles = 0;",
""
]
},
{
"added": [],
"header": "@@ -118,8 +115,6 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" oldFiles = new StorageFile[files.length];",
" noOldFiles = 0;"
]
},
{
"added": [
" encryptOrDecryptContainer(t, ckey, doEncrypt);",
" } else {",
" if (SanityManager.DEBUG) {",
" SanityManager.THROWASSERT(",
" (doEncrypt ? \"encryption\" : \"decryption\") +",
" \" process is unable to read container names in seg0\");",
" }"
],
"header": "@@ -141,18 +136,18 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" oldFiles[noOldFiles++] =",
" encryptOrDecryptContainer(t, ckey, doEncrypt);",
" } else",
" {",
" if (SanityManager.DEBUG)",
" SanityManager.THROWASSERT(\"encryption process is unable to\" +",
" \"read container names in seg0\");"
]
},
{
"added": [
" private void encryptOrDecryptContainer(RawTransaction t,",
" ContainerKey ckey,",
" boolean doEncrypt)"
],
"header": "@@ -164,12 +159,11 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" * @return File handle to the old copy of the container.",
" private StorageFile encryptOrDecryptContainer(RawTransaction t,",
" ContainerKey ckey,",
" boolean doEncrypt)"
]
},
{
"added": [],
"header": "@@ -247,8 +241,6 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
"",
" return oldFile ;"
]
},
{
"added": [
" private boolean isOldContainerFile(String fileName) {",
" // Old versions start with prefix \"o\" and ends with \".dat\".",
" return (fileName.startsWith(\"o\") && fileName.endsWith(\".dat\"));"
],
"header": "@@ -275,14 +267,9 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" private boolean isOldContainerFile(String fileName)",
" {",
" // all old versions of the conatainer files",
" // start with prefix \"o\" and ends with \".dat\"",
" if (fileName.startsWith(\"o\") && fileName.endsWith(\".dat\"))",
" return true;",
" else",
" return false;"
]
},
{
"added": [
" /**",
" * Removes old versions of the containers after a cryptographic operation",
" * on the database.",
" public void removeOldVersionOfContainers()",
" throws StandardException {",
" // Find the old version of the container files and delete them.",
" String[] files = dataFactory.getContainerNames();",
" if (files != null) {",
" // Loop through all the files in seg0 and",
" // delete all old copies of the containers.",
" for (int i = files.length-1; i >= 0 ; i--) {",
" if (isOldContainerFile(files[i])) {",
" StorageFile oldFile = getFile(files[i]);",
" if (!privDelete(oldFile)) {",
" throw StandardException.newException(",
" SQLState.FILE_CANNOT_REMOVE_FILE,",
" oldFile);"
],
"header": "@@ -353,60 +340,30 @@ public class EncryptOrDecryptData implements PrivilegedAction {",
"removed": [
" /*",
" * Remove all the old version (encrypted with old key or",
" * un-encrypted) of the containers stored in the data directory .",
" *",
" * @param inRecovery <code> true </code>, if cleanup is",
" * happening during recovery.",
" * @exception StandardException Standard Derby Error Policy",
" public void removeOldVersionOfContainers(boolean inRecovery)",
" throws StandardException",
" {",
"",
" if (inRecovery)",
" {",
" // find the old version of the container files",
" // and delete them",
" String[] files = dataFactory.getContainerNames();",
" if (files != null)",
" {",
" // loop through all the files in seg0 and",
" // delete all old copies of the containers.",
" for (int i = files.length-1; i >= 0 ; i--)",
" {",
" // if it is a old version of the container file",
" // delete it.",
" if (isOldContainerFile(files[i]))",
" {",
" StorageFile oldFile = getFile(files[i]);",
" if (!privDelete(oldFile))",
" {",
" throw StandardException.newException(",
" SQLState.FILE_CANNOT_REMOVE_FILE,",
" oldFile);",
" }",
" }else",
" {",
" // delete all the old version of the containers.",
" for (int i = 0 ; i < noOldFiles ; i++)",
" {",
" if (!privDelete(oldFiles[i]))",
" {",
" throw StandardException.newException(",
" SQLState.FILE_CANNOT_REMOVE_FILE,",
" oldFiles[i]);",
" }",
" }",
"",
""
]
}
]
}
] |
derby-DERBY-5792-86ebb44a
|
DERBY-5792: Make it possible to turn off encryption on an already encrypted database.
Added basic tests for database decryption, verifying that:
o core functionality works
o nothing is decrypted if the database is already booted
o conflicting attributes are detected
The test has not yet been enabled as part of the store suite.
Patch file: derby-5792-2a-decryptdatabasetest.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1392856 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5792-89a66256
|
DERBY-5792: Make it possible to turn off encryption on an already encrypted database.
A set of changes made in preparation for adding database decryption support.
These modifications should not cause functional changes.
Description:
o DataFactory.setDatabaseEncrypted: introduced boolean flag to be able
to tell the data factory that encryption has been turned off.
Updated implementing method in BaseDataFileFactory
o setDatabaseEncrypted: introduced second boolean flag to be able to
tell the log factory that encryption has been turned off.
Updated implementing methods in LogToFile and ReadOnly.
o RawContainerHandle.encryptContainer: renamed to
encryptOrDecryptContainer, added boolean flag to control crypto
operation.
Updated implementing method in BaseContainerHandle
o BaseContainer.encryptContainer: renamed to encryptOrDecryptContainer,
added boolean flag to control crypto operation.
Updated implementing methods in RAFContainer and InputStreamContainer
o EncryptData: renamed to EncryptOrDecryptData, added method
decryptAllContainers, whitespace changes.
o RawStore:
- removed import
- removed instance variable encryptDatabase
- removed unused instance variable dataDirectory
- renamed databaseEncrypted to isEncryptedDatabase
- renamed configureDatabaseForEncryption to applyBulkCryptoOperation
- made setupEncryptionEngines return a boolean: whether or not
existing data must be transformed (applyBulkCryptoOperation)
- simplified parts of the logic in setupEncryptionEngines
- introduced isTrue/isSet for property sets
- removed unused method privList(File)
Patch file: derby-5792-1b-boilerplate_and_preparation.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1390712 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/store/raw/log/LogFactory.java",
"hunks": [
{
"added": [],
"header": "@@ -28,12 +28,10 @@ import org.apache.derby.iapi.store.raw.data.DataFactory;",
"removed": [
"import org.apache.derby.iapi.store.raw.ScannedTransactionHandle;",
"import org.apache.derby.catalog.UUID;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [],
"header": "@@ -79,7 +79,6 @@ import java.net.MalformedURLException;",
"removed": [
"import java.lang.SecurityException;"
]
},
{
"added": [
"\tprivate boolean isEncryptedDatabase;"
],
"header": "@@ -108,8 +107,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\tprivate boolean databaseEncrypted;",
" private boolean encryptDatabase;"
]
},
{
"added": [],
"header": "@@ -120,8 +118,6 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\tString dataDirectory; \t\t\t\t\t// where files are stored\t",
""
]
},
{
"added": [
" boolean transformExistingData = false;"
],
"header": "@@ -171,6 +167,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": []
},
{
"added": [],
"header": "@@ -178,7 +175,6 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\t\tdataDirectory = properties.getProperty(PersistentService.ROOT);"
]
},
{
"added": [
" if (create) {",
" transformExistingData = setupEncryptionEngines(create, properties);",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(!transformExistingData,",
" \"no crypto data transformation for a new db\");",
" }",
" }"
],
"header": "@@ -206,8 +202,13 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" if (create) ",
" setupEncryptionEngines(create, properties);"
]
},
{
"added": [
" transformExistingData = setupEncryptionEngines(create, properties);",
" if (isEncryptedDatabase) {",
" logFactory.setDatabaseEncrypted(true, false);",
" dataFactory.setDatabaseEncrypted(true);"
],
"header": "@@ -299,14 +300,14 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" setupEncryptionEngines(create, properties);",
" if (databaseEncrypted) {",
" logFactory.setDatabaseEncrypted(false);",
" dataFactory.setDatabaseEncrypted();"
]
},
{
"added": [
" // If user requested to encrypt an un-encrypted database or encrypt with",
" // a new alogorithm then do that now.",
" if (transformExistingData) {",
" applyBulkCryptoOperation(properties, newCipherFactory);",
" if (isEncryptedDatabase) {",
" }"
],
"header": "@@ -333,23 +334,22 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" // if user requested to encrpty an unecrypted database or encrypt with",
" // new alogorithm then do that now. ",
" if (encryptDatabase) {",
" configureDatabaseForEncryption(properties, ",
" newCipherFactory);",
"",
"\t\t\tif (databaseEncrypted)"
]
},
{
"added": [
" * Setup encryption engines according to the user properties and the",
" * current database state.",
" *",
" * @param create whether a new database is being created, or if this is",
" * an existing database",
" * @param properties database properties, including connection attributes",
" * @return {@code true} if the existing data in the database should be",
" * transformed by applying a cryptographic operation.",
" * @throws StandardException if the properties are conflicting, if the",
" * requested configuration is denied, or if something else goes wrong",
" private boolean setupEncryptionEngines(boolean create,",
" Properties properties)",
" // Check if user has requested to encrypt the database.",
" boolean encryptDatabase = isTrue(properties, Attribute.DATA_ENCRYPTION);"
],
"header": "@@ -1260,18 +1260,23 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" * Setup Encryption Engines.",
" private void setupEncryptionEngines(boolean create, Properties properties)",
" // Check if user has requested to encrypt the database or if the",
" // database is encrypted already.",
"",
" String dataEncryption =",
" properties.getProperty(Attribute.DATA_ENCRYPTION);",
" databaseEncrypted = Boolean.valueOf(dataEncryption).booleanValue();",
""
]
},
{
"added": [
" isEncryptedDatabase =",
" isTrue(serviceprops, Attribute.DATA_ENCRYPTION);",
"",
" if (isEncryptedDatabase) {",
" // Check if the user has requested to re-encrypt an",
" reEncrypt = isSet(properties, Attribute.NEW_BOOT_PASSWORD) ||",
" isSet(properties, Attribute.NEW_CRYPTO_EXTERNAL_KEY);",
" encryptDatabase = reEncrypt;"
],
"header": "@@ -1289,29 +1294,15 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" dataEncryption = serviceprops.getProperty(Attribute.DATA_ENCRYPTION);",
" boolean encryptedDatabase = Boolean.valueOf(dataEncryption).booleanValue();",
"",
" if (!encryptedDatabase && databaseEncrypted) {",
" // It it not an encrypted database, user is asking to",
" // encrypt an un-encrypted database.",
" encryptDatabase = true;",
" // Set database as un-encrypted, we will set it as encrypted",
" // after encrypting the existing data.",
" databaseEncrypted = false;",
" } else {",
" // Check if the user has requested to re-necrypt an",
" if (encryptedDatabase) {",
" if (properties.getProperty(",
" Attribute.NEW_BOOT_PASSWORD) != null) {",
" reEncrypt = true;",
" } else if (properties.getProperty(",
" Attribute.NEW_CRYPTO_EXTERNAL_KEY) != null){",
" reEncrypt = true;",
" }",
" encryptDatabase = reEncrypt;",
" }"
]
},
{
"added": [
" if (isEncryptedDatabase || encryptDatabase) {"
],
"header": "@@ -1333,7 +1324,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" if (databaseEncrypted || encryptDatabase) {"
]
},
{
"added": [
" if (isSet(properties, RawStoreFactory.ENCRYPTION_BLOCKSIZE)) {"
],
"header": "@@ -1392,7 +1383,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" if (properties.getProperty(RawStoreFactory.ENCRYPTION_BLOCKSIZE) != null) {"
]
},
{
"added": [
" // Create new cipher factory with the new encryption"
],
"header": "@@ -1409,7 +1400,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" // Create new cipher factory with the new encrytpion"
]
},
{
"added": [
" isEncryptedDatabase = true;",
" return (!create && encryptDatabase);"
],
"header": "@@ -1433,8 +1424,10 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": []
},
{
"added": [
" if ((encryptionEngine == null && newEncryptionEngine == null)) {"
],
"header": "@@ -1449,9 +1442,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\t\tif ((databaseEncrypted == false && encryptDatabase == false) || ",
" (encryptionEngine == null && newEncryptionEngine == null))",
" {"
]
},
{
"added": [
"\t\tif (isEncryptedDatabase == false || decryptionEngine == null) {"
],
"header": "@@ -1478,8 +1469,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\t\tif (databaseEncrypted == false || decryptionEngine == null)",
" {"
]
},
{
"added": [
"\t\treturn isEncryptedDatabase ? random.nextInt() : 0;"
],
"header": "@@ -1501,7 +1491,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\t\treturn databaseEncrypted ? random.nextInt() : 0;"
]
},
{
"added": [
"\t\tif (!isEncryptedDatabase)"
],
"header": "@@ -1510,7 +1500,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
"\t\tif (!databaseEncrypted)"
]
},
{
"added": [
" private void applyBulkCryptoOperation(Properties properties,",
" CipherFactory newCipherFactory)",
" boolean reEncrypt = isEncryptedDatabase;",
" boolean externalKeyEncryption =",
" isSet(properties, Attribute.CRYPTO_EXTERNAL_KEY);"
],
"header": "@@ -1627,21 +1617,18 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" public void configureDatabaseForEncryption(Properties properties,",
" CipherFactory newCipherFactory) ",
" boolean reEncrypt = (databaseEncrypted && encryptDatabase);",
" boolean externalKeyEncryption = false;",
" if (properties.getProperty(Attribute.CRYPTO_EXTERNAL_KEY) != null)",
" {",
" externalKeyEncryption = true;",
" }"
]
},
{
"added": [
" logFactory.setDatabaseEncrypted(true, true);",
" isEncryptedDatabase = true;",
" dataFactory.setDatabaseEncrypted(true);"
],
"header": "@@ -1679,23 +1666,20 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" ",
"",
" encryptDatabase = false;",
" logFactory.setDatabaseEncrypted(true);",
" databaseEncrypted = true;",
" dataFactory.setDatabaseEncrypted();"
]
},
{
"added": [],
"header": "@@ -2563,22 +2547,6 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" private synchronized String[] privList(final File file)",
" {",
" actionCode = REGULAR_FILE_LIST_DIRECTORY_ACTION;",
" actionRegularFile = file;",
"",
" try",
" {",
" return (String[]) AccessController.doPrivileged( this);",
" }",
" catch( PrivilegedActionException pae) { return null;} // does not throw an exception",
" finally",
" {",
" actionRegularFile = null;",
" }",
" }",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BaseDataFileFactory.java",
"hunks": [
{
"added": [
" private EncryptOrDecryptData containerEncrypter;"
],
"header": "@@ -198,7 +198,7 @@ public class BaseDataFileFactory",
"removed": [
" private EncryptData containerEncrypter;"
]
},
{
"added": [
" /** {@inheritDoc} */",
" public void setDatabaseEncrypted(boolean isEncrypted)",
" databaseEncrypted = isEncrypted;"
],
"header": "@@ -2066,9 +2066,10 @@ public class BaseDataFileFactory",
"removed": [
" public void setDatabaseEncrypted()",
"\t\tdatabaseEncrypted = true;"
]
},
{
"added": [
" containerEncrypter = new EncryptOrDecryptData(this);"
],
"header": "@@ -2102,7 +2103,7 @@ public class BaseDataFileFactory",
"removed": [
" containerEncrypter = new EncryptData(this);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/EncryptOrDecryptData.java",
"hunks": [
{
"added": [
" Derby - Class org.apache.derby.impl.store.raw.data.EncryptOrDecryptData"
],
"header": "@@ -1,6 +1,6 @@",
"removed": [
" Derby - Class org.apache.derby.impl.store.raw.data.EncryptData"
]
},
{
"added": [
""
],
"header": "@@ -20,15 +20,13 @@",
"removed": [
"import org.apache.derby.iapi.services.context.ContextManager;",
"import org.apache.derby.iapi.services.daemon.Serviceable;",
"import org.apache.derby.iapi.store.raw.Transaction;"
]
},
{
"added": [
" * This class is used to encrypt all the containers in the data segment with a",
" * new encryption key when password/key is changed or when an existing database",
" * is reconfigured for encryption.",
" *",
" * Encryption of existing data in the data segments is done by doing the",
" * 1.Write a log record to indicate that the container is getting encrypted.",
" * encrypt each page with new encryption key and then write to a",
" * 3. Rename the current container file (c<cid>.dat) to",
" * 4. Rename the new encrypted version of the file (n<cid).dat) to be",
" * 5. All the old version of the container (o<cid>.dat) files are removed",
"public class EncryptOrDecryptData implements PrivilegedAction {",
" private int noOldFiles = 0;"
],
"header": "@@ -41,33 +39,32 @@ import java.security.PrivilegedAction;",
"removed": [
" * This class is used to encrypt all the containers in the data segment with a ",
" * new encryption key when password/key is changed or when an existing database ",
" * is reconfigured for encryption. ",
" * ",
" * Encryption of existing data in the data segments is done by doing the ",
" * 1.Write a log record to indicate that the container is getting encrypted. ",
" * encrypt each page with new encryption key and then write to a ",
" * 3.\tRename the current container file (c<cid>.dat) to ",
" * 4.\tRename the new encrypted version of the file (n<cid).dat) to be ",
" * 5.\tAll the old version of the container (o<cid>.dat) files are removed",
" * ",
"public class EncryptData implements PrivilegedAction {",
" private int noOldFiles = 0; "
]
},
{
"added": [
" public EncryptOrDecryptData(BaseDataFileFactory dataFactory) {",
" this.dataFactory = dataFactory;",
" }",
" /**",
" * Finds all the all the containers stored in the data directory and",
" * decrypts them.",
" *",
" * @param t the transaction that is used for the decryption operation",
" * @throws StandardException Standard Derby error policy",
" */",
" public void decryptAllContainers(RawTransaction t)",
" throws StandardException {",
" encryptOrDecryptAllContainers(t, false);",
" }",
" /**",
" * Find all the all the containers stored in the data directory and",
" *",
" * @param t the transaction that is used for the encryption operation",
" */",
" public void encryptAllContainers(RawTransaction t)",
" throws StandardException {",
" encryptOrDecryptAllContainers(t, true);",
" }",
" /**",
" * Encrypts or decrypts all containers in the database data directory.",
" *",
" * @param t transaction used for the cryptographic operation",
" * @param doEncrypt tells whether to encrypt or decrypt",
" * @exception StandardException Standard Derby error policy",
" */",
" private void encryptOrDecryptAllContainers(RawTransaction t,",
" boolean doEncrypt)",
" throws StandardException {",
" // List of containers that need to be transformed are identified by",
" // simply reading the list of files in seg0.",
" String[] files = dataFactory.getContainerNames();",
" if (files != null) {",
" long segmentId = 0;",
"",
" // Loop through all the files in seg0 and",
" // encrypt/decrypt all valid containers.",
" for (int f = files.length-1; f >= 0 ; f--) {",
" long containerId;",
" try {",
" containerId =",
" Long.parseLong(files[f].substring(1,",
" }",
" catch (Throwable th)",
" {",
" // ignore errors from parse, it just means",
" // that someone put a file in seg0 that we",
" continue;",
" }",
" ContainerKey ckey = new ContainerKey(segmentId,",
" oldFiles[noOldFiles++] =",
" encryptOrDecryptContainer(t, ckey, doEncrypt);",
" }",
" // is completed.",
" } else",
" {",
" if (SanityManager.DEBUG)",
" SanityManager.THROWASSERT(\"encryption process is unable to\" +",
" }",
" /**",
" * Encrypts or decrypts the specified container.",
" *",
" * @param t transaction that used to perform the cryptographic operation",
" * @param ckey the key of the container that is being encrypted/decrypted",
" * @param doEncrypt tells whether to encrypt or decrypt",
" * @return File handle to the old copy of the container.",
" private StorageFile encryptOrDecryptContainer(RawTransaction t,",
" ContainerKey ckey,",
" boolean doEncrypt)",
" {",
" LockingPolicy cl =",
" TransactionController.ISOLATION_SERIALIZABLE,",
""
],
"header": "@@ -78,86 +75,110 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
"\tpublic EncryptData(BaseDataFileFactory dataFactory) {",
"\t\tthis.dataFactory = dataFactory;",
"\t}",
" /*",
" * Find all the all the containers stored in the data directory and ",
" * @param t the transaction that is used to configure the database ",
" * with new encryption properties.",
"\t */",
"\tpublic void encryptAllContainers(RawTransaction t) ",
" throws StandardException {",
"",
" /*",
"\t\t * List of containers that needs to be encrypted are identified by ",
"\t\t * simply reading the list of files in seg0. ",
"\t\t */",
"\t\tString[] files = dataFactory.getContainerNames();",
"\t\tif (files != null) {",
"\t\t\tlong segmentId = 0;",
"",
" // loop through all the files in seg0 and ",
" // encrypt all valid containers.",
"\t\t\tfor (int f = files.length-1; f >= 0 ; f--) {",
"\t\t\t\tlong containerId;",
"\t\t\t\ttry\t{",
"\t\t\t\t\tcontainerId = ",
"\t\t\t\t\t\tLong.parseLong(files[f].substring(1, ",
"\t\t\t\t}",
"\t\t\t\tcatch (Throwable th)",
"\t\t\t\t{",
" // ignore errors from parse, it just means ",
" // that someone put a file in seg0 that we ",
"\t\t\t\t\tcontinue;",
"\t\t\t\t}",
"\t\t\t\tContainerKey ckey = new ContainerKey(segmentId, ",
" oldFiles[noOldFiles++] = encryptContainer(t, ckey);",
"\t\t\t}",
" // is completed. ",
"\t\t} else",
"\t\t{",
"\t\t\tif (SanityManager.DEBUG) ",
"\t\t\t\tSanityManager.THROWASSERT(\"encryption process is unable to\" +",
"\t\t}",
"\t/** Encrypt a container.",
" * @param t the transaction that is used to configure the database ",
" * with new encryption properties.",
" * @param ckey the key of the container that is being encrypted.",
" * @return file handle to the old copy of the container.",
"\tprivate StorageFile encryptContainer(RawTransaction t, ",
" ContainerKey ckey)",
"\t{",
" LockingPolicy cl = ",
" TransactionController.ISOLATION_SERIALIZABLE, ",
"\t\t"
]
},
{
"added": [
" EncryptContainerOperation lop =",
"",
" // the encrypted container is created & synced and the",
" // log record for it makes it to disk. if we fail during",
" // encryption of the container, log record will make sure",
" // container is restored to the original state and",
" // any temporary files are cleaned up.",
" containerHdl.encryptOrDecryptContainer(newFilePath, doEncrypt);",
"",
" * keeping a copy of the current container file, it will be removed on",
" * after a checkpoint with new key or on a rollback this copy will be",
" * replace the container file to bring the database back to the",
" * state before encryption process started.",
" // discard pages in the cache related to this container.",
" SanityManager.THROWASSERT(\"unable to discard pages releated to \" +",
" \"container \" + ckey +"
],
"header": "@@ -167,38 +188,38 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" EncryptContainerOperation lop = ",
" ",
" // the encrypted container is created & synced and the ",
" // log record for it makes it to disk. if we fail during ",
" // encryption of the container, log record will make sure ",
" // container is restored to the original state and ",
" // any temporary files are cleaned up. ",
" containerHdl.encryptContainer(newFilePath);",
" ",
" * keeping a copy of the current container file, it will be removed on ",
" * after a checkpoint with new key or on a rollback this copy will be ",
" * replace the container file to bring the database back to the ",
" * state before encryption process started. ",
" // discard pages in the cache related to this container. ",
" SanityManager.THROWASSERT(\"unable to discard pages releated to \" + ",
" \"container \" + ckey + "
]
},
{
"added": [
" SanityManager.THROWASSERT(\"unable to discard a container \" +"
],
"header": "@@ -206,7 +227,7 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" SanityManager.THROWASSERT(\"unable to discard a container \" + "
]
},
{
"added": [
" // now replace current container file with the new file.",
"",
"",
" * Get file handle to a container file that is used to keep",
" * temporary versions of the container file.",
" return storageFactory.newStorageFile(getFilePath(containerId,",
" * the container file."
],
"header": "@@ -219,30 +240,30 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" // now replace current container file with the new file. ",
" ",
" ",
" * Get file handle to a container file that is used to keep ",
" * temporary versions of the container file. ",
" return storageFactory.newStorageFile(getFilePath(containerId, ",
" * the container file. "
]
},
{
"added": [
" private boolean isOldContainerFile(String fileName)"
],
"header": "@@ -254,7 +275,7 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" private boolean isOldContainerFile(String fileName) "
]
},
{
"added": [
" private StorageFile getFile(String ctrFileName)"
],
"header": "@@ -264,7 +285,7 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" private StorageFile getFile(String ctrFileName) "
]
},
{
"added": [
" /* Restore the contaier to the state it was before",
" * it was encrypted with new encryption key. This function is",
" * called during undo of the EncryptContainerOperation log record",
" void restoreContainer(ContainerKey containerId)",
" throws StandardException",
" // this will make sure there are no file opens on the current",
" // container file.",
"",
" \"unable to discard container from cache:\" +",
" StorageFile currentFile = dataFactory.getContainerPath(containerId,",
"",
" // if backup of the original container file exists, replace the"
],
"header": "@@ -274,35 +295,35 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" /* Restore the contaier to the state it was before ",
" * it was encrypted with new encryption key. This function is ",
" * called during undo of the EncryptContainerOperation log record ",
" void restoreContainer(ContainerKey containerId) ",
" throws StandardException ",
" // this will make sure there are no file opens on the current ",
" // container file. ",
" ",
" \"unable to discard container from cache:\" + ",
" StorageFile currentFile = dataFactory.getContainerPath(containerId, ",
" ",
" // if backup of the original container file exists, replace the "
]
},
{
"added": [
" SQLState.UNABLE_TO_DELETE_FILE,",
" * Remove all the old version (encrypted with old key or",
" * @param inRecovery <code> true </code>, if cleanup is",
" public void removeOldVersionOfContainers(boolean inRecovery)",
"",
" if (inRecovery)",
" if (files != null)",
" // loop through all the files in seg0 and",
" for (int i = files.length-1; i >= 0 ; i--)",
" // delete it.",
" if (!privDelete(oldFile))"
],
"header": "@@ -326,41 +347,41 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" SQLState.UNABLE_TO_DELETE_FILE, ",
" * Remove all the old version (encrypted with old key or ",
" * @param inRecovery <code> true </code>, if cleanup is ",
" public void removeOldVersionOfContainers(boolean inRecovery) ",
" ",
" if (inRecovery) ",
" if (files != null) ",
" // loop through all the files in seg0 and ",
" for (int i = files.length-1; i >= 0 ; i--) ",
" // delete it. ",
" if (!privDelete(oldFile)) "
]
},
{
"added": [
" }else",
" // delete all the old version of the containers.",
" for (int i = 0 ; i < noOldFiles ; i++)",
" if (!privDelete(oldFiles[i]))",
" SQLState.FILE_CANNOT_REMOVE_FILE,"
],
"header": "@@ -369,15 +390,15 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" }else ",
" // delete all the old version of the containers. ",
" for (int i = 0 ; i < noOldFiles ; i++) ",
" if (!privDelete(oldFiles[i])) ",
" SQLState.FILE_CANNOT_REMOVE_FILE, "
]
},
{
"added": [
""
],
"header": "@@ -385,7 +406,7 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" "
]
},
{
"added": [
""
],
"header": "@@ -396,7 +417,7 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" "
]
},
{
"added": [
"",
" private synchronized boolean privRename(StorageFile fromFile,"
],
"header": "@@ -404,10 +425,10 @@ public class EncryptData implements PrivilegedAction {",
"removed": [
" ",
" private synchronized boolean privRename(StorageFile fromFile, "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/ReadOnly.java",
"hunks": [
{
"added": [
" /**",
" * Sets whether the database is encrypted.",
" * <p>",
" * Read-only database can not be re-encrypted, nothing to do in this case.",
" public void setDatabaseEncrypted(boolean isEncrypted, boolean flushLog)"
],
"header": "@@ -369,11 +369,12 @@ public class ReadOnly implements LogFactory, ModuleSupportable {",
"removed": [
" /*",
" * Set that the database is encrypted. Read-only database can not ",
" * be reencrypted, nothing to do in this case. ",
" public void setDatabaseEncrypted(boolean flushLog)"
]
}
]
}
] |
derby-DERBY-5796-4d04d503
|
DERBY-5796: Remove unused methods in client.am.DateTime
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1346320 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/DateTime.java",
"hunks": [
{
"added": [],
"header": "@@ -20,7 +20,6 @@",
"removed": [
"import org.apache.derby.shared.common.i18n.MessageUtil;"
]
},
{
"added": [],
"header": "@@ -645,92 +644,6 @@ public class DateTime {",
"removed": [
" /**",
" * java.sql.Timestamp is converted to character representation that is in JDBC date escape ",
" * format: <code>yyyy-mm-dd</code>, which is the same as JIS date format in DERBY string representation of a date.",
" * and then converted to bytes using UTF8 encoding.",
" * @param buffer ",
" * @param offset write into the buffer from this offset ",
" * @param timestamp timestamp value",
" * @return DateTime.dateRepresentationLength. This is the fixed length ",
" * in bytes, that is taken to represent the timestamp value as a date.",
" * @throws SqlException",
" * @throws UnsupportedEncodingException",
" */",
" public static final int timestampToDateBytes(byte[] buffer,",
" int offset,",
" java.sql.Timestamp timestamp)",
" throws SqlException,UnsupportedEncodingException {",
" int year = timestamp.getYear() + 1900;",
" if (year > 9999) {",
" throw new SqlException(null,",
" new ClientMessageId(SQLState.YEAR_EXCEEDS_MAXIMUM),",
" new Integer(year), \"9999\");",
" }",
" int month = timestamp.getMonth() + 1;",
" int day = timestamp.getDate();",
"",
" char[] dateChars = new char[DateTime.dateRepresentationLength];",
" int zeroBase = (int) '0';",
" dateChars[0] = (char) (year / 1000 + zeroBase);",
" dateChars[1] = (char) ((year % 1000) / 100 + zeroBase);",
" dateChars[2] = (char) ((year % 100) / 10 + zeroBase);",
" dateChars[3] = (char) (year % 10 + +zeroBase);",
" dateChars[4] = '-';",
" dateChars[5] = (char) (month / 10 + zeroBase);",
" dateChars[6] = (char) (month % 10 + zeroBase);",
" dateChars[7] = '-';",
" dateChars[8] = (char) (day / 10 + zeroBase);",
" dateChars[9] = (char) (day % 10 + zeroBase);",
" // Network server expects to read the date parameter value bytes with",
" // UTF-8 encoding. Reference - DERBY-1127",
" // see DRDAConnThread.readAndSetParams",
" byte[] dateBytes = (new String(dateChars)).getBytes(Typdef.UTF8ENCODING);",
" System.arraycopy(dateBytes, 0, buffer, offset, DateTime.dateRepresentationLength);",
"",
" return DateTime.dateRepresentationLength;",
" }",
"",
" /**",
" * java.sql.Timestamp is converted to character representation in JDBC time escape format:",
" * <code>hh:mm:ss</code>, which is the same as",
" * JIS time format in DERBY string representation of a time. The char representation is ",
" * then converted to bytes using UTF8 encoding and written out into the buffer",
" * @param buffer",
" * @param offset write into the buffer from this offset ",
" * @param timestamp timestamp value",
" * @return DateTime.timeRepresentationLength. This is the fixed length ",
" * in bytes taken to represent the timestamp value as Time.",
" * @throws UnsupportedEncodingException",
" */",
" public static final int timestampToTimeBytes(byte[] buffer,",
" int offset,",
" java.sql.Timestamp timestamp)",
" throws UnsupportedEncodingException {",
" int hour = timestamp.getHours();",
" int minute = timestamp.getMinutes();",
" int second = timestamp.getSeconds();",
"",
" char[] timeChars = new char[DateTime.timeRepresentationLength];",
" int zeroBase = (int) '0';",
" timeChars[0] = (char) (hour / 10 + zeroBase);",
" timeChars[1] = (char) (hour % 10 + +zeroBase);",
" timeChars[2] = ':';",
" timeChars[3] = (char) (minute / 10 + zeroBase);",
" timeChars[4] = (char) (minute % 10 + zeroBase);",
" timeChars[5] = ':';",
" timeChars[6] = (char) (second / 10 + zeroBase);",
" timeChars[7] = (char) (second % 10 + zeroBase);",
" ",
" // Network server expects to read the time parameter value bytes with",
" // UTF-8 encoding. Reference - DERBY-1127",
" // see DRDAConnThread.readAndSetParams ",
" byte[] timeBytes = (new String(timeChars)).getBytes(Typdef.UTF8ENCODING);",
" System.arraycopy(timeBytes, 0, buffer, offset, DateTime.timeRepresentationLength);",
"",
" return DateTime.timeRepresentationLength;",
" }",
""
]
}
]
}
] |
derby-DERBY-5797-afe4dfdf
|
DERBY-5797: AssertionFailedError in functionTests.tests.lang.UpdateStatisticsTest.testDisposableStatsEagerness
Make the test sleep for at least one tick of the system timer to ensure the
comparison of statistics creation timestamps are valid in the normal case
(i.e. when there is no bug).
Added two utility methods to BaseTestCase:
o sleep(long ms)
o sleepAtLeastOneTick()
Removed two existing sleep-methods in test classes (note that the one taking
numbers of seconds as argument was unused).
Patch file: derby-5797-1a-sleep_a_tick.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1347888 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5801-b9e0e745
|
DERBY-5801: Sub-processes should write EMMA coverage data to separate files
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1347667 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5802-77ba1811
|
DERBY-5802: Remove unused class ExecProcUtil
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1347885 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/ExecProcUtil.java",
"hunks": [
{
"added": [],
"header": "@@ -1,107 +0,0 @@",
"removed": [
"/*",
" ",
" Derby - Class org.apache.derbyTesting.functionTests.util.ExecProcUtil",
" ",
" Licensed to the Apache Software Foundation (ASF) under one or more",
" contributor license agreements. See the NOTICE file distributed with",
" this work for additional information regarding copyright ownership.",
" The ASF licenses this file to You under the Apache License, Version 2.0",
" (the \"License\"); you may not use this file except in compliance with",
" the License. You may obtain a copy of the License at",
" ",
" http://www.apache.org/licenses/LICENSE-2.0",
" ",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
" ",
" */",
"package org.apache.derbyTesting.functionTests.util;",
"",
"import org.apache.derbyTesting.functionTests.harness.ProcessStreamResult;",
"import org.apache.derbyTesting.functionTests.harness.TimedProcess;",
"import java.util.Vector;",
"import java.io.BufferedOutputStream;",
"/**",
" * Utility class to hold helper methods to exec new processes",
" */",
"public class ExecProcUtil {",
" ",
" /**",
" * For each new exec process done, set ",
" * timeout for ProcessStreamResult after which the thread that ",
" * handles the streams for the process exits. Timeout is in minutes. ",
" * Note: timeout handling will only come into effect when ",
" * ProcessStreamResult#Wait() is called",
" */",
" private static String timeoutMinutes = \"2\";",
" ",
" /**",
" * timeout in seconds for the processes spawned.",
" */",
" private static int timeoutSecondsForProcess = 180;",
" ",
" /**",
" * Execute the given command and dump the results to standard out",
" *",
" * @param args command and arguments",
" * @param vCmd java command line arguments.",
" * @param bos buffered stream (System.out) to dump results to.",
" * @exception Exception",
" */",
" public static void execCmdDumpResults(String[] args, Vector vCmd,",
" BufferedOutputStream bos) throws Exception {",
" // We need the process inputstream and errorstream",
" ProcessStreamResult prout = null;",
" ProcessStreamResult prerr = null;",
"",
" StringBuffer sb = new StringBuffer();",
"",
" for (int i = 0; i < args.length; i++) {",
" sb.append(args[i] + \" \");",
" }",
" System.out.println(sb.toString());",
" int totalSize = vCmd.size() + args.length;",
" String serverCmd[] = new String[totalSize];",
"",
" int i = 0;",
" for (i = 0; i < vCmd.size(); i++)",
" serverCmd[i] = (String) vCmd.elementAt(i);",
"",
" for (int j = 0; i < totalSize; i++)",
" serverCmd[i] = args[j++];",
"",
" System.out.flush();",
" bos.flush();",
"",
" // Start a process to run the command",
" Process pr = Runtime.getRuntime().exec(serverCmd);",
"",
" // TimedProcess, kill process if process doesnt finish in a certain ",
" // amount of time",
" TimedProcess tp = new TimedProcess(pr);",
" prout = new ProcessStreamResult(pr.getInputStream(), bos,",
" timeoutMinutes);",
" prerr = new ProcessStreamResult(pr.getErrorStream(), bos,",
" timeoutMinutes);",
"",
" // wait until all the results have been processed",
" boolean outTimedOut = prout.Wait();",
" boolean errTimedOut = prerr.Wait();",
" ",
" // wait for this process to terminate, upto a wait period",
" // of 'timeoutSecondsForProcess'",
" // if process has already been terminated, this call will ",
" // return immediately.",
" tp.waitFor(timeoutSecondsForProcess);",
" pr = null;",
" ",
" if (outTimedOut || errTimedOut)",
" System.out.println(\" Reading from process streams timed out.. \");",
"",
" System.out.flush();",
" }",
" ",
"}"
]
}
]
}
] |
derby-DERBY-5803-8ab3fa6d
|
DERBY-5803: Make error handling in xaHelper more explicit
Make error handling method return an execption and raise it explicitly.
Note that the method may still throw runtime exceptions.
Patch file: derby-5803-1a-explict_throw.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1348818 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/impl/tools/ij/xaHelper.java",
"hunks": [
{
"added": [],
"header": "@@ -30,14 +30,12 @@ import java.util.Locale;",
"removed": [
"import javax.transaction.xa.XAResource;",
"import org.apache.derby.iapi.services.info.JVMInfo;"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -166,7 +164,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -208,7 +206,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -228,9 +226,8 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);",
"\t\treturn null;"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -243,7 +240,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -255,7 +252,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -267,7 +264,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -279,7 +276,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -293,7 +290,7 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);"
]
},
{
"added": [
" throw handleException(t);"
],
"header": "@@ -315,9 +312,8 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);",
""
]
},
{
"added": [
" throw handleException(t);",
" /**",
" * Handles the given throwable.",
" * <p>",
" * If possible, an {@code SQLException} is returned. Otherwise the",
" * appropriate actions are taken and a {@code RuntimeException} is thrown.",
" *",
" * @param t exception to handle",
" * @return An {@code SQLException}.",
" * @throws RuntimeException if the throwable isn't an {@code SQLException}",
" */",
"\tprivate SQLException handleException(Throwable t)",
" return (SQLException)t;"
],
"header": "@@ -328,16 +324,26 @@ class xaHelper implements xaAbstractHelper",
"removed": [
"\t\t\thandleException(t);",
"\tprivate void handleException(Throwable t) throws SQLException",
"\t\t\tthrow (SQLException)t;"
]
}
]
}
] |
derby-DERBY-5806-4bdaec20
|
DERBY-5806: Fix parsing of empty string in DRDAConnThread. Set ClientStatement.sqlMode_ correctly for empty statements.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1537874 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ClientStatement.java",
"hunks": [
{
"added": [
" sqlMode_ = executeType==executeQueryMethod__?isQuery__:isUpdate__;"
],
"header": "@@ -2259,6 +2259,7 @@ public class ClientStatement implements Statement, StatementCallbackInterface{",
"removed": []
}
]
}
] |
derby-DERBY-5808-e437bc34
|
DERBY-5808: Compatibility test should use BaseTestCase.execJavaCmd()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1350133 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5810-99028754
|
DERBY-5810: Include emma.jar on classpath when running compatibility test with instrumented jars
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1350134 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5814-afe6225b
|
DERBY-5814 Source cleanup in catalogs "impl.sql.catalog" and "impl.sql.compile"
Patch catalog-compile-cleaning-2, which:
a) removed unused private methods
b) removed unused local variables and members
c) cleaned up imports
d) remove unused formal arguments
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1350289 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DropDependencyFilter.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.monitor.Monitor;",
"import org.apache.derby.iapi.sql.execute.ExecRow;",
"import org.apache.derby.iapi.sql.execute.TupleFilter;",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.DataValueFactory;",
"import org.apache.derby.iapi.types.SQLBoolean;"
],
"header": "@@ -21,28 +21,17 @@",
"removed": [
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"",
"import org.apache.derby.iapi.types.BooleanDataValue;",
"import org.apache.derby.iapi.types.SQLBoolean;",
"",
"import org.apache.derby.iapi.sql.execute.ExecutionFactory;",
"",
"import org.apache.derby.iapi.services.context.ContextService;",
"import org.apache.derby.iapi.types.DataValueFactory;",
"",
"import org.apache.derby.iapi.services.monitor.Monitor;",
"",
"import org.apache.derby.iapi.sql.execute.TupleFilter;",
"import org.apache.derby.iapi.sql.execute.ExecRow;",
"import org.apache.derby.iapi.error.StandardException;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SPSNameCacheable.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.cache.Cacheable;",
"import org.apache.derby.iapi.sql.dictionary.SPSDescriptor;"
],
"header": "@@ -21,17 +21,10 @@",
"removed": [
"import org.apache.derby.iapi.services.cache.Cacheable;",
"import org.apache.derby.iapi.services.cache.CacheManager;",
"",
"import org.apache.derby.iapi.services.stream.HeaderPrintWriter;",
"",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;",
"import org.apache.derby.iapi.sql.dictionary.SPSDescriptor;",
"",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SequenceGenerator.java",
"hunks": [
{
"added": [],
"header": "@@ -23,16 +23,6 @@ package org.apache.derby.impl.sql.catalog;",
"removed": [
"import org.apache.derby.iapi.services.cache.Cacheable;",
"import org.apache.derby.iapi.services.cache.CacheManager;",
"import org.apache.derby.iapi.services.context.ContextManager;",
"import org.apache.derby.iapi.services.context.ContextService;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;",
"import org.apache.derby.iapi.sql.dictionary.SequenceDescriptor;",
"import org.apache.derby.iapi.store.access.TransactionController;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.RowLocation;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/BaseTypeCompiler.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.classfile.VMOpcode;",
"import org.apache.derby.iapi.services.compiler.LocalField;",
"import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"import org.apache.derby.iapi.types.TypeId;"
],
"header": "@@ -21,27 +21,16 @@",
"removed": [
"",
"",
"import org.apache.derby.iapi.error.StandardException;",
"",
"",
"import org.apache.derby.iapi.types.DataValueDescriptor;",
"import org.apache.derby.iapi.types.DataValueFactory;",
"import org.apache.derby.iapi.types.NumberDataValue;",
"import org.apache.derby.iapi.types.SQLInteger;",
"import org.apache.derby.iapi.types.TypeId;",
"",
"",
"import org.apache.derby.iapi.services.compiler.LocalField;",
"import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"",
"import org.apache.derby.iapi.services.classfile.VMOpcode;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CreateAliasNode.java",
"hunks": [
{
"added": [
"import java.util.Vector;",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.reference.Limits;",
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;",
"import org.apache.derby.iapi.sql.execute.ConstantAction;",
"import org.apache.derby.iapi.types.DataTypeDescriptor;",
"import org.apache.derby.iapi.types.TypeId;"
],
"header": "@@ -21,31 +21,21 @@",
"removed": [
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.reference.Limits;",
"",
"import org.apache.derby.iapi.sql.execute.ConstantAction;",
"",
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.DataTypeDescriptor;",
"import org.apache.derby.iapi.types.StringDataValue;",
"",
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;",
"import org.apache.derby.iapi.reference.JDBC30Translation;",
"",
"import org.apache.derby.iapi.error.StandardException;",
"",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import org.apache.derby.catalog.types.TypeDescriptorImpl;",
"",
"import java.util.Vector;"
]
},
{
"added": [],
"header": "@@ -76,7 +66,6 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\tprivate boolean\t\t\t\tdelimitedIdentifier;"
]
},
{
"added": [
" *"
],
"header": "@@ -88,10 +77,7 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t * @param delimitedIdentifier\tWhether or not to treat the class name",
"\t *\t\t\t\t\t\t\t\tas a delimited identifier if trying to",
"\t *\t\t\t\t\t\t\t\tresolve it as a class alias",
"\t *"
]
},
{
"added": [
" Object aliasType)"
],
"header": "@@ -99,8 +85,7 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t\t\t\t\t\tObject aliasType,",
"\t\t\t\t\t\tObject delimitedIdentifier)"
]
},
{
"added": [],
"header": "@@ -122,8 +107,6 @@ public class CreateAliasNode extends DDLStatementNode",
"removed": [
"\t\t\t\tthis.delimitedIdentifier =",
"\t\t\t\t\t\t\t\t((Boolean) delimitedIdentifier).booleanValue();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ExecSPSNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.services.loader.GeneratedClass;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"import org.apache.derby.iapi.sql.ResultDescription;",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;"
],
"header": "@@ -21,34 +21,19 @@",
"removed": [
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import org.apache.derby.iapi.services.loader.GeneratedClass;",
"",
"",
"import org.apache.derby.iapi.sql.compile.CompilerContext;",
"",
"import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;",
"",
"import org.apache.derby.iapi.sql.depend.DependencyManager;",
"",
"import org.apache.derby.iapi.reference.SQLState;",
"",
"",
"import org.apache.derby.iapi.sql.PreparedStatement;",
"import org.apache.derby.iapi.sql.ResultDescription;",
"",
"import org.apache.derby.impl.sql.CursorInfo;",
"",
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromTable.java",
"hunks": [
{
"added": [
"import java.util.Enumeration;",
"import java.util.HashMap;",
"import java.util.Properties;",
"import java.util.Vector;",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.services.io.FormatableBitSet;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"import org.apache.derby.iapi.sql.compile.AccessPath;",
"import org.apache.derby.iapi.sql.compile.C_NodeTypes;",
"import org.apache.derby.iapi.sql.compile.CostEstimate;",
"import org.apache.derby.iapi.sql.compile.JoinStrategy;",
"import org.apache.derby.impl.sql.execute.HashScanResultSet;"
],
"header": "@@ -21,39 +21,29 @@",
"removed": [
"import org.apache.derby.iapi.services.context.ContextManager;",
"import org.apache.derby.iapi.sql.compile.CostEstimate;",
"import org.apache.derby.iapi.sql.compile.JoinStrategy;",
"import org.apache.derby.iapi.sql.compile.AccessPath;",
"import org.apache.derby.iapi.sql.compile.C_NodeTypes;",
"",
"",
"",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.services.sanity.SanityManager;",
"",
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.error.StandardException;",
"",
"import org.apache.derby.impl.sql.execute.HashScanResultSet;",
"",
"import org.apache.derby.iapi.services.io.FormatableBitSet;",
"import org.apache.derby.catalog.UUID;",
"",
"import java.util.Enumeration;",
"import java.util.Properties;",
"import java.util.Vector;",
"import java.util.HashMap;"
]
},
{
"added": [],
"header": "@@ -94,8 +84,6 @@ abstract class FromTable extends ResultSetNode implements Optimizable",
"removed": [
"\tprivate FormatableBitSet refCols;",
""
]
},
{
"added": [],
"header": "@@ -435,7 +423,6 @@ abstract class FromTable extends ResultSetNode implements Optimizable",
"removed": [
"\t\tboolean indexSpecified = false;"
]
},
{
"added": [],
"header": "@@ -638,7 +625,6 @@ abstract class FromTable extends ResultSetNode implements Optimizable",
"removed": [
"\t\tConglomerateDescriptor cd =\tbestPath.getConglomerateDescriptor();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromVTI.java",
"hunks": [
{
"added": [
"import java.lang.reflect.Constructor;",
"import java.lang.reflect.InvocationTargetException;",
"import java.lang.reflect.Method;",
"import java.lang.reflect.Modifier;",
"import java.sql.PreparedStatement;",
"import java.sql.ResultSet;",
"import java.sql.ResultSetMetaData;",
"import java.sql.SQLException;",
"import java.util.Enumeration;",
"import java.util.HashMap;",
"import java.util.Vector;",
"import org.apache.derby.catalog.TypeDescriptor;",
"import org.apache.derby.catalog.UUID;",
"import org.apache.derby.catalog.types.RoutineAliasInfo;",
"import org.apache.derby.iapi.error.StandardException;",
"import org.apache.derby.iapi.reference.ClassName;",
"import org.apache.derby.iapi.reference.SQLState;",
"import org.apache.derby.iapi.services.classfile.VMOpcode;",
"import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"import org.apache.derby.iapi.services.io.FormatableBitSet;",
"import org.apache.derby.iapi.services.io.FormatableHashtable;",
"import org.apache.derby.iapi.services.loader.ClassInspector;",
"import org.apache.derby.iapi.sql.compile.C_NodeTypes;",
"import org.apache.derby.iapi.sql.compile.CostEstimate;",
"import org.apache.derby.iapi.sql.compile.Optimizable;",
"import org.apache.derby.iapi.sql.compile.OptimizablePredicate;",
"import org.apache.derby.iapi.sql.compile.Visitor;",
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"import org.apache.derby.iapi.sql.execute.ExecutionContext;",
"import org.apache.derby.iapi.util.JBitSet;"
],
"header": "@@ -21,81 +21,53 @@",
"removed": [
"import org.apache.derby.iapi.services.loader.ClassInspector;",
"import org.apache.derby.iapi.services.loader.GeneratedMethod;",
"",
"import org.apache.derby.iapi.services.context.ContextManager;",
"",
"import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"",
"",
"import org.apache.derby.iapi.error.StandardException;",
"",
"import org.apache.derby.iapi.sql.compile.CompilerContext;",
"import org.apache.derby.iapi.sql.compile.OptimizablePredicate;",
"import org.apache.derby.iapi.sql.compile.Optimizable;",
"import org.apache.derby.iapi.sql.compile.CostEstimate;",
"import org.apache.derby.iapi.sql.compile.Visitable;",
"import org.apache.derby.iapi.sql.compile.Visitor;",
"import org.apache.derby.iapi.sql.compile.C_NodeTypes;",
"",
"",
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"",
"import org.apache.derby.iapi.reference.ClassName;",
"import org.apache.derby.iapi.reference.SQLState;",
"",
"import org.apache.derby.iapi.sql.Activation;",
"",
"import org.apache.derby.catalog.TypeDescriptor;",
"import org.apache.derby.catalog.UUID;",
"import org.apache.derby.catalog.types.RoutineAliasInfo;",
"",
"import org.apache.derby.iapi.util.JBitSet;",
"import org.apache.derby.iapi.services.io.FormatableBitSet;",
"import org.apache.derby.iapi.services.classfile.VMOpcode;",
"import org.apache.derby.iapi.services.info.JVMInfo;",
"",
"import org.apache.derby.impl.sql.compile.ActivationClassBuilder;",
"import org.apache.derby.iapi.sql.execute.ExecutionContext;",
"",
"import java.lang.reflect.Constructor;",
"import java.lang.reflect.InvocationTargetException;",
"import java.lang.reflect.Method;",
"import java.lang.reflect.Modifier;",
"",
"import java.sql.PreparedStatement;",
"import java.sql.ResultSet;",
"import java.sql.ResultSetMetaData;",
"import java.sql.SQLException;",
"import java.sql.Types;",
"",
"import java.util.Enumeration;",
"import java.util.HashMap;",
"import java.util.HashSet;",
"import java.util.Properties; ",
"import java.util.Vector;",
"import org.apache.derby.iapi.services.io.FormatableHashtable;",
"",
"import java.lang.reflect.Modifier;",
""
]
},
{
"added": [],
"header": "@@ -123,19 +95,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
"",
"\t/**",
"\t\tWas a FOR UPDATE clause specified in a SELECT statement.",
"\t*/",
"\tprivate boolean forUpdatePresent;",
"",
"",
"\t/**",
"\t\tWas the FOR UPDATE clause empty (no columns specified).",
"\t*/",
"\tprivate boolean emptyForUpdate;",
"",
""
]
},
{
"added": [],
"header": "@@ -153,7 +112,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
" private boolean isInsensitive;"
]
},
{
"added": [],
"header": "@@ -739,7 +697,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
" isInsensitive = (resultSetType == ResultSet.TYPE_SCROLL_INSENSITIVE);"
]
},
{
"added": [],
"header": "@@ -906,8 +863,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
"\t\tResultColumnList\tderivedRCL = resultColumns;",
""
]
},
{
"added": [],
"header": "@@ -1045,7 +1000,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
"\t\tTableName\t\texposedTableName;"
]
},
{
"added": [],
"header": "@@ -1808,10 +1762,6 @@ public class FromVTI extends FromTable implements VTIEnvironment",
"removed": [
"",
"\t\tTableName tableName = makeTableName(td.getSchemaName(), ",
"\t\t\t\t\t\t\t\t\t\t\ttd.getName());",
""
]
}
]
}
] |
derby-DERBY-5817-273ad5f7
|
DERBY-5817: Add support for the JaCoCo code coverage tool
Adds initial support for the JaCoCo code coverage tool.
Top-level ant targets:
o jacoco-complete: runs derbyall, suites.All, junit-lowmem and junit-pptesting
o jacoco-junit: runs suite.All
o jacoco-junit-single: runs the test specified by the property
The report currently ends up under 'junit_{timestamp}/coverage-report'.
You need to install 'jacocoant.jar' and 'jacocoagent.jar' in 'tools/java/'
Refactored the ant target 'getsvnversion' (now also loads the version into the
property 'changenumber' and runs only if that property isn't already set).
Patch file: derby-5817-1c-jacoco_support.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1352502 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5819-f5c1abdc
|
DERBY-5819 Add logic to BaseTestCase to start subprocesses ready to be attached to from a Java debugger
Adds options to allow this capability for Oracle Java (properties below
are ignored for other implementations):
derby.test.debugPortBase=<int> default 8800
derby.test.debugSubProcesses=<boolean> default false
derby.test.debugSuspend=<y|n> default 'y'
If several subprocesses are created, the port for subprocess two will be
debugPortBase + 1 (i.e. 8801 by default) etc.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356457 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
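The DERBY-5819 record above describes only the port arithmetic (subprocess two gets debugPortBase + 1, and so on), which is easy to get off by one. A minimal sketch of composing the resulting debugger option, assuming the standard JDWP agent syntax; the class and method names are hypothetical and not part of Derby's BaseTestCase:

```java
// Illustrative sketch of the debug-attach option described above.
// Assumes the standard -agentlib:jdwp syntax for HotSpot-based JVMs;
// DebugOpts/jdwpOption are hypothetical names, not Derby's API.
public class DebugOpts {
    public static String jdwpOption(int debugPortBase, int subprocessNumber, boolean suspend) {
        // Subprocess one listens on debugPortBase (8800 by default),
        // subprocess two on debugPortBase + 1 (8801), and so on.
        int port = debugPortBase + (subprocessNumber - 1);
        return "-agentlib:jdwp=transport=dt_socket,server=y,suspend="
                + (suspend ? "y" : "n") + ",address=" + port;
    }
}
```

The suspend flag maps to the record's derby.test.debugSuspend property: with 'y' the subprocess waits for the debugger to attach before running.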
derby-DERBY-5821-001ac634
|
DERBY-5821: tools/derbyrunjartest.java doesn't use jvmflags
Add the test to the tools suite. Skip testing the server command on
J2ME platforms, where it's not supported.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1359068 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5821-3919c525
|
DERBY-5821: tools/derbyrunjartest.java doesn't use jvmflags
Converted the test to JUnit to allow it to use BaseTestCase's helper
methods for starting sub-processes.
Extended BaseTestCase's helper methods with support for running jar
files with java -jar.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1353862 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5823-8f8881fd
|
DERBY-5823: Multi-row insert fails on table without generated keys with RETURN_GENERATED_KEYS
The fix contains two parts:
1. Don't collect generated keys if the statement does not actually
generate key values. (This is the fix for the reported problem.)
2. Cache the array of generated key columns between executions. In the
existing code, the array of key columns was created only on the
first execution. Since it wasn't cached, it was null on all
subsequent executions. When it is null, all columns are collected
into the temporary row holder, which wastes space. Now, only the
key columns are collected, also on re-execution.
The test case was contributed by Kristian Waagan.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1537888 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/InsertResultSet.java",
"hunks": [
{
"added": [
"import java.util.Arrays;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
" private int[] autoGeneratedKeysColumnIndexes;"
],
"header": "@@ -96,6 +97,7 @@ class InsertResultSet extends DMLWriteResultSet implements TargetResultSet",
"removed": []
},
{
"added": [
" Arrays.fill(generatedColumnPositionsArray, -1);"
],
"header": "@@ -610,10 +612,8 @@ class InsertResultSet extends DMLWriteResultSet implements TargetResultSet",
"removed": [
"\t\tfor (int i=0; i<size; i++) {",
"\t\t\tgeneratedColumnPositionsArray[i] = -1;",
"\t\t}"
]
},
{
"added": [
" autoGeneratedKeysColumnIndexes =",
" activation.getAutoGeneratedKeysColumnIndexes();",
" if (autoGeneratedKeysColumnIndexes != null) {",
" // Use user-provided column positions array.",
" autoGeneratedKeysColumnIndexes =",
" uniqueColumnPositionArray(autoGeneratedKeysColumnIndexes);",
" } else {",
" // Prepare array of auto-generated keys for the table since",
" // user didn't provide any.",
" autoGeneratedKeysColumnIndexes =",
" generatedColumnPositionsArray();",
" }",
" rd = lcc.getLanguageFactory().getResultDescription(",
" resultDescription, autoGeneratedKeysColumnIndexes);"
],
"header": "@@ -991,23 +991,29 @@ class InsertResultSet extends DMLWriteResultSet implements TargetResultSet",
"removed": [
"\t\tint[] columnIndexes = null;",
"\t\t\tcolumnIndexes = activation.getAutoGeneratedKeysColumnIndexes();",
"\t\t\tif ( columnIndexes != null) {//use user provided column positions array",
"\t\t\t\tcolumnIndexes = uniqueColumnPositionArray(columnIndexes);",
"\t\t\t} else { //prepare array of auto-generated keys for the table since user didn't provide any",
"\t\t\t\tcolumnIndexes = generatedColumnPositionsArray();",
"\t\t\t}",
"\t\t\trd = lcc.getLanguageFactory().getResultDescription(resultDescription,columnIndexes);"
]
}
]
}
] |
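Part 2 of the DERBY-5823 fix above is a compute-once/cache pattern: derive the key-column positions on first execution, keep them in a field, and reuse them on re-execution instead of silently falling back to collecting all columns. A rough sketch under the assumption that the positions cannot change between executions; the names are illustrative, not Derby's InsertResultSet internals:

```java
import java.util.Arrays;

// Rough sketch of the caching described in part 2 of DERBY-5823 above.
// KeyColumnCache/getKeyColumnIndexes are illustrative names only.
public class KeyColumnCache {
    private int[] keyColumnIndexes;   // cached between executions

    public int[] getKeyColumnIndexes(int[] userProvided, int columnCount) {
        if (keyColumnIndexes == null) {
            if (userProvided != null) {
                // Use the user-supplied column positions.
                keyColumnIndexes = userProvided.clone();
            } else {
                // No user-supplied positions: mark every slot "unknown"
                // first, as the patch does with Arrays.fill(array, -1).
                keyColumnIndexes = new int[columnCount];
                Arrays.fill(keyColumnIndexes, -1);
            }
        }
        // Re-execution reuses the cached array instead of recomputing.
        return keyColumnIndexes;
    }
}
```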
derby-DERBY-5824-2d7d37f1
|
DERBY-5824: Disable OSReadOnlyTest when run as privileged user
When running the test as root, it is able to modify the database even
though all database files have been made read-only. Disable the test
in such environments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1599544 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5826-666ab8f5
|
DERBY-5826: Remove unused methods in NetConnectionReply class
This patch was contributed by Mohamed Nufail (nufail56 at gmail dot com)
This change removes several unused methods from the NetConnectionReply
class. Code inspection of the NetConnectionReply class reveals that
the following methods are not used at all:
- verifyConnectReply(int codept)
- readDummyExchangeServerAttributes(Connection connection)
- checkRequiredObjects(boolean receivedFlag, boolean receivedFlag2, boolean receivedFlag3, boolean receivedFlag4, boolean receivedFlag5, boolean receivedFlag6)
- checkRequiredObjects(boolean receivedFlag, boolean receivedFlag2, boolean receivedFlag3, boolean receivedFlag4, boolean receivedFlag5, boolean receivedFlag6, boolean receivedFlag7)
The change also removes the method parseConnectError() which is being used
only by the removed method verifyConnectReply(int codept)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1355959 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetConnectionReply.java",
"hunks": [
{
"added": [],
"header": "@@ -57,47 +57,6 @@ public class NetConnectionReply extends Reply",
"removed": [
" void verifyConnectReply(int codept) throws SqlException {",
" if (peekCodePoint() != codept) {",
" parseConnectError();",
" return;",
" }",
" readLengthAndCodePoint();",
" skipBytes();",
"",
" if (codept == CodePoint.ACCRDBRM) {",
" int peekCP = peekCodePoint();",
" if (peekCP == Reply.END_OF_SAME_ID_CHAIN) {",
" return;",
" }",
"",
" parseTypdefsOrMgrlvlovrs();",
" NetSqlca netSqlca = parseSQLCARD(null);",
" netAgent_.netConnection_.completeSqlca(netSqlca);",
" }",
" }",
"",
" void parseConnectError() throws DisconnectException {",
" int peekCP = peekCodePoint();",
" switch (peekCP) {",
" case CodePoint.CMDCHKRM:",
" parseCMDCHKRM();",
" break;",
" case CodePoint.MGRLVLRM:",
" parseMGRLVLRM();",
" break;",
" default:",
" parseCommonError(peekCP);",
" }",
" }",
"",
" void readDummyExchangeServerAttributes(Connection connection) throws SqlException {",
" startSameIdChainParse();",
" parseDummyEXCSATreply((NetConnection) connection);",
" endOfSameIdChainData();",
" agent_.checkForChainBreakingException_();",
" }",
""
]
},
{
"added": [],
"header": "@@ -2814,32 +2773,6 @@ public class NetConnectionReply extends Reply",
"removed": [
" protected void checkRequiredObjects(boolean receivedFlag,",
" boolean receivedFlag2,",
" boolean receivedFlag3,",
" boolean receivedFlag4,",
" boolean receivedFlag5,",
" boolean receivedFlag6) throws DisconnectException {",
" if (!receivedFlag || !receivedFlag2 || !receivedFlag3 || !receivedFlag4 ||",
" !receivedFlag5 || !receivedFlag6) {",
" doSyntaxrmSemantics(CodePoint.SYNERRCD_REQ_OBJ_NOT_FOUND);",
" }",
"",
" }",
"",
" protected void checkRequiredObjects(boolean receivedFlag,",
" boolean receivedFlag2,",
" boolean receivedFlag3,",
" boolean receivedFlag4,",
" boolean receivedFlag5,",
" boolean receivedFlag6,",
" boolean receivedFlag7) throws DisconnectException {",
" if (!receivedFlag || !receivedFlag2 || !receivedFlag3 || !receivedFlag4 ||",
" !receivedFlag5 || !receivedFlag6 || !receivedFlag7) {",
" doSyntaxrmSemantics(CodePoint.SYNERRCD_REQ_OBJ_NOT_FOUND);",
" }",
" }",
""
]
}
]
}
] |
derby-DERBY-5827-f7066326
|
DERBY-5827: Remove unused methods in NetStatementReply class
This patch was contributed by Mohamed Nufail (nufail56 at gmail dot com)
This change removes several unused methods from the NetStatementReply
class. Code inspection of the NetStatementReply class reveals that
the following methods are not used at all:
parseQRYPRCTYP()
parseSQLCSRHLD()
parseQRYATTSCR()
parseQRYATTSET()
parseQRYATTSNS()
parseQRYATTUPD()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356065 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetStatementReply.java",
"hunks": [
{
"added": [],
"header": "@@ -1648,17 +1648,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // Query Protocol type specifies the type of query protocol",
" // the target SQLAM uses.",
" protected int parseQRYPRCTYP() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.QRYPRCTYP);",
" int qryprctyp = parseCODPNTDR();",
" if ((qryprctyp != CodePoint.FIXROWPRC) && (qryprctyp != CodePoint.LMTBLKPRC)) {",
" doValnsprmSemantics(CodePoint.QRYPRCTYP, qryprctyp);",
" }",
" return qryprctyp;",
" }",
""
]
},
{
"added": [],
"header": "@@ -1668,23 +1657,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // hold cursor position state indicates whether the requester specified",
" // the HOLD option on the SQL DECLARE CURSOR statement. When the HOLD",
" // option is specified, the cursor is not closed upon execution of a",
" // commit operation.",
" // The value TRUE indicates that the requester specifies the HOLD",
" // operation. The value FALSSE indicates that the requeter is not",
" // specifying the HOLD option.",
" protected int parseSQLCSRHLD() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.SQLCSRHLD);",
" int sqlcsrhld = readUnsignedByte();",
" // 0xF0 is false (default), 0xF1 is true // use constants in if",
" if ((sqlcsrhld != 0xF0) && (sqlcsrhld != 0xF1)) {",
" doValnsprmSemantics(CodePoint.SQLCSRHLD, sqlcsrhld);",
" }",
" return sqlcsrhld;",
" }",
""
]
},
{
"added": [],
"header": "@@ -1695,17 +1667,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // Query Attribute for Scrollability indicates whether",
" // a cursor is scrollable or non-scrollable",
" protected int parseQRYATTSCR() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.QRYATTSCR);",
" int qryattscr = readUnsignedByte(); // use constants in if",
" if ((qryattscr != 0xF0) && (qryattscr != 0xF1)) {",
" doValnsprmSemantics(CodePoint.QRYATTSCR, qryattscr);",
" }",
" return qryattscr;",
" }",
""
]
},
{
"added": [],
"header": "@@ -1715,16 +1676,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // enabled for rowset positioning.",
" protected int parseQRYATTSET() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.QRYATTSET);",
" int qryattset = readUnsignedByte(); // use constants in if",
" if ((qryattset != 0xF0) && (qryattset != 0xF1)) {",
" doValnsprmSemantics(CodePoint.QRYATTSET, qryattset);",
" }",
" return qryattset;",
" }",
""
]
},
{
"added": [],
"header": "@@ -1734,23 +1685,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // Query attribute for Sensitivity indicats the sensitivity",
" // of an opened cursor to changes made to the underlying",
" // base table.",
" protected int parseQRYATTSNS() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.QRYATTSNS);",
" int qryattsns = readUnsignedByte();",
" switch (qryattsns) {",
" case CodePoint.QRYUNK:",
" case CodePoint.QRYINS:",
" break;",
" default:",
" doValnsprmSemantics(CodePoint.QRYATTSNS, qryattsns);",
" break;",
" }",
" return qryattsns;",
" }",
""
]
},
{
"added": [],
"header": "@@ -1766,23 +1700,6 @@ public class NetStatementReply extends NetPackageReply implements StatementReply",
"removed": [
" // Query Attribute for Updatability indicates the updatability",
" // of an opened cursor.",
" protected int parseQRYATTUPD() throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.QRYATTUPD);",
" int qryattupd = readUnsignedByte();",
" switch (qryattupd) {",
" case CodePoint.QRYUNK:",
" case CodePoint.QRYRDO:",
" case CodePoint.QRYUPD:",
" break;",
" default:",
" doValnsprmSemantics(CodePoint.QRYATTUPD, qryattupd);",
" break;",
" }",
" return qryattupd;",
" }",
""
]
}
]
}
] |
derby-DERBY-5828-29fe9b7f
|
DERBY-5828: Remove unused methods in NetPackageReply class
This patch was contributed by Mohamed Nufail (nufail56 at gmail dot com)
This change removes the unused parsePKGNAMCT method in NetPackageReply.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356568 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetPackageReply.java",
"hunks": [
{
"added": [],
"header": "@@ -197,18 +197,4 @@ public class NetPackageReply extends NetConnectionReply {",
"removed": [
" // RDB Package Name and Consistency token Scalar Object specifies the",
" // fully qualified name of a relational database package and its",
" // consistency token.",
" protected Object parsePKGNAMCT(boolean skip) throws DisconnectException {",
" parseLengthAndMatchCodePoint(CodePoint.PKGNAMCT);",
" if (skip) {",
" skipBytes();",
" return null;",
" }",
" agent_.accumulateChainBreakingReadExceptionAndThrow(new DisconnectException(agent_,",
" new ClientMessageId(SQLState.DRDA_COMMAND_NOT_IMPLEMENTED),",
" \"parsePKGNAMCT\"));",
" return null; // to make compiler happy",
" }"
]
}
]
}
] |
derby-DERBY-5830-1966619e
|
DERBY-5830: Make DoubleProperties.propertyNames() thread-safe
Don't store the property values in the intermediate Hashtable as they
are not needed. They may be null if the Properties instances are
modified after the recursive calls to Properties.propertyNames(), and
trying to store a null value in a Hashtable results in a
NullPointerException, causing issues such as DERBY-4269.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1353852 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/util/DoubleProperties.java",
"hunks": [
{
"added": [
"import java.util.Collections;",
"import java.util.HashSet;",
"import java.util.Properties;"
],
"header": "@@ -21,8 +21,10 @@",
"removed": [
"import java.util.Properties;"
]
},
{
"added": [
" Only the put(), propertyNames() and getProperty() methods are supported"
],
"header": "@@ -31,7 +33,7 @@ import java.util.Enumeration;",
"removed": [
" Only the put(), keys() and getProperty() methods are supported"
]
},
{
"added": [
" HashSet names = new HashSet();",
" addAllNames(write, names);",
" addAllNames(read, names);",
" return Collections.enumeration(names);",
"",
" /**",
" * Add all property names in the Properties object {@code src} to the",
" * HashSet {@code dest}.",
" */",
" private static void addAllNames(Properties src, HashSet dest) {",
" if (src != null) {",
" for (Enumeration e = src.propertyNames(); e.hasMoreElements(); ) {",
" dest.add(e.nextElement());",
" }",
" }",
" }"
],
"header": "@@ -60,23 +62,21 @@ public final class DoubleProperties extends Properties {",
"removed": [
"",
"\t\tProperties p = new Properties();",
"",
"\t\tif (write != null) {",
"",
"\t\t\tfor (Enumeration e = write.propertyNames(); e.hasMoreElements(); ) {",
"\t\t\t\tString key = (String) e.nextElement();",
"\t\t\t\tp.put(key, write.getProperty(key));",
"\t\t\t}",
"\t\t}",
"",
"\t\tif (read != null) {",
"\t\t\tfor (Enumeration e = read.propertyNames(); e.hasMoreElements(); ) {",
"\t\t\t\tString key = (String) e.nextElement();",
"\t\t\t\tp.put(key, read.getProperty(key));",
"\t\t\t}",
"\t\t}",
"\t\treturn p.keys();"
]
}
]
}
] |
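The DERBY-5830 fix above can be sketched in isolation: merge only the *names* of the two Properties objects into a HashSet, never the values, so a concurrent modification between the name enumeration and a value lookup cannot inject a null into a Hashtable (the NullPointerException behind DERBY-4269). The class name below is illustrative; Derby's actual class is DoubleProperties:

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Properties;

// Sketch of the DERBY-5830 approach: union the property names of a
// write-side and a read-side Properties object without reading values.
// NameMerge is an illustrative stand-in for Derby's DoubleProperties.
public class NameMerge {
    public static Enumeration<Object> propertyNames(Properties write, Properties read) {
        HashSet<Object> names = new HashSet<>();
        addAllNames(write, names);
        addAllNames(read, names);
        return Collections.enumeration(names);
    }

    // Add all property names in src (which may be null) to dest.
    private static void addAllNames(Properties src, HashSet<Object> dest) {
        if (src != null) {
            for (Enumeration<?> e = src.propertyNames(); e.hasMoreElements(); ) {
                dest.add(e.nextElement());
            }
        }
    }
}
```

Because only keys are copied, a value that becomes null after the recursive propertyNames() calls can no longer break the merge.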
derby-DERBY-5833-ab750e37
|
DERBY-5833: Remove unused methods in NetCallableStatement class
This patch was contributed by Mohamed Nufail (nufail56 at gmail dot com)
This change removes the following unused methods from the
NetCallableStatement class:
- resetNetCallableStatement(NetAgent netAgent, NetConnection netConnection, String sql, Section section)
- resetNetCallableStatement(NetAgent netAgent, NetConnection netConnection, String sql, Section section, ColumnMetaData parameterMetaData, ColumnMetaData resultSetMetaData)
I suspect that, of the two remaining overloads of the resetNetCallableStatement
method, the 3-argument variant could be made private, as it is called only
by the 6-argument variant. That might be a further improvement we could do
at some future time. For now, this change simply removes the two variants
that are wholly unused.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356066 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetCallableStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -91,22 +91,5 @@ public class NetCallableStatement extends NetPreparedStatement",
"removed": [
" void resetNetCallableStatement(NetAgent netAgent,",
" NetConnection netConnection,",
" String sql,",
" Section section) throws SqlException {",
" callableStatement_.resetCallableStatement(netAgent, netConnection, sql, section);",
" resetNetCallableStatement(callableStatement_, netAgent, netConnection);",
" }",
"",
" void resetNetCallableStatement(NetAgent netAgent,",
" NetConnection netConnection,",
" String sql,",
" Section section,",
" ColumnMetaData parameterMetaData,",
" ColumnMetaData resultSetMetaData) throws SqlException {",
" callableStatement_.resetCallableStatement(netAgent, netConnection, sql, section, parameterMetaData, resultSetMetaData);",
" resetNetCallableStatement(callableStatement_, netAgent, netConnection);",
" }"
]
}
]
}
] |
derby-DERBY-5834-ca72f66a
|
DERBY-5834: Remove unused methods in NetPreparedStatement class
This patch was contributed by Mohamed Nufail (nufail56 at gmail dot com)
This change removes two unused overloads of the resetNetPreparedStatement
method in NetPreparedStatement.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356573 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetPreparedStatement.java",
"hunks": [
{
"added": [],
"header": "@@ -141,24 +141,6 @@ public class NetPreparedStatement extends NetStatement",
"removed": [
" void resetNetPreparedStatement(NetAgent netAgent,",
" NetConnection netConnection,",
" String sql,",
" Section section) throws SqlException {",
" preparedStatement_.resetPreparedStatement(netAgent, netConnection, sql, section);",
" resetNetPreparedStatement(preparedStatement_, netAgent, netConnection);",
" }",
"",
" void resetNetPreparedStatement(NetAgent netAgent,",
" NetConnection netConnection,",
" String sql,",
" Section section,",
" ColumnMetaData parameterMetaData,",
" ColumnMetaData resultSetMetaData) throws SqlException {",
" preparedStatement_.resetPreparedStatement(netAgent, netConnection, sql, section, parameterMetaData, resultSetMetaData);",
" this.resetNetPreparedStatement(preparedStatement_, netAgent, netConnection);",
" }",
""
]
}
]
}
] |
derby-DERBY-5838-31bea464
|
DERBY-5838: Prevent users from changing the value of the DataDictionaryVersion property.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1356333 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/LanguageDbPropertySetter.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.dictionary.DataDictionary;"
],
"header": "@@ -30,6 +30,7 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
}
]
}
] |
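The DERBY-5838 record above describes rejecting user writes to the internal `DataDictionaryVersion` property. The diff shown only adds an import, so the following is a minimal sketch of the general idea — a property-set filter that refuses to overwrite an engine-managed key — not Derby's actual `LanguageDbPropertySetter` logic; the class, method, and property-name constant are all illustrative stand-ins.

```java
/**
 * Hypothetical sketch of a property-set guard in the spirit of DERBY-5838:
 * user code may set ordinary properties, but an attempt to change the
 * engine-managed data dictionary version is rejected up front.
 */
public class PropertyGuard {

    /** Illustrative stand-in for the protected property's name. */
    static final String DD_VERSION = "DataDictionaryVersion";

    /** Rejects writes to the protected key; other keys pass through. */
    static void validate(String key, String value) {
        if (DD_VERSION.equals(key)) {
            throw new IllegalArgumentException(
                "Property '" + key + "' is managed by the engine and cannot be set");
        }
    }

    public static void main(String[] args) {
        validate("derby.language.logStatementText", "true"); // ordinary key: allowed
        boolean rejected = false;
        try {
            validate(DD_VERSION, "10.9"); // protected key: refused
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected);
    }
}
```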
derby-DERBY-5839-f3ade652
|
DERBY-5839 (dblook run on toursdb fails on triggers with java.lang.StringIndexOutOfBoundsException in dblook.log)
We document that SYSTRIGGERS.REFERENCEDCOLUMNS is not part of the public API and hence that allows Derby to change underneath the behavior of the column. Prior to 10.9, this column only had information about columns referenced by UPDATE trigger. But, with 10.9, we use this column to also hold information about the trigger columns being used inside trigger action plan. This enables Derby to read only necessary columns from trigger table. But because of this change, it is not enough in dblook to check if SYSTRIGGERS.REFERENCEDCOLUMNS.wasNull. We need to also check if the string representation of that column is "NULL". Making this change fixes DERBY-5839
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1370446 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/tools/org/apache/derby/impl/tools/dblook/DB_Trigger.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t//DERBY-5839 dblook run on toursdb fails on triggers",
"\t\t\t\t\t\t//\twith java.lang.StringIndexOutOfBoundsException in",
"\t\t\t\t\t\t//\tdblook.log",
"\t\t\t\t\t\t//We document that SYSTRIGGERS.REFERENCEDCOLUMNS is not",
"\t\t\t\t\t\t// part of the public API and hence that allows Derby ",
"\t\t\t\t\t\t// to change underneath the behavior of the column.",
"\t\t\t\t\t\t// Prior to 10.9, this column only had information",
"\t\t\t\t\t\t// about columns referenced by UPDATE trigger. But,",
"\t\t\t\t\t\t// with 10.9, we use this column to also hold ",
"\t\t\t\t\t\t// information about the trigger columns being used ",
"\t\t\t\t\t\t// inside trigger action plan. This enables Derby to ",
"\t\t\t\t\t\t// read only necessary columns from trigger table. But",
"\t\t\t\t\t\t// because of this change, it is not enough in dblook",
"\t\t\t\t\t\t// to check if SYSTRIGGERS.REFERENCEDCOLUMNS.wasNull. ",
"\t\t\t\t\t\t// We need to also check if the string representation ",
"\t\t\t\t\t\t// of that column is \"NULL\". Making this change fixes",
"\t\t\t\t\t\t// DERBY-5839",
"\t\t\t\t\t\tif (!aTrig.wasNull() && !updateCols.equals(\"NULL\")) {"
],
"header": "@@ -125,7 +125,24 @@ public class DB_Trigger {",
"removed": [
"\t\t\t\t\t\tif (!aTrig.wasNull()) {"
]
}
]
}
] |
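The DERBY-5839 fix above hinges on one predicate: a `SYSTRIGGERS.REFERENCEDCOLUMNS` value is only usable when the JDBC column was not SQL NULL *and* does not hold the literal string `"NULL"` that 10.9+ can store. The helper below isolates that check so it can be exercised without a database; the class and method names are illustrative, not Derby's.

```java
/**
 * Sketch of the DERBY-5839 guard. In dblook the first argument would come
 * from ResultSet.wasNull() after reading REFERENCEDCOLUMNS, and the second
 * from the string value of that column.
 */
public class ReferencedColumns {

    /** True only when the column holds real column-reference data. */
    static boolean hasUpdateColumns(boolean wasNull, String updateCols) {
        // Pre-10.9 it was enough to test wasNull; 10.9+ may also store the
        // literal string "NULL", which must be treated as absent data too.
        return !wasNull && updateCols != null && !updateCols.equals("NULL");
    }

    public static void main(String[] args) {
        System.out.println(hasUpdateColumns(false, "(1,3)")); // usable data
        System.out.println(hasUpdateColumns(true, null));     // SQL NULL
        System.out.println(hasUpdateColumns(false, "NULL"));  // literal "NULL"
    }
}
```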
derby-DERBY-584-0c186b3d
|
Add workaround in DatabaseMetaDataTest for DERBY-584 to stop DatabaseMetaDataTest failing.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@509420 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5840-27fbf330
|
DERBY-5840: Compile network server code with source and target level 1.5
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1361925 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/build/org/apache/derbyBuild/classlister.java",
"hunks": [
{
"added": [
"\t\t// they must be picked up from derby.jar and not put in"
],
"header": "@@ -491,7 +491,7 @@ public class classlister {",
"removed": [
"\t\t// they must be picke dup from cs.jar and not put in"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/CharacterEncodings.java",
"hunks": [
{
"added": [
"import java.util.HashMap;",
""
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
},
{
"added": [
" private static final HashMap<Integer, String> ccsidToJavaEncodingTable__ =",
" new HashMap<Integer, String>();"
],
"header": "@@ -28,7 +30,8 @@ final class CharacterEncodings",
"removed": [
" private static java.util.Hashtable ccsidToJavaEncodingTable__ = new java.util.Hashtable();"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/CodePointNameTable.java",
"hunks": [
{
"added": [
"class CodePointNameTable extends java.util.Hashtable<Integer, String>"
],
"header": "@@ -26,7 +26,7 @@ package org.apache.derby.impl.drda;",
"removed": [
"class CodePointNameTable extends java.util.Hashtable"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
" private List<Integer> unknownManagers;",
" private List<Integer> knownManagers;"
],
"header": "@@ -141,8 +141,8 @@ class DRDAConnThread extends Thread {",
"removed": [
" private List unknownManagers;",
" private List knownManagers;"
]
},
{
"added": [
"\t\tunknownManagers = new ArrayList<Integer>();",
"\t\tknownManagers = new ArrayList<Integer>();",
"\t\tArrayList<Integer> errorManagers = new ArrayList<Integer>();",
"\t\tArrayList<Integer> errorManagersLevel = new ArrayList<Integer>();"
],
"header": "@@ -1689,10 +1689,10 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\tunknownManagers = new ArrayList();",
"\t\tknownManagers = new ArrayList();",
"\t\tArrayList errorManagers = new ArrayList();",
"\t\tArrayList errorManagersLevel = new ArrayList();"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAProtocolException.java",
"hunks": [
{
"added": [
"\tprivate static Hashtable<String, DRDAProtocolExceptionInfo> errorInfoTable;"
],
"header": "@@ -82,7 +82,7 @@ class DRDAProtocolException extends Exception",
"removed": [
"\tprivate static Hashtable errorInfoTable;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAResultSet.java",
"hunks": [
{
"added": [
" /** List of Blobs and Clobs. Return values to send with extdta objects. */",
" private ArrayList<Object> extDtaObjects;",
" private ArrayList<Integer> rsExtPositions;"
],
"header": "@@ -79,11 +79,10 @@ class DRDAResultSet",
"removed": [
"\tprivate ArrayList extDtaObjects; // Arraylist of Blobs and Clobs ",
"\t // Return Values to ",
"\t\t // send with extdta objects.",
"\tprivate ArrayList rsExtPositions;"
]
},
{
"added": [
"\t\t\textDtaObjects = new java.util.ArrayList<Object>();",
"\t\t\trsExtPositions = new java.util.ArrayList<Integer>();"
],
"header": "@@ -258,11 +257,11 @@ class DRDAResultSet",
"removed": [
"\t\t\textDtaObjects = new java.util.ArrayList();",
"\t\t\trsExtPositions = new java.util.ArrayList();"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAStatement.java",
"hunks": [
{
"added": [
" /** Hashtable with resultsets. */",
" private Hashtable<ConsistencyToken, DRDAResultSet> resultSetTable;",
" /** Ordered list of hash keys. */",
" private ArrayList<ConsistencyToken> resultSetKeyList;"
],
"header": "@@ -99,8 +99,10 @@ class DRDAStatement",
"removed": [
"\tprivate Hashtable resultSetTable; // Hashtable with resultsets ",
"\tprivate ArrayList resultSetKeyList; // ordered list of hash keys"
]
},
{
"added": [
"\tprotected ArrayList<Object> getExtDtaObjects()"
],
"header": "@@ -414,19 +416,11 @@ class DRDAStatement",
"removed": [
"\tprotected ArrayList getExtDtaObjects()",
"\t/**",
"\t * Set the extData Objects",
"\t */",
"\tprotected void setExtDtaObjects(ArrayList a)",
"\t{",
"\t\tcurrentDrdaRs.setExtDtaObjects(a);",
"\t}",
""
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/Database.java",
"hunks": [
{
"added": [
" /** Hash table for storing statements. */",
" private Hashtable<Object, DRDAStatement> stmtTable;"
],
"header": "@@ -81,7 +81,8 @@ class Database",
"removed": [
"\tprivate Hashtable stmtTable;\t\t// Hash table for storing statements"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DssTrace.java",
"hunks": [
{
"added": [
" (AccessController.doPrivileged(",
" new PrivilegedExceptionAction<PrintWriter>() {",
" public PrintWriter run()"
],
"header": "@@ -185,9 +185,9 @@ public class DssTrace",
"removed": [
" ((PrintWriter)AccessController.doPrivileged(",
" new PrivilegedExceptionAction() {",
" public Object run()"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\tprivate Vector<String> commandArgs = new Vector<String>();"
],
"header": "@@ -225,7 +225,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\tprivate Vector commandArgs = new Vector();"
]
},
{
"added": [
" /** List of local addresses for checking admin commands. */",
" ArrayList<InetAddress> localAddresses;",
"\tprivate Hashtable<Integer, Session> sessionTable =",
" new Hashtable<Integer, Session>();",
"\tprivate Vector<DRDAConnThread> threadList = new Vector<DRDAConnThread>();",
"\tprivate Vector<Session> runQueue = new Vector<Session>();",
"\tprivate Hashtable<String, AppRequester> appRequesterTable =",
" new Hashtable<String, AppRequester>();"
],
"header": "@@ -303,26 +303,28 @@ public final class NetworkServerControlImpl {",
"removed": [
"\tArrayList localAddresses; // list of local addresses for checking admin",
"\t // commands. ",
"\tprivate Hashtable sessionTable = new Hashtable();",
"\tprivate Vector threadList = new Vector();",
"\tprivate Vector runQueue = new Vector();",
"\tprivate Hashtable appRequesterTable = new Hashtable();"
]
},
{
"added": [
"\t\t\t\tAccessController.doPrivileged(",
" new PrivilegedExceptionAction<ServerSocket>() {",
"\t\t\t\t\t\tpublic ServerSocket run() throws IOException"
],
"header": "@@ -718,9 +720,9 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\t(ServerSocket) ",
"\t\t\t\tAccessController.doPrivileged(new PrivilegedExceptionAction() {",
"\t\t\t\t\t\tpublic Object run() throws IOException"
]
},
{
"added": [
" final ClientThread clientThread = AccessController.doPrivileged(",
" new PrivilegedExceptionAction<ClientThread>() {",
" public ClientThread run() throws Exception {",
" return new ClientThread(thisControl, serverSocket);",
" }",
" });"
],
"header": "@@ -790,16 +792,12 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tfinal ClientThread clientThread =\t ",
"\t\t\t(ClientThread) AccessController.doPrivileged(",
"\t\t\t\t\t\t\t\tnew PrivilegedExceptionAction() {",
"\t\t\t\t\t\t\t\t\tpublic Object run() throws Exception",
"\t\t\t\t\t\t\t\t\t{",
"\t\t\t\t\t\t\t\t\t\treturn new ClientThread(thisControl, ",
"\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tserverSocket);",
"\t\t\t\t\t\t\t\t\t}",
"\t\t\t\t\t\t\t\t}",
"\t\t\t\t\t\t\t);"
]
},
{
"added": [
"\t new PrivilegedAction<Void>() {",
"\t public Void run() {"
],
"header": "@@ -818,8 +816,8 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t new PrivilegedAction() {",
"\t public Object run() {"
]
},
{
"added": [
" new PrivilegedAction<Void>() {",
" public Void run() {",
" threadi.interrupt();",
" return null;",
" }",
" });"
],
"header": "@@ -857,12 +855,12 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t \t\t\t\t\t\t\t\tnew PrivilegedAction() {",
"\t \t\t\t\t\t\t\t\t\tpublic Object run() {",
"\t \t\t\t\t\t\t\t\t\t\tthreadi.interrupt();",
"\t \t\t\t\t\t\t\t\t\t\treturn null;",
"\t \t\t\t\t\t\t\t\t\t}",
"\t \t\t\t\t\t\t\t\t});"
]
},
{
"added": [
"\t\t\tclientSocket = AccessController.doPrivileged(",
"\t\t\t\t\t\t\t\tnew PrivilegedExceptionAction<Socket>() {",
"\t\t\t\t\t\t\t\t\tpublic Socket run()"
],
"header": "@@ -2534,10 +2532,10 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\tclientSocket = (Socket) AccessController.doPrivileged(",
"\t\t\t\t\t\t\t\tnew PrivilegedExceptionAction() {",
"\t\t\t\t\t\t\t\t\tpublic Object run() "
]
},
{
"added": [
" localAddresses = new ArrayList<InetAddress>(3);"
],
"header": "@@ -2626,7 +2624,7 @@ public final class NetworkServerControlImpl {",
"removed": [
" localAddresses = new ArrayList(3);"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/Session.java",
"hunks": [
{
"added": [
" /** Table of databases accessed in this session. */",
" private\tHashtable<String, Database> dbtable;"
],
"header": "@@ -69,7 +69,8 @@ class Session",
"removed": [
"\tprivate\tHashtable\tdbtable;\t\t// Table of databases accessed in this session"
]
}
]
}
] |
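The DERBY-5840 diffs above apply one Java 5 idiom throughout the network server: replace raw collection types with parameterized ones so `get()` needs no cast and wrong-typed `put()` calls fail at compile time. The self-contained sketch below mirrors the `CharacterEncodings` conversion; the class name, method, and table entries are illustrative (CCSID values chosen as commonly documented ones), not copied from Derby.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative version of the raw-to-generic conversion in DERBY-5840,
 * e.g. replacing
 *   private static java.util.Hashtable ccsidToJavaEncodingTable__ = new java.util.Hashtable();
 * with a Map<Integer, String>.
 */
public class GenericsDemo {

    private static final Map<Integer, String> CCSID_TO_ENCODING =
            new HashMap<Integer, String>();

    static {
        // Illustrative entries only.
        CCSID_TO_ENCODING.put(1208, "UTF-8");
        CCSID_TO_ENCODING.put(819, "ISO-8859-1");
    }

    /** Lookup needs no (String) cast; autoboxing supplies the Integer key. */
    static String encodingFor(int ccsid) {
        return CCSID_TO_ENCODING.get(ccsid);
    }

    public static void main(String[] args) {
        System.out.println(encodingFor(1208));
    }
}
```

The same pattern drives the `PrivilegedExceptionAction<ServerSocket>` and `PrivilegedAction<Void>` changes in `NetworkServerControlImpl`: parameterizing the action type lets `doPrivileged` return the concrete type directly, removing the downcasts visible in the removed lines.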