| id | commit_message | diffs |
|---|---|---|
derby-DERBY-5121-6f271b4b
|
DERBY-1482/DERBY-5121
Rick Hillegas contributed a trigger test for DERBY-1482/DERBY-5121. With revision 1125453, that test was added to Changes10_8, but the test is really applicable to upgrades from all releases and should not live in a version-specific upgrade test. As a result, I am moving the test from Changes10_8.java to BasicSetup.java. This will ensure that the trigger test gets run for upgrades from all previous releases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1130895 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5121-c1193bf7
|
DERBY-5121 Data corruption when executing an UPDATE trigger
Committing changes to back out DERBY-1482 from trunk (the 10.8 codeline). The changes have already been backed out of 10.7. In addition to the engine code backport, this also disables the tests that were added for DERBY-1482.
With DERBY-1482, these tests would not read large object columns into memory because the triggers didn't need them. But now that the DERBY-1482 changes are being backed out, the large object columns will be read in, which can cause the tests to run out of memory depending on how much heap is available. I will disable the tests in 10.7 too.
This commit also adds a comment in DataDictionaryImpl:getTriggerActionString explaining the code changes for the backout. I will add that comment in 10.7 too.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1084718 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\t// DERBY-1482 has caused a regression which is being worked",
"\t\t// under DERBY-5121. Until DERBY-5121 is fixed, we want",
"\t\t// Derby to create triggers same as it is done in 10.6 and",
"\t\t// earlier. This in other words means that do not try to",
"\t\t// optimize how many columns are read from the trigger table,",
"\t\t// simply read all the columns from the trigger table.",
"\t\tboolean in10_7_orHigherVersion = false;"
],
"header": "@@ -4716,8 +4716,13 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tboolean in10_7_orHigherVersion =",
"\t\t\tcheckVersion(DataDictionary.DD_VERSION_DERBY_10_7,null);"
]
}
]
}
] |
derby-DERBY-5121-df731e99
|
DERBY-5121 Data corruption when executing an UPDATE trigger
Changes made for DERBY-1482 caused corruption, which is being worked
on under DERBY-5121. The issue is that the generated trigger action
sql could be looking for columns (by position, not name) in
incorrect positions. With DERBY-1482, triggers assumed that the
runtime resultset they receive would only have the trigger columns
and the trigger action columns used through the REFERENCING clause.
That is an incorrect assumption because the resultset could have
more columns if the triggering sql requires them. The DERBY-1482
changes are in the 10.7 and higher codelines. Because of this bug,
the changes for DERBY-1482 have been backed out of the 10.7 and 10.8
codelines, so they now match the 10.6 and earlier releases. In other
words, the resultset presented to the trigger will have all the
columns from the trigger table, and the generated trigger action
sql should look for columns by their absolute column position in the
trigger table. Disabling this code ensures that all future triggers
get created correctly. The triggers existing at the time of the
upgrade (to a release with the DERBY-1482 backout changes in it)
will get marked invalid; the next time they fire on a release with
the DERBY-1482 changes backed out, their trigger action sql will be
regenerated and they will start behaving correctly. So, it is
*highly* recommended that users upgrade from 10.7.1.1 to the next
point release of 10.7 or to 10.8.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1085613 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
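The positional-lookup hazard described above can be illustrated with a small stand-alone sketch (hypothetical code, not Derby internals): trigger action sql compiled against an absolute column position breaks as soon as the runtime resultset is narrowed to only the referenced columns.

```java
// Hypothetical sketch, not Derby internals: why absolute column positions
// break against a narrowed runtime resultset.
public class TriggerPositionSketch {
    public static void main(String[] args) {
        // Trigger table columns C1, C2, C3 at absolute positions 1..3.
        String[] fullRow = {"c1", "c2", "c3"};
        // DERBY-1482-style optimization: only the referenced column C3 is
        // read, so it lands at position 1 instead of 3.
        String[] narrowedRow = {"c3"};

        int absolutePosition = 3; // position baked into the generated sql
        System.out.println(fullRow[absolutePosition - 1]); // "c3" as expected
        // The same lookup against the narrowed row is out of bounds (or,
        // worse, silently reads the wrong column if enough columns remain):
        if (absolutePosition <= narrowedRow.length) {
            System.out.println(narrowedRow[absolutePosition - 1]);
        } else {
            System.out.println("position 3 does not exist in the narrowed row");
        }
    }
}
```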
derby-DERBY-5133-8709c8a9
|
DERBY-5133; nightly test failure in derbyall/storeall/storemore/SpaceTable
Addressing review input: using JDBCTestCase.dropTable(), simplifying the retry logic and eliminating the code duplication, and adjusting comments.
Also addresses a failure with Java 8 caused by a different test case sequence.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1594451 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5139-84349af5
|
DERBY-5101: TruncateTableTest depends on implicit ordering of test cases
Added a workaround for DERBY-5139 so that the test doesn't fail when testSelfReferencing runs first.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1082428 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-514-1ba0984d
|
DERBY-2242; remove metadata.java, metadata_test.java and odbc_metadata.java.
The test has been converted to the junit test DatabaseMetaDataTest.
Also removed the master file PhaseTester.out, orphaned from rev 407396 (DERBY-514).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@597464 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-514-5935349d
|
DERBY-514: get upgrade tests working with the test harness. Submitted
derby-514-patch3-v2.diff
Committed for Deepa Remesh <dremesh@gmail.com>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@394157 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-514-6d02108f
|
DERBY-514: Enable upgrade tests to work with unpackaged classes. Add upgrade
tests to derbyall.
Committed for Deepa Remesh <dremesh@gmail.com>
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@407396 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-514-7095131a
|
DERBY-514 Integrate upgrade tests into test suite
Contributed by Deepa Remesh.
Attaching a patch 'derby-514-patch2-runtest-v1.diff' which enables the upgrade test to run with the test harness.
Summary:
* The findCodeBase method in harness/jvm.java is changed to public. This method is used by the upgrade test to get the location of the new jar files.
* Adds the other derby jars to the jar file list in UpgradeTester. This will allow the test to run in the client framework. I tried running the test in the client framework and it looks like this will need a new master file and some more work.
* In UpgradeTester, the File.toURL method is used when creating the class loader. This seems to be a better way to construct the URL.
* Master file update
This patch, combined with the previous patch (derby-514-buildfiles-v1.diff), will allow the upgrade test to be run using RunTest. The location of the old jars has to be passed in as a property in jvmflags. Command to run:
java -Djvmflags=-DderbyTesting.oldJarLocation=<location of 10.1 jars> org.apache.derbyTesting.functionTests.harness.RunTest upgradeTests/Upgrade_10_1_10_2.java
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@391844 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5141-a5357587
|
DERBY-5141 SSLTest fails with java.net.SocketException: Default SSL
context init failed: null
Add some verbosity to NetworkServerTestSetup to show the server start
command with -Dderby.tests.debug=true. This does not fix the issue.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1085078 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.junit.BaseTestCase;"
],
"header": "@@ -33,6 +33,7 @@ import java.security.PrivilegedExceptionAction;",
"removed": []
},
{
"added": [
" String startcommand =\"\";",
" \tstartcommand += command[i] + \" \";",
"",
" BaseTestCase.println(\"XXX server startup command = \" +",
"\tstartcommand + \"\\n\");"
],
"header": "@@ -320,12 +321,13 @@ final public class NetworkServerTestSetup extends BaseTestSetup {",
"removed": [
" /* System.out.println( \"XXX server startup command = \");",
" System.out.print( command[i] + \" \" );",
" System.out.println();",
" */"
]
}
]
}
] |
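A rough sketch of the debug output the patch adds (the hunk above shows only fragments; the helper shape here is an assumption, while the -Dderby.tests.debug=true flag comes from the commit message):

```java
// Sketch: join the server start command array and print it only when the
// tests are run with -Dderby.tests.debug=true.
public class ServerCommandDebug {
    static void printStartCommand(String[] command) {
        StringBuilder startcommand = new StringBuilder();
        for (String part : command) {
            startcommand.append(part).append(' ');
        }
        if (Boolean.getBoolean("derby.tests.debug")) {
            System.out.println("server startup command = " + startcommand);
        }
    }
}
```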
derby-DERBY-5143-c85d4650
|
DERBY-5143: Remove unnecessary copying of the map in getTypeMap()
- made the JDBC 4.0 overrides of getTypeMap() just call
super.getTypeMap(), and added SuppressWarnings annotation to silence
the unchecked conversion warnings in the overridden methods
- made the client implementation return EMPTY_MAP (immutable) instead
of an empty HashMap (mutable), to match the embedded implementation
- updated ConnectionTest to expect the returned map to be immutable
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1083917 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
"import java.util.Collections;"
],
"header": "@@ -26,6 +26,7 @@ import org.apache.derby.jdbc.ClientDataSource;",
"removed": []
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -24,9 +24,6 @@ package org.apache.derby.client.net;",
"removed": [
"import java.sql.Blob;",
"import java.sql.Clob;",
"import java.sql.Connection;"
]
},
{
"added": [],
"header": "@@ -35,10 +32,8 @@ import java.sql.SQLException;",
"removed": [
"import java.util.HashMap;",
"import java.util.Enumeration;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -22,19 +22,14 @@",
"removed": [
"import java.sql.Blob;",
"import java.sql.Clob;",
"import java.sql.Connection;",
"import java.util.HashMap;",
"import java.util.Enumeration;"
]
}
]
}
] |
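The empty-map change can be summarized with a minimal sketch (hypothetical class, not the actual Derby sources): returning a shared immutable map avoids the per-call copy and makes the client match the embedded behavior.

```java
import java.util.Collections;
import java.util.Map;

// Minimal sketch of the pattern: no allocation per call, and callers that
// try to mutate the result get an UnsupportedOperationException, which is
// what the updated ConnectionTest expects.
class TypeMapSketch {
    public Map<String, Class<?>> getTypeMap() {
        return Collections.emptyMap(); // immutable, shared instance
    }
}
```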
derby-DERBY-5145-87baa305
|
DERBY-5145: Provide option to limit compatibility test to combinations that include trunk
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1132648 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-515-cd5e376d
|
DERBY-515 Network Server should log server start and shutdown time to derby.log
Patch contributed by Deepa Remesh.
Summary of patch:
* Removes identical master files in DerbyNetClient and DerbyNet for
derbynet/NSinSameJVM.java and moves it to the top master directory.
* Modified tools/release/build.xml to point to the new master file.
* Adds comments as to why CheapDateFormatter and GMT are used to format the
timestamp. Moves the formatting of the timestamp to a method getFormattedTimestamp().
This will be helpful if the timestamp is to be used for more messages,
as Kathey indicated.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@380722 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\t\t",
"\t\t\t\t\tgetFormattedTimestamp()});"
],
"header": "@@ -561,10 +561,10 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tlong startTime = System.currentTimeMillis();",
"\t\t\t\t\tCheapDateFormatter.formatDate(startTime)});"
]
},
{
"added": [
"\t\t\t\t\t\t\t\tgetFormattedTimestamp()});"
],
"header": "@@ -651,10 +651,9 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\tlong shutdownTime = System.currentTimeMillis();",
"\t\t\t\t\t\t\t\tCheapDateFormatter.formatDate(shutdownTime)});"
]
},
{
"added": [
"\t\t\t\t\t\t\t\tgetFormattedTimestamp()});"
],
"header": "@@ -1712,10 +1711,9 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\tlong shutdownTime = System.currentTimeMillis();",
"\t\t\t\t\t\t\t\tCheapDateFormatter.formatDate(shutdownTime)});"
]
}
]
}
] |
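A hedged sketch of what a getFormattedTimestamp() helper might look like (the real implementation uses CheapDateFormatter; the format string here is an assumption):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

class TimestampSketch {
    // One helper shared by the start and shutdown messages, so the format
    // stays consistent if more messages need a timestamp later.
    static String getFormattedTimestamp() {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT")); // GMT, as the comments explain
        return fmt.format(new Date());
    }
}
```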
derby-DERBY-5152-4e72f55c
|
DERBY-5152 Shutting down db, information that the thread received an interrupt will not be restored to thread's interrupt flag
Patch derby-5152-b. When a thread receives an interrupt and Derby
detects this, Derby will reset the thread's flag and save the fact in
its lcc (LanguageConnectionContext), if available. If not (e.g. during
boot), it will save the information in a thread local variable. For
performance reasons, we use the lcc when available. However, when
shutting down the database, the lcc goes away, and when the JDBC call
returns to the application, the thread's interrupt flag will not be
reinstated as per our specification. This is because the lcc dies
before we do the restoring (during shutdown), so the information that
the thread was interrupted is lost along with the lcc.
This patch copies the information from lcc over to the thread local
variable when lcc is popped and adds a new test case to
InterruptResilienceTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1086443 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/conn/GenericLanguageConnectionContext.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.InterruptStatus;"
],
"header": "@@ -75,6 +75,7 @@ import org.apache.derby.iapi.sql.ParameterValueSet;",
"removed": []
}
]
}
] |
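The save-and-restore pattern the patch describes can be sketched roughly as follows (hypothetical names; Derby's actual InterruptStatus API may differ):

```java
// Hypothetical sketch of "save interrupt now, restore it later": record a
// cleared interrupt in a thread-local and reinstate it before returning
// control to the application.
class InterruptSketch {
    private static final ThreadLocal<Boolean> pending =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    static void noteAndClearInterrupt() {
        if (Thread.interrupted()) {      // clears the thread's flag
            pending.set(Boolean.TRUE);   // remember it survived the clear
        }
    }

    static void restoreIntrFlagIfSeen() {
        if (pending.get()) {
            pending.set(Boolean.FALSE);
            Thread.currentThread().interrupt(); // reinstate for the caller
        }
    }
}
```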
derby-DERBY-5153-a899bbc8
|
DERBY-5153: Intermittent ASSERT FAILED Internal Error-- statistics not found in selectivityForConglomerate when running InterruptResilienceTest
Removed the failing asserts and enabled the test case that exercises the code.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1088495 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/TableDescriptor.java",
"hunks": [
{
"added": [
" * However, no locks are held to prevent the statistics from being dropped,",
" * so the method also handles the case of missing statistics by using a",
" * heuristic to estimate the selectivity."
],
"header": "@@ -1396,6 +1396,9 @@ public class TableDescriptor extends TupleDescriptor",
"removed": []
},
{
"added": [],
"header": "@@ -1407,22 +1410,6 @@ public class TableDescriptor extends TupleDescriptor",
"removed": [
"\t\tif (!statisticsExist(cd))",
"\t\t{",
"\t\t\tif (SanityManager.DEBUG)",
"\t\t\t{",
"\t\t\t\tSanityManager.THROWASSERT(\"no statistics exist for conglomerate\"",
"\t\t\t\t\t\t\t\t\t\t + cd);",
"\t\t\t}",
"\t\t\telse ",
"\t\t\t{",
"\t\t\t\tdouble selectivity = 0.1;",
"\t\t\t\tfor (int i = 0; i < numKeys; i++)",
"\t\t\t\t\tselectivity *= 0.1;",
"\t\t\t\treturn selectivity;",
"\t\t\t}",
"\t\t}",
"\t\t"
]
}
]
}
] |
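When no statistics exist for a conglomerate, the estimate falls back to a fixed heuristic; a minimal sketch mirroring the arithmetic visible in the removed hunk:

```java
// Sketch of the heuristic the new javadoc mentions: 0.1 selectivity,
// multiplied by another 0.1 for each key column, exactly as in the
// removed hunk.
class SelectivitySketch {
    static double missingStatsSelectivity(int numKeys) {
        double selectivity = 0.1;
        for (int i = 0; i < numKeys; i++) {
            selectivity *= 0.1;
        }
        return selectivity;
    }
}
```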
derby-DERBY-5153-ae1478de
|
DERBY-5153: Intermittent ASSERT FAILED Internal Error-- statistics not found in selectivityForConglomerate when running InterruptResilienceTest
Added regression test case that reproduces the bug. Disabled for now.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1087636 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5157-42114dab
|
DERBY-5280: Large batch of DDL in a database procedure dies on a transaction severity error
Backed out the fix for DERBY-5161 since it's causing a regression and
shouldn't be needed after DERBY-5157.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1138787 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5157-cac43eee
|
DERBY-5157: Incomplete quoting of SQL identifiers in AlterTableConstantAction
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1086526 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.StringUtil;"
],
"header": "@@ -84,6 +84,7 @@ import org.apache.derby.iapi.types.DataTypeDescriptor;",
"removed": []
},
{
"added": [
" String updateStmt = \"UPDATE \" +",
" IdUtil.mkQualifiedName(td.getSchemaName(), td.getName()) +",
" \" SET \" + IdUtil.normalToDelimited(columnName) + \"=\" +",
" defaultText;"
],
"header": "@@ -3181,9 +3182,10 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tString updateStmt = \"UPDATE \\\"\" + td.getSchemaName() + \"\\\".\\\"\" +",
"\t\t\t\t\t\t\ttd.getName() + \"\\\" SET \\\"\" +",
"\t\t\t\t\t\t\t columnName + \"\\\" = \" + defaultText;"
]
},
{
"added": [
" String maxStmt = \"SELECT \" + maxStr + \"(\" +",
" IdUtil.normalToDelimited(columnName) + \") FROM \" +",
" IdUtil.mkQualifiedName(td.getSchemaName(), td.getName());"
],
"header": "@@ -3207,8 +3209,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tString maxStmt = \"SELECT \" + maxStr + \"(\\\"\" + columnName + \"\\\")\" +",
"\t\t\t\t\"FROM \\\"\" + td.getSchemaName() + \"\\\".\\\"\" + td.getName() + \"\\\"\";"
]
}
]
}
] |
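The difference between the naive quoting being removed and proper delimited-identifier quoting can be sketched as follows (hedged: IdUtil.normalToDelimited's actual implementation may differ, but it must at least double embedded quotes):

```java
class QuoteSketch {
    // Proper delimited identifier: wrap in double quotes AND double any
    // embedded double quotes; plain "\"" + name + "\"" concatenation
    // produces invalid (or wrong) SQL for such names.
    static String normalToDelimited(String id) {
        return '"' + id.replace("\"", "\"\"") + '"';
    }

    public static void main(String[] args) {
        // A column actually named: My"Col
        System.out.println(normalToDelimited("My\"Col")); // -> "My""Col"
    }
}
```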
derby-DERBY-5158-86226a22
|
DERBY-5158 Incomprehensible error message on client if attempting rollback after database has been shut down.
Scenario: app code calling client JDBC connection commit or rollback
after the database has been shut down. In the commit case, the issue
and fix only apply if a transaction has been started, since
otherwise the client optimizes the commit away.
Patch DERBY-5158b, which corrects the protocol code on the client side
to cater for ENDUOWRM even in the error case (as sent by the
server). Looking at the DRDA standard, I managed to satisfy myself
that this is the correct behavior: section 7.5 Commit/Rollback
processing, where CR2 says:
"Application servers using remote unit of work protocols and
application servers using distributed unit of work but not protected
by a sync point manager must inform the application requester when the
current unit of work at the application server ends as a result of a
commit or rollback request by an application or application requester
request. This information is returned in the RPYDSS, containing the
ENDUOWRM reply message."
The "remote unit of work" is definitely ended, so...
Note that the (new) error stack trace is still different from the one
with the embedded driver, since there the 08003 will be reported
directly as the error (not wrapped in 06006 as shown below for the
client side). With this new code, on the client side, one can clearly
see from the exception stack that the underlying cause of the error
is 08003, "No current connection".
I added a new test, Derby5158Test.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1089795 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetConnectionReply.java",
"hunks": [
{
"added": [
" parseENDUOWRM(connection);",
" peekCP = parseTypdefsOrMgrlvlovrs();",
" if (peekCP == CodePoint.SQLCARD) {",
" NetSqlca netSqlca = parseSQLCARD(null);",
" connection.completeSqlca(netSqlca);",
" } else {",
" parseCommitError(connection);"
],
"header": "@@ -189,18 +189,15 @@ public class NetConnectionReply extends Reply",
"removed": [
" if (peekCP != CodePoint.ENDUOWRM && peekCP != CodePoint.SQLCARD) {",
" parseCommitError(connection);",
" return;",
" }",
" if (peekCP == CodePoint.ENDUOWRM) {",
" parseENDUOWRM(connection);",
" peekCP = parseTypdefsOrMgrlvlovrs();",
"",
" NetSqlca netSqlca = parseSQLCARD(null);",
" connection.completeSqlca(netSqlca);"
]
},
{
"added": [
" if (peekCP == CodePoint.SQLCARD) {",
" NetSqlca netSqlca = parseSQLCARD(null);",
" connection.completeSqlca(netSqlca);",
" } else {",
" parseRollbackError();",
" }"
],
"header": "@@ -208,16 +205,16 @@ public class NetConnectionReply extends Reply",
"removed": [
" if (peekCP != CodePoint.ENDUOWRM) {",
" parseRollbackError();",
" return;",
" }",
" NetSqlca netSqlca = parseSQLCARD(null);",
" connection.completeSqlca(netSqlca);"
]
}
]
}
] |
derby-DERBY-5159-00306c2e
|
DERBY-5159: ParameterMetaDataJdbc30Test fails with "'PMDD' is not recognized as a function or procedure"
Create shared procedure in the decorator so that the test cases
that need it can find it regardless of the order of execution.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1086559 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5161-42114dab
|
DERBY-5280: Large batch of DDL in a database procedure dies on a transaction severity error
Backed out the fix for DERBY-5161 since it's causing a regression and
shouldn't be needed after DERBY-5157.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1138787 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5165-dc57c1f4
|
DERBY-5165; Prepared XA transaction locks are not kept across DB restart
Adding a license comment to the test
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1624105 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5168-98ec83ef
|
DERBY-5168: Wrong syntax in identifier chain returned by SynonymAliasInfo.toString()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1087641 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/types/SynonymAliasInfo.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.IdUtil;"
],
"header": "@@ -23,6 +23,7 @@ package org.apache.derby.catalog.types;",
"removed": []
}
]
}
] |
derby-DERBY-517-3a3105b7
|
DERBY-517 ResultSet.relative(int rows) behaves differently in embedded and client/server mode when positioned before the first row or after the last row.
Submitted by Fernanda.Pizzorno@Sun.COM
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@348192 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" ",
" // Keep maxRows in the ResultSet, so that changes to maxRow in the statement",
" // do not affect the resultSet after it has been created",
" private int maxRows_;"
],
"header": "@@ -184,6 +184,10 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" maxRows_ = statement_.maxRows_;",
" "
],
"header": "@@ -206,6 +210,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" (maxRows_ > 0 && cursor_.rowsRead_ > maxRows_)) {"
],
"header": "@@ -289,7 +295,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" (statement_.maxRows_ > 0 && cursor_.rowsRead_ > statement_.maxRows_)) {"
]
},
{
"added": [
" } else if (sensitivity_ != sensitivity_sensitive_dynamic__ && maxRows_ > 0 &&",
" (firstRowInRowset_ + currentRowInRowset_ > maxRows_)) {"
],
"header": "@@ -359,8 +365,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" } else if (sensitivity_ != sensitivity_sensitive_dynamic__ && statement_.maxRows_ > 0 &&",
" (firstRowInRowset_ + currentRowInRowset_ > statement_.maxRows_)) {"
]
},
{
"added": [
" absolutePosition_ == (maxRows_ == 0 ? rowCount_ + 1 : maxRows_ + 1)));"
],
"header": "@@ -1503,7 +1509,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" absolutePosition_ == rowCount_ + 1));"
]
},
{
"added": [
" if (sensitivity_ != sensitivity_sensitive_dynamic__ && maxRows_ > 0) {",
" if (rowCount_ > maxRows_) {",
" row = maxRows_;"
],
"header": "@@ -1687,9 +1693,9 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" if (sensitivity_ != sensitivity_sensitive_dynamic__ && statement_.maxRows_ > 0) {",
" if (rowCount_ > statement_.maxRows_) {",
" row = statement_.maxRows_;"
]
},
{
"added": [
" if (maxRows_ > 0) {",
" if (row > 0 && row > maxRows_) {",
" } else if (row <= 0 && java.lang.Math.abs(row) > maxRows_) {"
],
"header": "@@ -1780,14 +1786,14 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" if (statement_.maxRows_ > 0) {",
" if (row > 0 && row > statement_.maxRows_) {",
" } else if (row <= 0 && java.lang.Math.abs(row) > statement_.maxRows_) {"
]
},
{
"added": [
" maxRows_ > 0 && rows > 0 && currentAbsoluteRowNumber + rows > maxRows_) {"
],
"header": "@@ -1907,7 +1913,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" statement_.maxRows_ > 0 && rows > 0 && currentAbsoluteRowNumber + rows > statement_.maxRows_) {"
]
},
{
"added": [
" if (maxRows_ < Math.abs(rowNumber) && maxRows_ != 0) {",
" if (rowNumber > 0) {",
" afterLastX();",
" } else {",
" beforeFirstX();",
" }",
" isValidCursorPosition_ = false;",
" return isValidCursorPosition_;",
" }"
],
"header": "@@ -1921,6 +1927,15 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" if (sensitivity_ != sensitivity_sensitive_dynamic__ && maxRows_ > 0 &&",
" (firstRowInRowset_ + currentRowInRowset_ > maxRows_)) {"
],
"header": "@@ -1979,8 +1994,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" if (sensitivity_ != sensitivity_sensitive_dynamic__ && statement_.maxRows_ > 0 &&",
" (firstRowInRowset_ + currentRowInRowset_ > statement_.maxRows_)) {"
]
},
{
"added": [
" if (rows < 0 || (maxRows_ != 0 && rows > maxRows_)) {"
],
"header": "@@ -2021,7 +2036,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" if (rows < 0 || (statement_.maxRows_ != 0 && rows > statement_.maxRows_)) {"
]
},
{
"added": [
" absolutePosition_ = (maxRows_ == 0) ? rowCount_ + 1 : maxRows_ + 1;"
],
"header": "@@ -2925,7 +2940,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" absolutePosition_ = rowCount_ + 1;"
]
},
{
"added": [
" absolutePosition_ = (maxRows_ == 0) ? rowCount_ + 1 : maxRows_ + 1;",
" "
],
"header": "@@ -3533,14 +3548,14 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" absolutePosition_ = rowCount_ + 1;",
""
]
},
{
"added": [
" ",
" // If afterLast and maxRows > 0, go backward from maxRows and not ",
" // from last row in the resultSet",
" if (maxRows_ > 0 && orientation == scrollOrientation_relative__ && isAfterLast) {",
" rowNumber += maxRows_ + 1;",
" orientation = scrollOrientation_absolute__;",
" }",
" "
],
"header": "@@ -3667,6 +3682,14 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
},
{
"added": [
" if (maxRows_ == 0)",
" lastRowInRowset_ = (isAfterLastRow) ? rowCount_ : firstRowInRowset_ - 1;",
" else",
" lastRowInRowset_ = (isAfterLastRow) ? maxRows_ : firstRowInRowset_ - 1;"
],
"header": "@@ -3699,7 +3722,10 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" lastRowInRowset_ = (isAfterLastRow) ? rowCount_ : firstRowInRowset_ - 1;"
]
},
{
"added": [
" long rowNumber;",
" if (maxRows_ == 0) {",
" rowNumber = (fetchSize_ < row) ? ((-1) * fetchSize_) : 1;",
" } else {",
" rowNumber = (fetchSize_ < row) ? (maxRows_ - fetchSize_) + 1 : 1;",
" }"
],
"header": "@@ -3822,7 +3848,12 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" long rowNumber = (fetchSize_ < row) ? (-1) * fetchSize_ : 1;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
" //keep isAfterLast and isBeforeFirst to be able ",
" //to reposition after counting rows",
" boolean isAfterLast = rs.isAfterLast();",
" boolean isBeforeFirst = rs.isBeforeFirst();",
" ",
"",
" // reposition after last or before first",
" if (isAfterLast) {",
" rs.afterLast();",
" }",
" if (isBeforeFirst) {",
" rs.beforeFirst();",
" } "
],
"header": "@@ -6179,12 +6179,23 @@ public class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t//reposition after last",
"\t\t\t\trs.afterLast();"
]
}
]
}
] |
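The core of the client-side fix is to snapshot maxRows when the result set is created; a simplified sketch (hypothetical names, not the actual client classes):

```java
// Simplified sketch: later Statement.setMaxRows() calls no longer change
// the positioning arithmetic of an already-open result set.
class StatementSketch {
    int maxRows_;
}

class ResultSetSketch {
    private final int maxRows_;

    ResultSetSketch(StatementSketch stmt) {
        maxRows_ = stmt.maxRows_; // snapshot at creation time
    }

    // maxRows_ == 0 means "no limit", matching JDBC semantics.
    boolean withinLimit(long row) {
        return maxRows_ == 0 || row <= maxRows_;
    }
}
```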
derby-DERBY-5170-11468253
|
DERBY-5170: Client doesn't handle double quotes in savepoint names
Moved helper method that quoted SQL identifiers from ResultSet to a utility class, and made Connection use that method to quote savepoint names.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1088500 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Connection.java",
"hunks": [
{
"added": [
" stmt.executeX(",
" \"SAVEPOINT \" + Utils.quoteSqlIdentifier(savepointName) +",
" \" ON ROLLBACK RETAIN CURSORS\");"
],
"header": "@@ -1556,8 +1556,9 @@ public abstract class Connection",
"removed": [
" String sql = \"SAVEPOINT \\\"\" + savepointName + \"\\\" ON ROLLBACK RETAIN CURSORS\";",
" stmt.executeX(sql);"
]
},
{
"added": [
" stmt.executeX(",
" \"ROLLBACK TO SAVEPOINT \" +",
" Utils.quoteSqlIdentifier(savepointName));"
],
"header": "@@ -1615,8 +1616,9 @@ public abstract class Connection",
"removed": [
" String sql = \"ROLLBACK TO SAVEPOINT \\\"\" + savepointName + \"\\\"\";",
" stmt.executeX(sql);"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" insertSQL.append(Utils.quoteSqlIdentifier("
],
"header": "@@ -4532,7 +4532,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" insertSQL.append(quoteSqlIdentifier("
]
},
{
"added": [
" updateString.append(Utils.quoteSqlIdentifier("
],
"header": "@@ -4567,7 +4567,7 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" updateString.append(quoteSqlIdentifier("
]
}
]
}
] |
derby-DERBY-5172-6e591dc9
|
DERBY-5172: testTimeAndDateWithCalendar (jdbcapi.CallableTest) fails intermittently with AssertionFailedError: hour expected: differs from actual.
Make the test convert between timezones via the local timezone, like
Derby does when passing a timestamp through a stored procedure.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1463874 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5173-3c7c7402
|
DERBY-5173: RAFContainer.privGetRandomAccessFile() unwraps wrong exception type
Made run() wrap IOExceptions in StandardExceptions to prevent ClassCastException in the error handler.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1088491 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java",
"hunks": [
{
"added": [
"import java.io.FileNotFoundException;"
],
"header": "@@ -54,6 +54,7 @@ import java.util.Vector;",
"removed": []
},
{
"added": [
" public Object run() throws StandardException"
],
"header": "@@ -1373,7 +1374,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" public Object run() throws StandardException, IOException"
]
},
{
"added": [
" try",
" {",
" return actionFile.getRandomAccessFile(\"rw\");",
" }",
" catch (FileNotFoundException fnfe)",
" {",
" throw StandardException.newException(",
" SQLState.FILE_CREATE, fnfe, actionFile.getPath());",
" }"
],
"header": "@@ -1686,7 +1687,15 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" return actionFile.getRandomAccessFile(\"rw\");"
]
}
]
}
] |
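A hedged sketch of the failure mode (Derby's StandardException replaced by a stand-in class): the caller unwraps the PrivilegedActionException and casts, so run() must not leak raw IOExceptions.

```java
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

class StandardExceptionStub extends Exception {
    StandardExceptionStub(String msg, Throwable cause) { super(msg, cause); }
}

class UnwrapSketch {
    static Object getRandomAccessFile(PrivilegedExceptionAction<Object> action)
            throws StandardExceptionStub {
        try {
            return AccessController.doPrivileged(action);
        } catch (PrivilegedActionException pae) {
            // This cast is only safe if run() wraps its IOExceptions itself;
            // otherwise it fails with the ClassCastException from the report.
            throw (StandardExceptionStub) pae.getException();
        }
    }
}
```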
derby-DERBY-5185-1cfe34ee
|
DERBY-5185 store/rollForwardRecovery.sql stuck in RAFContainer4.recoverContainerAfterInterrupt() during shutdown
Patch derby-5185-2b, which fixes state maintenance bugs (in
threadDoingRestore/restoreChannelInProgress) when throwing
FILE_IO_INTERRUPTED. The first fix addresses the immediate problem.
It also adds a maximum number of retries for the readPage code and
fixes some cases where the state variable "threadsInPageIO" could
risk not being properly updated when exceptions are thrown. The
latter may be the underlying reason for what we see here.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1094572 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java",
"hunks": [
{
"added": [
" // then. Otherwise protected by channelCleanupMonitor. Debugging value not",
" // safe on 1.4, but who cares.."
],
"header": "@@ -87,7 +87,8 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" // then. Not safe on 1.4, but who cares.."
]
},
{
"added": [
" int retries = MAX_INTERRUPT_RETRIES;",
"",
" try {"
],
"header": "@@ -330,6 +331,9 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
"",
" if (retries-- == 0) {",
" throw StandardException.newException(",
" SQLState.FILE_IO_INTERRUPTED);",
" }",
" } finally {"
],
"header": "@@ -393,9 +397,14 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
""
]
},
{
"added": [
" }"
],
"header": "@@ -403,6 +412,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
" try {"
],
"header": "@@ -533,6 +543,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
" } finally {"
],
"header": "@@ -600,7 +611,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
""
]
},
{
"added": [
" }"
],
"header": "@@ -608,6 +619,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
" if (retries-- == 0) {",
" // Clean up state and throw",
" threadDoingRestore = null;",
" restoreChannelInProgress = false;",
" channelCleanupMonitor.notifyAll();",
"",
" throw StandardException.newException(",
" }"
],
"header": "@@ -831,11 +843,16 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" }",
" if (retries-- == 0) {",
" throw StandardException.newException("
]
}
]
}
] |
derby-DERBY-5185-4afad93f
|
DERBY-5185 store/rollForwardRecovery.sql stuck in RAFContainer4.recoverContainerAfterInterrupt() during shutdown
Patch derby-5185-1a.diff. Avoid waiting forever in the loop in
recoverContainerAfterInterrupt where we wait for other concurrent
threads to hit the wall (having seen ClosedChannelException), i.e. so
we know they are waiting for this thread to clean up. The counting
logic (threadsInPageIO) here needs to be correct; if there is an
error, we could risk waiting forever, as seen in this issue.
This patch should be followed up by a patch to correct the logic, but
until then, it improves on the situation.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1091221 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java",
"hunks": [
{
"added": [
" // (ClosedChannelException) and are a ready waiting for us to clean up,",
" // so we can set them loose when we're done.",
" int retries = MAX_INTERRUPT_RETRIES;",
""
],
"header": "@@ -820,8 +820,10 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" // (ClosedChannelException) and are a ready wait for us to clean up, so",
" // we can set them loose when we're done."
]
},
{
"added": [
" if (retries-- == 0) {",
" throw StandardException.newException(",
" SQLState.FILE_IO_INTERRUPTED);",
" }",
"",
" Thread.sleep(INTERRUPT_RETRY_SLEEP);"
],
"header": "@@ -831,8 +833,13 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" Thread.sleep(10);"
]
}
]
}
] |
derby-DERBY-5185-a768c94c
|
DERBY-5185 store/rollForwardRecovery.sql stuck in RAFContainer4.recoverContainerAfterInterrupt() during shutdown
Follow-up patch derby-5185-3a, which adds safeguard limits to two loops waiting on
a condition variable: wait at most one minute to avoid infinite hangs.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1094728 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java",
"hunks": [
{
"added": [
" int retries = MAX_INTERRUPT_RETRIES;",
""
],
"header": "@@ -308,6 +308,8 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
" if (retries-- == 0) {",
" throw StandardException.newException(",
" SQLState.FILE_IO_INTERRUPTED);",
" }",
"",
" channelCleanupMonitor.wait(INTERRUPT_RETRY_SLEEP);"
],
"header": "@@ -317,8 +319,13 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" channelCleanupMonitor.wait();"
]
},
{
"added": [
" int retries = MAX_INTERRUPT_RETRIES;",
"",
" if (retries-- == 0) {",
" throw StandardException.newException(",
" SQLState.FILE_IO_INTERRUPTED);",
" }",
"",
" channelCleanupMonitor.wait(INTERRUPT_RETRY_SLEEP);"
],
"header": "@@ -527,9 +534,16 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" channelCleanupMonitor.wait();"
]
}
]
}
] |
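The bounded-wait pattern these three patches converge on can be sketched as follows (constant names mirror the diffs; the values are assumptions chosen so the total wait is about one minute, as the last message states):

```java
// Minimal sketch: retry a bounded number of times instead of waiting on
// the monitor forever.
class BoundedWaitSketch {
    static final int MAX_INTERRUPT_RETRIES = 120;  // assumed value
    static final long INTERRUPT_RETRY_SLEEP = 500; // ms, assumed value

    static void awaitRecovery(Object channelCleanupMonitor,
                              java.util.function.BooleanSupplier recovered)
            throws InterruptedException {
        synchronized (channelCleanupMonitor) {
            int retries = MAX_INTERRUPT_RETRIES;
            while (!recovered.getAsBoolean()) {
                if (retries-- == 0) {
                    // Stand-in for StandardException/SQLState.FILE_IO_INTERRUPTED
                    throw new IllegalStateException("FILE_IO_INTERRUPTED");
                }
                // Bounded wait: at most ~60 seconds in total, never forever.
                channelCleanupMonitor.wait(INTERRUPT_RETRY_SLEEP);
            }
        }
    }
}
```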
derby-DERBY-5189-589b21f0
|
DERBY-5189: PropertySetter should ignore GCJ installations
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1092333 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/build/org/apache/derbyPreBuild/PropertySetter.java",
"hunks": [
{
"added": [
" // Search the versions backwards (highest first) until a usable one",
" // is found.",
" for (int i = count - 1; i >= 0; i--) {",
" File javadir = versions[i];",
"",
" if (isExcludedJDK(javadir)) {",
" // This directory contains a JDK that we don't expect to",
" // work. Skip it.",
" continue;",
" }",
" String libStub = javadir.getAbsolutePath();",
"",
" //",
" // If the selected java dir is a JDK rather than a JRE, then it",
" // will have a jre subdirectory",
" //",
" File jreSubdirectory = new File(javadir, \"jre\");",
" if (jreSubdirectory.exists()) {",
" libStub = libStub + File.separator + \"jre\";",
" }",
" libStub = libStub + File.separator + \"lib\";",
" return libStub;",
" }",
"",
" return null;",
" }",
"",
" /**",
" * Check if the specified directory should be excluded when searching for",
" * a usable set of Java libraries.",
" *",
" * @param dir the directory to check",
" * @return {@code true} if the libraries in the directory should not be",
" * used for constructing a compile classpath",
" */",
" private static boolean isExcludedJDK(File dir) {",
" // DERBY-5189: The libraries that come with GCJ lack some classes in",
" // the javax.management.remote package and cannot be used for building",
" // Derby.",
" return dir.getName().toLowerCase().contains(\"gcj\");"
],
"header": "@@ -584,19 +584,49 @@ public class PropertySetter extends Task",
"removed": [
" File javadir = versions[ count - 1 ];",
" String libStub = \"\";",
" //",
" // If the selected java dir is a JDK rather than a JRE, then it",
" // will have a jre subdirectory",
" //",
" File jreSubdirectory = new File( javadir, \"jre\" );",
" if ( jreSubdirectory.exists() ) { libStub = libStub + File.separator + \"jre\"; }",
" libStub = libStub + File.separator + \"lib\";",
" return javadir.getAbsolutePath() + libStub;"
]
}
]
}
] |
derby-DERBY-5196-586ae9de
|
DERBY-5196; Correct the layout of log.ctrl as described on the Derby web site
Fixing the header comment in the source file.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550525 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java",
"hunks": [
{
"added": [
"\t\t (logfile counter, position)",
"\t\tint Derby major version",
"\t\tint Derby minor version",
"\t\tint subversion revision/build number",
"\t\tbyte Flags (beta flag (0 or 1), test durability flag (0 or 1))",
"\t\tbyte spare (0)",
"\t\tbyte spare (0)",
"\t\tbyte spare (0)",
"\t\tlong checksum for control data written"
],
"header": "@@ -156,19 +156,19 @@ import java.util.zip.CRC32;",
"removed": [
"\t(pre-v15)",
"\t\tint format id",
"\t\tint log file version",
"\t\tlong the log instant (LogCounter) of the last completed checkpoint",
"\t(v15 onward)",
"\t\tint JBMS version",
"\t\tint checkpoint interval",
"\t\tlong spare (value set to 0)",
"\t\tlong spare (value set to 0)"
]
},
{
"added": [
" the control data written at last check point."
],
"header": "@@ -2397,7 +2397,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": [
" the condrol data written at last check point."
]
}
]
}
] |
derby-DERBY-5210-f6e1e6f9
|
DERBY-5210: Use java.nio.ByteBuffer in client.net.Request
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1102730 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/SignedBinary.java",
"hunks": [
{
"added": [],
"header": "@@ -25,9 +25,6 @@ public class SignedBinary {",
"removed": [
" /** Maximum value that cen be encoded by 6 bytes (signed). */",
" public static final long MAX_LONG_6_BYTES_SIGNED = 0x7FFFFFFFFFFFL;",
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetPackageRequest.java",
"hunks": [
{
"added": [
" byte[] b = new byte[buffer.position() - startPos];",
" buffer.position(startPos);",
" buffer.get(b);",
" writeBytes(section.getPKGNAMCBytes());"
],
"header": "@@ -134,29 +134,14 @@ public class NetPackageRequest extends NetConnectionRequest {",
"removed": [
" int copyLength = offset_ - startPos;",
" byte[] b = new byte[copyLength];",
" System.arraycopy(bytes_,",
" startPos,",
" b,",
" 0,",
" copyLength);",
" byte[] b = section.getPKGNAMCBytes();",
"",
" // Mare sure request buffer has enough space to write this byte array.",
" ensureLength(offset_ + b.length);",
"",
" System.arraycopy(b,",
" 0,",
" bytes_,",
" offset_,",
" b.length);",
"",
" offset_ += b.length;"
]
},
{
"added": [
" int loc = buffer.position();"
],
"header": "@@ -237,7 +222,7 @@ public class NetPackageRequest extends NetConnectionRequest {",
"removed": [
" int loc = offset_;"
]
},
{
"added": [
" int loc = buffer.position();"
],
"header": "@@ -253,7 +238,7 @@ public class NetPackageRequest extends NetConnectionRequest {",
"removed": [
" int loc = offset_;"
]
},
{
"added": [
" byte[] clearedBytes = new byte[buffer.position() - lengthLocation];",
" buffer.position(lengthLocation);",
" buffer.get(clearedBytes);",
""
],
"header": "@@ -268,11 +253,11 @@ public class NetPackageRequest extends NetConnectionRequest {",
"removed": [
" byte[] clearedBytes = new byte[offset_ - lengthLocation];",
" for (int i = lengthLocation; i < offset_; i++) {",
" clearedBytes[i - lengthLocation] = bytes_[i];",
" }"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/Request.java",
"hunks": [
{
"added": [
"import org.apache.derby.client.am.Decimal;"
],
"header": "@@ -22,6 +22,7 @@ package org.apache.derby.client.net;",
"removed": []
},
{
"added": [
"import java.nio.ByteBuffer;",
" protected ByteBuffer buffer;"
],
"header": "@@ -33,17 +34,14 @@ import java.io.BufferedInputStream;",
"removed": [
" protected byte[] bytes_;",
"",
" // keeps track of the next position to place a byte in the buffer.",
" // so the last valid byte in the message is at bytes_[offset - 1]",
" protected int offset_;"
]
},
{
"added": [
" buffer = ByteBuffer.allocate(minSize);",
" buffer.clear();"
],
"header": "@@ -79,12 +77,12 @@ public class Request {",
"removed": [
" bytes_ = new byte[minSize];",
" offset_ = 0;"
]
},
{
"added": [
" if (length > buffer.remaining()) {",
" int newLength =",
" Math.max(buffer.capacity() * 2, buffer.position() + length);",
" // copy the old buffer into a new one",
" buffer.flip();",
" buffer = ByteBuffer.allocate(newLength).put(buffer);"
],
"header": "@@ -106,10 +104,12 @@ public class Request {",
"removed": [
" if (length > bytes_.length) {",
" byte newBytes[] = new byte[Math.max(bytes_.length << 1, length)];",
" System.arraycopy(bytes_, 0, newBytes, 0, offset_);",
" bytes_ = newBytes;"
]
},
{
"added": [
" ensureLength(6);",
" dssLengthLocation_ = buffer.position();",
" buffer.putShort((short) 0xFFFF);",
" buffer.put((byte) 0xD0);",
""
],
"header": "@@ -174,18 +174,18 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 6);",
" dssLengthLocation_ = offset_;",
" bytes_[offset_++] = (byte) 0xFF;",
" bytes_[offset_++] = (byte) 0xFF;",
" bytes_[offset_++] = (byte) 0xD0;"
]
},
{
"added": [
" buffer.put((byte) dssType);",
" buffer.putShort((short) corrId);"
],
"header": "@@ -194,12 +194,11 @@ public class Request {",
"removed": [
" bytes_[offset_++] = (byte) (dssType & 0xff);",
" bytes_[offset_++] = (byte) ((corrId >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (corrId & 0xff);"
]
},
{
"added": [
"\t\t\t\t\tbytesRead =",
" in.read(buffer.array(), buffer.position(), bytesToRead);"
],
"header": "@@ -316,7 +315,8 @@ public class Request {",
"removed": [
"\t\t\t\t\tbytesRead = in.read(bytes_, offset_, bytesToRead);"
]
},
{
"added": [
" buffer.position(buffer.position() + bytesRead);"
],
"header": "@@ -342,7 +342,7 @@ public class Request {",
"removed": [
"\t\t\t\t\toffset_ += bytesRead;"
]
},
{
"added": [
" ensureLength(DssConstants.MAX_DSS_LEN - buffer.position());"
],
"header": "@@ -417,7 +417,7 @@ public class Request {",
"removed": [
" ensureLength( DssConstants.MAX_DSS_LEN );"
]
},
{
"added": [
" in.read(buffer.array(), buffer.position(), spareInDss)",
" buffer.position(buffer.position() + bytesRead);"
],
"header": "@@ -442,11 +442,11 @@ public class Request {",
"removed": [
" in.read(bytes_, offset_, spareInDss ) ",
" offset_ += bytesRead;"
]
},
{
"added": [
"",
" buffer.putShort((short) 0xFFFF);"
],
"header": "@@ -454,9 +454,8 @@ public class Request {",
"removed": [
" ",
" bytes_[offset_++] = (byte) (0xff);",
" bytes_[offset_++] = (byte) (0xff);"
]
},
{
"added": [
" leftToRead + buffer.position()) > DssConstants.MAX_DSS_LEN) {"
],
"header": "@@ -538,7 +537,7 @@ public class Request {",
"removed": [
" leftToRead + offset_) > DssConstants.MAX_DSS_LEN) {"
]
},
{
"added": [
" if ((Math.min(2 + leftToRead, 32767)) > buffer.remaining()) {",
" dssLengthLocation_ = buffer.position();",
" buffer.putShort((short) 0xFFFF);"
],
"header": "@@ -599,16 +598,15 @@ public class Request {",
"removed": [
" if ((Math.min(2 + leftToRead, 32767)) > (bytes_.length - offset_)) {",
" dssLengthLocation_ = offset_;",
" bytes_[offset_++] = (byte) (0xff);",
" bytes_[offset_++] = (byte) (0xff);"
]
},
{
"added": [
" dssLengthLocation_ = buffer.position();"
],
"header": "@@ -623,7 +621,7 @@ public class Request {",
"removed": [
" dssLengthLocation_ = offset_;"
]
},
{
"added": [
" buffer.put((byte) 0x0); // use 0x0 as the padding byte"
],
"header": "@@ -653,7 +651,7 @@ public class Request {",
"removed": [
" bytes_[offset_++] = (byte) (0x0); // use 0x0 as the padding byte"
]
},
{
"added": [
" buffer.put((byte) (length >>> shiftSize));"
],
"header": "@@ -670,7 +668,7 @@ public class Request {",
"removed": [
" bytes_[offset_++] = (byte) ((length >>> shiftSize) & 0xff);"
]
},
{
"added": [
" int pos = dssLengthLocation_ + 3;",
" byte value = buffer.get(pos);",
" value |= 0x40;",
" value |= 0x10;",
" buffer.put(pos, value);",
" return buffer.position() != 0;"
],
"header": "@@ -682,17 +680,20 @@ public class Request {",
"removed": [
" bytes_[dssLengthLocation_ + 3] |= 0x40;",
" bytes_[dssLengthLocation_ + 3] |= 0x10;",
" return offset_ != 0;"
]
},
{
"added": [
" int totalSize = buffer.position() - dssLengthLocation_;"
],
"header": "@@ -714,7 +715,7 @@ public class Request {",
"removed": [
" int totalSize = offset_ - dssLengthLocation_;"
]
},
{
"added": [
" int dataByte = buffer.position() - 1;",
" ensureLength(shiftOffset);",
" buffer.position(buffer.position() + shiftOffset);"
],
"header": "@@ -739,10 +740,10 @@ public class Request {",
"removed": [
" int dataByte = offset_ - 1;",
" ensureLength(offset_ + shiftOffset);",
" offset_ += shiftOffset;"
]
},
{
"added": [
" byte[] array = buffer.array();",
" System.arraycopy(array, dataByte + 1,",
" array, dataByte + shiftOffset + 1, dataToShift);"
],
"header": "@@ -756,7 +757,9 @@ public class Request {",
"removed": [
" System.arraycopy(bytes_, dataByte + 1,bytes_, dataByte + shiftOffset + 1, dataToShift);"
]
},
{
"added": [
" buffer.putShort(dataByte + shiftOffset - 1,",
" (short) twoByteContDssHeader);"
],
"header": "@@ -771,8 +774,8 @@ public class Request {",
"removed": [
" bytes_[dataByte + shiftOffset - 1] = (byte) ((twoByteContDssHeader >>> 8) & 0xff);",
" bytes_[dataByte + shiftOffset] = (byte) (twoByteContDssHeader & 0xff);"
]
},
{
"added": [
" buffer.putShort(dssLengthLocation_, (short) totalSize);"
],
"header": "@@ -788,8 +791,7 @@ public class Request {",
"removed": [
" bytes_[dssLengthLocation_] = (byte) ((totalSize >>> 8) & 0xff);",
" bytes_[dssLengthLocation_ + 1] = (byte) (totalSize & 0xff);"
]
},
{
"added": [
" ensureLength(4);",
" buffer.position(buffer.position() + 2);",
" buffer.putShort((short) codePoint);",
" markStack_[top_++] = buffer.position();"
],
"header": "@@ -799,21 +801,20 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 4);",
" offset_ += 2;",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" markStack_[top_++] = offset_;"
]
},
{
"added": [
" int length = buffer.position() - lengthLocation;"
],
"header": "@@ -839,7 +840,7 @@ public class Request {",
"removed": [
" int length = offset_ - lengthLocation;"
]
},
{
"added": [
" ensureLength(extendedLengthByteCount);"
],
"header": "@@ -848,7 +849,7 @@ public class Request {",
"removed": [
" ensureLength(offset_ + extendedLengthByteCount);"
]
},
{
"added": [
" byte[] array = buffer.array();",
" System.arraycopy(array,",
" array,",
" buffer.put(extendedLengthLocation++,",
" (byte) (extendedLength >>> shiftSize));",
" buffer.position(buffer.position() + extendedLengthByteCount);"
],
"header": "@@ -856,20 +857,22 @@ public class Request {",
"removed": [
" System.arraycopy(bytes_,",
" bytes_,",
" bytes_[extendedLengthLocation++] = (byte) ((extendedLength >>> shiftSize) & 0xff);",
" offset_ += extendedLengthByteCount;"
]
},
{
"added": [
" buffer.putShort(lengthLocation, (short) length);"
],
"header": "@@ -880,8 +883,7 @@ public class Request {",
"removed": [
" bytes_[lengthLocation] = (byte) ((length >>> 8) & 0xff);",
" bytes_[lengthLocation + 1] = (byte) (length & 0xff);"
]
},
{
"added": [
" ensureLength(length);",
" buffer.put(padByte);",
" writeByte((byte) value);"
],
"header": "@@ -905,16 +907,15 @@ public class Request {",
"removed": [
" ensureLength(offset_ + length);",
" bytes_[offset_++] = padByte;",
" ensureLength(offset_ + 1);",
" bytes_[offset_++] = (byte) (value & 0xff);"
]
},
{
"added": [
" ensureLength(3);",
" buffer.put((byte) tripletLength);",
" buffer.put((byte) tripletType);",
" buffer.put((byte) tripletId);",
" ensureLength(count * 3);",
" buffer.put((byte) lidAndLengthOverrides[offset][0]);",
" buffer.putShort((short) lidAndLengthOverrides[offset][1]);"
],
"header": "@@ -922,18 +923,17 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 3);",
" bytes_[offset_++] = (byte) (tripletLength & 0xff);",
" bytes_[offset_++] = (byte) (tripletType & 0xff);",
" bytes_[offset_++] = (byte) (tripletId & 0xff);",
" ensureLength(offset_ + (count * 3));",
" bytes_[offset_++] = (byte) (lidAndLengthOverrides[offset][0] & 0xff);",
" bytes_[offset_++] = (byte) ((lidAndLengthOverrides[offset][1] >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (lidAndLengthOverrides[offset][1] & 0xff);"
]
},
{
"added": [
" ensureLength(count * 3);"
],
"header": "@@ -952,7 +952,7 @@ public class Request {",
"removed": [
" ensureLength(offset_ + (count * 3));"
]
},
{
"added": [
" buffer.put((byte) overrideLid);",
" buffer.putShort((short) lidAndLengthOverrides[offset][1]);"
],
"header": "@@ -961,9 +961,8 @@ public class Request {",
"removed": [
" bytes_[offset_++] = (byte) (overrideLid & 0xff);",
" bytes_[offset_++] = (byte) ((lidAndLengthOverrides[offset][1] >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (lidAndLengthOverrides[offset][1] & 0xff);"
]
},
{
"added": [
" writeShort((short) value);",
" writeInt((int) value);",
" ensureLength(length);",
" buffer.put(buf, 0, length);",
" writeBytes(buf, buf.length);",
" ensureLength(4);",
" buffer.putShort((short) codePoint);",
" buffer.putShort((short) value);",
" ensureLength(5);",
" buffer.put((byte) 0x00);",
" buffer.put((byte) 0x05);",
" buffer.putShort((short) codePoint);",
" buffer.put((byte) value);",
" ensureLength(6);",
" buffer.put((byte) 0x00);",
" buffer.put((byte) 0x06);",
" buffer.putShort((short) codePoint);",
" buffer.putShort((short) value);",
" ensureLength(8);",
" buffer.put((byte) 0x00);",
" buffer.put((byte) 0x08);",
" buffer.putShort((short) codePoint);",
" buffer.putInt((int) value);",
" ensureLength(12);",
" buffer.put((byte) 0x00);",
" buffer.put((byte) 0x0C);",
" buffer.putShort((short) codePoint);",
" buffer.putLong(value);"
],
"header": "@@ -972,97 +971,71 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 2);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + 4);",
" bytes_[offset_++] = (byte) ((value >>> 24) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 16) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + length);",
" System.arraycopy(buf, 0, bytes_, offset_, length);",
" offset_ += length;",
" ensureLength(offset_ + buf.length);",
" System.arraycopy(buf, 0, bytes_, offset_, buf.length);",
" offset_ += buf.length;",
" ensureLength(offset_ + 4);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + 5);",
" bytes_[offset_++] = 0x00;",
" bytes_[offset_++] = 0x05;",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + 6);",
" bytes_[offset_++] = 0x00;",
" bytes_[offset_++] = 0x06;",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + 8);",
" bytes_[offset_++] = 0x00;",
" bytes_[offset_++] = 0x08;",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 24) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 16) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);",
" ensureLength(offset_ + 12);",
" bytes_[offset_++] = 0x00;",
" bytes_[offset_++] = 0x0C;",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 56) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 48) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 40) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 32) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 24) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 16) & 0xff);",
" bytes_[offset_++] = (byte) ((value >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (value & 0xff);"
]
},
{
"added": [
" ensureLength(4);",
" buffer.putShort((short) length);",
" buffer.putShort((short) codePoint);"
],
"header": "@@ -1071,11 +1044,9 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 4);",
" bytes_[offset_++] = (byte) ((length >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (length & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);"
]
},
{
"added": [
" writeScalarBytes(codePoint, buf, 0, length);"
],
"header": "@@ -1086,14 +1057,7 @@ public class Request {",
"removed": [
" ensureLength(offset_ + length + 4);",
" bytes_[offset_++] = (byte) (((length + 4) >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) ((length + 4) & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" for (int i = 0; i < length; i++) {",
" bytes_[offset_++] = buf[i];",
" }"
]
},
{
"added": [
" writeLengthCodePoint(dataLength + 4, codePoint);",
" ensureLength(dataLength);"
],
"header": "@@ -1101,11 +1065,8 @@ public class Request {",
"removed": [
" ensureLength(offset_ + dataLength + 4);",
" bytes_[offset_++] = (byte) (((dataLength + 4) >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) ((dataLength + 4) & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);"
]
},
{
"added": [
" int stringByteLength = currentCcsidMgr.getByteLength(string);",
" throw new SqlException(netAgent_.logWriter_,",
"",
" writeScalarHeader(codePoint, Math.max(byteMinLength, stringByteLength));",
"",
" buffer.position(",
" currentCcsidMgr.convertFromJavaString(",
" string, buffer.array(), buffer.position(), netAgent_));",
"",
" padBytes(currentCcsidMgr.space_, byteMinLength - stringByteLength);"
],
"header": "@@ -1144,38 +1105,23 @@ public class Request {",
"removed": [
" ",
" int maxByteLength = currentCcsidMgr.getByteLength(string);",
" ensureLength(offset_ + maxByteLength + 4);",
" // Skip length for now until we know actual length",
" int lengthOffset = offset_;",
" offset_ += 2;",
" ",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" ",
" offset_ = currentCcsidMgr.convertFromJavaString(string, bytes_, offset_, netAgent_);",
" ",
" int stringByteLength = offset_ - lengthOffset - 4;",
" // reset the buffer and throw an SQLException if the length is too long",
" offset_ = lengthOffset;",
" throw new SqlException(netAgent_.logWriter_, ",
" for (int i = stringByteLength ; i < byteMinLength; i++) {",
" bytes_[offset_++] = currentCcsidMgr.space_;",
" }",
" stringByteLength = byteMinLength;",
" // now write the length. We have the string byte length plus",
" // 4 bytes, 2 for length and 2 for codepoint.",
" int totalLength = stringByteLength + 4;",
" bytes_[lengthOffset] = (byte) ((totalLength >>> 8) & 0xff);",
" bytes_[lengthOffset + 1] = (byte) ((totalLength) & 0xff);"
]
},
{
"added": [
" ensureLength(paddedLength);",
" buffer.position(currentCcsidMgr.convertFromJavaString(",
" string, buffer.array(), buffer.position(), netAgent_));",
"",
" padBytes(currentCcsidMgr.space_, paddedLength - stringLength);"
],
"header": "@@ -1191,17 +1137,17 @@ public class Request {",
"removed": [
" ensureLength(offset_ + paddedLength);",
" offset_ = currentCcsidMgr.convertFromJavaString(string, bytes_, offset_, netAgent_);",
" for (int i = 0; i < paddedLength - stringLength; i++) {",
" bytes_[offset_++] = currentCcsidMgr.space_;",
" }"
]
},
{
"added": [
" writeScalarBytes(codePoint, buff, 0, buff.length);"
],
"header": "@@ -1210,14 +1156,7 @@ public class Request {",
"removed": [
" int buffLength = buff.length;",
" ensureLength(offset_ + buffLength + 4);",
" bytes_[offset_++] = (byte) (((buffLength + 4) >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) ((buffLength + 4) & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" System.arraycopy(buff, 0, bytes_, offset_, buffLength);",
" offset_ += buffLength;"
]
},
{
"added": [
" writeLengthCodePoint(length + 4, codePoint);",
" ensureLength(length);",
" buffer.put(buff, start, length);"
],
"header": "@@ -1228,13 +1167,9 @@ public class Request {",
"removed": [
" ensureLength(offset_ + length + 4);",
" bytes_[offset_++] = (byte) (((length + 4) >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) ((length + 4) & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" System.arraycopy(buff, start, bytes_, offset_, length);",
" offset_ += length;"
]
},
{
"added": [
" writeLengthCodePoint(paddedLength + 4, codePoint);",
" writeScalarPaddedBytes(buff, paddedLength, padByte);"
],
"header": "@@ -1246,18 +1181,8 @@ public class Request {",
"removed": [
" int buffLength = buff.length;",
" ensureLength(offset_ + paddedLength + 4);",
" bytes_[offset_++] = (byte) (((paddedLength + 4) >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) ((paddedLength + 4) & 0xff);",
" bytes_[offset_++] = (byte) ((codePoint >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (codePoint & 0xff);",
" System.arraycopy(buff, 0, bytes_, offset_, buffLength);",
" offset_ += buffLength;",
"",
" for (int i = 0; i < paddedLength - buffLength; i++) {",
" bytes_[offset_++] = padByte;",
" }"
]
},
{
"added": [
" writeBytes(buff);",
" padBytes(padByte, paddedLength - buff.length);"
],
"header": "@@ -1265,14 +1190,8 @@ public class Request {",
"removed": [
" int buffLength = buff.length;",
" ensureLength(offset_ + paddedLength);",
" System.arraycopy(buff, 0, bytes_, offset_, buffLength);",
" offset_ += buffLength;",
"",
" for (int i = 0; i < paddedLength - buffLength; i++) {",
" bytes_[offset_++] = padByte;",
" }"
]
},
{
"added": [
" socketOutputStream.write(buffer.array(), 0, buffer.position());"
],
"header": "@@ -1286,7 +1205,7 @@ public class Request {",
"removed": [
" socketOutputStream.write(bytes_, 0, offset_);"
]
},
{
"added": [
" ((NetLogWriter) netAgent_.logWriter_).traceProtocolFlow(",
" buffer.array(),",
" buffer.position(),"
],
"header": "@@ -1295,9 +1214,10 @@ public class Request {",
"removed": [
" ((NetLogWriter) netAgent_.logWriter_).traceProtocolFlow(bytes_,",
" offset_,"
]
},
{
"added": [
" netAgent_.getCurrentCcsidManager().convertFromJavaString(",
" mask.toString(), buffer.array(), passwordStart_, netAgent_);",
" buffer.put(passwordStart_ + i, (byte) 0xFF);",
" ensureLength(1);",
" buffer.put(v);",
" ensureLength(2);",
" buffer.putShort(v);",
" ensureLength(4);",
" buffer.putInt(v);"
],
"header": "@@ -1316,35 +1236,33 @@ public class Request {",
"removed": [
" netAgent_.getCurrentCcsidManager()",
" .convertFromJavaString(mask.toString(), bytes_, passwordStart_, netAgent_);",
" bytes_[passwordStart_ + i] = (byte) 0xFF;",
" ensureLength(offset_ + 1);",
" bytes_[offset_++] = v;",
" ensureLength(offset_ + 2);",
" org.apache.derby.client.am.SignedBinary.shortToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 2;",
" ensureLength(offset_ + 4);",
" org.apache.derby.client.am.SignedBinary.intToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 4;"
]
},
{
"added": [
" ensureLength(6);",
" buffer.putShort((short) (v >> 32));",
" buffer.putInt((int) v);",
" ensureLength(8);",
" buffer.putLong(v);",
" writeShort(v);",
" writeInt(v);",
" writeLong(v);",
" writeInt(Float.floatToIntBits(v));",
" writeLong(Double.doubleToLongBits(v));",
" ensureLength(16);",
" int length = Decimal.bigDecimalToPackedDecimalBytes(",
" buffer.array(), buffer.position(),",
" v, declaredPrecision, declaredScale);",
" buffer.position(buffer.position() + length);",
" ensureLength(10);",
" DateTime.dateToDateBytes(buffer.array(), buffer.position(), date);",
" buffer.position(buffer.position() + 10);"
],
"header": "@@ -1355,71 +1273,61 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 6);",
" org.apache.derby.client.am.SignedBinary.long6BytesToBigEndianBytes(",
" bytes_, offset_, v);",
" offset_ += 6;",
" ensureLength(offset_ + 8);",
" org.apache.derby.client.am.SignedBinary.longToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 8;",
" ensureLength(offset_ + 2);",
" org.apache.derby.client.am.SignedBinary.shortToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 2;",
" ensureLength(offset_ + 4);",
" org.apache.derby.client.am.SignedBinary.intToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 4;",
" ensureLength(offset_ + 8);",
" org.apache.derby.client.am.SignedBinary.longToBigEndianBytes(bytes_, offset_, v);",
" offset_ += 8;",
" ensureLength(offset_ + 4);",
" org.apache.derby.client.am.FloatingPoint.floatToIeee754Bytes(bytes_, offset_, v);",
" offset_ += 4;",
" ensureLength(offset_ + 8);",
" org.apache.derby.client.am.FloatingPoint.doubleToIeee754Bytes(bytes_, offset_, v);",
" offset_ += 8;",
" ensureLength(offset_ + 16);",
" int length = org.apache.derby.client.am.Decimal.bigDecimalToPackedDecimalBytes(bytes_, offset_, v, declaredPrecision, declaredScale);",
" offset_ += length;",
" ensureLength(offset_ + 10);",
" org.apache.derby.client.am.DateTime.dateToDateBytes(bytes_, offset_, date);",
" offset_ += 10;"
]
},
{
"added": [
" ensureLength(8);",
" DateTime.timeToTimeBytes(buffer.array(), buffer.position(), time);",
" buffer.position(buffer.position() + 8);"
],
"header": "@@ -1429,9 +1337,9 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 8);",
" org.apache.derby.client.am.DateTime.timeToTimeBytes(bytes_, offset_, time);",
" offset_ += 8;"
]
},
{
"added": [
" ensureLength(length);",
" DateTime.timestampToTimestampBytes(",
" buffer.array(), buffer.position(),",
" timestamp, supportsTimestampNanoseconds);",
" buffer.position(buffer.position() + length);"
],
"header": "@@ -1443,10 +1351,11 @@ public class Request {",
"removed": [
" ensureLength( offset_ + length );",
" org.apache.derby.client.am.DateTime.timestampToTimestampBytes",
" ( bytes_, offset_, timestamp, supportsTimestampNanoseconds );",
" offset_ += length;"
]
},
{
"added": [
" write1Byte(v ? 1 : 0);"
],
"header": "@@ -1457,8 +1366,7 @@ public class Request {",
"removed": [
" ensureLength(offset_ + 1);",
" bytes_[offset_++] = (byte) ((v ? 1 : 0) & 0xff);"
]
},
{
"added": [
" writeLDBytes(b);"
],
"header": "@@ -1480,13 +1388,11 @@ public class Request {",
"removed": [
" ensureLength(offset_ + b.length + 2);",
" writeLDBytesX(b.length, b);",
" ensureLength(offset_ + bytes.length + 2);"
]
},
{
"added": [
" writeShort((short) ldSize);",
" writeBytes(bytes, bytesToCopy);"
],
"header": "@@ -1502,10 +1408,8 @@ public class Request {",
"removed": [
" bytes_[offset_++] = (byte) ((ldSize >>> 8) & 0xff);",
" bytes_[offset_++] = (byte) (ldSize & 0xff);",
" System.arraycopy( bytes, 0, bytes_, offset_, bytesToCopy );",
" offset_ += bytesToCopy;"
]
},
{
"added": [],
"header": "@@ -1546,7 +1450,6 @@ public class Request {",
"removed": [
" ensureLength( offset_ + length + 2 );"
]
},
{
"added": [
" ensureLength(currentCcsidManager.getByteLength(s));",
" buffer.position(currentCcsidManager.convertFromJavaString(",
" s, buffer.array(), buffer.position(), netAgent_));"
],
"header": "@@ -1556,9 +1459,10 @@ public class Request {",
"removed": [
" ensureLength(offset_ + currentCcsidManager.getByteLength(s));",
" offset_ = currentCcsidManager.convertFromJavaString(s, bytes_, offset_, netAgent_);"
]
},
{
"added": [
"",
" if (buffer.remaining() == 0) {",
" buffer.put(flag);"
],
"header": "@@ -1614,10 +1518,11 @@ public class Request {",
"removed": [
" if (offset_ == bytes_.length) {",
" bytes_[offset_++] = flag;"
]
}
]
}
] |
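The hunks above all follow one pattern: hand-rolled big-endian packing into a bytes_ array with a manually advanced offset_ is replaced by java.nio.ByteBuffer, which is big-endian by default and advances its own position. A minimal standalone sketch of the two styles (the method names and 4-byte layout here are illustrative, not the actual Request internals):

    import java.nio.ByteBuffer;

    public class LengthCodePointDemo {
        // Old style: pack a 2-byte length and a 2-byte code point by hand.
        static int writeManually(byte[] bytes, int offset, int length, int codePoint) {
            bytes[offset++] = (byte) ((length >>> 8) & 0xff);
            bytes[offset++] = (byte) (length & 0xff);
            bytes[offset++] = (byte) ((codePoint >>> 8) & 0xff);
            bytes[offset++] = (byte) (codePoint & 0xff);
            return offset;
        }

        // New style: ByteBuffer is big-endian by default and tracks the position.
        static void writeWithBuffer(ByteBuffer buffer, int length, int codePoint) {
            buffer.putShort((short) length);
            buffer.putShort((short) codePoint);
        }

        public static void main(String[] args) {
            byte[] manual = new byte[4];
            writeManually(manual, 0, 10, 0x2414);
            ByteBuffer nio = ByteBuffer.allocate(4);
            writeWithBuffer(nio, 10, 0x2414);
            System.out.println(java.util.Arrays.equals(manual, nio.array())); // true
        }
    }

Besides being shorter, the buffer form keeps the byte order in the buffer itself and leaves bounds handling to one ensureLength() call.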
derby-DERBY-5213-898c8da7
|
DERBY-5213; Write tests to verify the interaction of TRUNCATE TABLE and online backup
- add a third fixture, which tests an uncommitted truncate table followed by freeze/copy/unfreeze
- remove the decorateSQL method and move the creation and population of the truncatable table to setUp(), dropping the table in tearDown()
- reinstate the test in lang._Suite
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1383677 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5222-c4521269
|
DERBY-5222: Compatibility tests fail to delete database directory
Wait for all forked processes to complete before testing the next combination.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1103742 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
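A minimal sketch of the wait-for-forked-processes idea, assuming the test driver keeps java.lang.Process handles for the JVMs it spawns (the class and method names here are hypothetical, not the actual compatibility-test code):

    import java.util.List;

    class ForkedJvmReaper {
        // Block until every child JVM has exited, so its database files are
        // closed before the next combination reuses or deletes the directory.
        static void awaitAll(List<Process> forked) throws InterruptedException {
            for (Process p : forked) {
                p.waitFor();
            }
        }
    }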
derby-DERBY-5222-e411e977
|
DERBY-5222: Compatibility tests fail to delete database directory
Use BaseTestCase.removeDirectory() to delete the database directory so
that we get more details in the log when something goes wrong.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1101059 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5223-a00a0313
|
DERBY-5223 Thread's interrupted flag not always preserved after Derby returns from JDBC API call
Patch DERBY-5223b: This fix moves the initialization of the variable
"interruptedException" earlier in
GenericLanguageConnectionContext#initialize and adds a missing
reinitialization to resetFromPool as a precaution (if
interruptedException is still non-null the connection should have
thrown 08000).
The patch also changes the InterruptResilienceTest so that JUnit
asserts in the worker threads will get propagated to the main test
thread on completion, so any future errors in these invariants do not
get overlooked.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1102826 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/conn/GenericLanguageConnectionContext.java",
"hunks": [
{
"added": [
" interruptedException = null;"
],
"header": "@@ -377,6 +377,7 @@ public class GenericLanguageConnectionContext",
"removed": []
},
{
"added": [],
"header": "@@ -397,9 +398,7 @@ public class GenericLanguageConnectionContext",
"removed": [
"",
" interruptedException = null;"
]
}
]
}
] |
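A minimal sketch of the invariant the patch restores, with a hypothetical class standing in for GenericLanguageConnectionContext:

    class ConnContextSketch {
        private Exception interruptedException;

        void initialize() {
            // Reset first, before any later step can fail and leave the
            // object half-initialized with a stale value.
            interruptedException = null;
            // ... rest of initialization ...
        }

        void resetFromPool() {
            // Precaution: if this is still non-null, the connection should
            // already have thrown SQLState 08000.
            interruptedException = null;
            initialize();
        }
    }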
derby-DERBY-5224-7af858da
|
DERBY-5224: reduce coupling by removing overzealous casting
Contributed by Dave Brosius <dbrosius@apache.org>.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1102679 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockTableVTI.java",
"hunks": [
{
"added": [
"\t\t\t\tlock = (Latch) grantedList.next();"
],
"header": "@@ -100,10 +100,9 @@ class LockTableVTI implements Enumeration",
"removed": [
"//System.out.println(\"next lock \");",
"\t\t\t\tlock = (Lock) grantedList.next();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UnionNode.java",
"hunks": [
{
"added": [
"\t\t\t\t(ResultSetNode) getNodeFactory().getNode("
],
"header": "@@ -445,7 +445,7 @@ public class UnionNode extends SetOperatorNode",
"removed": [
"\t\t\t\t(NormalizeResultSetNode) getNodeFactory().getNode("
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java",
"hunks": [
{
"added": [
"\t\t\trowLocationNode = (ValueNode) getNodeFactory().getNode(",
"\t\t\trowLocationNode = (ValueNode) getNodeFactory().getNode("
],
"header": "@@ -498,13 +498,13 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": [
"\t\t\trowLocationNode = (CurrentRowLocationNode) getNodeFactory().getNode(",
"\t\t\trowLocationNode = (NumericConstantNode) getNodeFactory().getNode("
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2IFactory.java",
"hunks": [
{
"added": [
" btree = (Conglomerate) root.getConglom(B2I.FORMAT_NUMBER);"
],
"header": "@@ -284,7 +284,7 @@ public class B2IFactory implements ConglomerateFactory, ModuleControl",
"removed": [
" btree = (B2I) root.getConglom(B2I.FORMAT_NUMBER);"
]
}
]
}
] |
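Each hunk applies the same refactoring: cast a factory result to the type the call site actually uses (an interface or shared supertype) rather than to one concrete implementation. A minimal sketch with hypothetical types:

    interface ResultSetNode { }

    class NormalizeResultSetNode implements ResultSetNode { }

    class NodeFactory {
        Object getNode() { return new NormalizeResultSetNode(); }
    }

    class Caller {
        ResultSetNode build(NodeFactory factory) {
            // Before: (NormalizeResultSetNode) factory.getNode() tied this
            // call site to one concrete subclass.
            // After: name only the type the caller actually uses.
            return (ResultSetNode) factory.getNode();
        }
    }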
derby-DERBY-5232-0dfad316
|
DERBY-5232 (Put a stern README file in log and seg0 directories to warn users of corruption they will cause if they touch files there)
Adding a test case for default database creation with checks for the existence of the 3 readme files. These readmes warn users against editing/deleting any of the files in the database directories. The locations of the readme files are
1)at the db level directory,
2)in seg0 directory and
3)in the log directory.
All three readme files are named README_DONT_TOUCH_FILES.txt
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1409100 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5232-1bb2c59c
|
DERBY-5232 (Put a stern README file in log and seg0 directories to warn users of corruption they will cause if they touch files there)
This commit:
a)fixes a typo in the readme file content.
b)uses OutputStreamWriter to handle the UTF-8 encoding of the readme files.
c)fixes the if condition used to create the readme files.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1407170 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [
"import java.io.OutputStream;"
],
"header": "@@ -75,7 +75,7 @@ import java.io.Serializable;",
"removed": [
""
]
},
{
"added": [
" private static final int README_FILE_OUTPUTSTREAM_WRITER_ACTION = 19;"
],
"header": "@@ -149,7 +149,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" private String activePerms;"
]
},
{
"added": [
" private synchronized OutputStreamWriter privGetOutputStreamWriter(StorageFile file)",
" actionCode = README_FILE_OUTPUTSTREAM_WRITER_ACTION;",
" return (OutputStreamWriter) java.security.AccessController.doPrivileged(this);"
],
"header": "@@ -2372,15 +2372,14 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" private synchronized StorageRandomAccessFile privRandomAccessFile(StorageFile file, String perms)",
" actionCode = REGULAR_FILE_EXISTS_ACTION;",
" activePerms = perms;",
" return (StorageRandomAccessFile) java.security.AccessController.doPrivileged(this);"
]
},
{
"added": [
" case README_FILE_OUTPUTSTREAM_WRITER_ACTION:",
" \treturn(new OutputStreamWriter(actionStorageFile.getOutputStream(),\"UTF8\"));"
],
"header": "@@ -2773,6 +2772,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java",
"hunks": [
{
"added": [
"import java.io.OutputStreamWriter;"
],
"header": "@@ -96,6 +96,7 @@ import org.apache.derby.iapi.util.InterruptDetectedException;",
"removed": []
},
{
"added": [
" if (!privExists(fileReadMe)) {",
" OutputStreamWriter osw = null;",
" osw = privGetOutputStreamWriter(fileReadMe);",
" osw.write(MessageService.getTextMessage("
],
"header": "@@ -2738,11 +2739,11 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": [
" if (privExists(fileReadMe)) {",
" StorageRandomAccessFile fileReadMeDB=null;",
" fileReadMeDB = privRandomAccessFile(fileReadMe, \"rw\");",
" fileReadMeDB.writeUTF(MessageService.getTextMessage("
]
},
{
"added": [
" if (osw != null)",
" osw.close();"
],
"header": "@@ -2750,11 +2751,11 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": [
" if (fileReadMeDB != null)",
" fileReadMeDB.close();"
]
},
{
"added": [
" private synchronized OutputStreamWriter privGetOutputStreamWriter(StorageFile file)",
" throws IOException",
" {",
" action = 10;",
" activeFile = file;",
" try",
" {",
" return (OutputStreamWriter) java.security.AccessController.doPrivileged(this);",
" }",
" catch (java.security.PrivilegedActionException pae)",
" {",
" throw (IOException) pae.getException();",
" }",
" }",
""
],
"header": "@@ -5688,6 +5689,21 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport",
"removed": []
}
]
}
] |
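The move from StorageRandomAccessFile.writeUTF to an OutputStreamWriter matters because writeUTF emits modified UTF-8 prefixed with a two-byte length, which is not a plain text file. A minimal sketch using plain java.io types (the real code goes through Derby's StorageFile abstraction and a privileged action):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class ReadmeWriterDemo {
        public static void main(String[] args) throws IOException {
            File f = new File("README_DONT_TOUCH_FILES.txt");
            // OutputStreamWriter produces plain UTF-8 text; writeUTF would
            // have prepended a 2-byte length and used modified UTF-8.
            Writer osw = new OutputStreamWriter(new FileOutputStream(f), "UTF8");
            try {
                osw.write("# *** Do not touch files in this directory! ***\n");
            } finally {
                osw.close();
            }
        }
    }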
derby-DERBY-5232-393f243f
|
DERBY-5232 (Put a stern README file in log and seg0 directories to warn users of corruption they will cause if they touch files there)
The earlier commit caused test failures because readme file creation was not done inside a privileged block. This commit fixes that problem; it also fixes a typo and removes a redundant file close, since the close already happens in the finally block.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1403807 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [
" private String activePerms;"
],
"header": "@@ -149,6 +149,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": []
},
{
"added": [
" private synchronized StorageRandomAccessFile privRandomAccessFile(StorageFile file, String perms)",
" throws IOException",
" {",
" actionCode = REGULAR_FILE_EXISTS_ACTION;",
" actionStorageFile = file;",
" activePerms = perms;",
" try",
" {",
" return (StorageRandomAccessFile) java.security.AccessController.doPrivileged(this);",
" }",
" catch (java.security.PrivilegedActionException pae)",
" {",
" throw (IOException) pae.getException();",
" }",
" }"
],
"header": "@@ -2371,6 +2372,21 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": []
}
]
}
] |
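The shape of privRandomAccessFile above, written as a standalone sketch using PrivilegedExceptionAction instead of Derby's action-code-on-this idiom (the class and method names are illustrative):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.security.AccessController;
    import java.security.PrivilegedActionException;
    import java.security.PrivilegedExceptionAction;

    class PrivFileOps {
        static RandomAccessFile openPrivileged(final File file, final String perms)
                throws IOException {
            try {
                return AccessController.doPrivileged(
                    new PrivilegedExceptionAction<RandomAccessFile>() {
                        public RandomAccessFile run() throws IOException {
                            return new RandomAccessFile(file, perms);
                        }
                    });
            } catch (PrivilegedActionException pae) {
                // Unwrap the checked exception thrown inside the action.
                throw (IOException) pae.getException();
            }
        }
    }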
derby-DERBY-5232-651c99e8
|
DERBY-5232 (Put a stern README file in log and seg0 directories to warn users of corruption they will cause if they touch files there)
This will create 3 readme (README_DONT_TOUCH_FILES.txt) files at database creation time: one at the database level, one in the "log" directory, and another in the "seg0" directory.
a)Content of the readme file at the database level is as follows
# *************************************************************************
# *** Do not touch files in this directory! ***
# *** FILES IN THIS DIRECTORY AND SUB-DIRECTORIES CONSTITUES DERBY ***
# *** DATABASE WHICH INCLUDES THE DATA(USER AND SYSTEM) AND THE ***
# *** NECESSARY FILES FOR DATABASE RECOVERY. ***
# *** EDITING, ADDITING OR DELETING ANY OF THESE FILES MAY CAUSE DATA ***
# *** CORRUPTION AND LEAVE THE DATABASE IN NON-RECOVERABLE STATE. ***
# *************************************************************************
b)Content of the readme file in the "seg0" directory is as follows
# *************************************************************************
# *** Do not touch files in this directory! ***
# *** FILES IN THIS DIRECTORY ARE USED BY THE DERBY DATABASE TO STORE ***
# *** USER AND SYSTEM DATA. EDITING, ADDING, OR DELETING FILES IN THIS ***
# *** DIRECTORY WILL CORRUPT THE ASSOCIATED DERBY DATABASE, AND WILL ***
# *** NOT BE RECOVERABLE. ***
# *************************************************************************
c)Content of the readme file in the "log" directory is as follows
# *************************************************************************
# *** Do not touch files in this directory! ***
# *** FILES IN THIS DIRECTORY ARE USED BY THE DERBY DATABASE RECOVERY ***
# *** SYSTEM. EDITING, ADDING OR DELETING FILES IN THIS DIRECTORY WILL ***
# *** CAUSE THE DERBY RECOVERY SYSTEM TO FAIL LEADING TO UNRECOVERABLE ***
# *** CORRUPT DATABASES. ***
# *************************************************************************
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1403611 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/monitor/PersistentService.java",
"hunks": [
{
"added": [
" /**",
" The readme file cautioning users against touching the files in",
" the database directory ",
" */",
" public static final String DB_README_FILE_NAME = \"README_DONT_TOUCH_FILES.txt\";"
],
"header": "@@ -82,6 +82,11 @@ public interface PersistentService {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/monitor/StorageFactoryService.java",
"hunks": [
{
"added": [
"import org.apache.derby.io.StorageRandomAccessFile;"
],
"header": "@@ -35,6 +35,7 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [
"import org.apache.derby.io.StorageRandomAccessFile;"
],
"header": "@@ -54,6 +54,7 @@ import org.apache.derby.iapi.store.raw.log.LogInstant;",
"removed": []
}
]
},
{
"file": "java/storeless/org/apache/derby/impl/storeless/StorelessService.java",
"hunks": [
{
"added": [
"",
" /** @see PersistentService#createDataWarningFile */",
" public void createDataWarningFile(StorageFactory sf) ",
" throws StandardException {",
" // Auto-generated method stub",
" }"
],
"header": "@@ -109,4 +109,10 @@ public class StorelessService implements PersistentService {",
"removed": []
}
]
}
] |
derby-DERBY-5232-ce0c0aed
|
DERBY-5232 (Put a stern README file in log and seg0 directories to warn users of corruption they will cause if they touch files there)
Remove the redundant file length check on the readme files. It is sufficient to check that they exist. Additionally, assert the value returned by the file-exists method.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1409254 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5233-6e61723e
|
DERBY-5233 Interrupt of create table or index (i.e. a container) will throw XSDF1 under NIO - connection survives
Patch DERBY-5233-2: The patch makes RAFContainer, when seeing an
interrupt exception during container creation, close it down and try
again, up to MAX_INTERRUPT_RETRIES times. Since RAFContainer should
work also under CDC/Foundation 1.1, the exceptions are checked using
reflection (NIO classes are excluded there).
The patch also adds a new test case to InterruptResilienceTest:
testCreateDropInterrupted.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1129764 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
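A minimal sketch of the retry shape described above, catching java.nio.channels.ClosedByInterruptException directly (the real RAFContainer detects it via reflection so the class still loads on CDC/Foundation 1.1; the retry limit and names here are illustrative):

    import java.io.IOException;
    import java.nio.channels.ClosedByInterruptException;

    class CreateRetrySketch {
        static final int MAX_INTERRUPT_RETRIES = 120; // illustrative limit

        interface Container { void create() throws IOException; }

        static void createWithRetry(Container c) throws IOException {
            for (int retries = 0; ; retries++) {
                try {
                    c.create();
                    return;
                } catch (ClosedByInterruptException e) {
                    // An interrupt closed the channel mid-creation: give up
                    // after enough attempts, otherwise clear the flag so the
                    // next attempt's I/O can proceed (the real code notes the
                    // interrupt and re-asserts it later).
                    if (retries >= MAX_INTERRUPT_RETRIES) {
                        throw e;
                    }
                    Thread.interrupted();
                }
            }
        }
    }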
derby-DERBY-5234-355ebf60
|
DERBY-5234: Add edge case tests for compression-related data corruption.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1335677 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5236-52660564
|
DERBY-5236: Client driver silently truncates strings that exceed 32KB
Test case for the bug. Currently disabled.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1104365 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5236-7aa76512
|
DERBY-5236: Client driver silently truncates strings that exceed 32KB
Prepare the network client for the possibility that a column is split
over more than two blocks in the response. This cannot happen until a
fix that makes the server stop sending truncated strings has been
checked in.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1163131 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetCursor.java",
"hunks": [
{
"added": [
" checkForSplitRowAndComplete(4);"
],
"header": "@@ -413,18 +413,7 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" if ((position_ + 4) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
""
]
},
{
"added": [
" checkForSplitRowAndComplete(1);",
" checkForSplitRowAndComplete(1, index);"
],
"header": "@@ -433,38 +422,14 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if (position_ == lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if (position_ == lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow(index);",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }"
]
},
{
"added": [
" checkForSplitRowAndComplete(length);",
" byte[] b = new byte[length];",
" System.arraycopy(dataBuffer_, position_, b, 0, length);",
" position_ += length;"
],
"header": "@@ -473,26 +438,11 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" byte[] b = new byte[length];",
" ;",
"",
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + length) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
"",
" for (int i = 0; i < length; i++) {",
" b[i] = dataBuffer_[position_++];",
" }"
]
},
{
"added": [
" checkForSplitRowAndComplete(2);",
" checkForSplitRowAndComplete(2, index);"
],
"header": "@@ -501,40 +451,14 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + 2) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
"",
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + 2) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow(index);",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
""
]
},
{
"added": [
" checkForSplitRowAndComplete(length);",
" checkForSplitRowAndComplete(length, index);"
],
"header": "@@ -545,38 +469,13 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + length) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + length) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow(index);",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
""
]
},
{
"added": [
" /**",
" * Adjust column offsets after fetching the next part of a split row.",
" * @param index the index of the column that was split, or -1 when not",
" * fetching column data",
" */"
],
"header": "@@ -603,6 +502,11 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": []
},
{
"added": [
" checkForSplitRowAndComplete(length);"
],
"header": "@@ -971,19 +875,7 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" // For singleton select, the complete row always comes back, even if multiple query blocks are required,",
" // so there is no need to drive a flowFetch (continue query) request for singleton select.",
" if ((position_ + length) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow();",
"",
" // if lastValidBytePosition_ has not changed, and an ENDQRYRM was received,",
" // throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }"
]
},
{
"added": [
" /**",
" * Check if the data we want crosses a row split, and fetch more data",
" * if necessary.",
" *",
" * @param length the length in bytes of the data needed",
" * @param index the index of the column to be fetched, or -1 when not",
" * fetching column data",
" */",
" private void checkForSplitRowAndComplete(int length, int index)",
" throws SqlException {",
" // For singleton select, the complete row always comes back, even if",
" // multiple query blocks are required, so there is no need to drive a",
" // flowFetch (continue query) request for singleton select.",
" while ((position_ + length) > lastValidBytePosition_) {",
" // Check for ENDQRYRM, throw SqlException if already received one.",
" checkAndThrowReceivedEndqryrm();",
"",
" // Send CNTQRY to complete the row/rowset.",
" int lastValidByteBeforeFetch = completeSplitRow(index);",
"",
" // If lastValidBytePosition_ has not changed, and an ENDQRYRM was",
" // received, throw a SqlException for the ENDQRYRM.",
" checkAndThrowReceivedEndqryrm(lastValidByteBeforeFetch);",
" }",
" }",
"",
" /**",
" * Check if the data we want crosses a row split, and fetch more data",
" * if necessary. This method is not for column data; use",
" * {@link #checkForSplitRowAndComplete(int, int)} for that.",
" *",
" * @param length the length in bytes of the data needed",
" */",
" private void checkForSplitRowAndComplete(int length) throws SqlException {",
" checkForSplitRowAndComplete(length, -1);",
" }",
""
],
"header": "@@ -1239,6 +1131,43 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": []
},
{
"added": [
" /**",
" * Fetch more data for a row that has been split up.",
" *",
" * @param index the index of the column that was split, or -1 when not",
" * fetching column data",
" * @return the value of {@code lastValidBytePosition_} before more data",
" * was fetched",
" */"
],
"header": "@@ -1274,21 +1203,14 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {",
"removed": [
" private int completeSplitRow() throws DisconnectException, SqlException {",
" int lastValidBytePositionBeforeFetch = 0;",
" if (netResultSet_ != null && netResultSet_.scrollable_) {",
" lastValidBytePositionBeforeFetch = lastValidBytePosition_;",
" netResultSet_.flowFetchToCompleteRowset();",
" } else {",
" // Shift partial row to the beginning of the dataBuffer",
" shiftPartialRowToBeginning();",
" resetCurrentRowPosition();",
" lastValidBytePositionBeforeFetch = lastValidBytePosition_;",
" netResultSet_.flowFetch();",
" }",
" return lastValidBytePositionBeforeFetch;",
" }",
""
]
}
]
}
] |
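The key change in the refactoring above is not just folding the duplicated blocks into checkForSplitRowAndComplete(), but turning the if into a while, so a value split over more than two query blocks keeps driving continue-query requests. A minimal sketch of the control shape (abstract stand-ins, not the real NetCursor):

    abstract class CursorSketch {
        int position_;
        int lastValidBytePosition_;

        abstract void throwIfEndOfQueryAlreadyReceived() throws Exception;
        abstract void fetchMore(int columnIndex) throws Exception;
        abstract void throwIfNoProgress(int before) throws Exception;

        void checkForSplitRowAndComplete(int length, int index) throws Exception {
            // 'while', not 'if': one continue-query round may still not
            // deliver all 'length' bytes when a value spans more than two
            // query blocks.
            while (position_ + length > lastValidBytePosition_) {
                throwIfEndOfQueryAlreadyReceived();
                int before = lastValidBytePosition_;
                fetchMore(index);
                throwIfNoProgress(before);
            }
        }
    }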
derby-DERBY-5236-9b3e218c
|
DERBY-5236: Client driver silently truncates strings that exceed 32KB
Disable the fix when talking to old clients because they may get a
StringIndexOutOfBoundsException if they receive longer strings, and
they also don't know exactly how to handle java.sql.DataTruncation
warnings.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1167470 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [
" // Find out how long strings the client supports, and possibly",
" // truncate the string before sending it.",
"",
" int maxByteLength = MAX_VARCHAR_BYTE_LENGTH;",
" boolean warnOnTruncation = true;",
"",
" AppRequester appRequester = agent.getSession().appRequester;",
" if (appRequester != null && !appRequester.supportsLongerLDStrings()) {",
" // The client suffers from DERBY-5236, and it doesn't support",
" // receiving as long strings as newer clients do. It also doesn't",
" // know exactly what to do with a DataTruncation warning, so skip",
" // sending it to old clients.",
" maxByteLength = FdocaConstants.LONGVARCHAR_MAX_LEN;",
" warnOnTruncation = false;",
" }",
"",
" if (byteLength > maxByteLength) {",
" byteLength = maxByteLength;"
],
"header": "@@ -1245,13 +1245,29 @@ class DDMWriter",
"removed": [
" if (byteLength > MAX_VARCHAR_BYTE_LENGTH) {",
" byteLength = MAX_VARCHAR_BYTE_LENGTH;"
]
},
{
"added": [
" // If invoked as part of statement execution, and the client",
" // supports receiving DataTruncation warnings, add a warning about",
" if (warnOnTruncation && stmt != null) {"
],
"header": "@@ -1269,9 +1285,10 @@ class DDMWriter",
"removed": [
" // If invoked as part of statement execution, add a warning about",
" if (stmt != null) {"
]
}
]
}
] |
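The gating pattern above, reduced to a sketch: pick the wire limit from the peer's declared capability. The interface and class names are hypothetical; the byte limits are the 0xFFFF and 32700 values discussed elsewhere in this issue:

    interface PeerCaps { boolean supportsLongerLDStrings(); }

    class TruncationPolicy {
        static final int MAX_VARCHAR_BYTE_LENGTH = 0xFFFF; // fixed clients
        static final int OLD_MAX_LEN = 32700;              // pre-fix clients

        static int maxBytesFor(PeerCaps peer) {
            // Old clients throw on longer strings and cannot decode the
            // DataTruncation warning, so keep the old limit for them.
            return (peer != null && peer.supportsLongerLDStrings())
                    ? MAX_VARCHAR_BYTE_LENGTH
                    : OLD_MAX_LEN;
        }
    }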
derby-DERBY-5236-b8501198
|
DERBY-5236: Client driver silently truncates strings that exceed 32KB
Add a java.sql.DataTruncation warning to the result if a string was
truncated.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1167017 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/SqlException.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import java.util.TreeMap;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/Sqlca.java",
"hunks": [
{
"added": [
"import java.sql.DataTruncation;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
" /** Token delimiter for SQLERRMC. */",
" private final static String SQLERRMC_TOKEN_DELIMITER = \"\\u0014\";",
""
],
"header": "@@ -58,6 +59,9 @@ public abstract class Sqlca {",
"removed": []
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [
"import java.sql.DataTruncation;"
],
"header": "@@ -32,6 +32,7 @@ import java.nio.CharBuffer;",
"removed": []
},
{
"added": [
" /**",
" * Get the current position in the output buffer.",
" * @return current position",
" */",
" protected int getBufferPosition() {",
" return buffer.position();",
" }",
"",
" /**",
" * Change the current position in the output buffer.",
" * @param position new position",
" */",
" protected void setBufferPosition(int position) {",
" buffer.position(position);",
" }",
"",
" /**",
" * Get a copy of a subsequence of the output buffer, starting at the",
" * specified position and ending at the current buffer position.",
" *",
" * @param startPos the position of the first byte to copy",
" * @return all bytes from {@code startPos} up to the current position",
" */",
" protected byte[] getBufferContents(int startPos) {",
" byte[] bytes = new byte[buffer.position() - startPos];",
" System.arraycopy(buffer.array(), startPos, bytes, 0, bytes.length);",
" return bytes;",
" }",
""
],
"header": "@@ -182,6 +183,35 @@ class DDMWriter",
"removed": []
},
{
"added": [
"\t\twriteLDString(s, 0, null, false);"
],
"header": "@@ -1120,7 +1150,7 @@ class DDMWriter",
"removed": [
"\t\twriteLDString(s,0);"
]
},
{
"added": [
" * @param stmt the executing statement (null if not invoked as",
" * part of statement execution)",
" * @param isParameter true if the value written is for an output",
" * parameter in a procedure call",
"\tprotected void writeLDString(String s, int index, DRDAStatement stmt,",
" boolean isParameter)",
" throws DRDAProtocolException"
],
"header": "@@ -1191,9 +1221,15 @@ class DDMWriter",
"removed": [
"\tprotected void writeLDString(String s, int index) throws DRDAProtocolException"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
"import java.sql.DataTruncation;"
],
"header": "@@ -31,6 +31,7 @@ import java.io.UnsupportedEncodingException;",
"removed": []
},
{
"added": [
" else if (se instanceof DataTruncation)",
" sqlerrmc = buildDataTruncationSqlerrmc((DataTruncation) se);"
],
"header": "@@ -6179,6 +6180,8 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" /**",
" * Build the SQLERRMC for a {@code java.sql.DataTruncation} warning.",
" * Serialize all the fields of the {@code DataTruncation} instance in the",
" * order in which they appear in the parameter list of the constructor.",
" *",
" * @param dt the {@code DataTruncation} instance to serialize",
" * @return the SQLERRMC string with all fields of the warning",
" */",
" private String buildDataTruncationSqlerrmc(DataTruncation dt) {",
" return dt.getIndex() + SQLERRMC_TOKEN_DELIMITER +",
" dt.getParameter() + SQLERRMC_TOKEN_DELIMITER +",
" dt.getRead() + SQLERRMC_TOKEN_DELIMITER +",
" dt.getDataSize() + SQLERRMC_TOKEN_DELIMITER +",
" dt.getTransferSize();",
" }"
],
"header": "@@ -6264,6 +6267,21 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" // Save the position where we start writing the warnings in case",
" // we need to add more warnings later.",
" final int sqlcagrpStart = writer.getBufferPosition();",
"",
" // Save the position right after the warnings so we know where to",
" // insert more warnings later.",
" final int sqlcagrpEnd = writer.getBufferPosition();",
""
],
"header": "@@ -7082,11 +7100,19 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
" writeFdocaVal(i, extdtaStream, drdaType, precision,",
" scale, extdtaStream.isNull(), stmt, false);"
],
"header": "@@ -7125,8 +7151,8 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\t\t\twriteFdocaVal(i,extdtaStream, drdaType,",
"\t\t\t\t\t\t\t\t\t\t precision,scale,extdtaStream.isNull(),stmt);"
]
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\t precision, scale, rs.wasNull(),",
" stmt, false);",
"\t\t\t\t\t\t\t\t\t\t precision, scale, rs.wasNull(),",
" stmt, false);"
],
"header": "@@ -7182,12 +7208,14 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\t\t\t\t\t\t precision,scale,rs.wasNull(),stmt);",
"\t\t\t\t\t\t\t\t\t\t precision,scale,rs.wasNull(),stmt);"
]
},
{
"added": [
"\t\t\t\t\t\twriteFdocaVal(i, val, drdaType, precision, scale,",
" valNull, stmt, true);",
"\t\t\t\t\t\twriteFdocaVal(i, null, drdaType, precision, scale,",
" true, stmt, true);",
"",
" DataTruncation truncated = stmt.getTruncationWarnings();",
" if (truncated != null) {",
" // Some of the data was truncated, so we need to add a",
" // truncation warning. Save a copy of the row data, then move",
" // back to the SQLCAGRP section and overwrite it with the new",
" // warnings, and finally re-insert the row data after the new",
" // SQLCAGRP section.",
" byte[] data = writer.getBufferContents(sqlcagrpEnd);",
" writer.setBufferPosition(sqlcagrpStart);",
" if (sqlw != null) {",
" truncated.setNextWarning(sqlw);",
" }",
" writeSQLCAGRP(truncated, CodePoint.SVRCOD_WARNING, 1, -1);",
" writer.writeBytes(data);",
" stmt.clearTruncationWarnings();",
" }",
""
],
"header": "@@ -7208,13 +7236,33 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\t\twriteFdocaVal(i,val,drdaType,precision, scale, valNull,stmt);",
"\t\t\t\t\t\twriteFdocaVal(i,null,drdaType,precision,scale,true,stmt);"
]
},
{
"added": [
" * @param isParam True when writing a value for a procedure parameter"
],
"header": "@@ -7875,6 +7923,7 @@ class DRDAConnThread extends Thread {",
"removed": []
},
{
"added": [
"\t\t\t\t\t\t\t\t DRDAStatement stmt, boolean isParam)",
" throws DRDAProtocolException, SQLException"
],
"header": "@@ -7885,8 +7934,8 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\t\t\t\t ",
"\t\t\t\t\t\t\t\t DRDAStatement stmt) throws DRDAProtocolException, SQLException"
]
},
{
"added": [
"\t\t\t\t\twriter.writeLDString(val.toString(), index, stmt, isParam);"
],
"header": "@@ -7949,7 +7998,7 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t\t\twriter.writeLDString(val.toString(), index);"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAStatement.java",
"hunks": [
{
"added": [
"import java.sql.DataTruncation;"
],
"header": "@@ -39,6 +39,7 @@ import java.util.StringTokenizer;",
"removed": []
},
{
"added": [
" /**",
" * A chain of warnings indicating whether some of the data values returned",
" * by this statement had to be truncated before being sent to the client.",
" */",
" private DataTruncation truncationWarnings;",
""
],
"header": "@@ -102,6 +103,12 @@ class DRDAStatement",
"removed": []
},
{
"added": [
" /**",
" * Add a warning about data having been truncated.",
" * @param w the warning to add",
" */",
" protected void addTruncationWarning(DataTruncation w) {",
" if (truncationWarnings == null) {",
" truncationWarnings = w;",
" } else {",
" truncationWarnings.setNextWarning(w);",
" }",
" }",
"",
" /**",
" * Get the chain of truncation warnings added to this statement.",
" * @return chain of truncation warnings, possibly {@code null}",
" */",
" protected DataTruncation getTruncationWarnings() {",
" return truncationWarnings;",
" }",
"",
" /**",
" * Clear the chain of truncation warnings for this statement.",
" */",
" protected void clearTruncationWarnings() {",
" truncationWarnings = null;",
" }",
""
],
"header": "@@ -343,6 +350,33 @@ class DRDAStatement",
"removed": []
},
{
"added": [
" truncationWarnings = null;"
],
"header": "@@ -1033,6 +1067,7 @@ class DRDAStatement",
"removed": []
}
]
}
] |
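A minimal sketch of the SQLERRMC round trip for DataTruncation, using the 0x14 delimiter and the constructor field order shown in the diff (the decode side is a guessed inverse for illustration, not the actual Sqlca code):

    import java.sql.DataTruncation;

    class DataTruncationCodec {
        static final String DELIM = "\u0014";

        static String encode(DataTruncation dt) {
            // Same field order as the DataTruncation constructor.
            return dt.getIndex() + DELIM + dt.getParameter() + DELIM
                    + dt.getRead() + DELIM + dt.getDataSize() + DELIM
                    + dt.getTransferSize();
        }

        static DataTruncation decode(String sqlerrmc) {
            String[] f = sqlerrmc.split(DELIM);
            return new DataTruncation(
                    Integer.parseInt(f[0]),      // index
                    Boolean.parseBoolean(f[1]),  // parameter
                    Boolean.parseBoolean(f[2]),  // read
                    Integer.parseInt(f[3]),      // dataSize
                    Integer.parseInt(f[4]));     // transferSize
        }
    }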
derby-DERBY-5236-c6e42941
|
DERBY-5236: Client driver silently truncates strings that exceed 32KB
Truncate the strings at 65535 bytes instead of 32700 bytes, and make sure
that the truncation happens on a character boundary.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1164495 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DDMWriter.java",
"hunks": [
{
"added": [
" /**",
" * The maximum length in bytes for strings sent by {@code writeLDString()},",
" * which is the maximum unsigned integer value that fits in two bytes.",
" */",
" private final static int MAX_VARCHAR_BYTE_LENGTH = 0xFFFF;",
""
],
"header": "@@ -58,6 +58,12 @@ class DDMWriter",
"removed": []
}
]
}
] |
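One way to truncate at a byte budget without splitting a multi-byte character is to let a CharsetEncoder stop on output overflow, since it never emits a partial character. A minimal standalone sketch (UTF-8 here; the real DDMWriter drives its own encoder loop):

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;

    class TruncateDemo {
        /** Encode at most maxBytes of s, never splitting a character. */
        static byte[] encodeTruncated(String s, int maxBytes) {
            CharsetEncoder enc = Charset.forName("UTF-8").newEncoder();
            ByteBuffer out = ByteBuffer.allocate(maxBytes);
            // encode() stops with OVERFLOW once 'out' is full; the
            // remainder of the string is simply dropped.
            enc.encode(CharBuffer.wrap(s), out, true);
            byte[] result = new byte[out.position()];
            out.flip();
            out.get(result);
            return result;
        }
    }

The encoder returns OVERFLOW without consuming the character it cannot fit, which is what keeps the cut on a character boundary.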
derby-DERBY-5238-902725f1
|
DERBY-5238 VARCHAR size typos in some documentation topics
Modified comments in three Java source files.
Patches: DERBY-5238-2.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1125447 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/SystemProcedures.java",
"hunks": [
{
"added": [
"\t * IN INSERTCOLUMNLIST VARCHAR(32672), IN COLUMNINDEXES VARCHAR(32672),",
"\t * IN FILENAME VARCHAR(32672), IN COLUMNDELIMITER CHAR(1), "
],
"header": "@@ -1532,8 +1532,8 @@ public class SystemProcedures {",
"removed": [
"\t * IN INSERTCOLUMNLIST VARCHAR(32762), IN COLUMNINDEXES VARCHAR(32762),",
"\t * IN FILENAME VARCHAR(32762), IN COLUMNDELIMITER CHAR(1), "
]
},
{
"added": [
" * IN INSERTCOLUMNLIST VARCHAR(32672), ",
" * IN COLUMNINDEXES VARCHAR(32672),",
" * IN FILENAME VARCHAR(32672), IN COLUMNDELIMITER CHAR(1), "
],
"header": "@@ -1577,9 +1577,9 @@ public class SystemProcedures {",
"removed": [
" * IN INSERTCOLUMNLIST VARCHAR(32762), ",
" * IN COLUMNINDEXES VARCHAR(32762),",
" * IN FILENAME VARCHAR(32762), IN COLUMNDELIMITER CHAR(1), "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\t * IN TABLENAME VARCHAR(128), IN FILENAME VARCHAR(32672), "
],
"header": "@@ -11070,7 +11070,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t * IN TABLENAME VARCHAR(128), IN FILENAME VARCHAR(32762), "
]
},
{
"added": [
"\t\t * IN TABLENAME VARCHAR(128), IN INSERTCOLUMNLIST VARCHAR(32672), ",
"\t\t * IN COLUMNINDEXES VARCHAR(32672), IN IN FILENAME VARCHAR(32672), "
],
"header": "@@ -11112,8 +11112,8 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t * IN TABLENAME VARCHAR(128), IN INSERTCOLUMNLIST VARCHAR(32762), ",
"\t\t * IN COLUMNINDEXES VARCHAR(32762), IN IN FILENAME VARCHAR(32762), "
]
},
{
"added": [
"\t\t * IN VTINAME VARCHAR(32672), ",
" * IN VTIARG VARCHAR(32672))"
],
"header": "@@ -11160,8 +11160,8 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t * IN VTINAME VARCHAR(32762), ",
" * IN VTIARG VARCHAR(32762))"
]
},
{
"added": [
" * IN TABLENAME VARCHAR(128), IN FILENAME VARCHAR(32672), "
],
"header": "@@ -12667,7 +12667,7 @@ public final class\tDataDictionaryImpl",
"removed": [
" * IN TABLENAME VARCHAR(128), IN FILENAME VARCHAR(32762), "
]
},
{
"added": [
" * IN TABLENAME VARCHAR(128), IN INSERTCOLUMNLIST VARCHAR(32672), ",
" * IN COLUMNINDEXES VARCHAR(32672), IN IN FILENAME VARCHAR(32672), "
],
"header": "@@ -12706,8 +12706,8 @@ public final class\tDataDictionaryImpl",
"removed": [
" * IN TABLENAME VARCHAR(128), IN INSERTCOLUMNLIST VARCHAR(32762), ",
" * IN COLUMNINDEXES VARCHAR(32762), IN IN FILENAME VARCHAR(32762), "
]
}
]
}
] |
derby-DERBY-5240-9ef6c29f
|
DERBY-5240 (Log Operating system information to derby.log on boot)
Adding the following OS information to derby.log, right after the existing JVM info:
OS name=XXX
OS architecture=YYY
OS version=ZZZ
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1371041 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BaseDataFileFactory.java",
"hunks": [
{
"added": [
"\tprivate String osInfo;",
"\t"
],
"header": "@@ -135,6 +135,8 @@ public class BaseDataFileFactory",
"removed": []
},
{
"added": [
"\t\tosInfo = buildOSinfo();",
"\t\t"
],
"header": "@@ -261,6 +263,8 @@ public class BaseDataFileFactory",
"removed": []
},
{
"added": [
"\t\t//Log the OS info",
"\t\tlogMsg(osInfo);",
""
],
"header": "@@ -368,6 +372,9 @@ public class BaseDataFileFactory",
"removed": []
},
{
"added": [
" /**",
" * Return values of system properties that identify the OS.",
" * Will catch SecurityExceptions and note them for displaying information.",
" * @return the Java system property value for the OS or a string capturing a",
" * security exception.",
" */",
" private static String buildOSinfo () {",
" \treturn (String)AccessController.doPrivileged(new PrivilegedAction(){",
" \t\tpublic Object run() {",
" \t\t\tString osInfo = \"\";",
" \t\t\ttry {",
" \t\t\t\tString currentProp = PropertyUtil.getSystemProperty(\"os.name\");",
" \t\t\t\tif (currentProp != null)",
" \t\t\t\t\tosInfo = \"os.name=\"+currentProp+\"\\n\";",
" \t\t\t\tif ((currentProp = PropertyUtil.getSystemProperty(\"os.arch\")) != null)",
" \t\t\t\t\tosInfo += \"os.arch=\"+currentProp+\"\\n\";",
" \t\t\t\tif ((currentProp = PropertyUtil.getSystemProperty(\"os.version\")) != null)",
" \t\t\t\t\tosInfo += \"os.version=\"+currentProp;",
" \t\t\t}",
" \t\t\tcatch(SecurityException se){",
" \t\t\t\treturn se.getMessage();",
" \t\t\t}",
" \t\t\treturn osInfo;",
" \t\t}",
" \t});",
" }"
],
"header": "@@ -2154,6 +2161,32 @@ public class BaseDataFileFactory",
"removed": []
}
]
}
] |
derby-DERBY-5244-be3da0b3
|
DERBY-5244 DatabaseMetaData.getColumns(null, null, tableName, null) does not return the column metadata for a SYNONYM.
Adding a test case for views. The test comment for tables, views, and synonyms is as follows:
* DERBY-5244 DatabaseMetaData.getColumns(null, null, tableName, null)
* does not return the columns meta for a SYNONYM. This is because for
* synonyms, we do not add any rows in SYSCOLUMNS. But the metadata query
* for DatabaseMetaData.getColumns() looks at SYSCOLUMNS to get the
* resultset. Views and Tables do not have problems because we do keep
* their columns information in SYSCOLUMNS.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1201945 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
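A minimal JDBC sketch of the behavior under test (the database URL and object names are made up for the demo):

    import java.sql.*;

    public class SynonymMetaDemo {
        public static void main(String[] args) throws SQLException {
            Connection c = DriverManager.getConnection(
                    "jdbc:derby:demoDB;create=true");
            Statement s = c.createStatement();
            s.execute("CREATE TABLE T1 (ID INT, NAME VARCHAR(10))");
            s.execute("CREATE SYNONYM S1 FOR T1");

            DatabaseMetaData dmd = c.getMetaData();
            // Returns rows for T1, but none for S1: synonyms have no
            // SYSCOLUMNS rows, and getColumns() is driven off SYSCOLUMNS.
            printColumnCount(dmd, "T1");
            printColumnCount(dmd, "S1");
        }

        static void printColumnCount(DatabaseMetaData dmd, String table)
                throws SQLException {
            int n = 0;
            ResultSet rs = dmd.getColumns(null, null, table, null);
            while (rs.next()) n++;
            System.out.println(table + ": " + n + " column rows");
        }
    }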
derby-DERBY-5244-be72b144
|
DERBY-5244 DatabaseMetaData.getColumns(null, null, tableName, null) does not return the columns meta for a SYNONYM
Adding a test case which confirms the behavior noticed by DERBY-5244.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1199096 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5246-6d90df09
|
DERBY-5246: Simplify bytecode generation for concatenation operator
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1127886 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/BinaryOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -600,9 +600,6 @@ public class BinaryOperatorNode extends OperatorNode",
"removed": [
"\t\t\t//following method is special code for concatenation where if field is null, we want it to be initialized to NULL SQLxxx type object",
"\t\t\t//before generating code \"field = method(p1, p2, field);\"",
"\t\t\tinitializeResultField(acb, mb, resultField);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ConcatenationOperatorNode.java",
"hunks": [
{
"added": [],
"header": "@@ -33,10 +33,6 @@ import org.apache.derby.iapi.sql.compile.TypeCompiler;",
"removed": [
"import org.apache.derby.iapi.services.compiler.MethodBuilder;",
"import org.apache.derby.iapi.services.compiler.LocalField;",
"import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;",
""
]
},
{
"added": [],
"header": "@@ -538,24 +534,6 @@ public class ConcatenationOperatorNode extends BinaryOperatorNode {",
"removed": [
"\t/*",
"\t * for conatenation operator, we generate code as follows field = method(p1,",
"\t * p2, field); what we are ensuring here is if field is null then initialize",
"\t * it to NULL SQLxxx type. Because of the following, at execution time,",
"\t * SQLxxx concatenate method do not have to worry about field coming in as",
"\t * null",
"\t */",
"\tprotected void initializeResultField(ExpressionClassBuilder acb,",
"\t\t\tMethodBuilder mb, LocalField resultField) throws StandardException {",
"\t\tmb.conditionalIfNull();//get the field on the stack and if it is null",
"\t\tacb.generateNull(mb, getTypeCompiler(), getTypeServices()",
"\t\t\t\t.getCollationType());// yes, it is, hence create a NULL SQLxxx",
"\t\t\t\t\t\t\t\t\t // type object and put that on stack",
"\t\tmb.startElseCode(); //no, it is not null",
"\t\tmb.getField(resultField); //so put it back on the stack",
"\t\tmb.completeConditional(); //complete if else block",
"\t}",
""
]
}
]
}
] |
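What the removed initializeResultField() used to generate, expressed as plain Java instead of MethodBuilder bytecode (the types and the newNull() helper are stand-ins for the SQLxxx machinery):

    class ConcatSketch {
        interface StringValue { }

        StringValue field;

        StringValue method(StringValue p1, StringValue p2, StringValue result) {
            // ... concatenate p1 and p2 into 'result' ...
            return result;
        }

        StringValue newNull() { return null; } // stands in for a NULL SQLxxx value

        StringValue evaluateOld(StringValue p1, StringValue p2) {
            // Pattern produced by the removed initializeResultField():
            field = method(p1, p2, (field == null) ? newNull() : field);
            return field;
        }

        StringValue evaluateNew(StringValue p1, StringValue p2) {
            // After DERBY-5246 the generated code is just the plain call:
            field = method(p1, p2, field);
            return field;
        }
    }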
derby-DERBY-5248-45b5ae15
|
DERBY-5258
Fixing row level btree postcommit reclaim space to hold latch until end
of the internal transaction. Before this fix there was a very small window
(a few instructions) between the release of the latch and the commit of the
transaction where another transaction could access the page and insert rows;
if a crash then happened, the undo of the purges done by the reclaim-space
work could fail.
It is proposed that this is what caused DERBY-5248, but without a repro it
can't be proved.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1132711 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreePostCommit.java",
"hunks": [
{
"added": [
" TransactionManager internal_xact = tc.getInternalTransaction();"
],
"header": "@@ -207,7 +207,7 @@ class BTreePostCommit implements Serviceable",
"removed": [
" TransactionManager internal_xact = tc.getInternalTransaction();"
]
},
{
"added": [
" OpenBTree open_btree = null;"
],
"header": "@@ -215,7 +215,7 @@ class BTreePostCommit implements Serviceable",
"removed": [
" OpenBTree open_btree = null;"
]
},
{
"added": [],
"header": "@@ -244,11 +244,8 @@ class BTreePostCommit implements Serviceable",
"removed": [
" // RESOLVE (mikem) - move this call when doing row level locking.",
"",
" open_btree.close();"
]
},
{
"added": [],
"header": "@@ -277,8 +274,6 @@ class BTreePostCommit implements Serviceable",
"removed": [
" open_btree.close();",
""
]
},
{
"added": [
" if (open_btree != null)",
" open_btree.close();",
"",
" // counting on this commit to release latches associated with",
" // row level purge, that have been left to prevent others from",
" // getting to purged pages before the commit. If latch is released",
" // early, other transactions could insert on the page which could",
" // prevent undo of the purges in case of a crash before the commit",
" // gets to the disk."
],
"header": "@@ -295,6 +290,15 @@ class BTreePostCommit implements Serviceable",
"removed": []
},
{
"added": [
" * This routine handles purging committed deletes while holding a table",
" * level exclusive lock. See purgeRowLevelCommittedDeletes() for row level",
" * purging."
],
"header": "@@ -330,8 +334,9 @@ class BTreePostCommit implements Serviceable",
"removed": [
" * RESOLVE (mikem) - under row locking this routine must do more work to",
" * determine a deleted row is a committed deleted row."
]
},
{
"added": [
" * <p>",
" * The latch on the leaf page containing the purged rows must be kept until",
" * after the transaction has been committed or aborted in order to insure",
" * proper undo of the purges can take place. Otherwise another transaction",
" * could use the space freed by the purge and then prevent the purge from",
" * being able to undo."
],
"header": "@@ -448,6 +453,12 @@ class BTreePostCommit implements Serviceable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BasePage.java",
"hunks": [
{
"added": [
"\t\t\tSanityManager.ASSERT(isLatched(), ",
" \"unlatch() attempted on page that is not latched.\");",
" releaseExclusive();"
],
"header": "@@ -1364,10 +1364,11 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": [
"\t\t\tSanityManager.ASSERT(isLatched());",
"\t releaseExclusive();"
]
},
{
"added": [
"\t\t\tSanityManager.ASSERT(",
" isLatched(), \"page not latched on call to recordCount()\");"
],
"header": "@@ -1388,7 +1389,8 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": [
"\t\t\tSanityManager.ASSERT(isLatched());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" SanityManager.ASSERT(isLatched(), ",
" \"logAction() executed on an unlatched page.\");"
],
"header": "@@ -7034,7 +7034,8 @@ public class StoredPage extends CachedPage",
"removed": [
" SanityManager.ASSERT(isLatched());"
]
}
]
}
] |
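The ordering bug, reduced to a lock analogy: a java.util.concurrent ReentrantLock stands in for the page latch and the method bodies are placeholders, so this is only a sketch of the window the fix closes, not Derby's latch code:

    import java.util.concurrent.locks.ReentrantLock;

    class LatchOrderingSketch {
        final ReentrantLock pageLatch = new ReentrantLock();

        void purge() { /* purge committed deletes from the latched page */ }
        void commitInternalTransaction() { /* force the commit record */ }

        void unsafeOrder() {
            pageLatch.lock();
            try {
                purge();
            } finally {
                pageLatch.unlock(); // window: another xact can reuse the space
            }
            commitInternalTransaction(); // crash before this => undo may fail
        }

        void safeOrder() {
            pageLatch.lock();
            try {
                purge();
                commitInternalTransaction(); // commit while still latched
            } finally {
                pageLatch.unlock();
            }
        }
    }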
derby-DERBY-5249-8d26b674
|
DERBY-5249 A table created with 10.0.2.1 with constraints cannot be dropped with 10.5 due to NullPointerException with insane build or ASSERT FAILED Failed to find sharable conglomerate descriptor for index conglomerate with sane build
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1131272 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5249-94d3c58f
|
DERBY-5249 A table created with 10.0.2.1 with constraints cannot be dropped with 10.5 due to NullPointerException with insane build or ASSERT FAILED Failed to find sharable conglomerate descriptor for index conglomerate with sane build
Add a test for this issue. The test xtestDropTableAfterUpgradeWithConstraint()
is not currently enabled because the issue is not yet fixed.
Remove the x to enable the test once the issue is fixed.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1130632 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-525-1022f8fc
|
DERBY-525 - getAsciiStream should replace non-ASCII characters with 0x3f, '?' to match embedded - Patch by Tomohito Nakayama (tomonaka@basil.ocn.ne.jp)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@292414 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
"\t\t",
"\t\tresult = new java.io.ByteArrayInputStream",
"\t\t\t(convertToAsciiByteArray((String) agent_.crossConverters_.setObject(java.sql.Types.CHAR,",
"\t\t\t\t\t\t\t\t\t\t\t updatedColumns_[column - 1])));"
],
"header": "@@ -879,13 +879,10 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" try {",
" result = new java.io.ByteArrayInputStream",
" (((String) agent_.crossConverters_.setObject(java.sql.Types.CHAR,",
" updatedColumns_[column - 1])).getBytes(\"US-ASCII\"));",
" } catch (java.io.UnsupportedEncodingException e) {",
" throw new SqlException(agent_.logWriter_, e, e.getMessage());",
" }"
]
},
{
"added": [
"",
"\t",
"\tprivate static byte[] convertToAsciiByteArray(String original){",
"",
"\t\tbyte[] result = new byte[original.length()];",
"",
"\t\tfor(int i = 0;",
"\t\t i < original.length();",
"\t\t i ++){",
"\t\t\t",
"\t\t\tif(original.charAt(i) <= 0x00ff)",
"\t\t\t\tresult[i] = (byte) original.charAt(i);",
"\t\t\telse",
"\t\t\t\tresult[i] = 0x003f;",
"\t\t}",
"",
"\t\treturn result;",
"",
"\t}"
],
"header": "@@ -3882,4 +3879,23 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": []
}
]
}
] |
derby-DERBY-525-8ecffdda
|
- DERBY-525_3 getAsciiStream should replace non-ASCII characters with 0x3f, '?' to match embedded - Patch by Tomohito Nakayama (tomonaka@basil.ocn.ne.jp)
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@329372 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/AsciiStream.java",
"hunks": [
{
"added": [
"import java.io.StringReader;",
"",
"\t",
"\tpublic AsciiStream(String materializedString){",
"\t\tthis(materializedString,new StringReader(materializedString));",
"\t}",
"\t"
],
"header": "@@ -19,11 +19,17 @@",
"removed": [
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
"\t\tresult = new AsciiStream((String) agent_.crossConverters_.setObject(java.sql.Types.CHAR,",
"\t\t\t\t\t\t\t\t\t\t updatedColumns_[column - 1]));"
],
"header": "@@ -939,9 +939,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
"\t\tresult = new java.io.ByteArrayInputStream",
"\t\t\t(convertToAsciiByteArray((String) agent_.crossConverters_.setObject(java.sql.Types.CHAR,",
"\t\t\t\t\t\t\t\t\t\t\t updatedColumns_[column - 1])));"
]
}
]
}
] |
derby-DERBY-5252-f1512cdd
|
DERBY-5252: make GrantRevokeTest pass in non-English locale
This patch was contributed by Houx Zhang (houxzhang at gmail dot com)
This change adjusts 4 places in the test to compare the actual
error message text only if the test is running in the English locale.
This enables the test to pass when run in non-English locales, while
also preserving the error message text validation for runs of the
test in the English locale.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1134139 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5254-ab49fa3c
|
DERBY-5254: Unreserve the keywords which were added as part of implementing sequences.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1130127 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5256-67790a0e
|
DERBY-5256: Improve error reporting in common.sanity.AssertFailure
Added more specific error reporting, and fixed code that could result
in an NPE under some circumstances.
Patch file: derby-5256-1a-error_reporting.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1130964 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/shared/org/apache/derby/shared/common/sanity/AssertFailure.java",
"hunks": [
{
"added": [
" /**",
" * Tells if generating a thread dump is supported in the running JVM.",
" */",
" private boolean supportsThreadDump() {",
" try {",
" // This checks that we are on a jvm >= 1.5 where we",
" // can actually do threaddumps.",
" Thread.class.getMethod(\"getAllStackTraces\", new Class[] {});",
" return true;",
" } catch (NoSuchMethodException nsme) {",
" // Ignore exception",
" }",
" return false;",
" }"
],
"header": "@@ -126,6 +126,20 @@ public class AssertFailure extends RuntimeException {",
"removed": []
},
{
"added": [
" if (!supportsThreadDump()) {",
" return \"(Skipping thread dump because it is not \" +",
" \"supported on JVM 1.4)\";",
" }",
" ",
" // NOTE: No need to flush with the StringWriter/PrintWriter combination.",
" // Load the class and method we need with reflection.",
" final Method m;",
" Class c = Class.forName(",
" \"org.apache.derby.shared.common.sanity.ThreadDump\");",
" m = c.getMethod(\"getStackDumpString\", new Class[] {});",
" } catch (Exception e) {",
" p.println(\"Failed to load class/method required to generate \" +",
" \"a thread dump:\");",
" e.printStackTrace(p);",
" return out.toString();",
" }",
" //Try to get a thread dump and deal with various situations.",
" try {",
" String dump = (String) AccessController.doPrivileged",
" IllegalArgumentException,",
" IllegalAccessException,",
" InvocationTargetException {",
" return m.invoke(null, (Object[])null);",
" p.print(\"---------------\\nStack traces for all live threads:\");",
" } catch (PrivilegedActionException pae) {",
" Throwable cause = pae.getCause();",
" if (cause instanceof InvocationTargetException &&",
" cause.getCause() instanceof AccessControlException) {",
" + \"because of insufficient permissions:\\n\"",
" + cause.getCause() + \")\\n\");",
" p.println(\"\\nAssertFailure tried to do a thread dump, \"",
" + \"but there was an error:\");",
" cause.printStackTrace(p);"
],
"header": "@@ -140,54 +154,57 @@ public class AssertFailure extends RuntimeException {",
"removed": [
" //Try to get a thread dump and deal with various situations.",
" //This checks that we are on a jvm >= 1.5 where we",
" //can actually do threaddumps.",
" Thread.class.getMethod(\"getAllStackTraces\", new Class[] {});",
"",
" //Then get the thread dump.",
" Class c = Class.",
" forName(\"org.apache.derby.shared.common.sanity.ThreadDump\");",
" final Method m = c.getMethod(\"getStackDumpString\",new Class[] {});",
"",
" String dump;",
" dump = (String) AccessController.doPrivileged",
" IllegalArgumentException,",
" IllegalAccessException,",
" InvocationTargetException{",
" return m.invoke(null, null);",
" p.print(\"---------------\\nStack traces for all \" +",
" \"live threads:\");",
" } catch (NoSuchMethodException e) {",
" p.println(\"(Skipping thread dump because it is not \" +",
" \"supported on JVM 1.4)\");",
"",
" } catch (Exception e) {",
" if (e instanceof PrivilegedActionException &&",
" e.getCause() instanceof InvocationTargetException &&",
" e.getCause().getCause() instanceof AccessControlException){",
" + \"because of insufficient permissions:\\n\"",
" + e.getCause().getCause() + \")\\n\");",
" p.println(\"\\nAssertFailure tried to do a thread dump, but \"",
" + \"there was an error:\");",
" e.getCause().printStackTrace(p);"
]
}
]
}
] |
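A note on the mechanism in the record above: the reflective lookup of ThreadDump.getStackDumpString exists only so the class still loads on JVM 1.4; on 1.5 and later the dump ultimately rests on Thread.getAllStackTraces. A minimal standalone sketch of such a dump method, with illustrative formatting rather than Derby's actual output:

    import java.util.Map;

    public class ThreadDumpSketch {
        // Plain-text dump of every live thread's stack (JVM >= 1.5).
        public static String getStackDumpString() {
            StringBuilder sb = new StringBuilder();
            for (Map.Entry<Thread, StackTraceElement[]> e :
                    Thread.getAllStackTraces().entrySet()) {
                sb.append(e.getKey()).append('\n');
                for (StackTraceElement frame : e.getValue()) {
                    sb.append("\tat ").append(frame).append('\n');
                }
            }
            return sb.toString();
        }
    }

Thread.getAllStackTraces needs the getStackTrace and modifyThreadGroup runtime permissions, which is why the patch reports AccessControlException as a distinct case.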
derby-DERBY-5258-45b5ae15
|
DERBY-5258
Fixed row-level btree post-commit space reclamation to hold the latch until
the end of the internal transaction. Before this fix there was a very small
window (a few instructions) between the release of the latch and the commit
of the transaction where another transaction could access the page and
insert rows; a crash at that point could cause the undo of the reclaim
work's purges to fail.
It is proposed that this is what caused DERBY-5248, but without a repro it
can't be proved.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1132711 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreePostCommit.java",
"hunks": [
{
"added": [
" TransactionManager internal_xact = tc.getInternalTransaction();"
],
"header": "@@ -207,7 +207,7 @@ class BTreePostCommit implements Serviceable",
"removed": [
" TransactionManager internal_xact = tc.getInternalTransaction();"
]
},
{
"added": [
" OpenBTree open_btree = null;"
],
"header": "@@ -215,7 +215,7 @@ class BTreePostCommit implements Serviceable",
"removed": [
" OpenBTree open_btree = null;"
]
},
{
"added": [],
"header": "@@ -244,11 +244,8 @@ class BTreePostCommit implements Serviceable",
"removed": [
" // RESOLVE (mikem) - move this call when doing row level locking.",
"",
" open_btree.close();"
]
},
{
"added": [],
"header": "@@ -277,8 +274,6 @@ class BTreePostCommit implements Serviceable",
"removed": [
" open_btree.close();",
""
]
},
{
"added": [
" if (open_btree != null)",
" open_btree.close();",
"",
" // counting on this commit to release latches associated with",
" // row level purge, that have been left to prevent others from",
" // getting to purged pages before the commit. If latch is released",
" // early, other transactions could insert on the page which could",
" // prevent undo of the purges in case of a crash before the commit",
" // gets to the disk."
],
"header": "@@ -295,6 +290,15 @@ class BTreePostCommit implements Serviceable",
"removed": []
},
{
"added": [
" * This routine handles purging committed deletes while holding a table",
" * level exclusive lock. See purgeRowLevelCommittedDeletes() for row level",
" * purging."
],
"header": "@@ -330,8 +334,9 @@ class BTreePostCommit implements Serviceable",
"removed": [
" * RESOLVE (mikem) - under row locking this routine must do more work to",
" * determine a deleted row is a committed deleted row."
]
},
{
"added": [
" * <p>",
" * The latch on the leaf page containing the purged rows must be kept until",
" * after the transaction has been committed or aborted in order to insure",
" * proper undo of the purges can take place. Otherwise another transaction",
" * could use the space freed by the purge and then prevent the purge from",
" * being able to undo."
],
"header": "@@ -448,6 +453,12 @@ class BTreePostCommit implements Serviceable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/BasePage.java",
"hunks": [
{
"added": [
"\t\t\tSanityManager.ASSERT(isLatched(), ",
" \"unlatch() attempted on page that is not latched.\");",
" releaseExclusive();"
],
"header": "@@ -1364,10 +1364,11 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": [
"\t\t\tSanityManager.ASSERT(isLatched());",
"\t releaseExclusive();"
]
},
{
"added": [
"\t\t\tSanityManager.ASSERT(",
" isLatched(), \"page not latched on call to recordCount()\");"
],
"header": "@@ -1388,7 +1389,8 @@ abstract class BasePage implements Page, Observer, TypedFormat",
"removed": [
"\t\t\tSanityManager.ASSERT(isLatched());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java",
"hunks": [
{
"added": [
" SanityManager.ASSERT(isLatched(), ",
" \"logAction() executed on an unlatched page.\");"
],
"header": "@@ -7034,7 +7034,8 @@ public class StoredPage extends CachedPage",
"removed": [
" SanityManager.ASSERT(isLatched());"
]
}
]
}
] |
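The invariant this fix enforces is easier to see stripped of store internals: a latch taken to purge rows must outlive the purge and be released only by ending the transaction. A rough analogy in plain Java, where a ReentrantLock stands in for the page latch and every name is invented; this is not Derby's latch implementation:

    import java.util.concurrent.locks.ReentrantLock;

    class PurgeLatchSketch {
        private final ReentrantLock pageLatch = new ReentrantLock();

        void purgeCommittedDeletes() {
            pageLatch.lock();               // latch the leaf page
            boolean purged = doPurge();     // purge committed deleted rows
            if (!purged) {
                pageLatch.unlock();         // nothing to undo, safe to release
            }
            // If rows were purged the latch is kept: releasing it here opens
            // the few-instruction window where another transaction inserts
            // on the page and a crash makes the purges impossible to undo.
        }

        void commit() {
            // ... commit log record reaches disk first ...
            if (pageLatch.isHeldByCurrentThread()) {
                pageLatch.unlock();         // commit releases the latch
            }
        }

        private boolean doPurge() { return true; }
    }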
derby-DERBY-5260-5f5bc5fb
|
DERBY-5260 Remove unused "replace" argument to backup variant of StorageFactoryService#saveServiceProperties
The patch "saveServiceProperties-1" removes redundant code.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1133123 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/monitor/StorageFactoryService.java",
"hunks": [
{
"added": [
"",
" /**",
" * Save service.properties during backup",
" *",
" * @arg serviceName backup location of the service",
" * @arg properties to save",
" *",
" * @exception StandardException Properties cannot be saved.",
" */",
" final Properties properties)"
],
"header": "@@ -391,15 +391,18 @@ final class StorageFactoryService implements PersistentService",
"removed": [
"\t/**",
" Save to a backup file",
" ",
"\t\t@exception StandardException Properties cannot be saved.",
"\t*/",
" final Properties properties, ",
" final boolean replace)"
]
},
{
"added": [
" // Since this is the backup location, we cannot use",
" // storageFactory.newStorageFile as in the other",
" // variant of this method:"
],
"header": "@@ -407,24 +410,13 @@ final class StorageFactoryService implements PersistentService",
"removed": [
" File backupFile = null;",
"",
" if (replace) {",
" backupFile = ",
" new File(serviceName, PersistentService.PROPERTIES_NAME.concat(\"old\"));",
" try {",
" if(!servicePropertiesFile.renameTo(backupFile)) {",
" throw StandardException.newException(",
" SQLState.UNABLE_TO_RENAME_FILE, servicePropertiesFile, backupFile);",
" }",
" } catch (SecurityException se) {",
" throw Monitor.exceptionStartingModule(se);",
" }",
" }"
]
}
]
},
{
"file": "java/storeless/org/apache/derby/impl/storeless/StorelessService.java",
"hunks": [
{
"added": [
" public void saveServiceProperties(String serviceName,",
" Properties properties)",
" throws StandardException {"
],
"header": "@@ -65,7 +65,9 @@ public class StorelessService implements PersistentService {",
"removed": [
"\tpublic void saveServiceProperties(String serviceName, Properties properties, boolean replace) throws StandardException {"
]
}
]
}
] |
derby-DERBY-5263-ae72a308
|
DERBY-6003: Create row templates outside of the generated code
Upgrade test fix in preparation for the actual fix for this issue.
Improve SYSCS_INVALIDATE_STORED_STATEMENTS by making it null out the
plans in SYS.SYSSTATEMENTS. Previously, it only marked them as invalid.
Use the improved SYSCS_INVALIDATE_STORED_STATEMENTS to work around
problems in the upgrade tests when downgrading to a version that suffers
from DERBY-4835 or DERBY-5289. Remove the old workarounds for DERBY-4835,
DERBY-5105, DERBY-5263 and DERBY-5289, as they are now handled by the
centralized workaround that uses SYSCS_INVALIDATE_STORED_STATEMENTS.
This change is needed because later patches for this issue will change
the format of many stored plans, so more of the test cases need to work
around the downgrade bugs in some old versions.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1418296 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" * @param recompile Whether to recompile or invalidate"
],
"header": "@@ -4429,6 +4429,7 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
}
] |
derby-DERBY-5267-e39eee71
|
DERBY-5267: Shut down engine for old versions in upgrade tests to save memory
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1135432 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5269-f81766ad
|
DERBY-5269: Remove unused methods getSocketAndInputOutputStreams and checkAlternateServerHasEqualOrHigherProductLevel in NetConnection
Removed unused methods getSocketAndInputOutputStreams and
checkAlternateServerHasEqualOrHigherProductLevel.
Patch file: derby-5269-1a-remove_methods.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1135976 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/net/NetConnection.java",
"hunks": [
{
"added": [],
"header": "@@ -1628,54 +1628,6 @@ public class NetConnection extends org.apache.derby.client.am.Connection {",
"removed": [
" protected int getSocketAndInputOutputStreams(String server, int port, int clientSSLMode) {",
" try {",
" netAgent_.socket_ = (java.net.Socket) java.security.AccessController.doPrivileged(new OpenSocketAction(server, port, clientSSLMode));",
" } catch (java.security.PrivilegedActionException e) {",
" Exception openSocketException = e.getException();",
" if (netAgent_.loggingEnabled()) {",
" netAgent_.logWriter_.tracepoint(\"[net]\", 101, \"Client Re-route: \" + openSocketException.getClass().getName() + \" : \" + openSocketException.getMessage());",
" }",
" return -1;",
" }",
"",
" try {",
" netAgent_.rawSocketOutputStream_ = netAgent_.socket_.getOutputStream();",
" netAgent_.rawSocketInputStream_ = netAgent_.socket_.getInputStream();",
" } catch (java.io.IOException e) {",
" if (netAgent_.loggingEnabled()) {",
" netAgent_.logWriter_.tracepoint(\"[net]\", 103, \"Client Re-route: java.io.IOException \" + e.getMessage());",
" }",
" try {",
" netAgent_.socket_.close();",
" } catch (java.io.IOException doNothing) {",
" }",
" return -1;",
" }",
" return 0;",
" }",
"",
" protected int checkAlternateServerHasEqualOrHigherProductLevel(ProductLevel orgLvl, int orgServerType) {",
" if (orgLvl == null && orgServerType == 0) {",
" return 0;",
" }",
" ProductLevel alternateServerProductLvl =",
" netAgent_.netConnection_.databaseMetaData_.productLevel_;",
" boolean alternateServerIsEqualOrHigherToOriginalServer =",
" (alternateServerProductLvl.greaterThanOrEqualTo",
" (orgLvl.versionLevel_,",
" orgLvl.releaseLevel_,",
" orgLvl.modificationLevel_)) ? true : false;",
" // write an entry to the trace",
" if (!alternateServerIsEqualOrHigherToOriginalServer &&",
" netAgent_.loggingEnabled()) {",
" netAgent_.logWriter_.tracepoint(\"[net]\",",
" 99,",
" \"Client Re-route failed because the alternate server is on a lower product level than the origianl server.\");",
" }",
" return (alternateServerIsEqualOrHigherToOriginalServer) ? 0 : -1;",
" }",
""
]
}
]
}
] |
derby-DERBY-5271-d954835a
|
DERBY-5271: Client may hang if the server crashes due to a java.lang.Error
Tries to ensure that if the network server crashes due to a condition
raising java.lang.Error, the client socket will be closed on the server
side. Note that even if one of the worker threads crashes, the network
server itself may remain operational. If the JVM process dies, the
sockets will be closed anyway.
Patch file: derby-5271-1a-inital_fix_proposal.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1158108 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java",
"hunks": [
{
"added": [
" } catch (Error error) {",
" // Do as little as possible, but try to cut loose the client",
" // to avoid that it hangs in a socket read-call.",
" try {",
" closeSession();",
" } catch (Throwable t) {",
" // One last attempt...",
" try {",
" session.clientSocket.close();",
" } catch (IOException ioe) {",
" // Ignore, we're in deeper trouble already.",
" } ",
" } finally {",
" throw error;",
" }",
" }"
],
"header": "@@ -318,7 +318,22 @@ class DRDAConnThread extends Thread {",
"removed": [
"\t\t\t}"
]
}
]
}
] |
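The shape of the fix above, as a standalone sketch with invented names: if a worker thread dies from a java.lang.Error, close the client socket before the Error propagates so the peer sees end-of-stream instead of blocking forever in a read.

    import java.io.IOException;
    import java.net.Socket;

    abstract class SessionRunnerSketch {
        // Hypothetical request loop; stands in for DRDAConnThread's body.
        abstract void serviceRequests(Socket s) throws IOException;

        final void runSession(Socket clientSocket) throws IOException {
            try {
                serviceRequests(clientSocket);
            } catch (Error error) {
                try {
                    clientSocket.close();   // cut the client loose
                } catch (IOException ioe) {
                    // already in deeper trouble; ignore
                }
                throw error;                // still let the Error terminate us
            }
        }
    }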
derby-DERBY-5274-e43e7e2d
|
DERBY-5274: getColumns() doesn't work with auto generated identity
columns that start with large numbers
Removed hard-coded maximum length for the start value and increment in
the meta-data query.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1136371 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5278-631e46c9
|
DERBY-5278: AssertionFailedError in IndexSplitDeadlockTest.testBTreeForwardScan_fetchRows_resumeAfterWait_unique_split()
Made synchronization between threads more reliable with wait/notify instead of sleep.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1136844 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5280-42114dab
|
DERBY-5280: Large batch of DDL in a database procedure dies on a transaction severity error
Backed out the fix for DERBY-5161 since it's causing a regression and
shouldn't be needed after DERBY-5157.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1138787 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5283-4eea8755
|
DERBY-5283: Crash / process termination during SYSCS_DISABLE_LOG_ARCHIVE_MODE can leave service.properties broken
Adds recovery logic for service.properties. To be able to recover, there must be
a backup file present. There are three different cases the logic can handle:
o corrupt original (no EOF token) and backup: use backup
o original and backup: delete backup
o backup only: rename to original
Patch file: derby-5283-1b-recover.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1188109 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/monitor/StorageFactoryService.java",
"hunks": [
{
"added": [
"import java.io.BufferedReader;",
"import java.io.FileReader;",
"import java.io.BufferedWriter;",
"import java.io.OutputStreamWriter;"
],
"header": "@@ -39,14 +39,18 @@ import org.apache.derby.io.WritableStorageFactory;",
"removed": []
},
{
"added": [
" /** Marker printed as the last line of the service properties file. */",
" private static final String SERVICE_PROPERTIES_EOF_TOKEN =",
" \"#--- last line, don't put anything after this line ---\";"
],
"header": "@@ -64,6 +68,9 @@ import org.apache.derby.iapi.services.io.FileUtil;",
"removed": []
},
{
"added": [
" resolveServicePropertiesFiles(storageFactory, file);"
],
"header": "@@ -276,6 +283,7 @@ final class StorageFactoryService implements PersistentService",
"removed": []
},
{
"added": [
" // Write the service properties to file."
],
"header": "@@ -323,6 +331,7 @@ final class StorageFactoryService implements PersistentService",
"removed": []
},
{
"added": [
" StorageFile backupFile = replace",
" ? storageFactory.newStorageFile(",
" PersistentService.PROPERTIES_NAME.concat(\"old\"))",
" : null;",
" FileOperationHelper foh = new FileOperationHelper();",
" foh.renameTo(",
" servicePropertiesFile, backupFile, true);",
" properties.store(os, serviceName +",
" MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_DONT_EDIT));",
" BufferedWriter bOut = new BufferedWriter(",
" new OutputStreamWriter(os));",
" bOut.write(SERVICE_PROPERTIES_EOF_TOKEN);",
" bOut.newLine();",
" bOut.close();",
" os = null; ",
" {",
" if (backupFile != null)",
" {",
" // Rename the old properties file back again.",
" foh.renameTo(backupFile, servicePropertiesFile,",
" false);",
" }",
" if (replace)",
" {",
" throw StandardException.newException(",
" SQLState.SERVICE_PROPERTIES_EDIT_FAILED,",
" ioe);",
" }",
" else",
" {",
" throw Monitor.exceptionStartingModule(ioe);",
" }",
" }",
" finally"
],
"header": "@@ -330,31 +339,55 @@ final class StorageFactoryService implements PersistentService",
"removed": [
" StorageFile backupFile = null;",
" backupFile = storageFactory.newStorageFile( PersistentService.PROPERTIES_NAME.concat(\"old\"));",
" try",
" {",
" if(!servicePropertiesFile.renameTo(backupFile))",
" throw StandardException.newException(SQLState.UNABLE_TO_RENAME_FILE,",
" servicePropertiesFile, backupFile);",
" }",
" catch (SecurityException se) { throw Monitor.exceptionStartingModule(se); }",
" properties.store( os, serviceName + MessageService.getTextMessage(MessageId.SERVICE_PROPERTIES_DONT_EDIT));",
" os = null;"
]
},
{
"added": [
" catch (IOException ioe)",
" // Ignore exception on close",
" if (!foh.delete(backupFile, false))",
" Monitor.getStream().printlnWithHeader(",
" MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_BACKUP_DEL_FAILED,",
" getMostAccuratePath(backupFile)));",
" "
],
"header": "@@ -362,31 +395,23 @@ final class StorageFactoryService implements PersistentService",
"removed": [
" catch (IOException ioe2) {}",
" os = null;",
" }",
"",
" if (backupFile != null)",
" {",
" // need to re-name the old properties file back again",
" try",
" servicePropertiesFile.delete();",
" backupFile.renameTo(servicePropertiesFile);",
" catch (SecurityException se) {}",
" throw Monitor.exceptionStartingModule(ioe);",
" try",
" backupFile.delete();",
" backupFile = null;",
" catch (SecurityException se) {}"
]
},
{
"added": [
" ",
" /**",
" * Resolves situations where a failure condition left the service properties",
" * file, and/or the service properties file backup, in an inconsistent",
" * state.",
" * <p>",
" * Note that this method doesn't resolve the situation where both the",
" * current service properties file and the backup file are missing.",
" *",
" * @param sf the storage factory for the service",
" * @param spf the service properties file",
" * @throws StandardException if a file operation on a service properties",
" * file fails",
" */",
" private void resolveServicePropertiesFiles(StorageFactory sf,",
" StorageFile spf)",
" throws StandardException {",
" StorageFile spfOld = sf.newStorageFile(PROPERTIES_NAME.concat(\"old\"));",
" FileOperationHelper foh = new FileOperationHelper();",
" boolean hasCurrent = foh.exists(spf, true);",
" boolean hasBackup = foh.exists(spfOld, true);",
" // Shortcut the normal case.",
" if (hasCurrent && !hasBackup) {",
" return;",
" }",
"",
" // Backup file, but no current file.",
" if (hasBackup && !hasCurrent) {",
" // Writing the new service properties file must have failed during",
" // an update. Rename the backup to be the current file.",
" foh.renameTo(spfOld, spf, true);",
" Monitor.getStream().printlnWithHeader(",
" MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_RESTORED));",
" // Backup file and current file.",
" } else if (hasBackup && hasCurrent) {",
" // See if the new (current) file is valid. If so delete the backup,",
" // if not, rename the backup to be the current.",
" BufferedReader bin = null;",
" String lastLine = null;",
" try {",
" bin = new BufferedReader(new FileReader(spf.getPath()));",
" String line;",
" while ((line = bin.readLine()) != null) {",
" if (line.trim().length() != 0) {",
" lastLine = line;",
" }",
" }",
" } catch (IOException ioe) {",
" throw StandardException.newException(",
" SQLState.UNABLE_TO_OPEN_FILE, ioe,",
" spf.getPath(), ioe.getMessage());",
" } finally {",
" try {",
" if (bin != null) {",
" bin.close();",
" }",
" } catch (IOException ioe) {",
" // Ignore exception during close",
" }",
" }",
" if (lastLine != null &&",
" lastLine.startsWith(SERVICE_PROPERTIES_EOF_TOKEN)) {",
" // Being unable to delete the backup file is fine as long as",
" // the current file appears valid.",
" String msg;",
" if (foh.delete(spfOld, false)) {",
" msg = MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_BACKUP_DELETED); ",
" } else {",
" // Include path so the user can delete file manually.",
" msg = MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_BACKUP_DEL_FAILED,",
" getMostAccuratePath(spfOld));",
" }",
" Monitor.getStream().printlnWithHeader(msg);",
" } else {",
" foh.delete(spf, false);",
" foh.renameTo(spfOld, spf, true);",
" Monitor.getStream().printlnWithHeader(",
" MessageService.getTextMessage(",
" MessageId.SERVICE_PROPERTIES_RESTORED));",
" }",
" } ",
" }"
],
"header": "@@ -454,6 +479,91 @@ final class StorageFactoryService implements PersistentService",
"removed": []
},
{
"added": [
" ( SQLState.SERVICE_PROPERTIES_MISSING, serviceName, PersistentService.PROPERTIES_NAME );"
],
"header": "@@ -691,7 +801,7 @@ final class StorageFactoryService implements PersistentService",
"removed": [
" ( SQLState.MISSING_SERVICE_PROPERTIES, serviceName, PersistentService.PROPERTIES_NAME );"
]
}
]
}
] |
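The three recovery cases can be sketched with plain java.io, away from the storage-factory plumbing; the EOF token is the string the patch writes, everything else here is illustrative:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    class ServicePropertiesRecoverySketch {
        static final String EOF_TOKEN =
            "#--- last line, don't put anything after this line ---";

        static void resolve(File current, File backup) throws IOException {
            if (backup.exists() && !current.exists()) {
                // Crash after the old file was renamed away: restore backup.
                backup.renameTo(current);
            } else if (backup.exists() && current.exists()) {
                if (endsWithToken(current)) {
                    backup.delete();          // new copy is complete
                } else {
                    current.delete();         // truncated write: use backup
                    backup.renameTo(current);
                }
            }
        }

        private static boolean endsWithToken(File f) throws IOException {
            BufferedReader r = new BufferedReader(new FileReader(f));
            String last = null;
            try {
                for (String line; (line = r.readLine()) != null; ) {
                    if (line.trim().length() != 0) last = line;
                }
            } finally {
                r.close();
            }
            return last != null && last.startsWith(EOF_TOKEN);
        }
    }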
derby-DERBY-5284-b1043a6a
|
DERBY-5284 A derby crash at exactly the right time during a btree split can cause a corrupt db which cannot be booted.
Fixed a problem during BTREE split. The first phase of btree split sees
if it can reclaim space from committed deleted rows. If it finds any
it purges these rows in a nested internal transaction. It needs to hold
the latch on the page until end of transaction, but did not. This allowed
a very small window of a few instructions where another insert could use
the space on the page and then a system crash could cause the purges to undo
but fail due to the insert.
The fix was to hold the latch and let commit release it.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1138275 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeController.java",
"hunks": [
{
"added": [
"\t * @return true if at least one row was purged. If true, then the routine",
" * will leave the page latched, and the caller will release",
" * the latch by committing or aborting the transaction. The",
" * latch must be held to end transaction to insure space on",
" * the page remains available for a undo of the purge."
],
"header": "@@ -109,7 +109,11 @@ public class BTreeController extends OpenBTree implements ConglomerateController",
"removed": [
"\t * @return true if at least one row was purged."
]
},
{
"added": [
" if (controlRow != null) ",
" {",
" if (purged_at_least_one_row) ",
" {",
" // the page. If at least one row has been purged, then",
" // do not release the latch. Purge requires latch to ",
" // be held until commit, where it will be released after",
" // the commit log record has been logged.",
" else",
" {",
" // Ok to release latch if no purging has happened.",
" controlRow.release();",
" }"
],
"header": "@@ -187,14 +191,23 @@ public class BTreeController extends OpenBTree implements ConglomerateController",
"removed": [
" if (controlRow != null) {",
" if (purged_at_least_one_row) {",
" // the page.",
" controlRow.release();"
]
},
{
"added": [
" // on return if !do_split then the latch on leaf_pageno is held",
" // and will be released by the committing or aborting the ",
" // transaction. If a purge has been done, no other action on",
" // the page should be attempted (ie. a split) before committing",
" // the purges.",
""
],
"header": "@@ -308,6 +321,12 @@ public class BTreeController extends OpenBTree implements ConglomerateController",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/access/heap/HeapPostCommit.java",
"hunks": [
{
"added": [
" // transaction commit take care of it. The latch must be",
" // held until end transaction in order to insure no other",
" // transaction uses the space freed by the purge, which",
" // would cause a subquent undo of the purge to fail."
],
"header": "@@ -222,7 +222,10 @@ class HeapPostCommit implements Serviceable",
"removed": [
" // transaction commit take care of it."
]
}
]
}
] |
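Reduced to control flow, the rule this fix adds to the split path is: once space has been reclaimed, do not touch the page again until the nested transaction commits. A compilable sketch with invented names, not the actual BTreeController code:

    abstract class SplitPhaseSketch {
        abstract boolean reclaimSpaceFromCommittedDeletes(Object leafPage);
        abstract void commitNestedTransaction();
        abstract void releaseLatch(Object leafPage);
        abstract void splitPage(Object leafPage);

        void firstPhaseOfSplit(Object leafPage) {
            boolean purged = reclaimSpaceFromCommittedDeletes(leafPage);
            if (purged) {
                // leafPage stays latched; committing the nested transaction
                // logs the purges and releases the latch. Only then may the
                // insert be retried -- never split before the commit.
                commitNestedTransaction();
            } else {
                releaseLatch(leafPage);  // nothing logged, safe to release
                splitPage(leafPage);     // and go ahead with the split
            }
        }
    }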
derby-DERBY-5288-df02986b
|
DERBY-5288: running multiple suites.All concurrently should be possible
Wait for the threads that read stdout and stderr to finish before trying
to fetch the output from SpawnedProcess.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1138795 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SpawnedProcess.java",
"hunks": [
{
"added": [
" private final StreamSaver errSaver;",
" private final StreamSaver outSaver;",
" public SpawnedProcess(Process javaProcess, String name) {",
" errSaver = streamSaver(javaProcess.getErrorStream(), name",
" outSaver = streamSaver(javaProcess.getInputStream(), name"
],
"header": "@@ -37,18 +37,17 @@ public final class SpawnedProcess {",
"removed": [
" private final ByteArrayOutputStream err;",
" private final ByteArrayOutputStream out;",
" public SpawnedProcess(Process javaProcess, String name) ",
" throws InterruptedException {",
" err = streamSaver(javaProcess.getErrorStream(), name",
" out = streamSaver(javaProcess.getInputStream(), name"
]
},
{
"added": [
" * <p>",
" * encoding which is assumed is how it was originally written.",
" * </p>",
" *",
" * <p>",
" * This method should only be called after the process has completed.",
" * That is, {@link #complete(boolean)} or {@link #complete(boolean, long)}",
" * should be called first.",
" * </p>",
" // First wait until we've read all the output.",
" outSaver.thread.join();",
"",
" return outSaver.stream.toString();"
],
"header": "@@ -60,14 +59,23 @@ public final class SpawnedProcess {",
"removed": [
" * encoding which is assumed is how it was orginally",
" * written.",
" Thread.sleep(500);",
" return out.toString(); "
]
},
{
"added": [
" fullData = outSaver.stream.toByteArray();"
],
"header": "@@ -85,7 +93,7 @@ public final class SpawnedProcess {",
"removed": [
" fullData = out.toByteArray();"
]
},
{
"added": [
"",
" ByteArrayOutputStream err = errSaver.stream;",
" ByteArrayOutputStream out = outSaver.stream;",
""
],
"header": "@@ -111,7 +119,10 @@ public final class SpawnedProcess {",
"removed": [
" "
]
},
{
"added": [
" /**",
" * Complete the process."
],
"header": "@@ -127,7 +138,8 @@ public final class SpawnedProcess {",
"removed": [
" /*Complete the method"
]
},
{
"added": [
" * Complete the process."
],
"header": "@@ -135,7 +147,7 @@ public final class SpawnedProcess {",
"removed": [
" * Complete the method."
]
},
{
"added": [
"",
" // The process has completed. Wait until we've read all output.",
" outSaver.thread.join();",
" errSaver.thread.join();",
"",
" ByteArrayOutputStream err = errSaver.stream;"
],
"header": "@@ -168,10 +180,15 @@ public final class SpawnedProcess {",
"removed": [
" Thread.sleep(500);"
]
},
{
"added": [
" ByteArrayOutputStream out = outSaver.stream;"
],
"header": "@@ -180,6 +197,7 @@ public final class SpawnedProcess {",
"removed": []
},
{
"added": [
" /**",
" * Class holding references to a stream that receives the output from a",
" * process and a thread that reads the process output and passes it on",
" * to the stream.",
" */",
" private static class StreamSaver {",
" final ByteArrayOutputStream stream;",
" final Thread thread;",
" StreamSaver(ByteArrayOutputStream stream, Thread thread) {",
" this.stream = stream;",
" this.thread = thread;",
" }",
" }",
"",
" private StreamSaver streamSaver(final InputStream in,",
" final String name) {"
],
"header": "@@ -192,8 +210,22 @@ public final class SpawnedProcess {",
"removed": [
" private ByteArrayOutputStream streamSaver(final InputStream in,",
" final String name) throws InterruptedException {"
]
},
{
"added": [
" return new StreamSaver(out, streamReader);"
],
"header": "@@ -223,9 +255,7 @@ public final class SpawnedProcess {",
"removed": [
" streamReader.join(500);",
"",
" return out;"
]
}
]
}
] |
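The heart of the fix is replacing a fixed Thread.sleep(500) with join() on the threads that drain the child process's pipes. A standalone sketch of the pattern (class and method names invented):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    class StreamSaverSketch {
        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        final Thread reader;

        StreamSaverSketch(final InputStream in) {
            reader = new Thread(new Runnable() {
                public void run() {
                    byte[] b = new byte[4096];
                    try {
                        for (int n; (n = in.read(b)) != -1; ) {
                            buffer.write(b, 0, n);
                        }
                    } catch (IOException e) {
                        // pipe closed as the process died; stop draining
                    }
                }
            });
            reader.start();
        }

        String fullOutput() throws InterruptedException {
            reader.join();   // wait for EOF rather than a fixed 500 ms nap
            return buffer.toString();
        }
    }

Call fullOutput() only after Process.waitFor() returns; join() then guarantees the buffer holds everything the child wrote, which a timed sleep never could.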
derby-DERBY-5289-0f64b702
|
DERBY-5289 Unable to boot 10.5.1.1 database - fails during soft/hard upgrade process for a new version number while trying to drop jdbc metadata
Checking in a test case for DERBY-5289. In trunk, the DERBY-3870 fix contributed by Knut Anders Hatlen fixed the issue, so no code change is needed. Just the portion of DERBY-3870 that is relevant to DERBY-5289 will be backported to the other branches.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1139449 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5289-ae72a308
|
DERBY-6003: Create row templates outside of the generated code
Upgrade test fix in preparation for the actual fix for this issue.
Improve SYSCS_INVALIDATE_STORED_STATEMENTS by making it null out the
plans in SYS.SYSSTATEMENTS. Previously, it only marked them as invalid.
Use the improved SYSCS_INVALIDATE_STORED_STATEMENTS to work around
problems in the upgrade tests when downgrading to a version that suffers
from DERBY-4835 or DERBY-5289. Remove the old workarounds for DERBY-4835,
DERBY-5105, DERBY-5263 and DERBY-5289, as they are now handled by the
centralized workaround that uses SYSCS_INVALIDATE_STORED_STATEMENTS.
This change is needed because later patches for this issue will change
the format of many stored plans, so more of the test cases need to work
around the downgrade bugs in some old versions.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1418296 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
" * @param recompile Whether to recompile or invalidate"
],
"header": "@@ -4429,6 +4429,7 @@ public final class\tDataDictionaryImpl",
"removed": []
}
]
}
] |
derby-DERBY-5292-7bfb37a8
|
DERBY-5292 SQLAuthorisation and views
Patch derby5292d: For views, the permissions collection is disabled
for tables in the query from-list since a view should run with definer's
rights. However, this disabling did not work for all cases. The way to
reach all the "from" tables has been changed to use a node visitor
instead, so as to be able to also reach tables inside subqueries and
inside explicit JOIN and set operations, cf. the repro issues. A test
case was added to GrantRevokeTest: testViewDefinersRights.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1142635 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"import java.util.Iterator;"
],
"header": "@@ -75,14 +75,12 @@ import org.apache.derby.iapi.store.access.TransactionController;",
"removed": [
"import org.apache.derby.impl.sql.compile.ExpressionClassBuilder;",
"import org.apache.derby.impl.sql.compile.ActivationClassBuilder;",
"import org.apache.derby.impl.sql.compile.FromSubquery;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromSubquery.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.compile.Visitor;"
],
"header": "@@ -29,6 +29,7 @@ import org.apache.derby.iapi.sql.dictionary.DataDictionary;",
"removed": []
},
{
"added": [],
"header": "@@ -658,15 +659,6 @@ public class FromSubquery extends FromTable",
"removed": [
"\t/** ",
"\t * @see QueryTreeNode#disablePrivilegeCollection",
"\t */",
"\tpublic void disablePrivilegeCollection()",
"\t{",
"\t\tsuper.disablePrivilegeCollection();",
"\t\tsubquery.disablePrivilegeCollection();",
"\t}",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SelectNode.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package\torg.apache.derby.impl.sql.compile;",
"removed": [
"import org.apache.derby.iapi.sql.compile.Visitable;"
]
},
{
"added": [],
"header": "@@ -34,8 +33,6 @@ import org.apache.derby.iapi.sql.compile.CompilerContext;",
"removed": [
"import org.apache.derby.iapi.types.TypeId;",
"import org.apache.derby.iapi.types.DataTypeDescriptor;"
]
}
]
}
] |
derby-DERBY-5293-ccb1894d
|
DERBY-5293: Replace bubble sort in DataDictionaryImpl and CreateTriggerNode with Collections.sort()
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1141005 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"import java.util.Collections;",
"import java.util.Comparator;"
],
"header": "@@ -155,6 +155,8 @@ import org.apache.derby.iapi.util.IdUtil;",
"removed": []
},
{
"added": [
" /**",
" * Comparator that can be used for sorting lists of column references",
" * on the position they have in the SQL query string.",
" */",
" private static final Comparator OFFSET_COMPARATOR = new Comparator() {",
" public int compare(Object o1, Object o2) {",
" // Return negative int, zero, or positive int if the first column",
" // reference has an offset which is smaller than, equal to, or",
" // greater than the offset of the second column reference.",
" return ((ColumnReference) o1).getBeginOffset() -",
" ((ColumnReference) o2).getBeginOffset();",
" }",
" };",
""
],
"header": "@@ -4684,6 +4686,20 @@ public final class\tDataDictionaryImpl",
"removed": []
},
{
"added": [
"\t\tCollections.sort(refs, OFFSET_COMPARATOR);"
],
"header": "@@ -4818,7 +4834,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tQueryTreeNode[] cols = sortRefs(refs, true);"
]
},
{
"added": [
"\t\t\tfor (int i = 0; i < refs.size(); i++)",
"\t\t\t\tColumnReference ref = (ColumnReference) refs.get(i);"
],
"header": "@@ -4876,9 +4892,9 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\tfor (int i = 0; i < cols.length; i++)",
"\t\t\t\tColumnReference ref = (ColumnReference) cols[i];"
]
},
{
"added": [
"\t\tfor (int i = 0; i < refs.size(); i++)",
"\t\t\tColumnReference ref = (ColumnReference) refs.get(i);"
],
"header": "@@ -4992,9 +5008,9 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tfor (int i = 0; i < cols.length; i++)",
"\t\t\tColumnReference ref = (ColumnReference) cols[i];\t\t\t\t"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CreateTriggerNode.java",
"hunks": [
{
"added": [
"import java.util.Collections;",
"import java.util.Comparator;"
],
"header": "@@ -22,6 +22,8 @@",
"removed": []
},
{
"added": [
" /**",
" * Comparator that can be used for sorting lists of FromBaseTables",
" * on the position they have in the SQL query string.",
" */",
" private static final Comparator OFFSET_COMPARATOR = new Comparator() {",
" public int compare(Object o1, Object o2) {",
" // Return negative int, zero, or positive int if the offset of the",
" // first table is less than, equal to, or greater than the offset",
" // of the second table.",
" return ((FromBaseTable) o1).getTableNameField().getBeginOffset() -",
" ((FromBaseTable) o2).getTableNameField().getBeginOffset();",
" }",
" };",
""
],
"header": "@@ -454,6 +456,20 @@ public class CreateTriggerNode extends DDLStatementNode",
"removed": []
},
{
"added": [
"\t\t\tVector tabs = visitor.getList();",
"\t\t\tCollections.sort(tabs, OFFSET_COMPARATOR);",
"\t\t\tfor (int i = 0; i < tabs.size(); i++)",
"\t\t\t\tFromBaseTable fromTable = (FromBaseTable) tabs.get(i);"
],
"header": "@@ -587,11 +603,11 @@ public class CreateTriggerNode extends DDLStatementNode",
"removed": [
"\t\t\tVector refs = visitor.getList();",
"\t\t\tQueryTreeNode[] tabs = sortRefs(refs, false);",
"\t\t\tfor (int i = 0; i < tabs.length; i++)",
"\t\t\t\tFromBaseTable fromTable = (FromBaseTable) tabs[i];"
]
},
{
"added": [],
"header": "@@ -684,43 +700,6 @@ public class CreateTriggerNode extends DDLStatementNode",
"removed": [
"\t/*",
"\t** Sort the refs into array.",
"\t*/",
"\tprivate QueryTreeNode[] sortRefs(Vector refs, boolean isRow)",
"\t{",
"\t\tint size = refs.size();",
"\t\tQueryTreeNode[] sorted = (QueryTreeNode[]) refs.toArray(new QueryTreeNode[size]);",
"\t\tint i = 0;",
"\t\t/* bubble sort",
"\t\t */",
"\t\tQueryTreeNode temp;",
"\t\tfor (i = 0; i < size - 1; i++)",
"\t\t{",
"\t\t\ttemp = null;",
"\t\t\tfor (int j = 0; j < size - i - 1; j++)",
"\t\t\t{",
"\t\t\t\tif ((isRow && ",
"\t\t\t\t\t sorted[j].getBeginOffset() > ",
"\t\t\t\t\t sorted[j+1].getBeginOffset()",
"\t\t\t\t\t) ||",
"\t\t\t\t\t(!isRow &&",
"\t\t\t\t\t ((FromBaseTable) sorted[j]).getTableNameField().getBeginOffset() > ",
"\t\t\t\t\t ((FromBaseTable) sorted[j+1]).getTableNameField().getBeginOffset()",
"\t\t\t\t\t))",
"\t\t\t\t{",
"\t\t\t\t\ttemp = sorted[j];",
"\t\t\t\t\tsorted[j] = sorted[j+1];",
"\t\t\t\t\tsorted[j+1] = temp;",
"\t\t\t\t}",
"\t\t\t}",
"\t\t\tif (temp == null)\t\t// sorted",
"\t\t\t\tbreak;",
"\t\t}",
"",
"\t\treturn sorted;",
"\t}",
""
]
}
]
}
] |
derby-DERBY-530-82d721fd
|
DERBY-530
ClientDriver ignores Properties object in connect(String url, Properties connectionProperties) method
Send both the properties specified in the info parameter and those specified in the url to the server in the RDBNAM. The user and password attributes are the exception: they are sent via the standard DRDA mechanism and excluded from the attributes sent with RDBNAM, whether specified in the url or the info properties. As a result of the combination, the order of attributes sent to the server may differ from the order originally specified in the URL.
Also added additional driver tests and attribute tests to checkDriver.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@289227 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/jdbc/ClientDriver.java",
"hunks": [
{
"added": [
"import java.util.Enumeration;",
"import java.util.Properties;",
""
],
"header": "@@ -20,6 +20,9 @@",
"removed": []
},
{
"added": [
" // database is the database name and attributes. This will be",
" database = appendDatabaseAttributes(database,augmentedProperties);"
],
"header": "@@ -92,11 +95,11 @@ public class ClientDriver implements java.sql.Driver {",
"removed": [
" // longDatabase is the databaseName and attributes. This will be",
""
]
},
{
"added": [
" /**",
" * Append attributes to the database name except for user/password ",
" * which are sent as part of the protocol.",
" * Other attributes will be sent to the server with the database name",
" * Assumes augmentedProperties is not null",
" * ",
"\t * @param database - Short database name",
"\t * @param augmentedProperties - Set of properties to append as attributes",
"\t * @return databaseName + attributes (e.g. mydb;create=true) ",
"\t */",
"\tprivate String appendDatabaseAttributes(String database, Properties augmentedProperties) {",
"\t",
"\t\tStringBuffer longDatabase = new StringBuffer(database);",
"\t\tfor (Enumeration keys = augmentedProperties.keys(); keys.hasMoreElements() ;)",
"\t\t{",
"\t\t\tString key = (String) keys.nextElement();",
"\t\t\tif (key.equals(ClientDataSource.propertyKey_user) || ",
"\t\t\t\tkey.equals(ClientDataSource.propertyKey_password))",
"\t\t\t\tcontinue;",
"\t\t\tlongDatabase.append(\";\" + key + \"=\" + augmentedProperties.getProperty(key));",
"\t\t}",
"\t\treturn longDatabase.toString();",
"\t}",
"",
"\tpublic boolean acceptsURL(String url) throws java.sql.SQLException {",
" java.util.StringTokenizer urlTokenizer = ",
" \t\tnew java.util.StringTokenizer(url, \"/:=; \\t\\n\\r\\f\", true);"
],
"header": "@@ -129,8 +132,33 @@ public class ClientDriver implements java.sql.Driver {",
"removed": [
" public boolean acceptsURL(String url) throws java.sql.SQLException {",
" java.util.StringTokenizer urlTokenizer = new java.util.StringTokenizer(url, \"/:=; \\t\\n\\r\\f\", true);"
]
}
]
}
] |
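Seen from the application, the fix means attributes passed in the Properties argument now reach the server too. A hedged usage sketch; the URL, credentials, and attribute are examples only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ConnectWithProperties {
        public static void main(String[] args) throws Exception {
            Properties info = new Properties();
            info.setProperty("user", "app");     // sent via the DRDA mechanism
            info.setProperty("password", "app"); // likewise, never in RDBNAM
            info.setProperty("create", "true");  // appended to the database
                                                 // name as "mydb;create=true"
            Connection c = DriverManager.getConnection(
                    "jdbc:derby://localhost:1527/mydb", info);
            c.close();
        }
    }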
derby-DERBY-5300-39837074
|
DERBY-5300: Change derby.tests.trace to print the class as well as fixture name
Introduces the following changes to the output controlled by derby.tests.trace:
a) Print '(emb)' or '(net)' to show which driver/framework is being used.
b) Test class names are shortened if possible. The following prefixes are
stripped off:
o 'org.apache.derbyTesting.functionTests.tests.'
o 'org.apache.derbyTesting.'
Patch contributed by Jayaram Subramanian (rsjay1976 at gmail dot com).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1185330 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/Utilities.java",
"hunks": [
{
"added": [
"",
" /**",
" * Function to eliminate known package prefixes given a class full path",
" * ",
" * @param test",
" * class name prefixed with package",
" */",
" public static String formatTestClassNames(String mainString) {",
" final String COMMON_FUNCTIONTEST_PREFIX = \"org.apache.derbyTesting.functionTests.tests.\";",
" final String COMMON_TEST_PREFIX = \"org.apache.derbyTesting.\";",
" if (mainString.startsWith(COMMON_FUNCTIONTEST_PREFIX)) {",
" return mainString.substring(COMMON_FUNCTIONTEST_PREFIX.length());",
" } else if (mainString.startsWith(COMMON_TEST_PREFIX)) {",
" return mainString.substring(COMMON_TEST_PREFIX.length());",
" } else {",
" return mainString;",
" }",
" }"
],
"header": "@@ -231,4 +231,22 @@ public class Utilities {",
"removed": []
}
]
}
] |
derby-DERBY-5300-e7b124dd
|
DERBY-5300: Change derby.tests.trace to print the class as well as fixture name
Changed code to agree with Javadoc param comment ("mainString" -> "test").
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1185465 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/Utilities.java",
"hunks": [
{
"added": [
" public static String formatTestClassNames(String test) {",
" if (test.startsWith(COMMON_FUNCTIONTEST_PREFIX)) {",
" return test.substring(COMMON_FUNCTIONTEST_PREFIX.length());",
" } else if (test.startsWith(COMMON_TEST_PREFIX)) {",
" return test.substring(COMMON_TEST_PREFIX.length());",
" return test;"
],
"header": "@@ -238,15 +238,15 @@ public class Utilities {",
"removed": [
" public static String formatTestClassNames(String mainString) {",
" if (mainString.startsWith(COMMON_FUNCTIONTEST_PREFIX)) {",
" return mainString.substring(COMMON_FUNCTIONTEST_PREFIX.length());",
" } else if (mainString.startsWith(COMMON_TEST_PREFIX)) {",
" return mainString.substring(COMMON_TEST_PREFIX.length());",
" return mainString;"
]
}
]
}
] |
derby-DERBY-5308-f30426b5
|
DERBY-1903 Convert largedata/LobLimits.java to junit
DERBY-5308 Investigate if largeData/LobLimits.java can be run for client
Patch derby-1903_client_diff.txt enables client for largedata.LobLimitsLite. It disables the test cases that fail with client:
DERBY-5338 client gives wrong SQLState and protocol error inserting a 4GB clob. Should be 22003
DERBY-5341: Client allows clob larger than column width to be inserted.
DERBY-5317 cannot use setCharacterStream with value from C/Blob.getCharacterStream
Also fixes the test to fail if we do not get an exception for negative test cases and fixes a javadoc warning.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1147335 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5310-29ba0c10
|
DERBY-5310: PropertySetter prints warning when building with JDK 7
Make PropertySetter recognize "Oracle Corporation" in java.vendor.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1142591 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/build/org/apache/derbyPreBuild/PropertySetter.java",
"hunks": [
{
"added": [
" private static final String JDK_ORACLE = \"Oracle Corporation\";"
],
"header": "@@ -122,6 +122,7 @@ public class PropertySetter extends Task",
"removed": []
},
{
"added": [
" else if ( usingOracleJDK( jdkVendor ) ) { setForOracleJDKs(); }"
],
"header": "@@ -305,7 +306,7 @@ public class PropertySetter extends Task",
"removed": [
" else if ( JDK_SUN.equals( jdkVendor ) ) { setForSunJDKs(); }"
]
},
{
"added": [
" // SET PROPERTIES FOR Oracle JDKs",
" * Set the properties needed to compile using the Oracle JDKs. This",
" private void setForOracleJDKs()"
],
"header": "@@ -400,17 +401,17 @@ public class PropertySetter extends Task",
"removed": [
" // SET PROPERTIES FOR Sun JDKs",
" * Set the properties needed to compile using the Sun JDKs. This",
" private void setForSunJDKs()"
]
},
{
"added": [
"",
" /**",
" * Return true if we are using an Oracle JDK.",
" */",
" private static boolean usingOracleJDK(String vendor)",
" {",
" return JDK_SUN.equals(vendor) || JDK_ORACLE.equals(vendor);",
" }",
""
],
"header": "@@ -1180,7 +1181,15 @@ public class PropertySetter extends Task",
"removed": [
" "
]
}
]
}
] |
derby-DERBY-5312-08b4ed59
|
DERBY-5312 InterruptResilienceTest failed with ERROR 40XD1: Container was opened in read-only
Patch derby-5312-simplify-reopen-1. It fixes the race condition by
reducing the amount of work done while reopening the container, thus
sidestepping the issue by no longer updating the state variable that
caused the race.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1148344 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java",
"hunks": [
{
"added": [
" private static final int REOPEN_CONTAINER_ACTION = 8;"
],
"header": "@@ -74,16 +74,8 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
"",
" /**",
" * Identity of this container. Make it visible to RAFContainer4, which may",
" * need to reopen the container after interrupts due to a NIO channel being",
" * closed by the interrupt.",
" */",
" protected ContainerKey currentIdentity;",
" private boolean reopen;",
""
]
},
{
"added": [],
"header": "@@ -811,7 +803,6 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" currentIdentity = newIdentity;"
]
},
{
"added": [
" return AccessController.doPrivileged( this) != null;",
" catch( PrivilegedActionException pae) {"
],
"header": "@@ -849,45 +840,16 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" protected ContainerKey idAPriori = null;",
" return openContainerMinion(newIdentity, false);",
" }",
"",
" synchronized boolean reopenContainer(ContainerKey newIdentity)",
" throws StandardException {",
" return openContainerMinion(newIdentity, true);",
" }",
"",
" private boolean openContainerMinion(",
" ContainerKey newIdentity,",
" boolean doReopen) throws StandardException",
" {",
" reopen = doReopen;",
" boolean success = false;",
" idAPriori = currentIdentity;",
"",
" currentIdentity = newIdentity;",
" // NIO: We need to set currentIdentity before we try to open, in",
" // case we need its value to perform a recovery in the case of an",
" // interrupt during readEmbryonicPage as part of",
" // OPEN_CONTAINER_ACTION. Note that this gives a recursive call to",
" // openContainer.",
" //",
" // If we don't succeed in opening, we reset currentIdentity to its",
" // a priori value.",
"",
" success = AccessController.doPrivileged(this) != null;",
" idAPriori = currentIdentity;",
" return success;",
" catch( PrivilegedActionException pae) { "
]
},
{
"added": [
" actionIdentity = null;",
" }",
" }",
" /**",
" * Only used by RAFContainer4 (NIO) to reopen RAF when its channel gets",
" * closed due to interrupts.",
" *",
" * @param currentIdentity",
" * @throws StandardException standard exception policy",
" */",
" protected synchronized void reopenContainer(ContainerKey currentIdentity)",
" throws StandardException {",
"",
" actionCode = REOPEN_CONTAINER_ACTION;",
" actionIdentity = currentIdentity;",
"",
" try {",
" AccessController.doPrivileged(this);",
" } catch (PrivilegedActionException pae) {",
" closeContainer();",
" throw (StandardException) pae.getException();",
" } catch (RuntimeException e) {",
" closeContainer();",
" throw e;",
" } finally {",
" actionIdentity = null;"
],
"header": "@@ -897,11 +859,33 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" if (!success) {",
" currentIdentity = idAPriori;",
" }",
" actionIdentity = null; "
]
},
{
"added": [
" readHeader(getEmbryonicPage(fileData,",
" FIRST_ALLOC_PAGE_OFFSET));",
" "
],
"header": "@@ -1475,15 +1459,9 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction",
"removed": [
" if (!reopen) {",
" // under reopen: can give race condition or if we",
" // synchronize access, deadlock, so skip, we know",
" // what's there anyway.",
" readHeader(getEmbryonicPage(fileData,",
" FIRST_ALLOC_PAGE_OFFSET));",
" }",
"",
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java",
"hunks": [
{
"added": [],
"header": "@@ -83,8 +83,6 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" private Thread threadDoingRestore = null;",
""
]
},
{
"added": [
" private ContainerKey currentIdentity;",
""
],
"header": "@@ -166,6 +164,8 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
" currentIdentity = newIdentity;"
],
"header": "@@ -183,6 +183,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [
"",
" currentIdentity = newIdentity;",
" /**",
" * When the existing channel ({@code ourChannel}) has been closed due to",
" * interrupt, we need to reopen the underlying RAF to get a fresh channel",
" * so we can resume IO.",
" */",
" private void reopen() throws StandardException {",
" if (SanityManager.DEBUG) {",
" SanityManager.ASSERT(!ourChannel.isOpen());",
" }",
" ourChannel = null;",
" reopenContainer(currentIdentity);",
" }",
""
],
"header": "@@ -199,9 +200,24 @@ class RAFContainer4 extends RAFContainer {",
"removed": []
},
{
"added": [],
"header": "@@ -311,14 +327,6 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" if (Thread.currentThread() == threadDoingRestore) {",
" // Reopening the container will do readEmbryonicPage",
" // (i.e. ReadPage is called recursively from",
" // recoverContainerAfterInterrupt), so now let's make",
" // sure we don't get stuck waiting for ourselves ;-)",
" break;",
" }",
""
]
},
{
"added": [],
"header": "@@ -842,7 +850,6 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" threadDoingRestore = Thread.currentThread();"
]
},
{
"added": [],
"header": "@@ -860,7 +867,6 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" threadDoingRestore = null;"
]
},
{
"added": [
" reopen();"
],
"header": "@@ -895,13 +901,7 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" closeContainer();",
" reopenContainer(currentIdentity);",
" } catch (InterruptDetectedException e) {",
" // Interrupted again?",
" debugTrace(\"interrupted during recovery's \" +",
" \"readEmbryonicPage\");",
" continue;"
]
},
{
"added": [],
"header": "@@ -934,7 +934,6 @@ class RAFContainer4 extends RAFContainer {",
"removed": [
" threadDoingRestore = null;"
]
}
]
}
] |
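The RAFContainer / RAFContainer4 hunks above deal with a quirk of java.nio: interrupting a thread blocked in channel I/O closes the whole FileChannel, so the container has to remember its identity (the added currentIdentity field) and reopen the underlying file before I/O can resume. A minimal sketch of that recovery pattern, using a hypothetical reader class rather than Derby's actual container code:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;

// Hypothetical reader illustrating reopen-after-interrupt; not Derby code.
class InterruptTolerantReader {
    private final String path;   // plays the role of the container identity
    private FileChannel channel;

    InterruptTolerantReader(String path) throws IOException {
        this.path = path;
        reopen();
    }

    // A fresh RandomAccessFile yields a fresh, open channel; the old
    // channel was already closed by the interrupt.
    private void reopen() throws IOException {
        channel = new RandomAccessFile(path, "r").getChannel();
    }

    void read(ByteBuffer buf, long pos) throws IOException {
        while (true) {
            try {
                channel.read(buf, pos);
                return;
            } catch (ClosedByInterruptException e) {
                // Clear the interrupt flag so the retry is not killed
                // immediately, then restart the read from scratch.
                Thread.interrupted();
                buf.rewind();
                reopen();
            }
        }
    }
}
```

As in the diff, the identity needed for reopening must be captured when the container is first opened, because the recovery code runs far from the original open call.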
derby-DERBY-5313-e828232e
|
DERBY-5313: Assert failure with CASE expression in GROUP BY clause
Remove an assert that does not expect expressions in the GROUP BY list.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1575866 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5316-09ecd71c
|
DERBY-5343: Upgrade tests failing with java.lang.IllegalAccessException
Rework the workaround for DERBY-23 added by DERBY-5316 so that it
doesn't attempt to modify final fields. Modifying final fields
doesn't seem to work prior to Java 5.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1148302 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5316-e9666094
|
DERBY-5316: Unload old JDBC drivers when done with them in the upgrade tests
Added a workaround for DERBY-23.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1145973 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5316-eb60359d
|
DERBY-5316: Unload old JDBC drivers when done with them in the upgrade tests
Assert that the driver is only unloaded in the affected versions.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1145553 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5316-f89c1b58
|
DERBY-5316: Unload old JDBC drivers when done with them in the upgrade tests
When we're done with an old version in the upgrade test, deregister all
JDBC drivers in the old version's class loader from DriverManager.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1145111 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
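The deregistration step described above can be sketched with nothing but the standard java.sql API (the helper class below is made up for illustration). One caveat: DriverManager.getDrivers() only returns drivers whose classes are visible to the calling code, which is part of what makes this tricky across class loaders:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

// Hypothetical helper: drop every registered JDBC driver that was loaded
// by the given class loader, so the loader can be garbage collected.
public class DriverCleanup {
    public static void deregisterDrivers(ClassLoader loader)
            throws SQLException {
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            if (d.getClass().getClassLoader() == loader) {
                DriverManager.deregisterDriver(d);
            }
        }
    }
}
```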
derby-DERBY-5317-b862050f
|
DERBY-5317: Detect attempts to reuse a connection that is in the middle of sending a request to the server. Use this to provide a better error message and avoid the NPE.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1530704 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Agent.java",
"hunks": [
{
"added": [
" ",
" abstract public void beginWriteChainOutsideUOW() throws SqlException;"
],
"header": "@@ -267,9 +267,8 @@ public abstract class Agent {",
"removed": [
"",
" public void beginWriteChainOutsideUOW() throws SqlException {",
" }"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetAgent.java",
"hunks": [
{
"added": [
" /**",
" * Flag which indicates that a writeChain has been started and data sent to",
" * the server.",
" * If true, starting a new write chain will throw a DisconnectException. ",
" * It is cleared when the write chain is ended.",
" */",
" private boolean writeChainIsDirty_ = false;"
],
"header": "@@ -107,6 +107,13 @@ public class NetAgent extends Agent {",
"removed": []
},
{
"added": [
" /**",
" * Marks the agent's write chain as dirty. A write chain is dirty when data",
" * from it has been sent to the server. A dirty write chain cannot be reset ",
" * and reused for another request until the remaining data has been sent to",
" * the server and the write chain properly ended. ",
" * ",
" * Resetting a dirty chain will cause the new request to be appended to the ",
" * unfinished request already at the server, which will likely lead to ",
" * cryptic syntax errors.",
" */",
" void markWriteChainAsDirty() { ",
" writeChainIsDirty_ = true;",
" }",
" ",
" private void verifyWriteChainIsClean() throws DisconnectException {",
" if (writeChainIsDirty_) { ",
" throw new DisconnectException(this, ",
" new ClientMessageId(SQLState.NET_WRITE_CHAIN_IS_DIRTY));",
" }",
" }",
" verifyWriteChainIsClean();",
" verifyWriteChainIsClean();",
" protected void endWriteChain() {}",
" "
],
"header": "@@ -462,23 +469,41 @@ public class NetAgent extends Agent {",
"removed": [
"",
" super.beginWriteChainOutsideUOW();",
" protected void endWriteChain() {",
" super.endWriteChain();",
" }",
""
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/Request.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.ExceptionUtil;"
],
"header": "@@ -41,6 +41,7 @@ import java.nio.ByteBuffer;",
"removed": []
},
{
"added": [
" } catch (IOException ioe) {",
" if (netAgent_.getOutputStream() == null) {",
" // The exception has taken down the connection, so we ",
" // check if it was caused by attempting to ",
" // read the stream from our own connection...",
" for (Throwable t = ioe; t != null; t = t.getCause()) {",
" if (t instanceof SqlException",
" && ((SqlException) t).getSQLState().equals(ExceptionUtil.getSQLStateFromIdentifier(SQLState.NET_WRITE_CHAIN_IS_DIRTY))) {",
" throw new SqlException(netAgent_.logWriter_,",
" new ClientMessageId(SQLState.NET_LOCATOR_STREAM_PARAMS_NOT_SUPPORTED),",
" ioe, parameterIndex);",
" }",
" }",
" // Something else has killed the connection, fast forward to despair...",
" throw new SqlException(netAgent_.logWriter_,",
" new ClientMessageId(SQLState.NET_DISCONNECT_EXCEPTION_ON_READ),",
" ioe, parameterIndex, ioe.getMessage());",
" }",
" // The OutPutStream is still intact so try to finish request",
" // with what we managed to read",
"",
" new SqlException(",
" ioe, parameterIndex, ioe.getMessage()));"
],
"header": "@@ -315,16 +316,36 @@ class Request {",
"removed": [
" } catch (Exception e) {",
" new SqlException(",
" e, parameterIndex, e.getMessage()));"
]
}
]
}
] |
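Stripped of the agent plumbing in the hunks above, the fix is a guard flag: once any bytes of a request chain have been sent to the server, the chain is "dirty" and must not be reset for a new request, or the new request would be appended to the unfinished one on the wire. A sketch with illustrative names (not the client's actual classes):

```java
import java.sql.SQLException;

// Illustrative guard; mirrors markWriteChainAsDirty / verifyWriteChainIsClean.
class WriteChainGuard {
    private boolean dirty;

    // Called before reusing the chain for a new request.
    void beginWriteChain() throws SQLException {
        if (dirty) {
            // Stands in for the NET_WRITE_CHAIN_IS_DIRTY disconnect error.
            throw new SQLException(
                "write chain is dirty; cannot start a new request");
        }
    }

    // Called as soon as data from the current chain is flushed to the server.
    void markDirty() {
        dirty = true;
    }

    // Called when the request has been fully sent; the chain may be reused.
    void endWriteChain() {
        dirty = false;
    }
}
```

Failing fast on the client side turns a cryptic server-side syntax error into a clear error message, which is the improvement the commit message claims.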
derby-DERBY-5317-f30426b5
|
DERBY-1903 Convert largedata/LobLimits.java to junit
DERBY-5308 Investigate if largeData/LobLimits.java can be run for client
Patch derby-1903_client_diff.txt enables the client framework for largedata.LobLimitsLite. It disables the test cases that fail with the client:
DERBY-5338 client gives wrong SQLState and protocol error inserting a 4GB clob. Should be 22003
DERBY-5341 : Client allows clob larger than column width to be inserted.
DERBY-5317 cannot use setCharacterStream with value from C/Blob.getCharacterStream
Also fixes the test to fail if we do not get an exception for negative test cases and fixes a javadoc warning.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1147335 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-5318-840ed3f5
|
DERBY-5318: Use assertDirectoryDeleted in ReplicationRun and remove dead code
Use assertDirectoryDeleted to get better error reporting when the test fails
to delete a database directory.
Remove dead code in ReplicationRun and Utils.
Patch file: derby-5318-1a-cleanup.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1146644 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-532-00df88c3
|
DERBY-532: Support deferrable constraints
Patch derby-532-testAlterConstraintInvalidation.
Adds a fixture to test that a prepared statement is invalidated when a
table it depends on undergoes an ALTER TABLE ALTER CONSTRAINT
statement. As it turns out, this is already handled by the common
machinery for ALTER TABLE.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1519045 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-532-0493010b
|
DERBY-532 Support deferrable constraints
Substituted BaseJDBCTestCase#dropTable for the home-grown version; we
prefer the standard way to do it.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1594255 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-532-400cc602
|
DERBY-532 Support deferrable constraints
Patch derby-532-nullableUniqueFix. When we changed the implementation away from
special treatment of deferrable constraints in the BTree, a couple of extra
predicates that needed to be added were omitted; they are added here: we should not
mark the physical index with "uniqueWithDuplicateNulls" if it is deferrable.
This error was found when running the regressions with default deferrable for
all pk and unique constraints.
We also removed an unused flag "hasDeferrableChecking" in the same places (it is
no longer used by the physical index).
Added a new test case, testCompressTable, which tests the
"uniqueWithDuplicateNulls" case.
We also change the behavior in the following way for deferrable, but not
deferred, constraints: if we hit a time-out or dead-lock when checking uniqueness
(in the BTree scan), we throw that time-out or dead-lock. Until now we
converted it to a duplicate exception. We will only assume it can be a duplicate
- for later checking - if the constraint mode is deferrable.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550152 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/InsertResultSet.java",
"hunks": [
{
"added": [
"",
" if ( cd.getIndexDescriptor().isUniqueWithDuplicateNulls() &&",
" !cd.getIndexDescriptor().hasDeferrableChecking() )"
],
"header": "@@ -2003,17 +2003,14 @@ class InsertResultSet extends DMLWriteResultSet implements TargetResultSet",
"removed": [
"\t\t\tif(cd.getIndexDescriptor().isUniqueWithDuplicateNulls())",
" if (cd.getIndexDescriptor().hasDeferrableChecking()) {",
" properties.put(",
" \"hasDeferrableChecking\", Boolean.toString(true));",
" }",
""
]
}
]
}
] |
derby-DERBY-532-51e62681
|
DERBY-532 Support deferrable constraints
Patch derby-532-test-speedup changes ConstraintCharacteristicsTest to
use an in-memory database for some tests for increased speed. It also
changes how SystemPropertyTestSetup for static properties shuts down
the database, so that it does not deregister the driver; without this change we
saw a test setup try to connect via the client driver to a Derby server
engine without a registered driver.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550308 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/ConnectionPoolDataSourceConnector.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -29,6 +29,7 @@ import java.util.Properties;",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/DataSourceConnector.java",
"hunks": [
{
"added": [
"import org.apache.derby.shared.common.sanity.SanityManager;"
],
"header": "@@ -26,6 +26,7 @@ import java.util.Map;",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/DriverManagerConnector.java",
"hunks": [
{
"added": [
"import java.util.Enumeration;"
],
"header": "@@ -22,6 +22,7 @@ package org.apache.derbyTesting.junit;",
"removed": []
},
{
"added": [
" Properties p = new Properties();",
" p.setProperty(\"shutdown\", \"true\");",
" getConnectionByAttributes(config.getJDBCUrl(), p);"
],
"header": "@@ -154,8 +155,9 @@ public class DriverManagerConnector implements Connector {",
"removed": [
" getConnectionByAttributes(config.getJDBCUrl(),",
" \"shutdown\", \"true\");"
]
},
{
"added": [
" * @param deregisterDriver",
" * @throws java.sql.SQLException",
" public void shutEngine(boolean deregisterDriver) throws SQLException {",
" Properties p = new Properties();",
" p.setProperty(\"shutdown\", \"true\");",
" if (!deregisterDriver) {",
" p.setProperty(\"deregister\", \"false\");",
" }",
"",
" getConnectionByAttributes(\"jdbc:derby:\", p);"
],
"header": "@@ -165,10 +167,18 @@ public class DriverManagerConnector implements Connector {",
"removed": [
" public void shutEngine() throws SQLException {",
" getConnectionByAttributes(\"jdbc:derby:\", \"shutdown\", \"true\");"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/XADataSourceConnector.java",
"hunks": [
{
"added": [
" public void shutEngine(boolean deregisterDriver) throws SQLException {"
],
"header": "@@ -142,7 +142,7 @@ public class XADataSourceConnector implements Connector {",
"removed": [
" public void shutEngine() throws SQLException {"
]
}
]
}
] |
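The reworked shutEngine(boolean) in the DriverManagerConnector hunk reduces to a connection attempt with shutdown attributes. A standalone sketch, assuming the embedded driver is loaded (the class name is made up): a successful engine shutdown surfaces as an SQLException with SQLState XJ015, and the deregister=false attribute keeps the driver registered for later connections.

```java
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch of an engine shutdown that optionally keeps the embedded
// driver registered with DriverManager.
public class EngineShutdown {
    public static void shutEngine(boolean deregisterDriver)
            throws SQLException {
        String url = "jdbc:derby:;shutdown=true";
        if (!deregisterDriver) {
            url += ";deregister=false";
        }
        try {
            DriverManager.getConnection(url);
        } catch (SQLException e) {
            // XJ015 means "Derby system shutdown" and is expected here.
            if (!"XJ015".equals(e.getSQLState())) {
                throw e;
            }
        }
    }
}
```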
derby-DERBY-532-831e54e7
|
DERBY-532 Support deferrable constraints
Patch derby-532-allow-pk-unique-1, which enables the use of deferrable
primary key and unique constraints, i.e. it is no
longer required that the special property "derby.constraintsTesting"
be used for those constraints, since the implementation is complete
modulo bugs. Upgrade tests still remain to be built, though.
For foreign key and check constraints as well as for "not enforced",
the property is still required until the implementation for those is
completed.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1555006 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/execute/SetConstraintsConstantAction.java",
"hunks": [
{
"added": [],
"header": "@@ -104,13 +104,6 @@ class SetConstraintsConstantAction extends DDLConstantAction",
"removed": [
" // Remove when feature DERBY-532 is completed",
" if (!PropertyUtil.getSystemProperty(",
" \"derby.constraintsTesting\", \"false\").equals(\"true\")) {",
" throw StandardException.newException(",
" SQLState.NOT_IMPLEMENTED, \"SET CONSTRAINT\");",
" }",
""
]
}
]
}
] |
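With the property gate removed, deferrable primary key and unique constraints are available directly. A minimal JDBC sketch of the feature, using the DEFERRABLE and SET CONSTRAINTS syntax the DERBY-532 work eventually shipped (table and constraint names are made up):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical demo: a deferrable primary key whose uniqueness check is
// postponed until commit (or until set back to IMMEDIATE).
public class DeferredConstraintDemo {
    public static void demo(Connection c) throws SQLException {
        c.setAutoCommit(false);
        try (Statement s = c.createStatement()) {
            s.execute("CREATE TABLE t(i INT NOT NULL, CONSTRAINT t_pk "
                    + "PRIMARY KEY (i) DEFERRABLE INITIALLY IMMEDIATE)");
            // Defer checking of t_pk for the rest of this transaction.
            s.execute("SET CONSTRAINTS t_pk DEFERRED");
            s.execute("INSERT INTO t VALUES 1, 1"); // duplicate tolerated...
            s.execute("DELETE FROM t WHERE i = 1"); // ...if resolved by commit
        }
        c.commit(); // a violation still pending here would make commit fail
    }
}
```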
derby-DERBY-532-908f0a96
|
DERBY-532 Support deferrable constraints
A patch ("derby-532-fix-metadata-1") that fixes broken database
metadata for deferred constraint indexes: the metadata query used the
method IndexDescriptor#isUnique to determine logical uniqueness, but
it really represents physical uniqueness now. For deferred unique
constraints, the method that should be used is
"isUniqueDeferrable". Added a test, and also added client/server run
of the regression test for deferred constraints.
Before the fix, the added test fixture "testDatabaseMetaData" failed
in that the index in question was identified as non-unique, but it is
logically unique and so should be reported as such.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550113 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-532-c4cfffbb
|
DERBY-532 Support deferrable constraints
Added an extra test case.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1577134 13f79535-47bb-0310-9956-ffa450edef68
|
[] |