id: string (length 22–25)
commit_message: string (length 137–6.96k)
diffs: list (length 0–63)
derby-DERBY-532-e03c0745
DERBY-532 Support deferrable constraints Patch derby-532-fix-drop-not-nullable. Fixes a broken predicate used when recreating the index on going from UNIQUE NOT NULL to plain UNIQUE: the existing predicate missed the deferrable case, so the index was not recreated. Added a test case. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550284 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-532-e28a36f3
DERBY-532 Support deferrable constraints Patch derby-532-metadata-queries: updates the metadata queries to give correct results in the DEFERRABILITY column returned by the calls: - DatabaseMetaData#getImportedKeys - DatabaseMetaData#getExportedKeys - DatabaseMetaData#getCrossReference. Tests have been added. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1593949 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-532-ea36b420
DERBY-6419 Make BTree scan honor OPENMODE_LOCK_NOWAIT for row locks A follow-up patch: derby-6419-followup. Only short-circuit waiting for a lock in the BTree scan to check duplicates for a deferred unique/pk constraint if the constraint mode is deferred (i.e. not if immediate). Added a test case lifted from UniqueConstraintMultiThreadedTest, which exposed the issue when we run the regressions with constraints deferrable by default (see DERBY-532). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550299 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/IndexChanger.java", "hunks": [ { "added": [ "", " // If the constraint mode is deferred, perform the check without", " // waiting for any locks; we will just presume any lock conflicts", " // constitute duplicates (not always the case), and check those keys", " // again at commit time.", " final boolean deferred =", " lcc.isEffectivelyDeferred(activation, indexCID);", "", " (deferred ?", " TransactionController.OPENMODE_LOCK_ROW_NOWAIT :", " 0)," ], "header": "@@ -464,10 +464,20 @@ class IndexChanger", "removed": [ " TransactionController.OPENMODE_LOCK_ROW_NOWAIT," ] } ] } ]
derby-DERBY-532-ee5954e2
DERBY-532 Support deferrable constraints Patch derby-532-upgrade-1b. It checks that deferrable constraints cannot be used unless hard upgrade has happened. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1555724 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5325-b0a89283
DERBY-5325 Checkpoint fails with ClosedChannelException in InterruptResilienceTest Patch derby-5325-refactor-b, which refactors redundant code and cleans up some comments and Javadoc. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1151612 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java", "hunks": [ { "added": [ " // those monitors during container recovery. So, just forge ahead" ], "header": "@@ -292,7 +292,7 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " // those monitors during container resurrection. So, just forge ahead" ] }, { "added": [ " handleClosedChannel(e, stealthMode, retries--);" ], "header": "@@ -371,52 +371,8 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " //} catch (ClosedByInterruptException e) {", " // Java NIO Bug 6979009:", " // http://bugs.sun.com/view_bug.do?bug_id=6979009", " // Sometimes NIO throws AsynchronousCloseException instead of", " // ClosedByInterruptException", " } catch (AsynchronousCloseException e) {", " // Subsumes ClosedByInterruptException", "", " // The interrupted thread may or may not get back here", " // before other concurrent writers that will see", " // ClosedChannelException, we have logic to handle that.", " if (Thread.currentThread().isInterrupted()) {", " // Normal case", " if (recoverContainerAfterInterrupt(", " e.toString(),", " stealthMode)) {", " continue; // do I/O over again", " }", " }", "", "", " // Recovery is in progress, wait for another", " // interrupted thread to clean up, i.e. 
act as if we", " // had seen ClosedChannelException.", "", " awaitRestoreChannel(e, stealthMode);", "", " // We are not the thread that first saw the channel interrupt,", " // so no recovery attempt.", "", " // if we also have seen an interrupt, we might as well take", " // notice now.", " InterruptStatus.noteAndClearInterrupt(", " \"readPage in ClosedChannelException\",", " threadsInPageIO,", " hashCode());", "", " // Recovery is in progress, wait for another interrupted thread", " // to clean up.", " awaitRestoreChannel(e, stealthMode);", "", " if (retries-- == 0) {", " throw StandardException.newException(", " SQLState.FILE_IO_INTERRUPTED);", " }" ] }, { "added": [ " // those monitors during container recovery. So, just forge ahead" ], "header": "@@ -522,7 +478,7 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " // those monitors during container resurrection. So, just forge ahead" ] }, { "added": [ " handleClosedChannel(e, stealthMode, retries--);" ], "header": "@@ -587,46 +543,8 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " //} catch (ClosedByInterruptException e) {", " // Java NIO Bug 6979009:", " // http://bugs.sun.com/view_bug.do?bug_id=6979009", " // Sometimes NIO throws AsynchronousCloseException instead of", " // ClosedByInterruptException", " } catch (AsynchronousCloseException e) {", " // Subsumes ClosedByInterruptException", "", " // The interrupted thread may or may not get back here", " // before other concurrent writers that will see", " // ClosedChannelException, we have logic to handle that.", "", " if (Thread.currentThread().isInterrupted()) {", " // Normal case", " if (recoverContainerAfterInterrupt(", " e.toString(),", " stealthMode)) {", " continue; // do I/O over again", " }", " }", " // Recovery is in progress, wait for another", " // interrupted thread to clean up, i.e. 
act as if we", " // had seen ClosedChannelException.", "", " awaitRestoreChannel(e, stealthMode);", "", " // We are not the thread that first saw the channel interrupt,", " // so no recovery attempt.", "", " InterruptStatus.noteAndClearInterrupt(", " \"writePage in ClosedChannelException\",", " threadsInPageIO,", " hashCode());", "", " awaitRestoreChannel(e, stealthMode);", " if (retries-- == 0) {", " throw StandardException.newException(", " SQLState.FILE_IO_INTERRUPTED);", " }" ] }, { "added": [ " /**", " * This method handles what to do when, during a NIO operation we receive a", " * {@code ClosedChannelException}. Note the specialization hierarchy:", " * <p/>", " * {@code ClosedChannelException} -> {@code AsynchronousCloseException} ->", " * {@code ClosedByInterruptException}", " * <p/>", " * If {@code e} is a ClosedByInterruptException, we normally start", " * container recovery, i.e. we need to reopen the random access file so we", " * get get a new interruptible channel and continue IO.", " * <p/>", " * If {@code e} is a {@code AsynchronousCloseException} or a plain {@code", " * ClosedChannelException}, the behavior depends of {@code stealthMode}:", " * <p/>", " * If {@code stealthMode == false}, the method will wait for", " * another thread tp finish recovering the IO channel before returning.", " * <p/>", " * If {@code stealthMode == true}, the method throws {@code", " * InterruptDetectedException}, allowing retry at a higher level in the", " * code. The reason for this is that we sometimes need to release monitors", " * on objects needed by the recovery thread.", " *", " * @param e Should be an instance of {@code ClosedChannelException}.", " * @param stealthMode If {@code true}, do retry at a higher level", " * @param retries Give up waiting for another thread to reopen the channel", " * when {@code retries} reaches 0. 
Only applicable if {@code", " * stealthMode == false}.", " * @throws InterruptDetectedException if retry at higher level is required", " * {@code stealthMode == true}.", " * @throws StandardException standard error policy, incl. when we give up", " * waiting for another thread to reopen channel", " */", " private void handleClosedChannel(ClosedChannelException e,", " boolean stealthMode,", " int retries)", " throws StandardException {", "", " // if (e instanceof ClosedByInterruptException e) {", " // Java NIO Bug 6979009:", " // http://bugs.sun.com/view_bug.do?bug_id=6979009", " // Sometimes NIO throws AsynchronousCloseException instead of", " // ClosedByInterruptException", "", " if (e instanceof AsynchronousCloseException) {", " // Subsumes ClosedByInterruptException", "", " // The interrupted thread may or may not get back here to try", " // recovery before other concurrent IO threads will see (the", " // secondary) ClosedChannelException, but we have logic to handle", " // that, cf threadsInPageIO.", "", " if (Thread.currentThread().isInterrupted()) {", " if (recoverContainerAfterInterrupt(", " e.toString(),", " stealthMode)) {", " return; // do I/O over again", " }", " }", "", " // Recovery is in progress, wait for another interrupted thread to", " // clean up.", "", " awaitRestoreChannel(e, stealthMode);", " } else {", " // According to the exception type, We are not the thread that", " // first saw the channel interrupt, so no recovery attempt.", " InterruptStatus.noteAndClearInterrupt(", " \"ClosedChannelException\",", " threadsInPageIO,", " hashCode());", "", " awaitRestoreChannel(e, stealthMode);", " if (retries == 0) {", " throw StandardException.newException(", " SQLState.FILE_IO_INTERRUPTED);", " }", " }", " }", "" ], "header": "@@ -640,6 +558,85 @@ class RAFContainer4 extends RAFContainer {", "removed": [] }, { "added": [ " *", " *" ], "header": "@@ -659,10 +656,10 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " * ", "" ] }, { "added": 
[ "" ], "header": "@@ -690,7 +687,7 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " " ] }, { "added": [ " // Wait here till the interrupted thread does container recovery." ], "header": "@@ -707,7 +704,7 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " // Wait here till the interrupted thread does container resurrection." ] }, { "added": [ " * Use this when the thread has received a ClosedByInterruptException (or,", " * prior to JDK 1.7 it may also be AsynchronousCloseException - a bug)", " * thread a likely candicate to do container recovery, unless another", " * thread started it already, cf. return value." ], "header": "@@ -776,10 +773,11 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " * Use this when the thread has received a AsynchronousCloseException", " * thread a likely candicate to do container recovery (aka resurrection),", " * unless another thread started it already, cf. return value." ] }, { "added": [ " * Write a sequence of bytes at the given offset in a file. This method", " * operates in <em>stealth mode</em>, see doc for {@link", " * #handleClosedChannel handleClosedChannel}.", " * This presumes that IO retry happens at a higher level, i.e. the", " * caller(s) must be prepared to handle {@code InterruptDetectedException}.", " * This method overrides FileContainer#writeAtOffset." 
], "header": "@@ -1072,9 +1070,13 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " * Write a sequence of bytes at the given offset in a file.", " * override of FileContainer#writeAtOffset" ] }, { "added": [ " final boolean stealthMode = true;" ], "header": "@@ -1094,7 +1096,7 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " boolean stealthMode = true;" ] }, { "added": [ " handleClosedChannel(e, stealthMode, -1 /* NA */);" ], "header": "@@ -1107,45 +1109,8 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " //} catch (ClosedByInterruptException e) {", " // Java NIO Bug 6979009:", " // http://bugs.sun.com/view_bug.do?bug_id=6979009", " // Sometimes NIO throws AsynchronousCloseException instead of", " // ClosedByInterruptException", " } catch (AsynchronousCloseException e) {", " // Subsumes ClosedByInterruptException", "", " // The interrupted thread may or may not get back here", " // before other concurrent writers that will see", " // ClosedChannelException, we have logic to handle that.", "", " if (Thread.currentThread().isInterrupted()) {", " // Normal case", " if (recoverContainerAfterInterrupt(", " e.toString(),", " stealthMode)) {", " continue; // do I/O over again", " }", " }", " // Recovery is in progress, wait for another", " // interrupted thread to clean up, i.e. act as if we", " // had seen ClosedChannelException.", "", " // stealthMode == true, so this will throw", " // InterruptDetectedException", " awaitRestoreChannel(e, stealthMode);", " // We are not the thread that first saw the channel interrupt,", " // so no recovery attempt.", "", " InterruptStatus.noteAndClearInterrupt(", " \"writeAtOffset in ClosedChannelException\",", " threadsInPageIO,", " hashCode());", "", " // stealthMode == true, so this will throw", " // InterruptDetectedException", " awaitRestoreChannel(e, stealthMode);" ] } ] } ]
derby-DERBY-5325-b7f22c34
DERBY-5325 Checkpoint fails with ClosedChannelException in InterruptResilienceTest Patch derby-5325a: With NIO, writeRAFHeader has two methods leading to interruptible IO: - getEmbryonicPage - writeHeader Currently, getEmbryonicPage may throw InterruptDetectedException and hence, so may writeRAFHeader. writeHeader may throw ClosedByInterruptException, AsynchronousCloseException and ClosedChannelException because writeHeader does not use RAFContainer4#writePage, but rather uses RAFContainer4#writeAtOffset, which does not currently attempt to recover after interrupt. So currently, clients of writeRAFHeader need to be prepared for all of InterruptDetectedException, ClosedByInterruptException, AsynchronousCloseException and ClosedChannelException. writeRAFHeader is used in three locations: - RAFContainer#clean - RAFContainer#run(CREATE_CONTAINER_ACTION) - RAFContainer#run(STUBBIFY_ACTION) RAFContainer#clean is prepared for InterruptDetectedException only. The issue shows that ClosedChannelException may also occur, and it is not prepared for that (this bug). RAFContainer#run(CREATE_CONTAINER_ACTION) is prepared for ClosedByInterruptException and AsynchronousCloseException. Since IO during container creation is single-threaded, this is sufficient: it should never need to handle ClosedChannelException/InterruptDetectedException, both of which signal that another thread saw interrupt on the container channel. RAFContainer#run(STUBBIFY_ACTION) is part of the removeContainer operation which should happen after the container is closed, so it should be single-threaded on the container as well(?). It should handle ClosedByInterruptException and AsynchronousCloseException and do retry, but currently doesn't. If we let writeAtOffset clean up just like writePage, RAFContainer4#writeAtOffset (i.e. also writeHeader) would only throw InterruptDetectedException, i.e. another thread saw interrupt, so retry. This would simplify logic in RAFContainer: we could remove the retry logic from RAFContainer#run(CREATE_CONTAINER_ACTION). This could also cover retry logic for RAFContainer#run(STUBBIFY_ACTION) wrt its use of writeRAFHeader. Next, RAFContainer#clean is already handling InterruptDetectedException and would with this change no longer see ClosedByInterruptException, AsynchronousCloseException or ClosedChannelException. This should solve DERBY-5325 (this bug). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1148354 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer4.java", "hunks": [ { "added": [], "header": "@@ -622,10 +622,6 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " // Recovery is in progress, wait for another", " // interrupted thread to clean up, i.e. act as if we", " // had seen ClosedChannelException.", "" ] }, { "added": [ "", " if (ioChannel == null) {", " return;", " }", "", " ourChannel = ioChannel;", "", " boolean success = false;", " boolean stealthMode = true;", "", " while (!success) {", "", " synchronized (this) {", " // don't use ourChannel directly, could need re-initilization", " // after interrupt and container reopening:", " ioChannel = getChannel();", " }", "", " try {", " writeFull(ByteBuffer.wrap(bytes), ioChannel, offset);", " success = true;", " //} catch (ClosedByInterruptException e) {", " // Java NIO Bug 6979009:", " // http://bugs.sun.com/view_bug.do?bug_id=6979009", " // Sometimes NIO throws AsynchronousCloseException instead of", " // ClosedByInterruptException", " } catch (AsynchronousCloseException e) {", " // Subsumes ClosedByInterruptException", "", " // The interrupted thread may or may not get back here", " // before other concurrent writers that will see", " // ClosedChannelException, we have logic to handle that.", "", " if (Thread.currentThread().isInterrupted()) {", " // Normal case", " if (recoverContainerAfterInterrupt(", " e.toString(),", " stealthMode)) {", " continue; // do I/O over again", " }", " }", " // Recovery is in progress, wait for another", " // interrupted thread to clean up, i.e. 
act as if we", " // had seen ClosedChannelException.", "", " // stealthMode == true, so this will throw", " // InterruptDetectedException", " awaitRestoreChannel(e, stealthMode);", " } catch (ClosedChannelException e) {", " // We are not the thread that first saw the channel interrupt,", " // so no recovery attempt.", "", " InterruptStatus.noteAndClearInterrupt(", " \"writeAtOffset in ClosedChannelException\",", " threadsInPageIO,", " hashCode());", "", " // stealthMode == true, so this will throw", " // InterruptDetectedException", " awaitRestoreChannel(e, stealthMode);", " }" ], "header": "@@ -1089,10 +1085,68 @@ class RAFContainer4 extends RAFContainer {", "removed": [ " if (ioChannel != null) {", " writeFull(ByteBuffer.wrap(bytes), ioChannel, offset);", " } else {" ] } ] } ]
derby-DERBY-5328-cee7de4b
DERBY-5328: Improve the re-entrancy of the NetServlet. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1155367 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/drda/NetServlet.java", "hunks": [ { "added": [ "\tprivate final static String[] knownLang =", " { \"cs\",\"en\",\"es\",\"de_DE\",\"fr\",\"hu\",\"it\", \"ja_JP\",\"ko_KR\",\"pl\",\"pt_BR\",\"ru\",\"zh_CN\",\"zh_TW\" };", " // set at initialization", "\tprivate int portNumber = 1527;", "", " // can be overridden by trips through doGet()", "\tprivate volatile String tracingDirectory;", "\tprivate volatile boolean logStatus= false;\t/* Logging off */", "\tprivate volatile boolean traceStatus = false;\t/* Tracing off */" ], "header": "@@ -51,22 +51,22 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate String formHeader = null;", "\tprivate int portNumber=1527;", "\tprivate String tracingDirectory;", "\tprivate boolean logStatus= false;\t/* Logging off */", "\tprivate boolean traceStatus = false;\t/* Tracing off */", "\tprivate String[] knownLang = {\"cs\",\"en\",\"es\",\"de_DE\",\"fr\",\"hu\",\"it\",", "\t\t\t\"ja_JP\",\"ko_KR\",\"pl\",\"pt_BR\",\"ru\",\"zh_CN\",\"zh_TW\"};", "\tprivate String locale;", "\tprivate PrintWriter out;" ] }, { "added": [ "\t\t\t\trunServer(langUtil, null, null, null);" ], "header": "@@ -123,7 +123,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\trunServer(langUtil, null, null);" ] }, { "added": [ " String formHeader = null;", "", " String locale[] = new String[ 1 ];", "\t\tlangUtil = getCurrentAppUI(request, locale);" ], "header": "@@ -150,9 +150,12 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\tlangUtil = getCurrentAppUI(request);" ] }, { "added": [ " PrintWriter out = new PrintWriter", " ( new OutputStreamWriter(response.getOutputStream(), \"UTF8\"),true );" ], "header": "@@ -161,8 +164,8 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\tout = new PrintWriter(new", " \t\t\tOutputStreamWriter(response.getOutputStream(), \"UTF8\"),true);" ] }, { "added": [ "\t\tprintBanner(langUtil, out);", 
"\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage, out);", "\t\tserver.setClientLocale( locale[ 0 ] );" ], "header": "@@ -180,17 +183,17 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\tprintBanner(langUtil);", "\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage);", "\t\tserver.setClientLocale(locale);" ] }, { "added": [ "\t\t\t\trunServer(langUtil, request, returnMessage, out);", "\t\t\t\tshutdownServer(langUtil, request, returnMessage, out);" ], "header": "@@ -213,13 +216,13 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\trunServer(langUtil, request, returnMessage);", "\t\t\t\tshutdownServer(langUtil, request, returnMessage);" ] }, { "added": [ "\t\t\t\tif (logging(langUtil, true, request, returnMessage, out))", "\t\t\t\tif (logging(langUtil, false, request, returnMessage, out))", "\t\t\t\tif (traceAll(langUtil, true, request, returnMessage, out))", "\t\t\t\tif (traceAll(langUtil, false, request, returnMessage, out))", "\t\t\tdisplayCurrentStatus(request, langUtil, returnMessage, out);" ], "header": "@@ -248,25 +251,25 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\tif (logging(langUtil, true, request, returnMessage))", "\t\t\t\tif (logging(langUtil, false, request, returnMessage))", "\t\t\t\tif (traceAll(langUtil, true, request, returnMessage))", "\t\t\t\tif (traceAll(langUtil, false, request, returnMessage))", "\t\t\tdisplayCurrentStatus(request, langUtil, returnMessage);" ] }, { "added": [ "\t\t\tprintAsContentHeader(langUtil.getTextMessage(\"SRV_NotStarted\"), out);" ], "header": "@@ -300,7 +303,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintAsContentHeader(langUtil.getTextMessage(\"SRV_NotStarted\"));" ] }, { "added": [ " returnMessage, out);", "\t\t\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage, out);" ], "header": "@@ -348,14 +351,14 @@ public class NetServlet extends HttpServlet {", "removed": [ 
"\t\t\t\t\t\t\treturnMessage);", "\t\t\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage);" ] }, { "added": [ "\t\t\t\t\tif (traceSession(langUtil, val, session, request, returnMessage, out))" ], "header": "@@ -364,7 +367,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\t\tif (traceSession(langUtil, val, session, request, returnMessage))" ] }, { "added": [ "\t\t\tprintAsContentHeader(langUtil.getTextMessage(\"SRV_TraceSessButton\"), out);" ], "header": "@@ -376,7 +379,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintAsContentHeader(langUtil.getTextMessage(\"SRV_TraceSessButton\"));" ] }, { "added": [ "\t\t\tprintAsContentHeader(traceDirMessage, out);", " returnMessage, out) )" ], "header": "@@ -389,14 +392,14 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintAsContentHeader(traceDirMessage);", "\t\t\t\t\t\t\treturnMessage) )" ] }, { "added": [ "\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage, out);", "\t\t\t\t\t\"SRV_NewMaxThreads\", langUtil, returnMessage, out);", "\t\t\t\t\t\treturnMessage, out);" ], "header": "@@ -432,16 +435,16 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\tprintErrorForm(langUtil, request, e, returnMessage);", "\t\t\t\t\t\"SRV_NewMaxThreads\", langUtil, returnMessage);", "\t\t\t\t\t\treturnMessage);" ] }, { "added": [ "\t\t\t\t\t\t\treturnMessage, out))", "\t\t\tprintAsContentHeader(netParamMessage, out);" ], "header": "@@ -451,13 +454,13 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\t\t\t\treturnMessage))", "\t\t\tprintAsContentHeader(netParamMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate void runServer", " ( LocalizedResource localUtil, HttpServletRequest request, String returnMessage, PrintWriter out )" ], "header": "@@ -533,11 +536,12 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void runServer(LocalizedResource localUtil, 
HttpServletRequest request,", "\t\tString returnMessage)" ] }, { "added": [ "\t\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -599,7 +603,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate void printErrorForm", " (", " LocalizedResource localUtil,", " HttpServletRequest request,", " Exception e,", " String returnMessage,", " PrintWriter out", " )", "\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_NetworkServerError\"), out);" ], "header": "@@ -611,11 +615,18 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void printErrorForm(LocalizedResource localUtil, HttpServletRequest request,", "\t\tException e, String returnMessage)", "\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_NetworkServerError\"));" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate void printErrorForm", " (", " LocalizedResource localUtil,", " HttpServletRequest request,", " String msg,", " String returnMessage,", " PrintWriter out", " )", "\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_NetworkServerError\"), out);" ], "header": "@@ -627,12 +638,19 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void printErrorForm(LocalizedResource localUtil, HttpServletRequest request,", "\t\tString msg, String returnMessage)", "\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_NetworkServerError\"));" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate void displayCurrentStatus", " (", " HttpServletRequest request,", " LocalizedResource localUtil,", " String returnMessage,", " PrintWriter out", " )", "\t\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_Started\"), out);" ], "header": "@@ -644,13 +662,19 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void displayCurrentStatus(HttpServletRequest 
request,", "\t\tLocalizedResource localUtil, String returnMessage)", "\t\t\tprintAsContentHeader(localUtil.getTextMessage(\"SRV_Started\"));" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -675,7 +699,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate boolean shutdownServer", " (", " LocalizedResource localUtil,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -698,10 +722,16 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate boolean shutdownServer(LocalizedResource localUtil,", "\t\tHttpServletRequest request, String returnMessage)" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -709,7 +739,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate boolean logging", " (", " LocalizedResource localUtil,", " boolean val,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -719,10 +749,17 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate boolean logging(LocalizedResource localUtil, boolean val,", "\t\tHttpServletRequest request, String returnMessage)" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -730,7 +767,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate boolean traceAll", " (", " LocalizedResource localUtil,", " boolean val,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -741,10 +778,17 
@@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate boolean traceAll(LocalizedResource localUtil, boolean val,", "\t\tHttpServletRequest request, String returnMessage)" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -752,7 +796,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate boolean traceSession", " (", " LocalizedResource localUtil,", " boolean val,", " int session,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -764,10 +808,18 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate boolean traceSession(LocalizedResource localUtil, boolean val, int session,", "\t\tHttpServletRequest request, String returnMessage)" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -775,7 +827,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param returnMessage\t\tlocalized continue message for continue button on error form\t", "\t * @param out Form PrintWriter", "\tprivate boolean traceDirectory", " (", " LocalizedResource localUtil,", " String traceDirectory,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )", " localUtil.getTextMessage(\"SRV_TraceDir\")), returnMessage, out);" ], "header": "@@ -786,19 +838,25 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t * @param returnMessage\t\tlocalized continue message for continue ", "\t *\t\t\t\t\t\t\tbutton on error form\t", "\tprivate boolean traceDirectory(LocalizedResource localUtil, String traceDirectory,", "\t\tHttpServletRequest request, String returnMessage)", "\t\t\t\tlocalUtil.getTextMessage(\"SRV_TraceDir\")), returnMessage);" ] 
}, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -809,7 +867,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param returnMessage\t\tlocalized continue message for continue button on error form\t", "\t * @param out Form PrintWriter", "\tprivate boolean setNetParam", " (", " LocalizedResource localUtil,", " int max,", " int slice,", " HttpServletRequest request,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -821,12 +879,19 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t * @param returnMessage\t\tlocalized continue message for continue ", "\t *\t\t\t\t\t\t\tbutton on error form\t", "\tprivate boolean setNetParam(LocalizedResource localUtil, int max, int slice,", "\t\tHttpServletRequest request, String returnMessage)" ] }, { "added": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage, out);" ], "header": "@@ -836,7 +901,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\tprintErrorForm(localUtil, request, e, returnMessage);" ] }, { "added": [ "\t * @param returnMessage\t\tlocalized continue message for continue button on error form\t", "\t * @param out Form PrintWriter", "\tprivate int getIntParameter", " (", " HttpServletRequest request,", " String name,", " String fieldKey,", " LocalizedResource localUtil,", " String returnMessage,", " PrintWriter out", " )" ], "header": "@@ -857,11 +922,18 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t * @param returnMessage\t\tlocalized continue message for continue ", "\t *\t\t\t\t\t\t\tbutton on error form\t", "\tprivate int getIntParameter(HttpServletRequest request,", "\t\t\tString name, String fieldKey, LocalizedResource localUtil, String returnMessage)" ] }, { "added": [ " val, localUtil.getTextMessage(fieldKey)), returnMessage, out);", " val, 
localUtil.getTextMessage(fieldKey)), returnMessage, out);" ], "header": "@@ -871,13 +943,13 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\tval, localUtil.getTextMessage(fieldKey)), returnMessage);", "\t\t\t\tval, localUtil.getTextMessage(fieldKey)), returnMessage);" ] }, { "added": [ "\tprivate void printBanner(LocalizedResource localUtil, PrintWriter out)" ], "header": "@@ -885,7 +957,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void printBanner(LocalizedResource localUtil)" ] }, { "added": [ "\t * @param locale Name of locale (return arg)", "\tprivate LocalizedResource getCurrentAppUI(HttpServletRequest request, String[] locale )", "\t\tlocale[ 0 ] = null;" ], "header": "@@ -905,15 +977,16 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate LocalizedResource getCurrentAppUI(HttpServletRequest request)", "\t\tlocale = null;" ] }, { "added": [ "\t\t\t\tlocale[ 0 ] = lang;" ], "header": "@@ -931,7 +1004,7 @@ public class NetServlet extends HttpServlet {", "removed": [ "\t\t\t\tlocale = lang;" ] }, { "added": [ "\t * @param out Form PrintWriter", "\tprivate void printAsContentHeader(String str, PrintWriter out) {" ], "header": "@@ -1016,8 +1089,9 @@ public class NetServlet extends HttpServlet {", "removed": [ "\tprivate void printAsContentHeader(String str) {" ] } ] } ]
derby-DERBY-5336-c5a71001
DERBY-5336: Repeated database creation causes OutOfMemoryError. Clean up context when the daemon is shut down. Added regression test, more or less copied from patch provided by Aja Walker (aja at ajawalker dot com). Patch files: derby-5336-1a-remove_context_on_stop.diff, derby-5336-2a-regression_test.diff. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1160593 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5338-f30426b5
DERBY-1903 Convert largedata/LobLimits.java to junit. DERBY-5308 Investigate if largeData/LobLimits.java can be run for client. Patch derby-1903_client_diff.txt enables client for largedata.LobLimitsLite. It disables the test cases that fail with client: DERBY-5338 client gives wrong SQLState and protocol error inserting a 4GB clob (should be 22003); DERBY-5341 client allows clob larger than column width to be inserted; DERBY-5317 cannot use setCharacterStream with value from C/Blob.getCharacterStream. Also fixes the test to fail if we do not get an exception for negative test cases and fixes a javadoc warning. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1147335 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-00b0dbe4
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Add a test case to verify that the WHEN clause SPS is invalidated and recompiled if one of its dependencies requests a recompilation. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1529145 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-05b022f3
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Fix incorrect null check when merging subqueryTrackingArray and materializedSubqueries in GenericStatementContext.setTopResultSet(). Used to cause NullPointerException in some cases when a WHEN clause contained a subquery. Add more tests for scalar subqueries in WHEN clauses. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1532666 13f79535-47bb-0310-9956-ffa450edef68
[]
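The fix above is exercised by triggers whose WHEN clause contains a scalar subquery. A hedged sketch of that pattern, assuming Derby's documented CREATE TRIGGER syntax (table, column and trigger names here are illustrative, not taken from the Derby test suite):

```sql
CREATE TABLE orders (id INT);
CREATE TABLE alerts (msg VARCHAR(64));

-- Statement trigger whose WHEN clause evaluates a scalar subquery;
-- before the fix this shape of statement could hit the NullPointerException.
CREATE TRIGGER order_volume_alert
AFTER INSERT ON orders
FOR EACH STATEMENT
WHEN ((SELECT COUNT(*) FROM orders) > 100)
INSERT INTO alerts VALUES ('orders table exceeded 100 rows');
```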
derby-DERBY-534-1725dd14
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Add the WHEN clause syntax to the grammar and wire it together with the existing partial code for the WHEN clause. Make RowTriggerExecutor and StatementTriggerExecutor execute the WHEN clause and use the result to decide whether the trigger action should be executed. Add some basic positive tests for the currently supported subset of the functionality. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1523965 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/CreateTriggerNode.java", "hunks": [ { "added": [], "header": "@@ -227,7 +227,6 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t * @param whenOffset\t\t\toffset of start of WHEN clause" ] }, { "added": [], "header": "@@ -247,7 +246,6 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ " int whenOffset," ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/CreateTriggerConstantAction.java", "hunks": [ { "added": [ "", " if (whenSPSId == null && whenText != null) {", " whenSPSId = dd.getUUIDFactory().createUUID();", " }" ], "header": "@@ -288,6 +288,10 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/GenericTriggerExecutor.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.execute.ExecRow;", "import org.apache.derby.iapi.types.DataValueDescriptor;", "import org.apache.derby.iapi.types.SQLBoolean;", "import org.apache.derby.shared.common.sanity.SanityManager;" ], "header": "@@ -26,12 +26,15 @@ import org.apache.derby.iapi.sql.dictionary.SPSDescriptor;", "removed": [ "" ] }, { "added": [ " // Cached prepared statement and activation for WHEN clause and", " // trigger action.", " private ExecPreparedStatement whenPS;", " private Activation spsWhenActivation;", " private ExecPreparedStatement actionPS;", " private Activation spsActionActivation;" ], "header": "@@ -50,8 +53,12 @@ abstract class GenericTriggerExecutor", "removed": [ "\tprivate ExecPreparedStatement\tps;", "\tprivate Activation \t\t\t\tspsActivation;" ] }, { "added": [ " private SPSDescriptor getWhenClause() throws StandardException" ], "header": "@@ -95,7 +102,7 @@ abstract class GenericTriggerExecutor", "removed": [ "\tprotected SPSDescriptor getWhenClause() throws StandardException" ] }, { "added": [ " * @param sps the SPS to execute", " * @param isWhen {@code true} if the SPS is 
for the WHEN clause,", " * {@code false} otherwise", " * @return {@code true} if the SPS is for a WHEN clause and it evaluated", " * to {@code TRUE}, {@code false} otherwise", " final boolean executeSPS(SPSDescriptor sps, boolean isWhen)", " throws StandardException", " boolean whenClauseWasTrue = false;", "", " // The prepared statement and the activation may already be available", " // if the trigger has been fired before in the same statement. (Only", " // happens with row triggers that are triggered by a statement that", " // touched multiple rows.) The WHEN clause and the trigger action have", " // their own prepared statement and activation. Fetch the correct set.", " ExecPreparedStatement ps = isWhen ? whenPS : actionPS;", " Activation spsActivation = isWhen", " ? spsWhenActivation : spsActionActivation;" ], "header": "@@ -120,11 +127,27 @@ abstract class GenericTriggerExecutor", "removed": [ "\tprotected void executeSPS(SPSDescriptor sps) throws StandardException" ] }, { "added": [ "", " // Cache the prepared statement and activation in case the", " // trigger fires multiple times.", " if (isWhen) {", " whenPS = ps;", " spsWhenActivation = spsActivation;", " } else {", " actionPS = ps;", " spsActionActivation = spsActivation;", " }" ], "header": "@@ -153,6 +176,16 @@ abstract class GenericTriggerExecutor", "removed": [] }, { "added": [ "", " if (isWhen)", " {", " // This is a WHEN clause. 
Expect a single BOOLEAN value", " // to be returned.", " ExecRow row = rs.getNextRow();", " if (SanityManager.DEBUG && row.nColumns() != 1) {", " SanityManager.THROWASSERT(", " \"Expected WHEN clause to have exactly \"", " + \"one column, found: \" + row.nColumns());", " }", "", " DataValueDescriptor value = row.getColumn(1);", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(value instanceof SQLBoolean);", " }", "", " whenClauseWasTrue =", " !value.isNull() && value.getBoolean();", "", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(rs.getNextRow() == null,", " \"WHEN clause returned more than one row\");", " }", " }", " else if (rs.returnsRows())" ], "header": "@@ -175,7 +208,32 @@ abstract class GenericTriggerExecutor", "removed": [ " if( rs.returnsRows())" ] } ] } ]
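The commit above wires the WHEN clause into the grammar and makes the trigger executors honor it. A hedged sketch of the resulting SQL surface, assuming Derby's documented trigger syntax (the WHEN search condition sits between FOR EACH ROW and the triggered statement; table, column and trigger names are illustrative):

```sql
CREATE TABLE accounts (id INT PRIMARY KEY, balance INT);
CREATE TABLE audit_log (account_id INT, old_balance INT, new_balance INT);

-- The WHEN search condition guards the trigger action: the action
-- runs only for rows where the condition evaluates to TRUE.
CREATE TRIGGER log_balance_drop
AFTER UPDATE OF balance ON accounts
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
WHEN (n.balance < o.balance)
INSERT INTO audit_log VALUES (o.id, o.balance, n.balance);
```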
derby-DERBY-534-211adc3c
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Remove logic in the readExternal() and writeExternal() methods of TriggerInfo and TriggerDescriptor that was originally put there for compatibility between different Derby versions. Since these objects are only persisted as part of a stored prepared statement, and Derby always clears all stored prepared statements on version change, there is no requirement that TriggerInfo and TriggerDescriptor instances written by one Derby version must be possible to read by other Derby versions. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1535654 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDescriptorGenerator.java", "hunks": [ { "added": [ " return new TriggerDescriptor(" ], "header": "@@ -378,12 +378,7 @@ public class DataDescriptorGenerator", "removed": [ " if (dataDictionary.checkVersion(", " DataDictionary.DD_VERSION_DERBY_10_11, null)) {", " // The dictionary version is recent enough to support the WHEN", " // clause (DERBY-534). Create a descriptor that uses the new", " // format.", " return new TriggerDescriptor(" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/TriggerDescriptor.java", "hunks": [ { "added": [], "header": "@@ -905,16 +905,6 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ " readExternal_v10_10(in);", " whenClauseText = (String) in.readObject();", " }", "", " /**", " * {@code readExternal()} method to be used if the data dictionary", " * version is 10.10 or lower.", " */", " void readExternal_v10_10(ObjectInput in)", " throws IOException, ClassNotFoundException {" ] }, { "added": [ " whenClauseText = (String) in.readObject();" ], "header": "@@ -948,7 +938,7 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ "\t\t" ] }, { "added": [], "header": "@@ -979,15 +969,6 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ " writeExternal_v10_10(out);", " out.writeObject(whenClauseText);", " }", "", " /**", " * {@code writeExternal()} method to be used if the data dictionary", " * version is 10.10 or lower.", " */", " void writeExternal_v10_10(ObjectOutput out) throws IOException {" ] }, { "added": [ " out.writeObject(whenClauseText);" ], "header": "@@ -1034,6 +1015,7 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/TriggerInfo.java", "hunks": [ { "added": [], "header": "@@ -137,12 +137,6 @@ public final class TriggerInfo implements Formatable", "removed": [ 
"", " // Used to write an array of changed column numbers and an array", " // with the names of the columns, but they are not used anymore.", " // Write dummy values to preserve the format.", " ArrayUtil.writeIntArray(out, (int[]) null);", " ArrayUtil.writeArray(out, (String[]) null);" ] }, { "added": [], "header": "@@ -158,10 +152,6 @@ public final class TriggerInfo implements Formatable", "removed": [ "", " // Discard fields that are no longer used.", " ArrayUtil.readIntArray(in);", " ArrayUtil.readStringArray(in);" ] } ] } ]
derby-DERBY-534-50734d82
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Make the code in TriggerDescriptor.getActionSPS() reusable for TriggerDescriptor.getWhenClauseSPS() so that the fixes for DERBY-4874 and Cloudscape bug 4821 also get applied to the WHEN clause. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1531226 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/TriggerDescriptor.java", "hunks": [ { "added": [ " return getSPS(lcc, false /* isWhenClause */);", " }", "", " /**", " * Get the SPS for the triggered SQL statement or the WHEN clause.", " *", " * @param lcc the LanguageConnectionContext to use", " * @param isWhenClause {@code true} if the SPS for the WHEN clause is", " * requested, {@code false} if it is the triggered SQL statement", " * @return the requested SPS", " * @throws StandardException if an error occurs", " */", " private SPSDescriptor getSPS(LanguageConnectionContext lcc,", " boolean isWhenClause)", " throws StandardException", " {", " DataDictionary dd = getDataDictionary();", " SPSDescriptor sps = isWhenClause ? whenSPS : actionSPS;", " UUID spsId = isWhenClause ? whenSPSId : actionSPSId;", " String originalSQL = isWhenClause ? whenClauseText : triggerDefinition;", "", " if (sps == null) {", " sps = dd.getSPSDescriptor(spsId);", "" ], "header": "@@ -333,18 +333,38 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ "\t\tif (actionSPS == null)", "\t\t{", "\t\t\tactionSPS = getDataDictionary().getSPSDescriptor(actionSPSId);", "\t\t" ] }, { "added": [ " if ((!sps.isValid() ||", " (sps.getPreparedStatement() == null)) &&", " dd.getSchemaDescriptor(sps.getCompSchemaId(), null));", " Visitable stmtnode =", " isWhenClause ? 
pa.parseSearchCondition(originalSQL)", " : pa.parseStatement(originalSQL);", "", " String newText = dd.getTriggerActionString(stmtnode,", " originalSQL,", " false);", "", " if (isWhenClause) {", " // The WHEN clause is not a full SQL statement, just a search", " // condition, so we need to turn it into a statement in order", " // to create an SPS.", " newText = \"VALUES \" + newText;", " }", "", " sps.setText(newText);", "", "", " return sps;" ], "header": "@@ -363,39 +383,49 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ "\t\tDataDictionary dd = getDataDictionary();", "\t\tif((!actionSPS.isValid() ||", "\t\t\t\t (actionSPS.getPreparedStatement() == null)) && ", " dd.getSchemaDescriptor(actionSPS.getCompSchemaId(), null));", "\t\t\tVisitable stmtnode = pa.parseStatement(triggerDefinition);", "\t\t\t\t\t", " actionSPS.setText(dd.getTriggerActionString(stmtnode,", "\t\t\t\t\ttriggerDefinition,", "\t\t\t\t\tfalse", "\t\t\t\t\t));", "\t\t", "\t\treturn actionSPS;" ] } ] } ]
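The diff above builds an SPS for the WHEN clause by prefixing the search condition with `VALUES `, since a bare search condition is not a full SQL statement. A minimal standalone sketch of that transformation step, with hypothetical class and method names (the real logic lives inside TriggerDescriptor.getSPS()):

```java
public class WhenClauseUtil {

    /**
     * Mirror of the "VALUES " + newText step in the diff above: a WHEN
     * search condition becomes a VALUES statement so it can back a stored
     * prepared statement, while a triggered SQL statement passes through
     * unchanged.
     */
    public static String toStatementText(String sqlText, boolean isWhenClause) {
        String text = sqlText.trim();
        return isWhenClause ? "VALUES " + text : text;
    }

    public static void main(String[] args) {
        // prints: VALUES new.balance < old.balance
        System.out.println(toStatementText("new.balance < old.balance", true));
    }
}
```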
derby-DERBY-534-51f910f6
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Disallow references to tables in the SESSION schema in the WHEN clause. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1527993 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/CreateTriggerNode.java", "hunks": [ { "added": [ " // Throw an exception if the WHEN clause or the triggered SQL", " // statement references a table in the SESSION schema.", " if (referencesSessionSchema()) {", " }" ], "header": "@@ -437,10 +437,11 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t\t//If attempting to reference a SESSION schema table (temporary or permanent) in the trigger action, throw an exception", "\t\tif (actionNode.referencesSessionSchema())", "" ] } ] } ]
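The check added above rejects statements like the following sketch, assuming Derby's documented temporary-table and trigger syntax (all names are illustrative):

```sql
CREATE TABLE t (x INT);
CREATE TABLE log (x INT);
DECLARE GLOBAL TEMPORARY TABLE session.scratch (x INT) NOT LOGGED;

-- Rejected: the WHEN clause references a SESSION schema table, which is
-- now disallowed just as it already was in the triggered SQL statement.
CREATE TRIGGER tr
AFTER INSERT ON t
FOR EACH STATEMENT
WHEN (EXISTS (SELECT * FROM session.scratch))
INSERT INTO log VALUES (1);
```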
derby-DERBY-534-6a17f800
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Reject references to generated columns in the NEW transition variables of BEFORE triggers, as required by the SQL standard. See also DERBY-3948. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1527489 13f79535-47bb-0310-9956-ffa450edef68
[]
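A hedged sketch of the rejected pattern, assuming Derby's documented generated-column and BEFORE-trigger syntax (names are illustrative, and the VALUES action is a placeholder):

```sql
CREATE TABLE t (a INT, b INT GENERATED ALWAYS AS (-a));

-- Rejected per the SQL standard: a BEFORE trigger may not reference a
-- generated column through the NEW transition variable, because the
-- generated value has not been computed yet at that point.
CREATE TRIGGER tr
NO CASCADE BEFORE INSERT ON t
REFERENCING NEW AS n
FOR EACH ROW
WHEN (n.b < 0)
VALUES 1;
```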
derby-DERBY-534-75fbb865
DERBY-534: Add a disabled test case for NPE with subquery in WHEN clause git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1525819 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-a50f8954
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Add more tests. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1540690 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-c955b823
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Add tests to verify that the WHEN clause operates with the privileges of the user that created the trigger, and that exceptions thrown in the WHEN clause are handled gracefully. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1531279 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-d23659a0
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Reuse code for dependency checking of the triggered SQL statement for checking dependencies in the WHEN clause. Add test to verify that attempts to drop columns referenced in the WHEN clause detect that the trigger is dependent on the columns. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1530887 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.compile.Visitable;" ], "header": "@@ -43,6 +43,7 @@ import org.apache.derby.iapi.sql.ResultSet;", "removed": [] }, { "added": [ "", " // First check for dependencies in the trigger's WHEN", " // clause, if there is one.", " UUID whenClauseId = trdToBeDropped.getWhenClauseId();", " boolean gotDropped = false;", " if (whenClauseId != null) {", " gotDropped = columnDroppedAndTriggerDependencies(", " trdToBeDropped, whenClauseId, true,", " cascade, columnName);", " }", "", " // If no dependencies were found in the WHEN clause,", " // we have to check if the triggered SQL statement", " // depends on the column being dropped. But if there", " // were dependencies and the trigger has already been", " // dropped, there is no point in looking for more", " // dependencies.", " if (!gotDropped) {", " columnDroppedAndTriggerDependencies(trdToBeDropped,", " trdToBeDropped.getActionId(), false,", " cascade, columnName);", " }" ], "header": "@@ -1770,8 +1771,28 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\t\t\t\tcolumnDroppedAndTriggerDependencies(trdToBeDropped,", "\t\t\t\t\t\t\t\tcascade, columnName);" ] }, { "added": [ " //", " // This method is called both on the WHEN clause (if one exists) and the", " // triggered SQL statement of the trigger action.", " //", " // Return true if the trigger was dropped by this method (if cascade is", " // true and it turns out the trigger depends on the column being dropped),", " // or false otherwise.", " private boolean columnDroppedAndTriggerDependencies(TriggerDescriptor trd,", " UUID spsUUID, boolean isWhenClause," ], "header": "@@ -1792,7 +1813,15 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\tprivate void columnDroppedAndTriggerDependencies(TriggerDescriptor trd," ] }, { "added": [ " 
dd.getSPSDescriptor(spsUUID).getCompSchemaId(),", " String originalSQL = isWhenClause ? trd.getWhenClauseText()", " : trd.getTriggerDefinition();", " Visitable node = isWhenClause ? pa.parseSearchCondition(originalSQL)", " : pa.parseStatement(originalSQL);" ], "header": "@@ -1800,11 +1829,14 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ " dd.getSPSDescriptor(trd.getActionId()).getCompSchemaId(),", "\t\tStatementNode stmtnode = (StatementNode)pa.parseStatement(trd.getTriggerDefinition());" ] }, { "added": [ " SPSDescriptor sps = isWhenClause ? trd.getWhenClauseSPS()", " : trd.getActionSPS(lcc);", " String newText = dd.getTriggerActionString(node,", " originalSQL,", " true);", "", " if (isWhenClause) {", " // The WHEN clause is not a full SQL statement, just a search", " // condition, so we need to turn it into a statement in order", " // to create an SPS.", " newText = \"VALUES \" + newText;", " }", "", " sps.setText(newText);" ], "header": "@@ -1832,20 +1864,29 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\tSPSDescriptor triggerActionSPSD = trd.getActionSPS(lcc);", "\t\t\ttriggerActionSPSD.setText(dd.getTriggerActionString(stmtnode, ", "\t\t\t\ttrd.getTriggerDefinition(),", "\t\t\t\ttrue", "\t\t\t\t));" ] }, { "added": [ " StatementNode stmtnode = (StatementNode) pa.parseStatement(newText);", " newCC.setCurrentDependent(sps.getPreparedStatement());" ], "header": "@@ -1862,9 +1903,9 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\tstmtnode = (StatementNode)pa.parseStatement(triggerActionSPSD.getText());", "\t\t\tnewCC.setCurrentDependent(triggerActionSPSD.getPreparedStatement());" ] }, { "added": [ " return true;" ], "header": "@@ -1914,7 +1955,7 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\t\t\treturn;" ] } ] } ]
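The dependency checking reused above matters for DDL like the following sketch, where only the WHEN clause (not the trigger action) references the dropped column. This assumes Derby's documented ALTER TABLE DROP COLUMN semantics; names are illustrative and the exact error text is omitted:

```sql
CREATE TABLE t (a INT, b INT);
CREATE TABLE log (a INT);

-- Only the WHEN clause references column B; the reused dependency
-- checking must still find the dependency there.
CREATE TRIGGER tr
AFTER UPDATE ON t
REFERENCING NEW AS n
FOR EACH ROW
WHEN (n.b > 0)
INSERT INTO log VALUES (n.a);

ALTER TABLE t DROP COLUMN b RESTRICT;  -- fails: trigger TR depends on B
ALTER TABLE t DROP COLUMN b CASCADE;   -- drops column B and trigger TR together
```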
derby-DERBY-534-d9878ca0
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements. Allow references to transition variables and transition tables in the WHEN clause. To support this, a new column WHENCLAUSETEXT is added to the SYS.SYSTRIGGERS table, and a corresponding field is added to the TriggerDescriptor class. The logic that transforms triggered SQL statements to internal syntax for accessing the transition variables and transition tables (via Java method calls and VTIs) is reused on the WHEN clause text so that the same transformation happens there. Upgrade logic is added so that the new column in SYS.SYSTRIGGERS will be created when a database is upgraded from an older version. The WHEN clause is now disabled in the parser when running in soft upgrade mode. An upgrade test case checks that the WHEN clause can only be used in a hard-upgraded database, and that a reasonable error is raised otherwise. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1526831 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/services/io/StoredFormatIds.java", "hunks": [ { "added": [ " * Class org.apache.derby.impl.sql.catalog.TriggerDescriptor, used for", " * trigger descriptors if the dictionary version is 10.11 or higher.", " */", " static public final int TRIGGER_DESCRIPTOR_V02_ID =", " (MIN_ID_2 + 315);", "", " /**", " * Class org.apache.derby.impl.sql.catalog.TriggerDescriptor_v10_10, used", " * for trigger descriptors if the dictionary version is 10.10 or lower." ], "header": "@@ -374,7 +374,15 @@ public interface StoredFormatIds {", "removed": [ " class org.apache.derby.impl.sql.catalog.TriggerDescriptorFinder" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/CatalogRowFactory.java", "hunks": [ { "added": [], "header": "@@ -33,7 +33,6 @@ import org.apache.derby.iapi.sql.execute.ExecRow;", "removed": [ "import org.apache.derby.iapi.util.StringUtil;" ] }, { "added": [ " public int getHeapColumnCount() throws StandardException" ], "header": "@@ -294,7 +293,7 @@ public abstract\tclass CatalogRowFactory", "removed": [ "\tpublic final int getHeapColumnCount()" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDescriptorGenerator.java", "hunks": [ { "added": [ " * @param whenClauseText the SQL text of the WHEN clause (may be null)", " * @return a trigger descriptor" ], "header": "@@ -350,6 +350,8 @@ public class DataDescriptorGenerator", "removed": [] }, { "added": [ " String newReferencingName,", " String whenClauseText", " if (dataDictionary.checkVersion(", " DataDictionary.DD_VERSION_DERBY_10_11, null)) {", " // The dictionary version is recent enough to support the WHEN", " // clause (DERBY-534). 
Create a descriptor that uses the new", " // format.", " return new TriggerDescriptor(" ], "header": "@@ -372,10 +374,16 @@ public class DataDescriptorGenerator", "removed": [ "\t\tString\t\t\t\tnewReferencingName", "\t\treturn new TriggerDescriptor(" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/TriggerDescriptor.java", "hunks": [ { "added": [ " * <li> public String getWhenClauseText();" ], "header": "@@ -75,6 +75,7 @@ import java.io.IOException;", "removed": [] }, { "added": [ " private String whenClauseText;" ], "header": "@@ -123,6 +124,7 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] }, { "added": [ " * @param whenClauseText the SQL text of the WHEN clause, or {@code null}", " * if there is no WHEN clause" ], "header": "@@ -154,6 +156,8 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] }, { "added": [ " String newReferencingName,", " String whenClauseText" ], "header": "@@ -175,7 +179,8 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [ "\t\tString\t\t\t\tnewReferencingName" ] }, { "added": [ " this.whenClauseText = whenClauseText;" ], "header": "@@ -197,6 +202,7 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] }, { "added": [ " /**", " * Get the SQL text of the WHEN clause.", " * @return SQL text for the WHEN clause, or {@code null} if there is", " * no WHEN clause", " */", " public String getWhenClauseText() {", " return whenClauseText;", " }", "" ], "header": "@@ -403,6 +409,15 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] }, { "added": [ " readExternal_v10_10(in);", " whenClauseText = (String) in.readObject();", " }", "", " /**", " * {@code readExternal()} method to be used if the data dictionary", " * version is 10.10 or lower.", " */", " void readExternal_v10_10(ObjectInput in)", " throws IOException, ClassNotFoundException {" ], "header": "@@ -859,6 +874,16 
@@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] }, { "added": [ " writeExternal_v10_10(out);", " out.writeObject(whenClauseText);", " }", "", " /**", " * {@code writeExternal()} method to be used if the data dictionary", " * version is 10.10 or lower.", " */", " void writeExternal_v10_10(ObjectOutput out) throws IOException {" ], "header": "@@ -923,6 +948,15 @@ public class TriggerDescriptor extends UniqueSQLObjectDescriptor", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java", "hunks": [ { "added": [], "header": "@@ -121,7 +121,6 @@ import org.apache.derby.iapi.sql.dictionary.TableDescriptor;", "removed": [ "import org.apache.derby.iapi.sql.dictionary.UniqueTupleDescriptor;" ] }, { "added": [], "header": "@@ -4747,34 +4746,6 @@ public final class\tDataDictionaryImpl", "removed": [ "\t/**", "\t * Get every trigger in this database.", "\t * Note that this list of TriggerDescriptors is", "\t * not going to be the same objects that are typically", "\t * cached off of the table descriptors, so this will", "\t * most likely instantiate some duplicate objects.", "\t *", "\t * @return the list of descriptors", "\t *", "\t * @exception StandardException\t\tThrown on failure", "\t */", " private TriggerDescriptorList getAllTriggerDescriptors()", "\t\tthrows StandardException", "\t{", "\t\tTabInfoImpl\t\t\t\t\tti = getNonCoreTI(SYSTRIGGERS_CATALOG_NUM);", "", " TriggerDescriptorList list = new TriggerDescriptorList();", "", "\t\tgetDescriptorViaHeap(", " null,", "\t\t\t\t\t\t(ScanQualifier[][]) null,", "\t\t\t\t\t\tti,", "\t\t\t\t\t\t(TupleDescriptor) null,", " list,", " TriggerDescriptor.class);", "\t\treturn list;", "\t}", "" ] }, { "added": [], "header": "@@ -5457,11 +5428,6 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tif (td == null)", "\t\t{", "\t\t\treturn getAllTriggerDescriptors();", "\t\t}", "" ] }, { "added": [ " 
ti.getCatalogRowFactory().makeEmptyRowForCurrentVersion()," ], "header": "@@ -8396,7 +8362,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\t\tti.getCatalogRowFactory().makeEmptyRow()," ] }, { "added": [ " ExecRow templateRow = rowFactory.makeEmptyRowForCurrentVersion();" ], "header": "@@ -8464,7 +8430,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tExecRow\t\t\t\ttemplateRow = rowFactory.makeEmptyRow();" ] }, { "added": [ " cdArray[ix] = makeColumnDescriptor(currentColumn, columnID, td);" ], "header": "@@ -8508,7 +8474,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\t\tcdArray[ix] = makeColumnDescriptor( currentColumn, ix + 1, td );" ] }, { "added": [ " ExecRow templateRow = rowFactory.makeEmptyRowForCurrentVersion();" ], "header": "@@ -8531,7 +8497,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tExecRow\t\t\t\ttemplateRow = rowFactory.makeEmptyRow();" ] }, { "added": [ " baseRow = rf.makeEmptyRowForCurrentVersion();" ], "header": "@@ -8926,7 +8892,7 @@ public final class\tDataDictionaryImpl", "removed": [ "\t\tbaseRow = rf.makeEmptyRow();" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/catalog/SYSTRIGGERSRowFactory.java", "hunks": [ { "added": [], "header": "@@ -21,7 +21,6 @@", "removed": [ "import org.apache.derby.iapi.types.DataTypeDescriptor;" ] }, { "added": [ " public static final int SYSTRIGGERS_WHENCLAUSETEXT = 18;", " public static final int SYSTRIGGERS_COLUMN_COUNT = SYSTRIGGERS_WHENCLAUSETEXT;" ], "header": "@@ -81,8 +80,9 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\tpublic\tstatic\tfinal\tint\t\tSYSTRIGGERS_COLUMN_COUNT = SYSTRIGGERS_NEWREFERENCINGNAME;" ] }, { "added": [ " private final DataDictionary dataDictionary;", "", " SYSTRIGGERSRowFactory(", " DataDictionary dd,", " UUIDFactory uuidf,", " ExecutionFactory ef,", " DataValueFactory dvf)", " throws StandardException", " this.dataDictionary = dd;" ], "header": "@@ -110,14 +110,22 @@ 
public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\tSYSTRIGGERSRowFactory(UUIDFactory uuidf, ExecutionFactory ef, DataValueFactory dvf)" ] }, { "added": [ " @Override", " return makeRow(td, getHeapColumnCount());", " }", "", " @Override", " public ExecRow makeEmptyRowForCurrentVersion() throws StandardException {", " return makeRow(null, SYSTRIGGERS_COLUMN_COUNT);", " }", "", " /**", " * Helper method that contains common logic for {@code makeRow()} and", " * {@code makeEmptyRowForCurrentVersion()}. Creates a row for the", " * SYSTRIGGERS conglomerate.", " *", " * @param td the {@code TriggerDescriptor} to create a row from (can be", " * {@code null} if the returned row should be empty)", " * @param columnCount the number of columns in the returned row (used for", " * trimming off columns in soft upgrade mode to match the format in", " * the old dictionary version)", " * @return a row for the SYSTRIGGERS conglomerate", " * @throws StandardException if an error happens when creating the row", " */", " private ExecRow makeRow(TupleDescriptor td, int columnCount)", " throws StandardException {" ], "header": "@@ -134,13 +142,33 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "", "\t\tDataTypeDescriptor\t\tdtd;", "\t\tExecRow \t\t\t\trow;", "\t\tDataValueDescriptor\t\tcol;" ] }, { "added": [ " String whenClauseText = null;" ], "header": "@@ -158,6 +186,7 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [] }, { "added": [ " whenClauseText = triggerDescriptor.getWhenClauseText();", " ExecRow row = getExecutionFactory().getValueRow(columnCount);" ], "header": "@@ -185,10 +214,11 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\t\trow = getExecutionFactory().getValueRow(SYSTRIGGERS_COLUMN_COUNT);" ] }, { "added": [ " /* 18th column is WHENCLAUSETEXT */", " if (row.nColumns() >= 18) {", " // This column is present only if the data dictionary version is", " // 
10.11 or higher.", " row.setColumn(18, dvf.getLongvarcharDataValue(whenClauseText));", " }", "" ], "header": "@@ -243,6 +273,13 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [] }, { "added": [ " // The expected number of columns depends on the version of the", " // data dictionary. The WHENCLAUSETEXT column was added in version", " // 10.11 (DERBY-534).", " int expectedCols =", " dd.checkVersion(DataDictionary.DD_VERSION_DERBY_10_11, null)", " ? SYSTRIGGERS_COLUMN_COUNT", " : (SYSTRIGGERS_COLUMN_COUNT - 1);", "", " SanityManager.ASSERT(row.nColumns() == expectedCols," ], "header": "@@ -296,7 +333,15 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\t\t\tSanityManager.ASSERT(row.nColumns() == SYSTRIGGERS_COLUMN_COUNT, " ] }, { "added": [ " // 13th column is TRIGGERDEFINITION (longvarchar)" ], "header": "@@ -372,7 +417,7 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\t\t// 13th column is TRIGGERDEFINITION (longvarhar)" ] }, { "added": [ " // 18th column is WHENCLAUSETEXT (longvarchar)", " String whenClauseText = null;", " if (row.nColumns() >= 18) {", " // This column is present only if the data dictionary version is", " // 10.11 or higher.", " col = row.getColumn(18);", " whenClauseText = col.getString();", " }", "" ], "header": "@@ -392,6 +437,15 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [] }, { "added": [ " newReferencingName,", " whenClauseText" ], "header": "@@ -410,7 +464,8 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ "\t\t\t\t\t\t\t\t\tnewReferencingName" ] }, { "added": [ " SystemColumnImpl.getColumn(\"WHENCLAUSETEXT\",", " Types.LONGVARCHAR, true, Integer.MAX_VALUE)," ], "header": "@@ -450,7 +505,8 @@ public class SYSTRIGGERSRowFactory extends CatalogRowFactory", "removed": [ " " ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/CreateTriggerNode.java", "hunks": [ { "added": [ 
"import java.util.ArrayList;" ], "header": "@@ -22,6 +22,7 @@", "removed": [] }, { "added": [ "import org.apache.derby.iapi.sql.compile.Visitable;" ], "header": "@@ -32,6 +33,7 @@ import org.apache.derby.iapi.reference.SQLState;", "removed": [] }, { "added": [ " private final String originalWhenText;", " private final String originalActionText;", " private final int whenOffset;", " private final int actionOffset;" ], "header": "@@ -63,8 +65,10 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\tprivate\tString\t\t\t\toriginalActionText; // text w/o trim of spaces", "\tprivate\tint\t\t\t\t\tactionOffset;" ] }, { "added": [ " * @param whenOffset offset of start of WHEN clause" ], "header": "@@ -227,6 +231,7 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [] }, { "added": [ " int whenOffset," ], "header": "@@ -246,6 +251,7 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [] }, { "added": [ " this.originalWhenText = whenText;", " this.whenText = (whenText == null) ? null : whenText.trim();", " this.whenOffset = whenOffset;", " this.actionText = (actionText == null) ? null : actionText.trim();" ], "header": "@@ -263,10 +269,12 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ " this.whenText = (whenText == null) ? null : (\"VALUES \" + whenText);", " this.actionText = (actionText == null) ? null : actionText;" ] }, { "added": [ " ContextManager cm = getContextManager();", " whenClause = whenClause.bindExpression(", " new FromList(cm), new SubqueryList(cm),", " new ArrayList<AggregateNode>(0));" ], "header": "@@ -384,12 +392,13 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t\t\t/* when clause is always null", "\t\t\t\twhenClause.bind();", "\t\t\t*/" ] }, { "added": [ " ** 2) convert trigger action text and WHEN clause text. e.g." ], "header": "@@ -470,7 +479,7 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t** 2) convert trigger action text. e.g. 
" ] }, { "added": [ " String transformedWhenText = null;" ], "header": "@@ -515,7 +524,7 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t\tint start = 0;" ] }, { "added": [ "", " // If there is a WHEN clause, we need to transform its text too.", " if (whenClause != null) {", " transformedWhenText =", " getDataDictionary().getTriggerActionString(", " whenClause, oldTableName, newTableName,", " originalWhenText, referencedColInts,", " referencedColsInTriggerAction, whenOffset,", " triggerTableDescriptor, triggerEventMask, true);", " }", "" ], "header": "@@ -576,6 +585,17 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [] }, { "added": [ " transformedActionText = transformStatementTriggerText(", " actionNode, originalActionText, actionOffset);", " if (whenClause != null) {", " transformedWhenText = transformStatementTriggerText(", " whenClause, originalWhenText, whenOffset);", " }" ], "header": "@@ -587,63 +607,12 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [ "\t\t\t//Total Number of columns in the trigger table", "\t\t\tint numberOfColsInTriggerTable = triggerTableDescriptor.getNumberOfColumns();", " StringBuilder newText = new StringBuilder();", "\t\t\t/*", "\t\t\t** For a statement trigger, we find all FromBaseTable nodes. 
If", "\t\t\t** the from table is NEW or OLD (or user designated alternates", "\t\t\t** REFERENCING), we turn them into a trigger table VTI.", "\t\t\t*/", "\t\t\tCollectNodesVisitor<FromBaseTable> visitor = new CollectNodesVisitor<FromBaseTable>(FromBaseTable.class);", "\t\t\tactionNode.accept(visitor);", "\t\t\tList<FromBaseTable> tabs = visitor.getList();", "\t\t\tCollections.sort(tabs, OFFSET_COMPARATOR);", "\t\t\tfor (int i = 0; i < tabs.size(); i++)", "\t\t\t{", "\t\t\t\tFromBaseTable fromTable = tabs.get(i);", "\t\t\t\tString baseTableName = fromTable.getBaseTableName();", "\t\t\t\tif ((baseTableName == null) ||", "\t\t\t\t\t((oldTableName == null || !oldTableName.equals(baseTableName)) &&", "\t\t\t\t\t(newTableName == null || !newTableName.equals(baseTableName))))", "\t\t\t\t{", "\t\t\t\t\tcontinue;", "\t\t\t\t}", "\t\t\t\tint tokBeginOffset = fromTable.getTableNameField().getBeginOffset();", "\t\t\t\tint tokEndOffset = fromTable.getTableNameField().getEndOffset();", "\t\t\t\tif (tokBeginOffset == -1)", "\t\t\t\t{", "\t\t\t\t\tcontinue;", "\t\t\t\t}", "", "\t\t\t\tcheckInvalidTriggerReference(baseTableName);", "", "\t\t\t\tnewText.append(originalActionText.substring(start, tokBeginOffset-actionOffset));", "\t\t\t\tnewText.append(baseTableName.equals(oldTableName) ?", "\t\t\t\t\t\t\t\t\"new org.apache.derby.catalog.TriggerOldTransitionRows() \" :", "\t\t\t\t\t\t\t\t\"new org.apache.derby.catalog.TriggerNewTransitionRows() \");", "\t\t\t\t/*", "\t\t\t\t** If the user supplied a correlation, then just", "\t\t\t\t** pick it up automatically; otherwise, supply", "\t\t\t\t** the default.", "\t\t\t\t*/", " if (fromTable.getCorrelationName() == null)", "\t\t\t\t{", "\t\t\t\t\tnewText.append(baseTableName).append(\" \");", "\t\t\t\t}", "\t\t\t\tstart=tokEndOffset-actionOffset+1;", "\t\t\t\t//If we are dealing with statement trigger, then we will read ", "\t\t\t\t//all the columns from the trigger table since trigger will be", "\t\t\t\t//fired for any of the columns in 
the trigger table.", "\t\t\t\treferencedColInts= new int[numberOfColsInTriggerTable];", "\t\t\t\tfor (int j=0; j < numberOfColsInTriggerTable; j++)", "\t\t\t\t\treferencedColInts[j]=j+1;", "\t\t\t}", "\t\t\tif (start < originalActionText.length())", "\t\t\t{", "\t\t\t\tnewText.append(originalActionText.substring(start));", "\t\t\t}", "\t\t\ttransformedActionText = newText.toString();" ] }, { "added": [ " if (whenClause != null && !transformedWhenText.equals(whenText)) {", " regenNode = true;", " whenText = transformedWhenText;", " whenClause = parseSearchCondition(whenText, true);", " }", "" ], "header": "@@ -662,6 +631,12 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [] }, { "added": [ " /**", " * Transform the WHEN clause or the triggered SQL statement of a", " * statement trigger from its original shape to internal syntax where", " * references to transition tables are replaced with VTIs that return", " * the before or after image of the changed rows.", " *", " * @param node the syntax tree of the WHEN clause or the triggered", " * SQL statement", " * @param originalText the original text of the WHEN clause or the", " * triggered SQL statement", " * @param offset the offset of the WHEN clause or the triggered SQL", " * statement within the CREATE TRIGGER statement", " * @return internal syntax for accessing before or after image of", " * the changed rows", " * @throws StandardException if an error happens while performing the", " * transformation", " */", " private String transformStatementTriggerText(", " Visitable node, String originalText, int offset)", " throws StandardException", " {", " int start = 0;", " StringBuilder newText = new StringBuilder();", "", " // For a statement trigger, we find all FromBaseTable nodes. 
If", " // the from table is NEW or OLD (or user designated alternates", " // REFERENCING), we turn them into a trigger table VTI.", " CollectNodesVisitor<FromBaseTable> visitor =", " new CollectNodesVisitor<FromBaseTable>(FromBaseTable.class);", " node.accept(visitor);", " List<FromBaseTable> tabs = visitor.getList();", " Collections.sort(tabs, OFFSET_COMPARATOR);", " for (FromBaseTable fromTable : tabs) {", " String baseTableName = fromTable.getBaseTableName();", " if (baseTableName == null", " || (!baseTableName.equals(oldTableName)", " && !baseTableName.equals(newTableName))) {", " // baseTableName is not the NEW or OLD table, so no need", " // to do anything. Skip this table.", " continue;", " }", "", " int tokBeginOffset = fromTable.getTableNameField().getBeginOffset();", " int tokEndOffset = fromTable.getTableNameField().getEndOffset();", " if (tokBeginOffset == -1) {", " // Unknown offset. Skip this table.", " continue;", " }", "", " // Check if this transition table is allowed in this trigger type.", " checkInvalidTriggerReference(baseTableName);", "", " // Replace the transition table name with a VTI.", " newText.append(originalText, start, tokBeginOffset - offset);", " newText.append(baseTableName.equals(oldTableName)", " ? 
\"new org.apache.derby.catalog.TriggerOldTransitionRows() \"", " : \"new org.apache.derby.catalog.TriggerNewTransitionRows() \");", "", " // If the user supplied a correlation, then just", " // pick it up automatically; otherwise, supply", " // the default.", " if (fromTable.getCorrelationName() == null) {", " newText.append(baseTableName).append(' ');", " }", "", " start = tokEndOffset - offset + 1;", "", " // If we are dealing with statement trigger, then we will read", " // all the columns from the trigger table since trigger will be", " // fired for any of the columns in the trigger table.", " int numberOfColsInTriggerTable =", " triggerTableDescriptor.getNumberOfColumns();", " referencedColInts = new int[numberOfColsInTriggerTable];", " for (int j = 0; j < numberOfColsInTriggerTable; j++) {", " referencedColInts[j] = j + 1;", " }", " }", "", " newText.append(originalText, start, originalText.length());", "", " return newText.toString();", " }", "" ], "header": "@@ -693,6 +668,89 @@ class CreateTriggerNode extends DDLStatementNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/ParserImpl.java", "hunks": [ { "added": [ "import java.io.StringReader;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] }, { "added": [ " return parseStatementOrSearchCondition(", " statementSQLText, paramDefaults, true);", " }", "", " /**", " * Parse a full SQL statement or a fragment that represents a", " * {@code <search condition>}.", " *", " * @param sql the SQL statement or fragment to parse", " * @param paramDefaults parameter defaults to pass on to the parser", " * in the case where {@code sql} is a full SQL statement", " * @param isStatement {@code true} if {@code sql} is a full SQL statement,", " * {@code false} if it is a fragment", " * @return parse tree for the SQL", " * @throws StandardException if an error happens during parsing", " */", " private Visitable parseStatementOrSearchCondition(", " String sql, Object[] paramDefaults, boolean 
isStatement)", " throws StandardException", " {", " StringReader sqlText = new StringReader(sql);" ], "header": "@@ -126,8 +127,27 @@ public class ParserImpl implements Parser", "removed": [ "", "\t\tjava.io.Reader sqlText = new java.io.StringReader(statementSQLText);" ] }, { "added": [ " SQLtext = sql;", " SQLParser p = getParser();", " return isStatement", " ? p.Statement(sql, paramDefaults)", " : p.SearchCondition(sql);" ], "header": "@@ -140,12 +160,15 @@ public class ParserImpl implements Parser", "removed": [ "\t\tSQLtext = statementSQLText;", "\t\t return getParser().Statement(statementSQLText, paramDefaults);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/QueryTreeNode.java", "hunks": [ { "added": [ " return (StatementNode)", " parseStatementOrSearchCondition(sql, internalSQL, true);", " }", "", " /**", " * Parse an SQL fragment that represents a {@code <search condition>}.", " *", " * @param sql a fragment of an SQL statement", " * @param internalSQL {@code true} if the SQL fragment is allowed to", " * contain internal syntax, {@code false} otherwise", " * @return a {@code ValueNode} representing the parse tree of the", " * SQL fragment", " * @throws StandardException if an error happens while parsing", " */", " ValueNode parseSearchCondition(String sql, boolean internalSQL)", " throws StandardException", " {", " return (ValueNode)", " parseStatementOrSearchCondition(sql, internalSQL, false);", " }", "", " /**", " * Parse a full SQL statement or a fragment representing a {@code <search", " * condition>}. 
This is a worker method that contains common logic for", " * {@link #parseStatement} and {@link #parseSearchCondition}.", " *", " * @param sql the SQL statement or fragment to parse", " * @param internalSQL {@code true} if it is allowed to contain internal", " * syntax, {@code false} otherwise", " * @param isStatement {@code true} if {@code sql} is a full SQL statement,", " * {@code false} if it is a fragment", " * @return a parse tree", " * @throws StandardException if an error happens while parsing", " */", " private Visitable parseStatementOrSearchCondition(", " String sql, boolean internalSQL, boolean isStatement)", " throws StandardException", " {" ], "header": "@@ -737,6 +737,44 @@ public abstract class QueryTreeNode implements Visitable", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/CreateTriggerConstantAction.java", "hunks": [ { "added": [ " private final String originalWhenText;" ], "header": "@@ -75,6 +75,7 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [] }, { "added": [ " * @param originalWhenText The original user text of the WHEN clause (may be null)" ], "header": "@@ -108,6 +109,7 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [] }, { "added": [ " String originalWhenText," ], "header": "@@ -131,6 +133,7 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [] }, { "added": [ " this.originalWhenText = originalWhenText;" ], "header": "@@ -155,6 +158,7 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [] }, { "added": [ " newReferencingName,", " originalWhenText);" ], "header": "@@ -319,7 +323,8 @@ class CreateTriggerConstantAction extends DDLSingleTableConstantAction", "removed": [ "\t\t\t\t\t\t\t\t\tnewReferencingName);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java", "hunks": [ { "added": [ " * @param originalWhenText 
The original user text of the WHEN clause (may be null)" ], "header": "@@ -987,6 +987,7 @@ public class GenericConstantActionFactory", "removed": [] }, { "added": [ " String originalWhenText," ], "header": "@@ -1010,6 +1011,7 @@ public class GenericConstantActionFactory", "removed": [] } ] } ]
derby-DERBY-534-db60062a
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements Move common logic for executing WHEN clause and trigger action to the base class GenericTriggerExecutor. In addition to reducing code duplication, the change makes row triggers reuse the prepared statement for the WHEN clause (same as it already does for the trigger action), and it makes statement triggers not leave the before and after result sets open if the WHEN clause evaluates to false. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1524645 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/GenericTriggerExecutor.java", "hunks": [ { "added": [ " private SPSDescriptor getAction() throws StandardException" ], "header": "@@ -112,7 +112,7 @@ abstract class GenericTriggerExecutor", "removed": [ "\tprotected SPSDescriptor getAction() throws StandardException" ] }, { "added": [ " private boolean executeSPS(SPSDescriptor sps, boolean isWhen)" ], "header": "@@ -134,7 +134,7 @@ abstract class GenericTriggerExecutor", "removed": [ " final boolean executeSPS(SPSDescriptor sps, boolean isWhen)" ] }, { "added": [ " * Cleanup after executing the SPS for the WHEN clause and trigger action." ], "header": "@@ -301,7 +301,7 @@ abstract class GenericTriggerExecutor", "removed": [ " * Cleanup after executing the SPS for the trigger action." ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/StatementTriggerExecutor.java", "hunks": [ { "added": [ " try {", " executeWhenClauseAndAction();", " } finally {", " clearSPS();", " tec.clearTrigger();" ], "header": "@@ -78,15 +78,11 @@ class StatementTriggerExecutor extends GenericTriggerExecutor", "removed": [ " // Execute the trigger action only if the WHEN clause returns", " // TRUE or there is no WHEN clause.", " if (executeWhenClause()) {", " try {", " executeSPS(getAction(), false);", " } finally {", " clearSPS();", " tec.clearTrigger();", " }" ] } ] } ]
derby-DERBY-534-df73e361
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements Forbid CREATE TRIGGER statements whose WHEN clause contains a parameter marker or returns a non-BOOLEAN value. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1528401 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-534-ea25568c
DERBY-534: Support use of the WHEN clause in CREATE TRIGGER statements Add dblook support for triggers with a WHEN clause. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1534988 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/tools/org/apache/derby/impl/tools/dblook/DB_Trigger.java", "hunks": [ { "added": [ " // Column name constants for SYS.SYSTRIGGERS", " private static final String TRIGGERNAME = \"TRIGGERNAME\";", " private static final String SCHEMAID = \"SCHEMAID\";", " private static final String EVENT = \"EVENT\";", " private static final String FIRINGTIME = \"FIRINGTIME\";", " private static final String TYPE = \"TYPE\";", " private static final String TABLEID = \"TABLEID\";", " private static final String REFERENCEDCOLUMNS = \"REFERENCEDCOLUMNS\";", " private static final String TRIGGERDEFINITION = \"TRIGGERDEFINITION\";", " private static final String REFERENCINGOLD = \"REFERENCINGOLD\";", " private static final String REFERENCINGNEW = \"REFERENCINGNEW\";", " private static final String OLDREFERENCINGNAME = \"OLDREFERENCINGNAME\";", " private static final String NEWREFERENCINGNAME = \"NEWREFERENCINGNAME\";", " private static final String WHENCLAUSETEXT = \"WHENCLAUSETEXT\";", "", " /** ************************************************", " * @param supportsWhenClause Tells whether the database supports the", " * trigger WHEN clause.", " public static void doTriggers(Connection conn, boolean supportsWhenClause)", " ResultSet rs = stmt.executeQuery(", " \"SELECT * FROM SYS.SYSTRIGGERS WHERE STATE != 'D'\");", " dblook.expandDoubleQuotes(rs.getString(TRIGGERNAME)));", " String trigSchema = dblook.lookupSchemaId(rs.getString(SCHEMAID));", " String tableName = dblook.lookupTableId(rs.getString(TABLEID));", "", " // Get the WHEN clause text, if there is a WHEN clause. The", " // WHENCLAUSETEXT column is only present if the data dictionary", " // version is 10.11 or higher (DERBY-534).", " String whenClause =", " supportsWhenClause ? 
rs.getString(WHENCLAUSETEXT) : null;", " if (!dblook.stringContainsTargetTable(", " rs.getString(TRIGGERDEFINITION)) &&", " !dblook.stringContainsTargetTable(whenClause) &&" ], "header": "@@ -23,53 +23,70 @@ package org.apache.derby.impl.tools.dblook;", "removed": [ "import java.sql.PreparedStatement;", "import java.util.HashMap;", "import java.util.StringTokenizer;", "", "\t/* ************************************************", "\t * @return The DDL for the triggers has been written", "\t * to output via Logs.java.", "\tpublic static void doTriggers (Connection conn)", "\t\tResultSet rs = stmt.executeQuery(\"SELECT TRIGGERNAME, SCHEMAID, \" +", "\t\t\t\"EVENT, FIRINGTIME, TYPE, TABLEID, REFERENCEDCOLUMNS, \" + ", "\t\t\t\"TRIGGERDEFINITION, REFERENCINGOLD, REFERENCINGNEW, OLDREFERENCINGNAME, \" +", "\t\t\t\"NEWREFERENCINGNAME FROM SYS.SYSTRIGGERS WHERE STATE != 'D'\");", "\t\t\t\tdblook.expandDoubleQuotes(rs.getString(1)));", "\t\t\tString trigSchema = dblook.lookupSchemaId(rs.getString(2));", "\t\t\tString tableName = dblook.lookupTableId(rs.getString(6));", "\t\t\tif (!dblook.stringContainsTargetTable(rs.getString(8)) &&" ] }, { "added": [ " tableName, whenClause, rs);" ], "header": "@@ -80,7 +97,7 @@ public class DB_Trigger {", "removed": [ "\t\t\t\ttableName, rs);" ] }, { "added": [ " /** ************************************************", " * @param whenClause The WHEN clause text (possibly {@code null}).", " String whenClause, ResultSet aTrig) throws SQLException", " StringBuilder sb = new StringBuilder(\"CREATE TRIGGER \");", " if (aTrig.getString(FIRINGTIME).charAt(0) == 'A') {", " } else {", " }", " String event = aTrig.getString(EVENT);", " switch (event.charAt(0)) {", " String updateCols = aTrig.getString(REFERENCEDCOLUMNS);" ], "header": "@@ -94,37 +111,40 @@ public class DB_Trigger {", "removed": [ "\t/* ************************************************", "\t\tResultSet aTrig) throws SQLException", "\t\tStringBuffer sb = new StringBuffer (\"CREATE 
TRIGGER \");", "\t\tif (aTrig.getString(4).charAt(0) == 'A')", "\t\telse", "\t\tswitch (aTrig.getString(3).charAt(0)) {", "\t\t\t\t\t\tString updateCols = aTrig.getString(7);" ] }, { "added": [ " aTrig.getString(TABLEID), updateCols));", " event, (String)null);" ], "header": "@@ -145,12 +165,12 @@ public class DB_Trigger {", "removed": [ "\t\t\t\t\t\t\t\taTrig.getString(6), updateCols));", "\t\t\t\t\t\t\taTrig.getString(3), (String)null);" ] }, { "added": [ " char trigType = aTrig.getString(TYPE).charAt(0);", " String oldReferencing = aTrig.getString(OLDREFERENCINGNAME);", " String newReferencing = aTrig.getString(NEWREFERENCINGNAME);", " if (aTrig.getBoolean(REFERENCINGOLD)) {" ], "header": "@@ -159,12 +179,12 @@ public class DB_Trigger {", "removed": [ "\t\tchar trigType = aTrig.getString(5).charAt(0);", "\t\tString oldReferencing = aTrig.getString(11);", "\t\tString newReferencing = aTrig.getString(12);", "\t\t\tif (aTrig.getBoolean(9)) {" ] }, { "added": [ " if (aTrig.getBoolean(REFERENCINGNEW)) {" ], "header": "@@ -174,7 +194,7 @@ public class DB_Trigger {", "removed": [ "\t\t\tif (aTrig.getBoolean(10)) {" ] } ] }, { "file": "java/tools/org/apache/derby/tools/dblook.java", "hunks": [ { "added": [], "header": "@@ -30,7 +30,6 @@ import java.sql.Connection;", "removed": [ "import java.sql.SQLWarning;" ] }, { "added": [ " boolean at10_11 = atVersion(conn, 10, 11);" ], "header": "@@ -519,6 +518,7 @@ public final class dblook {", "removed": [] }, { "added": [ " DB_Trigger.doTriggers(this.conn, at10_11);" ], "header": "@@ -544,7 +544,7 @@ public final class dblook {", "removed": [ "\t\t\tDB_Trigger.doTriggers(this.conn);" ] } ] } ]
derby-DERBY-5341-f30426b5
DERBY-1903 Convert largedata/LobLimits.java to junit DERBY-5308 Investigate if largeData/LobLimits.java can be run for client Patch derby-1903_client_diff.txt enables client for largedata.LobLimitsLite. It disables the test cases that fail with client: DERBY-5338 client gives wrong SQLState and protocol error inserting a 4GB clob. Should be 22003 DERBY-5341 : Client allows clob larger than column width to be inserted. DERBY-5317 cannot use setCharacterStream with value from C/Blob.getCharacterStream Also fixes the test to fail if we do not get an exception for negative test cases and fixes a javadoc warning. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1147335 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5342-400ea2e0
DERBY-5342: make ScriptTestCase support ij properties This patch was contributed by Houx Zhang (houxzhang at gmail dot com) This patch refactors the ij utilMain class so that the code that supports the properties: - ij.showNoConnectionsAtStart - ij.showNoCountForSelect is extracted into a separate method so that it can be called from goScript. In addition, ScriptTestCase is modified so that it sets these properties to 'true' (other test cases may subsequently set them to 'false' for testing purposes). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1151691 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/tools/org/apache/derby/impl/tools/ij/utilMain.java", "hunks": [ { "added": [ "\t\t\tsupportIJProperties(connEnv[currCE]);" ], "header": "@@ -239,23 +239,7 @@ public class utilMain implements java.security.PrivilegedAction {", "removed": [ " \t\t//check if the property is set to not show select count and set the static variable", " \t\t//accordingly. ", " \t\tboolean showNoCountForSelect = Boolean.valueOf(util.getSystemProperty(\"ij.showNoCountForSelect\")).booleanValue();", " \t\tJDBCDisplayUtil.showSelectCount = !showNoCountForSelect;", "", " \t\t//check if the property is set to not show initial connections and accordingly set the", " \t\t//static variable.", " \t\tboolean showNoConnectionsAtStart = Boolean.valueOf(util.getSystemProperty(\"ij.showNoConnectionsAtStart\")).booleanValue();", "", " \t\tif (!(showNoConnectionsAtStart)) {", " \t\ttry {", " \t\t\tijResult result = ijParser.showConnectionsMethod(true);", " \t\t\t\t\tdisplayResult(out,result,connEnv[currCE].getConnection());", " \t\t} catch (SQLException ex) {", " \t\t\thandleSQLException(out,ex);", " \t\t}", " \t\t}" ] }, { "added": [ "\t connEnv[0].addSession(conn, (String) null);", " ijParser.setConnection(connEnv[0], (numConnections > 1));", "\t supportIJProperties(connEnv[0]); ", "\t \t\t", "\tprivate void supportIJProperties(ConnectionEnv env) {", "\t //check if the property is set to not show select count and set the static variable", " //accordingly. 
", " boolean showNoCountForSelect = Boolean.valueOf(util.getSystemProperty(\"ij.showNoCountForSelect\")).booleanValue();", " JDBCDisplayUtil.showSelectCount = !showNoCountForSelect;", "", " //check if the property is set to not show initial connections and accordingly set the", " //static variable.", " boolean showNoConnectionsAtStart = Boolean.valueOf(util.getSystemProperty(\"ij.showNoConnectionsAtStart\")).booleanValue();", "", " if (!(showNoConnectionsAtStart)) {", " try {", " ijResult result = ijParser.showConnectionsMethod(true);", " displayResult(out,result,env.getConnection());", " } catch (SQLException ex) {", " handleSQLException(out,ex);", " }", " } ", " }", "", " /**" ], "header": "@@ -272,14 +256,36 @@ public class utilMain implements java.security.PrivilegedAction {", "removed": [ "\t\tJDBCDisplayUtil.showSelectCount = false;", "\t\tconnEnv[0].addSession(conn, (String) null);", "\t/**" ] } ] } ]
derby-DERBY-5343-09ecd71c
DERBY-5343: Upgrade tests failing with java.lang.IllegalAccessException Rework the workaround for DERBY-23 added by DERBY-5316 so that it doesn't attempt to modify final fields. Modifying final fields doesn't seem to work prior to Java 5. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1148302 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5344-8c86e54c
DERBY-1903 convert LobLimits.java test to JUnit Enable updateClob2 test for client and lite configuration. It is still disabled for embedded in LobLimitsTest because of DERBY-5344 (updateClob2 test in LobLimitsTest gets OutOfMemoryError).
[]
derby-DERBY-5346-6708cb9a
DERBY-5346: ij3Test fails on phoneME Make the test use a data source instead of a connection URL on CDC/Foundation Profile. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1148687 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5347-d34c2ceb
DERBY-5347 Derby loops filling logs and consuming all CPU with repeated error: java.net.SocketException: EDC5122I Input/output error. If accept does not block and continues to throw errors, retry and log the exception two more times at 1 second intervals and then if it still fails, shutdown the server.
[ { "file": "java/drda/org/apache/derby/impl/drda/ClientThread.java", "hunks": [ { "added": [ " try { // Check for PrivilegedActionException", " clientSocket =", " acceptClientWithRetry();", " if (clientSocket != null)", " clientSocket.close();" ], "header": "@@ -57,22 +57,15 @@ final class ClientThread extends Thread {", "removed": [ " try{ // Check for PrivilegedActionException", "", " clientSocket = ", " (Socket) AccessController.doPrivileged(", " new PrivilegedExceptionAction() {", " public Object run() throws IOException", " {", " return serverSocket.accept();", " }", " }", " );", " clientSocket.close();" ] }, { "added": [ "", " /**", " * Perform a server socket accept. Allow three attempts with a one second", " * wait between each", " * ", " * @return client socket or null if accept failed.", " * ", " */", " private Socket acceptClientWithRetry() {", " return (Socket) AccessController.doPrivileged(", " new PrivilegedAction() {", " public Object run() {", " for (int trycount = 1; trycount <= 3; trycount++) {", " try {", " // DERBY-5347 Need to exit if", " // accept fails with IOException", " // Cannot just aimlessly loop", " // writing errors", " return serverSocket.accept();", " } catch (IOException acceptE) {", " // If not a normal shutdown,", " // log and shutdown the server", " if (!parent.getShutdown()) {", " parent", " .consoleExceptionPrintTrace(acceptE);", " if (trycount == 3) {", " // give up after three tries", " parent.directShutdownInternal();", " } else {", " // otherwise wait 1 second and retry", " try {", " Thread.sleep(1000);", " } catch (InterruptedException ie) {", " parent", " .consoleExceptionPrintTrace(ie);", " }", " }", " }", " }", " }", " return null; // no socket to return after three tries", " }", " }", "", " );", " }" ], "header": "@@ -149,6 +142,52 @@ final class ClientThread extends Thread {", "removed": [] } ] } ]
derby-DERBY-5357-78c2db1d
DERBY-5357 SQLJ.INSTALL_JAR shouldn't use identifier as file name Since SQL identifiers can contain arbitrary characters, it is not safe to use them as is as part of a file name. Trying to map parts of the name by excluding unsafe characters leads to a chance of name collision. So, we have changed the naming altogether. This patch, derby-5357-with-tests-4, changes the name (and location) of the jar files stored in a database. The name is now based on UUID, and no subdirectories under the directory "jar" are used: all jar-files reside directly in the "jar" database directory, and the name is of the form <Derby uuid string>[.]jar[.]G[0-9]+ where <Derby uuid string> has the form hhhhhhhh-hhhh-hhhh-hhhh-hhhhhhhhhhhh where h is a lower case hex digit, and the suffix ".G[0-9]+" is the version number as before. The format is changed on hard upgrade, cf tests in Changes10_9. Also, dblook has been updated to cater for this change. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1302836 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDescriptorGenerator.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.sanity.SanityManager;" ], "header": "@@ -45,6 +45,7 @@ import org.apache.derby.catalog.UUID;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java", "hunks": [ { "added": [ "import java.io.File;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] }, { "added": [ "import java.util.Map;", "import org.apache.derby.iapi.services.io.FileUtil;", "import org.apache.derby.iapi.store.access.FileResource;", "import org.apache.derby.impl.sql.execute.JarUtil;", "import org.apache.derby.io.StorageFile;" ], "header": "@@ -179,6 +180,11 @@ import java.security.MessageDigest;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [ "import java.io.File;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] }, { "added": [ "import java.util.HashMap;", "import java.util.Iterator;", "import java.util.List;", "import java.util.Map;", "import org.apache.derby.catalog.UUID;", "import org.apache.derby.iapi.services.context.ContextService;", "import org.apache.derby.iapi.services.io.FileUtil;", "import org.apache.derby.iapi.services.monitor.Monitor;" ], "header": "@@ -28,11 +29,18 @@ import java.net.MalformedURLException;", "removed": [ "" ] }, { "added": [ "import org.apache.derby.iapi.store.access.TransactionController;", "import org.apache.derby.io.StorageFile;" ], "header": "@@ -41,8 +49,9 @@ import org.apache.derby.iapi.sql.dictionary.DataDictionary;", "removed": [ "" ] }, { "added": [ " UUID id = Monitor.getMonitor().getUUIDFactory().createUUID();", " final String jarExternalName = JarUtil.mkExternalName(", " id, schemaName, sqlName, fr.getSeparatorChar());", " fid = ddg.newFileInfoDescriptor(id, sd, sqlName, generationId);" ], "header": "@@ -127,12 +136,13 @@ public class JarUtil", "removed": [ " final String jarExternalName = 
JarUtil.mkExternalName(schemaName,", " sqlName, fr.getSeparatorChar());", " fid = ddg.newFileInfoDescriptor(/*DJD*/null, sd, sqlName, generationId);" ] }, { "added": [ " UUID id = fid.getUUID();", " fr.remove(", " JarUtil.mkExternalName(", " id, schemaName, sqlName, fr.getSeparatorChar())," ], "header": "@@ -205,9 +215,11 @@ public class JarUtil", "removed": [ "", "\t\t\tfr.remove(JarUtil.mkExternalName(schemaName, sqlName, fr.getSeparatorChar())," ] }, { "added": [ " JarUtil.mkExternalName(", " fid.getUUID(), schemaName, sqlName, fr.getSeparatorChar());" ], "header": "@@ -276,7 +288,8 @@ public class JarUtil", "removed": [ " JarUtil.mkExternalName(schemaName, sqlName, fr.getSeparatorChar());" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java", "hunks": [ { "added": [], "header": "@@ -72,7 +72,6 @@ import java.util.Date;", "removed": [ "import java.io.FileNotFoundException;" ] }, { "added": [ "import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;", "import org.apache.derby.iapi.sql.dictionary.DataDictionary;" ], "header": "@@ -81,6 +80,8 @@ import java.net.URL;", "removed": [] }, { "added": [ " String [] jarDirContents = privList(jarDir);" ], "header": "@@ -805,7 +806,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " String [] jarSchemaList = privList(jarDir);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/RFResource.java", "hunks": [ { "added": [ " /**", " * @see FileResource#removeJarDir", " */", " public void removeJarDir(String f) throws StandardException {", " if (factory.isReadOnly())", " throw StandardException.newException(SQLState.FILE_READ_ONLY);", "", " ContextManager cm =", " ContextService.getFactory().getCurrentContextManager();", "", " RawTransaction tran =", " factory.getRawStoreFactory().getXactFactory().findUserTransaction(", " factory.getRawStoreFactory(),", " cm,", " AccessFactoryGlobals.USER_TRANS_NAME);", "", " StorageFile ff = 
factory.storageFactory.newStorageFile(f);", " Serviceable s = new RemoveFile(ff);", "", " // Since this code is only used during upgrade to post-10.8 databases", " // we do no bother to build code for a special RemoveDirOperation and", " // do tran.logAndDo (cf. logic in #remove). If the post-commit removal", " // doesn't get completed, that is no big issue, the dirs can be removed", " // by hand if need be. A prudent DBA will rerun the upgrade from a", " // backup if something crashes anyway..", "", " tran.addPostCommitWork(s);", " }", "", " /**" ], "header": "@@ -148,7 +148,36 @@ class RFResource implements FileResource {", "removed": [ "\t/**" ] } ] }, { "file": "java/tools/org/apache/derby/impl/tools/dblook/DB_Jar.java", "hunks": [ { "added": [ "import java.io.FileOutputStream;", "import java.sql.Connection;", "import java.sql.ResultSet;", "import java.sql.SQLException;", "import java.sql.Statement;" ], "header": "@@ -21,20 +21,14 @@", "removed": [ "import java.sql.Connection;", "import java.sql.Statement;", "import java.sql.PreparedStatement;", "import java.sql.ResultSet;", "import java.sql.SQLException;", "", "import java.util.HashMap;", "", "import java.io.FileOutputStream;", "import java.io.FileNotFoundException;", "" ] }, { "added": [ " * @param at10_9 Dictionary is at 10.9 or higher", "\tpublic static void doJars(", " String dbName, Connection conn, boolean at10_9)", " ResultSet rs = stmt.executeQuery(", " \"SELECT FILENAME, SCHEMAID, \" +", " \"GENERATIONID, FILEID FROM SYS.SYSFILES\");", " StringBuffer loadJarString = new StringBuffer();", "", " String jarName = rs.getString(1);", " String schemaId = rs.getString(2);", " String genID = rs.getString(3);", " String UUIDstring = rs.getString(4);", "", " String schemaNameSQL = dblook.lookupSchemaId(schemaId);", "", " if (dblook.isIgnorableSchema(schemaNameSQL))", " continue;", "", " doHeader(firstTime);", "", " if (at10_9) {", " String schemaNameCNF =", " dblook.unExpandDoubleQuotes(", " 
dblook.stripQuotes(dblook.lookupSchemaId(schemaId)));;", "", " StringBuffer jarFullName = new StringBuffer();", " jarFullName.append(UUIDstring);", " jarFullName.append(\".jar.G\");", " jarFullName.append(genID);", "", " StringBuffer oldJarPath = new StringBuffer();", " oldJarPath.append(dbName);", " oldJarPath.append(separator);", " oldJarPath.append(\"jar\");", " oldJarPath.append(separator);", " oldJarPath.append(jarFullName.toString());", "", " // Copy jar file to DBJARS directory.", " String absJarDir = null;", " try {", "", " // Create the DBJARS directory.", " File jarDir = new File(System.getProperty(\"user.dir\") +", " separator + \"DBJARS\");", " absJarDir = jarDir.getAbsolutePath();", " jarDir.mkdirs();", "", " doCopy(oldJarPath.toString(), absJarDir + separator + jarFullName);", " } catch (Exception e) {", " Logs.debug(\"DBLOOK_FailedToLoadJar\",", " absJarDir + separator + jarFullName.toString());", " Logs.debug(e);", " firstTime = false;", " continue;", " }", "", " // Now, add the DDL to read the jar from DBJARS.", " loadJarString.append(\"CALL SQLJ.INSTALL_JAR('file:\");", " loadJarString.append(absJarDir);", " loadJarString.append(separator);", " loadJarString.append(jarFullName);", " loadJarString.append(\"', '\");", " loadJarString.append(", " dblook.addQuotes(", " dblook.expandDoubleQuotes(schemaNameCNF)));", "", " loadJarString.append(\".\");", "", " loadJarString.append(", " dblook.addQuotes(", " dblook.expandDoubleQuotes(jarName)));", "", " } else {", " jarName = dblook.addQuotes(", " dblook.expandDoubleQuotes(jarName));", "", " String schemaWithoutQuotes = dblook.stripQuotes(schemaNameSQL);", " StringBuffer jarFullName = new StringBuffer(separator);", " jarFullName.append(dblook.stripQuotes(jarName));", " jarFullName.append(\".jar.G\");", " jarFullName.append(genID);", "", " StringBuffer oldJarPath = new StringBuffer();", " oldJarPath.append(dbName);", " oldJarPath.append(separator);", " oldJarPath.append(\"jar\");", " 
oldJarPath.append(separator);", " oldJarPath.append(schemaWithoutQuotes);", " oldJarPath.append(jarFullName);", "", " // Copy jar file to DBJARS directory.", " String absJarDir = null;", " try {", "", " // Create the DBJARS directory.", " File jarDir = new File(", " System.getProperty(\"user.dir\") +", " separator + \"DBJARS\" + separator + schemaWithoutQuotes);", " absJarDir = jarDir.getAbsolutePath();", " jarDir.mkdirs();", "", " doCopy(oldJarPath.toString(), absJarDir + jarFullName);", " } catch (Exception e) {", " Logs.debug(\"DBLOOK_FailedToLoadJar\",", " absJarDir + jarFullName.toString());", " Logs.debug(e);", " firstTime = false;", " continue;", " }", "", " // Now, add the DDL to read the jar from DBJARS.", " loadJarString.append(\"CALL SQLJ.INSTALL_JAR('file:\");", " loadJarString.append(absJarDir);", " loadJarString.append(jarFullName);", " loadJarString.append(\"', '\");", " loadJarString.append(schemaNameSQL);", " loadJarString.append(\".\");", " loadJarString.append(jarName);", " }", " ", " loadJarString.append(\"', 0)\");", "", " Logs.writeToNewDDL(loadJarString.toString());", " Logs.writeStmtEndToNewDDL();", " Logs.writeNewlineToNewDDL();", " firstTime = false;" ], "header": "@@ -44,105 +38,145 @@ public class DB_Jar {", "removed": [ "\tpublic static void doJars(String dbName, Connection conn)", "\t\tResultSet rs = stmt.executeQuery(\"SELECT FILENAME, SCHEMAID, \" +", "\t\t\t\"GENERATIONID FROM SYS.SYSFILES\");", "\t\t\tString jarName = dblook.addQuotes(", "\t\t\t\tdblook.expandDoubleQuotes(rs.getString(1)));", "\t\t\tString schemaId = rs.getString(2);", "\t\t\tString schemaName = dblook.lookupSchemaId(schemaId);", "\t\t\tif (dblook.isIgnorableSchema(schemaName))", "\t\t\t\tcontinue;", "", "\t\t\tif (firstTime) {", "\t\t\t\tLogs.reportString(\"----------------------------------------------\");", "\t\t\t\tLogs.reportMessage(\"DBLOOK_JarsHeader\");", "\t\t\t\tLogs.reportMessage(\"DBLOOK_Jar_Note\");", 
"\t\t\t\tLogs.reportString(\"----------------------------------------------\\n\");", "\t\t\t}", "", "\t\t\tString genID = rs.getString(3);", "", "\t\t\tString schemaWithoutQuotes = dblook.stripQuotes(schemaName);", "\t\t\tStringBuffer jarFullName = new StringBuffer(separator);", "\t\t\tjarFullName.append(dblook.stripQuotes(jarName));", "\t\t\tjarFullName.append(\".jar.G\");", "\t\t\tjarFullName.append(genID);", "", "\t\t\tStringBuffer oldJarPath = new StringBuffer();", "\t\t\toldJarPath.append(dbName);", "\t\t\toldJarPath.append(separator);", "\t\t\toldJarPath.append(\"jar\");", "\t\t\toldJarPath.append(separator);", "\t\t\toldJarPath.append(schemaWithoutQuotes);", "\t\t\toldJarPath.append(jarFullName);", "", "\t\t\t// Copy jar file to DBJARS directory.", "\t\t\tString absJarDir = null;", "\t\t\ttry {", "", "\t\t\t\t// Create the DBJARS directory.", "\t\t\t\tFile jarDir = new File(System.getProperty(\"user.dir\") +", "\t\t\t\t\tseparator + \"DBJARS\" + separator + schemaWithoutQuotes);", "\t\t\t\tabsJarDir = jarDir.getAbsolutePath();", "\t\t\t\tjarDir.mkdirs();", "", "\t\t\t\t// Create streams.", "\t\t\t\tFileInputStream oldJarFile =", "\t\t\t\t\tnew FileInputStream(oldJarPath.toString());", "\t\t\t\tFileOutputStream newJarFile =", "\t\t\t\t\tnew FileOutputStream(absJarDir + jarFullName);", "", "\t\t\t\t// Copy.", "\t\t\t\tint st = 0;", "\t\t\t\twhile (true) {", "\t\t\t\t\tif (oldJarFile.available() == 0)", "\t\t\t\t\t\tbreak;", "\t\t\t\t\tbyte[] bAr = new byte[oldJarFile.available()];", "\t\t\t\t\toldJarFile.read(bAr);", "\t\t\t\t\tnewJarFile.write(bAr);", "\t\t\t\t}", "", "\t\t\t\tnewJarFile.close();", "\t\t\t\toldJarFile.close();", "", "\t\t\t} catch (Exception e) {", "\t\t\t\tLogs.debug(\"DBLOOK_FailedToLoadJar\",", "\t\t\t\t\tabsJarDir + jarFullName.toString());", "\t\t\t\tLogs.debug(e);", "\t\t\t\tfirstTime = false;", "\t\t\t\tcontinue;", "\t\t\t}", "", "\t\t\t// Now, add the DDL to read the jar from DBJARS.", "\t\t\tStringBuffer loadJarString = new 
StringBuffer();", "\t\t\tloadJarString.append(\"CALL SQLJ.INSTALL_JAR('file:\");", "\t\t\tloadJarString.append(absJarDir);", "\t\t\tloadJarString.append(jarFullName);", "\t\t\tloadJarString.append(\"', '\");", "\t\t\tloadJarString.append(schemaName);", "\t\t\tloadJarString.append(\".\");", "\t\t\tloadJarString.append(jarName);", "\t\t\tloadJarString.append(\"', 0)\");", "", "\t\t\tLogs.writeToNewDDL(loadJarString.toString());", "\t\t\tLogs.writeStmtEndToNewDDL();", "\t\t\tLogs.writeNewlineToNewDDL();", "\t\t\tfirstTime = false;", "" ] } ] }, { "file": "java/tools/org/apache/derby/tools/dblook.java", "hunks": [ { "added": [ " boolean at10_9 = atVersion( conn, 10, 9 );" ], "header": "@@ -518,6 +518,7 @@ public final class dblook {", "removed": [] }, { "added": [ " DB_Jar.doJars(sourceDBName, this.conn, at10_9);" ], "header": "@@ -530,7 +531,7 @@ public final class dblook {", "removed": [ "\t\t\t\tDB_Jar.doJars(sourceDBName, this.conn);" ] }, { "added": [ " /**", " * inverse of expandDoubleQuotes", " */", " public static String unExpandDoubleQuotes(String name) {", "", " if ((name == null) || (name.indexOf(\"\\\"\") < 0))", " // nothing to do.", " return name;", "", " char [] cA = name.toCharArray();", "", " char [] result = new char[cA.length];", "", " int j = 0;", " for (int i = 0; i < cA.length; i++) {", "", " if (cA[i] == '\"') {", " result[j++] = cA[i];", " j++; // skip next char which must be \" also", " }", " else", " result[j++] = cA[i];", "", " }", "", " return new String(result, 0, j);", "", " }", "", "" ], "header": "@@ -1014,6 +1015,36 @@ public final class dblook {", "removed": [] } ] } ]
derby-DERBY-5358-5355fd27
DERBY-5358: SYSCS_COMPRESS_TABLE failed with conglomerate not found exception Make TableDescriptor.heapConglomNumber volatile to ensure that getHeapConglomerateId() never sees a partly initialized value. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1354015 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/dictionary/TableDescriptor.java", "hunks": [ { "added": [ "", " /**", " * <p>", " * The id of the heap conglomerate for the table described by this", " * instance. The value -1 means it's uninitialized, in which case it", " * will be initialized lazily when {@link #getHeapConglomerateId()} is", " * called.", " * </p>", " *", " * <p>", " * It is declared volatile to ensure that concurrent callers of", " * {@code getHeapConglomerateId()} while {@code heapConglomNumber} is", " * uninitialized, will either see the value -1 or the fully initialized", " * conglomerate number, and never see a partially initialized value", " * (as was the case in DERBY-5358 because reads/writes of a long field are", " * not guaranteed to be atomic unless the field is declared volatile).", " * </p>", " */", " private volatile long heapConglomNumber = -1;", "" ], "header": "@@ -142,7 +142,26 @@ public class TableDescriptor extends TupleDescriptor", "removed": [ "\tlong\t\t\t\t\t\t\theapConglomNumber = -1;" ] }, { "added": [], "header": "@@ -336,8 +355,6 @@ public class TableDescriptor extends TupleDescriptor", "removed": [ "\t\tDataDictionary dd = getDataDictionary();", "" ] } ] } ]
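The DERBY-5358 record above hinges on a subtle Java memory-model point: reads and writes of a non-volatile `long` are not guaranteed to be atomic, so a concurrent reader can observe a torn 64-bit value. A minimal sketch of the lazy-initialization pattern the patch hardens (class and method names here are hypothetical stand-ins, not Derby's actual `TableDescriptor` internals):

```java
// Sketch of the volatile-long lazy-init pattern from the DERBY-5358 fix.
// JLS §17.7 permits non-volatile 64-bit fields to be read/written in two
// 32-bit halves; declaring the field volatile rules out torn reads.
class ConglomerateIdHolder {
    // -1 means "not yet looked up"; volatile guarantees a concurrent
    // reader sees either -1 or the fully written value, never half of it.
    private volatile long heapConglomNumber = -1;

    long getHeapConglomerateId() {
        long result = heapConglomNumber;
        if (result == -1) {
            result = lookUpConglomerateId(); // stand-in for the catalog scan
            heapConglomNumber = result;      // single atomic publish
        }
        return result;
    }

    private long lookUpConglomerateId() {
        return 4242L; // dummy value for the sketch
    }
}
```

Note the read into a local first: even if two threads race past the `-1` check, each publishes a complete value, so callers never see a partially initialized conglomerate number.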
derby-DERBY-5363-8bdf7afe
DERBY-5363 Tighten default permissions of DB files with >= JDK6 Patch derby-5363-followup, which adds a missing AccessController block around setting the system property SERVER_STARTED_FROM_CMD_LINE. Without the patch, this would fail if running with a security manager specified on the command line. If the property permission is missing, the error is printed unconditionally and main exits with status 1. Cf. DERBY-5413 which tried another (aborted) approach to make sure it got printed. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1177718 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/drda/NetworkServerControl.java", "hunks": [ { "added": [ "import java.security.AccessController;", "import java.security.PrivilegedExceptionAction;" ], "header": "@@ -25,6 +25,8 @@ import java.io.PrintWriter;", "removed": [] }, { "added": [ " try {", " AccessController.doPrivileged(new PrivilegedExceptionAction() {", " public Object run() throws Exception {", " System.setProperty(", " Property.SERVER_STARTED_FROM_CMD_LINE,", " \"true\");", " return null;", " }});", " } catch (Exception e) {", " server.consoleExceptionPrintTrace(e);", " System.exit(1);", " }" ], "header": "@@ -303,8 +305,18 @@ public class NetworkServerControl{", "removed": [ " System.setProperty(Property.SERVER_STARTED_FROM_CMD_LINE,", " \"true\");" ] } ] } ]
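The diff above wraps `System.setProperty` in a privileged block so the write succeeds when the security policy grants the `PropertyPermission` to Derby's code base but not to code further up the call stack. A self-contained sketch of that pattern (the property name below is a stand-in, not the real `Property.SERVER_STARTED_FROM_CMD_LINE` constant, and the exception handling is simplified relative to the patch's `consoleExceptionPrintTrace`/`exit` path):

```java
import java.security.AccessController;
import java.security.PrivilegedExceptionAction;

// Sketch of the AccessController.doPrivileged pattern from the patch:
// the action runs with the permissions of this class's protection domain
// alone, so a less-privileged caller on the stack no longer vetoes it.
class PrivilegedPropertySet {
    static final String PROP = "example.serverStartedFromCmdLine"; // hypothetical name

    static void markStartedFromCmdLine() {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                public Void run() {
                    System.setProperty(PROP, "true");
                    return null;
                }
            });
        } catch (Exception e) {
            // the patch prints the trace and calls System.exit(1) here
            throw new RuntimeException(e);
        }
    }
}
```

Without a security manager installed the `doPrivileged` wrapper is a no-op and the action simply runs; with one, only the wrapped frames are consulted for the permission check.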
derby-DERBY-5363-a6026ca8
DERBY-5363 Tighten permissions of DB files to owner with >= JDK7 Patch derby-5363-followup-linux. RestrictiveFilePermissionsTest for this feature broke on some platforms (thanks to Kathey for noticing). Apparently, the ACL view of Posix file system permissions is not available for all Unix/Linux versions in JDK 1.7 (I had tested on Solaris 11 and Windows). The changes in the test now fall back on using PosixFileAttributeView#readAttributes if the ACL view is not available. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1179042 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5363-a65a5980
DERBY-5363 Tighten permissions of DB files to owner with >= JDK7 Patch derby-5363-limit-to-java7b, which further limits the default restrictive permissions for the network server: they now apply only on Java 7 or higher. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1179320 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/drda/NetworkServerControl.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.info.JVMInfo;" ], "header": "@@ -29,6 +29,7 @@ import java.security.AccessController;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/services/io/FileUtil.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.info.JVMInfo;" ], "header": "@@ -41,6 +41,7 @@ import java.util.ArrayList;", "removed": [] }, { "added": [ " // running with the network server started from the command line", " // *and* at Java 7 or above", " if (JVMInfo.JDK_ID >= JVMInfo.J2SE_17 && ", " (PropertyUtil.getSystemBoolean(", " Property.SERVER_STARTED_FROM_CMD_LINE, false)) ) {", " // proceed", " } else {" ], "header": "@@ -654,9 +655,13 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [ " // running with the network server started from the command line.", " if ( !PropertyUtil.getSystemBoolean(", " Property.SERVER_STARTED_FROM_CMD_LINE, false)) {" ] } ] } ]
derby-DERBY-5363-dc43cf84
DERBY-5363 Tighten default permissions of DB files with >= JDK6 Patch derby-5363-full-5 implements the ability to restrict the file permissions of newly created directories and files beyond the default access (cf. umask on Posix file systems and similar on NTFS), i.e. to the account creating the file. This behavior is controlled by a property, "derby.storage.useDefaultFilePermissions", cf. the release notes attached to the issue. By default the property is true, i.e. it gives the existing (lax) behavior on embedded and with the network server if started via the API. If the server is started from the command line, the new restrictive permissions apply by default. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1176591 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/impl/drda/DssTrace.java", "hunks": [ { "added": [ "import java.security.AccessControlException;", "import org.apache.derby.iapi.services.io.FileUtil;" ], "header": "@@ -24,10 +24,12 @@ import java.io.File;", "removed": [] }, { "added": [ " comBufferWriter =", " ((PrintWriter)AccessController.doPrivileged(", " public Object run()", " throws SecurityException, IOException {", " File f = new File(fileName);", " boolean exists = f.exists();", " PrintWriter pw =", " new PrintWriter(", " new java.io.BufferedWriter(", " new java.io.FileWriter(", " fileName),", " 4096));", " if (!exists) {", " FileUtil.limitAccessToOwner(f);", " }", " return pw;" ], "header": "@@ -182,10 +184,23 @@ public class DssTrace", "removed": [ " comBufferWriter = ((PrintWriter)AccessController.doPrivileged(", " public Object run() throws SecurityException, IOException {", " return new PrintWriter (new java.io.BufferedWriter (new java.io.FileWriter (fileName), 4096));" ] } ] }, { "file": "java/engine/org/apache/derby/iapi/services/info/JVMInfo.java", "hunks": [ { "added": [ " public static final int J2SE_17 = 8; // Java SE 7" ], "header": "@@ -56,6 +56,7 @@ public abstract class JVMInfo", "removed": [] }, { "added": [ " else if (javaVersion.equals(\"1.7\"))", " {", " id = J2SE_17;", " }" ], "header": "@@ -131,6 +132,10 @@ public abstract class JVMInfo", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/services/io/FileUtil.java", "hunks": [ { "added": [ "import java.io.File;", "import java.io.FileInputStream;", "import java.io.FileOutputStream;", "import java.io.IOException;", "import java.io.InputStream;", "import java.io.OutputStream;", "import java.lang.reflect.Array;", "import java.lang.reflect.InvocationTargetException;", "import java.lang.reflect.Method;", "import java.lang.reflect.Field;", "import java.util.ArrayList;", "import java.util.Iterator;", "import java.util.List;", "import org.apache.derby.iapi.reference.Property;", "import 
org.apache.derby.iapi.services.property.PropertyUtil;", "import org.apache.derby.shared.common.sanity.SanityManager;" ], "header": "@@ -21,13 +21,28 @@", "removed": [ "import java.io.*;" ] }, { "added": [ " limitAccessToOwner(to);", "" ], "header": "@@ -129,6 +144,8 @@ public abstract class FileUtil {", "removed": [] }, { "added": [], "header": "@@ -169,10 +186,6 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [ "\tpublic static boolean copyFile(File from, File to)", "\t{", "\t\treturn copyFile(from, to, (byte[])null);", "\t}" ] }, { "added": [ " limitAccessToOwner(to);" ], "header": "@@ -187,6 +200,7 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] }, { "added": [], "header": "@@ -221,13 +235,6 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [ " public static boolean copyDirectory( StorageFactory storageFactory,", " StorageFile from,", " File to)", " {", " return copyDirectory( storageFactory, from, to, null, null, true);", " }", " " ] }, { "added": [ " limitAccessToOwner(to);", "" ], "header": "@@ -254,6 +261,8 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] }, { "added": [ " limitAccessToOwner(to);" ], "header": "@@ -315,6 +324,7 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] }, { "added": [ " to.limitAccessToOwner();", "", " String[] list = from.list();" ], "header": "@@ -383,7 +393,9 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [ "\t\tString[] list = from.list();" ] } ] }, { "file": "java/engine/org/apache/derby/impl/io/DirFile.java", "hunks": [ { "added": [ "import java.security.AccessControlException;", "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.iapi.services.io.FileUtil;", "import org.apache.derby.shared.common.reference.SQLState;" ], "header": "@@ -36,6 +36,10 @@ import java.io.FileNotFoundException;", "removed": [] }, { "added": [], "header": "@@ -90,20 +94,6 @@ class DirFile extends File 
implements StorageFile", "removed": [ " /**", " * Get the name of the directory of temporary files.", " *", " * @return The abstract name of the temp directory;", " */", " static StorageFile getTempDir() throws IOException", " {", " File temp = File.createTempFile(\"derby\", \"tmp\");", " StorageFile parent = new DirFile( temp.getParent());", " temp.delete();", "", "\t\treturn parent;", "\t} // End of getTempDir", "" ] }, { "added": [ " boolean exists = exists();", " OutputStream result = new FileOutputStream(this);", "", " if (!exists) {", " FileUtil.limitAccessToOwner(this);", " }", "", " return result;" ], "header": "@@ -115,7 +105,14 @@ class DirFile extends File implements StorageFile", "removed": [ " return new FileOutputStream( (File) this);" ] }, { "added": [ " boolean exists = exists();", " OutputStream result = new FileOutputStream( getPath(), append);", "", " if (!exists) {", " FileUtil.limitAccessToOwner(this);", " }", "", " return result;" ], "header": "@@ -133,7 +130,14 @@ class DirFile extends File implements StorageFile", "removed": [ " return new FileOutputStream( getPath(), append);" ] }, { "added": [ " public synchronized int getExclusiveFileLock() throws StandardException" ], "header": "@@ -157,7 +161,7 @@ class DirFile extends File implements StorageFile", "removed": [ " public synchronized int getExclusiveFileLock()" ] }, { "added": [ " limitAccessToOwner();" ], "header": "@@ -167,6 +171,7 @@ class DirFile extends File implements StorageFile", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/io/DirFile4.java", "hunks": [ { "added": [ "import java.security.AccessControlException;", "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.shared.common.reference.SQLState;" ], "header": "@@ -34,6 +34,9 @@ import java.io.RandomAccessFile;", "removed": [] }, { "added": [ " public synchronized int getExclusiveFileLock() throws StandardException" ], "header": "@@ -115,7 +118,7 @@ class DirFile4 extends 
DirFile", "removed": [ " public synchronized int getExclusiveFileLock()" ] }, { "added": [ " limitAccessToOwner(); // tamper-proof.." ], "header": "@@ -152,6 +155,7 @@ class DirFile4 extends DirFile", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/io/InputStreamFile.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -34,6 +34,7 @@ import java.io.IOException;", "removed": [] }, { "added": [ " public int getExclusiveFileLock() throws StandardException" ], "header": "@@ -369,7 +370,7 @@ abstract class InputStreamFile implements StorageFile", "removed": [ " public int getExclusiveFileLock()" ] } ] }, { "file": "java/engine/org/apache/derby/impl/io/vfmem/VirtualFile.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -25,6 +25,7 @@ import java.io.FileNotFoundException;", "removed": [] }, { "added": [ " public int getExclusiveFileLock() throws StandardException {" ], "header": "@@ -319,7 +320,7 @@ public class VirtualFile", "removed": [ " public int getExclusiveFileLock() {" ] } ] }, { "file": "java/engine/org/apache/derby/impl/load/ExportWriteData.java", "hunks": [ { "added": [ " File outputFile = new File(outputFileName);", " FileUtil.limitAccessToOwner(outputFile);", "" ], "header": "@@ -105,7 +105,10 @@ final class ExportWriteData extends ExportWriteDataAbstract", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/services/monitor/FileMonitor.java", "hunks": [ { "added": [ " boolean created = false;" ], "header": "@@ -136,6 +136,7 @@ public final class FileMonitor extends BaseMonitor implements java.security.Priv", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/services/monitor/StorageFactoryService.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.io.FileUtil;" ], "header": "@@ -56,6 +56,7 @@ import java.security.AccessController;", "removed": [] }, { "added": [ "", " boolean created = 
rootDir.mkdirs();", " if (created) {", " rootDir.limitAccessToOwner();", " }" ], "header": "@@ -91,10 +92,14 @@ final class StorageFactoryService implements PersistentService", "removed": [ " rootDir.mkdirs();" ] }, { "added": [ " FileUtil.limitAccessToOwner(servicePropertiesFile);", "" ], "header": "@@ -422,6 +427,8 @@ final class StorageFactoryService implements PersistentService", "removed": [] }, { "added": [], "header": "@@ -430,7 +437,6 @@ final class StorageFactoryService implements PersistentService", "removed": [ "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/services/stream/SingleStream.java", "hunks": [ { "added": [], "header": "@@ -25,9 +25,7 @@ import org.apache.derby.iapi.services.stream.InfoStreams;", "removed": [ "import org.apache.derby.iapi.services.sanity.SanityManager;", "import org.apache.derby.iapi.services.monitor.ModuleSupportable;" ] }, { "added": [ "import org.apache.derby.iapi.services.i18n.MessageService;", "import org.apache.derby.iapi.services.io.FileUtil;", "import org.apache.derby.shared.common.reference.MessageId;" ], "header": "@@ -47,6 +45,9 @@ import java.lang.reflect.Field;", "removed": [] }, { "added": [ " FileUtil.limitAccessToOwner(streamFile);" ], "header": "@@ -195,6 +196,7 @@ implements InfoStreams, ModuleControl, java.security.PrivilegedAction", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java", "hunks": [ { "added": [], "header": "@@ -57,11 +57,8 @@ import org.apache.derby.io.StorageFactory;", "removed": [ "import org.apache.derby.catalog.UUID;", "import org.apache.derby.iapi.services.property.PropertyUtil;", "import org.apache.derby.iapi.util.StringUtil;" ] }, { "added": [ "import java.io.IOException;" ], "header": "@@ -69,17 +66,14 @@ import org.apache.derby.iapi.reference.Property;", "removed": [ "import java.security.PrivilegedExceptionAction;", "import java.io.FileOutputStream;", "import java.io.FileInputStream;", "import java.io.IOException;" ] }, { "added": [ " 
throws StandardException" ], "header": "@@ -2200,6 +2194,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ " throws StandardException" ], "header": "@@ -2426,6 +2421,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ " catch( PrivilegedActionException pae) {", " throw (StandardException)pae.getCause();", " }" ], "header": "@@ -2439,7 +2435,9 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " catch( PrivilegedActionException pae) { return false;} // does not throw an exception" ] }, { "added": [ " throws StandardException" ], "header": "@@ -2494,6 +2492,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ " catch( PrivilegedActionException pae) {", " throw (StandardException)pae.getCause();", " }" ], "header": "@@ -2504,7 +2503,9 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " catch( PrivilegedActionException pae) { return false;} // does not throw an exception" ] }, { "added": [ " catch( PrivilegedActionException pae) {" ], "header": "@@ -2576,7 +2577,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " catch( PrivilegedActionException pae) { " ] }, { "added": [ " catch( PrivilegedActionException pae) {" ], "header": "@@ -2600,7 +2601,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " catch( PrivilegedActionException pae) { " ] }, { "added": [ " public final Object run() throws IOException, StandardException" ], "header": "@@ -2616,7 +2617,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " public final Object run() throws IOException" ] } ] }, { "file": 
"java/engine/org/apache/derby/impl/store/raw/data/BaseDataFileFactory.java", "hunks": [ { "added": [ "import java.io.FileNotFoundException;", "import java.security.AccessControlException;" ], "header": "@@ -82,10 +82,12 @@ import java.util.Hashtable;", "removed": [] }, { "added": [ " fileLock.limitAccessToOwner();" ], "header": "@@ -1916,6 +1918,7 @@ public class BaseDataFileFactory", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java", "hunks": [ { "added": [ "", " directory.limitAccessToOwner();" ], "header": "@@ -780,6 +780,8 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction", "removed": [] }, { "added": [ " FileUtil.limitAccessToOwner(backupFile);" ], "header": "@@ -1092,6 +1094,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction", "removed": [] }, { "added": [ " file.limitAccessToOwner();" ], "header": "@@ -1338,6 +1341,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction", "removed": [] }, { "added": [ " stub.limitAccessToOwner();" ], "header": "@@ -1576,6 +1580,7 @@ class RAFContainer extends FileContainer implements PrivilegedExceptionAction", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/RFResource.java", "hunks": [ { "added": [ "import java.io.FileNotFoundException;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] }, { "added": [ " StorageFile parentDir = directory.getParentDir();", " boolean pdExisted = parentDir.exists();", "" ], "header": "@@ -89,6 +90,9 @@ class RFResource implements FileResource {", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/log/LogToFile.java", "hunks": [ { "added": [ " long instant = LogCounter.makeLogInstantAsLong(filenumber,", " LOG_FILE_HEADER_SIZE);", " return getLogFileAtPosition(instant);", " }", " /**", " Get a read-only handle to the log file positioned at the stated position", " <P> MT- read only", " @return null if 
file does not exist or of the wrong format", " @exception IOException cannot access the log at the new position.", " @exception StandardException Standard Derby error policy", " */", " protected StorageRandomAccessFile getLogFileAtPosition(long logInstant)", " throws IOException, StandardException" ], "header": "@@ -2998,23 +2998,23 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ "\t\tlong instant = LogCounter.makeLogInstantAsLong(filenumber,", "\t\t\t\t\t\t\t\t\t\t\t\t\t LOG_FILE_HEADER_SIZE);", "\t\treturn getLogFileAtPosition(instant);", "\t}", "\t/**", "\t\tGet a read-only handle to the log file positioned at the stated position", "\t\t<P> MT- read only", "\t\t@return null if file does not exist or of the wrong format", "\t\t@exception IOException cannot access the log at the new position.", "\t\t@exception StandardException Standard Derby error policy", "\t*/", "\tprotected StorageRandomAccessFile getLogFileAtPosition(long logInstant)", "\t\t throws IOException, StandardException" ] }, { "added": [ " throws StandardException" ], "header": "@@ -5697,6 +5697,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [] }, { "added": [ " if (pae.getCause() instanceof StandardException) {", " throw (StandardException)pae.getCause();", " }", "", " }" ], "header": "@@ -5707,8 +5708,12 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ " }\t" ] }, { "added": [ " public final Object run() throws IOException, StandardException {" ], "header": "@@ -5757,7 +5762,7 @@ public final class LogToFile implements LogFactory, ModuleControl, ModuleSupport", "removed": [ "\tpublic final Object run() throws IOException {" ] } ] }, { "file": "java/engine/org/apache/derby/io/StorageFile.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -27,6 +27,7 @@ import java.io.FileNotFoundException;", 
"removed": [] }, { "added": [ " public int getExclusiveFileLock() throws StandardException;" ], "header": "@@ -253,7 +254,7 @@ public interface StorageFile", "removed": [ " public int getExclusiveFileLock();" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/functionTests/util/corruptio/CorruptFile.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -25,13 +25,11 @@ import org.apache.derby.io.StorageRandomAccessFile;", "removed": [ "import java.io.FileOutputStream;", "import java.io.FileInputStream;", "import java.io.RandomAccessFile;" ] }, { "added": [ " public synchronized int getExclusiveFileLock() throws StandardException" ], "header": "@@ -322,7 +320,7 @@ class CorruptFile implements StorageFile {", "removed": [ " public synchronized int getExclusiveFileLock()" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java", "hunks": [ { "added": [ " * @param dbName We sometimes need to know outside to be able to pass it on" ], "header": "@@ -710,7 +710,7 @@ public final class TestConfiguration {", "removed": [ " * @param dbName We sometimes need to know outside to be able topass it on" ] }, { "added": [ " this.hostName = DEFAULT_HOSTNAME;" ], "header": "@@ -1046,7 +1046,7 @@ public final class TestConfiguration {", "removed": [ " this.hostName = null;" ] } ] } ]
derby-DERBY-5363-ff249585
DERBY-5363 Tighten permissions of DB files to owner with >= JDK7 Patch derby-5363-followup-unix. It turns out there is no guarantee that the underlying file system supports ACLs even though Files#getFileAttributeView called with aclFileAttributeViewClz.class as an argument returns an object. We also need to call the method FileStore#supportsFileAttributeView(AclFileAttributeView.class) to ascertain whether we have support for ACLs. To get at the current FileStore, we need to look it up for a given path: Files.getFileStore(<path>), which requires the RuntimePermission "getFileStoreAttributes", hence the current patch's changes to the policy files. With the patch, RestrictiveFilePermissionsTest runs OK on Solaris/UFS. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1180713 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/services/io/FileUtil.java", "hunks": [ { "added": [ " private static Class fileStoreClz;", " private static Method supportsFileAttributeView;", " private static Method getFileStore;" ], "header": "@@ -611,9 +611,12 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] }, { "added": [ " fileStoreClz = Class.forName(", " \"java.nio.file.FileStore\");", " get = pathsClz.getMethod(", " \"get\",", " new Class[]{String.class, stringArrayClz});", " getFileAttributeView = filesClz.getMethod(", " \"getFileAttributeView\",", " new Class[]{pathClz, Class.class, linkOptionArrayClz});", " supportsFileAttributeView = fileStoreClz.getMethod(", " \"supportsFileAttributeView\",", " new Class[]{Class.class});", " getFileStore = filesClz.getMethod(\"getFileStore\",", " new Class[]{pathClz});" ], "header": "@@ -712,17 +715,19 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [ "", " get = pathsClz.", " getMethod(\"get\",", " new Class[]{String.class, stringArrayClz});", "", " getFileAttributeView = filesClz.", " getMethod(\"getFileAttributeView\",", " new Class[]{pathClz,", " Class.class,", " linkOptionArrayClz});", "" ] }, { "added": [ " e.printStackTrace();" ], "header": "@@ -747,6 +752,7 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] }, { "added": [ " // ACLs supported on this platform, now check the current file", " // system:", " Object fileStore = getFileStore.invoke(", " null,", " new Object[]{fileP});", "", " boolean supported =", " ((Boolean)supportsFileAttributeView.invoke(", " fileStore,", " new Object[]{aclFileAttributeViewClz})).booleanValue();", "", " if (!supported) {", " return false;", " }", "", "" ], "header": "@@ -869,6 +875,22 @@ nextFile:\tfor (int i = 0; i < list.length; i++) {", "removed": [] } ] } ]
derby-DERBY-5367-9a3cbedf
DERBY-5367: Stale data retrieved when using new collation=TERRITORY_BASED:PRIMARY feature Deoptimize code path for BTree insert when updating columns with a collation different from UCS BASIC. Simply undeleting the existing matching row (marked as deleted) may be incorrect, because the value stored there can be different from the key value used for lookup due to the collation. Added code to track whether a conglomerate contains a collated column or not, such that the right insert code path can be chosen. The array of collation ids is scanned when a conglomerate is created, and when a conglomerate is "restored" from disk (i.e. readExternal). Added a test for the new code path (based on the issue report). Patch file: derby-5367-4c-fix_with_optimization_improved.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1174436 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/access/btree/BTree.java", "hunks": [ { "added": [], "header": "@@ -57,8 +57,6 @@ import java.io.ObjectInput;", "removed": [ "import org.apache.derby.iapi.services.io.ArrayUtil;", "" ] }, { "added": [ " /**", " * Tells if there is at least one column in the conglomerate whose collation", " * isn't StringDataValue.COLLATION_TYPE_UCS_BASIC.", " */", " protected boolean hasCollatedTypes;" ], "header": "@@ -192,6 +190,11 @@ public abstract class BTree extends GenericConglomerate", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/btree/BTreeController.java", "hunks": [ { "added": [], "header": "@@ -51,7 +51,6 @@ import org.apache.derby.iapi.types.RowLocation;", "removed": [ "import org.apache.derby.impl.store.access.conglomerate.TemplateRow;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/btree/OpenBTree.java", "hunks": [ { "added": [], "header": "@@ -34,7 +34,6 @@ import org.apache.derby.iapi.store.access.TransactionController;", "removed": [ "import org.apache.derby.iapi.store.raw.RecordHandle;" ] }, { "added": [], "header": "@@ -91,7 +90,6 @@ public class OpenBTree", "removed": [ " private DynamicCompiledOpenConglomInfo init_dynamic_info;" ] }, { "added": [], "header": "@@ -414,8 +412,6 @@ public class OpenBTree", "removed": [ " init_dynamic_info = dynamic_info;", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/btree/index/B2I.java", "hunks": [ { "added": [], "header": "@@ -29,12 +29,9 @@ import java.util.Properties;", "removed": [ "import org.apache.derby.iapi.services.io.FormatableBitSet;", "import org.apache.derby.iapi.store.raw.Page;", "import org.apache.derby.impl.store.access.btree.ControlRow;" ] }, { "added": [], "header": "@@ -66,7 +63,6 @@ import org.apache.derby.impl.store.access.btree.OpenBTree;", "removed": [ "import org.apache.derby.iapi.services.io.CompressedNumber;" ] }, { "added": [ " hasCollatedTypes = 
hasCollatedColumns(collation_ids);" ], "header": "@@ -596,6 +592,7 @@ public class B2I extends BTree", "removed": [] }, { "added": [ " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(!hasCollatedTypes);", " }" ], "header": "@@ -1156,6 +1153,9 @@ public class B2I extends BTree", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/conglomerate/ConglomerateUtil.java", "hunks": [ { "added": [], "header": "@@ -28,7 +28,6 @@ import org.apache.derby.iapi.services.io.CompressedNumber;", "removed": [ "import org.apache.derby.iapi.store.access.ColumnOrdering;" ] }, { "added": [ " * @return {@code true} if at least one column has a different collation", " * than UCS BASIC, {@code false} otherwise.", " public static boolean readCollationIdArray(" ], "header": "@@ -310,9 +309,11 @@ public final class ConglomerateUtil", "removed": [ " public static void readCollationIdArray(" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/conglomerate/GenericConglomerate.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.types.StringDataValue;" ], "header": "@@ -30,8 +30,8 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [ "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/conglomerate/OpenConglomerateScratchSpace.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.sanity.SanityManager;", "" ], "header": "@@ -32,6 +32,8 @@ import org.apache.derby.iapi.types.DataValueDescriptor;", "removed": [] }, { "added": [ " private final int[] format_ids;", " private final int[] collation_ids;", " /**", " * Tells if there is at least one type in the conglomerate whose collation", " * isn't StringDataValue.COLLATION_TYPE_UCS_BASIC. 
This can be determined", " * by looking at the collation ids, but now the caller is passing in the", " * value to avoid having to look at all the collation ids multiple times.", " */", " private final boolean hasCollatedTypes;" ], "header": "@@ -74,8 +76,15 @@ public class OpenConglomerateScratchSpace", "removed": [ " private int[] format_ids;", " private int[] collation_ids;" ] }, { "added": [ " * @param hasCollatedTypes whether there is at least one collated type with", " * a collation other than UCS BASIC in the conglomerate", " int[] collation_ids,", " boolean hasCollatedTypes)", " this.hasCollatedTypes = hasCollatedTypes;", " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(GenericConglomerate.hasCollatedColumns(", " collation_ids) == hasCollatedTypes);", " }" ], "header": "@@ -96,13 +105,21 @@ public class OpenConglomerateScratchSpace", "removed": [ " int[] collation_ids)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/access/heap/Heap.java", "hunks": [ { "added": [], "header": "@@ -42,7 +42,6 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [ "import org.apache.derby.iapi.store.access.conglomerate.TransactionManager;" ] }, { "added": [ " /**", " * Tells if there is at least one column in the conglomerate whose collation", " * isn't StringDataValue.COLLATION_TYPE_UCS_BASIC.", " */", " private boolean hasCollatedTypes;" ], "header": "@@ -190,6 +189,11 @@ public class Heap", "removed": [] }, { "added": [ " hasCollatedTypes = hasCollatedColumns(collation_ids);" ], "header": "@@ -305,6 +309,7 @@ public class Heap", "removed": [] }, { "added": [ " return(new OpenConglomerateScratchSpace(", " format_ids, collation_ids, hasCollatedTypes));" ], "header": "@@ -580,7 +585,8 @@ public class Heap", "removed": [ " return(new OpenConglomerateScratchSpace(format_ids, collation_ids));" ] }, { "added": [ " if (SanityManager.DEBUG) {", " SanityManager.ASSERT(!hasCollatedTypes);", " }" ], "header": "@@ -1202,6 +1208,9 @@ public class Heap", 
"removed": [] } ] } ]
derby-DERBY-5369-0d2a54f3
DERBY-5369: Check in Brett Bergquist's patch to add != to the list of operators supported by restricted table functions. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1160858 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-537-3cfc8572
DERBY-537 (partial) Change the DatabaseClassLoadingTest to install and replace the jar files using the SupportFilesSetup decorator so that the engine will have the correct permissions to read the jar files. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@473780 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-537-609999f7
DERBY-537 (partial) Remove some dead code for handling jar files stored within the database. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@474376 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/store/access/FileResource.java", "hunks": [ { "added": [], "header": "@@ -22,7 +22,6 @@", "removed": [ "import org.apache.derby.iapi.store.access.DatabaseInstant;" ] }, { "added": [ "\tpublic void remove(String name, long currentGenerationId)" ], "header": "@@ -84,13 +83,10 @@ public interface FileResource {", "removed": [ "\t @param purgeOnCommit true means purge the fileResource ", "\t when the current transaction commits. false means retain", "\t the file resource for use by replication. ", "\tpublic void remove(String name, long currentGenerationId, boolean purgeOnCommit)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/AddJarConstantAction.java", "hunks": [ { "added": [], "header": "@@ -33,8 +33,6 @@ import org.apache.derby.catalog.UUID;", "removed": [ "", "\tprivate final UUID id;" ] }, { "added": [ "\tAddJarConstantAction(" ], "header": "@@ -53,12 +51,11 @@ class AddJarConstantAction extends DDLConstantAction", "removed": [ "\tAddJarConstantAction(UUID id,", "\t\tthis.id = id;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/DropJarConstantAction.java", "hunks": [ { "added": [], "header": "@@ -35,8 +35,6 @@ import org.apache.derby.catalog.UUID;", "removed": [ "", "\tprivate final UUID id;" ] }, { "added": [ "\tDropJarConstantAction(String schemaName," ], "header": "@@ -49,15 +47,12 @@ class DropJarConstantAction extends DDLConstantAction", "removed": [ "\t *\t@param\tid\t\t\t\t\tThe id for the jar file", "\tDropJarConstantAction(UUID id,", "\t\t\t\t\t\t\t\t String schemaName,", "\t\tthis.id = id;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/JarDDL.java", "hunks": [ { "added": [ "\t\t\tcaf.getAddJarConstantAction(schemaName,sqlName,externalPath);" ], "header": "@@ -43,7 +43,7 @@ public class JarDDL", "removed": [ "\t\t\tcaf.getAddJarConstantAction(null,schemaName,sqlName,externalPath);" ] }, { "added": [ 
"\t\t\tcaf.getDropJarConstantAction(schemaName,sqlName);" ], "header": "@@ -61,7 +61,7 @@ public class JarDDL", "removed": [ "\t\t\tcaf.getDropJarConstantAction(null,schemaName,sqlName);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [], "header": "@@ -53,7 +53,6 @@ class JarUtil", "removed": [ "\tprivate UUID id; //For add null means create a new id." ] }, { "added": [ "\tprivate JarUtil(String schemaName, String sqlName)" ], "header": "@@ -65,10 +64,9 @@ class JarUtil", "removed": [ "\tprivate JarUtil(UUID id, String schemaName, String sqlName)", "\t\tthis.id = id;" ] }, { "added": [ "\tadd(String schemaName, String sqlName, String externalPath)", "\t\tJarUtil jutil = new JarUtil(schemaName, sqlName);" ], "header": "@@ -91,10 +89,10 @@ class JarUtil", "removed": [ "\tadd(UUID id, String schemaName, String sqlName, String externalPath)", "\t\tJarUtil jutil = new JarUtil(id, schemaName, sqlName);" ] }, { "added": [ " fid = ddg.newFileInfoDescriptor(/*DJD*/null, sd, sqlName, generationId);" ], "header": "@@ -138,7 +136,7 @@ class JarUtil", "removed": [ " fid = ddg.newFileInfoDescriptor(id, sd, sqlName, generationId);" ] }, { "added": [ "\tdrop(String schemaName, String sqlName)", "\t\tJarUtil jutil = new JarUtil(schemaName,sqlName);", "\t\tjutil.drop();" ], "header": "@@ -164,11 +162,11 @@ class JarUtil", "removed": [ "\tdrop(UUID id, String schemaName, String sqlName,boolean purgeOnCommit)", "\t\tJarUtil jutil = new JarUtil(id, schemaName,sqlName);", "\t\tjutil.drop(purgeOnCommit);" ] }, { "added": [ "\tprivate void drop() throws StandardException" ], "header": "@@ -177,12 +175,10 @@ class JarUtil", "removed": [ "\t @param purgeOnCommit True means purge the old jar file on commit. 
False", "\t means leave it around for use by replication.", "\tprivate void drop(boolean purgeOnCommit) throws StandardException" ] }, { "added": [], "header": "@@ -191,15 +187,6 @@ class JarUtil", "removed": [ "\t\tif (SanityManager.DEBUG)", "\t\t{", "\t\t\tif (id != null && !fid.getUUID().equals(id))", "\t\t\t{", "\t\t\t\tSanityManager.THROWASSERT(\"Drop id mismatch want=\"+id+", "\t\t\t\t\t\t\" have \"+fid.getUUID());", "\t\t\t}", "\t\t}", "" ] }, { "added": [ "\t\t\t\tfid.getGenerationId());" ], "header": "@@ -231,7 +218,7 @@ class JarUtil", "removed": [ "\t\t\t\tfid.getGenerationId(), true /*purgeOnCommit*/);" ] }, { "added": [ "\treplace(String schemaName, String sqlName,", "\t\t\tString externalPath)", "\t\tJarUtil jutil = new JarUtil(schemaName,sqlName);", "\t\t\treturn jutil.replace(is);" ], "header": "@@ -242,29 +229,26 @@ class JarUtil", "removed": [ "\t @param id The id for the jar file we add. Ignored if null.", "\t @param purgeOnCommit True means purge the old jar file on commit. 
False", "\t means leave it around for use by replication.", "\treplace(UUID id,String schemaName, String sqlName,", "\t\t\tString externalPath,boolean purgeOnCommit)", "\t\tJarUtil jutil = new JarUtil(id,schemaName,sqlName);", "\t\t\treturn jutil.replace(is,purgeOnCommit);" ] }, { "added": [ "\tprivate long replace(InputStream is) throws StandardException" ], "header": "@@ -286,7 +270,7 @@ class JarUtil", "removed": [ "\tprivate long replace(InputStream is,boolean purgeOnCommit) throws StandardException" ] }, { "added": [], "header": "@@ -298,15 +282,6 @@ class JarUtil", "removed": [ "\t\tif (SanityManager.DEBUG)", "\t\t{", "\t\t\tif (id != null && !fid.getUUID().equals(id))", "\t\t\t{", "\t\t\t\tSanityManager.THROWASSERT(\"Replace id mismatch want=\"+", "\t\t\t\t\tid+\" have \"+fid.getUUID());", "\t\t\t}", "\t\t}", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/ReplaceJarConstantAction.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.execute.ConstantAction;" ], "header": "@@ -21,21 +21,15 @@", "removed": [ "import org.apache.derby.iapi.services.sanity.SanityManager;", "import org.apache.derby.iapi.sql.execute.ConstantAction;", "", "", "import org.apache.derby.catalog.UUID;", "", "\tprivate final UUID id;" ] }, { "added": [ "\tReplaceJarConstantAction(" ], "header": "@@ -48,17 +42,15 @@ class ReplaceJarConstantAction extends DDLConstantAction", "removed": [ "\t *\t@param\tid\t\t\t\t\tThe id for the jar file", "\tReplaceJarConstantAction(UUID id,", "\t\tthis.id = id;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/RFResource.java", "hunks": [ { "added": [ "\tprivate final BaseDataFileFactory factory;", "\tRFResource(BaseDataFileFactory dataFactory) {" ], "header": "@@ -46,9 +46,9 @@ import java.io.IOException;", "removed": [ "\tprotected final BaseDataFileFactory factory;", "\tpublic RFResource(BaseDataFileFactory dataFactory) {" ] }, { "added": [ "\tpublic void remove(String name, long 
currentGenerationId)" ], "header": "@@ -148,7 +148,7 @@ class RFResource implements FileResource {", "removed": [ "\tpublic void remove(String name, long currentGenerationId, boolean purgeOnCommit)" ] }, { "added": [ "\t\ttran.logAndDo(new RemoveFileOperation(name, currentGenerationId, true));", "\t\tServiceable s = new RemoveFile(getAsFile(name, currentGenerationId));", "\t tran.addPostCommitWork(s);", "\tpublic long replace(String name, long currentGenerationId, InputStream source)", "\t\tremove(name, currentGenerationId);" ], "header": "@@ -171,27 +171,24 @@ class RFResource implements FileResource {", "removed": [ "\t\ttran.logAndDo(new RemoveFileOperation(name, currentGenerationId, purgeOnCommit));", "\t\tif (purgeOnCommit) {", "\t\t\tServiceable s = new RemoveFile(getAsFile(name, currentGenerationId));", "", "\t\t\ttran.addPostCommitWork(s);", "\t\t}", "\tpublic long replace(String name, long currentGenerationId, InputStream source, boolean purgeOnCommit)", "\t\tremove(name, currentGenerationId, purgeOnCommit);" ] }, { "added": [], "header": "@@ -209,14 +206,6 @@ class RFResource implements FileResource {", "removed": [ "\t/**", "\t @see FileResource#getAsFile", "\t */", "\tprivate StorageFile getAsFile(String name)", "\t{", "\t\treturn factory.storageFactory.newStorageFile( name);", "\t}", "" ] } ] } ]
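The `RFResource.remove` change above drops the `purgeOnCommit` flag and unconditionally queues the physical deletion with `tran.addPostCommitWork(new RemoveFile(...))`, so the old jar generation disappears only if the transaction commits. A toy model of that queue (the names `Serviceable`, `addPostCommitWork`, `performWork` are illustrative stand-ins, not Derby's real transaction API):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of a post-commit work queue: actions such as deleting an old
// jar generation are collected on the transaction and run only at commit,
// never on rollback, so the on-disk file survives an aborted DROP.
class PostCommitQueue {
    interface Serviceable { void performWork(); }

    private final List<Serviceable> work = new ArrayList<Serviceable>();

    void addPostCommitWork(Serviceable s) { work.add(s); }

    void commit() {
        for (Serviceable s : work) {
            s.performWork();
        }
        work.clear();
    }

    void rollback() { work.clear(); }
}
```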
derby-DERBY-537-66f66a11
DERBY-537 Fix sqlj.replace_jar and sqlj.remove_jar to work under a security manager. Add a test covering the simple mechanics of the sqlj functions, separated from the jar files being active on the database class path. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@483738 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [ " long generationId = setJar(jarExternalName, is, true, 0L);" ], "header": "@@ -133,7 +133,7 @@ class JarUtil", "removed": [ " long generationId = setJar(jarExternalName, is);" ] }, { "added": [ "\t\t\tlong generationId = setJar(jarExternalName, is, false,", "\t\t\t\t\tfid.getGenerationId());", " " ], "header": "@@ -284,10 +284,9 @@ class JarUtil", "removed": [ "\t\t\tlong generationId = ", "\t\t\t\tfr.replace(jarExternalName,", "\t\t\t\t\tfid.getGenerationId(), is);", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/store/raw/data/RFResource.java", "hunks": [ { "added": [ "import java.security.AccessController;", "import java.security.PrivilegedAction;", "import java.security.PrivilegedActionException;", "import java.security.PrivilegedExceptionAction;" ], "header": "@@ -43,6 +43,10 @@ import java.io.InputStream;", "removed": [] }, { "added": [ "final class RemoveFile implements Serviceable, PrivilegedExceptionAction" ], "header": "@@ -223,7 +227,7 @@ class RFResource implements FileResource {", "removed": [ "class RemoveFile implements Serviceable" ] }, { "added": [ " try {", " AccessController.doPrivileged(this);", " } catch (PrivilegedActionException e) {", " throw (StandardException) (e.getException());", " }" ], "header": "@@ -235,15 +239,11 @@ class RemoveFile implements Serviceable", "removed": [ " // SECURITY PERMISSION - MP1, OP5", " if (fileToGo.exists())", " {", " if (!fileToGo.delete())", " {", " throw StandardException.newException(", " SQLState.FILE_CANNOT_REMOVE_FILE, fileToGo);", " }", " }" ] } ] } ]
derby-DERBY-537-6989c4c7
DERBY-537 (partial) Clean up JarUtil, removing code that is never called and making the class package private. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@473416 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [ "class JarUtil" ], "header": "@@ -48,12 +48,8 @@ import java.sql.CallableStatement;", "removed": [ "public class JarUtil", "\tpublic static final String ADD_JAR_DDL = \"ADD JAR\";", "\tpublic static final String DROP_JAR_DDL = \"DROP JAR\";", "\tpublic static final String REPLACE_JAR_DDL = \"REPLACE JAR\";", "\tpublic static final String READ_JAR = \"READ JAR\";" ] }, { "added": [ "\tprivate JarUtil(UUID id, String schemaName, String sqlName)" ], "header": "@@ -68,7 +64,7 @@ public class JarUtil", "removed": [ "\tpublic JarUtil(UUID id, String schemaName, String sqlName)" ] }, { "added": [ "\tstatic long" ], "header": "@@ -93,7 +89,7 @@ public class JarUtil", "removed": [ "\tstatic public long" ] }, { "added": [ "\tprivate long add(InputStream is) throws StandardException" ], "header": "@@ -121,7 +117,7 @@ public class JarUtil", "removed": [ "\tpublic long add(InputStream is) throws StandardException" ] }, { "added": [ "\tstatic void" ], "header": "@@ -160,7 +156,7 @@ public class JarUtil", "removed": [ "\tstatic public void" ] }, { "added": [ "\tprivate void drop(boolean purgeOnCommit) throws StandardException" ], "header": "@@ -179,7 +175,7 @@ public class JarUtil", "removed": [ "\tpublic void drop(boolean purgeOnCommit) throws StandardException" ] }, { "added": [ "\tstatic long" ], "header": "@@ -249,7 +245,7 @@ public class JarUtil", "removed": [ "\tstatic public long" ] }, { "added": [ "\tprivate long replace(InputStream is,boolean purgeOnCommit) throws StandardException" ], "header": "@@ -283,7 +279,7 @@ public class JarUtil", "removed": [ "\tpublic long replace(InputStream is,boolean purgeOnCommit) throws StandardException" ] }, { "added": [], "header": "@@ -332,22 +328,6 @@ public class JarUtil", "removed": [ "\t/**", "\t Get the FileInfoDescriptor for a jar file from the current connection's database or", "\t null if it does not exist.", "", "\t @param 
schemaName the name for the schema that holds the jar file.", "\t @param sqlName the sql name for the jar file.", "\t @return The FileInfoDescriptor.", "\t @exception StandardException Opps", "\t */", "\tpublic static FileInfoDescriptor getInfo(String schemaName, String sqlName, String statementType)", "\t\t throws StandardException", "\t{", "\t\tJarUtil jUtil = new JarUtil(null,schemaName,sqlName);", "\t\treturn jUtil.getInfo();", "\t}", "" ] }, { "added": [], "header": "@@ -359,44 +339,6 @@ public class JarUtil", "removed": [ "\t// get the current version of the jar file as a File or InputStream", "\tpublic static Object getAsObject(String schemaName, String sqlName)", "\t\t throws StandardException", "\t{", "\t\tJarUtil jUtil = new JarUtil(null,schemaName,sqlName);", "", "\t\tFileInfoDescriptor fid = jUtil.getInfo();", "\t\tif (fid == null)", "\t\t\tthrow StandardException.newException(SQLState.LANG_FILE_DOES_NOT_EXIST, sqlName,schemaName);", "", "\t\tlong generationId = fid.getGenerationId();", "", "\t\tStorageFile f = jUtil.getAsFile(generationId);", "\t\tif (f != null)", "\t\t\treturn f;", "", "\t\treturn jUtil.getAsStream(generationId);", "\t}", "", "\tprivate StorageFile getAsFile(long generationId) {", "\t\treturn fr.getAsFile(JarDDL.mkExternalName(schemaName, sqlName, fr.getSeparatorChar()), generationId);", "\t}", "", "\tpublic static InputStream getAsStream(String schemaName, String sqlName,", "\t\tlong generationId) throws StandardException {", "\t\tJarUtil jUtil = new JarUtil(null,schemaName,sqlName);", "", "\t\treturn jUtil.getAsStream(generationId);\t\t", "\t}", "", "\tprivate InputStream getAsStream(long generationId) throws StandardException {", "\t\ttry {", "\t\t\treturn fr.getAsStream(JarDDL.mkExternalName(schemaName, sqlName, fr.getSeparatorChar()), generationId);", "\t\t} catch (IOException ioe) {", " throw StandardException.newException(SQLState.LANG_FILE_ERROR, ioe, ioe.toString()); ", "\t\t}", "\t}", "" ] } ] } ]
derby-DERBY-537-9fef6397
DERBY-537 (partial) Call FileResource.add in a privileged block when executing code to add a jar, driven by sqlj.install_jar. Allows one test fixture in DatabaseClassLoadingTest to be executed with a security manager. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@473834 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [], "header": "@@ -39,7 +39,6 @@ import org.apache.derby.iapi.sql.depend.DependencyManager;", "removed": [ "import org.apache.derby.iapi.services.io.FileUtil;" ] }, { "added": [ "\tprivate long add(final InputStream is) throws StandardException" ], "header": "@@ -119,7 +118,7 @@ class JarUtil", "removed": [ "\tprivate long add(InputStream is) throws StandardException" ] }, { "added": [ " SchemaDescriptor sd = dd.getSchemaDescriptor(schemaName, null, true);", " try {", " notifyLoader(false);", " dd.invalidateAllSPSPlans();", " final String jarExternalName = JarDDL.mkExternalName(schemaName,", " sqlName, fr.getSeparatorChar());", "", " long generationId = setJar(jarExternalName, is);", "", " fid = ddg.newFileInfoDescriptor(id, sd, sqlName, generationId);", " dd.addDescriptor(fid, sd, DataDictionary.SYSFILES_CATALOG_NUM,", " false, lcc.getTransactionExecute());", " return generationId;", " } finally {", " notifyLoader(true);", " }", " * Drop a jar file from the current connection's database.", " * ", " * @param id", " * The id for the jar file we drop. Ignored if null.", " * @param schemaName", " * the name for the schema that holds the jar file.", " * @param sqlName", " * the sql name for the jar file.", " * @param purgeOnCommit", " * True means purge the old jar file on commit. 
False means leave", " * it around for use by replication.", " * ", " * @exception StandardException", " * Opps", " */" ], "header": "@@ -130,34 +129,40 @@ class JarUtil", "removed": [ "\t\ttry {", "\t\t\tnotifyLoader(false);", "\t\t\tdd.invalidateAllSPSPlans();", "\t\t\tlong generationId = fr.add(JarDDL.mkExternalName(schemaName, sqlName, fr.getSeparatorChar()),is);", "", "\t\t\tSchemaDescriptor sd = dd.getSchemaDescriptor(schemaName, null, true);", "", "\t\t\tfid = ddg.newFileInfoDescriptor(id, sd,", "\t\t\t\t\t\t\tsqlName, generationId);", "\t\t\tdd.addDescriptor(fid, sd, DataDictionary.SYSFILES_CATALOG_NUM,", "\t\t\t\t\t\t\t false, lcc.getTransactionExecute());", "\t\t\treturn generationId;", "\t\t} finally {", "\t\t\tnotifyLoader(true);", "\t\t}", "\t Drop a jar file from the current connection's database.", "", "\t @param id The id for the jar file we drop. Ignored if null.", "\t @param schemaName the name for the schema that holds the jar file.", "\t @param sqlName the sql name for the jar file.", "\t @param purgeOnCommit True means purge the old jar file on commit. False", "\t means leave it around for use by replication.", "", "\t @exception StandardException Opps", "\t */" ] }, { "added": [ " final String jarExternalName =", " JarDDL.mkExternalName(schemaName, sqlName, fr.getSeparatorChar());", "\t\t\t\tfr.replace(jarExternalName," ], "header": "@@ -307,11 +312,13 @@ class JarUtil", "removed": [ "\t\t\t\tfr.replace(JarDDL.mkExternalName(schemaName, sqlName, fr.getSeparatorChar())," ] } ] } ]
derby-DERBY-537-a4b59331
DERBY-709, committing on behalf of Suresh Thalamati -- Removed the requirement for read permission on "user.dir" for backup to run under the security manager. Absolute paths were used only for logging to the backup history file; changed it to log canonical paths only if they can be obtained, otherwise only relative paths are written to the backup history file. -- Added a missing privileged block around saving the service.properties file into the backup. -- Added privileged blocks for test util file functions that are called through SQL functions/procedures. -- Enabled some tests, which were not running under the security manager earlier because of this bug, to run by default with the security manager. Backup tests that exercise backup with jar operations still cannot be run under the security manager due to bug DERBY-537. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@381389 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java", "hunks": [ { "added": [ " private static final int REGULAR_FILE_GET_CANONICALPATH_ACTION = 15;", " private static final int STORAGE_FILE_GET_CANONICALPATH_ACTION = 16;" ], "header": "@@ -140,6 +140,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ " StorageFile dbHistoryFile = null;", " File backupHistoryFile = null;", " " ], "header": "@@ -592,8 +594,10 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ "" ] }, { "added": [ " ", "\t\t\tbackupcopy = new File(backupDir, dbname);", " MessageId.STORE_BACKUP_STARTED, ", " canonicalDbName, ", " getFilePath(backupcopy)));", " ", " // check if a backup copy of this database already exists," ], "header": "@@ -604,14 +608,17 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " MessageId.STORE_BACKUP_STARTED, canonicalDbName));", "", "\t\t\t// if a backup copy of this database already exists,", "\t\t\tbackupcopy = new File(backupDir, dbname);" ] }, { "added": [ " getFilePath(backupcopy),", " getFilePath(oldbackup)));" ], "header": "@@ -637,8 +644,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " backupcopy.getCanonicalPath(),", " oldbackup.getCanonicalPath()));" ] }, { "added": [ " dbHistoryFile = storageFactory.newStorageFile(BACKUP_HISTORY);", " backupHistoryFile = new File(backupcopy, BACKUP_HISTORY); ", " // copy the history file into the backup. ", " if(!privCopyFile(dbHistoryFile, backupHistoryFile))", " throw StandardException. 
", " newException(SQLState.RAWSTORE_ERROR_COPYING_FILE,", " dbHistoryFile, backupHistoryFile); ", "", "" ], "header": "@@ -651,6 +658,15 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ " logHistory(historyFile,", " MessageService.getTextMessage(", " MessageId.STORE_DATA_SEG_BACKUP_COMPLETED,", " getFilePath(segBackup)));", "" ], "header": "@@ -780,13 +796,12 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ "\t\t\tlogHistory(historyFile,", " MessageService.getTextMessage(", " MessageId.STORE_COPIED_DB_DIR,", " canonicalDbName,", " backupcopy.getCanonicalPath()));", "\t\t" ] }, { "added": [ " getFilePath(logdir),", " getFilePath(logBackup)));" ], "header": "@@ -794,8 +809,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " logdir.getCanonicalPath(),", " logBackup.getCanonicalPath()));" ] }, { "added": [ " getFilePath(oldbackup)));", " // copy the updated version of history file with current", " // backup information into the backup.", " if(!privCopyFile(dbHistoryFile, backupHistoryFile))", " throw StandardException. ", " newException(SQLState.RAWSTORE_ERROR_COPYING_FILE,", " dbHistoryFile, backupHistoryFile); " ], "header": "@@ -841,13 +856,19 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " oldbackup.getCanonicalPath()));" ] }, { "added": [ " /*", " * Get the file path. If the canonical path can be obtained then return the ", " * canonical path, otherwise just return the abstract path. 
Typically if", " * there are no permission to read user.dir when running under security", " * manager canonical path can not be obtained.", " *", " * This method is used to a write path name to error/status log file, where it", " * would be nice to print full paths but not esstential that the user ", " * grant permissions to read user.dir property.", " */", " private String getFilePath(StorageFile file) {", " String path = privGetCanonicalPath(file);", " if(path != null ) {", " return path;", " }else {", " //can not get the canoncal path, ", " // return the abstract path", " return file.getPath();", " }", " }", "", " /*", " * Get the file path. If the canonical path can be obtained then return the ", " * canonical path, otherwise just return the abstract path. Typically if", " * there are no permission to read user.dir when running under security", " * manager canonical path can not be obtained.", " *", " * This method is used to a write a file path name to error/status log file, ", " * where it would be nice to print full paths but not esstential that the user", " * grant permissions to read user.dir property.", " *", " */", " private String getFilePath(File file) {", " String path = privGetCanonicalPath(file);", " if(path != null ) {", " return path;", " }else {", " // can not get the canoncal path, ", " // return the abstract path", " return file.getPath();", " }", " }" ], "header": "@@ -1172,6 +1193,48 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [] }, { "added": [ "", "", " private synchronized String privGetCanonicalPath(final StorageFile file)", " {", " actionCode = STORAGE_FILE_GET_CANONICALPATH_ACTION;", " actionStorageFile = file;", "", " try", " {", " return (String) AccessController.doPrivileged( this);", " }", " catch( PrivilegedActionException pae) { return null;} // does not throw an exception", " finally", " {", " actionStorageFile = null;", " }", " }", "", "", " private synchronized String 
privGetCanonicalPath(final File file)", " {", " actionCode = REGULAR_FILE_GET_CANONICALPATH_ACTION;", " actionRegularFile = file;", "", " try", " {", " return (String) AccessController.doPrivileged( this);", " }", " catch( PrivilegedActionException pae) { return null;} // does not throw an exception", " finally", " {", " actionRegularFile = null;", " }", " }", "" ], "header": "@@ -1472,7 +1535,41 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup", "removed": [ " " ] } ] }, { "file": "java/testing/org/apache/derbyTesting/functionTests/util/FTFileUtil.java", "hunks": [ { "added": [ "import java.security.AccessController;", "import java.security.PrivilegedAction;", "import java.security.PrivilegedActionException;", "import java.security.PrivilegedExceptionAction;", "" ], "header": "@@ -22,6 +22,11 @@ package org.apache.derbyTesting.functionTests.util;", "removed": [] }, { "added": [ " * rename a file. ", " * This method is called by some tests through a SQL procedure:", " * RENAME_FILE(LOCATION VARCHAR(32000), NAME VARCHAR(32000), ", " * NEW_NAME VARCHAR(32000))", " * @param location location of the file", " * @param name the file's name", "\t * @param newName the file's new name", "\tpublic static void renameFile(String location, String name , ", " String newName) throws Exception", "\t\tfinal File src = new File(location, name);", "\t\tfinal File dst = new File(location, newName);", " ", " // needs to run in a privileged block as it will be", "\t\t// called through a SQL statement and thus a generated", "\t\t// class. 
The generated class on the stack has no permissions", "\t\t// granted to it.", " AccessController.doPrivileged(new PrivilegedExceptionAction() {", " public Object run() throws Exception {", " if(!src.renameTo(dst))", " {", " throw new Exception(\"unable to rename File: \" +", " src.getAbsolutePath() +", " \" To: \" + dst.getAbsolutePath());", " }", " ", " return null; // nothing to return", " }", " });", " }", " * This method is called by some tests through a SQL function:", " * fileExists(fileName varchar(128))returns VARCHAR(100)", " *", " public static String fileExists(String fileName) ", " throws PrivilegedActionException", " final File fl = new File(fileName);", " ", " // needs to run in a privileged block as it will be", "\t\t// called through a SQL statement and thus a generated", "\t\t// class. The generated class on the stack has no permissions", "\t\t// granted to it.", "", " return (String) ", " AccessController.doPrivileged(new PrivilegedExceptionAction() {", " public Object run()", " {", " if(fl.exists()) {", " return \"true\";", " }else {", " return \"false\";", " }", " }", " });" ], "header": "@@ -50,40 +55,71 @@ public class FTFileUtil", "removed": [ "\t rename a file", "\t @param location location of the file", "\t @param name the file's name", "\t @param newName the file's new name", "\tpublic static void renameFile(String location, String name , String newName) throws Exception", "\t\tFile src = new File(location, name);", "\t\tFile dst = new File(location, newName);", "\t\tif(!src.renameTo(dst))", "\t\t{", "\t\t\tthrow new Exception(\"unable to rename File: \" +", "\t\t\t\t\t\t\t\tsrc.getAbsolutePath() +", "\t\t\t\t\t\t\t \" To: \" + dst.getAbsolutePath());", "\t\t}", "\t}", " public static String fileExists(String fileName) throws Exception", " File fl = new File(fileName);", " if(fl.exists()) {", " return \"true\";", " }else {", " return \"false\";", " }" ] }, { "added": [ " * Remove a directory and all of its contents.", " * This method is 
called by some tests through a SQL function:", " * removeDirectory(fileName varchar(128)) returns VARCHAR(100)", " * @param name the file's name.", "\tpublic static String removeDirectory(final String directory)", " throws PrivilegedActionException", " // needs to run in a privileged block as it will be", "\t\t// called through a SQL statement and thus a generated", "\t\t// class. The generated class on the stack has no permissions", "\t\t// granted to it.", "", " return (String) ", " AccessController.doPrivileged(new PrivilegedExceptionAction() {", " public Object run()", " {", " return (removeDirectory(", " new File(directory)) ? \"true\" : \"false\");", " }", " });" ], "header": "@@ -128,17 +164,32 @@ public class FTFileUtil", "removed": [ " *\tRemove a directory and all of its contents.", " * @param name the file's name.", "\tpublic static String removeDirectory(String directory)", "\t return (removeDirectory(new File(directory)) ? \"true\" : \"false\");" ] } ] } ]
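The getFilePath/privGetCanonicalPath pattern in the RawStore hunks above — prefer the canonical path for log messages, but degrade gracefully when it cannot be resolved — can be sketched as a standalone helper. PathFormatter is an illustrative name, not Derby's; the real code routes through RawStore's shared privileged-action dispatcher rather than an inline action:

```java
import java.io.File;
import java.io.IOException;
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PathFormatter {
    // Prefer the canonical path when writing a file name to the error/status
    // log, but fall back to the abstract path when it cannot be resolved --
    // typically because there is no permission to read user.dir under a
    // security manager.
    public static String getFilePath(final File file) {
        String path = (String) AccessController.doPrivileged(
            new PrivilegedAction() {
                public Object run() {
                    try {
                        return file.getCanonicalPath();
                    } catch (IOException ioe) {
                        // signal failure; caller falls back to getPath()
                        return null;
                    }
                }
            });
        return (path != null) ? path : file.getPath();
    }
}
```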
derby-DERBY-537-ed1c2e3b
DERBY-537 (partial) Fix the reading of the jar file (through a URL or file name) for sqlj.install_jar and replace_jar to be under a privileged block. Switched the order of lookup from the jar path to be URL and then as a file name. Otherwise a security exception is thrown trying to open the URL path as a file name. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@473828 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/services/io/FileUtil.java", "hunks": [ { "added": [], "header": "@@ -26,7 +26,6 @@ import org.apache.derby.io.WritableStorageFactory;", "removed": [ "import java.net.*;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/JarUtil.java", "hunks": [ { "added": [ "import java.io.FileInputStream;", "import java.net.MalformedURLException;", "import java.net.URL;", "import java.security.AccessController;", "import java.security.PrivilegedActionException;", "" ], "header": "@@ -40,13 +40,15 @@ import org.apache.derby.iapi.reference.SQLState;", "removed": [ "import org.apache.derby.io.StorageFile;", "import java.sql.CallableStatement;", "import java.sql.Connection;", "import java.sql.SQLException;" ] }, { "added": [ "\t\t\tis = openJarURL(externalPath);" ], "header": "@@ -97,7 +99,7 @@ class JarUtil", "removed": [ "\t\t\tis = FileUtil.getInputStream(externalPath, 0);" ] }, { "added": [ "\t\t\tis = openJarURL(externalPath);" ], "header": "@@ -255,7 +257,7 @@ class JarUtil", "removed": [ "\t\t\tis = FileUtil.getInputStream(externalPath, 0);" ] }, { "added": [ "", " /**", " * Open an input stream to read a URL or a file.", " * URL is attempted first, if the string does not conform", " * to a URL then an attempt to open it as a regular file", " * is tried.", " * <BR>", " * Attempting the file first can throw a security execption", " * when a valid URL is passed in.", " * The security exception is due to not have the correct permissions", " * to access the bogus file path. 
To avoid this the order was reversed", " * to attempt the URL first and only attempt a file open if creating", " * the URL throws a MalformedURLException.", " */", " private static InputStream openJarURL(final String externalPath)", " throws IOException", " {", " try {", " return (InputStream) AccessController.doPrivileged", " (new java.security.PrivilegedExceptionAction(){", " ", " public Object run() throws IOException { ", " try {", " return new URL(externalPath).openStream();", " } catch (MalformedURLException mfurle)", " {", " return new FileInputStream(externalPath);", " }", " }", " });", " } catch (PrivilegedActionException e) {", " throw (IOException) e.getException();", " }", " }" ], "header": "@@ -343,4 +345,38 @@ class JarUtil", "removed": [] } ] } ]
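The JarUtil.openJarURL hunk above can be sketched as a standalone method — JarPathOpener is a hypothetical class name, not Derby's — showing the URL-first lookup order inside a privileged block:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

public class JarPathOpener {
    // Try the path as a URL first; only when URL construction fails, fall
    // back to opening it as a plain file. Attempting the file first can
    // raise a security exception for a perfectly valid URL path, because
    // the URL string is then treated as a (bogus) file path.
    public static InputStream open(final String externalPath) throws IOException {
        try {
            return (InputStream) AccessController.doPrivileged(
                new PrivilegedExceptionAction() {
                    public Object run() throws IOException {
                        try {
                            return new URL(externalPath).openStream();
                        } catch (MalformedURLException mfurle) {
                            return new FileInputStream(externalPath);
                        }
                    }
                });
        } catch (PrivilegedActionException e) {
            // run() only throws IOException, so this cast is safe
            throw (IOException) e.getException();
        }
    }
}
```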
derby-DERBY-5370-8f23d8f7
DERBY-5370: Checking in Brett Bergquist's patch to make Restriction.toSQL() handle more data types. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1160445 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5374-ce63a70b
DERBY-5374; converted ij5Test fails with weme6.2 (CDC/Foundation): junit.framework.ComparisonFailure: Output at line 1 expected:<CONNECTION0* - jdbc:derby:wombat> but was:<ERROR XJ004: Database '' not found.> skipping the test with JSR169/CDC/Foundation. Also correcting a comment. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1155163 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5377-8454e15e
DERBY-5377; AssertionFailedError in testCaseCS4595B_NonUniqueIndex in AccessTest Adjusting the test to not check for pages visited in assertStatsOK method if 'null' is passed in for the expPages parameter, and passing in 'null' with the three queries that have shown to hit this issue. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1306596 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5377-efe47b40
DERBY-5377: AssertionFailedError in testCaseCS4595B_NonUniqueIndex in AccessTest Dump the statistics on assert failures to help debugging the problem. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1227121 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5379-2198fafc
DERBY-5379 testDERBY5120NumRowsInSydependsForTrigger - The number of values assigned is not the same as the number of specified or implied columns. DERBY-5484 Upgradetest fails with upgrade from 10.8.2.2 (7 errors, 1 failure) on trunk The above 2 jiras are duplicates. The upgrade tests are failing when doing an upgrade from 10.8.2.2 to trunk. The tests that are failing were written for DERBY-5120, DERBY-5044. Both these bugs got fixed in 10.8.2.2 and higher. The purpose of these tests is to show that when the tests are done with a release with those fixes missing, we will see the incorrect behavior but once the database is upgraded to 10.8.2.2 and higher, the tests will start functioning correctly. The problem is that we do not recognize that if the database is created with 10.8.2.2, then we will not see the problem behavior because 10.8.2.2 already has the required fixes in it for DERBY-5120 and DERBY-5044. I have fixed this by making the upgrade test understand that incorrect behavior would be seen only for releases under 10.8.2.2 git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1203252 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5382-4ee7918f
DERBY-5382; Convert existing harness recovery tests to JUnit tests Adjusting OCRecoveryTest to create the database in the first launched method. Modifying TestConfiguration to this end to look for a property 'derby.tests.defaultDatabaseName' which can get passed on in a new BaseTestCase method (assertLaunchedJUnitTestMethod(String, String)). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1294805 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java", "hunks": [ { "added": [ " // Check for possibly passed in DatabaseName", " // this is used in OCRecoveryTest", " String propDefDbName = getSystemProperties().getProperty(", " \"derby.tests.defaultDatabaseName\");", " if (propDefDbName != null)", " this.defaultDbName = propDefDbName;", " else", " this.defaultDbName=DEFAULT_DBNAME;" ], "header": "@@ -1093,7 +1093,14 @@ public final class TestConfiguration {", "removed": [ " this.defaultDbName = DEFAULT_DBNAME;" ] } ] } ]
derby-DERBY-5382-e06ed553
DERBY-5382; Convert existing harness recovery test to JUnit tests follow up patch, which - removes shutdowns which were preventing any recovery to happen - makes the first step a separate forkable method - adds some comments git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1293028 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5390-8f874691
DERBY-5390: NPE in BasicDatabase.stop in replication slave mode (dd.clearSequenceCaches) Added check to see if the data dictionary is available. Patch file: derby-5390-1a_check_for_null.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1164358 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/db/BasicDatabase.java", "hunks": [ { "added": [ " // The data dictionary is not available if this database has the", " // role as an active replication slave database.", " if (dd != null) {", " try {", " // on orderly shutdown, try not to leak unused numbers from", " // the sequence generators.", " dd.clearSequenceCaches();", " } catch (StandardException se) {", " se.printStackTrace(Monitor.getStream().getPrintWriter());", " }" ], "header": "@@ -241,13 +241,16 @@ public class BasicDatabase implements ModuleControl, ModuleSupportable, Property", "removed": [ " try {", " // on orderly shutdown, try not to leak unused numbers from the sequence generators.", " dd.clearSequenceCaches();", " }", " catch (Throwable t)", " {", " t.printStackTrace(Monitor.getStream().getPrintWriter());" ] } ] } ]
derby-DERBY-5391-70ff9b02
DERBY-5391: Fix the statement duration and error log reader vtis to handle the new timestamp format which we've been using in Derby logs since 10.7. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1162827 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/diag/ErrorLogReader.java", "hunks": [ { "added": [ "\tprivate int endTimestampIndex;" ], "header": "@@ -89,7 +89,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\tprivate int gmtIndex;" ] }, { "added": [ "\tprivate static final String END_TIMESTAMP = \" Thread\";" ], "header": "@@ -97,7 +97,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\tprivate static final String GMT_STRING = \" GMT\";" ] }, { "added": [ " endTimestampIndex = line.indexOf( END_TIMESTAMP );" ], "header": "@@ -178,7 +178,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\t\t\tgmtIndex = line.indexOf(GMT_STRING);" ] }, { "added": [ "\t\t\tif (endTimestampIndex != -1 && threadIndex != -1 && xidIndex != -1 && " ], "header": "@@ -191,7 +191,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\t\t\tif (gmtIndex != -1 && threadIndex != -1 && xidIndex != -1 && " ] }, { "added": [ "\t\t\t\treturn line.substring(0, endTimestampIndex);" ], "header": "@@ -234,7 +234,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\t\t\t\treturn line.substring(0, gmtIndex);" ] }, { "added": [ "\t// column1: TS varchar(29) not null" ], "header": "@@ -314,7 +314,7 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\t// column1: TS varchar(26) not null" ] } ] }, { "file": "java/engine/org/apache/derby/diag/StatementDuration.java", "hunks": [ { "added": [ "import java.text.SimpleDateFormat;" ], "header": "@@ -26,7 +26,7 @@ import java.io.FileNotFoundException;", "removed": [ "" ] }, { "added": [ "\tprivate int endTimestampIndex;", "\tprivate static final String END_TIMESTAMP = \" Thread\";" ], "header": "@@ -86,13 +86,13 @@ public class StatementDuration extends VTITemplate", "removed": [ "\tprivate int gmtIndex;", "\tprivate static final String GMT_STRING = \" GMT\";" ] }, { "added": [ " endTimestampIndex = line.indexOf( END_TIMESTAMP );", "\t\t\tif (endTimestampIndex != -1 
&& threadIndex != -1 && xidIndex != -1)" ], "header": "@@ -170,12 +170,12 @@ public class StatementDuration extends VTITemplate", "removed": [ "\t\t\tgmtIndex = line.indexOf(GMT_STRING);", "\t\t\tif (gmtIndex != -1 && threadIndex != -1)" ] }, { "added": [ "\t\t\t\tTimestamp endTs = stringToTimestamp( newRow[0] );", "\t\t\t\tTimestamp startTs = stringToTimestamp( currentRow[0] );" ], "header": "@@ -198,9 +198,9 @@ public class StatementDuration extends VTITemplate", "removed": [ "\t\t\t\tTimestamp endTs = Timestamp.valueOf(newRow[0]);", "\t\t\t\tTimestamp startTs = Timestamp.valueOf(currentRow[0]);" ] }, { "added": [ " // Turn a string into a Timestamp", " private Timestamp stringToTimestamp( String raw ) throws SQLException", " {", " //", " // We have to handle two timestamp formats.", " //", " // 1) Logged timestamps look like this before 10.7 and the fix introduced by DERBY-4752:", " //", " // 2006-12-15 16:14:58.280 GMT", " //", " // 2) From 10.7 onward, logged timestamps look like this:", " //", " // Fri Aug 26 09:28:00 PDT 2011", " //", " String trimmed = raw.trim();", "", " // if we're dealing with a pre-10.7 timestamp", " if ( !Character.isDigit( trimmed.charAt( trimmed.length() -1 ) ) )", " {", " // strip off the trailing timezone, which Timestamp does not expect", "", " trimmed = trimmed.substring( 0, trimmed.length() - 4 );", " ", " return Timestamp.valueOf( trimmed );", " }", " else", " {", " //", " // From 10.7 onward, the logged timestamp was formatted by Date.toString().", " //", " SimpleDateFormat sdf = new SimpleDateFormat( \"EEE MMM dd HH:mm:ss zzz yyyy\" );", "", " try {", " return new Timestamp( sdf.parse( trimmed ).getTime() );", " }", " catch (Exception e)", " {", " throw new SQLException( e.getMessage() );", " }", " }", " }" ], "header": "@@ -208,6 +208,47 @@ public class StatementDuration extends VTITemplate", "removed": [] }, { "added": [ "\t\t\t\treturn line.substring(0, endTimestampIndex);" ], "header": "@@ -250,7 +291,7 @@ public class 
StatementDuration extends VTITemplate", "removed": [ "\t\t\t\treturn line.substring(0, gmtIndex);" ] } ] } ]
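The dual-format parsing that the stringToTimestamp hunk introduces can be sketched standalone. LogTimestampParser is an illustrative name, and Locale.US is added here for deterministic parsing of the day/month names (the patch itself relies on the default locale):

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.Locale;

public class LogTimestampParser {
    // Handle both logged timestamp formats:
    //   pre-10.7:    "2006-12-15 16:14:58.280 GMT"
    //   10.7 onward: "Fri Aug 26 09:28:00 PDT 2011"  (Date.toString())
    public static Timestamp parse(String raw) throws Exception {
        String trimmed = raw.trim();
        // A pre-10.7 timestamp ends with a timezone name, not a digit.
        if (!Character.isDigit(trimmed.charAt(trimmed.length() - 1))) {
            // Strip the trailing " GMT" zone, which Timestamp.valueOf rejects.
            trimmed = trimmed.substring(0, trimmed.length() - 4);
            return Timestamp.valueOf(trimmed);
        }
        // 10.7+: parse the Date.toString() layout.
        SimpleDateFormat sdf =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.US);
        return new Timestamp(sdf.parse(trimmed).getTime());
    }
}
```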
derby-DERBY-5393-6d2e2b6d
DERBY-5393: Remove old in-memory database purge mechanism Removed deprecated functionality. Patch file: derby-5393-1a.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1164370 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/io/VFMemoryStorageFactory.java", "hunks": [ { "added": [], "header": "@@ -63,26 +63,6 @@ public class VFMemoryStorageFactory", "removed": [ " /**", " * TODO: Remove this method once the new mechanism has been added.", " * Deletes the database if it exists.", " *", " * @param dbName the database name", " * @return {@code true} if the database was deleted, {@code false} otherwise", " */", " public static boolean purgeDatabase(final String dbName) {", " // TODO: Should we check if the database is booted / active?", " synchronized (DATABASES) {", " DataStore store = (DataStore)DATABASES.remove(dbName);", " if (store != null) {", " // Delete everything.", " store.purge();", " return true;", " }", " return false;", " }", " }", "" ] }, { "added": [], "header": "@@ -329,7 +309,6 @@ public class VFMemoryStorageFactory", "removed": [ " // TODO: Are there any streams that needs to be flushed?" ] }, { "added": [ " if (dir == null || dir.length() == 0) {" ], "header": "@@ -347,7 +326,7 @@ public class VFMemoryStorageFactory", "removed": [ " if (dir == null || dir.equals(\"\")) {" ] }, { "added": [ " if (path == null || path.length() == 0) {" ], "header": "@@ -364,7 +343,7 @@ public class VFMemoryStorageFactory", "removed": [ " if (path == null || path.equals(\"\")) {" ] } ] } ]
derby-DERBY-5395-431cefd7
DERBY-5395: Let only the DBO run certain diagnostic vtis. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1163740 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/diag/ErrorLogReader.java", "hunks": [ { "added": [ "import java.security.PrivilegedAction;", "import java.security.AccessController;" ], "header": "@@ -26,6 +26,8 @@ import java.io.FileNotFoundException;", "removed": [] }, { "added": [ "import org.apache.derby.iapi.reference.Property;", "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -35,8 +37,10 @@ import java.sql.SQLException;", "removed": [] }, { "added": [ "\tpublic ErrorLogReader() throws StandardException", " DiagUtil.checkAccess();", "", " final String home = (String)AccessController.doPrivileged", " (", " new PrivilegedAction()", " {", " public Object run()", " {", " return System.getProperty( Property.SYSTEM_HOME_PROPERTY );", " }", " }", " );" ], "header": "@@ -117,9 +121,20 @@ public class ErrorLogReader extends VTITemplate", "removed": [ "\tpublic ErrorLogReader()", "\t\tString home = System.getProperty(\"derby.system.home\");" ] } ] }, { "file": "java/engine/org/apache/derby/diag/StatementCache.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.iapi.services.context.ContextService;" ], "header": "@@ -29,10 +29,11 @@ import java.util.Collection;", "removed": [ "import org.apache.derby.iapi.sql.conn.ConnectionUtil;" ] } ] }, { "file": "java/engine/org/apache/derby/diag/StatementDuration.java", "hunks": [ { "added": [ "import java.security.PrivilegedAction;", "import java.security.AccessController;" ], "header": "@@ -26,6 +26,8 @@ import java.io.FileNotFoundException;", "removed": [] }, { "added": [ "import org.apache.derby.iapi.error.StandardException;", "import org.apache.derby.iapi.reference.Property;" ], "header": "@@ -35,9 +37,11 @@ import java.sql.SQLException;", "removed": [] }, { "added": [ "\tpublic StatementDuration() throws StandardException", " DiagUtil.checkAccess();", "", " final String home = (String)AccessController.doPrivileged", " (", " new PrivilegedAction()", 
" {", " public Object run()", " {", " return System.getProperty( Property.SYSTEM_HOME_PROPERTY );", " }", " }", " );" ], "header": "@@ -107,9 +111,20 @@ public class StatementDuration extends VTITemplate", "removed": [ "\tpublic StatementDuration()", "\t\tString home = System.getProperty(\"derby.system.home\");" ] } ] }, { "file": "java/engine/org/apache/derby/diag/TransactionTable.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.error.StandardException;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] } ] } ]
derby-DERBY-5398-c8986dff
DERBY-5398: Use a transient transaction to flush unused sequence values back to disk during orderly engine shutdown. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1166859 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.services.i18n.MessageService;", "import org.apache.derby.iapi.services.monitor.Monitor;", "import org.apache.derby.iapi.store.access.AccessFactory;" ], "header": "@@ -29,10 +29,13 @@ import org.apache.derby.iapi.services.cache.Cacheable;", "removed": [] }, { "added": [ " // We flush the current value to disk on database shutdown also.", " boolean gapClosed = updateCurrentValueOnDisk( null, peekAtCurrentValue() );", "", " // log an error message if we failed to flush the preallocated values.", " if ( !gapClosed )", " {", " String errorMessage = MessageService.getTextMessage", " (", " SQLState.LANG_CANT_FLUSH_PREALLOCATOR,", " _sequenceGenerator.getSchemaName(),", " _sequenceGenerator.getName()", " );", "", " Monitor.getStream().println( errorMessage );", " }" ], "header": "@@ -195,10 +198,24 @@ public abstract class SequenceUpdater implements Cacheable", "removed": [ " updateCurrentValueOnDisk( null, peekAtCurrentValue() );" ] } ] } ]
derby-DERBY-540-bf4839e7
DERBY-540 Do not prepend database name for classpath databases with leading slash. This causes databases to be not found when in jar files on the database. Correct the lookup of resources in the class path storage factory to not use the methods that prepend the current class name, instead use methods from ClassLoader directly. The leading slash was incorrectly added to avoid the automatic package prepending performed by Class.getResource. Removed code that tried to optimise not using the thread context class loader, simply have a fixed lookup for resources of thread context class loader followed by class loader for Derby/system classloader. Add lang/dbjar.sql to test databases within a jar and within a jar on the classpath and class loading from such databases. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@240111 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/io/CPFile.java", "hunks": [ { "added": [ "" ], "header": "@@ -21,13 +21,9 @@", "removed": [ "import org.apache.derby.io.StorageRandomAccessFile;", "", "import org.apache.derby.iapi.services.sanity.SanityManager;", "import java.io.OutputStream;", "import java.io.IOException;" ] }, { "added": [ " " ], "header": "@@ -38,9 +34,7 @@ class CPFile extends InputStreamFile", "removed": [ " private int actionCode;", " private static final int EXISTS_ACTION = 1;", "" ] }, { "added": [ " \tClassLoader cl = Thread.currentThread().getContextClassLoader();", " \tif (cl != null)", " \t\tif (cl.getResource(path) != null)", " \t\t\treturn true;", " \t// don't assume the context class loader is tied", " \t// into the class loader that loaded this class.", " \tcl = getClass().getClassLoader();", "\t\t// Javadoc indicates implementations can use", "\t\t// null as a return from Class.getClassLoader()", "\t\t// to indicate the system/bootstrap classloader.", " \tif (cl != null)", " \t{", " \t\treturn (cl.getResource(path) != null);", " \t}", " \telse", " \t{", " \t\treturn ClassLoader.getSystemResource(path) != null;", " \t}" ], "header": "@@ -72,19 +66,24 @@ class CPFile extends InputStreamFile", "removed": [ " if( storageFactory.useContextLoader)", " {", " ClassLoader cl = Thread.currentThread().getContextClassLoader();", " if( cl != null && cl.getResource( path) != null)", " return true;", " }", " if( getClass().getResource( path) != null)", " {", " if( storageFactory.useContextLoader)", " storageFactory.useContextLoader = false;", " return true;", " }", " return false;" ] } ] } ]
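The lookup order the CPFile.exists() hunk establishes — context class loader, then this class's loader, then the system loader — can be sketched as follows; ClasspathProbe is an illustrative stand-in for CPFile:

```java
public class ClasspathProbe {
    // Fixed resource lookup order from the fix: thread context class loader
    // first, then the loader of this class, then the system loader when
    // getClassLoader() returns null (Javadoc allows null to mean the
    // bootstrap loader). Note the path carries no leading slash:
    // ClassLoader methods, unlike Class.getResource, take paths relative to
    // the classpath root and do no package prepending, so the slash that the
    // old code added made jar-resident databases unfindable.
    public static boolean resourceExists(String path) {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl != null && cl.getResource(path) != null) {
            return true;
        }
        // Don't assume the context loader is tied into the loader that
        // loaded this class.
        cl = ClasspathProbe.class.getClassLoader();
        if (cl != null) {
            return cl.getResource(path) != null;
        }
        return ClassLoader.getSystemResource(path) != null;
    }
}
```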
derby-DERBY-5406-35549162
DERBY-5406: Intermittent failures in CompressTableTest and TruncateTableTest Detect if a statement is invalidated while it is being recompiled and retry the compilation. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1175785 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java", "hunks": [ { "added": [ " /** True if the statement was invalidated while it was being compiled. */", " private boolean invalidatedWhileCompiling;" ], "header": "@@ -147,6 +147,8 @@ public class GenericPreparedStatement", "removed": [] }, { "added": [ "", " final int depth = lccToUse.getStatementDepth();", " try {", " rePrepare(lccToUse);", " } finally {", " boolean recompile = false;", "", " // Check if the statement was invalidated while it was", " // compiled. The compiled version of the statement may or", " // not be up to date anymore, so we recompile the statement", " // if this happens. Note that this is checked in a finally", " // block, so we also retry if an exception was thrown. The", " // exception was probably thrown because of the changes", " // that invalidated the statement. If not, recompiling", " // will also fail, and the exception will be exposed to", " // the caller.", " //", " // invalidatedWhileCompiling and isValid are protected by", " // synchronization on \"this\".", " synchronized (this) {", " if (invalidatedWhileCompiling) {", " isValid = false;", " invalidatedWhileCompiling = false;", " recompile = true;", " }", " }", "", " if (recompile) {", " // A new statement context is pushed while compiling.", " // Typically, this context is popped by an error", " // handler at a higher level. 
But since we retry the", " // compilation, the error handler won't be invoked, so", " // the stack must be reset to its original state first.", " while (lccToUse.getStatementDepth() > depth) {", " lccToUse.popStatementContext(", " lccToUse.getStatementContext(), null);", " }", "", " continue recompileOutOfDatePlan;", " }", " }" ], "header": "@@ -406,7 +408,47 @@ recompileOutOfDatePlan:", "removed": [ "\t\t\t\trePrepare(lccToUse);" ] }, { "added": [ " {", " // Since the statement is in the process of being compiled,", " // and at the end of the compilation it will set isValid to", " // true and overwrite whatever we set it to here, set another", " // flag to indicate that an invalidation was requested. A", " // re-compilation will be triggered if this flag is set, but", " // not until the current compilation is done.", " invalidatedWhileCompiling = true;", " }" ], "header": "@@ -785,7 +827,16 @@ recompileOutOfDatePlan:", "removed": [] } ] } ]
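The invalidatedWhileCompiling handshake described above can be reduced to a toy model. RecompilingStatement and its methods are illustrative stand-ins for GenericPreparedStatement; Derby's real loop also rewinds the statement-context stack before retrying, which this sketch omits:

```java
public class RecompilingStatement {
    // Both flags are protected by synchronization on "this", as in the patch.
    private boolean isValid;
    private boolean invalidatedWhileCompiling;
    private int compileCount;

    // Invoked (possibly from another thread) by DDL that changes objects this
    // statement depends on. A compilation in flight will end by marking the
    // plan valid again, so record the request in a flag rather than lose it.
    public synchronized void makeInvalid() {
        isValid = false;
        invalidatedWhileCompiling = true;
    }

    private synchronized void rePrepare() {
        compileCount++;
        isValid = true; // compilation always finishes by validating the plan
    }

    // Returns how many compilations it took to get a plan that survived
    // un-invalidated.
    public int execute() {
        while (true) {
            synchronized (this) {
                if (!isValid) {
                    rePrepare();
                }
            }
            // ... the plan would run here ...
            synchronized (this) {
                if (invalidatedWhileCompiling) {
                    // The plan we just built may already be stale: drop it
                    // and go around again.
                    isValid = false;
                    invalidatedWhileCompiling = false;
                    continue;
                }
            }
            return compileCount;
        }
    }
}
```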
derby-DERBY-5406-6555d3ce
DERBY-5406: Intermittent failures in CompressTableTest and TruncateTableTest If GenericActivationHolder determines that a recompile is needed, it now throws an exception to signal that to the caller instead of doing the recompilation itself. This way, if the statement is invalidated again during the recompilation, the already existing retry logic in the caller (that is, GenericPreparedStatement.executeStmt()) will be used to detect that the recompilation must be retried. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1189067 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/GenericActivationHolder.java", "hunks": [ { "added": [ " final boolean needNewClass =", " gc == null || gc != ps.getActivationClass();", "\t\t\tif (needNewClass || !ac.isValid())", "\t\t\t\tif (needNewClass) {", " // The statement has been re-prepared since the last time", " // we executed it. Get the new activation class.", " newGC = ps.getActivationClass();", " if (newGC == null) {", " // There is no class associated with the statement.", " // Tell the caller that the statement needs to be", " // recompiled.", " throw StandardException.newException(", " SQLState.LANG_STATEMENT_NEEDS_RECOMPILE);", " }" ], "header": "@@ -261,23 +261,24 @@ final public class GenericActivationHolder implements Activation", "removed": [ "\t\t\tif (gc != ps.getActivationClass() || !ac.isValid())", "\t\t\t\tif (gc != ps.getActivationClass()) {", "\t\t\t\t\t// ensure the statement is valid by rePreparing it.", "\t\t\t\t\t// DERBY-3260: If someone else reprepares the statement at", "\t\t\t\t\t// the same time as we do, there's a window between the", "\t\t\t\t\t// calls to rePrepare() and getActivationClass() when the", "\t\t\t\t\t// activation class can be set to null, leading to", "\t\t\t\t\t// NullPointerException being thrown later. Therefore,", "\t\t\t\t\t// synchronize on ps to close the window.", "\t\t\t\t\tsynchronized (ps) {", "\t\t\t\t\t\tps.rePrepare(getLanguageConnectionContext());", "\t\t\t\t\t\tnewGC = ps.getActivationClass();", "\t\t\t\t\t}" ] } ] } ]
derby-DERBY-5406-878e2115
DERBY-5406: Intermittent failures in CompressTableTest and TruncateTableTest Retry compilation if it fails because a conglomerate has disappeared. This may happen if DDL, compress, truncate or similar operations happen while the statement is being compiled. When trying again, the compilation should find the new conglomerate if one exists, or fail with a proper error message if the SQL object has been removed. This is a workaround for a race condition in the dependency management. When binding a statement, the compiler typically builds descriptor objects (like a TableDescriptor) from the system tables and then registers the statement as a dependent on that descriptor. However, another thread may at the same time be invalidating all dependents of that descriptor. It is possible that this happens right before the current statement has been registered as a dependent, and it will never see the invalidation request. Once it actually tries to access the conglomerate associated with the descriptor, it will fail with a "conglomerate does not exist" error, and since the statement did not see the invalidation request, the compiler doesn't know that it should retry the compilation. This fix also backs out the changes made in revision 1187204, as they addressed a subset of the cases handled by this broader fix, and are not needed any more. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1234776 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/GenericStatement.java", "hunks": [ { "added": [ " String prevErrorId = null;", " boolean recompile = false;", " } catch (StandardException se) {", " // There is a chance that we didn't see the invalidation", " // request from a DDL operation in another thread because", " // the statement wasn't registered as a dependent until", " // after the invalidation had been completed. Assume that's", " // what has happened if we see a conglomerate does not exist", " // error, and force a retry even if the statement hasn't been", " // invalidated.", " if (SQLState.STORE_CONGLOMERATE_DOES_NOT_EXIST.equals(", " se.getMessageId())) {", " // STORE_CONGLOMERATE_DOES_NOT_EXIST has exactly one", " // argument: the conglomerate id", " String conglomId = String.valueOf(se.getArguments()[0]);", "", " // Request a recompile of the statement if a conglomerate", " // disappears while we are compiling it. But if we have", " // already retried once because the same conglomerate was", " // missing, there's probably no hope that yet another retry", " // will help, so let's break out instead of potentially", " // looping infinitely.", " if (!conglomId.equals(prevErrorId)) {", " recompile = true;", " }", " prevErrorId = conglomId;", " }", " throw se;", " } finally {" ], "header": "@@ -92,13 +92,40 @@ public class GenericStatement", "removed": [ " } finally {", " boolean recompile = false;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java", "hunks": [ { "added": [], "header": "@@ -55,8 +55,6 @@ import org.apache.derby.iapi.sql.compile.RequiredRowOrdering;", "removed": [ "import org.apache.derby.iapi.sql.depend.DependencyManager;", "" ] }, { "added": [ " // probably doesn't exist anymore." 
], "header": "@@ -2349,20 +2347,8 @@ public class FromBaseTable extends FromTable", "removed": [ " // probably doesn't exist anymore because of concurrent DDL or", " // compress operations, and the compilation will have to be tried", " // again.", " // The statement is typically invalidated by the operation", " // that dropped the conglomerate. However, if the invalidation", " // happened before we called createDependency(), we'll miss it", " // and we won't retry the compilation with fresh dictionary", " // information (DERBY-5406). So let's invalidate the statement", " // ourselves here.", " compilerContext.getCurrentDependent().makeInvalid(", " DependencyManager.COMPILE_FAILED,", " getLanguageConnectionContext());", "" ] } ] } ]
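The retry guard this commit describes — recompile when a conglomerate vanishes mid-compile, but give up if the *same* conglomerate fails twice in a row — can be sketched as a tiny stand-alone helper. Names are illustrative, not Derby's actual classes:

```java
/**
 * Sketch of the DERBY-5406 guard against looping forever: request a
 * recompile when a conglomerate disappears during compilation, unless the
 * same conglomerate id already caused the previous retry, in which case
 * another attempt is unlikely to help.
 */
public class MissingConglomerateRetry {
    private String prevErrorId;

    public boolean shouldRecompile(String conglomId) {
        // Retry only if this is not the conglomerate that failed last time.
        boolean recompile = !conglomId.equals(prevErrorId);
        prevErrorId = conglomId;
        return recompile;
    }
}
```

A first failure for any conglomerate triggers a retry; a repeat of the same id breaks the potential infinite loop.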
derby-DERBY-5406-be1b5ea1
DERBY-5406: Intermittent failures in CompressTableTest and TruncateTableTest Push retry logic down to GenericStatement.prepare() so that it also covers the code path from Connection.prepareStatement(). The previous location only covered compilations requested by the execute methods. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1190220 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java", "hunks": [ { "added": [ " boolean invalidatedWhileCompiling;" ], "header": "@@ -148,7 +148,7 @@ public class GenericPreparedStatement", "removed": [ " private boolean invalidatedWhileCompiling;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/GenericStatement.java", "hunks": [ { "added": [ "\t\treturn prepare(lcc, false);" ], "header": "@@ -82,7 +82,7 @@ public class GenericStatement", "removed": [ "\t\treturn prepMinion(lcc, true, (Object[]) null, (SchemaDescriptor) null, false); " ] }, { "added": [ "", " final int depth = lcc.getStatementDepth();", " while (true) {", " try {", " return prepMinion(lcc, true, (Object[]) null,", " (SchemaDescriptor) null, forMetaData);", " } finally {", " boolean recompile = false;", "", " // Check if the statement was invalidated while it was", " // compiled. If so, the newly compiled plan may not be", " // up to date anymore, so we recompile the statement", " // if this happens. Note that this is checked in a finally", " // block, so we also retry if an exception was thrown. The", " // exception was probably thrown because of the changes", " // that invalidated the statement. If not, recompiling", " // will also fail, and the exception will be exposed to", " // the caller.", " //", " // invalidatedWhileCompiling and isValid are protected by", " // synchronization on the prepared statement.", " synchronized (preparedStmt) {", " if (preparedStmt.invalidatedWhileCompiling) {", " preparedStmt.isValid = false;", " preparedStmt.invalidatedWhileCompiling = false;", " recompile = true;", " }", " }", "", " if (recompile) {", " // A new statement context is pushed while compiling.", " // Typically, this context is popped by an error", " // handler at a higher level. 
But since we retry the", " // compilation, the error handler won't be invoked, so", " // the stack must be reset to its original state first.", " while (lcc.getStatementDepth() > depth) {", " lcc.popStatementContext(", " lcc.getStatementContext(), null);", " }", "", " // Don't return yet. The statement was invalidated, so", " // we must retry the compilation.", " continue;", " }", " }", " }" ], "header": "@@ -90,7 +90,52 @@ public class GenericStatement", "removed": [ "\t\treturn prepMinion(lcc, true, (Object[]) null, (SchemaDescriptor) null, forMetaData); " ] } ] } ]
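The compile-retry loop this commit pushes down into `prepare()` can be sketched independently of Derby's classes: compile, then atomically check-and-clear an "invalidated while compiling" flag that another thread may have set, and loop while it was set. Names below are illustrative, not Derby's API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Sketch of the DERBY-5406 retry pattern: if the statement was invalidated
 * (e.g. by concurrent DDL) while we were compiling it, the freshly built
 * plan may already be stale, so compile again.
 */
public class RecompilingStatement {
    private final AtomicBoolean invalidatedWhileCompiling = new AtomicBoolean();
    private int compileCount;

    /** Called by, e.g., a DDL thread while a compilation is in flight. */
    public void invalidate() {
        invalidatedWhileCompiling.set(true);
    }

    /** Returns the total number of compilations performed so far. */
    public int prepare() {
        while (true) {
            compileCount++;  // stands in for the real compilation step
            // Check-and-clear the flag after compiling; if it was set,
            // retry the compilation with fresh dictionary information.
            if (!invalidatedWhileCompiling.getAndSet(false)) {
                return compileCount;
            }
        }
    }
}
```

An invalidation that lands mid-compile costs exactly one extra compilation pass.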
derby-DERBY-5406-d0c5d9c0
DERBY-5406: Intermittent failures in CompressTableTest and TruncateTableTest Make sure the statement receives an invalidation request if the conglomerate disappears before FromBaseTable has created the dependency on the table descriptor. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1187204 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java", "hunks": [ { "added": [ "import org.apache.derby.iapi.sql.depend.DependencyManager;", "" ], "header": "@@ -53,6 +53,8 @@ import org.apache.derby.iapi.sql.compile.RequiredRowOrdering;", "removed": [] }, { "added": [ " // probably doesn't exist anymore because of concurrent DDL or", " // compress operations, and the compilation will have to be tried", " // again.", " // The statement is typically invalidated by the operation", " // that dropped the conglomerate. However, if the invalidation", " // happened before we called createDependency(), we'll miss it", " // and we won't retry the compilation with fresh dictionary", " // information (DERBY-5406). So let's invalidate the statement", " // ourselves here.", " compilerContext.getCurrentDependent().makeInvalid(", " DependencyManager.COMPILE_FAILED,", " getLanguageConnectionContext());", "" ], "header": "@@ -2347,8 +2349,20 @@ public class FromBaseTable extends FromTable", "removed": [ " // probably doesn't exist anymore." ] } ] } ]
derby-DERBY-5407-55d3591d
DERBY-5407 When run across the network, dblook produces unusable DDL for VARCHAR FOR BIT DATA columns. The serialization of the UDT associated with SYSCOLUMNS.COLUMNDATATYPE on the wire from the network server end happens correctly. The same serialized data is received by the client, but when we try to instantiate the UDT's TypeDescriptor based on this serialized data, we get confused between "VARCHAR () FOR BIT DATA" and "VARCHAR FOR BIT DATA". The deserialization on the client side happens through BaseTypeIdImpl.getTypeFormatId(). Here, we look at the string representation of the type descriptor that we received on the wire and choose the appropriate format id based on that string. The problem is in this BaseTypeIdImpl.getTypeFormatId() code, where the code looks for "VARCHAR FOR BIT DATA" rather than "VARCHAR () FOR BIT DATA" (notice the missing parentheses) else if ( "VARCHAR FOR BIT DATA".equals( unqualifiedName ) ) { return StoredFormatIds.VARBIT_TYPE_ID_IMPL; } Since "VARCHAR FOR BIT DATA" and "VARCHAR () FOR BIT DATA" do not match, we do not use format id VARBIT_TYPE_ID_IMPL. Later, we go through a switch statement based on the format id in BaseTypeIdImpl.toParsableString(TypeDescriptor). In the switch statement, we are supposed to stuff the width of the varchar for bit data into the parentheses, i.e. the string "VARCHAR () FOR BIT DATA" should get converted into "VARCHAR (NUMBER) FOR BIT DATA", but we don't do it because of the getTypeFormatId() code problem explained earlier.
To fix this, since there might be dependencies on the original "VARCHAR FOR BIT DATA" check, the patch adds a check for "VARCHAR () FOR BIT DATA" in addition to the existing check for "VARCHAR FOR BIT DATA", as shown below, and that fixes the problem: else if ( "VARCHAR FOR BIT DATA".equals( unqualifiedName ) ) { return StoredFormatIds.VARBIT_TYPE_ID_IMPL; } else if ( "VARCHAR () FOR BIT DATA".equals( unqualifiedName ) ) { return StoredFormatIds.VARBIT_TYPE_ID_IMPL; } This commit does a similar thing for "CHAR FOR BIT DATA", i.e. in addition to the existing test for "CHAR FOR BIT DATA", it adds a check for "CHAR () FOR BIT DATA". Keeping the existing checks will not break any dependencies that might exist on the "VARCHAR FOR BIT DATA" and "CHAR FOR BIT DATA" checks. A test has been added for SYSCOLUMNS.COLUMNDATATYPE for all the supported data types. This test will be run in both embedded and network server mode. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1364690 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5409-0c8f5852
DERBY-5409: GrantRevokeDDLTest fails under Java 7 Drop created schemas in test cases to prevent interference when test cases run in a different order than the order in which they appear in the source file. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1170470 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5412-dc5f56a1
DERBY-5412: MemoryLeakFixesTest.testRepeatedDatabaseCreationWithAutoStats() fails on phoneME: java.lang.InternalError: Number of class names exceeds vm limit. BaseTestCase: Added helper method isPhoneME() that checks if the test is running on a phoneME platform. OldVersions and lang._Suite: Replaced existing checks for phoneME with the new helper method. MemoryLeakFixesTest: Added assertion to verify that database was successfully shut down. Added manually invoked garbage collection on phoneME to avoid exceeding internal limit on number of class names. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1171665 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5413-8bdf7afe
DERBY-5363 Tighten default permissions of DB files with >= JDK6 Patch derby-5363-followup, which adds a missing AccessController block around setting the system property SERVER_STARTED_FROM_CMD_LINE. Without the patch, this would fail if running with a security manager specified on the command line. If the property permission is missing, the error is printed unconditionally and exit(1) from main is taken. Cf. DERBY-5413 which tried another (aborted) approach to make sure it got printed. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1177718 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/drda/NetworkServerControl.java", "hunks": [ { "added": [ "import java.security.AccessController;", "import java.security.PrivilegedExceptionAction;" ], "header": "@@ -25,6 +25,8 @@ import java.io.PrintWriter;", "removed": [] }, { "added": [ " try {", " AccessController.doPrivileged(new PrivilegedExceptionAction() {", " public Object run() throws Exception {", " System.setProperty(", " Property.SERVER_STARTED_FROM_CMD_LINE,", " \"true\");", " return null;", " }});", " } catch (Exception e) {", " server.consoleExceptionPrintTrace(e);", " System.exit(1);", " }" ], "header": "@@ -303,8 +305,18 @@ public class NetworkServerControl{", "removed": [ " System.setProperty(Property.SERVER_STARTED_FROM_CMD_LINE,", " \"true\");" ] } ] } ]
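The pattern this commit applies — wrapping `System.setProperty` in a privileged block so it succeeds even when untrusted code is on the call stack under a security manager — can be sketched generically. The property name below is illustrative, not Derby's constant:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

/**
 * Sketch of the DERBY-5363/DERBY-5413 fix: perform the property write inside
 * AccessController.doPrivileged so only this class's own protection domain
 * needs the PropertyPermission, not every caller on the stack.
 */
public class PrivilegedPropertySetter {
    public static void setProperty(final String key, final String value) {
        AccessController.doPrivileged(new PrivilegedAction<Void>() {
            public Void run() {
                System.setProperty(key, value);
                return null;
            }
        });
    }
}
```

With no security manager installed the call behaves like a plain `System.setProperty`; the privileged block only matters when permission checks are active. (`AccessController` is deprecated in recent JDKs but still functional.)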
derby-DERBY-5416-eebc9c71
DERBY-5416: SYSCS_COMPRESS_TABLE causes an OutOfMemoryError when the heap is full at call time and then gets mostly garbage collected later on. Improve the accuracy of the code that estimates the memory requirement of the sort buffer. When it detects that the current memory usage is lower than the initial memory usage, it now records the current usage and uses that value instead of the initial memory usage in future calculations. This compensates to some degree, but not fully, for the skew in the estimates due to garbage collection happening after the initial memory usage. The memory requirement will not be as badly underestimated, and the likelihood of OutOfMemoryErrors is reduced. There is no regression test case for this bug, since the only reliable, reproducible test case that we currently have, needs too much time, disk space and memory to be included in the regression test suite. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1550103 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/store/access/sort/MergeInserter.java", "hunks": [ { "added": [ " private long beginMemoryUsage;" ], "header": "@@ -62,9 +62,7 @@ final class MergeInserter implements SortController", "removed": [ "\tprivate long beginFreeMemory;", "\tprivate long beginTotalMemory;", "\tprivate long estimatedMemoryUsed;" ] }, { "added": [ " long currentMemoryUsage =", " currentTotalMemory - currentFreeMemory;" ], "header": "@@ -121,6 +119,8 @@ final class MergeInserter implements SortController", "removed": [] }, { "added": [ " long estimatedMemoryUsed =", " currentMemoryUsage - beginMemoryUsage;" ], "header": "@@ -128,8 +128,8 @@ final class MergeInserter implements SortController", "removed": [ " \t\testimatedMemoryUsed = (currentTotalMemory-currentFreeMemory) -", "\t\t \t\t\t(beginTotalMemory-beginFreeMemory);" ] }, { "added": [ " if (estimatedMemoryUsed < 0) {", " // We use less memory now than before we started filling", " // the sort buffer, probably because gc has happened. This", " // means we don't have a good estimate for how much memory", " // the sort buffer has occupied. To compensate for that,", " // set the begin memory usage to the current memory usage,", " // so that we get a more correct (but probably still too", " // low) estimate the next time we get here. See DERBY-5416.", " beginMemoryUsage = currentMemoryUsage;", " }", "" ], "header": "@@ -148,6 +148,17 @@ final class MergeInserter implements SortController", "removed": [] }, { "added": [ " beginMemoryUsage = jvm.totalMemory() - jvm.freeMemory();" ], "header": "@@ -267,9 +278,7 @@ final class MergeInserter implements SortController", "removed": [ "\t\tbeginFreeMemory = jvm.freeMemory();", "\t\tbeginTotalMemory = jvm.totalMemory();", "\t\testimatedMemoryUsed = 0;" ] } ] } ]
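The estimation fix this commit describes boils down to: track a baseline memory usage, and when current usage drops below the baseline (typically because garbage collection ran after the baseline was recorded), reset the baseline so later estimates are less badly skewed. A minimal stand-alone sketch, with names of my own choosing rather than Derby's:

```java
/**
 * Sketch of the DERBY-5416 sort-buffer memory estimation fix.
 * Not Derby's actual MergeInserter; usages are passed in explicitly
 * instead of being read from Runtime.totalMemory()/freeMemory().
 */
public class SortMemoryEstimator {
    private long baselineUsage;

    public SortMemoryEstimator(long initialUsage) {
        this.baselineUsage = initialUsage;
    }

    /**
     * Estimate how much memory the sort buffer occupies. A negative raw
     * estimate means gc freed memory that was counted in the baseline, so
     * record the current usage as the new baseline to reduce the skew of
     * future estimates, and report zero for this round.
     */
    public long estimate(long currentUsage) {
        long estimated = currentUsage - baselineUsage;
        if (estimated < 0) {
            baselineUsage = currentUsage;
            estimated = 0;
        }
        return estimated;
    }
}
```

After a gc-induced dip, subsequent estimates are measured against the lower baseline instead of staying permanently negative.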
derby-DERBY-5420-df9ed37d
DERBY-5420 Regression suite appears locale sensitive: failed in TableLockBasicTest: bug in RealBasicNoPutResultSetStatistics Patch derby-5420-2 changes the way the localization of the floating point numbers is done for RealBasicNoPutResultSetStatistics to use an explicit decimal format in the localized message text itself. When the double argument is henceforth filled in, the locale is already set correctly (in MessageBuilder), so the decimal point/comma will be chosen correctly according to locale. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1176633 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyBuild/MessageBuilder.java", "hunks": [ { "added": [ "import java.util.regex.Matcher;", "import java.util.regex.Pattern;" ], "header": "@@ -27,13 +27,14 @@ import java.io.IOException;", "removed": [ "import org.apache.tools.ant.taskdefs.Echo;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/rts/RealBasicNoPutResultSetStatistics.java", "hunks": [ { "added": [ "import java.security.AccessController;", "import java.security.PrivilegedAction;", "import org.apache.derby.iapi.reference.SQLState;", "import org.apache.derby.iapi.services.context.ContextService;", "import org.apache.derby.iapi.services.i18n.MessageService;", "import java.util.Locale;", "import java.util.Vector;" ], "header": "@@ -21,26 +21,20 @@", "removed": [ "import org.apache.derby.iapi.services.io.StoredFormatIds;", "import org.apache.derby.iapi.services.io.Formatable;", "", "import org.apache.derby.iapi.services.i18n.MessageService;", "import org.apache.derby.iapi.reference.SQLState;", "", "import org.apache.derby.iapi.services.io.FormatableHashtable;", "", "import java.util.Vector;", "", "import java.io.ObjectOutput;", "import java.io.ObjectInput;", "import java.io.IOException;" ] } ] } ]
derby-DERBY-5421-3d7c2eab
DERBY-5421; NullPointerException during system.nstest.utils.Dbutil.update_one_row merging 1177446 from 10.8 to trunk git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1177475 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/system/nstest/utils/DbUtil.java", "hunks": [ { "added": [ "\t" ], "header": "@@ -226,16 +226,7 @@ public class DbUtil {", "removed": [ "\t\tif (ps != null) {", "\t\t\ttry {", "\t\t\t\tps.close();", "\t\t\t\t", "\t\t\t} catch (Exception e) {", "\t\t\t\tprintException(", "\t\t\t\t\t\t\"closing insert stmt in dbUtil when there was a problem creating it\",", "\t\t\t\t\t\te);", "\t\t\t}", "\t\t}" ] }, { "added": [], "header": "@@ -256,8 +247,6 @@ public class DbUtil {", "removed": [ "\t\t\tif (ps2 != null)", "\t\t\t\tps2.close();" ] }, { "added": [], "header": "@@ -272,7 +261,6 @@ public class DbUtil {", "removed": [ "\t\t\tps2.close();" ] }, { "added": [], "header": "@@ -385,7 +373,6 @@ public class DbUtil {", "removed": [ "\t\t\tps2.close();" ] }, { "added": [ "\t" ], "header": "@@ -400,14 +387,7 @@ public class DbUtil {", "removed": [ "\t\tif (ps2 != null) {", "\t\t\ttry {", "\t\t\t\tps2.close();", "\t\t\t\t", "\t\t\t} catch (Exception e) {", "\t\t\t\tprintException(\"closing update stmt after work is done\", e);", "\t\t\t}", "\t\t}" ] }, { "added": [], "header": "@@ -432,8 +412,6 @@ public class DbUtil {", "removed": [ "\t\t\tif (ps != null)", "\t\t\t\tps.close();" ] }, { "added": [], "header": "@@ -443,7 +421,6 @@ public class DbUtil {", "removed": [ "\t\t\tps.close();" ] }, { "added": [], "header": "@@ -453,7 +430,6 @@ public class DbUtil {", "removed": [ "\t\t\tps.close();" ] }, { "added": [], "header": "@@ -468,16 +444,6 @@ public class DbUtil {", "removed": [ "\t\tif (ps != null) {", "\t\t\ttry {", "\t\t\t\tps.close();", "\t\t\t} catch (Exception e) {", "\t\t\t\tSystem.out", "\t\t\t\t.println(\"Error in closing prepared statement of delete_one()\");", "\t\t\t\tprintException(\"failure to close delete stmt after work done\",", "\t\t\t\t\t\te);", "\t\t\t}", "\t\t}" ] }, { "added": [], "header": "@@ -499,8 +465,6 @@ public class DbUtil {", "removed": [ "\t\t\tif (ps != null)", "\t\t\t\tps.close();" ] }, { "added": 
[ "\t", "\t}//of method pick_one(...)" ], "header": "@@ -543,22 +507,11 @@ public class DbUtil {", "removed": [ "\t\tif (ps != null) {", "\t\t\ttry {", "\t\t\t\tps.close();", "\t\t\t\t", "\t\t\t} catch (Exception e) {", "\t\t\t\tSystem.out", "\t\t\t\t.println(\"Error in closing prepared statement of pick_one()\");", "\t\t\t\tprintException(", "\t\t\t\t\t\t\"failure closing select stmt in pick_one after work is done\",", "\t\t\t\t\t\te);", "\t\t\t}", "\t\t}", "\t}// end of method pick_one(...)" ] } ] } ]
derby-DERBY-5424-b7d90735
DERBY-5424 On z/OS testConnectWrongSubprotocolWithSystemProperty(org.apache.derbyTesting.functionTests.tests.tools.ConnectWrongSubprotocolTest)junit.framework.AssertionFailedError Fix code and test to read and write to service.properties with a consistent encoding git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1350361 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/services/monitor/StorageFactoryService.java", "hunks": [ { "added": [ "import java.io.InputStreamReader;" ], "header": "@@ -47,6 +47,7 @@ import java.io.FileReader;", "removed": [] }, { "added": [ " // The eof token should match the ISO-8859-1 encoding ", " // of the rest of the properties file written with store.", " new OutputStreamWriter(os,\"ISO-8859-1\"));" ], "header": "@@ -359,8 +360,10 @@ final class StorageFactoryService implements PersistentService", "removed": [ " new OutputStreamWriter(os));" ] } ] } ]
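The encoding fix above comes down to always pairing the writer and reader charsets instead of relying on the platform default, which differs between z/OS (EBCDIC-based) and most other platforms. A minimal round-trip sketch (not Derby's code; class and method names are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;

/** Sketch of the DERBY-5424 principle: write and read with one explicit charset. */
public class PropsRoundTrip {
    static final String CHARSET = "ISO-8859-1";

    public static byte[] write(String text) {
        try {
            ByteArrayOutputStream os = new ByteArrayOutputStream();
            // Explicit charset, never the platform default.
            Writer w = new OutputStreamWriter(os, CHARSET);
            w.write(text);
            w.close();
            return os.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static String read(byte[] data) {
        try {
            // Read back with the same charset the writer used.
            Reader r = new InputStreamReader(new ByteArrayInputStream(data), CHARSET);
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = r.read()) != -1) {
                sb.append((char) c);
            }
            r.close();
            return sb.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because both ends name ISO-8859-1 explicitly, the round trip is byte-identical on every platform.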
derby-DERBY-5425-91f376cc
DERBY-5425: Updateable holdable ResultSet terminates early after 65638 updates Use a BackingStoreHashtable to store rows updated in such a way that they may be seen again later in the scan. The BackingStoreHashtable is made holdable if the ResultSet is holdable so that rows are not lost on commit, as they were before. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1359052 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/execute/CurrentOfResultSet.java", "hunks": [ { "added": [], "header": "@@ -30,15 +30,12 @@ import org.apache.derby.iapi.sql.execute.ExecRow;", "removed": [ "import org.apache.derby.iapi.sql.ResultSet;", "import org.apache.derby.iapi.sql.PreparedStatement;", "import org.apache.derby.iapi.sql.depend.DependencyManager;" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/IndexRowToBaseRowResultSet.java", "hunks": [ { "added": [], "header": "@@ -79,9 +79,6 @@ class IndexRowToBaseRowResultSet extends NoPutResultSetImpl", "removed": [ "\tprotected boolean currentRowPrescanned;", "\tprivate boolean sourceIsForUpdateIndexScan;", "" ] }, { "added": [], "header": "@@ -200,9 +197,6 @@ class IndexRowToBaseRowResultSet extends NoPutResultSetImpl", "removed": [ "\t\tif ((source instanceof TableScanResultSet) && ", "\t\t\t((TableScanResultSet) source).indexCols != null)", "\t\t\tsourceIsForUpdateIndexScan = true;" ] }, { "added": [], "header": "@@ -314,55 +308,6 @@ class IndexRowToBaseRowResultSet extends NoPutResultSetImpl", "removed": [ "\t\t/* beetle 3865, updateable cursor using index. When in-memory hash table was full, we", "\t\t * read forward and saved future row id's in a virtual-memory-like temp table. So if", "\t\t * we have rid's saved, and we are here, it must be non-covering index. Intercept it", "\t\t * here, so that we don't have to go to underlying index scan. 
We get both heap cols", "\t\t * and index cols together here for better performance.", "\t\t */", "\t\tif (sourceIsForUpdateIndexScan && ((TableScanResultSet) source).futureForUpdateRows != null)", "\t\t{", "\t\t\tcurrentRowPrescanned = false;", "\t\t\tTableScanResultSet src = (TableScanResultSet) source;", "", "\t\t\tif (src.futureRowResultSet == null)", "\t\t\t{", "\t\t\t\tsrc.futureRowResultSet = (TemporaryRowHolderResultSet) src.futureForUpdateRows.getResultSet();", "\t\t\t\tsrc.futureRowResultSet.openCore();", "\t\t\t}", "", "\t\t\tExecRow ridRow = src.futureRowResultSet.getNextRowCore();", "", "\t\t\tcurrentRow = null;", "", "\t\t\tif (ridRow != null)", "\t\t\t{", "\t\t\t\t/* To maximize performance, we only use virtual memory style heap, no", "\t\t\t\t * position index is ever created. And we save and retrieve rows from the", "\t\t\t\t * in-memory part of the heap as much as possible. We can also insert after", "\t\t\t\t * we start retrieving, the assumption is that we delete the current row right", "\t\t\t\t * after we retrieve it.", "\t\t\t\t */", "\t\t\t\tsrc.futureRowResultSet.deleteCurrentRow();", "\t\t\t\tbaseRowLocation = (RowLocation) ridRow.getColumn(1);", " \tbaseCC.fetch(", " \t baseRowLocation, compactRow.getRowArray(), accessedAllCols);", "", "\t\t\t\tcurrentRow = compactRow;", "\t\t\t\tcurrentRowPrescanned = true;", "\t\t\t}", "\t\t\telse if (src.sourceDrained)", "\t\t\t\tcurrentRowPrescanned = true;", "", "\t\t\tif (currentRowPrescanned)", "\t\t\t{", "\t\t\t\tsetCurrentRow(currentRow);", "", "\t\t\t\tnextTime += getElapsedMillis(beginTime);", "\t \t \t\treturn currentRow;", "\t\t\t}", "\t\t}", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/TableScanResultSet.java", "hunks": [ { "added": [], "header": "@@ -21,13 +21,11 @@", "removed": [ "import java.util.Hashtable;", "import org.apache.derby.iapi.services.io.FormatableBitSet;" ] }, { "added": [ "import org.apache.derby.iapi.store.access.BackingStoreHashtable;" ], "header": 
"@@ -35,7 +33,7 @@ import org.apache.derby.iapi.sql.execute.CursorResultSet;", "removed": [ "import org.apache.derby.iapi.sql.execute.TemporaryRowHolder;" ] }, { "added": [ " /**", " * This field is used by beetle 3865, updateable cursor using index. It", " * is a hash table containing updated rows that are thrown into future", " * direction of the index scan, and as a result we'll hit it again but", " * should skip it. The hash table will spill to disk if it grows too big", " * to be kept in memory.", " */", " protected BackingStoreHashtable past2FutureTbl;" ], "header": "@@ -98,22 +96,14 @@ class TableScanResultSet extends ScanResultSet", "removed": [ "\t/* Following fields are used by beetle 3865, updateable cursor using index. \"past2FutureTbl\"", "\t * is a hash table containing updated rows that are thrown into future direction of the", "\t * index scan and as a result we'll hit it again but should skip it. If this hash table", "\t * is full, we scan forward and have a virtual memory style temp heap holding future row", "\t * id's.", "\t */", "\tprotected Hashtable past2FutureTbl;", "\tprotected TemporaryRowHolder futureForUpdateRows; //tmp table for materialized rids", "\tprotected TemporaryRowHolderResultSet futureRowResultSet;\t//result set for reading from above", "\tprotected boolean skipFutureRowHolder;\t\t//skip reading rows from above", "\tprotected boolean sourceDrained;\t\t\t//all row ids materialized", "\tprotected boolean currentRowPrescanned;\t//got a row from above tmp table", "\tprotected boolean compareToLastKey;\t\t//see comments in UpdateResultSet", "\tprotected ExecRow lastCursorKey;", "\tprivate ExecRow sparseRow;\t\t\t\t//sparse row in heap column order", "\tprivate FormatableBitSet sparseRowMap;\t\t\t//which columns to read" ] }, { "added": [], "header": "@@ -462,35 +452,6 @@ class TableScanResultSet extends ScanResultSet", "removed": [ "\t/**", " * Check and make sure sparse heap row and accessed bit map are created.", "\t * beetle 
3865, update cursor using index.", "\t *", "\t * @exception StandardException thrown on failure", "\t */", "\tprivate void getSparseRowAndMap() throws StandardException", "\t{", "\t\tint numCols = 1, colPos;", "\t\tfor (int i = 0; i < indexCols.length; i++)", "\t\t{", "\t\t\tcolPos = (indexCols[i] > 0) ? indexCols[i] : -indexCols[i];", "\t\t\tif (colPos > numCols)", "\t\t\t\tnumCols = colPos;", "\t\t}", "\t\tsparseRow = new ValueRow(numCols);", "\t\tsparseRowMap = new FormatableBitSet(numCols);", "\t\tfor (int i = 0; i < indexCols.length; i++)", "\t\t{", "\t\t\tif (accessedCols.get(i))", "\t\t\t{", "\t\t\t\tcolPos = (indexCols[i] > 0) ? indexCols[i] : -indexCols[i];", "\t\t\t\tsparseRow.setColumn(colPos, candidate.getColumn(i + 1));", "\t\t\t\tsparseRowMap.set(colPos - 1);", "\t\t\t}", "\t\t}", "\t}", "\t\t", "" ] }, { "added": [], "header": "@@ -510,59 +471,6 @@ class TableScanResultSet extends ScanResultSet", "removed": [ "\t\t/* beetle 3865, updateable cursor using index. We first saved updated rows with new value", "\t\t * falling into future direction of index scan in hash table, if it's full, we scanned", "\t\t * forward and saved future row ids in a virtual mem heap.", "\t\t */", "\t\tif (futureForUpdateRows != null)", "\t\t{", "\t\t\tcurrentRowPrescanned = false;", "\t\t\tif (! skipFutureRowHolder)", "\t\t\t{", "\t\t\t\tif (futureRowResultSet == null)", "\t\t\t\t{", "\t\t\t\t\tfutureRowResultSet = (TemporaryRowHolderResultSet) futureForUpdateRows.getResultSet();", "\t\t\t\t\tfutureRowResultSet.openCore();", "\t\t\t\t}", "", "\t\t\t\tExecRow ridRow = futureRowResultSet.getNextRowCore();", "", "\t\t\t\tif (ridRow != null)", "\t\t\t\t{", "\t\t\t\t\t/* to boost performance, we used virtual mem heap, and we can insert after", "\t\t\t\t\t * we start retrieving results. 
The assumption is to", "\t\t\t\t\t * delete current row right after we retrieve it.", "\t\t\t\t\t */", "\t\t\t\t\tfutureRowResultSet.deleteCurrentRow();", "\t\t\t\t\tRowLocation rl = (RowLocation) ridRow.getColumn(1);", "\t\t\t\t\tConglomerateController baseCC = activation.getHeapConglomerateController();", "\t\t\t\t\tif (sparseRow == null)", "\t\t\t\t\t\tgetSparseRowAndMap();", " \t \tbaseCC.fetch(", " \t \t \t rl, sparseRow.getRowArray(), sparseRowMap);", " RowLocation rl2 = (RowLocation) rl.cloneValue(false);", "\t\t\t\t\tcurrentRow.setColumn(currentRow.nColumns(), rl2);", "\t\t\t\t\tcandidate.setColumn(candidate.nColumns(), rl2);\t\t// have to be consistent!", "", "\t\t\t\t\tresult = currentRow;", "\t\t\t\t\tcurrentRowPrescanned = true;", "\t\t\t\t}", "\t\t\t\telse if (sourceDrained)", "\t\t\t\t{", "\t\t\t\t\tcurrentRowPrescanned = true;", "\t\t\t\t\tcurrentRow = null;", "\t\t\t\t}", "", "\t\t\t\tif (currentRowPrescanned)", "\t\t\t\t{", "\t\t\t\t\tsetCurrentRow(result);", "", "\t\t\t\t\tnextTime += getElapsedMillis(beginTime);", "\t \t\t \t\treturn result;", "\t\t\t\t}", "\t\t\t}", "\t\t}", "" ] }, { "added": [ "", " if (past2FutureTbl != null)", " {", " past2FutureTbl.close();", " }" ], "header": "@@ -707,8 +615,11 @@ class TableScanResultSet extends ScanResultSet", "removed": [ "\t\t\tif (futureRowResultSet != null)", "\t\t\t\tfutureRowResultSet.close();" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/UpdateResultSet.java", "hunks": [ { "added": [], "header": "@@ -21,7 +21,6 @@", "removed": [ "import java.util.Hashtable;" ] }, { "added": [ "import org.apache.derby.iapi.store.access.BackingStoreHashtable;" ], "header": "@@ -34,12 +33,12 @@ import org.apache.derby.iapi.services.sanity.SanityManager;", "removed": [ "import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;" ] }, { "added": [ "\t\tboolean notifyCursor = (tableScan != null);" ], "header": "@@ -449,7 +448,7 @@ class UpdateResultSet extends DMLWriteResultSet", "removed": [ 
"\t\tboolean notifyCursor = ((tableScan != null) && ! tableScan.sourceDrained);" ] }, { "added": [ "\t * (for fast search), so that when the cursor hits it again, it knows to skip it." ], "header": "@@ -592,13 +591,7 @@ class UpdateResultSet extends DMLWriteResultSet", "removed": [ "\t * (for fast search), so that when the cursor hits it again, it knows to skip it. When we get", "\t * to a point that the hash table is full, we scan forward the cursor until one of two things", "\t * happen: (1) we hit a record whose rid is in the hash table (we went through it already, so", "\t * skip it), we remove it from hash table, so that we can continue to use hash table. OR, (2) the scan", "\t * forward hit the end. If (2) happens, we can de-reference the hash table to make it available", "\t * for garbage collection. We save the future row id's in a virtual mem heap. In any case,", "\t * next read will use a row id that we saved." ] }, { "added": [ "\t\t\t\t\tDataValueDescriptor key = row[k];" ], "header": "@@ -638,16 +631,7 @@ class UpdateResultSet extends DMLWriteResultSet", "removed": [ "\t\t\t\t\tDataValueDescriptor key;", "\t\t\t\t\t/* We need to compare with saved most-forward cursor scan key if we", "\t\t\t\t\t * are reading records from the saved RowLocation temp table (instead", "\t\t\t\t\t * of the old column value) because we only care if new update value", "\t\t\t\t\t * jumps forward the most-forward scan key.", "\t\t\t\t\t */", "\t\t\t\t\tif (tableScan.compareToLastKey)", "\t\t\t\t\t\tkey = tableScan.lastCursorKey.getColumn(i + 1);", "\t\t\t\t\telse", "\t\t\t\t\t\tkey = row[k];" ] } ] } ]
derby-DERBY-5426-68c72e70
DERBY-5426: Improve error message for too much contention on a sequence/identity. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1174290 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/catalog/SequenceUpdater.java", "hunks": [ { "added": [ " // ABSTRACT OR OVERRIDABLE BEHAVIOR TO BE IMPLEMENTED BY CHILDREN" ], "header": "@@ -155,7 +155,7 @@ public abstract class SequenceUpdater implements Cacheable", "removed": [ " // ABSTRACT BEHAVIOR TO BE IMPLEMENTED BY CHILDREN" ] }, { "added": [ " /**", " * <p>", " * Create an exception to state that there is too much contention on the generator.", " * For backward compatibility reasons, different messages are needed by sequences", " * and identities. See DERBY-5426.", " * </p>", " */", " protected StandardException tooMuchContentionException()", " {", " return StandardException.newException", " ( SQLState.LANG_TOO_MUCH_CONTENTION_ON_SEQUENCE, _sequenceGenerator.getName() );", " }", " " ], "header": "@@ -187,6 +187,19 @@ public abstract class SequenceUpdater implements Cacheable", "removed": [] }, { "added": [ "", " // We try to get a sequence number. We try until we've exceeded the lock timeout", " // in case we find ourselves in a race with another session which is draining numbers from" ], "header": "@@ -322,10 +335,10 @@ public abstract class SequenceUpdater implements Cacheable", "removed": [ " ", " // We try to get a sequence number. We try a couple times in case we find", " // ourselves in a race with another session which is draining numbers from" ] }, { "added": [ " //", " // If we get here, then we exhausted our retry attempts. This might be a sign", " // that we need to increase the number of sequence numbers which we", " // allocate. There's an opportunity for Derby to tune itself here.", " //", " throw tooMuchContentionException();" ], "header": "@@ -377,18 +390,16 @@ public abstract class SequenceUpdater implements Cacheable", "removed": [ " break;", " //", " // If we get here, then we exhausted our retry attempts. This might be a sign", " // that we need to increase the number of sequence numbers which we", " // allocate. 
There's an opportunity for Derby to tune itself here.", " //", " throw StandardException.newException", " ( SQLState.LANG_TOO_MUCH_CONTENTION_ON_SEQUENCE, _sequenceGenerator.getName() );" ] } ] } ]
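The DERBY-5426 fix above replaces a fixed retry count with a loop bounded by the lock timeout. A minimal, self-contained sketch of that pattern follows; the class name, the injected suppliers, and tryOnce are hypothetical stand-ins for illustration, not Derby's actual SequenceUpdater code:

```java
/** Hypothetical sketch: keep retrying an operation until it succeeds or the
 *  elapsed time exceeds a timeout, instead of using a fixed attempt count.
 *  tryOnce models one attempt to grab a sequence number; the clock is
 *  injected so the loop can be tested deterministically. */
public class RetryUntilTimeout {
    public static boolean retry(java.util.function.BooleanSupplier tryOnce,
                                long timeoutMillis,
                                java.util.function.LongSupplier clock) {
        long start = clock.getAsLong();
        while (clock.getAsLong() - start < timeoutMillis) {
            if (tryOnce.getAsBoolean()) {
                return true; // got a value before the timeout
            }
        }
        // Retries exhausted; the caller raises the "too much contention" error.
        return false;
    }
}
```

Injecting the clock keeps the sketch deterministic; the real code compares elapsed wall-clock time against the configured lock timeout.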
derby-DERBY-5426-db26d0a6
DERBY-5445 (Enhance existing concurrency test to stress sequence generators to also stress identity columns) DERBY-4565 added a concurrency test to stress sequence generation. I am making simple modifications to that test to add identity column stress testing. Based on a command line parameter, the test will either do sequence generation testing or identity column testing. If no parameter is specified, it will default to doing sequence generation testing. The test already takes a number of parameters. One of those parameters is the load options parameter. The load options parameter is indicated by -load_opts on the command line and is followed by a comma-separated list of sub-parameters. An example of a load options parameter is as follows -load_opts debugging=1,numberOfGenerators=5,tablesPerGenerator=10,insertsPerTransaction=100 I am adding another pair to the comma-separated sub-parameters, namely identityTest=aNumber. If identityTest is 1, then the test will do identity column stress testing. For any other value of identityTest, the test will do sequence generation testing. If the user doesn't specify identityTest in load options, the test will perform sequence generation testing. 
E.g., asking the test to do identity column testing: java org.apache.derbyTesting.perf.clients.Runner -driver org.apache.derby.jdbc.EmbeddedDriver -init -load seq_gen -load_opts debugging=1,numberOfGenerators=5,tablesPerGenerator=10,insertsPerTransaction=100,identityTest=1 -gen b2b -threads 10 Two possible ways of asking the test to do sequence generation testing (identityTest set to a value other than 1, or identityTest not specified): java org.apache.derbyTesting.perf.clients.Runner -driver org.apache.derby.jdbc.EmbeddedDriver -init -load seq_gen -load_opts debugging=1,numberOfGenerators=5,tablesPerGenerator=10,insertsPerTransaction=100,identityTest=2 -gen b2b -threads 10 OR java org.apache.derbyTesting.perf.clients.Runner -driver org.apache.derby.jdbc.EmbeddedDriver -init -load seq_gen -load_opts debugging=1,numberOfGenerators=5,tablesPerGenerator=10,insertsPerTransaction=100 -gen b2b -threads 10 When I run the test for identity columns, I can consistently see it running into a Derby lock timeout with a nested sequence contention error while trying to get the current identity value and advance (this is what we want to achieve from the test, i.e. that it is able to stress the functionality enough to run into contention while trying to get the next range for identity columns). Additionally, there are some lock timeout errors raised by the store while trying to update the system catalog (this is expected too, because of multiple threads simultaneously trying to do inserts into a table with an identity column). I also reverted my codeline to the state before DERBY-5426 (DERBY-5426 is Improve the error raised by too much contention on a sequence/identity.) was fixed and saw sequence contention errors (without the lock timeout error encapsulation). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1179374 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/perf/clients/Runner.java", "hunks": [ { "added": [ " System.out.print(\"initializing database for \");", " System.out.println((Runner.getLoadOpt( \"identityTest\", 0 ) == 1)?", " \t\t\t\t\"identity column testing...\":", " \t\t\t\t\"sequence generator testing...\");" ], "header": "@@ -93,7 +93,10 @@ public class Runner {", "removed": [ " System.out.println(\"initializing database...\");" ] }, { "added": [ "\" - identityTest: 1 means do identity column testing, any other number \\n\" +", "\" means do sequence generator testing. If no identityTest is specified \\n\" +", "\" then sequence generator testing will be done by default \\n\" +" ], "header": "@@ -250,6 +253,9 @@ public class Runner {", "removed": [] } ] }, { "file": "java/testing/org/apache/derbyTesting/perf/clients/SequenceGeneratorConcurrency.java", "hunks": [ { "added": [ " * Machinery to test the concurrency of sequence/identity generators." ], "header": "@@ -32,7 +32,7 @@ import java.util.Random;", "removed": [ " * Machinery to test the concurrency of sequence generators." 
] }, { "added": [ " private boolean _runIdentityTest;" ], "header": "@@ -53,6 +53,7 @@ public class SequenceGeneratorConcurrency", "removed": [] }, { "added": [ " //If no identityTest is specified, then do sequence testing.", " _runIdentityTest = ( Runner.getLoadOpt( \"identityTest\", 0 ) == 1);" ], "header": "@@ -60,6 +61,8 @@ public class SequenceGeneratorConcurrency", "removed": [] }, { "added": [ " /** Return whether we are doing identity column testing */", " public boolean runIdentityTest() { return _runIdentityTest; }", "" ], "header": "@@ -75,6 +78,9 @@ public class SequenceGeneratorConcurrency", "removed": [] }, { "added": [ " buffer.append( \", identityTest = \" + _runIdentityTest );" ], "header": "@@ -83,6 +89,7 @@ public class SequenceGeneratorConcurrency", "removed": [] }, { "added": [ " boolean runIdentityTest = _loadOptions.runIdentityTest();", " \tif (!runIdentityTest)", " runDDL( conn, \"create sequence \" + makeSequenceName( sequence ) );", " \tif (runIdentityTest)", " runDDL( conn, \"create table \" + makeTableName( sequence, table ) + \"( a int, b int generated always as identity)\" );", " \telse", " runDDL( conn, \"create table \" + makeTableName( sequence, table ) + \"( a int )\" );" ], "header": "@@ -115,14 +122,19 @@ public class SequenceGeneratorConcurrency", "removed": [ " runDDL( conn, \"create sequence \" + makeSequenceName( sequence ) );", " runDDL( conn, \"create table \" + makeTableName( sequence, table ) + \"( a int )\" );" ] }, { "added": [ " boolean runIdentityTest = _loadOptions.runIdentityTest();" ], "header": "@@ -183,6 +195,7 @@ public class SequenceGeneratorConcurrency", "removed": [] }, { "added": [ " if ( table == 0 ){", " \tif(!runIdentityTest) ", " \tps = prepareStatement( _conn, debugging, valuesClause );", " \telse", " \tps = prepareStatement( _conn, debugging, \"values (1)\" );", " }", " else { ", " \tif(!runIdentityTest) ", " ps = prepareStatement( _conn, debugging, \"insert into \" + tableName + \"( a ) \" + 
valuesClause ); ", " \telse", " \tps = prepareStatement( _conn, debugging, \"insert into \" + tableName + \"( a ) values(1)\"); ", " \t}" ], "header": "@@ -195,8 +208,18 @@ public class SequenceGeneratorConcurrency", "removed": [ " if ( table == 0 ) { ps = prepareStatement( _conn, debugging, valuesClause ); }", " else { ps = prepareStatement( _conn, debugging, \"insert into \" + tableName + \"( a ) \" + valuesClause ); }" ] } ] } ]
derby-DERBY-5438-04c92ef6
DERBY-5438: Empty MAPS table in toursdb - Let insertMaps.main() throw exceptions instead of just printing them to make errors stop the build - Change URL in insertMaps to match the URL in build.xml - Change directory for the process running insertMaps so that it finds the gif files git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1177589 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/demo/toursdb/insertMaps.java", "hunks": [ { "added": [ "\tpublic static final String dbURLCS = new String(\"jdbc:derby:toursdb\");", "\t\tSystem.out.println(\"Loading the Derby jdbc driver...\");", "\t\tClass.forName(CSdriver).newInstance();", "\t\tSystem.out.println(\"Getting Derby database connection...\");", "\t\tConnection connCS = DriverManager.getConnection(dbURLCS);", "\t\tSystem.out.println(\"Successfully got the Derby database connection...\");", "\t\tSystem.out.println(\"Inserted \" + insertRows(null, connCS) +", "\t\t\t\t\t\t \" rows into the ToursDB\");", "\t\tconnCS.close();" ], "header": "@@ -37,32 +37,21 @@ import java.sql.SQLException;", "removed": [ "\tpublic static final String dbURLCS = new String(\"jdbc:derby:toursDB\");", "\t\ttry {", "\t\t\tConnection connCS = null;", "", "\t\t\tSystem.out.println(\"Loading the Derby jdbc driver...\");", "\t\t\tClass.forName(CSdriver).newInstance();", "\t\t\tSystem.out.println(\"Getting Derby database connection...\");", "\t\t\tconnCS = DriverManager.getConnection(dbURLCS);", "\t\t\tSystem.out.println(\"Successfully got the Derby database connection...\");", "", "\t\t\tSystem.out.println(\"Inserted \" + insertRows(null,connCS) + \" rows into the ToursDB\");", "", "\t\t\tconnCS.close();", "\t\t} catch (SQLException e) {", "\t\t\tSystem.out.println (\"FAIL -- unexpected exception: \" + e.toString());", "\t\t\te.printStackTrace();", "\t\t} catch (Exception e) {", "\t\t\tSystem.out.println (\"FAIL -- unexpected exception: \" + e.toString());", "\t\t\te.printStackTrace();", "\t\t}" ] } ] } ]
derby-DERBY-5440-910eb101
DERBY-5440: test failure in testBTreeForwardScan_fetchRows_resumeAfterWait_nonUnique_split(org.apache.derbyTesting.functionTests.tests.store.IndexSplitDeadlockTest)junit.framework.AssertionFailedError: expected:<1> but was:<0> git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1181756 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5444-859720d8
DERBY-5444: SpawnedProcess.complete may fail to destroy the process when a timeout is specified Rewrote loop logic to ensure that the process is destroyed when a timeout is specified and exceeded. Patch file: derby-5444-1c-destroy_on_timeout.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1179546 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/SpawnedProcess.java", "hunks": [ { "added": [ " final long start = System.currentTimeMillis();", " boolean timedOut = true;", " exitCode = javaProcess.exitValue();", " //if no exception thrown, exited normally", " destroy = timedOut = false;", " break;", " // Ignore exception, it means that the process is running.", " Thread.sleep(1000);", " totalwait = System.currentTimeMillis() - start;", " // If we timed out, make sure we try to destroy the process.", " if (timedOut) {", " destroy = true;", " }" ], "header": "@@ -158,23 +158,25 @@ public final class SpawnedProcess {", "removed": [ " exitCode = javaProcess.exitValue();", " //if no exception thrown, exited normally", " destroy = false;", " break;", " if (totalwait >= timeout) {", " destroy = true;", " break;", " } else {", " totalwait += 1000;", " Thread.sleep(1000);", " }" ] } ] } ]
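The rewritten SpawnedProcess loop in the diff above polls for an exit value and guarantees the process is destroyed when the timeout is exceeded. A hedged, pure-logic model of that control flow (class and parameter names are made up; a null from the supplier stands in for the IllegalThreadStateException that Process.exitValue() throws while the process is still running):

```java
/** Hypothetical model of a poll-until-timeout loop: return the exit code if
 *  the process finishes in time, otherwise run the destroy action and return
 *  null. The original bug was that the destroy step could be skipped. */
public class CompleteLoop {
    public static Integer complete(java.util.function.Supplier<Integer> exitValue,
                                   long timeoutMillis,
                                   java.util.function.LongSupplier clock,
                                   Runnable destroy) {
        final long start = clock.getAsLong();
        boolean timedOut = true;
        Integer exitCode = null;
        long waited = 0;
        while (waited < timeoutMillis) {
            exitCode = exitValue.get();      // null: process still running
            if (exitCode != null) { timedOut = false; break; }
            waited = clock.getAsLong() - start;
        }
        if (timedOut) { destroy.run(); }     // the fix: always destroy on timeout
        return exitCode;
    }
}
```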
derby-DERBY-5447-b8339336
DERBY-5447: Deadlock in AutomaticIndexStatisticsTest.testShutdownWhileScanningThenDelete (BasePage.releaseExclusive and Observable.deleteObserver (BaseContainerHandle)) Clean the daemon context only after the running worker thread (if any) has finished, to avoid a Java deadlock when closing the container handles obtained with the context. The deadlock is intermittent, but can easily be reproduced, and involves synchronization in BasePage and in java.util.Observable. Patch file: derby-5447-2a-change_istat_shutdown.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1180790 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/services/daemon/IndexStatisticsDaemonImpl.java", "hunks": [ { "added": [ " // Controls execution of last cleanup step outside of the synchronized", " // block. Should only be done once, and this is ensured by the guard on", " // 'queue' and the value of 'daemonDisabled'.", " boolean clearContext = false;", " clearContext = true;" ], "header": "@@ -886,9 +886,14 @@ public class IndexStatisticsDaemonImpl", "removed": [] }, { "added": [], "header": "@@ -913,12 +918,7 @@ public class IndexStatisticsDaemonImpl", "removed": [ " // DERBY-5336: Trigger cleanup code to remove the context", " // from the context service. This pattern was", " // copied from BasicDaemon.", " ctxMgr.cleanupOnError(StandardException.normalClose(), false);", "" ] }, { "added": [ "", " // DERBY-5447: Remove the context only after the running daemon thread", " // (if any) has been shut down to avoid Java deadlocks", " // when closing the container handles obtained with this", " // context.", " if (clearContext) {", " // DERBY-5336: Trigger cleanup code to remove the context", " // from the context service. This pattern was", " // copied from BasicDaemon.", " ctxMgr.cleanupOnError(StandardException.normalClose(), false);", " }" ], "header": "@@ -935,6 +935,17 @@ public class IndexStatisticsDaemonImpl", "removed": [] } ] } ]
derby-DERBY-5449-90e6ec94
DERBY-5449: 10.8 client with 10.5 server gives ClassCastException Convert BOOLEAN parameter values to SMALLINT (and not only the parameter meta-data) when talking to old servers that don't understand BOOLEAN. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1186020 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5453-5250aca2
DERBY-5453: Remove unused methods in Cursor and NetCursor git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1181713 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/client/org/apache/derby/client/am/Cursor.java", "hunks": [ { "added": [], "header": "@@ -126,12 +126,6 @@ public abstract class Cursor {", "removed": [ " public Cursor(Agent agent, byte[] dataBuffer) {", " this(agent);", " dataBuffer_ = dataBuffer;", " setAllRowsReceivedFromServer(false);", " }", "" ] }, { "added": [], "header": "@@ -263,11 +257,6 @@ public abstract class Cursor {", "removed": [ " // Associate a new underlying COM or SQLDA output data buffer for this converter.", " public final void setBuffer(byte[] dataBuffer) {", " dataBuffer_ = dataBuffer;", " }", "" ] }, { "added": [], "header": "@@ -311,18 +300,6 @@ public abstract class Cursor {", "removed": [ " final int getPosition() {", " return position_;", " }", "", " final void setPosition(int newPosition) {", " position_ = newPosition;", " }", "", " public final void markCurrentRowPosition() {", " currentRowPosition_ = position_;", " }", "" ] }, { "added": [], "header": "@@ -331,26 +308,6 @@ public abstract class Cursor {", "removed": [ " final void repositionCursorToCurrentRow() {", " position_ = currentRowPosition_;", " }", "", " final void repositionCursorToNextRow() {", " position_ = nextRowPosition_;", " }", "", " public final byte[] getDataBuffer() {", " return dataBuffer_;", " }", "", " public final int getDataBufferLength() {", " return dataBuffer_.length;", " }", "", " public final int getLastValidBytePosition() {", " return lastValidBytePosition_;", " }", "" ] }, { "added": [], "header": "@@ -764,9 +721,6 @@ public abstract class Cursor {", "removed": [ " // get the raw clob bytes, without translation. 
dataOffset must be int[1]", " abstract public byte[] getClobBytes_(int column, int[] dataOffset /*output*/) throws SqlException;", "" ] } ] }, { "file": "java/client/org/apache/derby/client/net/NetCursor.java", "hunks": [ { "added": [], "header": "@@ -386,32 +386,6 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {", "removed": [ " protected boolean isDataBufferNull() {", " if (dataBuffer_ == null) {", " return true;", " } else {", " return false;", " }", " }", "", " protected void allocateDataBuffer() {", " int length;", " if (maximumRowSize_ > DssConstants.MAX_DSS_LEN) {", " length = maximumRowSize_;", " } else {", " length = DssConstants.MAX_DSS_LEN;", " }", "", " dataBuffer_ = new byte[length];", " position_ = 0;", " lastValidBytePosition_ = 0;", " }", "", " protected void allocateDataBuffer(int length) {", " dataBuffer_ = new byte[length];", " }", "", "" ] }, { "added": [], "header": "@@ -899,14 +873,6 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {", "removed": [ " void setBlocking(int queryProtocolType) {", " if (queryProtocolType == CodePoint.LMTBLKPRC) {", " blocking_ = true;", " } else {", " blocking_ = false;", " }", " }", "" ] }, { "added": [], "header": "@@ -1029,26 +995,6 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {", "removed": [ " public byte[] getClobBytes_(int column, int[] dataOffset /*output*/) throws SqlException {", " int index = column - 1;", " byte[] data = null;", "", " // locate the EXTDTA bytes, if any", " data = findExtdtaData(column);", "", " if (data != null) {", " // data found", " // set data offset based on the presence of a null indicator", " if (!nullable_[index]) {", " dataOffset[0] = 0;", " } else {", " dataOffset[0] = 1;", " }", " }", "", " return data;", " }", "" ] }, { "added": [], "header": "@@ -1062,47 +1008,6 @@ public class NetCursor extends org.apache.derby.client.am.Cursor {", "removed": [ "", " int ensureSpaceForDataBuffer(int ddmLength) {", " if 
(dataBuffer_ == null) {", " allocateDataBuffer();", " }", " //super.resultSet.cursor.clearColumnDataOffsetsCache();", " // Need to know how many bytes to ask from the Reply object,", " // and handle the case where buffer is not big enough for all the bytes.", " // Get the length in front of the code point first.", "", " int bytesAvailableInDataBuffer = dataBuffer_.length - lastValidBytePosition_;", "", " // Make sure the buffer has at least ddmLength amount of room left.", " // If not, expand the buffer before calling the getQrydtaData() method.", " if (bytesAvailableInDataBuffer < ddmLength) {", "", " // Get a new buffer that is twice the size of the current buffer.", " // Copy the contents from the old buffer to the new buffer.", " int newBufferSize = 2 * dataBuffer_.length;", "", " while (newBufferSize < ddmLength) {", " newBufferSize = 2 * newBufferSize;", " }", "", " byte[] tempBuffer = new byte[newBufferSize];", "", " System.arraycopy(dataBuffer_,", " 0,", " tempBuffer,", " 0,", " lastValidBytePosition_);", "", " // Make the new buffer the dataBuffer.", " dataBuffer_ = tempBuffer;", "", " // Recalculate bytesAvailableInDataBuffer", " bytesAvailableInDataBuffer = dataBuffer_.length - lastValidBytePosition_;", " }", " return bytesAvailableInDataBuffer;", " }", "" ] } ] } ]
derby-DERBY-5454-23e8c92e
DERBY-5454: ERROR 40001 deadlock in nstest on select max(serialkey). Replacing the code causing the intermittent deadlock with non-JDBC calls. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1311804 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/system/nstest/utils/DbUtil.java", "hunks": [ { "added": [], "header": "@@ -24,7 +24,6 @@ package org.apache.derbyTesting.system.nstest.utils;", "removed": [ "import java.sql.ResultSet;" ] }, { "added": [ "\t\t\t\t\t.println(\"LOCK TIMEOUT obtained during insert - add_one_row() \"", "\t\t\t\t}", "\t\t\t\telse if (sqe.getSQLState().equalsIgnoreCase(\"23505\")) {", "\t\t\t\t System.out", "\t\t\t\t .println(\"prevented duplicate row - add_one_row(): \"", "\t\t\t\t + sqe.getSQLState() + \"; \" + sqe.getMessage());", "" ], "header": "@@ -198,9 +197,14 @@ public class DbUtil {", "removed": [ "\t\t\t\t\t.println(\"LOCK TIMEOUT obatained during insert - add_one_row() \"", "\t\t\t\t\t" ] }, { "added": [ "\t\t\t\t", "\t\tlong minVal = NsTest.NUM_UNTOUCHED_ROWS + 1;//the max we start with", "\t\t// the max serialkey since it keeps a count of the num of inserts made", "\t\t// so far", "\t\t// value even if the row does not exist (i.e. in a situation where some", "\t\t// other thread has deleted this row).", "\t\t// now get a value between the original max, and the current max ", "\t\tlong rowToReturn = minVal + (long)(rand.nextDouble() * (maxVal - minVal));" ], "header": "@@ -449,61 +453,20 @@ public class DbUtil {", "removed": [ "\t\t", "\t\tPreparedStatement ps = null;", "\t\t// ResultSet rs = null;", "\t\t", "\t\ttry {", "\t\t\t", "\t\t\tps = conn", "\t\t\t.prepareStatement(\"select max(serialkey) from nstesttab where serialkey > ?\");", "\t\t} catch (Exception e) {", "\t\t\tSystem.out", "\t\t\t.println(\"Unexpected error creating the select prepared statement in pick_one()\");", "\t\t\tprintException(\"failure to prepare select stmt in pick_one()\", e);", "\t\t\treturn (0);", "\t\t}", "\t\t", "\t\tlong minVal = NsTest.NUM_UNTOUCHED_ROWS + 1;", "\t\t// long maxVal = nstest.MAX_INITIAL_ROWS * nstest.INIT_THREADS; //the", "\t\t// max we start with", "\t\t// the max serialkey", "\t\t// since it keeps a count of the num of 
inserts made so far", "\t\t// value even if", "\t\t// the row does not exist (i.e. in a situation where some other thread", "\t\t// has deleted this row).", "\t\tlong rowToReturn = (minVal + 1)", "\t\t+ (Math.abs(rand.nextLong()) % (maxVal - minVal));", "\t\ttry {", "\t\t\tps.setLong(1, rowToReturn);", "\t\t\tResultSet rs = ps.executeQuery();", "\t\t\twhile (rs.next()) {", "\t\t\t\tif (rs.getLong(1) > 0) {", "\t\t\t\t\trowToReturn = rs.getLong(1);", "\t\t\t\t\t//System.out", "\t\t\t\t\t//.println(getThreadName()", "\t\t\t\t\t//\t\t+ \" dbutil.pick_one() -> Obtained row from the table \"", "\t\t\t\t\t//\t\t+ rowToReturn);", "\t\t\t\t} else {", "\t\t\t\t\tSystem.out", "\t\t\t\t\t.println(getThreadName()", "\t\t\t\t\t\t\t+ \" dbutil.pick_one() -> Returning random serialkey of \"", "\t\t\t\t\t\t\t+ rowToReturn);", "\t\t\t\t}", "\t\t\t}", "\t\t} catch (SQLException sqe) {", "\t\t\tSystem.out.println(sqe + \" while selecting a random row\");", "\t\t\tsqe.printStackTrace();", "\t\t}", "\t\t", "\t", "\t\t" ] } ] } ]
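The DERBY-5454 replacement computes a candidate serialkey arithmetically instead of querying max(serialkey) over JDBC, which removes the deadlock-prone select. A tiny sketch of that computation (class and method names are made up for illustration; r stands in for Random.nextDouble()):

```java
/** Hypothetical sketch of the non-JDBC row picker: choose a serialkey in
 *  [minVal, maxVal) from the known minimum and the running insert count,
 *  using only arithmetic — no table scan, so no lock contention. */
public class PickRow {
    public static long pick(long minVal, long maxVal, double r) {
        // r is a random value in [0.0, 1.0), e.g. from Random.nextDouble()
        return minVal + (long) (r * (maxVal - minVal));
    }
}
```

The picked row may have been deleted by another thread, which the test tolerates; the commit message notes the count of inserts made so far serves as the maximum.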
derby-DERBY-5456-0086962a
DERBY-5456: Make the network server more diligent about collecting all of the local addresses that it can. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1183463 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java", "hunks": [ { "added": [ " localAddresses = new ArrayList(3);", " localAddresses.add(bindAddr);", " ", " try { localAddresses.add(InetAddress.getLocalHost()); }", " catch(UnknownHostException uhe) { unknownHostException( uhe ); }", " ", " try { localAddresses.add(InetAddress.getByName(\"localhost\")); }", " catch(UnknownHostException uhe) { unknownHostException( uhe ); }", " private void unknownHostException( Throwable t )", " {", " try {", " consolePropertyMessage( \"DRDA_UnknownHostWarning.I\", t.getMessage() );", " } catch (Exception e)", " { ", " // just a warning shouldn't actually throw an exception", " }", " }" ], "header": "@@ -2641,20 +2641,24 @@ public final class NetworkServerControlImpl {", "removed": [ "\t\t\tlocalAddresses = new ArrayList(3);", "\t\t\tlocalAddresses.add(bindAddr);", "\t\t\ttry {", "\t\t\t\tlocalAddresses.add(InetAddress.getLocalHost());", "\t\t\t\tlocalAddresses.add(InetAddress.getByName(\"localhost\"));", "\t\t\t}catch(UnknownHostException uhe)", "\t\t\t{", "\t\t\t\ttry {", "\t\t\t\t\tconsolePropertyMessage(\"DRDA_UnknownHostWarning.I\",uhe.getMessage());", "\t\t\t\t} catch (Exception e)", "\t\t\t\t{ // just a warning shouldn't actually throw an exception", "\t\t\t\t}", "\t\t\t}\t\t\t" ] } ] } ]
derby-DERBY-5459-f9a06892
DERBY-5459 Result set metadata are out of sync on client after underlying table is altered Patch derby-5459-3. We now resend the result set metadata to the client when the cursor is opened if the prepared statement gets recompiled due to it being out of date when the server tries to execute it (DRDAConnThread line 871). To detect this we introduce a version counter which is incremented each time a statement is (re)compiled, and make a note of which version's metadata gets sent to the client as part of the explicit prepare. That version is compared with the current version when we execute to make the decision whether to resend metadata or not. This also fixes DERBY-2402, a duplicate. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1205426 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/drda/org/apache/derby/impl/drda/DRDAConnThread.java", "hunks": [ { "added": [ " database.getCurrentStatement().sqldaType = sqldaType;" ], "header": "@@ -828,6 +828,7 @@ class DRDAConnThread extends Thread {", "removed": [] } ] }, { "file": "java/drda/org/apache/derby/impl/drda/DRDAStatement.java", "hunks": [ { "added": [ " /**", " * If this changes, we need to re-send result set metadata to client, since", " * a change indicates the engine has recompiled the prepared statement.", " */", " long versionCounter;", "", " /**", " * Saved value returned from {@link DRDAConnThread#from", " * parsePRPSQLSTT}. Used to determine if the statment is such that we may", " * need to re-send metadata at execute time, see {@link #versionCounter}.", " */", " int sqldaType;", "" ], "header": "@@ -284,6 +284,19 @@ class DRDAStatement", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/GenericPreparedStatement.java", "hunks": [ { "added": [ " /**", " * Incremented for each (re)compile.", " */", " private long versionCounter;", "" ], "header": "@@ -173,6 +173,11 @@ public class GenericPreparedStatement", "removed": [] } ] } ]
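The DERBY-5459 version-counter scheme can be modeled in a few lines; this is an illustrative sketch only, not the actual DRDAStatement/GenericPreparedStatement code (the field and method names here are hypothetical):

```java
/** Hypothetical model of the recompile-detection check: note which version
 *  of the statement's metadata was sent at prepare time, and resend metadata
 *  at execute time if the engine has recompiled (bumped the version) since. */
public class MetadataVersionCheck {
    private long versionCounter;      // incremented on every (re)compile
    private long versionSentToClient; // recorded when metadata is sent

    public void recompile() { versionCounter++; }
    public void metadataSent() { versionSentToClient = versionCounter; }
    public boolean mustResendMetadata() {
        return versionCounter != versionSentToClient;
    }
}
```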
derby-DERBY-5463-404a2f06
DERBY-5463: Don't alter the value of drdamaint when generating a new release.properties file; commit derby-5463-01-aa-leaveDRDAmaintIDAlone.diff. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1557823 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/build/org/apache/derbyPreBuild/ReleaseProperties.java", "hunks": [ { "added": [ "import java.io.FileInputStream;", "import java.util.Properties;" ], "header": "@@ -22,10 +22,12 @@", "removed": [] }, { "added": [ " private final static String DRDA_MAINT = \"drdamaint\";", " private final static int DRDA_MAINT_ID_DEFAULT = 0;", "" ], "header": "@@ -76,6 +78,9 @@ public class ReleaseProperties extends Task", "removed": [] }, { "added": [ " int drdaMaintID = readDRDAMaintID( target );", " System.out.println( \"XXX ReleaseProperties. drda maint id = \" + drdaMaintID );", " " ], "header": "@@ -138,6 +143,9 @@ public class ReleaseProperties extends Task", "removed": [] }, { "added": [ " propertiesPW.println( DRDA_MAINT + \"=\" + drdaMaintID );" ], "header": "@@ -150,7 +158,7 @@ public class ReleaseProperties extends Task", "removed": [ " propertiesPW.println( \"drdamaint=0\" );" ] }, { "added": [ "", " /**", " * <p>", " * Read the DRDA maintenance id from the existing release properties.", " * Returns 0 if the release properties file doesn't exist.", " * </p>", " */", " private int readDRDAMaintID( File inputFile )", " throws Exception", " {", " if ( !inputFile.exists() ) { return DRDA_MAINT_ID_DEFAULT; }", " ", " Properties releaseProperties = new Properties();", " releaseProperties.load( new FileInputStream( inputFile ) );", "", " String stringValue = releaseProperties.getProperty( DRDA_MAINT );", "", " return Integer.parseInt( stringValue );", " }" ], "header": "@@ -232,6 +240,25 @@ public class ReleaseProperties extends Task", "removed": [] } ] } ]
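The readDRDAMaintID method added in the DERBY-5463 diff falls back to a default when the release properties file does not exist yet. A self-contained sketch of that read-with-default pattern (the class and method names here are hypothetical, and unreadable files are also folded into the default case for robustness):

```java
/** Hypothetical sketch: read an int property from a file, returning a
 *  default when the file is missing or unreadable, so regenerating the
 *  file preserves the previously recorded value instead of resetting it. */
public class PreserveProperty {
    public static int readIntProperty(java.io.File file, String key, int dflt) {
        if (!file.exists()) { return dflt; }
        java.util.Properties props = new java.util.Properties();
        try (java.io.FileInputStream in = new java.io.FileInputStream(file)) {
            props.load(in);
        } catch (java.io.IOException ioe) {
            return dflt; // unreadable file: fall back to the default
        }
        String value = props.getProperty(key);
        return value == null ? dflt : Integer.parseInt(value);
    }
}
```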
derby-DERBY-5466-2782d722
DERBY-5466: Enable the new var_pop(), var_samp(), stddev_pop(), and stddev_samp() aggregates by wiring Scott's aggregators into the Derby bind logic. Commit derby-5466-02-ab-bindLogic.diff. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1694918 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/AggregateNode.java", "hunks": [ { "added": [ "import org.apache.derby.catalog.TypeDescriptor;", "import org.apache.derby.catalog.types.AggregateAliasInfo;" ], "header": "@@ -23,6 +23,8 @@ package\torg.apache.derby.impl.sql.compile;", "removed": [] }, { "added": [ " static final class BuiltinAggDescriptor", " {", " public final String aggName;", " public final String aggClassName;", " public final TypeDescriptor argType;", " public final TypeDescriptor returnType;", "", " public BuiltinAggDescriptor", " (", " String aggName,", " String aggClassName,", " TypeDescriptor argType,", " TypeDescriptor returnType", " )", " {", " this.aggName = aggName;", " this.aggClassName = aggClassName;", " this.argType = argType;", " this.returnType = returnType;", " }", " }", " ", " //", " // Builtin aggregates which implement org.apache.derby.agg.Aggregator.", " //", " private static BuiltinAggDescriptor[] BUILTIN_MODERN_AGGS =", " {", " new BuiltinAggDescriptor", " (", " \"VAR_POP\",", " \"org.apache.derby.impl.sql.execute.VarPAggregator\",", " TypeDescriptor.DOUBLE,", " TypeDescriptor.DOUBLE", " ),", " new BuiltinAggDescriptor", " (", " \"VAR_SAMP\",", " \"org.apache.derby.impl.sql.execute.VarSAggregator\",", " TypeDescriptor.DOUBLE,", " TypeDescriptor.DOUBLE", " ),", " new BuiltinAggDescriptor", " (", " \"STDDEV_POP\",", " \"org.apache.derby.impl.sql.execute.StdDevPAggregator\",", " TypeDescriptor.DOUBLE,", " TypeDescriptor.DOUBLE", " ),", " new BuiltinAggDescriptor", " (", " \"STDDEV_SAMP\",", " \"org.apache.derby.impl.sql.execute.StdDevSAggregator\",", " TypeDescriptor.DOUBLE,", " TypeDescriptor.DOUBLE", " ),", " };", " " ], "header": "@@ -44,6 +46,63 @@ import org.apache.derby.iapi.types.DataTypeDescriptor;", "removed": [] }, { "added": [ " String schemaName = userAggregateName.getSchemaName();", " boolean noSchema = schemaName == null;", " getSchemaDescriptor( schemaName, true ),", " userAggregateName.getTableName(),", 
" noSchema" ], "header": "@@ -299,12 +358,14 @@ class AggregateNode extends UnaryOperatorNode", "removed": [ "", " getSchemaDescriptor( userAggregateName.getSchemaName(), true ),", " userAggregateName.getTableName()" ] }, { "added": [ " //", " // Don't need a privilege check for modern, builtin (system)", " // aggregates. They are tricky. They masquerade as user-defined", " // aggregates because they implement org.apache.derby.agg.Aggregator", " //", " if ( !SchemaDescriptor.STD_SYSTEM_SCHEMA_NAME.equals( ad.getSchemaName() ) )", " {", " getCompilerContext().addRequiredUsagePriv( ad );", " }" ], "header": "@@ -334,7 +395,15 @@ class AggregateNode extends UnaryOperatorNode", "removed": [ " getCompilerContext().addRequiredUsagePriv( ad );" ] }, { "added": [ " static AliasDescriptor resolveAggregate", " ( DataDictionary dd, SchemaDescriptor sd, String rawName, boolean noSchema )", " // first see if this is one of the builtin aggregates which", " // implements the Aggregator interface", " AliasDescriptor ad = resolveBuiltinAggregate( dd, rawName, noSchema );", " if ( ad != null ) { return ad; }", " " ], "header": "@@ -455,10 +524,15 @@ class AggregateNode extends UnaryOperatorNode", "removed": [ " static AliasDescriptor resolveAggregate", " ( DataDictionary dd, SchemaDescriptor sd, String rawName )" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/StaticMethodCallNode.java", "hunks": [ { "added": [ " resolveRoutine( fromList, subqueryList, aggregates, sd, noSchema );" ], "header": "@@ -239,7 +239,7 @@ class StaticMethodCallNode extends MethodCallNode", "removed": [ " resolveRoutine( fromList, subqueryList, aggregates, sd );" ] }, { "added": [ " resolveRoutine(fromList, subqueryList, aggregates, sd, noSchema);" ], "header": "@@ -276,7 +276,7 @@ class StaticMethodCallNode extends MethodCallNode", "removed": [ " resolveRoutine(fromList, subqueryList, aggregates, sd);" ] }, { "added": [ " resolveRoutine(fromList, subqueryList, aggregates, sd, noSchema);" 
], "header": "@@ -301,7 +301,7 @@ class StaticMethodCallNode extends MethodCallNode", "removed": [ " resolveRoutine(fromList, subqueryList, aggregates, sd);" ] }, { "added": [ " resolveRoutine(fromList, subqueryList, aggregates, sd, noSchema);" ], "header": "@@ -313,7 +313,7 @@ class StaticMethodCallNode extends MethodCallNode", "removed": [ " resolveRoutine(fromList, subqueryList, aggregates, sd);" ] }, { "added": [ " List<AggregateNode> aggregates, SchemaDescriptor sd,", " boolean noSchema)" ], "header": "@@ -532,7 +532,8 @@ class StaticMethodCallNode extends MethodCallNode", "removed": [ " List<AggregateNode> aggregates, SchemaDescriptor sd)" ] } ] } ]
derby-DERBY-5466-a37ea514
DERBY-5466: Raise a useful error when someone attempts to use DISTINCT with the statistics aggregates. Commit derby-5466-03-aa-distinctError.diff. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1695154 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/sql/compile/AggregateNode.java", "hunks": [ { "added": [ " boolean noSchema = true;", " noSchema = (userAggregateName.getSchemaName() == null );" ], "header": "@@ -349,8 +349,10 @@ class AggregateNode extends UnaryOperatorNode", "removed": [] }, { "added": [], "header": "@@ -359,7 +361,6 @@ class AggregateNode extends UnaryOperatorNode", "removed": [ " boolean noSchema = schemaName == null;" ] }, { "added": [ " boolean isModernBuiltinAggregate =", " SchemaDescriptor.STD_SYSTEM_SCHEMA_NAME.equals( ad.getSchemaName() );", "", " if ( distinct && isModernBuiltinAggregate )", " {", " throw StandardException.newException( SQLState.LANG_BAD_DISTINCT_AGG );", " }" ], "header": "@@ -388,6 +389,13 @@ class AggregateNode extends UnaryOperatorNode", "removed": [] } ] } ]
derby-DERBY-5468-8e9474f0
DERBY-5468: Assertion failed in AutomaticIndexStatisticsMultiTest.testMTSelectWithDDL (expected:<0> but was:<1>) Improves the error reporting such that the cause of the error is reported. Patch file: derby-5468-1a-error_reporting.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1185187 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5472-028d92e6
DERBY-5472: Speed up MemoryLeakFixesTest.testRepeatedDatabaseCreationWithAutoStats() Reduce the amount of statement compilation and the number of iterations to speed up the test case, while preserving the ability to reproduce the original problem. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1186630 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5475-88b42546
DERBY-5475: Formalize use of old Derby distributions in test Expose the release repository in TestConfiguration. Removed some leftover throws IOException, and added some exception handling to avoid one throws IOException clause. Patch file: derby-5475-4a-less_exceptions_and_expose.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1352498 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/DerbyDistribution.java", "hunks": [ { "added": [ " * Derives the information for a Derby distribution.", " *", " File[] productionJars, File[] testingJars) {" ], "header": "@@ -75,13 +75,13 @@ public class DerbyDistribution", "removed": [ " * @throws IOException if obtaining the canonical path of a file fails", " File[] productionJars, File[] testingJars)", " throws IOException {" ] }, { "added": [ " private static String constructJarClasspath(File[] jars) {", " try {", " sb.append(jars[i].getCanonicalPath());", " } catch (IOException ioe) {", " // Do the next best thing; use absolute path.", " String absPath = jars[i].getAbsolutePath();", " sb.append(absPath);", " BaseTestCase.println(\"obtaining canonical path for \" +", " absPath + \" failed: \" + ioe.getMessage());", " }" ], "header": "@@ -273,13 +273,19 @@ public class DerbyDistribution", "removed": [ " * @throws IOException if obtaining the canonical path of a file fails", " private static String constructJarClasspath(File[] jars)", " throws IOException {", " sb.append(jars[i].getCanonicalPath());" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java", "hunks": [ { "added": [ "import java.io.IOException;" ], "header": "@@ -22,6 +22,7 @@ package org.apache.derbyTesting.junit;", "removed": [] }, { "added": [ " /** Repository of old/previous Derby releases available on the local system. */", " private static ReleaseRepository releaseRepository;" ], "header": "@@ -126,6 +127,8 @@ public final class TestConfiguration {", "removed": [] }, { "added": [ "", " /**", " * Returns the release repository containing old Derby releases available", " * on the local system.", " * <p>", " * <strong>NOTE</strong>: It is your responsibility to keep the repository", " * up to date. 
This usually involves syncing the local Subversion repository", " * of previous Derby releases with the master repository at Apache.", " *", " * @see ReleaseRepository", " */", " public static synchronized ReleaseRepository getReleaseRepository() {", " if (releaseRepository == null) {", " try {", " releaseRepository = ReleaseRepository.getInstance();", " } catch (IOException ioe) {", " BaseTestCase.printStackTrace(ioe);", " Assert.fail(\"failed to initialize the release repository: \" +", " ioe.getMessage());", " }", " }", " return releaseRepository;", " }", "" ], "header": "@@ -215,7 +218,30 @@ public final class TestConfiguration {", "removed": [ " " ] } ] } ]
derby-DERBY-5475-e2f4f825
DERBY-5475: Formalize use of old Derby distributions in tests Added a very simple repository for Derby releases. It is compatible with the existing property used to control where the tests look for old releases (i.e. the upgrade and the compatibility test): derbyTesting.oldReleasePath Added two new classes in the junit test framework directory: o ReleaseRepository The repository, from which you can obtain a list of available Derby distributions that exist on the local machine. o DerbyDistribution Represents an on-disk, JAR-based Derby distribution. The initial repository is very simple, I expect that its functionality may be somewhat extended as tests start using it. Patch file: derby-5475-3a-repository.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1330751 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/ReleaseRepository.java", "hunks": [ { "added": [ "/*", "", " Derby - Class org.apache.derbyTesting.junit.ReleaseRepository", "", " Licensed to the Apache Software Foundation (ASF) under one or more", " contributor license agreements. See the NOTICE file distributed with", " this work for additional information regarding copyright ownership.", " The ASF licenses this file to you under the Apache License, Version 2.0", " (the \"License\"); you may not use this file except in compliance with", " the License. You may obtain a copy of the License at", "", " http://www.apache.org/licenses/LICENSE-2.0", "", " Unless required by applicable law or agreed to in writing, software", " distributed under the License is distributed on an \"AS IS\" BASIS,", " WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.", " See the License for the specific language governing permissions and", " limitations under the License.", "", " */", "package org.apache.derbyTesting.junit;", "", "import java.io.File;", "import java.io.FileFilter;", "import java.io.IOException;", "import java.util.ArrayList;", "import java.util.Collections;", "import java.util.List;", "", "import org.apache.derbyTesting.functionTests.util.PrivilegedFileOpsForTests;", "", "/**", " * A repository for Derby releases.", " * <p>", " * The releases are used by tests, for instance by the upgrade and compatibility", " * tests, to verify characteristics and behavior across Derby releases.", " * <p>", " * This particular repository is rather dumb - it is up to the user to keep the", " * repository content updated. The repository layout is based on the layout of", " * the SVN repository for releases at", " * {@code https://svn.apache.org/repos/asf/db/derby/jars}. This means there will", " * be a directory for each release, where the directory name is the release", " * version. 
Inside this directory, all the distribution JARs can be found.", " * <p>", " * The repository location defaults to {@code $HOME/.derbyTestingReleases} on", " * UNIX-like systems, and to {@code %UserProfile%\\.derbyTestingReleases} on", " * Windows (in Java, both of these maps to the system property 'user.home').", " * The location can be overridden by specifying the system property", " * {@code derbyTesting.oldReleasePath}.", " * <p>", " * If the default location doesn't exist, and the system property", " * {@code derbyTesting.oldReleasePath} is unspecified, it is up to the tests", " * using the release repository to decide if this condition fails the test or", " * not. If the system property is set to a non-existing directory an exception", " * will be thrown when instantiating the repository.", " * <p>", " * The repository is lazily initialized, as there's no reason to incur the", " * initialization cost when running tests that don't require the repository.", " * The disk is inspected only when the repository is instantiated, any updates", " * to the on-disk repository after the repository has been instantiated will", " * not take effect.", " * <p>", " * <em>Implementation note</em>: This code should be runnable with J2ME, which", " * means that it needs to be compatible with J2SE 1.4 for the time being.", " */", "public class ReleaseRepository {", "", " /**", " * The property used to override the location of the repository. The name", " * is used for compatibility reasons.", " */", " private static final String OVERRIDE_HOME_PROP =", " \"derbyTesting.oldReleasePath\";", " private static final File DEFAULT_HOME;", " static {", " String home = BaseTestCase.getSystemProperty(\"user.home\");", " DEFAULT_HOME = new File(home, \".derbyTestingReleases\");", " }", "", " /** The repository instance. 
*/", " private static ReleaseRepository repos;", "", " /**", " * Returns the release repository object.", " * <p>", " * The release repository will be built from a default directory, or", " * from the directory specified by the system property", " * {@code derbyTesting.oldReleasePath}.", " *", " * @return The release repository object.", " */", " public static synchronized ReleaseRepository getInstance()", " throws IOException {", " if (repos == null) {", " File location = DEFAULT_HOME;", " String overrideLoc = BaseTestCase.getSystemProperty(", " OVERRIDE_HOME_PROP);", " if (overrideLoc != null) {", " location = new File(overrideLoc);", " if (!PrivilegedFileOpsForTests.exists(location)) {", " throw new IOException(\"the specified Derby release \" +", " \"repository doesn't exist: \" + location.getPath());", " }", " }", " repos = new ReleaseRepository(location);", " repos.buildDistributionList();", " }", " return repos;", " }", "", " /** The repository location (on disk). */", " private final File reposLocation;", " /**", " * List of distributions found in the repository. 
If {@code null}, the", " * repository hasn't been initialized.", " */", " private List dists;", "", " /**", " * Creates a new, empty repository.", " *", " * @param reposLocation the location of the repository contents", " * @see #buildDistributionList()", " */", " private ReleaseRepository(File reposLocation) {", " this.reposLocation = reposLocation;", " }", "", " /**", " * Returns the list of distributions in the repository.", " *", " * @return A sorted list of Derby distributions, with the newest", " * distribution at index zero, or an empty list if there are no", " * distributions in the repository.", " */", " public DerbyDistribution[] getDistributions()", " throws IOException {", " DerbyDistribution[] clone = new DerbyDistribution[dists.size()];", " dists.toArray(clone);", " return clone;", " }", "", " private void buildDistributionList()", " throws IOException {", " if (dists != null) {", " throw new IllegalStateException(\"repository already initialized\");", " }", "", " File[] tmpCandDists = reposLocation.listFiles(new FileFilter() {", "", " public boolean accept(File pathname) {", " if (!pathname.isDirectory()) {", " return false;", " }", " String name = pathname.getName();", " // Stay away from regexp for now (JSR169).", " // Allow only digits and three dots (\"10.8.1.2\")", " int dots = 0;", " for (int i=0; i < name.length(); i++) {", " char ch = name.charAt(i);", " if (ch == '.') {", " dots++;", " } else if (!Character.isDigit(ch)) {", " return false;", " }", " }", " return dots == 3;", " }", " });", " if (tmpCandDists == null) {", " tmpCandDists = new File[0];", " }", " traceit(\"{ReleaseRepository} \" + tmpCandDists.length +", " \" candidate releases at \" + reposLocation);", "", " dists = new ArrayList(tmpCandDists.length);", " for (int i=0; i < tmpCandDists.length; i++) {", " File dir = tmpCandDists[i];", " // We extract the version from the directory name.", " // We can also extract it by running sysinfo if that turns out to", " // be necessary.", 
" // From the check in the FileFilter we know we'll get four", " // components when splitting on dot.", " String[] comp = Utilities.split(dir.getName(), '.');", " DerbyVersion version;", " try {", " version = new DerbyVersion(", " Integer.parseInt(comp[0]),", " Integer.parseInt(comp[1]),", " Integer.parseInt(comp[2]),", " Integer.parseInt(comp[3]));", " } catch (NumberFormatException nfe) {", " traceit(\"skipped distribution, invalid version: \" +", " dir.getAbsolutePath());", " continue;", " }", " DerbyDistribution dist = DerbyDistribution.getInstance(", " dir, version);", " // TODO: 10.0.1.2 is considered invalid because it doesn't have a", " // a client JAR. Accept, ignore, or warn all the time?", " if (dist == null) {", " traceit(\"skipped invalid distribution: \" +", " dir.getAbsolutePath());", " } else {", " dists.add(dist);", " }", " }", " filterDistributions(dists);", " Collections.sort(dists);", " dists = Collections.unmodifiableList(dists);", " }", "", " /**", " * Filters out distributions that cannot be run in the current environment", " * for some reason.", " * <p>", " * The reason for getting filtered out is typically due to lacking", " * functionality or a bug in a specific Derby distribution.", " *", " * @param dists the list of distributions to filter (modified in-place)", " */", " private void filterDistributions(List dists) {", " // Specific version we want to filter out in some situations.", " DerbyVersion jsr169Support = DerbyVersion._10_1;", " DerbyVersion noPhoneMEBoot = DerbyVersion._10_3_1_4;", "", " for (int i=dists.size() -1; i >= 0; i--) {", " DerbyDistribution dist = (DerbyDistribution)dists.get(i);", " DerbyVersion distVersion = dist.getVersion();", " // JSR169 support was only added with 10.1, so don't", " // run 10.0 to later upgrade if that's what our jvm is supporting.", " if (JDBC.vmSupportsJSR169() &&", " distVersion.lessThan(jsr169Support)) {", " println(\"skipping \" + distVersion.toString() + \" on JSR169\");", " 
dists.remove(i);", " continue;", " }", " // Derby 10.3.1.4 does not boot on the phoneME advanced platform,", " // (see DERBY-3176) so don't run upgrade tests in this combination.", " if (BaseTestCase.isPhoneME() &&", " noPhoneMEBoot.equals(distVersion)) {", " println(\"skipping \" + noPhoneMEBoot.toString() +", " \" on CVM/phoneme\");", " dists.remove(i);", " continue;", " }", " }", " }", "", " /** Prints a trace message if tracing is enabled. */", " private static void traceit(String msg) {", " BaseTestCase.traceit(msg);", " }", "", " /** Prints a debug message if debugging is enabled. */", " private static void println(String msg) {", " BaseTestCase.println(msg);", " }", "}" ], "header": "@@ -0,0 +1,259 @@", "removed": [] } ] } ]
derby-DERBY-5475-e8c9fe3e
DERBY-5475: Formalize use of old Derby distributions in tests Another preparation patch, mostly renaming a class and some methods. Renamed DerbyVersionSimple to Version to indicate that it isn't coded specifically to represent a Derby version - it can be used to represent any version consisting of a major and a minor version component. Made variables final, some formatting changes. Renamed 'atLeastAs' to 'atLeast' to follow existing pattern. Patch file: derby-5475-2a-rename_and_cleanup.diff git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1245349 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/testing/org/apache/derbyTesting/junit/DerbyVersion.java", "hunks": [ { "added": [ " private final Version simpleVersion;" ], "header": "@@ -82,7 +82,7 @@ public class DerbyVersion", "removed": [ " private final DerbyVersionSimple simpleVersion;" ] }, { "added": [ " this.simpleVersion = new Version(major, minor);" ], "header": "@@ -114,7 +114,7 @@ public class DerbyVersion", "removed": [ " this.simpleVersion = new DerbyVersionSimple(major, minor);" ] }, { "added": [ " * @return {@code true} if this version is equal to or higher than", " * {@code other}, {@code false} otherwise.", " public boolean atLeast(DerbyVersion other) {" ], "header": "@@ -164,10 +164,10 @@ public class DerbyVersion", "removed": [ " * @return {@code true} if this version is equal/higher than {@code other},", " * {@code false} otherwise.", " public boolean atLeastAs(DerbyVersion other) {" ] }, { "added": [ " * @return {@code true} if this version is equal to or lower than", " * {@code other}, {@code false} otherwise.", " public boolean atMost(DerbyVersion other) {" ], "header": "@@ -176,10 +176,10 @@ public class DerbyVersion", "removed": [ " * @return {@code true} if this version is equal/lower than {@code other},", " * {@code false} otherwise.", " public boolean atMostAs(DerbyVersion other) {" ] } ] }, { "file": "java/testing/org/apache/derbyTesting/junit/Version.java", "hunks": [ { "added": [ " Derby - Class org.apache.derbyTesting.junit.Version" ], "header": "@@ -1,6 +1,6 @@", "removed": [ " Derby - Class org.apache.derbyTesting.junit.DerbyVersionSimple" ] }, { "added": [ " * A generic class for storing a major and minor version number.", " * This class assumes that more capable versions compare greater than less", " * capable versions.", " *", " * @see DerbyVersion ", "public final class Version", " private final int _major;", " private final int _minor;", " Version(int major, int minor) {", " this._major = major;", " this._minor = minor;", " public Version(String desc)", 
" throws NumberFormatException {", " StringTokenizer tokens = new StringTokenizer( desc, \".\" );", " this._major = Integer.parseInt(tokens.nextToken());", " this._minor = Integer.parseInt(tokens.nextToken());", " * Returns {@code true} if this version is at least as advanced", " * as the other version.", " public boolean atLeast(Version that) {", " // Comparable BEHAVIOR", " return compareTo((Version)o);", " public int compareTo(Version that) {" ], "header": "@@ -23,63 +23,50 @@ package org.apache.derbyTesting.junit;", "removed": [ " * A class for storing a major and minor version number. This class", " * assumes that more capable versions compare greater than less capable versions.", " * </p>", "public final class DerbyVersionSimple", " private\tint\t_major;", " private\tint\t_minor;", " DerbyVersionSimple( int major, int minor )", " {", " constructorMinion( major, minor );", " public\tDerbyVersionSimple( String desc )", " throws NumberFormatException", " {", " StringTokenizer\t\ttokens = new StringTokenizer( desc, \".\" );", "", " constructorMinion", " (", " java.lang.Integer.parseInt( tokens.nextToken() ),", " java.lang.Integer.parseInt( tokens.nextToken() )", " );", " }", "", " private\tvoid\tconstructorMinion( int major, int minor )", " {", " _major = major;", " _minor = minor;", " * <p>", " * Returns true if this Version is at least as advanced", " * as that Version.", " * </p>", " public\tboolean\tatLeast( DerbyVersionSimple that )", " {", " //\tComparable BEHAVIOR", " return compareTo((DerbyVersionSimple)o);", " public int compareTo(DerbyVersionSimple that) {", "" ] } ] } ]
derby-DERBY-5484-2198fafc
DERBY-5379 testDERBY5120NumRowsInSydependsForTrigger - The number of values assigned is not the same as the number of specified or implied columns. DERBY-5484 Upgradetest fails with upgrade from 10.8.2.2 (7 errors, 1 failure) on trunk The above 2 jiras are duplicates. The upgrade tests are failing when doing an upgrade from 10.8.2.2 to trunk. The tests that are failing were written for DERBY-5120, DERBY-5044. Both these bugs got fixed in 10.8.2.2 and higher. The purpose of these tests is to show that when the tests are done with a release with those fixes missing, we will see the incorrect behavior, but once the database is upgraded to 10.8.2.2 and higher, the tests will start functioning correctly. The problem is that we do not recognize that if the database is created with 10.8.2.2, then we will not see the problem behavior because 10.8.2.2 already has the required fixes in it for DERBY-5120 and DERBY-5044. I have fixed this by making the upgrade test understand that incorrect behavior would be seen only for releases under 10.8.2.2 git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1203252 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5486-d3134918
DERBY-5486: Remove useless tests which are causing instabilities in the nightly test runs. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1197272 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5488-10608cf4
DERBY-5488: Move BigDecimal getters/setters from JDBC 2.0 implementation into JSR 169 implementation. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1197264 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement.java", "hunks": [ { "added": [ "import java.math.BigDecimal;" ], "header": "@@ -31,6 +31,7 @@ import org.apache.derby.iapi.services.sanity.SanityManager;", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedCallableStatement20.java", "hunks": [ { "added": [], "header": "@@ -91,32 +91,6 @@ public abstract class EmbedCallableStatement20", "removed": [ " /**", " * JDBC 2.0", " *", " * Get the value of a NUMERIC parameter as a java.math.BigDecimal object.", " *", " * @param parameterIndex the first parameter is 1, the second is 2, ...", " * @return the parameter value (full precision); if the value is SQL NULL, ", " * the result is null ", " * @exception SQLException if a database-access error occurs.", " */", " public BigDecimal getBigDecimal(int parameterIndex) throws SQLException ", "\t{", "\t\tcheckStatus();", "\t\ttry {", "\t\t\tDataValueDescriptor dvd = getParms().getParameterForGet(parameterIndex-1);", "\t\t\tif (wasNull = dvd.isNull())", "\t\t\t\treturn null;", "\t\t\t", "\t\t\treturn org.apache.derby.iapi.types.SQLDecimal.getBigDecimal(dvd);", "\t\t\t", "\t\t} catch (StandardException e)", "\t\t{", "\t\t\tthrow EmbedResultSet.noStateChangeException(e);", "\t\t}", "\t}", "" ] } ] }, { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedPreparedStatement.java", "hunks": [ { "added": [ "import java.math.BigDecimal;", "import java.math.BigInteger;" ], "header": "@@ -40,6 +40,8 @@ import org.apache.derby.iapi.error.StandardException;", "removed": [] }, { "added": [ "\t/*", "\t** Methods using BigDecimal, moved back into EmbedPreparedStatement", "\t** since our small device implementation now requires CDC/FP 1.1, which", " ** supports BigDecimal.", "\t*/", "\t/**", " * Set a parameter to a java.lang.BigDecimal value. 
", " * The driver converts this to a SQL NUMERIC value when", " * it sends it to the database.", " *", " * @param parameterIndex the first parameter is 1, the second is 2, ...", " * @param x the parameter value", "\t * @exception SQLException thrown on failure.", " */", " public final void setBigDecimal(int parameterIndex, BigDecimal x) throws SQLException {", "\t\tcheckStatus();", "\t\ttry {", "\t\t\t/* JDBC is one-based, DBMS is zero-based */", "\t\t\tgetParms().getParameterForSet(parameterIndex - 1).setBigDecimal(x);", "", "\t\t} catch (Throwable t) {", "\t\t\tthrow EmbedResultSet.noStateChangeException(t);", "\t\t}", "\t}", "" ], "header": "@@ -444,6 +446,31 @@ public abstract class EmbedPreparedStatement", "removed": [] } ] } ]
derby-DERBY-5488-57c1b5cf
DERBY-5488: Add JDBC limit/offset escape syntax. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1200492 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/sql/execute/ResultSetFactory.java", "hunks": [ { "added": [ "\t * @param fetchFirstMethod The FETCH FIRST/NEXT parameter was specified", "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses were added by JDBC LIMIT escape syntax" ], "header": "@@ -1638,8 +1638,8 @@ public interface ResultSetFactory {", "removed": [ "\t * @param fetchFirstMethod The FETCH FIRST/NEXT parameter was", "\t * specified" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/CreateViewNode.java", "hunks": [ { "added": [ " private boolean hasJDBClimitClause; // true if using JDBC limit/offset escape syntax" ], "header": "@@ -71,6 +71,7 @@ public class CreateViewNode extends DDLStatementNode", "removed": [] }, { "added": [ "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses come from JDBC limit/offset escape syntax" ], "header": "@@ -85,6 +86,7 @@ public class CreateViewNode extends DDLStatementNode", "removed": [] }, { "added": [ " Object fetchFirst,", " Object hasJDBClimitClause)" ], "header": "@@ -96,7 +98,8 @@ public class CreateViewNode extends DDLStatementNode", "removed": [ " Object fetchFirst)" ] }, { "added": [ " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? 
false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -107,6 +110,7 @@ public class CreateViewNode extends DDLStatementNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/CursorNode.java", "hunks": [ { "added": [ " private boolean hasJDBClimitClause; // true if using JDBC limit/offset escape syntax" ], "header": "@@ -55,6 +55,7 @@ public class CursorNode extends DMLStatementNode", "removed": [] }, { "added": [ "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses come from JDBC limit/offset escape syntax" ], "header": "@@ -89,6 +90,7 @@ public class CursorNode extends DMLStatementNode", "removed": [] }, { "added": [ " Object hasJDBClimitClause," ], "header": "@@ -105,6 +107,7 @@ public class CursorNode extends DMLStatementNode", "removed": [] }, { "added": [ " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -114,6 +117,7 @@ public class CursorNode extends DMLStatementNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/FromSubquery.java", "hunks": [ { "added": [ " private boolean hasJDBClimitClause; // true if using JDBC limit/offset escape syntax" ], "header": "@@ -50,6 +50,7 @@ public class FromSubquery extends FromTable", "removed": [] }, { "added": [ "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses come from JDBC limit/offset escape syntax" ], "header": "@@ -64,6 +65,7 @@ public class FromSubquery extends FromTable", "removed": [] }, { "added": [ " Object hasJDBClimitClause," ], "header": "@@ -73,6 +75,7 @@ public class FromSubquery extends FromTable", "removed": [] }, { "added": [ " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? 
false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -82,6 +85,7 @@ public class FromSubquery extends FromTable", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/InsertNode.java", "hunks": [ { "added": [ " private boolean hasJDBClimitClause; // true if using JDBC limit/offset escape syntax" ], "header": "@@ -113,6 +113,7 @@ public final class InsertNode extends DMLModStatementNode", "removed": [] }, { "added": [ " * @param orderByList The order by list for the source result set, null if no order by list", "\t * @param offset The value of a <result offset clause> if present", "\t * @param fetchFirst The value of a <fetch first clause> if present", "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses come from JDBC limit/offset escape syntax" ], "header": "@@ -127,8 +128,10 @@ public final class InsertNode extends DMLModStatementNode", "removed": [ " * @param orderByList The order by list for the source result set, null if", "\t *\t\t\tno order by list" ] }, { "added": [ " Object fetchFirst,", " Object hasJDBClimitClause)" ], "header": "@@ -138,7 +141,8 @@ public final class InsertNode extends DMLModStatementNode", "removed": [ " Object fetchFirst)" ] }, { "added": [ " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? 
false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -155,6 +159,7 @@ public final class InsertNode extends DMLModStatementNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/RowCountNode.java", "hunks": [ { "added": [ " /**", " * True if the offset/fetchFirst clauses were added by JDBC LIMIT syntax.", " */", " private boolean hasJDBClimitClause;" ], "header": "@@ -48,6 +48,10 @@ public final class RowCountNode extends SingleChildResultSetNode", "removed": [] }, { "added": [ " Object fetchFirst,", " Object hasJDBClimitClause)" ], "header": "@@ -58,7 +62,8 @@ public final class RowCountNode extends SingleChildResultSetNode", "removed": [ " Object fetchFirst)" ] }, { "added": [ " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -66,6 +71,7 @@ public final class RowCountNode extends SingleChildResultSetNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/RowResultSetNode.java", "hunks": [ { "added": [ " boolean hasJDBClimitClause; // were OFFSET/FETCH FIRST specified by a JDBC LIMIT clause?" ], "header": "@@ -63,6 +63,7 @@ public class RowResultSetNode extends FromTable", "removed": [] }, { "added": [ " * @param hasJDBClimitClause true if the clauses were added by (and have the semantics of) a JDBC limit clause", " void pushOffsetFetchFirst( ValueNode offset, ValueNode fetchFirst, boolean hasJDBClimitClause )", " this.hasJDBClimitClause = hasJDBClimitClause;" ], "header": "@@ -377,11 +378,13 @@ public class RowResultSetNode extends FromTable", "removed": [ " void pushOffsetFetchFirst(ValueNode offset, ValueNode fetchFirst)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/SelectNode.java", "hunks": [ { "added": [ " boolean hasJDBClimitClause; // were OFFSET/FETCH FIRST specified by a JDBC LIMIT clause?" 
], "header": "@@ -107,6 +107,7 @@ public class SelectNode extends ResultSetNode", "removed": [] }, { "added": [ " * @param hasJDBClimitClause true if the clauses were added by (and have the semantics of) a JDBC limit clause", " void pushOffsetFetchFirst( ValueNode offset, ValueNode fetchFirst, boolean hasJDBClimitClause )", " this.hasJDBClimitClause = hasJDBClimitClause;" ], "header": "@@ -934,11 +935,13 @@ public class SelectNode extends ResultSetNode", "removed": [ " void pushOffsetFetchFirst(ValueNode offset, ValueNode fetchFirst)" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/SetOperatorNode.java", "hunks": [ { "added": [ " boolean hasJDBClimitClause; // were OFFSET/FETCH FIRST specified by a JDBC LIMIT clause?" ], "header": "@@ -63,6 +63,7 @@ abstract class SetOperatorNode extends TableOperatorNode", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/compile/SubqueryNode.java", "hunks": [ { "added": [ " private boolean hasJDBClimitClause; // true if using JDBC limit/offset escape syntax" ], "header": "@@ -170,6 +170,7 @@ public class SubqueryNode extends ValueNode", "removed": [] }, { "added": [ "\t * @param hasJDBClimitClause True if the offset/fetchFirst clauses come from JDBC limit/offset escape syntax" ], "header": "@@ -209,6 +210,7 @@ public class SubqueryNode extends ValueNode", "removed": [] }, { "added": [ " Object fetchFirst,", " Object hasJDBClimitClause)", " this.hasJDBClimitClause = (hasJDBClimitClause == null) ? 
false : ((Boolean) hasJDBClimitClause).booleanValue();" ], "header": "@@ -217,13 +219,15 @@ public class SubqueryNode extends ValueNode", "removed": [ " Object fetchFirst)" ] }, { "added": [ " resultSet.pushOffsetFetchFirst( offset, fetchFirst, hasJDBClimitClause );" ], "header": "@@ -868,7 +872,7 @@ public class SubqueryNode extends ValueNode", "removed": [ " resultSet.pushOffsetFetchFirst(offset, fetchFirst);" ] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/GenericResultSetFactory.java", "hunks": [ { "added": [ " boolean hasJDBClimitClause," ], "header": "@@ -1283,6 +1283,7 @@ public class GenericResultSetFactory implements ResultSetFactory", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/impl/sql/execute/RowCountResultSet.java", "hunks": [ { "added": [ " final private boolean hasJDBClimitClause;" ], "header": "@@ -56,6 +56,7 @@ class RowCountResultSet extends NoPutResultSetImpl", "removed": [] }, { "added": [ " * @param hasJDBClimitClause True if offset/fetchFirst clauses were added by JDBC LIMIT escape syntax" ], "header": "@@ -79,6 +80,7 @@ class RowCountResultSet extends NoPutResultSetImpl", "removed": [] }, { "added": [ " boolean hasJDBClimitClause," ], "header": "@@ -91,6 +93,7 @@ class RowCountResultSet extends NoPutResultSetImpl", "removed": [] }, { "added": [ " this.hasJDBClimitClause = hasJDBClimitClause;" ], "header": "@@ -102,6 +105,7 @@ class RowCountResultSet extends NoPutResultSetImpl", "removed": [] } ] } ]
derby-DERBY-5488-59f54f8f
DERBY-5488: First changes to implement new JDBC 4.1 object mappings. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1196680 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5488-955f104b
DERBY-5488: Add a couple additional limit/offset tests. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1201041 13f79535-47bb-0310-9956-ffa450edef68
[]
derby-DERBY-5488-96748ad1
DERBY-5488: Eliminate some NPEs in ParameterMappingTest when run on OJEC. git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1199392 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/iapi/types/BigIntegerDecimal.java", "hunks": [ { "added": [ "import java.math.BigDecimal;" ], "header": "@@ -21,6 +21,7 @@", "removed": [] }, { "added": [ "\tpublic Object\tgetObject() throws StandardException", "\t{", "\t\tif ( isNull() ) { return null; }", " else { return new BigDecimal( getString() ); }", "\t}", "" ], "header": "@@ -195,6 +196,12 @@ public final class BigIntegerDecimal extends BinaryDecimal", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/NumberDataType.java", "hunks": [ { "added": [ "import java.math.BigDecimal;", "" ], "header": "@@ -21,6 +21,8 @@", "removed": [] } ] }, { "file": "java/engine/org/apache/derby/iapi/types/SQLDecimal.java", "hunks": [ { "added": [], "header": "@@ -67,13 +67,6 @@ import java.sql.Types;", "removed": [ "\tprivate static final BigDecimal ZERO = BigDecimal.valueOf(0L);", "\tprivate static final BigDecimal ONE = BigDecimal.valueOf(1L);", "\tstatic final BigDecimal MAXLONG_PLUS_ONE = BigDecimal.valueOf(Long.MAX_VALUE).add(ONE);", "\tstatic final BigDecimal MINLONG_MINUS_ONE = BigDecimal.valueOf(Long.MIN_VALUE).subtract(ONE);", "", "", "" ] } ] } ]
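The `getObject()` body added to `BigIntegerDecimal` in the diff above maps SQL NULL to Java `null` and otherwise wraps the value's string form in a `java.math.BigDecimal`. A minimal standalone sketch of that shape follows; the class name and the `stringValue` parameter stand in for Derby's `isNull()`/`getString()` machinery and are not part of the patch:

```java
import java.math.BigDecimal;

public class DecimalObjectDemo {
    // Sketch of the getObject() pattern from the BigIntegerDecimal hunk:
    // SQL NULL (modeled here as a null String) yields null; any other
    // value is parsed into a BigDecimal from its string representation.
    static Object getObject(String stringValue) {
        if (stringValue == null) { return null; }
        else { return new BigDecimal(stringValue); }
    }

    public static void main(String[] args) {
        System.out.println(getObject("123.45")); // 123.45
        System.out.println(getObject(null));     // null
    }
}
```

Constructing the BigDecimal from the string form preserves the declared scale (e.g. "123.45" keeps scale 2), which a detour through a binary double would not guarantee.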
derby-DERBY-5488-ce664ad5
DERBY-5488: Rename SCOPE_CATLOG to SCOPE_CATALOG and add a redundant trailing SCOPE_CATLOG column to the ResultSet returned by DatabaseMetaData.getColumns(). git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1204684 13f79535-47bb-0310-9956-ffa450edef68
[ { "file": "java/engine/org/apache/derby/impl/jdbc/EmbedDatabaseMetaData.java", "hunks": [ { "added": [ " * <LI><B>SCOPE_CATALOG</B> String => catalog of table that is the" ], "header": "@@ -1864,7 +1864,7 @@ public class EmbedDatabaseMetaData extends ConnectionChild", "removed": [ " * <LI><B>SCOPE_CATLOG</B> String => catalog of table that is the" ] } ] } ]