instance_id           stringlengths    17 .. 39
repo                  stringclasses    8 values
issue_id              stringlengths    14 .. 34
pr_id                 stringlengths    14 .. 34
linking_methods       sequencelengths  1 .. 3
base_commit           stringlengths    40 .. 40
merge_commit          stringlengths    0 .. 40
hints_text            sequencelengths  0 .. 106
resolved_comments     sequencelengths  0 .. 119
created_at            unknown
labeled_as            sequencelengths  0 .. 7
problem_title         stringlengths    7 .. 174
problem_statement     stringlengths    0 .. 55.4k
gold_files            sequencelengths  0 .. 10
gold_files_postpatch  sequencelengths  1 .. 10
test_files            sequencelengths  0 .. 60
gold_patch            stringlengths    220 .. 5.83M
test_patch            stringlengths    386 .. 194k
split_random          stringclasses    3 values
split_time            stringclasses    3 values
issue_start_time      unknown
issue_created_at      unknown
issue_by_user         stringlengths    3 .. 21
split_repo            stringclasses    3 values
netty/netty/7947_7968
netty/netty
netty/netty/7947
netty/netty/7968
[ "timestamp(timedelta=0.0, similarity=0.8568307862755823)" ]
b192bf12ad2af92bba0f32c9d3127b1192e54670
400ca8733427732e426d6fa0eb88fcfe7e06b06d
[ "There is a solid chance I'm just missing something, and if so, sorry for the noise.", "@bryce-anderson basically you are asking to get access to stream 1 (upgrade stream) ?", "@normanmaurer, yes, specifically as a client since I wasn't clear on that.", "@bryce-anderson I think you are right (but I am a bit busy atm so I may miss something). Did you check `Http2FrameCodec`? If its not possible atm are you interested in doing a PR ?", "@normanmaurer, I think the implementation of `Http2StreamChannel` in the private class `DefaultHttp2StreamChannel` inside the `Http2MultiplexCodec` and I don't see a way to surface it (outside of reflection magic).\r\n\r\nI'm more than happy to work on a PR presuming @ejona86 or someone else more versed doesn't point me to an API I missed. I'll start on it tomorrow unless someone intervenes. ", "What do you want the `Http2StreamChannel` for? It seems the only use is to get the `Http2StreamChannel`. IIRC, that should not be used when using `Http2MultiplexCodec`; Http2FrameStream is only for when using `Http2FrameCodec` directly.", "@ejona86, I'm not sure what you mean. I don't want to use the `Http2FrameStream`, I want to use the child channels in the `Http2MultiplexCodec`, and I think they are all implementations of the `Http2StreamChannel`, which is what you get from the `Http2StreamChannelBootstrap`. Unfortunately you can't get the `Http2StreamChannel` for the upgrade stream since it was created for you by the upgrade mechanism.", "You can't get _any_ of the channels created for you. That's true for any inbound request. Instead, you can specify the handler that will be installed in the pipeline for that channel. If you need to interact with that channel, then you coordinate with your handler. That's standard Netty for incoming connections.\r\n\r\nWhat am I missing?", "I'm talking about a client side usage of `Http2MultiplexCodec` when doing a h2c upgrade. How should I be using it for the request that resulted in the upgrade? 
In a way, I did create that channel via my HTTP/1.x upgrade request, but it is not like streams 3+ where it begins as a h2 stream from birth.", "Ah, I see. _Client-side._ And you did say client in the issue title.\r\n\r\nI only did server-side for this before. Client side looks more complicated API-wise, because we don't know whether we'll be speaking HTTP/1 or HTTP/2. The current upgrade API is call-back based, where you provide a Handler that will be used when appropriate.\r\n\r\nIt's a bit hard for me to see how this all works, because the current example relies on using HTTP/2 API and then using HttpToHttp2ConnectionHandler or Http2ConnectionHandler depending on the upgrade result. Although, maybe that's what we need here, use a HTTP/2 handler with HTTP/1 conversion logic and if the upgrade is successful the handler would be \"migrated\" to a child channel (although add/removal from the pipeline may be confusing).\r\n\r\nAlternatively, maybe the handler could be a HTTP/1 handler that continues receiving HTTP/1 frames (converted from HTTP/2) until the end of the response, and it just ignores any HTTP/2 frames flowing past. In that solution, there _wouldn't_ be a childchan for the upgraded stream. (Note: the handler could actually speak HTTP/2, but there would need to be another HTTP/1 to HTTP/2 conversion; yes, that means we would convert HTTP/2 to HTTP/1 to HTTP/2 for the upgraded stream.)\r\n\r\nYeah. 
This would take some time to figure out.", "Sorry for the confusion about using this client side.\r\n\r\nAn awkward part of the 'flow-past' solution is that you can't manage response backpressure for the upgraded stream in the normal Netty manner since if you turn off auto-read or you stall the whole h2 session.\r\n\r\nMy first inclination is to mange this via a richer event model where if the Http2MultiplexCodec gets installed due to a h2c upgrade, the tail of the pipeline gets an event that provides a handle for the child stream and it's the tail of the pipelines job to make the transition. The transition will be pretty complex, but h2c upgrades are anything but simple.\r\nFor Finagles current usage, since we're not doing the dispatch work in the pipeline, it's relatively easy for us to switch the underlying `Channel`, but this isn't exactly a normal pattern when using Netty directly.\r\n\r\nI'm happy to start putting together something along those lines if it sounds like a reasonable direction to try out.\r\n", "Ah... flow control. Of course. So axe the HTTP/1 conversion idea.\r\n\r\n> the tail of the pipeline gets an event that provides a handle for the child stream and it's the tail of the pipelines job to make the transition\r\n\r\nThis is sort of fine, but it is timing sensitive (\"racy\"). If any handler in the pipeline delayed that event, then frames could begin appearing in the childchan before the transition has started.\r\n\r\nI think simply passing in a \"stream-1 HTTP/2 handler\" to use if upgrade succeeds would be more straight-forward, and more in line with the rest of the upgrade logic. Complexity-wise it seems similar or maybe easier to use (although there may be a cost to preparing that handler even when unused).", "@ejona86, you're right, the racy nature of pushing messages down that new child stream channel could be tricky and it's not really the same pattern used elsewhere. 
I like the idea of providing an upgrade initializer/handler.\r\n\r\nI'd like to get this sorted out sooner than later so we can start using the `Http2MultiplexCodec` in our client, so I'm happy to take a run at this unless someone else feels like they really want to tackle it.", "@bryce-anderson, go for it. I couldn't work on it soon and the upgrade handler seems good enough to me.", "👍 sounds good to me\n\n> Am 17.05.2018 um 18:16 schrieb Eric Anderson <notifications@github.com>:\n> \n> @bryce-anderson, go for it. I couldn't work on it soon and the upgrade handler seems good enough to me.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n> \n" ]
[ "nit: remove ?", "you can remove the `= null`", "please enclose with `{}`", "nit: remove", "You could also just use `ChannelInboundHandlerAdapter` and so not need to declare an extra class .", "2018", "formatting ", "nit: final", "private ?", "s/he/the/", "can we come up with a better name than stream2?", "ffti: s/utilize/use", "Ha, it seems I'm not very creative. I cargo culted the name from the Http2FrameCodec.onStreamActive0 method, but should have changed the name. Good catch. I'll change it to `codecStream`." ]
"2018-05-24T19:53:12Z"
[]
No way to get a Http2StreamChannel for a client h2c upgrade stream via Http2MultiplexCodec
### Expected behavior When using the `Http2MultiplexCodec` on a client after a h2c upgrade is performed, I expect to be able to get a handle on the `Http2StreamChannel` associated with the upgrade request when using the `Http2MultiplexCodec`. ### Actual behavior There is no public API for surfacing a `Http2StreamChannel` that resulted from the client request. ### Netty version Branch 4.1 (as of SHA 0bce0450c05697d)
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecClientUpgradeTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java index da2b27a1d7b..47f2533f3f8 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java @@ -143,7 +143,7 @@ public class Http2FrameCodec extends Http2ConnectionHandler { private static final InternalLogger LOG = InternalLoggerFactory.getInstance(Http2FrameCodec.class); - private final PropertyKey streamKey; + protected final PropertyKey streamKey; private final PropertyKey upgradeKey; private final Integer initialFlowControlWindowSize; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java index 515b21d495d..5f17c7503f9 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java @@ -49,7 +49,11 @@ import java.util.ArrayDeque; import java.util.Queue; +import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_STREAM_ID; import static io.netty.handler.codec.http2.Http2CodecUtil.isStreamIdValid; +import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR; +import static io.netty.handler.codec.http2.Http2Exception.connectionError; + import static java.lang.Math.min; /** @@ -153,6 +157,7 @@ public int guess() { } private final ChannelHandler inboundStreamHandler; + private final ChannelHandler upgradeStreamHandler; private int initialOutboundStreamWindow = Http2CodecUtil.DEFAULT_WINDOW_SIZE; private boolean parentReadInProgress; @@ -168,9 +173,25 @@ public int guess() { Http2MultiplexCodec(Http2ConnectionEncoder encoder, Http2ConnectionDecoder decoder, Http2Settings initialSettings, - ChannelHandler inboundStreamHandler) { + 
ChannelHandler inboundStreamHandler, + ChannelHandler upgradeStreamHandler) { super(encoder, decoder, initialSettings); this.inboundStreamHandler = inboundStreamHandler; + this.upgradeStreamHandler = upgradeStreamHandler; + } + + @Override + public void onHttpClientUpgrade() throws Http2Exception { + // We must have an upgrade handler or else we can't handle the stream + if (upgradeStreamHandler == null) { + throw connectionError(INTERNAL_ERROR, "Client is misconfigured for upgrade requests"); + } + // Creates the Http2Stream in the Connection. + super.onHttpClientUpgrade(); + // Now make a new FrameStream, set it's underlying Http2Stream, and initialize it. + Http2MultiplexCodecStream codecStream = newStream(); + codecStream.setStreamAndProperty(streamKey, connection().stream(HTTP_UPGRADE_STREAM_ID)); + onHttp2UpgradeStreamInitialized(ctx, codecStream); } private static void registerDone(ChannelFuture future) { @@ -236,6 +257,22 @@ final void onHttp2Frame(ChannelHandlerContext ctx, Http2Frame frame) { } } + private void onHttp2UpgradeStreamInitialized(ChannelHandlerContext ctx, Http2MultiplexCodecStream stream) { + assert stream.state() == Http2Stream.State.HALF_CLOSED_LOCAL; + DefaultHttp2StreamChannel ch = new DefaultHttp2StreamChannel(stream, true); + ch.outboundClosed = true; + + // Add our upgrade handler to the channel and then register the channel. + // The register call fires the channelActive, etc. 
+ ch.pipeline().addLast(upgradeStreamHandler); + ChannelFuture future = ctx.channel().eventLoop().register(ch); + if (future.isDone()) { + registerDone(future); + } else { + future.addListener(CHILD_CHANNEL_REGISTRATION_LISTENER); + } + } + @Override final void onHttp2StreamStateChanged(ChannelHandlerContext ctx, Http2FrameStream stream) { Http2MultiplexCodecStream s = (Http2MultiplexCodecStream) stream; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java index 8e0929094a6..94f0d497203 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilder.java @@ -29,6 +29,7 @@ public class Http2MultiplexCodecBuilder extends AbstractHttp2ConnectionHandlerBuilder<Http2MultiplexCodec, Http2MultiplexCodecBuilder> { final ChannelHandler childHandler; + private ChannelHandler upgradeStreamHandler; Http2MultiplexCodecBuilder(boolean server, ChannelHandler childHandler) { server(server); @@ -83,6 +84,14 @@ public Http2MultiplexCodecBuilder gracefulShutdownTimeoutMillis(long gracefulShu return super.gracefulShutdownTimeoutMillis(gracefulShutdownTimeoutMillis); } + public Http2MultiplexCodecBuilder withUpgradeStreamHandler(ChannelHandler upgradeStreamHandler) { + if (this.isServer()) { + throw new IllegalArgumentException("Server codecs don't use an extra handler for the upgrade stream"); + } + this.upgradeStreamHandler = upgradeStreamHandler; + return this; + } + @Override public boolean isServer() { return super.isServer(); @@ -157,6 +166,6 @@ public Http2MultiplexCodec build() { @Override protected Http2MultiplexCodec build( Http2ConnectionDecoder decoder, Http2ConnectionEncoder encoder, Http2Settings initialSettings) { - return new Http2MultiplexCodec(encoder, decoder, initialSettings, childHandler); + return new 
Http2MultiplexCodec(encoder, decoder, initialSettings, childHandler, upgradeStreamHandler); } }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java index cb774a94ca9..fcfdb4b75e5 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java @@ -14,9 +14,11 @@ */ package io.netty.handler.codec.http2; +import io.netty.channel.Channel; import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.ChannelInitializer; import io.netty.channel.embedded.EmbeddedChannel; import io.netty.handler.codec.http.DefaultFullHttpRequest; import io.netty.handler.codec.http.FullHttpRequest; @@ -43,7 +45,8 @@ public void testUpgradeToHttp2FrameCodec() throws Exception { @Test public void testUpgradeToHttp2MultiplexCodec() throws Exception { - testUpgrade(Http2MultiplexCodecBuilder.forClient(new HttpInboundHandler()).build()); + testUpgrade(Http2MultiplexCodecBuilder.forClient(new HttpInboundHandler()) + .withUpgradeStreamHandler(new ChannelInboundHandlerAdapter()).build()); } private static void testUpgrade(Http2ConnectionHandler handler) throws Exception { diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecClientUpgradeTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecClientUpgradeTest.java new file mode 100644 index 00000000000..26b63ed7f9c --- /dev/null +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecClientUpgradeTest.java @@ -0,0 +1,81 @@ +/* + * Copyright 2018 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, version 2.0 (the + * "License"); you may not use this file except in compliance with the License. 
You may obtain a + * copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ +package io.netty.handler.codec.http2; + +import org.junit.Test; + +import io.netty.channel.ChannelHandler; +import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.embedded.EmbeddedChannel; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +public class Http2MultiplexCodecClientUpgradeTest { + + @ChannelHandler.Sharable + private final class NoopHandler extends ChannelInboundHandlerAdapter { + @Override + public void channelActive(ChannelHandlerContext ctx) { + ctx.channel().close(); + } + } + + private final class UpgradeHandler extends ChannelInboundHandlerAdapter { + Http2Stream.State stateOnActive; + int streamId; + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + Http2StreamChannel ch = (Http2StreamChannel) ctx.channel(); + stateOnActive = ch.stream().state(); + streamId = ch.stream().id(); + super.channelActive(ctx); + } + } + + private Http2MultiplexCodec newCodec(ChannelHandler upgradeHandler) { + Http2MultiplexCodecBuilder builder = Http2MultiplexCodecBuilder.forClient(new NoopHandler()); + builder.withUpgradeStreamHandler(upgradeHandler); + return builder.build(); + } + + @Test + public void upgradeHandlerGetsActivated() throws Exception { + UpgradeHandler upgradeHandler = new UpgradeHandler(); + Http2MultiplexCodec codec = newCodec(upgradeHandler); + EmbeddedChannel ch = new EmbeddedChannel(codec); + + codec.onHttpClientUpgrade(); + + 
assertFalse(upgradeHandler.stateOnActive.localSideOpen()); + assertTrue(upgradeHandler.stateOnActive.remoteSideOpen()); + assertEquals(1, upgradeHandler.streamId); + assertTrue(ch.finishAndReleaseAll()); + } + + @Test(expected = Http2Exception.class) + public void clientUpgradeWithoutUpgradeHandlerThrowsHttp2Exception() throws Http2Exception { + Http2MultiplexCodec codec = Http2MultiplexCodecBuilder.forClient(new NoopHandler()).build(); + EmbeddedChannel ch = new EmbeddedChannel(codec); + try { + codec.onHttpClientUpgrade(); + } finally { + assertTrue(ch.finishAndReleaseAll()); + } + } +} diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java index 7a0e8c6a3a0..2b398ef62cc 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java @@ -633,7 +633,7 @@ public TestableHttp2MultiplexCodec(Http2ConnectionEncoder encoder, Http2ConnectionDecoder decoder, Http2Settings initialSettings, ChannelHandler inboundStreamHandler) { - super(encoder, decoder, initialSettings, inboundStreamHandler); + super(encoder, decoder, initialSettings, inboundStreamHandler, null); } void onHttp2Frame(Http2Frame frame) {
train
val
"2018-06-04T20:40:08"
"2018-05-16T17:50:08Z"
bryce-anderson
val
netty/netty/7969_7976
netty/netty
netty/netty/7969
netty/netty/7976
[ "timestamp(timedelta=0.0, similarity=0.9186618764462279)" ]
9a3311506e5aa9ffd4bfa58c47ec22c37a0d5599
f904c63a535ebfd51d6911cca28aac0eb23c8df0
[ "@Bennett-Lynch let it extend `Http2StreamFrame` sounds like the correct thing to do. Would you be interested in proving a PR ?\r\n\r\nI think I would leave the `isEndStream` as it is and just use the `Http2Flags` for it.", "@Bennett-Lynch I had a bit time this morning so I just did it myself and adding unit tests:\r\n\r\nhttps://github.com/netty/netty/pull/7976\r\n\r\nPTAL", "Thanks, @normanmaurer -- change looks good to me.\r\n\r\nFine with using `Http2Flags` for detecting `isEndStream`, but may I ask your reasoning for it? Maintaining a consistent interface across frame types seems more ideal, but I realize these UnknownFrames are a bit of an edge case.", "@Bennett-Lynch because its redundant as it exists in the flags already (and you can have other flags as well)." ]
[]
"2018-05-28T05:14:34Z"
[]
Make Http2UnknownFrame extend Http2StreamFrame rather than Http2Frame
This is a follow-up issue regarding: https://github.com/netty/netty/issues/7860 ### Expected behavior Writing an `Http2UnknownFrame` to a child channel will be written to the parent channel and underlying HTTP/2 connection via `Http2MultiplexCodec`. When `Http2MultiplexCodec` reads an `Http2UnknownFrame`, it will be propagated to the correct child channel. ### Actual behavior `Http2MultiplexCodec` rejects the write since `Http2UnknownFrame` is not an `instanceof Http2StreamFrame`. https://github.com/netty/netty/blob/dfeb4b15b587bf5298555e969a01378853efab8f/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java#L1021-L1028 `Http2MultiplexCodec` does not propagate reads to child channels since `Http2UnknownFrame` is not an `instanceof Http2StreamFrame`. https://github.com/netty/netty/blob/dfeb4b15b587bf5298555e969a01378853efab8f/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java#L219-L221 ### Proposal `Http2UnknownFrame` should extend `Http2StreamFrame`, rather than `Http2Frame`, as it already defines the two stream-related methods that the two interfaces differ in. Should it possibly declare an `isEndStream` method as well, similar to `Http2DataFrame`?
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java index 51eeafaf0d9..41e578703a5 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java @@ -20,13 +20,12 @@ import io.netty.util.internal.UnstableApi; @UnstableApi -public interface Http2UnknownFrame extends Http2Frame, ByteBufHolder { +public interface Http2UnknownFrame extends Http2StreamFrame, ByteBufHolder { + @Override Http2FrameStream stream(); - /** - * Set the {@link Http2FrameStream} object for this frame. - */ + @Override Http2UnknownFrame stream(Http2FrameStream stream); byte frameType();
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java index 7a0e8c6a3a0..f07b218e095 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java @@ -104,6 +104,43 @@ public void tearDown() throws Exception { // TODO(buchgr): GOAWAY Logic // TODO(buchgr): Test ChannelConfig.setMaxMessagesPerRead + @Test + public void writeUnknownFrame() { + childChannelInitializer.handler = new ChannelInboundHandlerAdapter() { + @Override + public void channelActive(ChannelHandlerContext ctx) { + ctx.writeAndFlush(new DefaultHttp2HeadersFrame(new DefaultHttp2Headers())); + ctx.writeAndFlush(new DefaultHttp2UnknownFrame((byte) 99, new Http2Flags())); + ctx.fireChannelActive(); + } + }; + + Channel childChannel = newOutboundStream(); + assertTrue(childChannel.isActive()); + + Http2FrameStream stream = readOutboundHeadersAndAssignId(); + parentChannel.runPendingTasks(); + + Http2UnknownFrame frame = parentChannel.readOutbound(); + assertEquals(stream, frame.stream()); + assertEquals(99, frame.frameType()); + assertEquals(new Http2Flags(), frame.flags()); + frame.release(); + } + + @Test + public void readUnkownFrame() { + LastInboundHandler inboundHandler = streamActiveAndWriteHeaders(inboundStream); + codec.onHttp2Frame(new DefaultHttp2UnknownFrame((byte) 99, new Http2Flags()).stream(inboundStream)); + codec.onChannelReadComplete(); + + // headers and unknown frame + verifyFramesMultiplexedToCorrectChannel(inboundStream, inboundHandler, 2); + + Channel childChannel = newOutboundStream(); + assertTrue(childChannel.isActive()); + } + @Test public void headerAndDataFramesShouldBeDelivered() { LastInboundHandler inboundHandler = new LastInboundHandler();
train
val
"2018-05-27T10:02:49"
"2018-05-24T22:02:57Z"
Bennett-Lynch
val
netty/netty/7860_7976
netty/netty
netty/netty/7860
netty/netty/7976
[ "keyword_pr_to_issue" ]
9a3311506e5aa9ffd4bfa58c47ec22c37a0d5599
f904c63a535ebfd51d6911cca28aac0eb23c8df0
[ "@Bennett-Lynch PTAL https://github.com/netty/netty/pull/7867", "Fixed..", "Thanks for the quick fix!", "Np \n\n> Am 13.04.2018 um 17:33 schrieb Bennett Lynch <notifications@github.com>:\n> \n> Thanks for the quick fix!\n> \n> —\n> You are receiving this because you modified the open/close state.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n> \n", "@normanmaurer Were you able to confirm a corresponding inbound read from the outbound writeUnknownFrame() unit test you added? I'm unable to detect a read on either the connection or child channel pipeline, and I'm having trouble pinpointing where it might be getting lost in the codecs.", "In my simple unit test it seemed to work... that said maybe there is a bug somewhere. Please re-open this issue if you think there may be one and I will try to find some time to check", "@normanmaurer I'm hesitant to re-open as I might be doing something wrong.\r\n\r\nWhen trying to write the following to a client's Http2 multiplex child channel:\r\n```\r\nHttp2UnknownFrame unknownFrame = new DefaultHttp2UnknownFrame(\r\n (byte) 20,\r\n new Http2Flags().ack(false),\r\n Unpooled.buffer().writeInt(1)\r\n);\r\nunknownFrame.stream(childChannel.stream());\r\n```\r\nI get the following error:\r\n```\r\njava.lang.IllegalArgumentException: Message must be an Http2StreamFrame: DefaultHttp2UnknownFrame(frameType=20, stream=3, flags=value = 0 (), content=UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 4, cap: 256))\r\n\tat io.netty.handler.codec.http2.Http2MultiplexCodec$DefaultHttp2StreamChannel$Http2ChannelUnsafe.write(Http2MultiplexCodec.java:1028)\r\n\t...\r\n```\r\n\r\nSo I cloned `DefaultHttp2UnknownFrame` just to implement `Http2StreamFrame`. The write on the client side then succeeds. 
The server then fails to read the frame with the following error:\r\n```\r\njava.lang.NullPointerException: null\r\n\tat io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.requireStream(Http2FrameCodec.java:575)\r\n\tat io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onUnknownFrame(Http2FrameCodec.java:497)\r\n\tat io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.onUnknownFrame0(DefaultHttp2ConnectionDecoder.java:171)\r\n\tat io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onUnknownFrame(DefaultHttp2ConnectionDecoder.java:517)\r\n\tat io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onUnknownFrame(Http2InboundFrameLogger.java:133)\r\n\tat io.netty.handler.codec.http2.DefaultHttp2FrameReader.readUnknownFrame(DefaultHttp2FrameReader.java:616)\r\n\tat io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:281)\r\n\tat io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)\r\n\t...\r\n```\r\n\r\nI'm a little bit confused by this as I am explicitly setting the stream on the client side. I'm not sure where that's getting lost when going across the wire. DATA frames are being sent successfully on the same child channel.\r\n\r\nIf it looks like I'm doing something blatantly wrong, could you kindly point it out? If not, then maybe this is worth re-opening.\r\n\r\nAnd apologies for not having a concrete reproducible example. My attempts are embedded in some separate application logic, and there's no client/server multiplex example in this repository for me to experiment with." ]
[]
"2018-05-28T05:14:34Z"
[]
HTTP2 Multiplex API support for unknown/custom frames
### Expected behavior The HTTP/2 spec explicitly permits extension of the protocol in the form of new frame types: http://httpwg.org/specs/rfc7540.html#rfc.section.5.5 I would like/expect to be able to write/read custom frame types using the HTTP2 Multiplex API. That is, if I write a Http2StreamFrame to a child channel with e.g., `frameType == (byte) 20`, I would be able to read the frame on the server's corresponding child channel. ### Actual behavior The HTTP2 Multiplex API allows any object of type `Http2StreamFrame`: https://github.com/netty/netty/blob/501662a77f8166382eefd18a8620983d27f59d4a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java#L993 The corresponding write, however, rejects any object that is of type `Http2Frame` and not one of the spec-defined frame types: https://github.com/netty/netty/blob/501662a77f8166382eefd18a8620983d27f59d4a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameCodec.java#L299-L304 Writing a custom implementation of `Http2StreamFrame` or `Http2UnknownFrame` to a child channel will be rejected by the `Http2FrameCodec`.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java index 51eeafaf0d9..41e578703a5 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2UnknownFrame.java @@ -20,13 +20,12 @@ import io.netty.util.internal.UnstableApi; @UnstableApi -public interface Http2UnknownFrame extends Http2Frame, ByteBufHolder { +public interface Http2UnknownFrame extends Http2StreamFrame, ByteBufHolder { + @Override Http2FrameStream stream(); - /** - * Set the {@link Http2FrameStream} object for this frame. - */ + @Override Http2UnknownFrame stream(Http2FrameStream stream); byte frameType();
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java index 7a0e8c6a3a0..f07b218e095 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecTest.java @@ -104,6 +104,43 @@ public void tearDown() throws Exception { // TODO(buchgr): GOAWAY Logic // TODO(buchgr): Test ChannelConfig.setMaxMessagesPerRead + @Test + public void writeUnknownFrame() { + childChannelInitializer.handler = new ChannelInboundHandlerAdapter() { + @Override + public void channelActive(ChannelHandlerContext ctx) { + ctx.writeAndFlush(new DefaultHttp2HeadersFrame(new DefaultHttp2Headers())); + ctx.writeAndFlush(new DefaultHttp2UnknownFrame((byte) 99, new Http2Flags())); + ctx.fireChannelActive(); + } + }; + + Channel childChannel = newOutboundStream(); + assertTrue(childChannel.isActive()); + + Http2FrameStream stream = readOutboundHeadersAndAssignId(); + parentChannel.runPendingTasks(); + + Http2UnknownFrame frame = parentChannel.readOutbound(); + assertEquals(stream, frame.stream()); + assertEquals(99, frame.frameType()); + assertEquals(new Http2Flags(), frame.flags()); + frame.release(); + } + + @Test + public void readUnkownFrame() { + LastInboundHandler inboundHandler = streamActiveAndWriteHeaders(inboundStream); + codec.onHttp2Frame(new DefaultHttp2UnknownFrame((byte) 99, new Http2Flags()).stream(inboundStream)); + codec.onChannelReadComplete(); + + // headers and unknown frame + verifyFramesMultiplexedToCorrectChannel(inboundStream, inboundHandler, 2); + + Channel childChannel = newOutboundStream(); + assertTrue(childChannel.isActive()); + } + @Test public void headerAndDataFramesShouldBeDelivered() { LastInboundHandler inboundHandler = new LastInboundHandler();
train
val
"2018-05-27T10:02:49"
"2018-04-10T21:42:20Z"
Bennett-Lynch
val
netty/netty/7990_7994
netty/netty
netty/netty/7990
netty/netty/7994
[ "timestamp(timedelta=0.0, similarity=0.8921366031300989)" ]
a4393831f0c95e4b89812238b9ac26ac4322e451
1611acf4cee4481b89a2cf024ccf821de2dbf13c
[ "@henrik-lindqvist yes this sounds like a bug...\r\n\r\n@carl-mastrangelo @ejona86 @Scottmitch @trustin What you think is the correct behaviour here ?", "I agree there is inconsistency here, and it is not just with `convertToByte` but also with other `convertTo` methods that have a special case for `AsciiString`. The `AsciiString` special case was done for performance reasons, but I think it is more correct to consider the entire value during conversion. We may be able to keep the performance optimization and enforce a max length check too if we want.\r\n\r\nWe should also be consistent with convertToChar [1] which doesn't have an `AsciiString` optimization but should follow the same semantics.\r\n\r\n[1] https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java#L88", "My own 2cents: Delete it, or return null. I don't think there is a reasonable default behavior. Also, there is hardly anyone using this method, except `getByteAndRemove`, which is itself seldomly used. Poking around on github's search shows no users.\r\n\r\nValueConverter makes no claims about what subclasses _should_ do, so who can say what will happen? Further, if I had an instance of type ValueConverter, the implementation would be really important to know. It cannot provide the abstraction it promises.", "To be consistent with convertByte is should parse the number, not use the first byte. See: \r\nhttps://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java#L74", "@henrik-lindqvist PTAL https://github.com/netty/netty/pull/7994" ]
[]
"2018-06-01T08:34:26Z"
[]
CharSequenceValueConverter.convertToByte works differently for AsciiString vs String
### Netty version

4.1

Looking at the function `CharSequenceValueConverter.convertToByte(CharSequence)`:
https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java#L79

For an AsciiString it takes the first byte, but for any other kind of CharSequence it parses the number value of the entire sequence, i.e. using Byte.parseByte(). This is surely not expected behavior.

See also: https://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html#parseByte-java.lang.String-
[ "codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java" ]
[ "codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java" ]
[ "codec/src/test/java/io/netty/handler/codec/CharSequenceValueConverterTest.java" ]
```diff
diff --git a/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java b/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java
index b6510d1ff7e..1157af0623e 100644
--- a/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java
+++ b/codec/src/main/java/io/netty/handler/codec/CharSequenceValueConverter.java
@@ -77,7 +77,7 @@ public CharSequence convertByte(byte value) {
 
     @Override
     public byte convertToByte(CharSequence value) {
-        if (value instanceof AsciiString) {
+        if (value instanceof AsciiString && value.length() == 1) {
             return ((AsciiString) value).byteAt(0);
         }
         return Byte.parseByte(value.toString());
```
```diff
diff --git a/codec/src/test/java/io/netty/handler/codec/CharSequenceValueConverterTest.java b/codec/src/test/java/io/netty/handler/codec/CharSequenceValueConverterTest.java
index 5543e2f9095..2347f0d0bf9 100644
--- a/codec/src/test/java/io/netty/handler/codec/CharSequenceValueConverterTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/CharSequenceValueConverterTest.java
@@ -14,6 +14,7 @@
  */
 package io.netty.handler.codec;
 
+import io.netty.util.AsciiString;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
@@ -30,6 +31,16 @@ public void testBoolean() {
         assertFalse(converter.convertToBoolean(converter.convertBoolean(false)));
     }
 
+    @Test
+    public void testByteFromAsciiString() {
+        assertEquals(127, converter.convertToByte(AsciiString.of("127")));
+    }
+
+    @Test(expected = NumberFormatException.class)
+    public void testByteFromEmptyAsciiString() {
+        converter.convertToByte(AsciiString.EMPTY_STRING);
+    }
+
     @Test
     public void testByte() {
         assertEquals(Byte.MAX_VALUE, converter.convertToByte(converter.convertByte(Byte.MAX_VALUE)));
```
train
val
"2018-05-30T22:07:42"
"2018-05-30T23:22:35Z"
henrik-lindqvist
val
netty/netty/7988_8001
netty/netty
netty/netty/7988
netty/netty/8001
[ "timestamp(timedelta=0.0, similarity=0.842426538919858)" ]
00786337029d48d2e5815ff5076284a46373062b
b192bf12ad2af92bba0f32c9d3127b1192e54670
[ "@ejona86 @carl-mastrangelo @nmittler who would be the best to ping on your end ?", "+ @flooey", "Yep, that's probably just a bug in Conscrypt. Can you tell me what you're doing to trigger that?", "Nevermind, I see through the stack trace that this is due to calling `wrap()` on a closed SSLEngine. That obviously shouldn't throw NPE. I've uploaded a fix for review.", "Thanks... left a comment there\n\n> Am 31.05.2018 um 12:30 schrieb Adam Vartanian <notifications@github.com>:\n> \n> Nevermind, I see through the stack trace that this is due to calling wrap() on a closed SSLEngine. That obviously shouldn't throw NPE. I've uploaded a fix for review.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Okay, this should be fixed at head. I expect to make our next release on Monday.", "Thanks... would be nice if you could just comment here once done\n\n> Am 31.05.2018 um 21:06 schrieb Adam Vartanian <notifications@github.com>:\n> \n> Okay, this should be fixed at head. I expect to make our next release on Monday.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n" ]
[]
"2018-06-04T16:06:12Z"
[]
NPE in testsuite logged when using Conscrypt.
Saw this during the build:
```
Running io.netty.handler.ssl.ConscryptJdkSslEngineInteropTest
18:01:39.304 [nioEventLoopGroup-676-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 10 thread-local buffer(s) from thread: nioEventLoopGroup-676-3
18:01:39.304 [nioEventLoopGroup-676-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-676-2
[98.268s][info ][gc,start ] GC(54) Pause Young (G1 Evacuation Pause)
[98.268s][info ][gc,task ] GC(54) Using 8 workers of 8 for evacuation
[98.289s][info ][gc,phases ] GC(54) Pre Evacuate Collection Set: 0.0ms
[98.289s][info ][gc,phases ] GC(54) Evacuate Collection Set: 19.5ms
[98.289s][info ][gc,phases ] GC(54) Post Evacuate Collection Set: 1.5ms
[98.289s][info ][gc,phases ] GC(54) Other: 0.4ms
[98.289s][info ][gc,heap ] GC(54) Eden regions: 178->0(171)
[98.289s][info ][gc,heap ] GC(54) Survivor regions: 3->10(23)
[98.289s][info ][gc,heap ] GC(54) Old regions: 7->7
[98.289s][info ][gc,heap ] GC(54) Humongous regions: 54->54
[98.289s][info ][gc,metaspace ] GC(54) Metaspace: 33982K->33982K(1081344K)
[98.289s][info ][gc ] GC(54) Pause Young (G1 Evacuation Pause) 482M->140M(604M) 21.501ms
[98.290s][info ][gc,cpu ] GC(54) User=0.03s Sys=0.01s Real=0.02s
18:01:39.566 [nioEventLoopGroup-689-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x4b609c89, L:/127.0.0.1:37424 - R:/127.0.0.1:45348] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:39.568 [nioEventLoopGroup-690-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xc3c9581f, L:/127.0.0.1:45348 - R:/127.0.0.1:37424] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:39.588 [nioEventLoopGroup-689-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 14 thread-local buffer(s) from thread: nioEventLoopGroup-689-1
18:01:39.601 [nioEventLoopGroup-690-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 10 thread-local buffer(s) from thread: nioEventLoopGroup-690-1
18:01:39.642 [nioEventLoopGroup-677-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 7 thread-local buffer(s) from thread: nioEventLoopGroup-677-2
18:01:39.643 [nioEventLoopGroup-677-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 6 thread-local buffer(s) from thread: nioEventLoopGroup-677-3
18:01:39.684 [nioEventLoopGroup-692-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x34ae8109, L:/127.0.0.1:40594 - R:/127.0.0.1:43038] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:39.685 [nioEventLoopGroup-693-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x6cb9adc0, L:/127.0.0.1:43038 - R:/127.0.0.1:40594] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:39.713 [nioEventLoopGroup-692-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-692-1
18:01:39.730 [nioEventLoopGroup-693-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-693-1
18:01:39.811 [nioEventLoopGroup-678-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-678-2
18:01:39.813 [nioEventLoopGroup-678-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 10 thread-local buffer(s) from thread: nioEventLoopGroup-678-3
18:01:39.905 [nioEventLoopGroup-679-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 10 thread-local buffer(s) from thread: nioEventLoopGroup-679-3
18:01:39.999 [nioEventLoopGroup-679-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-679-2
18:01:40.171 [nioEventLoopGroup-680-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 160 thread-local buffer(s) from thread: nioEventLoopGroup-680-3
18:01:40.171 [nioEventLoopGroup-680-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-680-2
18:01:40.323 [nioEventLoopGroup-681-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 160 thread-local buffer(s) from thread: nioEventLoopGroup-681-3
18:01:40.325 [nioEventLoopGroup-681-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-681-2
18:01:40.482 [nioEventLoopGroup-682-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-682-2
18:01:40.482 [nioEventLoopGroup-682-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 160 thread-local buffer(s) from thread: nioEventLoopGroup-682-3
18:01:40.528 [nioEventLoopGroup-695-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x4ca0a975, L:/172.19.0.2:37904 - R:/172.19.0.2:51436] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:40.550 [nioEventLoopGroup-683-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 160 thread-local buffer(s) from thread: nioEventLoopGroup-683-3
18:01:40.553 [nioEventLoopGroup-695-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 12 thread-local buffer(s) from thread: nioEventLoopGroup-695-1
18:01:40.559 [nioEventLoopGroup-696-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 5 thread-local buffer(s) from thread: nioEventLoopGroup-696-1
18:01:40.643 [nioEventLoopGroup-683-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-683-2
18:01:40.691 [nioEventLoopGroup-684-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 161 thread-local buffer(s) from thread: nioEventLoopGroup-684-3
18:01:40.706 [main] DEBUG i.n.h.s.u.InsecureTrustManagerFactory - Accepting a server certificate: CN=example.com
18:01:40.796 [nioEventLoopGroup-684-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-684-2
18:01:40.960 [nioEventLoopGroup-699-1] DEBUG io.netty.handler.ssl.SslHandler - SSLException during trying to call SSLEngine.wrap(...) because of an previous SSLException, ignoring...
javax.net.ssl.SSLHandshakeException: SSL handshake aborted: ssl=0x7f12c41b3cf8: Failure in SSL library, usually a protocol error error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED (../ssl/handshake.cc:363 0x7f12bb959503:0x00000000)
    at org.conscrypt.SSLUtils.toSSLHandshakeException(SSLUtils.java:331)
    at org.conscrypt.ConscryptEngine.handshake(ConscryptEngine.java:1013)
    at org.conscrypt.ConscryptEngine.wrap(ConscryptEngine.java:1427)
    at java.base/javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511)
    at org.conscrypt.Java8EngineWrapper.wrap(Java8EngineWrapper.java:56)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:997)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:803)
    at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:771)
    at io.netty.handler.ssl.SslHandler.handleUnwrapThrowable(SslHandler.java:1207)
    at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1183)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: javax.net.ssl.SSLProtocolException: SSL handshake aborted: ssl=0x7f12c41b3cf8: Failure in SSL library, usually a protocol error error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED (../ssl/handshake.cc:363 0x7f12bb959503:0x00000000)
    at org.conscrypt.NativeCrypto.ENGINE_SSL_do_handshake(Native Method)
    at org.conscrypt.NativeSsl.doHandshake(NativeSsl.java:391)
    at org.conscrypt.ConscryptEngine.handshake(ConscryptEngine.java:978)
    ... 27 common frames omitted
18:01:40.971 [nioEventLoopGroup-685-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-685-2
18:01:40.974 [nioEventLoopGroup-685-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 161 thread-local buffer(s) from thread: nioEventLoopGroup-685-3
18:01:41.018 [nioEventLoopGroup-698-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 4 thread-local buffer(s) from thread: nioEventLoopGroup-698-1
18:01:41.025 [nioEventLoopGroup-699-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 7 thread-local buffer(s) from thread: nioEventLoopGroup-699-1
18:01:41.026 [nioEventLoopGroup-686-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 161 thread-local buffer(s) from thread: nioEventLoopGroup-686-3
18:01:41.124 [nioEventLoopGroup-686-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-686-2
18:01:41.329 [nioEventLoopGroup-687-2] DEBUG io.netty.buffer.PoolThreadCache - Freed 9 thread-local buffer(s) from thread: nioEventLoopGroup-687-2
18:01:41.339 [nioEventLoopGroup-687-3] DEBUG io.netty.buffer.PoolThreadCache - Freed 161 thread-local buffer(s) from thread: nioEventLoopGroup-687-3
18:01:41.399 [main] DEBUG i.n.h.s.u.InsecureTrustManagerFactory - Accepting a server certificate: CN=example.com
18:01:41.426 [nioEventLoopGroup-702-1] DEBUG io.netty.handler.ssl.SslHandler - SSLException during trying to call SSLEngine.wrap(...) because of an previous SSLException, ignoring...
javax.net.ssl.SSLHandshakeException: SSL handshake aborted: ssl=0x7f129800be38: Failure in SSL library, usually a protocol error error:10000416:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN (../ssl/tls_record.cc:586 0x7f1298870be8:0x00000001)
    at org.conscrypt.SSLUtils.toSSLHandshakeException(SSLUtils.java:331)
    at org.conscrypt.ConscryptEngine.handshake(ConscryptEngine.java:1013)
    at org.conscrypt.ConscryptEngine.wrap(ConscryptEngine.java:1427)
    at java.base/javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511)
    at org.conscrypt.Java8EngineWrapper.wrap(Java8EngineWrapper.java:56)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:997)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:803)
    at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:771)
    at io.netty.handler.ssl.SslHandler.handleUnwrapThrowable(SslHandler.java:1207)
    at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1183)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: javax.net.ssl.SSLProtocolException: SSL handshake aborted: ssl=0x7f129800be38: Failure in SSL library, usually a protocol error error:10000416:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN (../ssl/tls_record.cc:586 0x7f1298870be8:0x00000001)
    at org.conscrypt.NativeCrypto.ENGINE_SSL_do_handshake(Native Method)
    at org.conscrypt.NativeSsl.doHandshake(NativeSsl.java:391)
    at org.conscrypt.ConscryptEngine.handshake(ConscryptEngine.java:978)
    ... 27 common frames omitted
18:01:41.432 [nioEventLoopGroup-701-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 6 thread-local buffer(s) from thread: nioEventLoopGroup-701-1
18:01:41.439 [nioEventLoopGroup-702-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-702-1
18:01:41.563 [main] DEBUG i.n.h.s.u.InsecureTrustManagerFactory - Accepting a server certificate: CN=example.com
18:01:41.691 [nioEventLoopGroup-704-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xaea8b212, L:/127.0.0.1:34256 - R:/127.0.0.1:60070] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:41.692 [nioEventLoopGroup-705-1] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x045d2f89, L:/127.0.0.1:60070 - R:localhost/127.0.0.1:34256] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
18:01:41.697 [nioEventLoopGroup-705-1] WARN i.n.c.AbstractChannelHandlerContext - Failed to mark a promise as failure because it has succeeded already: DefaultChannelPromise@80355a8(success)
javax.net.ssl.SSLException: java.lang.NullPointerException: bio == null
    at org.conscrypt.SSLUtils.toSSLException(SSLUtils.java:341)
    at org.conscrypt.ConscryptEngine.convertException(ConscryptEngine.java:1151)
    at org.conscrypt.ConscryptEngine.readPendingBytesFromBIO(ConscryptEngine.java:1267)
    at org.conscrypt.ConscryptEngine.wrap(ConscryptEngine.java:1411)
    at java.base/javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511)
    at org.conscrypt.Java8EngineWrapper.wrap(Java8EngineWrapper.java:56)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:997)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:803)
    at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:771)
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:752)
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:1625)
    at io.netty.handler.ssl.SslHandler.closeOutboundAndChannel(SslHandler.java:1593)
    at io.netty.handler.ssl.SslHandler.close(SslHandler.java:710)
    at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
    at io.netty.channel.AbstractChannelHandlerContext.access$1100(AbstractChannelHandlerContext.java:38)
    at io.netty.channel.AbstractChannelHandlerContext$13.run(AbstractChannelHandlerContext.java:613)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.NullPointerException: bio == null
    at org.conscrypt.NativeCrypto.SSL_pending_written_bytes_in_BIO(Native Method)
    at org.conscrypt.NativeSsl$BioWrapper.getPendingWrittenBytes(NativeSsl.java:571)
    at org.conscrypt.ConscryptEngine.pendingOutboundEncryptedBytes(ConscryptEngine.java:553)
    at org.conscrypt.ConscryptEngine.readPendingBytesFromBIO(ConscryptEngine.java:1236)
    ... 19 common frames omitted
18:01:41.749 [nioEventLoopGroup-704-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-704-1
18:01:41.771 [nioEventLoopGroup-705-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 11 thread-local buffer(s) from thread: nioEventLoopGroup-705-1
```

Most interesting is this:
```
javax.net.ssl.SSLException: java.lang.NullPointerException: bio == null
    at org.conscrypt.SSLUtils.toSSLException(SSLUtils.java:341)
    at org.conscrypt.ConscryptEngine.convertException(ConscryptEngine.java:1151)
    at org.conscrypt.ConscryptEngine.readPendingBytesFromBIO(ConscryptEngine.java:1267)
    at org.conscrypt.ConscryptEngine.wrap(ConscryptEngine.java:1411)
    at java.base/javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511)
    at org.conscrypt.Java8EngineWrapper.wrap(Java8EngineWrapper.java:56)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:997)
    at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:803)
    at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:771)
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:752)
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:1625)
    at io.netty.handler.ssl.SslHandler.closeOutboundAndChannel(SslHandler.java:1593)
    at io.netty.handler.ssl.SslHandler.close(SslHandler.java:710)
    at
```
I wonder if this is just a bug in conscrypt ?
[ "pom.xml" ]
[ "pom.xml" ]
[]
```diff
diff --git a/pom.xml b/pom.xml
index 4cb89d220c5..71e3e4b2da7 100644
--- a/pom.xml
+++ b/pom.xml
@@ -225,7 +225,7 @@
     <tcnative.classifier>${os.detected.classifier}</tcnative.classifier>
     <conscrypt.groupId>org.conscrypt</conscrypt.groupId>
     <conscrypt.artifactId>conscrypt-openjdk-uber</conscrypt.artifactId>
-    <conscrypt.version>1.1.2</conscrypt.version>
+    <conscrypt.version>1.1.3</conscrypt.version>
     <conscrypt.classifier />
     <jni.classifier>${os.detected.name}-${os.detected.arch}</jni.classifier>
     <logging.config>${project.basedir}/../common/src/test/resources/logback-test.xml</logging.config>
```
null
test
val
"2018-06-04T18:09:42"
"2018-05-30T18:25:54Z"
normanmaurer
val
netty/netty/8002_8009
netty/netty
netty/netty/8002
netty/netty/8009
[ "keyword_pr_to_issue" ]
b192bf12ad2af92bba0f32c9d3127b1192e54670
abe77511b99b93a7ec98ddea78243ff6e6235107
[ "@vsabella I guess we should just remove it completely. \r\n\r\n@carl-mastrangelo @ejona86 @Scottmitch thoughts ?", "Also @mosesn @bryce-anderson WDYT ?", "Looks like dead code to me and `Http2CodecUtil` is still tagged `@UnstableApi` so removing it seems like the right move.", "@bryce-anderson sounds good... Will try to find time to write a patch or if you want you could also if you have time 👍 ", "I can take it. Unless it is actually in use it should be trivial to remove.", "Thanks that would be awesome\n\n> Am 06.06.2018 um 14:28 schrieb Bryce Anderson <notifications@github.com>:\n> \n> I can take it. Unless it is actually in use it should be trivial to remove.\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Thanks @bryce-anderson " ]
[]
"2018-06-06T21:47:16Z"
[ "cleanup" ]
Http2CodecUtil.java - emptyPingBuf() is no longer needed (as writePing takes a long)
https://github.com/netty/netty/blob/b192bf12ad2af92bba0f32c9d3127b1192e54670/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java#L177

In https://github.com/netty/netty/commit/501662a77f8166382eefd18a8620983d27f59d4a the empty ping data was changed to a long (instead of buffer data), so `emptyPingBuf()` should be updated.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java" ]
[]
```diff
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 317ee48063d..7ebc8fd4888 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -67,9 +67,6 @@ public final class Http2CodecUtil {
     private static final ByteBuf CONNECTION_PREFACE =
             unreleasableBuffer(directBuffer(24).writeBytes("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n".getBytes(UTF_8)))
             .asReadOnly();
-    private static final ByteBuf EMPTY_PING =
-            unreleasableBuffer(directBuffer(PING_FRAME_PAYLOAD_LENGTH).writeZero(PING_FRAME_PAYLOAD_LENGTH))
-            .asReadOnly();
 
     private static final int MAX_PADDING_LENGTH_LENGTH = 1;
     public static final int DATA_FRAME_HEADER_LENGTH = FRAME_HEADER_LENGTH + MAX_PADDING_LENGTH_LENGTH;
@@ -169,14 +166,6 @@ public static ByteBuf connectionPrefaceBuf() {
         return CONNECTION_PREFACE.retainedDuplicate();
     }
 
-    /**
-     * Returns a buffer filled with all zeros that is the appropriate length for a PING frame.
-     */
-    public static ByteBuf emptyPingBuf() {
-        // Return a duplicate so that modifications to the reader index will not affect the original buffer.
-        return EMPTY_PING.retainedDuplicate();
-    }
-
     /**
      * Iteratively looks through the causality chain for the given exception and returns the first
      * {@link Http2Exception} or {@code null} if none.
```
null
train
val
"2018-06-04T20:40:08"
"2018-06-04T23:05:38Z"
vsabella
val
netty/netty/8041_8067
netty/netty
netty/netty/8041
netty/netty/8067
[ "timestamp(timedelta=21445.0, similarity=0.8587414992007502)" ]
a214f2eb9692040cf45e20739a7dd319f47ff8c8
94e5676a1fd85653ac2f0cb11abf394718214718
[ "A simple solution might be to add something like\r\n```java\r\n} else if (connection().goAwayReceived()) {\r\n ReferenceCountUtil.release(frame);\r\n promise.setFailure(Http2Exception.streamError(-1, Http2Error.REFUSED_STREAM, \"Received GOAWAY before stream was initialized.\")\r\n );\r\n return;\r\n}\r\n```\r\n[here](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java#L1102) in the write method of the `Http2MultiplexCodec.DefaultHttp2StreamChannel.Http2ChannelUnsafe`.", "And I'm happy to work on this once we have a consensus on direction.", "> PROTOCOL_EXCEPTION that gets sent to the server and the connection gets torn down.\r\n\r\nCan you clarify exactly what is being sent to the server? If the connection is being hard-shutdown that sounds like a problem, but sending a GO_AWAY and initiating local graceful close isn't necessarily a problem IIUC (this just prevents the server from pushing any streams). The rational accompanied with the GO_AWAY is a bit awkward from the perspective of the server and is somewhat leaking client state ... but the rational is only meant to be informative anyways. 
Can you clarify if this is causing a problem or breaking something?\r\n\r\nIf we no longer want to send a GO_AWAY and initiate graceful closure from the local end point in this scenario I would propose the following patch (or something similar):\r\n\r\n```diff\r\ndiff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java\r\nindex 4251317457..1b6f15bc44 100644\r\n--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java\r\n+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java\r\n@@ -864,9 +864,9 @@ public class DefaultHttp2Connection implements Http2Connection {\r\n private void checkNewStreamAllowed(int streamId, State state) throws Http2Exception {\r\n assert state != IDLE;\r\n if (goAwayReceived() && streamId > localEndpoint.lastStreamKnownByPeer()) {\r\n- throw connectionError(PROTOCOL_ERROR, \"Cannot create stream %d since this endpoint has received a \" +\r\n- \"GOAWAY frame with last stream id %d.\", streamId,\r\n- localEndpoint.lastStreamKnownByPeer());\r\n+ throw streamError(streamId, REFUSED_STREAM,\r\n+ \"Cannot create stream %d since this endpoint has received a GOAWAY frame with last stream id %d.\",\r\n+ streamId, localEndpoint.lastStreamKnownByPeer());\r\n }\r\n if (!isValidStreamId(streamId)) {\r\n if (streamId < 0) {\r\n```\r\n\r\nThe rational being the actual H2 logic resides in the lower level API and we should ensure consistent behavior between the child channel API and lower level API.", "This is causing an issue as the client sends a GOAWAY and closes the connection on flush. 
The exception is generated in the [`DefaultHttp2ConnectionEncoder`](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java#L166) which passes it to the lifecycle manager [here](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java#L234), which passes it to the [`Http2ConnectionHandler`](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L617), and ultimately the `onConnectionError` method and the hard shutdown pathway [here](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L654).\r\n\r\nI think the patch you're proposing will result in a stream exception which is fine from the side of the client, but I think it will result in sending a RST_STREAM to the peer for a stream in the idle state which is a connection level protocol error. We could try to intercept it for the case of 'trying to initiate a stream' but I suspect it will start to get complex. Alternatively, maybe we can just add the graceful shutdown hint to the connection error for this case, though threading that info through might also be pretty complex. If things are working right, that should put us in the 'not a problem' behavior you mentioned above where the client sends a GOAWAY but it's a draining GOAWAY. This is still pretty awkward since it's triggered by attempting to initiate a new stream as opposed to being a response to a received GOAWAY, and it signals draining of inbound streams in response to an outbound stream concern, and the client didn't signal that it wasn't interested in pushed streams anymore just because the server won't accept new streams.\r\n\r\nWhat is the pattern that the users of the non-multiplexcodec API should use to ensure they don't start a new stream after receiving a GOAWAY? 
In our usage of the old-style API we handle GOAWAY frames manually and stop accepting new streams at that point [here](https://github.com/twitter/finagle/blob/develop/finagle-http2/src/main/scala/com/twitter/finagle/http2/transport/StreamTransportFactory.scala#L125). Attempts to initiate new dispatches check this state and fail if we've gotten a GOAWAY. This section of code has more or less the same job as the stream channels of Http2MultiplexCodec so that might be why it feels natural to do the state checking there.", "@Scottmitch, do you have any more suggestions/opinions? I'd like to get this into 4.1.26 if at all possible, which I believe is knocking at the door. That said, I think it's more desirable to be correct than released earlier. 😄", "I went ahead and opened PR https://github.com/netty/netty/pull/8057 with what feels like the right thing to do based on our usage of the old Http2 API, namely that we had to handle ensuring a GOAWAY hadn't been issued already ourselves. I put it up eagerly so that it has a chance of getting into Netty 4.1.26 if it's the right thing to do, but I'm happy to throw that away if we can come to a decision that something else should be done.", "Sorry was AFK for the past few days ...\r\n\r\n> I think it will result in sending a RST_STREAM to the peer for a stream in the idle state \r\n\r\nGood point. We would need to avoid violating the protocol. Failing the promise should be sufficient.\r\n\r\n> the client didn't signal that it wasn't interested in pushed streams anymore just because the server won't accept new streams.\r\n\r\nAnother good point. I agree that forcing a GO_AWAY doesn't make much sense in this scenario.\r\n\r\nIn general I think we should make the fix in the interface API which is responsible for the core H2 logic instead of fragmenting the logic in the child channel API. Everything above (e.g. 
child channel API) would then inherit the fix.", "> In general I think we should make the fix in the interface API which is responsible for the core H2 logic instead of fragmenting the logic in the child channel API. Everything above (e.g. child channel API) would then inherit the fix.\r\n\r\nFailing the promise on such a failed write should be sufficient to get the behavior I want and will work for all APIs. We can fail the promise for both API's either in the `Http2FrameCodec` or in the `DefaultHttp2ConnectionEncoder`. If we modify the `Endpoint.incrementAndGetNextStreamId()` method to consider received GOAWAY's it will fail the promise in `Http2FrameCodec` which feels pretty good.\r\n\r\nI'd prefer the change to `incrementAndGetNextStreamId()` unless we don't consider it part of the `Endpoint`s job to validate that possibility. It seems like it should be.", "See PR https://github.com/netty/netty/pull/8069 for my proposal. I think the change in `Http2ConnectionHandler` is useful either way bcz as you pointed out we may end up inadvertently sending a RST on a stream that never existed which would kill the connection.\r\n\r\n> I'd prefer the change to incrementAndGetNextStreamId()\r\n\r\nCan you clarify this suggestion. It isn't clear how `incrementAndGetNextStreamId` would be changed (e.g. would it throw, would it know about the promise, etc...).", "@Scottmitch, see https://github.com/netty/netty/pull/8067 for what I mean by changing `incrementAndGetNextStreamId()`. It already has the semantics of returning negative values to signal that a stream id couldn't be assigned and this is extended to cover the case that a GOAWAY has been issued by the peer.\r\n\r\nA quick skim of your PR looks like it would probably accomplish the same goal. It would have the result in burning reserved stream id's that would go unused, but I don't think that's of any real consequence. 
I don't see a reason that both changes couldn't coexist in case someone wants to use the `Http2ConnectionHandler` and not the `Http2FrameCodec`.", "This is fixed now, though we've unfortunately hit the situation where the client will send a RST_STREAM(cancel) on an idle stream. I have a couple PRs up, any of which should fix it https://github.com/netty/netty/pull/8067, https://github.com/netty/netty/pull/8086, though the fallout is different for each.", "https://github.com/netty/netty/pull/8086 has been merged, so I think this can be closed." ]
[ "Would it be cleaner to instead set `nextReservationStreamId = -1` (or Integer.MIN_VALUE) in `goAwayReceived()`?", "Perhaps. The connection between `nextReservationStreamId` and `nextStreamIdToCreate` is somewhat subtle, but it looks safe to me. I'm happy to change it if you like.", "I wonder if we should instead change `Http2FrameCodec` from this....\r\n\r\n```java\r\nfinal int streamId = connection.local().incrementAndGetNextStreamId();\r\nif (streamId < 0) {\r\n promise.setFailure(new Http2NoMoreStreamIdsException());\r\n return;\r\n}\r\nstream.id = streamId;\r\n```\r\n\r\nto this...\r\n\r\n```java\r\nfinal Http2Stream oldStream;\r\ntry {\r\n oldStream = connection.local().createStream(connection.local().incrementAndGetNextStreamId(), false);\r\n} catch (Throwable cause) {\r\n promise.setFailure(cause);\r\n return;\r\n}\r\nstream.id = oldStream.id();\r\n```\r\n\r\nthis way the `Http2FrameCodec` doesn't have to make assumptions about `Http2NoMoreStreamIdsException` or create any new failure cases.", "Right now the stream creation is driven almost entirely in the `Connection[Encoder|Decoder]`s (with the exception being h2c upgrade pathways and a stream error handling pathway for decoding headers that were too large). It makes a fair amount of sense to have it in the Encoder/Decoder due to the requirement of monotonically increasing stream-id's: having stream creation in the Encoder/Decoder helps ensure that we put HEADERS frames on the socket in the right order.\r\n\r\nThat said, it would probably work. I also don't think it's a good idea to be dolling out stream id's that look valid but aren't, but that's somewhat orthogonal other than combining both changes would result in throwing a [`Http2NoMoreStreamIdsException`](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java#L873) instead of the synthetic RST_STREAM you have in https://github.com/netty/netty/pull/8069. 
That is arguably a good thing since, as @ejona86 noted, we don't want to trigger retries against the same session.", "humm seems like my suggestion doesn't play nicely with the `Http2FrameCodecBuilder.forServer().encoderEnforceMaxConcurrentStreams(true)` case which delays the stream creation because it relies upon lower layers to create the streams." ]
"2018-06-27T16:49:08Z"
[]
Http2MultiplexCodec in client mode can trigger a GOAWAY if it creates a stream after getting a GOAWAY
### Expected behavior When a client attempts to utilize a new stream channel using Http2MultiplexCodec _after_ receiving a GOAWAY, the client will gracefully fail the stream channel with an exception that signifies the stream wasn't processed, perhaps such as a RST_STREAM(REFUSED_STREAM). ### Actual behavior When a client attempts to create a new stream channel using Http2MultiplexCodec _after_ receiving a GOAWAY, the client will throw its own PROTOCOL_EXCEPTION that gets sent to the server and the connection gets torn down. It's also debatable as to whether the server should be actively tearing down the connection due to that message since it is still processing client streams. ### Steps to reproduce Establish a client connection using Http2MultiplexCodec, have the server send a GOAWAY with one outstanding client stream, and attempt to create another client stream. You can observe a client-sent GOAWAY(last processed stream: 0) with a message similar to: ``` Cannot create stream 5 since this endpoint has received a GOAWAY frame with last stream id 3. ``` ### Minimal yet complete reproducer code (or URL to code) ### Netty version 4.1.26.Final-SNAPSHOT (as of 6/20/2018)
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java index 42513174572..4204945c323 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java @@ -708,7 +708,8 @@ private final class DefaultEndpoint<F extends Http2FlowController> implements En @Override public int incrementAndGetNextStreamId() { - return nextReservationStreamId >= 0 ? nextReservationStreamId += 2 : nextReservationStreamId; + return goAwayReceived() ? -1 : + nextReservationStreamId >= 0 ? nextReservationStreamId += 2 : nextReservationStreamId; } private void incrementExpectedStreamId(int streamId) {
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java index 21239860472..55d81bdbb98 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameCodecTest.java @@ -44,6 +44,7 @@ import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.concurrent.Promise; import io.netty.util.internal.ReflectionUtil; +import org.hamcrest.Matchers; import org.junit.After; import org.junit.Assume; import org.junit.Before; @@ -58,13 +59,7 @@ import static io.netty.handler.codec.http2.Http2CodecUtil.isStreamIdValid; import static org.hamcrest.Matchers.instanceOf; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertThat; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; +import static org.junit.Assert.*; import static org.mockito.Mockito.any; import static org.mockito.Mockito.anyBoolean; import static org.mockito.Mockito.anyInt; @@ -402,6 +397,20 @@ public void goAwayLastStreamIdOverflowed() throws Exception { assertTrue(channel.isActive()); } + @Test + public void outboundStreamAfterGoAwayFails() throws Exception { + final Http2FrameStream stream = frameCodec.newStream(); + frameListener.onGoAwayRead(http2HandlerCtx, 0, Http2Error.NO_ERROR.code(), Unpooled.EMPTY_BUFFER); + + assertTrue(frameCodec.connection().goAwayReceived()); + assertTrue(channel.isOpen()); + ChannelFuture f = channel.writeAndFlush( + new DefaultHttp2HeadersFrame(new DefaultHttp2Headers(), false).stream(stream)); + + assertFalse(f.isSuccess()); + assertThat(f.cause(), instanceOf(Http2NoMoreStreamIdsException.class)); + } + @Test public void streamErrorShouldFireExceptionForInbound() 
throws Exception { frameListener.onHeadersRead(http2HandlerCtx, 3, request, 31, false);
val
val
"2018-06-27T10:20:59"
"2018-06-20T22:39:07Z"
bryce-anderson
val
netty/netty/8107_8109
netty/netty
netty/netty/8107
netty/netty/8109
[ "timestamp(timedelta=0.0, similarity=0.9327315514790757)" ]
cda4f88ca247d3a028deed52e6024cbf5a880b12
93d2807ff0eb6d886a4da8fc52a97ea7bbc056b5
[ "@apimastery good catch... See https://github.com/netty/netty/pull/8109" ]
[]
"2018-07-09T12:39:13Z"
[]
Log4J2 Auto-detection
Not sure if I'm missing something... [#5047](https://github.com/netty/netty/pull/5047) added Log4J2LoggerFactory and Log4J2Logger. It appears that Log4J2 auto-detection in InternalLoggerFactory wasn't added? https://github.com/netty/netty/blob/fef462c04333950201ddbfd505c0af730a0c2ffd/common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java#L39 Cheers!
[ "common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java" ]
[ "common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java b/common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java index 9f85e3646b4..12c1b5a4477 100644 --- a/common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java +++ b/common/src/main/java/io/netty/util/internal/logging/InternalLoggerFactory.java @@ -41,13 +41,18 @@ private static InternalLoggerFactory newDefaultFactory(String name) { try { f = new Slf4JLoggerFactory(true); f.newInstance(name).debug("Using SLF4J as the default logging framework"); - } catch (Throwable t1) { + } catch (Throwable ignore1) { try { f = Log4JLoggerFactory.INSTANCE; f.newInstance(name).debug("Using Log4J as the default logging framework"); - } catch (Throwable t2) { - f = JdkLoggerFactory.INSTANCE; - f.newInstance(name).debug("Using java.util.logging as the default logging framework"); + } catch (Throwable ignore2) { + try { + f = Log4J2LoggerFactory.INSTANCE; + f.newInstance(name).debug("Using Log4J2 as the default logging framework"); + } catch (Throwable ignore3) { + f = JdkLoggerFactory.INSTANCE; + f.newInstance(name).debug("Using java.util.logging as the default logging framework"); + } } } return f;
null
test
val
"2018-07-09T09:50:26"
"2018-07-09T00:08:32Z"
apimastery
val
netty/netty/8078_8110
netty/netty
netty/netty/8078
netty/netty/8110
[ "timestamp(timedelta=0.0, similarity=0.9102304495225041)" ]
97361fa2c89da57e88762aaca9e2b186e8c148f5
d8e59ca638b8aa4fe4d18c52c70bd92a0150de58
[ "Same goes for BufferedReader instances - same finalize problem. It's used in DNS and some other modules.", "@leventov good catch... Check https://github.com/netty/netty/pull/8110" ]
[ "This looks like it leaks, since you need to close `RandomAccessFile`. The documentation for `getChannel()` does not say closing the channel closes the `RandomAccessFile`. The documentation for `RandomAccessFile.close()` notes the opposite, that closing the file closes the channel. Looking at the source, it seems the channel holds an extra ref, so closing it does not close the `fd` held by `RandomAccessFile`.", "@ejona86 maybe I am missing something but we call `output.close()` in the finally block so it should be fine ?", "At risk of repeating myself: `output` is the _channel_. But you need to close the `RandomAccessFile`. It is subtle.", "@ejona86 sorry I missed this... This is not correct, closing the `FileChannel` here will also close the `RandomAccessFile` from which it was created. See also http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/4a1e42601d61/src/share/classes/sun/nio/ch/FileChannelImpl.java#l138", "Hmm... the decrement for the ref is also in RandomAccessFile, so the channel _must_ close the file. Okay. Very well. I continue to be amazed at how poor some documentation is to define resource management..." ]
"2018-07-09T13:08:53Z"
[ "improvement" ]
Avoid creating FileInputStream and FileOutputStream for obtaining FileChannel
There are multiple places in the production code of the project, e.g. in `ChunkedNioFile`, where `FileInputStream` or `FileOutputStream` are created solely to obtain a `FileChannel` from them. Until the Netty project bumps the compatibility level to Java 7 (where `FileChannel.open()` methods were added), it's better to obtain a FileChannel by means of creating a `RandomAccessFile`, because the latter doesn't override `Object.finalize()`. Also, I would extract those pieces of code into utility methods like `openChannelForRead(File)` and `openChannelForWrite(File)`.
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java", "codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java", "handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java", "codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java", "handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java" ]
[ "buffer/src/test/java/io/netty/buffer/ReadOnlyDirectByteBufferBufTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java index 544bc7c82da..94309186b41 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractDiskHttpData.java @@ -22,10 +22,9 @@ import io.netty.util.internal.logging.InternalLoggerFactory; import java.io.File; -import java.io.FileInputStream; -import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; +import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.nio.charset.Charset; @@ -125,9 +124,10 @@ public void setContent(ByteBuf buffer) throws IOException { } return; } - FileOutputStream outputStream = new FileOutputStream(file); + RandomAccessFile accessFile = new RandomAccessFile(file, "rw"); + accessFile.setLength(0); try { - FileChannel localfileChannel = outputStream.getChannel(); + FileChannel localfileChannel = accessFile.getChannel(); ByteBuffer byteBuffer = buffer.nioBuffer(); int written = 0; while (written < size) { @@ -136,7 +136,7 @@ public void setContent(ByteBuf buffer) throws IOException { buffer.readerIndex(buffer.readerIndex() + written); localfileChannel.force(false); } finally { - outputStream.close(); + accessFile.close(); } setCompleted(); } finally { @@ -163,8 +163,8 @@ public void addContent(ByteBuf buffer, boolean last) file = tempFile(); } if (fileChannel == null) { - FileOutputStream outputStream = new FileOutputStream(file); - fileChannel = outputStream.getChannel(); + RandomAccessFile accessFile = new RandomAccessFile(file, "rw"); + fileChannel = accessFile.getChannel(); } while (written < localsize) { written += fileChannel.write(byteBuffer); @@ -182,8 +182,8 @@ public void addContent(ByteBuf buffer, boolean last) file = tempFile(); } if 
(fileChannel == null) { - FileOutputStream outputStream = new FileOutputStream(file); - fileChannel = outputStream.getChannel(); + RandomAccessFile accessFile = new RandomAccessFile(file, "rw"); + fileChannel = accessFile.getChannel(); } fileChannel.force(false); fileChannel.close(); @@ -217,10 +217,11 @@ public void setContent(InputStream inputStream) throws IOException { delete(); } file = tempFile(); - FileOutputStream outputStream = new FileOutputStream(file); + RandomAccessFile accessFile = new RandomAccessFile(file, "rw"); + accessFile.setLength(0); int written = 0; try { - FileChannel localfileChannel = outputStream.getChannel(); + FileChannel localfileChannel = accessFile.getChannel(); byte[] bytes = new byte[4096 * 4]; ByteBuffer byteBuffer = ByteBuffer.wrap(bytes); int read = inputStream.read(bytes); @@ -232,7 +233,7 @@ public void setContent(InputStream inputStream) throws IOException { } localfileChannel.force(false); } finally { - outputStream.close(); + accessFile.close(); } size = written; if (definedSize > 0 && definedSize < size) { @@ -290,8 +291,8 @@ public ByteBuf getChunk(int length) throws IOException { return EMPTY_BUFFER; } if (fileChannel == null) { - FileInputStream inputStream = new FileInputStream(file); - fileChannel = inputStream.getChannel(); + RandomAccessFile accessFile = new RandomAccessFile(file, "r"); + fileChannel = accessFile.getChannel(); } int read = 0; ByteBuffer byteBuffer = ByteBuffer.allocate(length); @@ -349,15 +350,15 @@ public boolean renameTo(File dest) throws IOException { if (!file.renameTo(dest)) { // must copy IOException exception = null; - FileInputStream inputStream = null; - FileOutputStream outputStream = null; + RandomAccessFile inputAccessFile = null; + RandomAccessFile outputAccessFile = null; long chunkSize = 8196; long position = 0; try { - inputStream = new FileInputStream(file); - outputStream = new FileOutputStream(dest); - FileChannel in = inputStream.getChannel(); - FileChannel out = 
outputStream.getChannel(); + inputAccessFile = new RandomAccessFile(file, "r"); + outputAccessFile = new RandomAccessFile(dest, "rw"); + FileChannel in = inputAccessFile.getChannel(); + FileChannel out = outputAccessFile.getChannel(); while (position < size) { if (chunkSize < size - position) { chunkSize = size - position; @@ -367,9 +368,9 @@ public boolean renameTo(File dest) throws IOException { } catch (IOException e) { exception = e; } finally { - if (inputStream != null) { + if (inputAccessFile != null) { try { - inputStream.close(); + inputAccessFile.close(); } catch (IOException e) { if (exception == null) { // Choose to report the first exception exception = e; @@ -378,9 +379,9 @@ public boolean renameTo(File dest) throws IOException { } } } - if (outputStream != null) { + if (outputAccessFile != null) { try { - outputStream.close(); + outputAccessFile.close(); } catch (IOException e) { if (exception == null) { // Choose to report the first exception exception = e; @@ -422,17 +423,17 @@ private static byte[] readFrom(File src) throws IOException { throw new IllegalArgumentException( "File too big to be loaded in memory"); } - FileInputStream inputStream = new FileInputStream(src); + RandomAccessFile accessFile = new RandomAccessFile(src, "r"); byte[] array = new byte[(int) srcsize]; try { - FileChannel fileChannel = inputStream.getChannel(); + FileChannel fileChannel = accessFile.getChannel(); ByteBuffer byteBuffer = ByteBuffer.wrap(array); int read = 0; while (read < srcsize) { read += fileChannel.read(byteBuffer); } } finally { - inputStream.close(); + accessFile.close(); } return array; } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java index 4cb7e567b25..5a0f322d4dc 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java +++ 
b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/AbstractMemoryHttpData.java @@ -20,10 +20,9 @@ import io.netty.handler.codec.http.HttpConstants; import java.io.File; -import java.io.FileInputStream; -import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; +import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.nio.charset.Charset; @@ -131,8 +130,8 @@ public void setContent(File file) throws IOException { throw new IllegalArgumentException("File too big to be loaded in memory"); } checkSize(newsize); - FileInputStream inputStream = new FileInputStream(file); - FileChannel fileChannel = inputStream.getChannel(); + RandomAccessFile accessFile = new RandomAccessFile(file, "r"); + FileChannel fileChannel = accessFile.getChannel(); byte[] array = new byte[(int) newsize]; ByteBuffer byteBuffer = ByteBuffer.wrap(array); int read = 0; @@ -140,7 +139,7 @@ public void setContent(File file) throws IOException { read += fileChannel.read(byteBuffer); } fileChannel.close(); - inputStream.close(); + accessFile.close(); byteBuffer.flip(); if (byteBuf != null) { byteBuf.release(); @@ -232,8 +231,8 @@ public boolean renameTo(File dest) throws IOException { return true; } int length = byteBuf.readableBytes(); - FileOutputStream outputStream = new FileOutputStream(dest); - FileChannel fileChannel = outputStream.getChannel(); + RandomAccessFile accessFile = new RandomAccessFile(dest, "rw"); + FileChannel fileChannel = accessFile.getChannel(); int written = 0; if (byteBuf.nioBufferCount() == 1) { ByteBuffer byteBuffer = byteBuf.nioBuffer(); @@ -249,7 +248,7 @@ public boolean renameTo(File dest) throws IOException { fileChannel.force(false); fileChannel.close(); - outputStream.close(); + accessFile.close(); return written == length; } diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java b/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java index 
339a3e57769..0743a0dc65f 100644 --- a/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java +++ b/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java @@ -21,8 +21,8 @@ import io.netty.channel.FileRegion; import java.io.File; -import java.io.FileInputStream; import java.io.IOException; +import java.io.RandomAccessFile; import java.nio.channels.FileChannel; /** @@ -45,7 +45,7 @@ public class ChunkedNioFile implements ChunkedInput<ByteBuf> { * Creates a new instance that fetches data from the specified file. */ public ChunkedNioFile(File in) throws IOException { - this(new FileInputStream(in).getChannel()); + this(new RandomAccessFile(in, "r").getChannel()); } /** @@ -55,7 +55,7 @@ public ChunkedNioFile(File in) throws IOException { * {@link #readChunk(ChannelHandlerContext)} call */ public ChunkedNioFile(File in, int chunkSize) throws IOException { - this(new FileInputStream(in).getChannel(), chunkSize); + this(new RandomAccessFile(in, "r").getChannel(), chunkSize); } /**
diff --git a/buffer/src/test/java/io/netty/buffer/ReadOnlyDirectByteBufferBufTest.java b/buffer/src/test/java/io/netty/buffer/ReadOnlyDirectByteBufferBufTest.java index 5af25396017..f92d5766824 100644 --- a/buffer/src/test/java/io/netty/buffer/ReadOnlyDirectByteBufferBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/ReadOnlyDirectByteBufferBufTest.java @@ -21,9 +21,8 @@ import java.io.ByteArrayInputStream; import java.io.File; -import java.io.FileInputStream; -import java.io.FileOutputStream; import java.io.IOException; +import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.ReadOnlyBufferException; import java.nio.channels.FileChannel; @@ -307,12 +306,12 @@ public void testWrapMemoryMapped() throws Exception { ByteBuf b2 = null; try { - output = new FileOutputStream(file).getChannel(); + output = new RandomAccessFile(file, "rw").getChannel(); byte[] bytes = new byte[1024]; PlatformDependent.threadLocalRandom().nextBytes(bytes); output.write(ByteBuffer.wrap(bytes)); - input = new FileInputStream(file).getChannel(); + input = new RandomAccessFile(file, "r").getChannel(); ByteBuffer m = input.map(FileChannel.MapMode.READ_ONLY, 0, input.size()); b1 = buffer(m); diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java index 881acd1bcff..ae858254259 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java @@ -32,9 +32,9 @@ import org.junit.Test; import java.io.File; -import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; +import java.io.RandomAccessFile; import java.nio.channels.WritableByteChannel; import java.util.Random; import java.util.concurrent.atomic.AtomicReference; @@ -121,7 +121,7 @@ protected void channelRead0(ChannelHandlerContext 
ctx, ByteBuf msg) { // Request file region which is bigger then the underlying file. FileRegion region = new DefaultFileRegion( - new FileInputStream(file).getChannel(), 0, data.length + 1024); + new RandomAccessFile(file, "r").getChannel(), 0, data.length + 1024); assertThat(cc.writeAndFlush(region).await().cause(), CoreMatchers.<Throwable>instanceOf(IOException.class)); cc.close().sync(); @@ -183,8 +183,8 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E Channel cc = cb.connect(sc.localAddress()).sync().channel(); FileRegion region = new DefaultFileRegion( - new FileInputStream(file).getChannel(), startOffset, data.length - bufferSize); - FileRegion emptyRegion = new DefaultFileRegion(new FileInputStream(file).getChannel(), 0, 0); + new RandomAccessFile(file, "r").getChannel(), startOffset, data.length - bufferSize); + FileRegion emptyRegion = new DefaultFileRegion(new RandomAccessFile(file, "r").getChannel(), 0, 0); if (!defaultFileRegion) { region = new FileRegionWrapper(region);
val
val
"2019-08-16T15:18:17"
"2018-06-28T13:58:11Z"
leventov
val
netty/netty/8122_8124
netty/netty
netty/netty/8122
netty/netty/8124
[ "timestamp(timedelta=0.0, similarity=0.870323260503237)" ]
785473788f3f19531294802b727fb10a48938222
182ffdaf6d4c067514fc70b00c4a5abc67753531
[ "sounds good.", "Thanks for the PR. Looks good.", "b109, should we be checking build too?", "@johnou honestly I think we either should either just assume its ok if people use java8 (and if its not they should really upgrade) or only not use it for java9 and higher. /cc @hc-codersatlas WDYT ?", "@normanmaurer agree" ]
[ "Not a part of this commit, but is this commented out code necessary? Same above.", "Optional: consider splitting the manual safepointer to make the inliner happy." ]
"2018-07-11T11:30:50Z"
[ "cleanup" ]
Only apply copyMemory safepoint for Java <= 8
Hi, as per https://bugs.openjdk.java.net/browse/JDK-8149596 (and the source code of java.nio.Bits) safepoint polling is no longer required (or rather it shouldn't be done). It's included as part of Unsafe.copyMemory. Can we add a check to PlatformDependent where this occurs for Java version <= 8 so we don't essentially do safepoint polling twice? Thanks.
[ "common/src/main/java/io/netty/util/internal/PlatformDependent0.java" ]
[ "common/src/main/java/io/netty/util/internal/PlatformDependent0.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java index c6ebb55131d..53e9ed7bdcf 100644 --- a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java +++ b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java @@ -550,7 +550,16 @@ static void putLong(byte[] data, int index, long value) { } static void copyMemory(long srcAddr, long dstAddr, long length) { - //UNSAFE.copyMemory(srcAddr, dstAddr, length); + // Manual safe-point polling is only needed prior Java9: + // See https://bugs.openjdk.java.net/browse/JDK-8149596 + if (javaVersion() <= 8) { + copyMemoryWithSafePointPolling(srcAddr, dstAddr, length); + } else { + UNSAFE.copyMemory(srcAddr, dstAddr, length); + } + } + + private static void copyMemoryWithSafePointPolling(long srcAddr, long dstAddr, long length) { while (length > 0) { long size = Math.min(length, UNSAFE_COPY_THRESHOLD); UNSAFE.copyMemory(srcAddr, dstAddr, size); @@ -561,7 +570,17 @@ static void copyMemory(long srcAddr, long dstAddr, long length) { } static void copyMemory(Object src, long srcOffset, Object dst, long dstOffset, long length) { - //UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, length); + // Manual safe-point polling is only needed prior Java9: + // See https://bugs.openjdk.java.net/browse/JDK-8149596 + if (javaVersion() <= 8) { + copyMemoryWithSafePointPolling(src, srcOffset, dst, dstOffset, length); + } else { + UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, length); + } + } + + private static void copyMemoryWithSafePointPolling( + Object src, long srcOffset, Object dst, long dstOffset, long length) { while (length > 0) { long size = Math.min(length, UNSAFE_COPY_THRESHOLD); UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, size);
null
val
val
"2018-08-18T07:28:31"
"2018-07-11T04:17:06Z"
re-thc
val
netty/netty/8134_8136
netty/netty
netty/netty/8134
netty/netty/8136
[ "timestamp(timedelta=88567.0, similarity=0.8846251840296907)" ]
be5b5a3b29806429103c088555b5e211efe9985f
cc246dc1cf3f9363555d0271ee300c19250ea966
[ "Thanks for reporting but 4.0.x is EOL for over an year. Please upgrade to 4.1.x\n\n> Am 18.07.2018 um 16:59 schrieb Dmitry Shatov <notifications@github.com>:\n> \n> Expected behavior\n> \n> The work of EpollEventLoop without the appearance of any exceptions in the log.\n> \n> Actual behavior\n> \n> Occasionally, this exception is logged:\n> \n> io.netty.channel.ChannelException: timerfd_settime() failed: Illegal argument\n> at io.netty.channel.epoll.Native.epollWait0(Native Method) ~[netty-all-4.0.52.Final.jar:4.0.52.Final]\n> at io.netty.channel.epoll.Native.epollWait(Native.java:124) ~[netty-all-4.0.52.Final.jar:4.0.52.Final]\n> at io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:231) ~[netty-all-4.0.52.Final.jar:4.0.52.Final]\n> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:248) ~[netty-all-4.0.52.Final.jar:4.0.52.Final]\n> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) [netty-all-4.0.52.Final.jar:4.0.52.Final]\n> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]\n> Moreover, the application's performance slows down, because in the EpollEventLoop.run() the specified exception causes the thread to sleep for a second. Then, the exception is thrown again, and so on.\n> \n> I consider this behavior a bug.\n> \n> Possible workarounds\n> \n> 1. Downgrade Netty version to 4.0.50.\n> The bug appeared in this commit, so the latest Netty version (in branch 4.0), not affected by the problem, is 4.0.50.\n> 2. Switch Netty version to latest 4.1.x release.\n> It seems that in the 4.1 branch the bug is fixed (see #7970).\n> \n> Netty version\n> \n> 4.0.52-4.0.56\n> \n> OS version\n> \n> Any Linux version that supports the timerfd_settime system call.\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n" ]
[]
"2018-07-18T09:17:15Z"
[]
Incorrect use of timerfd_settime system call in epoll-based backend.
### Expected behavior The work of EpollEventLoop without the appearance of any exceptions in the log. ### Actual behavior Occasionally, this exception is logged: ```[NettyIO0] WARN i.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop. io.netty.channel.ChannelException: timerfd_settime() failed: Illegal argument at io.netty.channel.epoll.Native.epollWait0(Native Method) ~[netty-all-4.0.52.Final.jar:4.0.52.Final] at io.netty.channel.epoll.Native.epollWait(Native.java:124) ~[netty-all-4.0.52.Final.jar:4.0.52.Final] at io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:231) ~[netty-all-4.0.52.Final.jar:4.0.52.Final] at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:248) ~[netty-all-4.0.52.Final.jar:4.0.52.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) [netty-all-4.0.52.Final.jar:4.0.52.Final] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151] ``` Moreover, the application's performance slows down, because in the EpollEventLoop.run() the specified exception causes the thread to sleep for a second. Then, the exception is thrown again, and so on. I consider this behavior a bug. ### Possible workarounds *1. Downgrade Netty version to 4.0.50.* The bug appeared in [this commit](https://github.com/netty/netty/commit/35e7e2aa20c9b77b2ea681a40fa9b361c56d5f39), so the latest Netty version (in branch 4.0), not affected by the problem, is 4.0.50. *2. Switch Netty version to latest 4.1.x release.* It seems that in the 4.1 branch the bug is fixed (see #7970). ### Netty version 4.0.52-4.0.56 ### OS version Any Linux version that supports the timerfd_settime system call.
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java" ]
[]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java index 4ab1e6d6387..53938f72bbf 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java @@ -229,7 +229,7 @@ private int epollWait(boolean oldWakeup) throws IOException { long totalDelay = delayNanos(System.nanoTime()); int delaySeconds = (int) min(totalDelay / 1000000000L, Integer.MAX_VALUE); return Native.epollWait(epollFd, events, timerFd, delaySeconds, - (int) min(totalDelay - delaySeconds * 1000000000L, Integer.MAX_VALUE)); + (int) min(totalDelay - delaySeconds * 1000000000L, 999999999L)); } private int epollWaitNow() throws IOException {
null
train
val
"2018-02-12T18:31:57"
"2018-07-18T08:59:28Z"
dshatov
val
netty/netty/8144_8150
netty/netty
netty/netty/8144
netty/netty/8150
[ "timestamp(timedelta=0.0, similarity=0.8447178205446683)" ]
952eeb8e1e3706b09d4d3f32f16d7f0e5c540cb5
620dad0c2602440f2c457a5cb2a4c7a29e786bda
[ "interesting... Let me investigate ", "@normanmaurer I have tested this case on netty 4.1.27.Final and netty-tcnative-boringssl-static 2.0.12.Final, the process would not crash any more. It seems that the latest version of boringssl have fixed this issue. Then the validation actually takes place in the io.netty.handler.ssl. ReferenceCountedOpenSslEngine#checkSniHostnameMatch() and it blocks me since underscore character is not allowed in definition of host name in rfc 1123 (which is based on rfc 952).\r\n\r\n> A \"name\" (Net, Host, Gateway, or Domain name) is a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-), and period (.).\r\n\r\nBut I‘m just curious about the JDK that provided the abstract class SNIMatcher which used \r\nSNIServerName to represent the \"server name\", and it seems to be a liberal way to implement SNI. I have also tested my case on JDK provided SSL implementation and the underscore character is passed the test.", "@r9liucc can you please verify https://github.com/netty/netty/pull/8150 ?", "@normanmaurer I have verified this on branch `sni_underscore` and the results seems ok.\r\nOpenssl Client test:\r\n```\r\nopenssl s_client -cert service.crt -key service.key -CAfile root_ca.pem -connect 127.0.0.1:8080 -servername rb7yh582ov5d0umxr2td.v1_1\r\nCONNECTED(00000003)\r\ndepth=1 C = CN, ST = SC, L = CD, O = HCB, OU = HCB, CN = my_org\r\nverify return:1\r\ndepth=0 C = CN, ST = SC, L = CD, O = HCB, OU = HCB, CN = my_cn\r\nverify return:1\r\n---\r\nCertificate chain\r\n 0 s:/C=CN/ST=SC/L=CD/O=HCB/OU=HCB/CN=my_cn\r\n i:/C=CN/ST=SC/L=CD/O=HCB/OU=HCB/CN=my_org\r\n---\r\nServer certificate\r\n-----BEGIN CERTIFICATE-----\r\n...ignore\r\n-----END CERTIFICATE-----\r\nsubject=/C=CN/ST=SC/L=CD/O=HCB/OU=HCB/CN=my_cn\r\nissuer=/C=CN/ST=SC/L=CD/O=HCB/OU=HCB/CN=my_org\r\n---\r\nNo client certificate CA names sent\r\nClient Certificate Types: RSA sign, ECDSA sign\r\nRequested Signature Algorithms: 
ECDSA+SHA256:0x04+0x08:RSA+SHA256:ECDSA+SHA384:0x05+0x08:RSA+SHA384:0x06+0x08:RSA+SHA512:RSA+SHA1\r\nShared Requested Signature Algorithms: ECDSA+SHA256:RSA+SHA256:ECDSA+SHA384:RSA+SHA384:RSA+SHA512:RSA+SHA1\r\nPeer signing digest: SHA256\r\nServer Temp Key: ECDH, P-256, 256 bits\r\n---\r\nSSL handshake has read 1364 bytes and written 2507 bytes\r\n---\r\nNew, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256\r\nServer public key is 2048 bit\r\nSecure Renegotiation IS supported\r\nNo ALPN negotiated\r\nSSL-Session:\r\n Protocol : TLSv1.2\r\n Cipher : ECDHE-RSA-AES128-GCM-SHA256\r\n Session-ID: 9C33100A039306570D48E007DF129AE9E36EAC63E5E0044D99F52FA5DC4F261F\r\n Session-ID-ctx:\r\n Master-Key: 05D02E543544B0F9CC5BB0BC6D38B21FA440C77810B1857553FC24F534E408E4CD727E57CA814B1623B6D85FB7B2B6EA\r\n Key-Arg : None\r\n PSK identity: None\r\n PSK identity hint: None\r\n SRP username: None\r\n Start Time: 1532596523\r\n Timeout : 300 (sec)\r\n Verify return code: 0 (ok)\r\n---\r\nGET /hello HTTP/1.1\r\n\r\nHTTP/1.1 200 OK\r\nContent-Type: application/json;charset=UTF-8\r\nContent-Length: 7\r\n\r\n\"hello\"\r\n```\r\n\r\nI have also verified outbound data and the client sni has successfully been sent out for negotiation.\r\nClient code snippets:\r\n```\r\nprivate final List<SNIServerName> sni = new ArrayList<>();\r\n// init sni\r\nthis.sni.add(new SNIHostName(remoteIdentifier.idDistKey().getBytes(StandardCharsets.UTF_8)));\r\n\r\n// set sni\r\nSSLParameters sslParameters = sslEngine.getSSLParameters();\r\nsslParameters.setServerNames(sni);\r\nsslEngine.setSSLParameters(sslParameters);\r\n...\r\n```" ]
[ "Is there an issue on tcnarive for this?", "Not yet... will open one" ]
"2018-07-25T10:12:30Z"
[ "defect" ]
Using underscore character('_') as part of SNI leads JVM crash
Hello guys, we use Netty as our network implementation of microservices framework. We plan to support https and thus enable authentication on both client side and server side. But we encounter a problem when we want to use underscore character('_') in TLS SNI. ### Steps to reproduce #### 1. Netty Server implementation: ``` // define ssl provider private static final SslProvider SSL_PROVIDER = SslProvider.OPENSSL; // factory method for creating ssl context public SslContext serverCtx(InputStream certificate, InputStream privateKey) { try { return SslContextBuilder.forServer(certificate, privateKey) .sslProvider(SSL_PROVIDER) .trustManager(TwoWayTrustManagerFactory.INSTANCE) .clientAuth(ClientAuth.REQUIRE) .build(); } catch (Exception e) { throw new RuntimeException("failed to create ssl context", e); } } // SNI Matcher implementation @Slf4j public class MySNIMatcher extends SNIMatcher { private final byte[] hostIdentifier; public MySNIMatcher(Identifier hostIdentifier) { super(0); this.hostIdentifier = hostIdentifier.idDistKey().getBytes(StandardCharsets.US_ASCII); } @Override public boolean matches(SNIServerName sniServerName) { if (log.isDebugEnabled()) { log.debug("match sni {}", new String(sniServerName.getEncoded(), StandardCharsets.US_ASCII)); } return Arrays.equals(sniServerName.getEncoded(), hostIdentifier); } } // Netty bootstrap bootstrap.childHandler(new ChannelInitializer<Channel>() { @Override protected void initChannel(Channel ch) throws Exception { ChannelPipeline pipeline = ch.pipeline(); if (conf.logEnable()) pipeline.addLast("logging", new LoggingHandler()); if (conf.sslEnable()) { // conf.sslCtx() will actually use serverCtx() to create context SSLEngine engine = conf.sslCtx().newEngine(ch.alloc()); SSLParameters s = engine.getSSLParameters(); s.setSNIMatchers(new MySNIMatcher(...)); s.setEndpointIdentificationAlgorithm("HTTPS"); engine.setSSLParameters(s); pipeline.addLast("ssl", new SslHandler(engine, true)); } codec(pipeline); int idleTimeout = 
(int) TimeUnit.MILLISECONDS.toSeconds(conf.idleTimeout()); if (0 < idleTimeout) pipeline.addLast("idle", new IdleStateHandler(0, 0, idleTimeout)); pipeline.addLast("handler", handler); } }) ``` #### 2. Openssl Client test: ``` openssl s_client -cert service.crt -key service.key -CAfile root_ca.pem -connect 127.0.0.1:8080 -servername rb8hx3pww30y3tvw0mwy.v1_1 CONNECTED(00000003) write:errno=0 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 210 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1532336403 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- ``` As we can see from the command, we used `rb8hx3pww30y3tvw0mwy.v1_1` as sni stdout of JVM process when crash: ``` java: ../ssl/handshake_server.c:541: negotiate_version: Assertion `!ssl->s3->have_version' failed. ``` Please note that if we use sni which excluded underscore character(e.g. `rb8hx3pww30y3tvw0mwy.v1`), everything works fine, I can see sni logs in MySNIMatcher. ### Netty version Netty: 4.1.14.Final Boringssl: netty-tcnative-boringssl-static:2.0.5.Final ### Openssl Version 1.0.2o ### JVM version (e.g. `java -version`) java version "1.8.0_112" Java(TM) SE Runtime Environment (build 1.8.0_112-b16) Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode) ### OS version (e.g. `uname -a`) macOS High Sierra 10.13.5 Darwin Kernel Version 17.6.0
[ "handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/Java8SslTestUtils.java", "handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java b/handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java index 583d4cf4986..b40c121748d 100644 --- a/handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java +++ b/handler/src/main/java/io/netty/handler/ssl/Java8SslUtils.java @@ -69,7 +69,7 @@ static void setSNIMatchers(SSLParameters sslParameters, Collection<?> matchers) } @SuppressWarnings("unchecked") - static boolean checkSniHostnameMatch(Collection<?> matchers, String hostname) { + static boolean checkSniHostnameMatch(Collection<?> matchers, byte[] hostname) { if (matchers != null && !matchers.isEmpty()) { SNIHostName name = new SNIHostName(hostname); Iterator<SNIMatcher> matcherIt = (Iterator<SNIMatcher>) matchers.iterator(); diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java index b98e9858d7b..8b147357667 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java @@ -20,6 +20,7 @@ import io.netty.internal.tcnative.Buffer; import io.netty.internal.tcnative.SSL; import io.netty.util.AbstractReferenceCounted; +import io.netty.util.CharsetUtil; import io.netty.util.ReferenceCounted; import io.netty.util.ResourceLeakDetector; import io.netty.util.ResourceLeakDetectorFactory; @@ -1817,7 +1818,7 @@ private boolean isDestroyed() { return destroyed != 0; } - final boolean checkSniHostnameMatch(String hostname) { + final boolean checkSniHostnameMatch(byte[] hostname) { return Java8SslUtils.checkSniHostnameMatch(matchers, hostname); } diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java index 9f09393f01e..b9c8cf60b8c 100644 --- 
a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslServerContext.java @@ -19,6 +19,7 @@ import io.netty.internal.tcnative.SSL; import io.netty.internal.tcnative.SSLContext; import io.netty.internal.tcnative.SniHostNameMatcher; +import io.netty.util.CharsetUtil; import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -244,7 +245,8 @@ private static final class OpenSslSniHostnameMatcher implements SniHostNameMatch public boolean match(long ssl, String hostname) { ReferenceCountedOpenSslEngine engine = engineMap.get(ssl); if (engine != null) { - return engine.checkSniHostnameMatch(hostname); + // TODO: In the next release of tcnative we should pass the byte[] directly in and not use a String. + return engine.checkSniHostnameMatch(hostname.getBytes(CharsetUtil.UTF_8)); } logger.warn("No ReferenceCountedOpenSslEngine found for SSL pointer: {}", ssl); return false;
diff --git a/handler/src/test/java/io/netty/handler/ssl/Java8SslTestUtils.java b/handler/src/test/java/io/netty/handler/ssl/Java8SslTestUtils.java index 32219ff155a..50fb935d9dc 100644 --- a/handler/src/test/java/io/netty/handler/ssl/Java8SslTestUtils.java +++ b/handler/src/test/java/io/netty/handler/ssl/Java8SslTestUtils.java @@ -23,17 +23,18 @@ import javax.net.ssl.SSLEngine; import javax.net.ssl.SSLParameters; import java.security.Provider; +import java.util.Arrays; import java.util.Collections; final class Java8SslTestUtils { private Java8SslTestUtils() { } - static void setSNIMatcher(SSLParameters parameters) { + static void setSNIMatcher(SSLParameters parameters, final byte[] match) { SNIMatcher matcher = new SNIMatcher(0) { @Override public boolean matches(SNIServerName sniServerName) { - return false; + return Arrays.equals(match, sniServerName.getEncoded()); } }; parameters.setSNIMatchers(Collections.singleton(matcher)); diff --git a/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java index 9be57e54680..99fb8fa82e2 100644 --- a/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java @@ -22,6 +22,8 @@ import io.netty.handler.ssl.util.InsecureTrustManagerFactory; import io.netty.handler.ssl.util.SelfSignedCertificate; import io.netty.internal.tcnative.SSL; +import io.netty.util.CharsetUtil; +import io.netty.util.internal.EmptyArrays; import io.netty.util.internal.PlatformDependent; import org.junit.Assume; import org.junit.BeforeClass; @@ -986,7 +988,7 @@ public void testSNIMatchersDoesNotThrow() throws Exception { SSLEngine engine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); try { SSLParameters parameters = new SSLParameters(); - Java8SslTestUtils.setSNIMatcher(parameters); + Java8SslTestUtils.setSNIMatcher(parameters, EmptyArrays.EMPTY_BYTES); 
engine.setSSLParameters(parameters); } finally { cleanupServerSslEngine(engine); @@ -994,6 +996,28 @@ public void testSNIMatchersDoesNotThrow() throws Exception { } } + @Test + public void testSNIMatchersWithSNINameWithUnderscore() throws Exception { + assumeTrue(PlatformDependent.javaVersion() >= 8); + byte[] name = "rb8hx3pww30y3tvw0mwy.v1_1".getBytes(CharsetUtil.UTF_8); + SelfSignedCertificate ssc = new SelfSignedCertificate(); + serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()) + .sslProvider(sslServerProvider()) + .build(); + + SSLEngine engine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); + try { + SSLParameters parameters = new SSLParameters(); + Java8SslTestUtils.setSNIMatcher(parameters, name); + engine.setSSLParameters(parameters); + assertTrue(unwrapEngine(engine).checkSniHostnameMatch(name)); + assertFalse(unwrapEngine(engine).checkSniHostnameMatch("other".getBytes(CharsetUtil.UTF_8))); + } finally { + cleanupServerSslEngine(engine); + ssc.delete(); + } + } + @Test(expected = IllegalArgumentException.class) public void testAlgorithmConstraintsThrows() throws Exception { SelfSignedCertificate ssc = new SelfSignedCertificate();
train
val
"2018-07-25T06:32:28"
"2018-07-23T10:15:27Z"
r9liucc
val
netty/netty/8152_8153
netty/netty
netty/netty/8152
netty/netty/8153
[ "timestamp(timedelta=0.0, similarity=0.9011044041844438)" ]
952eeb8e1e3706b09d4d3f32f16d7f0e5c540cb5
9b08dbca00ffdcb37ec0be2100d868c7c2dd72b9
[ "@nitsanw doh! I will look into it", "It looks like the issue is still present in 4.0.\r\n\r\nI ran the tests from https://github.com/netty/netty/pull/8153/files#diff-21be031058403699535d72fa9ea7548fR242 against 4.0 (https://github.com/Gerrrr/netty/commit/69cefda98abf6afe7eaf35b7b0db0e95e28a7f19) and got are two failures:\r\n\r\n```\r\n$ mvn test -Dtest=ByteBufUtilTest\r\n....\r\n T E S T S\r\n-------------------------------------------------------\r\n[jetty-alpn-agent] Using: alpn-boot-8.1.11.v20170118.jar\r\nRunning io.netty.buffer.ByteBufUtilTest\r\n17:57:54.424 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework\r\n17:57:54.426 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple\r\n17:57:54.426 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4\r\n17:57:54.442 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: MacOS\r\n17:57:54.444 [main] DEBUG i.n.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false\r\n17:57:54.444 [main] DEBUG i.n.util.internal.PlatformDependent0 - Java version: 8\r\n17:57:54.447 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available\r\n17:57:54.448 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available\r\n17:57:54.449 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available\r\n17:57:54.449 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available\r\n17:57:54.450 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true\r\n17:57:54.450 [main] DEBUG i.n.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9\r\n17:57:54.450 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available\r\n17:57:54.450 [main] DEBUG 
i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available\r\n17:57:54.451 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: /var/folders/9b/pdm76rrd72l04r6dyf1j4lt80000gn/T (java.io.tmpdir)\r\n17:57:54.451 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)\r\n17:57:54.454 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false\r\n17:57:54.454 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 3817865216 bytes\r\n17:57:54.454 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1\r\n17:57:54.455 [main] DEBUG io.netty.util.internal.CleanerJava6 - java.nio.ByteBuffer.cleaner(): available\r\n17:57:54.499 [main] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true\r\n17:57:54.503 [main] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@56aac163\r\n17:57:54.520 [main] DEBUG i.n.u.i.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024\r\n17:57:54.520 [main] DEBUG i.n.u.i.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096\r\n17:57:54.521 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled\r\n17:57:54.521 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536\r\n17:57:54.521 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384\r\nTests run: 29, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.754 sec <<< FAILURE! 
- in io.netty.buffer.ByteBufUtilTest\r\ntestWriteUtf8CompositeWrapped(io.netty.buffer.ByteBufUtilTest) Time elapsed: 0.009 sec <<< FAILURE!\r\njava.lang.AssertionError: expected:<UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 27, cap: 64)> but was:<CompositeByteBuf(ridx: 1, widx: 1, cap: 64, components=3)>\r\n\tat io.netty.buffer.ByteBufUtilTest.testWriteUtf8CompositeWrapped(ByteBufUtilTest.java:183)\r\n\r\ntestWriteUsAsciiCompositeWrapped(io.netty.buffer.ByteBufUtilTest) Time elapsed: 0.001 sec <<< FAILURE!\r\njava.lang.AssertionError: expected:<UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 10, cap: 16)> but was:<CompositeByteBuf(ridx: 1, widx: 1, cap: 64, components=3)>\r\n\tat io.netty.buffer.ByteBufUtilTest.testWriteUsAsciiCompositeWrapped(ByteBufUtilTest.java:138)\r\n\r\nHeap\r\n PSYoungGen total 76288K, used 52573K [0x000000076ab00000, 0x0000000770000000, 0x00000007c0000000)\r\n eden space 65536K, 80% used [0x000000076ab00000,0x000000076de577a0,0x000000076eb00000)\r\n from space 10752K, 0% used [0x000000076f580000,0x000000076f580000,0x0000000770000000)\r\n to space 10752K, 0% used [0x000000076eb00000,0x000000076eb00000,0x000000076f580000)\r\n ParOldGen total 175104K, used 0K [0x00000006c0000000, 0x00000006cab00000, 0x000000076ab00000)\r\n object space 175104K, 0% used [0x00000006c0000000,0x00000006c0000000,0x00000006cab00000)\r\n Metaspace used 10029K, capacity 10146K, committed 10240K, reserved 1058816K\r\n class space used 1254K, capacity 1267K, committed 1280K, reserved 1048576K\r\n\r\nResults :\r\n\r\nFailed tests:\r\n ByteBufUtilTest.testWriteUsAsciiCompositeWrapped:138 expected:<UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 10, cap: 16)> but was:<CompositeByteBuf(ridx: 1, widx: 1, cap: 64, components=3)>\r\n ByteBufUtilTest.testWriteUtf8CompositeWrapped:183 expected:<UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 27, cap: 64)> 
but was:<CompositeByteBuf(ridx: 1, widx: 1, cap: 64, components=3)>\r\n\r\nTests run: 29, Failures: 2, Errors: 0, Skipped: 0\r\n```", "@Gerrrr 4.0 is EOL, try updating to 4.1." ]
[]
"2018-07-26T07:26:36Z"
[ "defect" ]
Leak detection combined with composite buffers results in incorrectly handled writerIndex
This is a nasty bug, which only manifests with leak detection, so would be very confusing in the wild. It was discovered in a test suite which validated with a range of buffer types and resulted in a flaky test. Fortunately composite buffers are not in common use, and leak detection is not permanently on, so the bug should not effect most people most of the time. This also made it harder to hunt down. In some places the `writerIndex` field of `AbstractByteBuf` is manipulated directly under the assumption that subclasses of `AbstractByteBuf` are safe to manipulate, while subclasses of `WrappedByteBuf` are to be unwrapped until, hopefully, an `AbstractByteBuf` will emerge. E.g.: ``` public static int writeAscii(ByteBuf buf, CharSequence seq) { // ASCII uses 1 byte per char final int len = seq.length(); if (seq instanceof AsciiString) { AsciiString asciiString = (AsciiString) seq; buf.writeBytes(asciiString.array(), asciiString.arrayOffset(), len); } else { for (;;) { if (buf instanceof AbstractByteBuf) { AbstractByteBuf byteBuf = (AbstractByteBuf) buf; byteBuf.ensureWritable(len); int written = writeAscii(byteBuf, byteBuf.writerIndex, seq, len); byteBuf.writerIndex += written; return written; } else if (buf instanceof WrappedByteBuf) { // Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path. buf = buf.unwrap(); } else { buf.writeBytes(seq.toString().getBytes(CharsetUtil.US_ASCII)); } } } return len; } ``` The assumption breaks with `WrappedCompositeByteBuf` as it is a subclass of `AbstractByteBuf` via `CompositeByteBuf`, but should have perhaps been a subclass of `WrappedByteBuf`. The code ends up writing the data to the 0 offset, overwriting whatever it finds there. I've hit the resulting bug on `writeUtf` and can see it would also manifest for `writeAscii`, and perhaps elsewhere. Leak detection for composite buffers extends `WrappedCompositeByteBuf`. The bug can be fixed by using the accessor methods instead of the field. 
### Steps to reproduce Run a writeAscii/Utf test with paranoid leak detection to reproduce. ### Minimal yet complete reproducer code (or URL to code) ### Netty version 4.1, perhaps earlier too, I did not check.
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java index 0d1b3a9b89f..e7afab36728 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java @@ -498,7 +498,10 @@ public static int writeUtf8(ByteBuf buf, CharSequence seq) { */ public static int reserveAndWriteUtf8(ByteBuf buf, CharSequence seq, int reserveBytes) { for (;;) { - if (buf instanceof AbstractByteBuf) { + if (buf instanceof WrappedCompositeByteBuf) { + // WrappedCompositeByteBuf is a sub-class of AbstractByteBuf so it needs special handling. + buf = buf.unwrap(); + } else if (buf instanceof AbstractByteBuf) { AbstractByteBuf byteBuf = (AbstractByteBuf) buf; byteBuf.ensureWritable0(reserveBytes); int written = writeUtf8(byteBuf, byteBuf.writerIndex, seq, seq.length()); @@ -665,7 +668,10 @@ public static int writeAscii(ByteBuf buf, CharSequence seq) { buf.writeBytes(asciiString.array(), asciiString.arrayOffset(), len); } else { for (;;) { - if (buf instanceof AbstractByteBuf) { + if (buf instanceof WrappedCompositeByteBuf) { + // WrappedCompositeByteBuf is a sub-class of AbstractByteBuf so it needs special handling. + buf = buf.unwrap(); + } else if (buf instanceof AbstractByteBuf) { AbstractByteBuf byteBuf = (AbstractByteBuf) buf; byteBuf.ensureWritable0(len); int written = writeAscii(byteBuf, byteBuf.writerIndex, seq, len);
diff --git a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java index 38e36e20db3..da9b344f406 100644 --- a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java +++ b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java @@ -238,6 +238,42 @@ public void testWriteUsAsciiWrapped() { buf2.unwrap().release(); } + @Test + public void testWriteUsAsciiComposite() { + String usAscii = "NettyRocks"; + ByteBuf buf = Unpooled.buffer(16); + buf.writeBytes(usAscii.getBytes(CharsetUtil.US_ASCII)); + ByteBuf buf2 = Unpooled.compositeBuffer().addComponent( + Unpooled.buffer(8)).addComponent(Unpooled.buffer(24)); + // write some byte so we start writing with an offset. + buf2.writeByte(1); + ByteBufUtil.writeAscii(buf2, usAscii); + + // Skip the previously written byte. + assertEquals(buf, buf2.skipBytes(1)); + + buf.release(); + buf2.release(); + } + + @Test + public void testWriteUsAsciiCompositeWrapped() { + String usAscii = "NettyRocks"; + ByteBuf buf = Unpooled.buffer(16); + buf.writeBytes(usAscii.getBytes(CharsetUtil.US_ASCII)); + ByteBuf buf2 = new WrappedCompositeByteBuf(Unpooled.compositeBuffer().addComponent( + Unpooled.buffer(8)).addComponent(Unpooled.buffer(24))); + // write some byte so we start writing with an offset. + buf2.writeByte(1); + ByteBufUtil.writeAscii(buf2, usAscii); + + // Skip the previously written byte. + assertEquals(buf, buf2.skipBytes(1)); + + buf.release(); + buf2.release(); + } + @Test public void testWriteUtf8() { String usAscii = "Some UTF-8 like äÄ∏ŒŒ"; @@ -252,6 +288,42 @@ public void testWriteUtf8() { buf2.release(); } + @Test + public void testWriteUtf8Composite() { + String utf8 = "Some UTF-8 like äÄ∏ŒŒ"; + ByteBuf buf = Unpooled.buffer(16); + buf.writeBytes(utf8.getBytes(CharsetUtil.UTF_8)); + ByteBuf buf2 = Unpooled.compositeBuffer().addComponent( + Unpooled.buffer(8)).addComponent(Unpooled.buffer(24)); + // write some byte so we start writing with an offset. 
+ buf2.writeByte(1); + ByteBufUtil.writeUtf8(buf2, utf8); + + // Skip the previously written byte. + assertEquals(buf, buf2.skipBytes(1)); + + buf.release(); + buf2.release(); + } + + @Test + public void testWriteUtf8CompositeWrapped() { + String utf8 = "Some UTF-8 like äÄ∏ŒŒ"; + ByteBuf buf = Unpooled.buffer(16); + buf.writeBytes(utf8.getBytes(CharsetUtil.UTF_8)); + ByteBuf buf2 = new WrappedCompositeByteBuf(Unpooled.compositeBuffer().addComponent( + Unpooled.buffer(8)).addComponent(Unpooled.buffer(24))); + // write some byte so we start writing with an offset. + buf2.writeByte(1); + ByteBufUtil.writeUtf8(buf2, utf8); + + // Skip the previously written byte. + assertEquals(buf, buf2.skipBytes(1)); + + buf.release(); + buf2.release(); + } + @Test public void testWriteUtf8Surrogates() { // leading surrogate + trailing surrogate
val
val
"2018-07-25T06:32:28"
"2018-07-25T16:34:36Z"
nitsanw
val
netty/netty/8166_8167
netty/netty
netty/netty/8166
netty/netty/8167
[ "timestamp(timedelta=33843.0, similarity=0.8526570227983766)" ]
3ab7cac6209cfa3c2d13929123bb61903fe10f58
94396e195d816ce9826bd820e3742731ef406d7f
[ "@koo-taejin PTAL https://github.com/netty/netty/pull/8168" ]
[ "Just use\r\n\r\nreturn isSupported() && option instanceof NioChannelOption", "I have corrected what you said. :)" ]
"2018-08-02T07:28:17Z"
[]
In netty-4.1.28.Final version, an error occurs when running on jdk6.
### Actual behavior ``` io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioSocketChannel Caused by: java.lang.NoClassDefFoundError: java/nio/channels/NetworkChannel at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:105) at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:94) at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:87) at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:80) ... 32 more Caused by: java.lang.ClassNotFoundException: java.nio.channels.NetworkChannel at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) ``` ### Steps to reproduce ``` // run following test on jvm6 @Test public void listenerTest() throws Exception { Bootstrap bootstrap = createBootstrap(); Channel channel = bootstrap.connect(host, port).sync().channel(); } public Bootstrap createBootstrap() { EventLoopGroup workerGroup = new NioEventLoopGroup(); Bootstrap bootstrap = new Bootstrap(); bootstrap.group(workerGroup).channel(NioSocketChannel.class) .handler(new ChannelInitializer<SocketChannel>() { @Override protected void initChannel(SocketChannel ch) throws Exception { ch.pipeline().addLast(new HttpClientCodec()); ch.pipeline().addLast(new HttpObjectAggregator(65535)); } }); return bootstrap; } ``` ### Netty version netty-4.1.28.Final ### JVM version (e.g. `java -version`) 1.6.0_45 ### OS version (e.g. `uname -a`) Windows10, CentOS 6.6 I think this issue is problem, but I'm not sure. If this issue is problem, May I make a correction?
[ "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java" ]
[ "transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java index 74431791ce4..e6fec4861ec 100644 --- a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java +++ b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java @@ -462,10 +462,19 @@ protected Executor prepareToClose() { } private final class NioSocketChannelConfig extends DefaultSocketChannelConfig { + + private final SocketChannelOptionHelper socketChannelOptionHelper; private volatile int maxBytesPerGatheringWrite = Integer.MAX_VALUE; + private NioSocketChannelConfig(NioSocketChannel channel, Socket javaSocket) { super(channel, javaSocket); calculateMaxBytesPerGatheringWrite(); + + if (PlatformDependent.javaVersion() >= 7) { + this.socketChannelOptionHelper = new NioChannelOptionHelper(); + } else { + this.socketChannelOptionHelper = new DisabledOptionHelper(); + } } @Override @@ -482,16 +491,16 @@ public NioSocketChannelConfig setSendBufferSize(int sendBufferSize) { @Override public <T> boolean setOption(ChannelOption<T> option, T value) { - if (PlatformDependent.javaVersion() >= 7 && option instanceof NioChannelOption) { - return NioChannelOption.setOption(jdkChannel(), (NioChannelOption<T>) option, value); + if (socketChannelOptionHelper.isSupported(option)) { + return socketChannelOptionHelper.setOption(jdkChannel(), option, value); } return super.setOption(option, value); } @Override public <T> T getOption(ChannelOption<T> option) { - if (PlatformDependent.javaVersion() >= 7 && option instanceof NioChannelOption) { - return NioChannelOption.getOption(jdkChannel(), (NioChannelOption<T>) option); + if (socketChannelOptionHelper.isSupported(option)) { + return socketChannelOptionHelper.getOption(jdkChannel(), option); } return super.getOption(option); } @@ -499,8 +508,8 @@ public <T> T getOption(ChannelOption<T> option) { @SuppressWarnings("unchecked") @Override 
public Map<ChannelOption<?>, Object> getOptions() { - if (PlatformDependent.javaVersion() >= 7) { - return getOptions(super.getOptions(), NioChannelOption.getOptions(jdkChannel())); + if (socketChannelOptionHelper.isSupported()) { + return getOptions(super.getOptions(), socketChannelOptionHelper.getOptions(jdkChannel())); } return super.getOptions(); } @@ -525,4 +534,82 @@ private SocketChannel jdkChannel() { return ((NioSocketChannel) channel).javaChannel(); } } + + private interface SocketChannelOptionHelper { + + boolean isSupported(); + + boolean isSupported(ChannelOption option); + + <T> boolean setOption(SocketChannel channel, ChannelOption<T> option, T value); + + <T> T getOption(SocketChannel channel, ChannelOption<T> option); + + ChannelOption[] getOptions(SocketChannel channel); + } + + private final class DisabledOptionHelper implements SocketChannelOptionHelper { + + @Override + public boolean isSupported() { + return false; + } + + @Override + public boolean isSupported(ChannelOption option) { + return false; + } + + @Override + public <T> boolean setOption(SocketChannel channel, ChannelOption<T> option, T value) { + throw new IllegalArgumentException("DisabledOptionHelper"); + } + + @Override + public <T> T getOption(SocketChannel channel, ChannelOption<T> option) { + throw new IllegalArgumentException("DisabledOptionHelper"); + } + + @Override + public ChannelOption[] getOptions(SocketChannel channel) { + throw new IllegalArgumentException("DisabledOptionHelper"); + } + } + + private final class NioChannelOptionHelper implements SocketChannelOptionHelper { + + @Override + public boolean isSupported() { + return PlatformDependent.javaVersion() >= 7; + } + + @Override + public boolean isSupported(ChannelOption option) { + return isSupported() && option instanceof NioChannelOption; + } + + private void validate(ChannelOption option) { + if (!isSupported(option)) { + throw new IllegalArgumentException("Not supported ChannelOption"); + } + } + + @Override 
+ public <T> boolean setOption(SocketChannel channel, ChannelOption<T> option, T value) { + validate(option); + return NioChannelOption.setOption(channel, (NioChannelOption<T>) option, value); + } + + @Override + public <T> T getOption(SocketChannel channel, ChannelOption<T> option) { + validate(option); + return NioChannelOption.getOption(channel, (NioChannelOption<T>) option); + } + + @Override + public ChannelOption[] getOptions(SocketChannel channel) { + return NioChannelOption.getOptions(channel); + } + } + }
null
train
val
"2018-08-01T08:31:31"
"2018-08-02T02:37:18Z"
koo-taejin
val
netty/netty/8170_8171
netty/netty
netty/netty/8170
netty/netty/8171
[ "timestamp(timedelta=1.0, similarity=0.851378458133769)" ]
3ab7cac6209cfa3c2d13929123bb61903fe10f58
44d3753c481d61a83097fbbee681512aa8833da8
[ "Will try to check later today. " ]
[]
"2018-08-02T16:45:19Z"
[]
NPE with BoringSSL when trying to init with unsupported Cipher
Trying to run the code from #8165 will NPE when a client attempts to connect. ```java java.lang.NullPointerException: null at io.netty.handler.ssl.ReferenceCountedOpenSslContext.destroy(ReferenceCountedOpenSslContext.java:489) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.ReferenceCountedOpenSslContext.access$100(ReferenceCountedOpenSslContext.java:75) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.ReferenceCountedOpenSslContext$2.deallocate(ReferenceCountedOpenSslContext.java:122) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.AbstractReferenceCounted.release0(AbstractReferenceCounted.java:82) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.AbstractReferenceCounted.release(AbstractReferenceCounted.java:71) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.ReferenceCountedOpenSslContext.release(ReferenceCountedOpenSslContext.java:601) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:309) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.OpenSslContext.<init>(OpenSslContext.java:43) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:347) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:335) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:421) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:447) ~[netty-all-4.1.28.Final.jar:4.1.28.Final] at Main$1.initChannel(Main.java:40) ~[test/:?] 
at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:115) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:107) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:637) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:46) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1487) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1161) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:686) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:510) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-all-4.1.28.Final.jar:4.1.28.Final] at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.28.Final.jar:4.1.28.Final] at 
java.lang.Thread.run(Thread.java:748) [?:1.8.0_172] ``` ### Expected behavior ### Actual behavior ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java index 4a020226776..18c86795744 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java @@ -486,7 +486,11 @@ private void destroy() { SSLContext.free(ctx); ctx = 0; - sessionContext().destroy(); + + OpenSslSessionContext context = sessionContext(); + if (context != null) { + context.destroy(); + } } } finally { writerLock.unlock();
diff --git a/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java b/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java index 752424cfb55..0a9429e9c1a 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java @@ -24,6 +24,8 @@ import org.junit.Test; import javax.net.ssl.SSLEngine; +import javax.net.ssl.SSLException; +import java.util.Collections; public class SslContextBuilderTest { @@ -71,6 +73,30 @@ public void testServerContextOpenssl() throws Exception { testServerContext(SslProvider.OPENSSL); } + @Test(expected = IllegalArgumentException.class) + public void testInvalidCipherJdk() throws Exception { + Assume.assumeTrue(OpenSsl.isAvailable()); + testInvalidCipher(SslProvider.JDK); + } + + @Test(expected = SSLException.class) + public void testInvalidCipherOpenSSL() throws Exception { + Assume.assumeTrue(OpenSsl.isAvailable()); + testInvalidCipher(SslProvider.OPENSSL); + } + + private static void testInvalidCipher(SslProvider provider) throws Exception { + SelfSignedCertificate cert = new SelfSignedCertificate(); + SslContextBuilder builder = SslContextBuilder.forClient() + .sslProvider(provider) + .ciphers(Collections.singleton("SOME_INVALID_CIPHER")) + .keyManager(cert.certificate(), + cert.privateKey()) + .trustManager(cert.certificate()); + SslContext context = builder.build(); + context.newEngine(UnpooledByteBufAllocator.DEFAULT); + } + private static void testClientContextFromFile(SslProvider provider) throws Exception { SelfSignedCertificate cert = new SelfSignedCertificate(); SslContextBuilder builder = SslContextBuilder.forClient()
train
val
"2018-08-01T08:31:31"
"2018-08-02T15:35:55Z"
rkapsi
val
netty/netty/8196_8197
netty/netty
netty/netty/8196
netty/netty/8197
[ "timestamp(timedelta=11.0, similarity=0.9039576643540869)", "keyword_pr_to_issue" ]
8255f85f24edab5316c0db3aa55bb9709448daa5
785473788f3f19531294802b727fb10a48938222
[ "@vkostyukov thats a good point... We may want to \"replace\" the current implementation with an `AtomicInteger` that is updated . WDYT ?", "@normanmaurer Yeah, that seems reasonable to me. Even if we won't be able to guarantee certain precision it should be totally fine for the sake of observability (provided we document it).\r\n\r\nI'd be happy to contribute a fix for this unless somebody else wants to work on this.", "@vkostyukov that would be awesome... Like you already stated we should just make this \"as cheap as possible\". ", "@normanmaurer @Wingfril, CSL's summer intern, want to give this PR a try (she is working on overriding the event loop chooser in Finagle).", "Perfect... let her know I am always happy to help 👍\n\n> Am 14.08.2018 um 22:20 schrieb Vladimir Kostyukov <notifications@github.com>:\n> \n> @normanmaurer @Wingfril, CSL's summer intern, want to give this PR a try (she is working on overriding experimenting with event loop chooser in Finagle).\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "I'm not really sure why we even need to do that? Can't we just get the super.size() itself without worrying if it is in the event loop? It seems `MpscQueue` is no longer in the code, and so live locking shouldn't be an issue anymore? 
`SingleThreadEventExecutor` uses a `LinkedBlockingQueue` which has a size variable which is an `atomicInteger` already?\r\n\r\nWoops", "@Wingfril it's still in the code, see `io.netty.channel.nio.NioEventLoop#newTaskQueue`, `io.netty.channel.epoll.EpollEventLoop#newTaskQueue` and `io.netty.channel.kqueue.KQueueEventLoop#newTaskQueue`", "@johnou actually I think @Wingfril is right, the original issue from which the run-in-eventloop workaround stemmed was based on a livelock issue (https://github.com/netty/netty/issues/3675, https://github.com/netty/netty/issues/5297) encountered with the old netty-modified `MpscLinkedQueue` implementation used at the time\r\n\r\nThis was since replaced by the shaded jctools impl in https://github.com/netty/netty/pull/5051 (by you I see!), and there's no reason to suspect its `size()` method suffers from the same problem.\r\n\r\nSo I would propose just to undo the original run-in-eventloop workaround and instead call `size()` on the queue directly, @normanmaurer do you agree?", "@njhill it may have been updated but it's still a single consumer queue.\r\n\r\n@nitsanw is it safe to invoke MpscQueue.size() from multiple threads? 
Given the name, single consumer I would lean towards no, does it still risk producing a livelock?", "Yes\n\nOn Thu, Aug 16, 2018, 13:44 Johno Crawford <notifications@github.com> wrote:\n\n> @nitsanw <https://github.com/nitsanw> is it safe to invoke\n> MpscQueue.size() from multiple threads?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/netty/netty/issues/8196#issuecomment-413517653>, or mute\n> the thread\n> <https://github.com/notifications/unsubscribe-auth/ACkWLzc1jkQqpmOi4nLxrP9LSfwX09YQks5uRVsWgaJpZM4V802C>\n> .\n>\n", "Then please just remove the code 👍🔥\n\n> Am 16.08.2018 um 16:27 schrieb Nitsan Wakart <notifications@github.com>:\n> \n> Yes\n> \n> On Thu, Aug 16, 2018, 13:44 Johno Crawford <notifications@github.com> wrote:\n> \n> > @nitsanw <https://github.com/nitsanw> is it safe to invoke\n> > MpscQueue.size() from multiple threads?\n> >\n> > —\n> > You are receiving this because you were mentioned.\n> > Reply to this email directly, view it on GitHub\n> > <https://github.com/netty/netty/issues/8196#issuecomment-413517653>, or mute\n> > the thread\n> > <https://github.com/notifications/unsubscribe-auth/ACkWLzc1jkQqpmOi4nLxrP9LSfwX09YQks5uRVsWgaJpZM4V802C>\n> > .\n> >\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@nitsanw was the yes for safe or risk producing a livelock? :)", "@johnou I'm pretty sure the livelock issue wasn't inherently related to the single-consumerness. Looking at the history it appeared to be more likely linked to the custom modifications made (e.g. 
possibly `RecyclableMpscLinkedQueueNode`).\r\n\r\nOf course the reported size may not be \"precise\" due to the lazy semantics but should hopefully be \"safe\".", "You asked if it was safe, right?\n\n> On 16 Aug 2018, at 16:58, Johno Crawford <notifications@github.com> wrote:\n> \n> @nitsanw was the yes for safe or risk producing a livelock? :)\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@nitsanw and also asked if it was unsafe and posed risks ie. livelock, but given your comment now I will assume it's safe to invoke size() from multiple threads, thanks!", "@johnou I apologize if I sounded impatient :-) I was reading your question from the GitHub email. Here's what I got:\r\n> @nitsanw is it safe to invoke MpscQueue.size() from multiple threads?\r\n\r\nIt appears that further edits do not get sent as further e-mails.\r\n\r\nSo, yes, it's safe to call size on any queue in jctools from any thread, no issue. Furthermore, current queue being used is either the MpscChunkedArrayQueue or MpscUnboundedArrayQueue:\r\nhttps://github.com/netty/netty/blob/06f3574e46e0623a45a93d36dbe7aa3d7455a995/common/src/main/java/io/netty/util/internal/PlatformDependent.java#L823\r\n\r\nBoth of which have a very fast `size()` method.\r\n\r\nAs a side note, I'm not sure why in the non-unsafe case you do not use MpscChunkedAtomicArrayQueue:\r\nhttps://github.com/netty/netty/blob/06f3574e46e0623a45a93d36dbe7aa3d7455a995/common/src/main/java/io/netty/util/internal/PlatformDependent.java#L859\r\n\r\nThanks", "@nitsanw maybe consider doing a PR ? 
:)", "@nitsanw all good, cheers for the follow-up!", "@vkostyukov fixes via https://github.com/netty/netty/pull/8197 by @Wingfril :)", "@vkostyukov btw if there is anything that makes sense to contribute back to netty in terms of a custom `EventExecutorChooserFactory` I would be more then happy to review it :)", "@normanmaurer We've been getting pretty mixed results so far (@Wingfril can probably speak more about it) as we weren't really able to employ the `pendingTask` method.", "@Wingfril I would very interested to hear more.", "@normanmaurer \r\nHeyo. \r\nSo we tried several different load metrics to try and apply power of 2 choices (and also a heap when using number of channels as a load metric), such as keeping track of the amount of channels in each eventloop, looking at the pending tasks per eventloop, and also I modified a local copy of netty to look at the awake time/alive time of the eventloop. \r\n\r\nIn staging and in simulations, none of the metrics that we tried had improved overall latency or throughput. In fact, I don't think there was even a decrease in standard deviations between stats of the netty default and 2 of the load metrics. When we looked at the number of channels per eventloop, we were able to make the channels more evenly spread out to the eventloops, but again, this did not translate to performance improvements. ", "@Wingfril interesting.. Let me know if you find anything interesting in the future. " ]
[ "I think all these changes should go into `EpollEventLoop` / `NioEventLoop` / `KQueueEventLoop` and the `SingleThreadEventExecutor` should not be changed. This is mainly because otherwise it may produce unexpected effects for people that extend the `SingleThreadEventExecutor`. ", "This should be final btw." ]
"2018-08-15T17:31:07Z"
[ "improvement" ]
(Nio|Epoll)EventLoop.pendingTasks should not block when called outside of event loop
### Expected behavior `EventLoop.pendingTasks` should be (reasonably) cheap to invoke so it can be used within observability. ### Actual behavior We've been experimenting with overriding `EventExecutorChooserFactory` with more superior load balancing algorithms and found out that it might be quite non-trivial to support pending-tasks load metric (pick the least loaded event loop). After poking around the code, it seems when querying `pendingTasks` of either `NioEventLoop` or `EpollEventLoop`, the method [blocks if called outside of said event loop](https://github.com/netty/netty/blob/630c82717dd8d822323c9e4a8ad51ea4f8a38f86/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java#L262-L272). Put it this way, it's only cheap to query event loop's pending tasks from within the same event loop. Perhaps I miss some (historical) context for `pendingTask`, but for an uneducated reader, it just defeats the purpose of a method providing visibility into an event loop. I was hoping to pick up some context on the current behavior of `pendingTasks` and see if we ever want/will to change it.
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java index bb51d8f3962..be5814f5955 100644 --- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java @@ -304,7 +304,7 @@ protected boolean hasTasks() { * Return the number of tasks that are pending for processing. * * <strong>Be aware that this operation may be expensive as it depends on the internal implementation of the - * SingleThreadEventExecutor. So use it was care!</strong> + * SingleThreadEventExecutor. So use it with care!</strong> */ public int pendingTasks() { return taskQueue.size(); diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java index 68e9c125a18..c90a2ca1069 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java @@ -73,12 +73,6 @@ public int get() throws Exception { return epollWaitNow(); } }; - private final Callable<Integer> pendingTasksCallable = new Callable<Integer>() { - @Override - public Integer call() throws Exception { - return EpollEventLoop.super.pendingTasks(); - } - }; private volatile int wakenUp; private volatile int ioRatio = 50; @@ -215,17 +209,6 @@ protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } - @Override - public int pendingTasks() { - // As we use a MpscQueue we need to ensure pendingTasks() is only executed from within the EventLoop as - // otherwise we may see unexpected behavior (as size() is only allowed to be called by a single consumer). 
- // See https://github.com/netty/netty/issues/5297 - if (inEventLoop()) { - return super.pendingTasks(); - } else { - return submit(pendingTasksCallable).syncUninterruptibly().getNow(); - } - } /** * Returns the percentage of the desired amount of time spent for I/O in the event loop. */ diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java index 5af59ba912c..64a09c97d1b 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java @@ -66,12 +66,6 @@ public int get() throws Exception { return kqueueWaitNow(); } }; - private final Callable<Integer> pendingTasksCallable = new Callable<Integer>() { - @Override - public Integer call() throws Exception { - return KQueueEventLoop.super.pendingTasks(); - } - }; private volatile int wakenUp; private volatile int ioRatio = 50; @@ -301,14 +295,6 @@ protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } - @Override - public int pendingTasks() { - // As we use a MpscQueue we need to ensure pendingTasks() is only executed from within the EventLoop as - // otherwise we may see unexpected behavior (as size() is only allowed to be called by a single consumer). - // See https://github.com/netty/netty/issues/5297 - return inEventLoop() ? super.pendingTasks() : submit(pendingTasksCallable).syncUninterruptibly().getNow(); - } - /** * Returns the percentage of the desired amount of time spent for I/O in the event loop. 
*/ diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index 2c197775a92..e36e24a2797 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -71,12 +71,6 @@ public int get() throws Exception { return selectNow(); } }; - private final Callable<Integer> pendingTasksCallable = new Callable<Integer>() { - @Override - public Integer call() throws Exception { - return NioEventLoop.super.pendingTasks(); - } - }; // Workaround for JDK NIO bug. // @@ -259,18 +253,6 @@ protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } - @Override - public int pendingTasks() { - // As we use a MpscQueue we need to ensure pendingTasks() is only executed from within the EventLoop as - // otherwise we may see unexpected behavior (as size() is only allowed to be called by a single consumer). - // See https://github.com/netty/netty/issues/5297 - if (inEventLoop()) { - return super.pendingTasks(); - } else { - return submit(pendingTasksCallable).syncUninterruptibly().getNow(); - } - } - /** * Registers an arbitrary {@link SelectableChannel}, not necessarily created by Netty, to the {@link Selector} * of this event loop. Once the specified {@link SelectableChannel} is registered, the specified {@code task} will
null
train
val
"2018-08-15T20:07:56"
"2018-08-14T17:07:53Z"
vkostyukov
val
netty/netty/8201_8204
netty/netty
netty/netty/8201
netty/netty/8204
[ "timestamp(timedelta=0.0, similarity=0.8970199075001212)" ]
785473788f3f19531294802b727fb10a48938222
8679c5ef43d285b22ae603965bb8254164f1570e
[ "@jprante can you provide a PR ?\r\n\r\n", "Method handles works just fine in Java 11, see https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L339", "@johnou @jprante I think we should look into using handles... Can you open an issue ?", "Fixed in https://github.com/netty/netty/pull/8204. @jprante PTAL", "@jprante this will allow to use the native transports without unsafe:\r\n\r\nhttps://github.com/netty/netty/pull/8231" ]
[ "idle question: why not use PrivilegedExceptionAction ?", "Could just catch a `Throwable`?", "@trustin see https://github.com/netty/netty/issues/6096", "Also honestly I think it will not make any difference. So maybe just keep it ?", "I think there is not much advantage here.. We will need to unwrap then etc. " ]
"2018-08-18T11:55:54Z"
[ "defect" ]
CleanerJava9 instantiation fails under SecurityManager
### Expected behavior When instantiating `io.netty.util.internal.CleanerJava9`, it should be possible when running under a SecurityManager. ### Actual behavior `io.netty.util.internal.CleanerJava9` fails with `java.lang.ExceptionInInitializerError` ``` java.lang.ExceptionInInitializerError: null at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:186) ~[?:?] at io.netty.util.ConstantPool.<init>(ConstantPool.java:32) ~[?:?] at io.netty.util.AttributeKey$1.<init>(AttributeKey.java:27) ~[?:?] at io.netty.util.AttributeKey.<clinit>(AttributeKey.java:27) ~[?:?] at org.elasticsearch.transport.netty4.Netty4Transport.<clinit>(Netty4Transport.java:263) ~[?:?] at org.elasticsearch.transport.Netty4Plugin.getSettings(Netty4Plugin.java:56) ~[?:?] at org.elasticsearch.plugins.PluginsService.lambda$getPluginSettings$0(PluginsService.java:90) ~[elasticsearch-6.3.2.jar:?] at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) ~[?:?] at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1492) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?] at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) ~[?:?] at org.elasticsearch.plugins.PluginsService.getPluginSettings(PluginsService.java:90) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.node.Node.<init>(Node.java:327) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.node.Node.<init>(Node.java:251) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:215) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:215) ~[elasticsearch-6.3.2.jar:?] 
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:328) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.3.2.jar:?] at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.3.2.jar:?] Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "accessDeclaredMembers") at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:?] at java.security.AccessController.checkPermission(AccessController.java:895) ~[?:?] at java.lang.SecurityManager.checkPermission(SecurityManager.java:335) ~[?:?] at java.lang.Class.checkMemberAccess(Class.java:2806) ~[?:?] at java.lang.Class.getDeclaredMethod(Class.java:2430) ~[?:?] at io.netty.util.internal.CleanerJava9.<clinit>(CleanerJava9.java:41) ~[?:?] ... 27 more ``` The method call `PlatformDependent0.UNSAFE.getClass().getDeclaredMethod("invokeCleaner", ByteBuffer.class)` and `INVOKE_CLEANER.invoke(PlatformDependent0.UNSAFE, buffer);` should be protected by ``` AccessController.doPrivileged(new PrivilegedExceptionAction() { public Object run() throws Exception { .... } } ```` so that a security manager can apply a policy to allow "accessDeclaredMembers". I know that using `PlatformDependent0.UNSAFE` is riding like a dead horse - it appears in Java 10 for the last time. Java 11 will drop `sun.misc.Unsafe`. 
A solution (or demonstration) for using native Linux epoll transport under Java 10 (without `sun.misc.Unsafe`) and Java 11+ is most appreciated. ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version 4.1.28.Final ### JVM version (e.g. `java -version`) openjdk version "10.0.2" 2018-07-17 OpenJDK Runtime Environment (build 10.0.2+13) OpenJDK 64-Bit Server VM (build 10.0.2+13, mixed mode) ### OS version (e.g. `uname -a`) Linux phoebe 4.17.12-200.fc28.x86_64 #1 SMP Fri Aug 3 15:01:13 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[ "common/src/main/java/io/netty/util/internal/CleanerJava9.java" ]
[ "common/src/main/java/io/netty/util/internal/CleanerJava9.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/CleanerJava9.java b/common/src/main/java/io/netty/util/internal/CleanerJava9.java index d111a7e54a5..3a0c5854c38 100644 --- a/common/src/main/java/io/netty/util/internal/CleanerJava9.java +++ b/common/src/main/java/io/netty/util/internal/CleanerJava9.java @@ -21,6 +21,8 @@ import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.nio.ByteBuffer; +import java.security.AccessController; +import java.security.PrivilegedAction; /** * Provide a way to clean a ByteBuffer on Java9+. @@ -34,20 +36,26 @@ final class CleanerJava9 implements Cleaner { final Method method; final Throwable error; if (PlatformDependent0.hasUnsafe()) { - ByteBuffer buffer = ByteBuffer.allocateDirect(1); - Object maybeInvokeMethod; - try { - // See https://bugs.openjdk.java.net/browse/JDK-8171377 - Method m = PlatformDependent0.UNSAFE.getClass().getDeclaredMethod("invokeCleaner", ByteBuffer.class); - m.invoke(PlatformDependent0.UNSAFE, buffer); - maybeInvokeMethod = m; - } catch (NoSuchMethodException e) { - maybeInvokeMethod = e; - } catch (InvocationTargetException e) { - maybeInvokeMethod = e; - } catch (IllegalAccessException e) { - maybeInvokeMethod = e; - } + final ByteBuffer buffer = ByteBuffer.allocateDirect(1); + Object maybeInvokeMethod = AccessController.doPrivileged(new PrivilegedAction<Object>() { + @Override + public Object run() { + try { + // See https://bugs.openjdk.java.net/browse/JDK-8171377 + Method m = PlatformDependent0.UNSAFE.getClass().getDeclaredMethod( + "invokeCleaner", ByteBuffer.class); + m.invoke(PlatformDependent0.UNSAFE, buffer); + return m; + } catch (NoSuchMethodException e) { + return e; + } catch (InvocationTargetException e) { + return e; + } catch (IllegalAccessException e) { + return e; + } + } + }); + if (maybeInvokeMethod instanceof Throwable) { method = null; error = (Throwable) maybeInvokeMethod; @@ -73,10 +81,35 @@ static boolean isSupported() { @Override 
public void freeDirectBuffer(ByteBuffer buffer) { - try { - INVOKE_CLEANER.invoke(PlatformDependent0.UNSAFE, buffer); - } catch (Throwable cause) { - PlatformDependent0.throwException(cause); + // Try to minimize overhead when there is no SecurityManager present. + // See https://bugs.openjdk.java.net/browse/JDK-8191053. + if (System.getSecurityManager() == null) { + try { + INVOKE_CLEANER.invoke(PlatformDependent0.UNSAFE, buffer); + } catch (Throwable cause) { + PlatformDependent0.throwException(cause); + } + } else { + freeDirectBufferPrivileged(buffer); + } + } + + private static void freeDirectBufferPrivileged(final ByteBuffer buffer) { + Exception error = AccessController.doPrivileged(new PrivilegedAction<Exception>() { + @Override + public Exception run() { + try { + INVOKE_CLEANER.invoke(PlatformDependent0.UNSAFE, buffer); + } catch (InvocationTargetException e) { + return e; + } catch (IllegalAccessException e) { + return e; + } + return null; + } + }); + if (error != null) { + PlatformDependent0.throwException(error); } } }
null
val
val
"2018-08-18T07:28:31"
"2018-08-17T08:02:31Z"
jprante
val
netty/netty/8208_8211
netty/netty
netty/netty/8208
netty/netty/8211
[ "timestamp(timedelta=0.0, similarity=0.862535738498586)" ]
2bb9f64e16dbb5cbbf691e284a97d745378a7b8a
6888af6ba57ab6b88ad95a9383ee3c3492efec96
[ "@ramtech123 thanks.. Check https://github.com/netty/netty/pull/8211. This allows you to specify the charset to use while decoding. ", "Based on my own testing with Windows Server 2016, it seems like Windows auto-detects the encoding. I tried with ANSI (ASCII), Unicode (UTF-16), and UTF-8 and they all worked. I saved a trash hostname in `hosts` (different for each test) and then did a ping. Ping noticed my trash hostname configuration each time.", "https://serverfault.com/a/452269 seems to think that the encoding should be UTF-8. I wonder if Windows became more permissive in a later version of Windows. I wasn't able to find any official documentation as to the encoding.", "@ejona86 I mean what we could do is test UTF-8, ASCII, Unicode (in this order) until we were able to parse something when on windows... WDYT ?", "@ejona86 , I also could not find any standards around hosts file encoding. The comment here https://serverfault.com/questions/926706/is-there-any-ietf-rfc-standard-defined-for-hosts-file-encoding indicates it never had a formal standard.\r\n\r\n@normanmaurer just curious about \"...until we were able to parse something when on windows\" - don't we expect this situation in non-Windows machine? ", "@ramtech123 I never heard of using something linux unicode for `/etc/hosts`.", "ok", "@normanmaurer, that sounds like it would work. Note that ASCII doesn't get you anything over UTF-8. Although Latin-1 or similar would, which may be the system's default encoding (dunno). And to be precise, \"Unicode\" would be \"UTF-16\" (probably UTF-16LE).\r\n\r\nLinux has been using UTF-8 for a really, really long time now. I did dig through glibc a very quick amount and it looks like it just used the system encoding in normal C-style (\"does this char* look like this char*?\").\r\n\r\nIn both cases, using _actual_ unicode codepoints may not be at issue (at which point UTF-8 vs some-random-codepage are the same). 
It sounds like @ramtech123 mainly suffered issue because the file was encoded with UTF-16.", "@ejona86 PTAL https://github.com/netty/netty/pull/8211 again" ]
[]
"2018-08-21T16:28:26Z"
[]
Support hosts file encoded in unicode format
### Expected behavior
Netty should be able to resolve an IP address from the hosts file when the file is encoded in unicode format. Java is able to resolve hostnames from a unicode-encoded hosts file. Disabling Vert.x (Netty) hostname resolution with the `-Dvertx.disableDnsResolver=true` flag in our application was the workaround we used, until we finally identified the root cause.

### Actual behavior
`HostsFileParser.parseSilently()` returns a `HostsFileEntries` instance with zero entries if the hosts file is encoded in unicode format.

### Steps to reproduce
1. Open the hosts file with a text editor (e.g. Windows Notepad) as Administrator.
2. Save the hosts file with unicode encoding.
3. Run the reproducer and observe the console output.

### Minimal yet complete reproducer code (or URL to code)
https://github.com/ramtech123/pocs/tree/master/netty-hosts-reproducer

### Netty version
4.1

### JVM version (e.g. `java -version`)
Java 1.8

### OS version (e.g. `uname -a`)
Tested with Windows 10, Windows Server 2012, 2016
[ "resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java", "resolver/src/main/java/io/netty/resolver/HostsFileParser.java" ]
[ "resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java", "resolver/src/main/java/io/netty/resolver/HostsFileParser.java" ]
[ "resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java" ]
diff --git a/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java b/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java index 7598a29200c..9051262eb1c 100644 --- a/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java +++ b/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java @@ -15,11 +15,14 @@ */ package io.netty.resolver; +import io.netty.util.CharsetUtil; +import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.UnstableApi; import java.net.Inet4Address; import java.net.Inet6Address; import java.net.InetAddress; +import java.nio.charset.Charset; import java.util.Locale; import java.util.Map; @@ -33,7 +36,7 @@ public final class DefaultHostsFileEntriesResolver implements HostsFileEntriesRe private final Map<String, Inet6Address> inet6Entries; public DefaultHostsFileEntriesResolver() { - this(HostsFileParser.parseSilently()); + this(parseEntries()); } // for testing purpose only @@ -65,4 +68,14 @@ public InetAddress address(String inetHost, ResolvedAddressTypes resolvedAddress String normalize(String inetHost) { return inetHost.toLowerCase(Locale.ENGLISH); } + + private static HostsFileEntries parseEntries() { + if (PlatformDependent.isWindows()) { + // Ony windows there seems to be no standard for the encoding used for the hosts file, so let us + // try multiple until we either were able to parse it or there is none left and so we return an + // empty intstance. 
+ return HostsFileParser.parseSilently(Charset.defaultCharset(), CharsetUtil.UTF_16, CharsetUtil.UTF_8); + } + return HostsFileParser.parseSilently(); + } } diff --git a/resolver/src/main/java/io/netty/resolver/HostsFileParser.java b/resolver/src/main/java/io/netty/resolver/HostsFileParser.java index 1997fe9e165..16eaba6167f 100644 --- a/resolver/src/main/java/io/netty/resolver/HostsFileParser.java +++ b/resolver/src/main/java/io/netty/resolver/HostsFileParser.java @@ -23,12 +23,14 @@ import java.io.BufferedReader; import java.io.File; -import java.io.FileReader; +import java.io.FileInputStream; import java.io.IOException; +import java.io.InputStreamReader; import java.io.Reader; import java.net.Inet4Address; import java.net.Inet6Address; import java.net.InetAddress; +import java.nio.charset.Charset; import java.util.ArrayList; import java.util.List; import java.util.Locale; @@ -66,14 +68,25 @@ private static File locateHostsFile() { } /** - * Parse hosts file at standard OS location. + * Parse hosts file at standard OS location using the systems default {@link Charset} for decoding. * * @return a {@link HostsFileEntries} */ public static HostsFileEntries parseSilently() { + return parseSilently(Charset.defaultCharset()); + } + + /** + * Parse hosts file at standard OS location using the given {@link Charset}s one after each other until + * we were able to parse something or none is left. + * + * @param charsets the {@link Charset}s to try as file encodings when parsing. + * @return a {@link HostsFileEntries} + */ + public static HostsFileEntries parseSilently(Charset... charsets) { File hostsFile = locateHostsFile(); try { - return parse(hostsFile); + return parse(hostsFile, charsets); } catch (IOException e) { if (logger.isWarnEnabled()) { logger.warn("Failed to load and parse hosts file at " + hostsFile.getPath(), e); @@ -83,7 +96,7 @@ public static HostsFileEntries parseSilently() { } /** - * Parse hosts file at standard OS location. 
+ * Parse hosts file at standard OS location using the system default {@link Charset} for decoding. * * @return a {@link HostsFileEntries} * @throws IOException file could not be read @@ -93,19 +106,37 @@ public static HostsFileEntries parse() throws IOException { } /** - * Parse a hosts file. + * Parse a hosts file using the system default {@link Charset} for decoding. * * @param file the file to be parsed * @return a {@link HostsFileEntries} * @throws IOException file could not be read */ public static HostsFileEntries parse(File file) throws IOException { + return parse(file, Charset.defaultCharset()); + } + + /** + * Parse a hosts file. + * + * @param file the file to be parsed + * @param charsets the {@link Charset}s to try as file encodings when parsing. + * @return a {@link HostsFileEntries} + * @throws IOException file could not be read + */ + public static HostsFileEntries parse(File file, Charset... charsets) throws IOException { checkNotNull(file, "file"); + checkNotNull(charsets, "charsets"); if (file.exists() && file.isFile()) { - return parse(new BufferedReader(new FileReader(file))); - } else { - return HostsFileEntries.EMPTY; + for (Charset charset: charsets) { + HostsFileEntries entries = parse(new BufferedReader(new InputStreamReader( + new FileInputStream(file), charset))); + if (entries != HostsFileEntries.EMPTY) { + return entries; + } + } } + return HostsFileEntries.EMPTY; } /**
diff --git a/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java b/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java index 6b908f57420..ac4e4b2eb69 100644 --- a/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java +++ b/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java @@ -15,13 +15,18 @@ */ package io.netty.resolver; +import io.netty.util.CharsetUtil; +import org.junit.Assume; import org.junit.Test; import java.io.BufferedReader; +import java.io.File; import java.io.IOException; import java.io.StringReader; import java.net.Inet4Address; import java.net.Inet6Address; +import java.nio.charset.Charset; +import java.nio.charset.UnsupportedCharsetException; import java.util.Map; import static org.junit.Assert.*; @@ -60,4 +65,41 @@ public void testParse() throws IOException { assertEquals("192.168.0.5", inet4Entries.get("host7").getHostAddress()); assertEquals("0:0:0:0:0:0:0:1", inet6Entries.get("host1").getHostAddress()); } + + @Test + public void testParseUnicode() throws IOException { + final Charset unicodeCharset; + try { + unicodeCharset = Charset.forName("unicode"); + } catch (UnsupportedCharsetException e) { + Assume.assumeNoException(e); + return; + } + testParseFile(HostsFileParser.parse( + new File(getClass().getResource("hosts-unicode").getFile()), unicodeCharset)); + } + + @Test + public void testParseMultipleCharsets() throws IOException { + final Charset unicodeCharset; + try { + unicodeCharset = Charset.forName("unicode"); + } catch (UnsupportedCharsetException e) { + Assume.assumeNoException(e); + return; + } + testParseFile(HostsFileParser.parse(new File(getClass().getResource("hosts-unicode").getFile()), + CharsetUtil.UTF_8, CharsetUtil.ISO_8859_1, unicodeCharset)); + } + + private static void testParseFile(HostsFileEntries entries) throws IOException { + Map<String, Inet4Address> inet4Entries = entries.inet4Entries(); + Map<String, Inet6Address> inet6Entries = entries.inet6Entries(); + + 
assertEquals("Expected 2 IPv4 entries", 2, inet4Entries.size()); + assertEquals("Expected 1 IPv6 entries", 1, inet6Entries.size()); + assertEquals("127.0.0.1", inet4Entries.get("localhost").getHostAddress()); + assertEquals("255.255.255.255", inet4Entries.get("broadcasthost").getHostAddress()); + assertEquals("0:0:0:0:0:0:0:1", inet6Entries.get("localhost").getHostAddress()); + } }
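The idea behind the fix — try several charsets in order until one of them yields usable entries — can be sketched independently of Netty. Everything below (the `MultiCharsetReader` name and the NUL/replacement-character heuristic for spotting a wrong charset) is our own illustration, not the `HostsFileParser` API, which instead checks whether any host entries were parsed.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class MultiCharsetReader {

    /**
     * Decodes {@code content} with each charset in order and returns the
     * first attempt producing at least one plausible, non-comment line.
     * A hosts file decoded with the wrong charset typically shows up as NUL
     * characters (UTF-16 bytes read as UTF-8) or U+FFFD replacement
     * characters, which is what the heuristic below rejects.
     */
    public static List<String> readLines(byte[] content, Charset... charsets) {
        for (Charset charset : charsets) {
            List<String> lines = decode(content, charset);
            if (!lines.isEmpty()) {
                return lines;
            }
        }
        return Collections.emptyList();
    }

    private static List<String> decode(byte[] content, Charset charset) {
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(content), charset));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.indexOf('\uFFFD') >= 0 || line.indexOf('\0') >= 0) {
                    // Wrong charset: abandon this decoding attempt entirely.
                    return Collections.emptyList();
                }
                String trimmed = line.trim();
                if (!trimmed.isEmpty() && !trimmed.startsWith("#")) {
                    lines.add(trimmed);
                }
            }
        } catch (IOException ignored) {
            // Cannot happen when reading from an in-memory byte array.
        }
        return lines;
    }
}
```

Note that the `UTF-16` charset consumes the byte-order mark itself, so a Notepad-saved "Unicode" file decodes cleanly once the right charset is tried.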
test
val
"2018-08-23T11:07:09"
"2018-08-20T13:29:06Z"
ramtech123
val
netty/netty/8220_8227
netty/netty
netty/netty/8220
netty/netty/8227
[ "timestamp(timedelta=50.0, similarity=1.0)" ]
6888af6ba57ab6b88ad95a9383ee3c3492efec96
3d3af5b6c7ce5708488e2affec4781223a793b14
[]
[]
"2018-08-25T01:50:37Z"
[ "defect" ]
Recycler will produce npe error when multiple recycled at different thread
Netty version: 4.1.29.Final-SNAPSHOT

```java
public void testMultipleRecycleAtDifferentThread() {
    Recycler<HandledObject> recycler = newRecycler(1024);
    final HandledObject object = recycler.get();
    final Thread thread1 = new Thread(new Runnable() {
        @Override
        public void run() {
            object.recycle(); // stack=null
        }
    });
    thread1.start();
    try {
        thread1.join();
    } catch (InterruptedException e) {
    }
    final Thread thread2 = new Thread(new Runnable() {
        @Override
        public void run() {
            object.recycle(); // stack npe
        }
    });
    thread2.start();
    try {
        thread2.join();
    } catch (InterruptedException e) {
    }
}
```
[ "common/src/main/java/io/netty/util/Recycler.java" ]
[ "common/src/main/java/io/netty/util/Recycler.java" ]
[ "common/src/test/java/io/netty/util/RecyclerTest.java" ]
diff --git a/common/src/main/java/io/netty/util/Recycler.java b/common/src/main/java/io/netty/util/Recycler.java
index 491464d9a31..5e7db4235b1 100644
--- a/common/src/main/java/io/netty/util/Recycler.java
+++ b/common/src/main/java/io/netty/util/Recycler.java
@@ -216,6 +216,11 @@ public void recycle(Object object) {
             if (object != value) {
                 throw new IllegalArgumentException("object does not belong to handle");
             }
+
+            if (this.lastRecycledId != this.recycleId) {
+                throw new IllegalStateException("recycled already");
+            }
+
             stack.push(this);
         }
     }
diff --git a/common/src/test/java/io/netty/util/RecyclerTest.java b/common/src/test/java/io/netty/util/RecyclerTest.java index 6eeace5f001..e4bcbf1da46 100644 --- a/common/src/test/java/io/netty/util/RecyclerTest.java +++ b/common/src/test/java/io/netty/util/RecyclerTest.java @@ -81,6 +81,38 @@ public void testMultipleRecycle() { object.recycle(); } + @Test(expected = IllegalStateException.class) + public void testMultipleRecycleAtDifferentThread() throws InterruptedException { + Recycler<HandledObject> recycler = newRecycler(1024); + final HandledObject object = recycler.get(); + final AtomicReference<IllegalStateException> exceptionStore = new AtomicReference<IllegalStateException>(); + final Thread thread1 = new Thread(new Runnable() { + @Override + public void run() { + object.recycle(); + } + }); + thread1.start(); + thread1.join(); + + final Thread thread2 = new Thread(new Runnable() { + @Override + public void run() { + try { + object.recycle(); + } catch (IllegalStateException e) { + exceptionStore.set(e); + } + } + }); + thread2.start(); + thread2.join(); + IllegalStateException exception = exceptionStore.get(); + if (exception != null) { + throw exception; + } + } + @Test public void testRecycle() { Recycler<HandledObject> recycler = newRecycler(1024);
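The guard the patch adds — fail fast when the same handle is recycled twice — can be reduced to a tiny pool sketch. The `SimplePool` class below is our own illustration of the idea, not Netty's code: the real `Recycler` compares `lastRecycledId` with `recycleId` rather than using a boolean flag, and it uses thread-local stacks plus a `WeakOrderQueue` instead of a single lock.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class SimplePool {
    private final Deque<Handle> free = new ArrayDeque<Handle>();

    public final class Handle {
        private final Object value;
        private boolean recycled;

        Handle(Object value) {
            this.value = value;
        }

        public Object get() {
            return value;
        }

        public void recycle() {
            synchronized (SimplePool.this) {
                // Reject the second recycle() no matter which thread calls it,
                // instead of corrupting the pool (or failing later with an NPE).
                if (recycled) {
                    throw new IllegalStateException("recycled already");
                }
                recycled = true;
                free.push(this);
            }
        }
    }

    public synchronized Handle claim(Object value) {
        Handle handle = free.poll();
        if (handle != null) {
            // A real pool would reset the pooled object's state here.
            handle.recycled = false;
            return handle;
        }
        return new Handle(value);
    }
}
```

The important property, matching the issue above, is that the second `recycle()` throws `IllegalStateException` even when it happens on a different thread than the first.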
train
val
"2018-08-24T19:48:27"
"2018-08-24T13:12:33Z"
zhaojigang
val
netty/netty/8230_8232
netty/netty
netty/netty/8230
netty/netty/8232
[ "timestamp(timedelta=1.0, similarity=0.9491979909819099)", "keyword_pr_to_issue" ]
8679c5ef43d285b22ae603965bb8254164f1570e
79706357c73ded02615d0445db7503b646ff9547
[]
[]
"2018-08-28T18:23:59Z"
[ "defect" ]
Race condition in the NonStickyEventExecutorGroup class
### Expected behavior
All `Runnable` tasks submitted through `NonStickyEventExecutorGroup` should get executed.

### Actual behavior
Occasionally the last task submitted is not getting executed.

### Steps to reproduce
The following unit test has a non-zero chance of failure (it fails about half of the time on my laptop).

```java
@Test
public void testRaceCondition() throws InterruptedException {
    EventExecutorGroup group = new UnorderedThreadPoolEventExecutor(1);
    NonStickyEventExecutorGroup nonStickyGroup = new NonStickyEventExecutorGroup(group, maxTaskExecutePerRun);

    try {
        EventExecutor executor = nonStickyGroup.next();

        for (int j = 0; j < 5000; j++) {
            final CountDownLatch firstCompleted = new CountDownLatch(1);
            final CountDownLatch latch = new CountDownLatch(2);
            for (int i = 0; i < 2; i++) {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        firstCompleted.countDown();
                        latch.countDown();
                    }
                });
                Assert.assertTrue(firstCompleted.await(1, TimeUnit.SECONDS));
            }

            Assert.assertTrue(latch.await(5, TimeUnit.SECONDS));
        }
    } finally {
        nonStickyGroup.shutdownGracefully();
    }
}
```

### Bug Description
In the `NonStickyOrderedEventExecutor` class inside the `NonStickyEventExecutorGroup` class, when the `execute(Runnable)` method (line [314](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java#L314)) is called, it enqueues the runnable and then checks the `state` to see if it needs to submit itself to the underlying `executor`. However, if at the same time the `run` thread has already broken out of the polling loop but has not yet reset the `state` to `NONE` (line [260](https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java#L260)), then the runnable that was just enqueued will not be executed, because the `compareAndSet` check returns `false`, until the next time `execute(Runnable)` is called with a new runnable.

This bug manifests itself as an HTTP client occasionally not receiving any response from the server after the request was sent, when the server side uses `NonStickyEventExecutorGroup` as the event executor group for the child handler.

### Netty version
4.1.30.Final-SNAPSHOT

### JVM version (e.g. `java -version`)
1.8.0_151

### OS version (e.g. `uname -a`)
Darwin <host> 17.7.0 Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64
[ "common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java" ]
[ "common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java" ]
[ "common/src/test/java/io/netty/util/concurrent/NonStickyEventExecutorGroupTest.java" ]
diff --git a/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java b/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java
index ed3a5873eca..bcc4b829964 100644
--- a/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java
+++ b/common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java
@@ -259,7 +259,24 @@ public void run() {
                     }
                 } else {
                     state.set(NONE);
-                    return; // done
+                    // After setting the state to NONE, look at the tasks queue one more time.
+                    // If it is empty, then we can return from this method.
+                    // Otherwise, it means the producer thread has called execute(Runnable)
+                    // and enqueued a task in between the tasks.poll() above and the state.set(NONE) here.
+                    // There are two possible scenarios when this happen
+                    //
+                    // 1. The producer thread sees state == NONE, hence the compareAndSet(NONE, SUBMITTED)
+                    //    is successfully setting the state to SUBMITTED. This mean the producer
+                    //    will call / has called executor.execute(this). In this case, we can just return.
+                    // 2. The producer thread don't see the state change, hence the compareAndSet(NONE, SUBMITTED)
+                    //    returns false. In this case, the producer thread won't call executor.execute.
+                    //    In this case, we need to change the state to RUNNING and keeps running.
+                    //
+                    // The above cases can be distinguished by performing a
+                    // compareAndSet(NONE, RUNNING). If it returns "false", it is case 1; otherwise it is case 2.
+                    if (tasks.peek() == null || !state.compareAndSet(NONE, RUNNING)) {
+                        return; // done
+                    }
                 }
             }
         }
diff --git a/common/src/test/java/io/netty/util/concurrent/NonStickyEventExecutorGroupTest.java b/common/src/test/java/io/netty/util/concurrent/NonStickyEventExecutorGroupTest.java index 16035505e8a..b32df7eb4f7 100644 --- a/common/src/test/java/io/netty/util/concurrent/NonStickyEventExecutorGroupTest.java +++ b/common/src/test/java/io/netty/util/concurrent/NonStickyEventExecutorGroupTest.java @@ -25,6 +25,7 @@ import java.util.Collection; import java.util.List; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; @@ -93,6 +94,35 @@ public void run() { } } + @Test + public void testRaceCondition() throws InterruptedException { + EventExecutorGroup group = new UnorderedThreadPoolEventExecutor(1); + NonStickyEventExecutorGroup nonStickyGroup = new NonStickyEventExecutorGroup(group, maxTaskExecutePerRun); + + try { + EventExecutor executor = nonStickyGroup.next(); + + for (int j = 0; j < 5000; j++) { + final CountDownLatch firstCompleted = new CountDownLatch(1); + final CountDownLatch latch = new CountDownLatch(2); + for (int i = 0; i < 2; i++) { + executor.execute(new Runnable() { + @Override + public void run() { + firstCompleted.countDown(); + latch.countDown(); + } + }); + Assert.assertTrue(firstCompleted.await(1, TimeUnit.SECONDS)); + } + + Assert.assertTrue(latch.await(5, TimeUnit.SECONDS)); + } + } finally { + nonStickyGroup.shutdownGracefully(); + } + } + private static void execute(EventExecutorGroup group, CountDownLatch startLatch) throws Throwable { EventExecutor executor = group.next(); Assert.assertTrue(executor instanceof OrderedEventExecutor);
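Stripped of Netty specifics, the executor's NONE/SUBMITTED/RUNNING state machine and the fix can be sketched as below. `SerialExecutor` is a hypothetical name of our own; the real `NonStickyOrderedEventExecutor` additionally honours `maxTaskExecutePerRun` and several other details omitted here. The key line is the re-check after `state.set(NONE)`.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

public final class SerialExecutor implements Runnable {
    private static final int NONE = 0, SUBMITTED = 1, RUNNING = 2;

    private final Executor executor;
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<Runnable>();
    private final AtomicInteger state = new AtomicInteger(NONE);

    public SerialExecutor(Executor executor) {
        this.executor = executor;
    }

    public void execute(Runnable task) {
        tasks.offer(task);
        // Only submit the drain loop if nobody is draining or submitted yet.
        if (state.compareAndSet(NONE, SUBMITTED)) {
            executor.execute(this);
        }
    }

    @Override
    public void run() {
        state.set(RUNNING);
        for (;;) {
            Runnable task = tasks.poll();
            if (task != null) {
                task.run();
            } else {
                state.set(NONE);
                // The fix: re-check the queue. A producer may have enqueued
                // between the poll() above and state.set(NONE); if it then lost
                // the CAS to SUBMITTED, nobody would ever drain its task, so we
                // claim RUNNING again and keep going instead of returning.
                if (tasks.peek() == null || !state.compareAndSet(NONE, RUNNING)) {
                    return; // done
                }
            }
        }
    }
}
```

With a same-thread `Executor` the behaviour is deterministic, which makes the happy path easy to check; the race itself only shows up when producer and drain loop run concurrently, as in the 5000-iteration test above.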
train
val
"2018-08-28T16:32:29"
"2018-08-28T01:41:06Z"
chtyim
val
netty/netty/8217_8240
netty/netty
netty/netty/8217
netty/netty/8240
[ "timestamp(timedelta=0.0, similarity=0.8585023550456583)" ]
c74b3f3a3b73fee125048b0f486fc9c19fb3bc14
3eec66a974f85bb154966750678348afdb694789
[ "fwiw - we use OpenSSL in some integration tests. They have to run on Java 6. I end-up with implementing this nasty hack (test-code only):\r\n```java\r\n private static void patchOpenSslLogging() {\r\n InternalLoggerFactory originFactory = InternalLoggerFactory.getDefaultFactory();\r\n if (!originFactory.getClass().getName().equals(\"io.netty.util.internal.logging.Log4J2LoggerFactory\")) {\r\n // it's already patched or netty does not use log4j2 at all\r\n return;\r\n }\r\n\r\n // Netty created Log4J2LoggerFactory this implies we have log4j2 on a classpath\r\n // let's check what version is this\r\n try {\r\n ExtendedLoggerWrapper.class.getMethod(\"debug\", String.class, Object.class);\r\n // apparently we are using a new log4j2 version! This means we live in the future\r\n // and no longer supporting Java 6. Hence this whole netty logging patching should\r\n // be removed!\r\n throw new AssertionError(\"You managed to upgrade log4j2, congratulations!\"\r\n + \" Now let's removing the ugly netty logging patching!\");\r\n } catch (NoSuchMethodException e) {\r\n // ok, this is an old version of log4j2. let's do some ugly patching\r\n\r\n // reasoning:\r\n // log4j2 2.3 is the last version supporting Java 6. However it's incompatible\r\n // with Netty - some methods are missing and this results in AbstractMethodError\r\n // so let's patch the Netty logging to use our wrapper which implements the missing\r\n // methods on its own.\r\n InternalLoggerFactory wrappedFactory = new PatchedLog4j2NettyLoggerFactory(originFactory);\r\n InternalLoggerFactory.setDefaultFactory(wrappedFactory);\r\n }\r\n }\r\n```\r\n\r\n```java\r\n\r\n/**\r\n * Patched log4j2 netty logger factory.\r\n *\r\n * Reasoning: Netty requires Java 6, but its Log4j2 logger implementation\r\n * requires a version of Log4j2 which does not work on Java 6.\r\n *\r\n * Netty with old Log4j2 version throws AbstractMethodError. 
This factory produces\r\n * loggers which bypass the missing methods.\r\n *\r\n */\r\npublic final class PatchedLog4j2NettyLoggerFactory extends InternalLoggerFactory {\r\n private final Log4J2LoggerFactory delegate;\r\n\r\n public PatchedLog4j2NettyLoggerFactory(InternalLoggerFactory delegate) {\r\n this.delegate = (Log4J2LoggerFactory) delegate;\r\n }\r\n\r\n @Override\r\n protected InternalLogger newInstance(String s) {\r\n return new DelegatingLogger(delegate.newInstance(s));\r\n }\r\n\r\n private static class DelegatingLogger implements InternalLogger {\r\n private final InternalLogger delegate;\r\n\r\n private DelegatingLogger(InternalLogger delegate) {\r\n this.delegate = delegate;\r\n }\r\n\r\n @Override\r\n public String name() {\r\n return delegate.name();\r\n }\r\n\r\n @Override\r\n public boolean isTraceEnabled() {\r\n return delegate.isTraceEnabled();\r\n }\r\n\r\n @Override\r\n public void trace(String s) {\r\n delegate.trace(s);\r\n }\r\n\r\n @Override\r\n public void trace(String s, Object o) {\r\n delegate.trace(s, new Object[]{o});\r\n }\r\n\r\n @Override\r\n public void trace(String s, Object o, Object o1) {\r\n delegate.trace(s, new Object[]{o, o1});\r\n }\r\n\r\n @Override\r\n public void trace(String s, Object... objects) {\r\n delegate.trace(s, objects);\r\n }\r\n\r\n @Override\r\n public void trace(String s, Throwable throwable) {\r\n delegate.trace(s, throwable);\r\n }\r\n\r\n @Override\r\n public void trace(Throwable throwable) {\r\n delegate.trace(throwable);\r\n }\r\n\r\n @Override\r\n public boolean isDebugEnabled() {\r\n return delegate.isDebugEnabled();\r\n }\r\n\r\n @Override\r\n public void debug(String s) {\r\n delegate.debug(s);\r\n }\r\n\r\n @Override\r\n public void debug(String s, Object o) {\r\n delegate.debug(s, new Object[]{o});\r\n }\r\n\r\n @Override\r\n public void debug(String s, Object o, Object o1) {\r\n delegate.debug(s, new Object[]{o, o1});\r\n }\r\n\r\n @Override\r\n public void debug(String s, Object... 
objects) {\r\n delegate.debug(s, objects);\r\n }\r\n\r\n @Override\r\n public void debug(String s, Throwable throwable) {\r\n delegate.debug(s, throwable);\r\n }\r\n\r\n @Override\r\n public void debug(Throwable throwable) {\r\n delegate.debug(throwable);\r\n }\r\n\r\n @Override\r\n public boolean isInfoEnabled() {\r\n return delegate.isInfoEnabled();\r\n }\r\n\r\n @Override\r\n public void info(String s) {\r\n delegate.info(s);\r\n }\r\n\r\n @Override\r\n public void info(String s, Object o) {\r\n delegate.info(s, new Object[]{o});\r\n }\r\n\r\n @Override\r\n public void info(String s, Object o, Object o1) {\r\n delegate.info(s, new Object[]{o, o1});\r\n }\r\n\r\n @Override\r\n public void info(String s, Object... objects) {\r\n delegate.info(s, objects);\r\n }\r\n\r\n @Override\r\n public void info(String s, Throwable throwable) {\r\n delegate.info(s, throwable);\r\n }\r\n\r\n @Override\r\n public void info(Throwable throwable) {\r\n delegate.info(throwable);\r\n }\r\n\r\n @Override\r\n public boolean isWarnEnabled() {\r\n return delegate.isWarnEnabled();\r\n }\r\n\r\n @Override\r\n public void warn(String s) {\r\n delegate.warn(s);\r\n }\r\n\r\n @Override\r\n public void warn(String s, Object o) {\r\n delegate.warn(s, new Object[]{o});\r\n }\r\n\r\n @Override\r\n public void warn(String s, Object... 
objects) {\r\n delegate.warn(s, objects);\r\n }\r\n\r\n @Override\r\n public void warn(String s, Object o, Object o1) {\r\n delegate.warn(s, new Object[]{o, o1});\r\n }\r\n\r\n @Override\r\n public void warn(String s, Throwable throwable) {\r\n delegate.warn(s, throwable);\r\n }\r\n\r\n @Override\r\n public void warn(Throwable throwable) {\r\n delegate.warn(throwable);\r\n }\r\n\r\n @Override\r\n public boolean isErrorEnabled() {\r\n return delegate.isErrorEnabled();\r\n }\r\n\r\n @Override\r\n public void error(String s) {\r\n delegate.error(s);\r\n }\r\n\r\n @Override\r\n public void error(String s, Object o) {\r\n delegate.error(s, new Object[]{o});\r\n }\r\n\r\n @Override\r\n public void error(String s, Object o, Object o1) {\r\n delegate.error(s, new Object[]{o, o1});\r\n }\r\n\r\n @Override\r\n public void error(String s, Object... objects) {\r\n delegate.error(s, objects);\r\n }\r\n\r\n @Override\r\n public void error(String s, Throwable throwable) {\r\n delegate.error(s, throwable);\r\n }\r\n\r\n @Override\r\n public void error(Throwable throwable) {\r\n delegate.error(throwable);\r\n }\r\n\r\n @Override\r\n public boolean isEnabled(InternalLogLevel internalLogLevel) {\r\n return delegate.isEnabled(internalLogLevel);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, String s) {\r\n delegate.log(internalLogLevel, s);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, String s, Object o) {\r\n delegate.log(internalLogLevel, s, o);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, String s, Object o, Object o1) {\r\n delegate.log(internalLogLevel, s, o, o1);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, String s, Object... 
objects) {\r\n delegate.log(internalLogLevel, s, objects);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, String s, Throwable throwable) {\r\n delegate.log(internalLogLevel, s, throwable);\r\n }\r\n\r\n @Override\r\n public void log(InternalLogLevel internalLogLevel, Throwable throwable) {\r\n delegate.log(internalLogLevel, throwable);\r\n }\r\n }\r\n```\r\n\r\nBasically it delegates methods `debug(String msg, Obect o)` into `debug(String msg, Object[] o)`. This creates some extra litter but that's fine for functional-only tests. ", "@jerrinot can you show me the stack trace please ?", "hi @normanmaurer, here is it:\r\n```xml\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\r\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\r\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\r\n <modelVersion>4.0.0</modelVersion>\r\n\r\n <groupId>info.jerrinot</groupId>\r\n <artifactId>netty-logging-issue-reproducer</artifactId>\r\n <version>1.0-SNAPSHOT</version>\r\n\r\n <properties>\r\n <netty.version>4.1.27.Final</netty.version>\r\n <netty-tcnative.version>2.0.12.Final</netty-tcnative.version>\r\n </properties>\r\n\r\n <dependencies>\r\n <dependency>\r\n <groupId>io.netty</groupId>\r\n <artifactId>netty-common</artifactId>\r\n <version>${netty.version}</version>\r\n </dependency>\r\n <dependency>\r\n <groupId>io.netty</groupId>\r\n <artifactId>netty-handler</artifactId>\r\n <version>${netty.version}</version>\r\n </dependency>\r\n <dependency>\r\n <groupId>io.netty</groupId>\r\n <artifactId>netty-tcnative-boringssl-static</artifactId>\r\n <version>${netty-tcnative.version}</version>\r\n </dependency>\r\n\r\n <dependency>\r\n <groupId>org.apache.logging.log4j</groupId>\r\n <artifactId>log4j-core</artifactId>\r\n <version>2.3</version>\r\n </dependency>\r\n\r\n <dependency>\r\n <groupId>junit</groupId>\r\n 
<artifactId>junit</artifactId>\r\n <version>4.12</version>\r\n </dependency>\r\n </dependencies>\r\n</project>\r\n```\r\n\r\n\r\n```java\r\n @Test\r\n public void testNettyLoggingStacktrace() throws Throwable {\r\n Throwable cause = OpenSsl.unavailabilityCause();\r\n if (cause != null) {\r\n throw cause;\r\n }\r\n }\r\n```\r\n\r\n```\r\njava.lang.AbstractMethodError: io.netty.util.internal.logging.Log4J2Logger.debug(Ljava/lang/String;Ljava/lang/Object;)V\r\n\r\n\tat io.netty.util.internal.PlatformDependent0.explicitNoUnsafeCause0(PlatformDependent0.java:376)\r\n\tat io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:42)\r\n\tat io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:208)\r\n\tat io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:79)\r\n\tat io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:424)\r\n\tat io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:97)\r\n\tat info.jerrinot.nettyloggingissue.ReproducerTest.testNettyLoggingStacktrace(ReproducerTest.java:16)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)\r\n\tat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)\r\n\tat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)\r\n\tat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)\r\n\tat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)\r\n\tat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)\r\n\tat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)\r\n\tat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)\r\n\tat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\r\n\tat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\r\n\tat org.junit.runners.ParentRunner.run(ParentRunner.java:363)\r\n\tat org.junit.runner.JUnitCore.run(JUnitCore.java:137)\r\n\tat com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)\r\n\tat com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)\r\n\tat com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)\r\n\tat com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)\r\n```\r\n\r\nI was quite confused by the stacktrace, it took me a while to realize what is going on: `io.netty.util.internal.logging.Log4J2Logger` is a subclass(!) of `org.apache.logging.log4j.spi.ExtendedLoggerWrapper`. \r\n\r\nThe Netty logging abstraction (`io.netty.util.internal.logging.InternalLogger`) has methods such as `debug(String format, Object arg)`. This method happens to be implemented by `o.a.l.l.s.ExtendedLoggerWrapper` (or its superclass), but only in newer log4j2 versions. When an old version of log4j2 is available in runtime then calling `internalLogger.debug(string, o)` throws the `AbstractMethodError`. ", "@jerrinot could you validate https://github.com/netty/netty/pull/8239 ?", "@jerrinot https://github.com/netty/netty/pull/8240 this one is better.", "hi @normanmaurer, I tried #8240 and I can confirm the test passes and Netty fallbacks to j.u.l. \r\n\r\nThis effectively means it's impossible to use Netty with Log4j2 on Java 6. That's not great for embedded use-cases. From this perspective the #8239 is nicer. 
\r\n\r\nI understand if you do not want to to pollute log4j2 logger with the branching for the sake of Java 6. Perhaps it could be a separate logger class. Think of `LegacyLog4J2Logger`. Would you accept a PR with this? ", "@ejona86 WDYT ?", "> This effectively means it's impossible to use Netty with Log4j2 on Java 6.\r\n\r\nYes, but that's never worked. I took this issue to be a bug report that Netty breaks (which it shouldn't). But it sounds like you're thinking of this more of a feature request to support log4j2 on Java 6?\r\n\r\n> I understand if you do not want to to pollute log4j2 logger with the branching for the sake of Java 6.\r\n\r\nI was not as concerned at the branching, but with the \"why are we investing this much?\" It seemed weird to go so far out of our way to add a new feature to Netty to support a 3-year-dead release (2.4 came out 3 years ago)? I'd understand more if log4j2 still supported Java 6 in some branch, but it doesn't seem to. I'd also understand more if log4j2+Netty+Java 6 was previously supported, but from what I can tell it wasn't. Also, log4j2 has actually received important changes which reduce garbage which make sense for Netty to use. I sort of feel like \"log4j2 is not supporting Java 6\" is the bug.\r\n\r\nThat said, the large changes to Log4J2Logger are also present in #8240 so I may have been misattributing some of what is necessary to support older Log4J2 versions." ]
[ "mismatch*" ]
"2018-08-30T05:55:29Z"
[]
Log4J2LoggerFactory depends on a log4j2 version which does not work on Java 6
### Expected behavior Netty logging works via log4j2 while on Java 6 ### Actual behavior Log4j2 2.4+ depends on Java 7. Netty depends on log4j2 methods which are present only in newer versions. When older log4j2 version is on a classpath then Netty fails to initialize itself due `AbstractMethodError` while logging. ### Steps to reproduce Have Netty 4.1.27 and log4j2 on a classpath and run `InternalLoggerFactory.getInstance("foo").debug("bar", new Object));` ### Minimal yet complete reproducer code (or URL to code) ```xml <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>info.jerrinot</groupId> <artifactId>netty-logging-issue-reproducer</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>io.netty</groupId> <artifactId>netty-common</artifactId> <version>4.1.27.Final</version> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <version>2.3</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> </dependency> </dependencies> </project> ``` ```java package info.jerrinot.nettyloggingissue; import io.netty.util.internal.logging.InternalLoggerFactory; import org.junit.Test; public class ReproducerTest { @Test public void testNettyLogging() { InternalLoggerFactory.getInstance("foo").debug("bar", new Object()); } } ``` ### Netty version 4.1.27 ### JVM version (e.g. `java -version`) irrelevant ### OS version (e.g. `uname -a`) irrelevant
[ "common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java" ]
[ "common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java" ]
[]
diff --git a/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java b/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java index 36f28083753..5c3593f203a 100644 --- a/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java +++ b/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java @@ -21,14 +21,42 @@ import org.apache.logging.log4j.spi.ExtendedLogger; import org.apache.logging.log4j.spi.ExtendedLoggerWrapper; +import java.security.AccessController; +import java.security.PrivilegedAction; + import static io.netty.util.internal.logging.AbstractInternalLogger.EXCEPTION_MESSAGE; class Log4J2Logger extends ExtendedLoggerWrapper implements InternalLogger { private static final long serialVersionUID = 5485418394879791397L; + private static final boolean VARARGS_ONLY; + + static { + // Older Log4J2 versions have only log methods that takes the format + varargs. So we should not use + // Log4J2 if the version is too old. + // See https://github.com/netty/netty/issues/8217 + VARARGS_ONLY = AccessController.doPrivileged(new PrivilegedAction<Boolean>() { + @Override + public Boolean run() { + try { + Logger.class.getMethod("debug", String.class, Object.class); + return false; + } catch (NoSuchMethodException ignore) { + // Log4J2 version too old. + return true; + } catch (SecurityException ignore) { + // We could not detect the version so we will use Log4J2 if its on the classpath. + return false; + } + } + }); + } Log4J2Logger(Logger logger) { super((ExtendedLogger) logger, logger.getName(), logger.getMessageFactory()); + if (VARARGS_ONLY) { + throw new UnsupportedOperationException("Log4J2 version mismatch"); + } } @Override
null
train
val
"2018-09-01T08:59:08"
"2018-08-23T16:15:14Z"
jerrinot
val
netty/netty/8273_8286
netty/netty
netty/netty/8273
netty/netty/8286
[ "keyword_pr_to_issue" ]
9eb124bb629163f4b22ad223f611ed8b4eaa74e8
2ab3e13f08e8549c6957aa263390ceb36539b999
[ "@normanmaurer, please verify it or somebody else.", "@amizurov merged... thanks" ]
[ "nit: You can remove the `else` here as you return in the if block.", "@amizurov can you add comment that passing null into `contentEqualsIngoreCase` is fine ?", "Thanks, done", "If we passed null to ```contentEqulasIgnoreCase(...)``` this always return false, null safe method." ]
"2018-09-13T12:32:40Z"
[]
Resolve charset from ContentType header for SOAP 1.2 requests
Hello, there is a problem with charset resolution from `ContentType` header for SOAP 1.2 using `io.netty.handler.codec.http.HttpUtils` class. SOAP 1.2 could have action parameter in `ContentType` header and method `getCharsetAsSequence(CharSequence contentTypeValue)` from `io.netty.handler.codec.http.HttpUtils` class just finds charset word and take whole string from this word end to the end. ### Expected behavior Resolve charset from `ContentType` header with value `text/xml; charset=utf-8; action="someaction"` as UTF-8 ### Actual behavior Resolve charset substring from `ContentType` header with value `text/xml; charset=utf-8; action="someaction"` as `utf-8; action="someaction"`. And then you will get an exception `java.nio.charset.IllegalCharsetNameException` ### Steps to reproduce Resolve charset from any request which contains action parameter in `ContentType` header value ### Netty version 4.1.19 ### JVM version (e.g. `java -version`) java version "1.8.0_181" Java(TM) SE Runtime Environment (build 1.8.0_181-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode) ### OS version (e.g. `uname -a`) Linux 4.15.0-33-generic Ubuntu 18
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java index 5b6e90e9abd..94af7901fc4 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpUtil.java @@ -15,18 +15,18 @@ */ package io.netty.handler.codec.http; -import io.netty.util.AsciiString; -import io.netty.util.CharsetUtil; -import io.netty.util.NetUtil; - import java.net.InetSocketAddress; import java.net.URI; -import java.util.ArrayList; import java.nio.charset.Charset; import java.nio.charset.UnsupportedCharsetException; +import java.util.ArrayList; import java.util.Iterator; import java.util.List; +import io.netty.util.AsciiString; +import io.netty.util.CharsetUtil; +import io.netty.util.NetUtil; + /** * Utility methods useful in the HTTP context. */ @@ -60,12 +60,13 @@ public static boolean isAsteriskForm(URI uri) { /** * Returns {@code true} if and only if the connection can remain open and * thus 'kept alive'. This methods respects the value of the. + * * {@code "Connection"} header first and then the return value of * {@link HttpVersion#isKeepAliveDefault()}. */ public static boolean isKeepAlive(HttpMessage message) { CharSequence connection = message.headers().get(HttpHeaderNames.CONNECTION); - if (connection != null && HttpHeaderValues.CLOSE.contentEqualsIgnoreCase(connection)) { + if (HttpHeaderValues.CLOSE.contentEqualsIgnoreCase(connection)) { return false; } @@ -193,6 +194,7 @@ public static long getContentLength(HttpMessage message, long defaultValue) { /** * Get an {@code int} representation of {@link #getContentLength(HttpMessage, long)}. + * * @return the content length or {@code defaultValue} if this message does * not have the {@code "Content-Length"} header or its value is not * a number. Not to exceed the boundaries of integer. 
@@ -313,6 +315,7 @@ public static boolean isTransferEncodingChunked(HttpMessage message) { /** * Set the {@link HttpHeaderNames#TRANSFER_ENCODING} to either include {@link HttpHeaderValues#CHUNKED} if * {@code chunked} is {@code true}, or remove {@link HttpHeaderValues#CHUNKED} if {@code chunked} is {@code false}. + * * @param m The message which contains the headers to modify. * @param chunked if {@code true} then include {@link HttpHeaderValues#CHUNKED} in the headers. otherwise remove * {@link HttpHeaderValues#CHUNKED} from the headers. @@ -371,7 +374,7 @@ public static Charset getCharset(CharSequence contentTypeValue) { /** * Fetch charset from message's Content-Type header. * - * @param message entity to fetch Content-Type header from + * @param message entity to fetch Content-Type header from * @param defaultCharset result to use in case of empty, incorrect or doesn't contain required part header value * @return the charset from message's Content-Type header or {@code defaultCharset} * if charset is not presented or unparsable @@ -389,7 +392,7 @@ public static Charset getCharset(HttpMessage message, Charset defaultCharset) { * Fetch charset from Content-Type header value. 
* * @param contentTypeValue Content-Type header value to parse - * @param defaultCharset result to use in case of empty, incorrect or doesn't contain required part header value + * @param defaultCharset result to use in case of empty, incorrect or doesn't contain required part header value * @return the charset from message's Content-Type header or {@code defaultCharset} * if charset is not presented or unparsable */ @@ -459,13 +462,23 @@ public static CharSequence getCharsetAsSequence(CharSequence contentTypeValue) { if (contentTypeValue == null) { throw new NullPointerException("contentTypeValue"); } + int indexOfCharset = AsciiString.indexOfIgnoreCaseAscii(contentTypeValue, CHARSET_EQUALS, 0); - if (indexOfCharset != AsciiString.INDEX_NOT_FOUND) { - int indexOfEncoding = indexOfCharset + CHARSET_EQUALS.length(); - if (indexOfEncoding < contentTypeValue.length()) { - return contentTypeValue.subSequence(indexOfEncoding, contentTypeValue.length()); + if (indexOfCharset == AsciiString.INDEX_NOT_FOUND) { + return null; + } + + int indexOfEncoding = indexOfCharset + CHARSET_EQUALS.length(); + if (indexOfEncoding < contentTypeValue.length()) { + CharSequence charsetCandidate = contentTypeValue.subSequence(indexOfEncoding, contentTypeValue.length()); + int indexOfSemicolon = AsciiString.indexOfIgnoreCaseAscii(charsetCandidate, SEMICOLON, 0); + if (indexOfSemicolon == AsciiString.INDEX_NOT_FOUND) { + return charsetCandidate; } + + return charsetCandidate.subSequence(0, indexOfSemicolon); } + return null; } @@ -517,6 +530,7 @@ public static CharSequence getMimeType(CharSequence contentTypeValue) { /** * Formats the host string of an address so it can be used for computing an HTTP component * such as an URL or a Host header + * * @param addr the address * @return the formatted String */ @@ -526,7 +540,7 @@ public static String formatHostnameForHttp(InetSocketAddress addr) { if (!addr.isUnresolved()) { hostString = NetUtil.toAddressString(addr.getAddress()); } - return "[" 
+ hostString + "]"; + return '[' + hostString + ']'; } return hostString; }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java index 6ad08e6b4c8..31596067aef 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpUtilTest.java @@ -15,10 +15,6 @@ */ package io.netty.handler.codec.http; -import io.netty.util.CharsetUtil; -import io.netty.util.ReferenceCountUtil; -import org.junit.Test; - import java.net.InetAddress; import java.net.InetSocketAddress; import java.nio.charset.StandardCharsets; @@ -26,12 +22,14 @@ import java.util.Collections; import java.util.List; +import io.netty.util.CharsetUtil; +import io.netty.util.ReferenceCountUtil; +import org.junit.Test; + import static io.netty.handler.codec.http.HttpHeadersTestUtils.of; -import static org.hamcrest.Matchers.hasToString; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -91,6 +89,22 @@ public void testGetCharset() { assertEquals(CharsetUtil.UTF_8, HttpUtil.getCharset(UPPER_CASE_NORMAL_CONTENT_TYPE)); } + @Test + public void testGetCharsetIfNotLastParameter() { + String NORMAL_CONTENT_TYPE_WITH_PARAMETERS = "application/soap-xml; charset=utf-8; " + + "action=\"http://www.soap-service.by/foo/add\""; + + HttpMessage message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST, + "http://localhost:7788/foo"); + message.headers().set(HttpHeaderNames.CONTENT_TYPE, NORMAL_CONTENT_TYPE_WITH_PARAMETERS); + + assertEquals(CharsetUtil.UTF_8, HttpUtil.getCharset(message)); + assertEquals(CharsetUtil.UTF_8, HttpUtil.getCharset(NORMAL_CONTENT_TYPE_WITH_PARAMETERS)); + + assertEquals("utf-8", HttpUtil.getCharsetAsSequence(message)); + assertEquals("utf-8", 
HttpUtil.getCharsetAsSequence(NORMAL_CONTENT_TYPE_WITH_PARAMETERS)); + } + @Test public void testGetCharset_defaultValue() { final String SIMPLE_CONTENT_TYPE = "text/html"; @@ -292,4 +306,15 @@ public void testIpv4Unresolved() { InetSocketAddress socketAddress = InetSocketAddress.createUnresolved("10.0.0.1", 8080); assertEquals("10.0.0.1", HttpUtil.formatHostnameForHttp(socketAddress)); } + + @Test + public void testKeepAliveIfConnectionHeaderAbsent() { + HttpMessage http11Message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, + "http:localhost/http_1_1"); + assertTrue(HttpUtil.isKeepAlive(http11Message)); + + HttpMessage http10Message = new DefaultHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.GET, + "http:localhost/http_1_0"); + assertFalse(HttpUtil.isKeepAlive(http10Message)); + } }
val
val
"2018-09-11T20:34:37"
"2018-09-07T13:43:52Z"
nikita-mishchenko
val
netty/netty/8132_8293
netty/netty
netty/netty/8132
netty/netty/8293
[ "timestamp(timedelta=129791.0, similarity=0.8804280547473731)" ]
04001fdad1ca3c72625cddb1b1c7789381cb5f30
0ddc62cec0b4715ae37cef0e6a9f8c79d42d74e9
[ "I dont think so... if the user requests renegotiation but the protocol does not allow it it should fail (which it does atm)." ]
[ "@carl-mastrangelo @ejona86 @ryanoneill @bryce-anderson @Scottmitch @trustin not sure about this one...", "@carl-mastrangelo @ejona86 @ryanoneill @bryce-anderson @Scottmitch @trustin this needs to be fixed before merging as it may waste memory. ", "Currently checking what to do best... ", "Sounds kinda niche. Maybe we can wait until someone asks for it?", "minProtocolIndex = Math.min(minProtocolIndex, OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3);", "Why not also asume the version string above?", "@carl-mastrangelo I could do this I just wanted to keep it consistent with the rest.", "@carl-mastrangelo because I want to have the test run with all versions and just test for different things. ", "Tls13 or TlsV13?", "Hmm, did we change our coding style?", "not sure :D I can revert this if you want.", "Maybe it's time to adopt https://github.com/google/google-java-format for love and peace? :smile:", "I actually like `isTlsv13Supported` best... are you strong here ?", "Lowercase v is what Google's style guide says, if it matters." ]
"2018-09-17T13:53:04Z"
[]
Do not use TLS renegotiation if TLS >= 1.3
TLS 1.3 disables renegotiation . Need to check and guard against running the renegotiation code in the SSLHandler and other? places. Noticed there's a pull request on Java 11 testing and TLS 1.3 is part of 11 (so might as well test it too).
[ "handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java", "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java", "handler/src/main/java/io/netty/handler/ssl/OpenSsl.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java", "pom.xml" ]
[ "handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java", "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java", "handler/src/main/java/io/netty/handler/ssl/OpenSsl.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java", "handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java", "handler/src/main/java/io/netty/handler/ssl/SslUtils.java", "pom.xml" ]
[ "handler/src/test/java/io/netty/handler/ssl/CipherSuiteCanaryTest.java", "handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java", "handler/src/test/java/io/netty/handler/ssl/ConscryptJdkSslEngineInteropTest.java", "handler/src/test/java/io/netty/handler/ssl/ConscryptSslEngineTest.java", "handler/src/test/java/io/netty/handler/ssl/JdkConscryptSslEngineInteropTest.java", "handler/src/test/java/io/netty/handler/ssl/JdkOpenSslEngineInteroptTest.java", "handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java", "handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java", "handler/src/test/java/io/netty/handler/ssl/OpenSslJdkSslEngineInteroptTest.java", "handler/src/test/java/io/netty/handler/ssl/OpenSslTestUtils.java", "handler/src/test/java/io/netty/handler/ssl/ParameterizedSslHandlerTest.java", "handler/src/test/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngineTest.java", "handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java", "handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java", "handler/src/test/java/io/netty/handler/ssl/SslErrorTest.java", "handler/src/test/java/io/netty/handler/ssl/SslUtilsTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslClientRenegotiateTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslEchoTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslSessionReuseTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java b/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java index 1fac36c1bca..deeeb0cfeb3 100644 --- a/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java +++ b/handler/src/main/java/io/netty/handler/ssl/CipherSuiteConverter.java @@ -122,33 +122,6 @@ static boolean isO2JCached(String key, String protocol, String value) { } } - /** - * Converts the specified Java cipher suites to the colon-separated OpenSSL cipher suite specification. - */ - static String toOpenSsl(Iterable<String> javaCipherSuites) { - final StringBuilder buf = new StringBuilder(); - for (String c: javaCipherSuites) { - if (c == null) { - break; - } - - String converted = toOpenSsl(c); - if (converted != null) { - c = converted; - } - - buf.append(c); - buf.append(':'); - } - - if (buf.length() > 0) { - buf.setLength(buf.length() - 1); - return buf.toString(); - } else { - return ""; - } - } - /** * Converts the specified Java cipher suite to its corresponding OpenSSL cipher suite name. * @@ -423,5 +396,47 @@ private static String toJavaHmacAlgo(String hmacAlgo) { return hmacAlgo; } + /** + * Convert the given ciphers if needed to OpenSSL format and append them to the correct {@link StringBuilder} + * depending on if its a TLSv1.3 cipher or not. If this methods returns without throwing an exception its + * guaranteed that at least one of the {@link StringBuilder}s contain some ciphers that can be used to configure + * OpenSSL. 
+ */ + static void convertToCipherStrings( + Iterable<String> cipherSuites, StringBuilder cipherBuilder, StringBuilder cipherTLSv13Builder) { + for (String c: cipherSuites) { + if (c == null) { + break; + } + + String converted = toOpenSsl(c); + if (converted == null) { + converted = c; + } + + if (!OpenSsl.isCipherSuiteAvailable(converted)) { + throw new IllegalArgumentException("unsupported cipher suite: " + c + '(' + converted + ')'); + } + + if (SslUtils.isTLSv13Cipher(converted)) { + cipherTLSv13Builder.append(converted); + cipherTLSv13Builder.append(':'); + } else { + cipherBuilder.append(converted); + cipherBuilder.append(':'); + } + } + + if (cipherBuilder.length() == 0 && cipherTLSv13Builder.length() == 0) { + throw new IllegalArgumentException("empty cipher suites"); + } + if (cipherBuilder.length() > 0) { + cipherBuilder.setLength(cipherBuilder.length() - 1); + } + if (cipherTLSv13Builder.length() > 0) { + cipherTLSv13Builder.setLength(cipherTLSv13Builder.length() - 1); + } + } + private CipherSuiteConverter() { } } diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java index 2b61391b491..25ed511727c 100644 --- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java @@ -17,6 +17,7 @@ package io.netty.handler.ssl; import io.netty.buffer.ByteBufAllocator; +import io.netty.util.ReferenceCountUtil; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; @@ -26,6 +27,7 @@ import java.security.KeyException; import java.security.KeyStoreException; import java.security.NoSuchAlgorithmException; +import java.security.Provider; import java.security.Security; import java.security.UnrecoverableKeyException; import java.security.cert.CertificateException; @@ -34,6 +36,7 @@ import java.util.Arrays; import java.util.Collections; import java.util.HashSet; 
+import java.util.LinkedHashSet; import java.util.List; import java.util.Set; @@ -58,11 +61,13 @@ public class JdkSslContext extends SslContext { static final String PROTOCOL = "TLS"; private static final String[] DEFAULT_PROTOCOLS; private static final List<String> DEFAULT_CIPHERS; + private static final List<String> DEFAULT_CIPHERS_NON_TLSV13; private static final Set<String> SUPPORTED_CIPHERS; + private static final Set<String> SUPPORTED_CIPHERS_NON_TLSV13; + private static final Provider DEFAULT_PROVIDER; static { SSLContext context; - int i; try { context = SSLContext.getInstance(PROTOCOL); context.init(null, null, null); @@ -70,31 +75,54 @@ public class JdkSslContext extends SslContext { throw new Error("failed to initialize the default SSL context", e); } + DEFAULT_PROVIDER = context.getProvider(); + SSLEngine engine = context.createSSLEngine(); + DEFAULT_PROTOCOLS = defaultProtocols(engine); + + SUPPORTED_CIPHERS = Collections.unmodifiableSet(supportedCiphers(engine)); + DEFAULT_CIPHERS = Collections.unmodifiableList(defaultCiphers(engine, SUPPORTED_CIPHERS)); + + List<String> ciphersNonTLSv13 = new ArrayList<String>(DEFAULT_CIPHERS); + ciphersNonTLSv13.removeAll(Arrays.asList(SslUtils.DEFAULT_TLSV13_CIPHER_SUITES)); + DEFAULT_CIPHERS_NON_TLSV13 = Collections.unmodifiableList(ciphersNonTLSv13); + + Set<String> suppertedCiphersNonTLSv13 = new LinkedHashSet<String>(SUPPORTED_CIPHERS); + suppertedCiphersNonTLSv13.removeAll(Arrays.asList(SslUtils.DEFAULT_TLSV13_CIPHER_SUITES)); + SUPPORTED_CIPHERS_NON_TLSV13 = Collections.unmodifiableSet(suppertedCiphersNonTLSv13); + + if (logger.isDebugEnabled()) { + logger.debug("Default protocols (JDK): {} ", Arrays.asList(DEFAULT_PROTOCOLS)); + logger.debug("Default cipher suites (JDK): {}", DEFAULT_CIPHERS); + } + } + private static String[] defaultProtocols(SSLEngine engine) { // Choose the sensible default list of protocols. 
final String[] supportedProtocols = engine.getSupportedProtocols(); Set<String> supportedProtocolsSet = new HashSet<String>(supportedProtocols.length); - for (i = 0; i < supportedProtocols.length; ++i) { + for (int i = 0; i < supportedProtocols.length; ++i) { supportedProtocolsSet.add(supportedProtocols[i]); } List<String> protocols = new ArrayList<String>(); addIfSupported( supportedProtocolsSet, protocols, - "TLSv1.2", "TLSv1.1", "TLSv1"); + // Do not include TLSv1.3 for now by default. + SslUtils.PROTOCOL_TLS_V1_2, SslUtils.PROTOCOL_TLS_V1_1, SslUtils.PROTOCOL_TLS_V1); if (!protocols.isEmpty()) { - DEFAULT_PROTOCOLS = protocols.toArray(new String[0]); - } else { - DEFAULT_PROTOCOLS = engine.getEnabledProtocols(); + return protocols.toArray(new String[0]); } + return engine.getEnabledProtocols(); + } + private static Set<String> supportedCiphers(SSLEngine engine) { // Choose the sensible default list of cipher suites. final String[] supportedCiphers = engine.getSupportedCipherSuites(); - SUPPORTED_CIPHERS = new HashSet<String>(supportedCiphers.length); - for (i = 0; i < supportedCiphers.length; ++i) { + Set<String> supportedCiphersSet = new LinkedHashSet<String>(supportedCiphers.length); + for (int i = 0; i < supportedCiphers.length; ++i) { String supportedCipher = supportedCiphers[i]; - SUPPORTED_CIPHERS.add(supportedCipher); + supportedCiphersSet.add(supportedCipher); // IBM's J9 JVM utilizes a custom naming scheme for ciphers and only returns ciphers with the "SSL_" // prefix instead of the "TLS_" prefix (as defined in the JSSE cipher suite names [1]). According to IBM's // documentation [2] the "SSL_" prefix is "interchangeable" with the "TLS_" prefix. 
@@ -108,21 +136,29 @@ public class JdkSslContext extends SslContext { final String tlsPrefixedCipherName = "TLS_" + supportedCipher.substring("SSL_".length()); try { engine.setEnabledCipherSuites(new String[]{tlsPrefixedCipherName}); - SUPPORTED_CIPHERS.add(tlsPrefixedCipherName); + supportedCiphersSet.add(tlsPrefixedCipherName); } catch (IllegalArgumentException ignored) { // The cipher is not supported ... move on to the next cipher. } } } + return supportedCiphersSet; + } + + private static List<String> defaultCiphers(SSLEngine engine, Set<String> supportedCiphers) { List<String> ciphers = new ArrayList<String>(); - addIfSupported(SUPPORTED_CIPHERS, ciphers, DEFAULT_CIPHER_SUITES); + addIfSupported(supportedCiphers, ciphers, DEFAULT_CIPHER_SUITES); useFallbackCiphersIfDefaultIsEmpty(ciphers, engine.getEnabledCipherSuites()); - DEFAULT_CIPHERS = Collections.unmodifiableList(ciphers); + return ciphers; + } - if (logger.isDebugEnabled()) { - logger.debug("Default protocols (JDK): {} ", Arrays.asList(DEFAULT_PROTOCOLS)); - logger.debug("Default cipher suites (JDK): {}", DEFAULT_CIPHERS); + private static boolean isTlsV13Supported(String[] protocols) { + for (String protocol: protocols) { + if (SslUtils.PROTOCOL_TLS_V1_3.equals(protocol)) { + return true; + } } + return false; } private final String[] protocols; @@ -169,11 +205,49 @@ public JdkSslContext(SSLContext sslContext, boolean isClient, Iterable<String> c super(startTls); this.apn = checkNotNull(apn, "apn"); this.clientAuth = checkNotNull(clientAuth, "clientAuth"); + this.sslContext = checkNotNull(sslContext, "sslContext"); + + final List<String> defaultCiphers; + final Set<String> supportedCiphers; + if (DEFAULT_PROVIDER.equals(sslContext.getProvider())) { + this.protocols = protocols == null? 
DEFAULT_PROTOCOLS : protocols; + if (isTlsV13Supported(this.protocols)) { + supportedCiphers = SUPPORTED_CIPHERS; + defaultCiphers = DEFAULT_CIPHERS; + } else { + // TLSv1.3 is not supported, ensure we do not include any TLSv1.3 ciphersuite. + supportedCiphers = SUPPORTED_CIPHERS_NON_TLSV13; + defaultCiphers = DEFAULT_CIPHERS_NON_TLSV13; + } + } else { + // This is a different Provider than the one used by the JDK by default so we can not just assume + // the same protocols and ciphers are supported. For example even if Java11+ is used Conscrypt will + // not support TLSv1.3 and the TLSv1.3 ciphersuites. + SSLEngine engine = sslContext.createSSLEngine(); + try { + if (protocols == null) { + this.protocols = defaultProtocols(engine); + } else { + this.protocols = protocols; + } + supportedCiphers = supportedCiphers(engine); + defaultCiphers = defaultCiphers(engine, supportedCiphers); + if (!isTlsV13Supported(this.protocols)) { + // TLSv1.3 is not supported, ensure we do not include any TLSv1.3 ciphersuite. + for (String cipher: SslUtils.DEFAULT_TLSV13_CIPHER_SUITES) { + supportedCiphers.remove(cipher); + defaultCiphers.remove(cipher); + } + } + } finally { + ReferenceCountUtil.release(engine); + } + } + cipherSuites = checkNotNull(cipherFilter, "cipherFilter").filterCipherSuites( - ciphers, DEFAULT_CIPHERS, SUPPORTED_CIPHERS); - this.protocols = protocols == null ?
DEFAULT_PROTOCOLS : protocols; + ciphers, defaultCiphers, supportedCiphers); + unmodifiableCipherSuites = Collections.unmodifiableList(Arrays.asList(cipherSuites)); - this.sslContext = checkNotNull(sslContext, "sslContext"); this.isClient = isClient; } diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java index e614fccdcd3..955404b25e1 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java @@ -31,6 +31,7 @@ import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import javax.net.ssl.SSLException; import java.security.AccessController; import java.security.PrivilegedAction; import java.util.ArrayList; @@ -48,6 +49,7 @@ import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1; import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_1; import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_2; +import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_3; /** * Tells if <a href="http://netty.io/wiki/forked-tomcat-native.html">{@code netty-tcnative}</a> and its OpenSSL support @@ -66,6 +68,12 @@ public final class OpenSsl { private static final boolean SUPPORTS_HOSTNAME_VALIDATION; private static final boolean USE_KEYMANAGER_FACTORY; private static final boolean SUPPORTS_OCSP; + private static final String TLSV13_CIPHERS = "TLS_AES_256_GCM_SHA384" + ':' + + "TLS_CHACHA20_POLY1305_SHA256" + ':' + + "TLS_AES_128_GCM_SHA256" + ':' + + "TLS_AES_128_CCM_8_SHA256" + ':' + + "TLS_AES_128_CCM_SHA256"; + private static final boolean TLSV13_SUPPORTED; static final Set<String> SUPPORTED_PROTOCOLS_SET; @@ -139,17 +147,30 @@ public final class OpenSsl { boolean supportsKeyManagerFactory = false; boolean useKeyManagerFactory = false; boolean supportsHostNameValidation = false; + boolean tlsv13Supported = false; + try { final long sslCtx = 
SSLContext.make(SSL.SSL_PROTOCOL_ALL, SSL.SSL_MODE_SERVER); long certBio = 0; SelfSignedCertificate cert = null; try { - SSLContext.setCipherSuite(sslCtx, "ALL"); + if (PlatformDependent.javaVersion() >= 11) { + try { + SSLContext.setCipherSuite(sslCtx, TLSV13_CIPHERS, true); + tlsv13Supported = true; + } catch (Exception ignore) { + tlsv13Supported = false; + } + } + SSLContext.setCipherSuite(sslCtx, "ALL", false); + final long ssl = SSL.newSSL(sslCtx, true); try { for (String c: SSL.getCiphers(ssl)) { // Filter out bad input. - if (c == null || c.isEmpty() || availableOpenSslCipherSuites.contains(c)) { + if (c == null || c.isEmpty() || availableOpenSslCipherSuites.contains(c) || + // Filter out TLSv1.3 ciphers if not supported. + !tlsv13Supported && SslUtils.isTLSv13Cipher(c)) { continue; } availableOpenSslCipherSuites.add(c); @@ -200,8 +221,13 @@ public Boolean run() { AVAILABLE_OPENSSL_CIPHER_SUITES.size() * 2); for (String cipher: AVAILABLE_OPENSSL_CIPHER_SUITES) { // Included converted but also openssl cipher name - availableJavaCipherSuites.add(CipherSuiteConverter.toJava(cipher, "TLS")); - availableJavaCipherSuites.add(CipherSuiteConverter.toJava(cipher, "SSL")); + if (!SslUtils.isTLSv13Cipher(cipher)) { + availableJavaCipherSuites.add(CipherSuiteConverter.toJava(cipher, "TLS")); + availableJavaCipherSuites.add(CipherSuiteConverter.toJava(cipher, "SSL")); + } else { + // TLSv1.3 ciphers have the correct format. + availableJavaCipherSuites.add(cipher); + } } addIfSupported(availableJavaCipherSuites, defaultCiphers, DEFAULT_CIPHER_SUITES); @@ -239,6 +265,18 @@ public Boolean run() { protocols.add(PROTOCOL_TLS_V1_2); } + // This is only supported by java11 and later. 
+ if (tlsv13Supported && doesSupportProtocol(SSL.SSL_PROTOCOL_TLSV1_3, SSL.SSL_OP_NO_TLSv1_3) + && PlatformDependent.javaVersion() >= 11) { + // We can only support TLS1.3 when using Java 11 or higher as otherwise it will fail to create the + // internal instance of an sun.security.ssl.ProtocolVersion as can not parse the version string :/ + // See http://mail.openjdk.java.net/pipermail/security-dev/2018-September/018242.html + protocols.add(PROTOCOL_TLS_V1_3); + TLSV13_SUPPORTED = true; + } else { + TLSV13_SUPPORTED = false; + } + SUPPORTED_PROTOCOLS_SET = Collections.unmodifiableSet(protocols); SUPPORTS_OCSP = doesSupportOcsp(); @@ -256,6 +294,7 @@ public Boolean run() { USE_KEYMANAGER_FACTORY = false; SUPPORTED_PROTOCOLS_SET = Collections.emptySet(); SUPPORTS_OCSP = false; + TLSV13_SUPPORTED = false; } } @@ -450,4 +489,8 @@ static void releaseIfNeeded(ReferenceCounted counted) { ReferenceCountUtil.safeRelease(counted); } } + + static boolean isTlsv13Supported() { + return TLSV13_SUPPORTED; + } } diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java index 9e524a7a008..4972905baaf 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslClientContext.java @@ -24,8 +24,11 @@ import java.security.KeyStore; import java.security.PrivateKey; import java.security.cert.X509Certificate; + +import java.util.Arrays; import java.util.Collections; import java.util.HashSet; +import java.util.LinkedHashSet; import java.util.Set; import javax.net.ssl.KeyManagerFactory; @@ -47,6 +50,12 @@ public final class ReferenceCountedOpenSslClientContext extends ReferenceCountedOpenSslContext { private static final InternalLogger logger = InternalLoggerFactory.getInstance(ReferenceCountedOpenSslClientContext.class); + private static final Set<String> 
SUPPORTED_KEY_TYPES = Collections.unmodifiableSet(new LinkedHashSet<String>( + Arrays.asList(OpenSslKeyMaterialManager.KEY_TYPE_RSA, + OpenSslKeyMaterialManager.KEY_TYPE_DH_RSA, + OpenSslKeyMaterialManager.KEY_TYPE_EC, + OpenSslKeyMaterialManager.KEY_TYPE_EC_RSA, + OpenSslKeyMaterialManager.KEY_TYPE_EC_EC))); private final OpenSslSessionContext sessionContext; ReferenceCountedOpenSslClientContext(X509Certificate[] trustCertCollection, TrustManagerFactory trustManagerFactory, @@ -277,7 +286,8 @@ public void handle(long ssl, byte[] keyTypeBytes, byte[][] asn1DerEncodedPrincip */ private static Set<String> supportedClientKeyTypes(byte[] clientCertificateTypes) { if (clientCertificateTypes == null) { - return Collections.emptySet(); + // Try all of the supported key types. + return SUPPORTED_KEY_TYPES; } Set<String> result = new HashSet<String>(clientCertificateTypes.length); for (byte keyTypeCode : clientCertificateTypes) { diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java index 714f473f7a7..6f471b361a4 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslContext.java @@ -225,26 +225,71 @@ public String run() { boolean success = false; try { try { - int opts = SSL.SSL_PROTOCOL_SSLV3 | SSL.SSL_PROTOCOL_TLSV1 | - SSL.SSL_PROTOCOL_TLSV1_1 | SSL.SSL_PROTOCOL_TLSV1_2; - ctx = SSLContext.make(opts, mode); + int protocolOpts = SSL.SSL_PROTOCOL_SSLV3 | SSL.SSL_PROTOCOL_TLSV1 | + SSL.SSL_PROTOCOL_TLSV1_1 | SSL.SSL_PROTOCOL_TLSV1_2; + if (OpenSsl.isTlsv13Supported()) { + protocolOpts |= SSL.SSL_PROTOCOL_TLSV1_3; + } + ctx = SSLContext.make(protocolOpts, mode); } catch (Exception e) { throw new SSLException("failed to create an SSL_CTX", e); } - SSLContext.setOptions(ctx, SSLContext.getOptions(ctx) | - SSL.SSL_OP_NO_SSLv2 | - SSL.SSL_OP_NO_SSLv3 | - 
SSL.SSL_OP_CIPHER_SERVER_PREFERENCE | + boolean tlsv13Supported = OpenSsl.isTlsv13Supported(); + StringBuilder cipherBuilder = new StringBuilder(); + StringBuilder cipherTLSv13Builder = new StringBuilder(); + + /* List the ciphers that are permitted to negotiate. */ + try { + if (unmodifiableCiphers.isEmpty()) { + // Set non TLSv1.3 ciphers. + SSLContext.setCipherSuite(ctx, StringUtil.EMPTY_STRING, false); + if (tlsv13Supported) { + // Set TLSv1.3 ciphers. + SSLContext.setCipherSuite(ctx, StringUtil.EMPTY_STRING, true); + } + } else { + CipherSuiteConverter.convertToCipherStrings( + unmodifiableCiphers, cipherBuilder, cipherTLSv13Builder); + + // Set non TLSv1.3 ciphers. + SSLContext.setCipherSuite(ctx, cipherBuilder.toString(), false); + if (tlsv13Supported) { + // Set TLSv1.3 ciphers. + SSLContext.setCipherSuite(ctx, cipherTLSv13Builder.toString(), true); + } + } + } catch (SSLException e) { + throw e; + } catch (Exception e) { + throw new SSLException("failed to set cipher suite: " + unmodifiableCiphers, e); + } + + int options = SSLContext.getOptions(ctx) | + SSL.SSL_OP_NO_SSLv2 | + SSL.SSL_OP_NO_SSLv3 | + // Disable TLSv1.3 by default for now. Even if TLSv1.3 is not supported this will + // work fine as in this case SSL_OP_NO_TLSv1_3 will be 0. + SSL.SSL_OP_NO_TLSv1_3 | - // We do not support compression at the moment so we should explicitly disable it. - SSL.SSL_OP_NO_COMPRESSION | + SSL.SSL_OP_CIPHER_SERVER_PREFERENCE | - // Disable ticket support by default to be more inline with SSLEngineImpl of the JDK. - // This also let SSLSession.getId() work the same way for the JDK implementation and the - // OpenSSLEngine. If tickets are supported SSLSession.getId() will only return an ID on the - // server-side if it could make use of tickets. - SSL.SSL_OP_NO_TICKET); + // We do not support compression at the moment so we should explicitly disable it. 
+ SSL.SSL_OP_NO_COMPRESSION | + + // Disable ticket support by default to be more in line with SSLEngineImpl of the JDK. + // This also lets SSLSession.getId() work the same way for the JDK implementation and the + // OpenSSLEngine. If tickets are supported SSLSession.getId() will only return an ID on the + // server-side if it could make use of tickets. + SSL.SSL_OP_NO_TICKET; + + if (cipherBuilder.length() == 0) { + // No ciphers that are compatible with SSLv2 / SSLv3 / TLSv1 / TLSv1.1 / TLSv1.2 + options |= SSL.SSL_OP_NO_SSLv2 | SSL.SSL_OP_NO_SSLv3 | SSL.SSL_OP_NO_TLSv1 + | SSL.SSL_OP_NO_TLSv1_1 | SSL.SSL_OP_NO_TLSv1_2; + } + + SSLContext.setOptions(ctx, options); // We need to enable SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER as the memory address may change between // calling OpenSSLEngine.wrap(...). @@ -255,15 +300,6 @@ public String run() { SSLContext.setTmpDHLength(ctx, DH_KEY_LENGTH); } - /* List the ciphers that are permitted to negotiate. */ - try { - SSLContext.setCipherSuite(ctx, CipherSuiteConverter.toOpenSsl(unmodifiableCiphers)); - } catch (SSLException e) { - throw e; - } catch (Exception e) { - throw new SSLException("failed to set cipher suite: " + unmodifiableCiphers, e); - } - List<String> nextProtoList = apn.protocols(); /* Set next protocols for next protocol negotiation extension, if specified */ if (!nextProtoList.isEmpty()) { diff --git a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java index 49851c93b5a..4ddcc2aefa2 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java +++ b/handler/src/main/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngine.java @@ -66,9 +66,8 @@ import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1; import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_1; import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_2; +import static
io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_3; import static io.netty.handler.ssl.SslUtils.SSL_RECORD_HEADER_LENGTH; -import static io.netty.internal.tcnative.SSL.SSL_MAX_PLAINTEXT_LENGTH; -import static io.netty.internal.tcnative.SSL.SSL_MAX_RECORD_LENGTH; import static io.netty.util.internal.EmptyArrays.EMPTY_CERTIFICATES; import static io.netty.util.internal.EmptyArrays.EMPTY_JAVAX_X509_CERTIFICATES; import static io.netty.util.internal.ObjectUtil.checkNotNull; @@ -109,12 +108,14 @@ public class ReferenceCountedOpenSslEngine extends SSLEngine implements Referenc private static final int OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1 = 2; private static final int OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_1 = 3; private static final int OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_2 = 4; + private static final int OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3 = 5; private static final int[] OPENSSL_OP_NO_PROTOCOLS = { SSL.SSL_OP_NO_SSLv2, SSL.SSL_OP_NO_SSLv3, SSL.SSL_OP_NO_TLSv1, SSL.SSL_OP_NO_TLSv1_1, - SSL.SSL_OP_NO_TLSv1_2 + SSL.SSL_OP_NO_TLSv1_2, + SSL.SSL_OP_NO_TLSv1_3 }; /** * <a href="https://www.openssl.org/docs/man1.0.2/crypto/X509_check_host.html">The flags argument is usually 0</a>. @@ -124,11 +125,11 @@ public class ReferenceCountedOpenSslEngine extends SSLEngine implements Referenc /** * Depends upon tcnative ... only use if tcnative is available! */ - static final int MAX_PLAINTEXT_LENGTH = SSL_MAX_PLAINTEXT_LENGTH; + static final int MAX_PLAINTEXT_LENGTH = SSL.SSL_MAX_PLAINTEXT_LENGTH; /** * Depends upon tcnative ... only use if tcnative is available! 
*/ - private static final int MAX_RECORD_SIZE = SSL_MAX_RECORD_LENGTH; + private static final int MAX_RECORD_SIZE = SSL.SSL_MAX_RECORD_LENGTH; private static final AtomicIntegerFieldUpdater<ReferenceCountedOpenSslEngine> DESTROYED_UPDATER = AtomicIntegerFieldUpdater.newUpdater(ReferenceCountedOpenSslEngine.class, "destroyed"); @@ -1206,7 +1207,10 @@ private void rejectRemoteInitiatedRenegotiation() throws SSLHandshakeException { // As rejectRemoteInitiatedRenegotiation() is called in a finally block we also need to check if we shutdown // the engine before as otherwise SSL.getHandshakeCount(ssl) will throw an NPE if the passed in ssl is 0. // See https://github.com/netty/netty/issues/7353 - if (!isDestroyed() && SSL.getHandshakeCount(ssl) > 1) { + if (!isDestroyed() && SSL.getHandshakeCount(ssl) > 1 && + // As we may count multiple handshakes when TLSv1.3 is used we should just ignore this here as + // renegotiation is not supported in TLSv1.3 as per spec. + !SslUtils.PROTOCOL_TLS_V1_3.equals(session.getProtocol()) && handshakeState == HandshakeState.FINISHED) { // TODO: In future versions me may also want to send a fatal_alert to the client and so notify it // that the renegotiation failed. shutdown(); @@ -1379,15 +1383,18 @@ public final String[] getEnabledCipherSuites() { if (enabled == null) { return EmptyArrays.EMPTY_STRINGS; } else { + List<String> enabledList = new ArrayList<String>(); synchronized (this) { for (int i = 0; i < enabled.length; i++) { String mapped = toJavaCipherSuite(enabled[i]); - if (mapped != null) { - enabled[i] = mapped; + final String cipher = mapped == null ? 
enabled[i] : mapped; + if (!OpenSsl.isTlsv13Supported() && SslUtils.isTLSv13Cipher(cipher)) { + continue; } + enabledList.add(cipher); } } - return enabled; + return enabledList.toArray(new String[0]); } } @@ -1396,35 +1403,28 @@ public final void setEnabledCipherSuites(String[] cipherSuites) { checkNotNull(cipherSuites, "cipherSuites"); final StringBuilder buf = new StringBuilder(); - for (String c: cipherSuites) { - if (c == null) { - break; - } - - String converted = CipherSuiteConverter.toOpenSsl(c); - if (converted == null) { - converted = c; - } - - if (!OpenSsl.isCipherSuiteAvailable(converted)) { - throw new IllegalArgumentException("unsupported cipher suite: " + c + '(' + converted + ')'); - } - - buf.append(converted); - buf.append(':'); - } - - if (buf.length() == 0) { - throw new IllegalArgumentException("empty cipher suites"); - } - buf.setLength(buf.length() - 1); + final StringBuilder bufTLSv13 = new StringBuilder(); + CipherSuiteConverter.convertToCipherStrings(Arrays.asList(cipherSuites), buf, bufTLSv13); final String cipherSuiteSpec = buf.toString(); + final String cipherSuiteSpecTLSv13 = bufTLSv13.toString(); + if (!OpenSsl.isTlsv13Supported() && !cipherSuiteSpecTLSv13.isEmpty()) { + throw new IllegalArgumentException("TLSv1.3 is not supported by this java version."); + } synchronized (this) { if (!isDestroyed()) { + // TODO: Should we also adjust the protocols based on whether there are any ciphers left that can be used + // for TLSv1.3 or for prior SSL/TLS versions? try { - SSL.setCipherSuites(ssl, cipherSuiteSpec); + // Set non TLSv1.3 ciphers. + SSL.setCipherSuites(ssl, cipherSuiteSpec, false); + + if (OpenSsl.isTlsv13Supported()) { + // Set TLSv1.3 ciphers.
+ SSL.setCipherSuites(ssl, cipherSuiteSpecTLSv13, true); + } + } catch (Exception e) { throw new IllegalStateException("failed to enable cipher suites: " + cipherSuiteSpec, e); } @@ -1462,6 +1462,9 @@ public final String[] getEnabledProtocols() { if (isProtocolEnabled(opts, SSL.SSL_OP_NO_TLSv1_2, PROTOCOL_TLS_V1_2)) { enabled.add(PROTOCOL_TLS_V1_2); } + if (isProtocolEnabled(opts, SSL.SSL_OP_NO_TLSv1_3, PROTOCOL_TLS_V1_3)) { + enabled.add(PROTOCOL_TLS_V1_3); + } if (isProtocolEnabled(opts, SSL.SSL_OP_NO_SSLv2, PROTOCOL_SSL_V2)) { enabled.add(PROTOCOL_SSL_V2); } @@ -1533,13 +1536,20 @@ public final void setEnabledProtocols(String[] protocols) { if (maxProtocolIndex < OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_2) { maxProtocolIndex = OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_2; } + } else if (p.equals(PROTOCOL_TLS_V1_3)) { + if (minProtocolIndex > OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3) { + minProtocolIndex = OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3; + } + if (maxProtocolIndex < OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3) { + maxProtocolIndex = OPENSSL_OP_NO_PROTOCOL_INDEX_TLSv1_3; + } } } synchronized (this) { if (!isDestroyed()) { // Clear out options which disable protocols SSL.clearOptions(ssl, SSL.SSL_OP_NO_SSLv2 | SSL.SSL_OP_NO_SSLv3 | SSL.SSL_OP_NO_TLSv1 | - SSL.SSL_OP_NO_TLSv1_1 | SSL.SSL_OP_NO_TLSv1_2); + SSL.SSL_OP_NO_TLSv1_1 | SSL.SSL_OP_NO_TLSv1_2 | SSL.SSL_OP_NO_TLSv1_3); int opts = 0; for (int i = 0; i < minProtocolIndex; ++i) { diff --git a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java index 414c9d1d18f..f1f000fa8fd 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslUtils.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslUtils.java @@ -22,9 +22,14 @@ import io.netty.handler.codec.base64.Base64; import io.netty.handler.codec.base64.Base64Dialect; import io.netty.util.NetUtil; +import io.netty.util.internal.EmptyArrays; +import io.netty.util.internal.PlatformDependent; import java.nio.ByteBuffer; 
import java.nio.ByteOrder; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; import java.util.List; import java.util.Set; @@ -36,7 +41,11 @@ * Constants for SSL packets. */ final class SslUtils { - + // See https://tools.ietf.org/html/rfc8446#appendix-B.4 + private static final Set<String> TLSV13_CIPHERS = Collections.unmodifiableSet(new HashSet<String>( + asList("TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256", + "TLS_AES_128_GCM_SHA256", "TLS_AES_128_CCM_8_SHA256", + "TLS_AES_128_CCM_SHA256"))); // Protocols static final String PROTOCOL_SSL_V2_HELLO = "SSLv2Hello"; static final String PROTOCOL_SSL_V2 = "SSLv2"; @@ -44,6 +53,7 @@ final class SslUtils { static final String PROTOCOL_TLS_V1 = "TLSv1"; static final String PROTOCOL_TLS_V1_1 = "TLSv1.1"; static final String PROTOCOL_TLS_V1_2 = "TLSv1.2"; + static final String PROTOCOL_TLS_V1_3 = "TLSv1.3"; /** * change cipher spec @@ -85,20 +95,36 @@ final class SslUtils { */ static final int NOT_ENCRYPTED = -2; - static final String[] DEFAULT_CIPHER_SUITES = { + static final String[] DEFAULT_CIPHER_SUITES; + static final String[] DEFAULT_TLSV13_CIPHER_SUITES; + + static { + if (PlatformDependent.javaVersion() >= 11) { + DEFAULT_TLSV13_CIPHER_SUITES = new String[] { "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384" }; + } else { + DEFAULT_TLSV13_CIPHER_SUITES = EmptyArrays.EMPTY_STRINGS; + } + + List<String> defaultCiphers = new ArrayList<String>(); // GCM (Galois/Counter Mode) requires JDK 8. 
- "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", + defaultCiphers.add("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"); + defaultCiphers.add("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"); + defaultCiphers.add("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"); + defaultCiphers.add("TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA"); // AES256 requires JCE unlimited strength jurisdiction policy files. - "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", + defaultCiphers.add("TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"); // GCM (Galois/Counter Mode) requires JDK 8. - "TLS_RSA_WITH_AES_128_GCM_SHA256", - "TLS_RSA_WITH_AES_128_CBC_SHA", + defaultCiphers.add("TLS_RSA_WITH_AES_128_GCM_SHA256"); + defaultCiphers.add("TLS_RSA_WITH_AES_128_CBC_SHA"); // AES256 requires JCE unlimited strength jurisdiction policy files. - "TLS_RSA_WITH_AES_256_CBC_SHA" - }; + defaultCiphers.add("TLS_RSA_WITH_AES_256_CBC_SHA"); + + for (String tlsv13Cipher: DEFAULT_TLSV13_CIPHER_SUITES) { + defaultCiphers.add(tlsv13Cipher); + } + + DEFAULT_CIPHER_SUITES = defaultCiphers.toArray(new String[0]); + } /** * Add elements from {@code names} into {@code enabled} if they are in {@code supported}. @@ -361,6 +387,14 @@ static boolean isValidHostNameForSNI(String hostname) { !NetUtil.isValidIpV6Address(hostname); } + /** + * Returns {@code true} if the the given cipher (in openssl format) is for TLSv1.3, {@code false} otherwise. + */ + static boolean isTLSv13Cipher(String cipher) { + // See https://tools.ietf.org/html/rfc8446#appendix-B.4 + return TLSV13_CIPHERS.contains(cipher); + } + private SslUtils() { } } diff --git a/pom.xml b/pom.xml index 6c42c7977b0..9f7a80b7591 100644 --- a/pom.xml +++ b/pom.xml @@ -241,7 +241,7 @@ <!-- Fedora-"like" systems. 
This is currently only used for the netty-tcnative dependency --> <os.detection.classifierWithLikes>fedora</os.detection.classifierWithLikes> <tcnative.artifactId>netty-tcnative</tcnative.artifactId> - <tcnative.version>2.0.17.Final</tcnative.version> + <tcnative.version>2.0.18.Final</tcnative.version> <tcnative.classifier>${os.detected.classifier}</tcnative.classifier> <conscrypt.groupId>org.conscrypt</conscrypt.groupId> <conscrypt.artifactId>conscrypt-openjdk-uber</conscrypt.artifactId>
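The main patch above repeatedly filters TLSv1.3 cipher suites by name against the fixed list from RFC 8446, Appendix B.4 (the `SslUtils.isTLSv13Cipher` helper it adds). A minimal standalone sketch of that check, using only the JDK — the class name `Tlsv13CipherCheck` is illustrative and not part of Netty:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class Tlsv13CipherCheck {
    // The five TLSv1.3 cipher suite names defined in RFC 8446, Appendix B.4.
    private static final Set<String> TLSV13_CIPHERS = Collections.unmodifiableSet(new HashSet<String>(
            Arrays.asList("TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256",
                    "TLS_AES_128_GCM_SHA256", "TLS_AES_128_CCM_8_SHA256",
                    "TLS_AES_128_CCM_SHA256")));

    // Mirrors the set-membership check the patch adds as SslUtils.isTLSv13Cipher(...).
    static boolean isTLSv13Cipher(String cipher) {
        return TLSV13_CIPHERS.contains(cipher);
    }

    public static void main(String[] args) {
        System.out.println(isTLSv13Cipher("TLS_AES_128_GCM_SHA256"));            // true
        System.out.println(isTLSv13Cipher("TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA")); // false
    }
}
```

A plain set lookup suffices here because, unlike older suites, TLSv1.3 suite names are identical in OpenSSL and JSSE (as the patch notes, "TLSv1.3 ciphers have the correct format"), so no `SSL_`/`TLS_` prefix conversion is needed.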
diff --git a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteCanaryTest.java b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteCanaryTest.java index 22e5f43bf88..9c394ccf637 100644 --- a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteCanaryTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteCanaryTest.java @@ -125,12 +125,16 @@ public void testHandshake() throws Exception { final SslContext sslServerContext = SslContextBuilder.forServer(CERT.certificate(), CERT.privateKey()) .sslProvider(serverSslProvider) .ciphers(ciphers) + // As this is not a TLSv1.3 cipher we should ensure we talk something else. + .protocols(SslUtils.PROTOCOL_TLS_V1_2) .build(); try { final SslContext sslClientContext = SslContextBuilder.forClient() .sslProvider(clientSslProvider) .ciphers(ciphers) + // As this is not a TLSv1.3 cipher we should ensure we talk something else. + .protocols(SslUtils.PROTOCOL_TLS_V1_2) .trustManager(InsecureTrustManagerFactory.INSTANCE) .build(); diff --git a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java index ffe53d2ba8b..f70da234c70 100644 --- a/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/CipherSuiteConverterTest.java @@ -22,8 +22,7 @@ import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.sameInstance; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertThat; +import static org.junit.Assert.*; public class CipherSuiteConverterTest { diff --git a/handler/src/test/java/io/netty/handler/ssl/ConscryptJdkSslEngineInteropTest.java b/handler/src/test/java/io/netty/handler/ssl/ConscryptJdkSslEngineInteropTest.java index 870f71ef364..d666535b194 100644 --- a/handler/src/test/java/io/netty/handler/ssl/ConscryptJdkSslEngineInteropTest.java +++ 
b/handler/src/test/java/io/netty/handler/ssl/ConscryptJdkSslEngineInteropTest.java @@ -31,17 +31,17 @@ @RunWith(Parameterized.class) public class ConscryptJdkSslEngineInteropTest extends SSLEngineTest { - @Parameterized.Parameters(name = "{index}: bufferType = {0}") - public static Collection<Object> data() { - List<Object> params = new ArrayList<Object>(); + @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}") + public static Collection<Object[]> data() { + List<Object[]> params = new ArrayList<Object[]>(); for (BufferType type: BufferType.values()) { - params.add(type); + params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()}); } return params; } - public ConscryptJdkSslEngineInteropTest(BufferType type) { - super(type); + public ConscryptJdkSslEngineInteropTest(BufferType type, ProtocolCipherCombo combo) { + super(type, combo); } @BeforeClass diff --git a/handler/src/test/java/io/netty/handler/ssl/ConscryptSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/ConscryptSslEngineTest.java index e57fd58be0f..8c6121b6fb5 100644 --- a/handler/src/test/java/io/netty/handler/ssl/ConscryptSslEngineTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/ConscryptSslEngineTest.java @@ -30,17 +30,17 @@ @RunWith(Parameterized.class) public class ConscryptSslEngineTest extends SSLEngineTest { - @Parameterized.Parameters(name = "{index}: bufferType = {0}") - public static Collection<Object> data() { - List<Object> params = new ArrayList<Object>(); + @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}") + public static Collection<Object[]> data() { + List<Object[]> params = new ArrayList<Object[]>(); for (BufferType type: BufferType.values()) { - params.add(type); + params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()}); } return params; } - public ConscryptSslEngineTest(BufferType type) { - super(type); + public ConscryptSslEngineTest(BufferType type, ProtocolCipherCombo combo) { + super(type, combo); } 
@BeforeClass diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkConscryptSslEngineInteropTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkConscryptSslEngineInteropTest.java index 6d8862a0f23..309490af592 100644 --- a/handler/src/test/java/io/netty/handler/ssl/JdkConscryptSslEngineInteropTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/JdkConscryptSslEngineInteropTest.java @@ -16,6 +16,7 @@ package io.netty.handler.ssl; import java.security.Provider; + import org.junit.BeforeClass; import org.junit.Ignore; import org.junit.Test; @@ -31,17 +32,17 @@ @RunWith(Parameterized.class) public class JdkConscryptSslEngineInteropTest extends SSLEngineTest { - @Parameterized.Parameters(name = "{index}: bufferType = {0}") - public static Collection<Object> data() { - List<Object> params = new ArrayList<Object>(); + @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}") + public static Collection<Object[]> data() { + List<Object[]> params = new ArrayList<Object[]>(); for (BufferType type: BufferType.values()) { - params.add(type); + params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()}); } return params; } - public JdkConscryptSslEngineInteropTest(BufferType type) { - super(type); + public JdkConscryptSslEngineInteropTest(BufferType type, ProtocolCipherCombo combo) { + super(type, combo); } @BeforeClass diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkOpenSslEngineInteroptTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkOpenSslEngineInteroptTest.java index 0eed0b3087d..a85b665ff2c 100644 --- a/handler/src/test/java/io/netty/handler/ssl/JdkOpenSslEngineInteroptTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/JdkOpenSslEngineInteroptTest.java @@ -15,6 +15,7 @@ */ package io.netty.handler.ssl; +import io.netty.util.internal.PlatformDependent; import org.junit.BeforeClass; import org.junit.Test; import org.junit.runner.RunWith; @@ -32,17 +33,21 @@ @RunWith(Parameterized.class) public class 
JdkOpenSslEngineInteroptTest extends SSLEngineTest {
 
-    @Parameterized.Parameters(name = "{index}: bufferType = {0}")
-    public static Collection<Object> data() {
-        List<Object> params = new ArrayList<Object>();
+    @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}")
+    public static Collection<Object[]> data() {
+        List<Object[]> params = new ArrayList<Object[]>();
         for (BufferType type: BufferType.values()) {
-            params.add(type);
+            params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()});
+
+            if (PlatformDependent.javaVersion() >= 11 && OpenSsl.isTlsv13Supported()) {
+                params.add(new Object[] { type, ProtocolCipherCombo.tlsv13() });
+            }
         }
         return params;
     }
 
-    public JdkOpenSslEngineInteroptTest(BufferType type) {
-        super(type);
+    public JdkOpenSslEngineInteroptTest(BufferType type, ProtocolCipherCombo protocolCipherCombo) {
+        super(type, protocolCipherCombo);
     }
 
     @BeforeClass
diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
index f37a6aff251..74f000fd01e 100644
--- a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
@@ -26,6 +26,7 @@
 import java.util.ArrayList;
 import java.util.Collection;
 
+import io.netty.util.internal.EmptyArrays;
 import io.netty.util.internal.PlatformDependent;
 import org.junit.Ignore;
 import org.junit.Test;
@@ -141,12 +142,15 @@ final void activate(JdkSslEngineTest instance) {
     private static final String FALLBACK_APPLICATION_LEVEL_PROTOCOL = "my-protocol-http1_1";
     private static final String APPLICATION_LEVEL_PROTOCOL_NOT_COMPATIBLE = "my-protocol-FOO";
 
-    @Parameterized.Parameters(name = "{index}: providerType = {0}, bufferType = {1}")
+    @Parameterized.Parameters(name = "{index}: providerType = {0}, bufferType = {1}, combo = {2}")
     public static Collection<Object[]> data() {
         List<Object[]> params = new ArrayList<Object[]>();
         for (ProviderType providerType : ProviderType.values()) {
             for (BufferType bufferType : BufferType.values()) {
-                params.add(new Object[]{providerType, bufferType});
+                params.add(new Object[]{ providerType, bufferType, ProtocolCipherCombo.tlsv12()});
+                if (PlatformDependent.javaVersion() >= 11) {
+                    params.add(new Object[] { providerType, bufferType, ProtocolCipherCombo.tlsv13() });
+                }
             }
         }
         return params;
@@ -156,8 +160,8 @@ public static Collection<Object[]> data() {
 
     private Provider provider;
 
-    public JdkSslEngineTest(ProviderType providerType, BufferType bufferType) {
-        super(bufferType);
+    public JdkSslEngineTest(ProviderType providerType, BufferType bufferType, ProtocolCipherCombo protocolCipherCombo) {
+        super(bufferType, protocolCipherCombo);
         this.providerType = providerType;
     }
 
@@ -235,9 +239,11 @@ public String select(List<String> protocols) {
                     InsecureTrustManagerFactory.INSTANCE, null,
                     IdentityCipherSuiteFilter.INSTANCE, clientApn, 0, 0);
 
-            setupHandlers(serverSslCtx, clientSslCtx);
+            setupHandlers(new TestDelegatingSslContext(serverSslCtx), new TestDelegatingSslContext(clientSslCtx));
             assertTrue(clientLatch.await(2, TimeUnit.SECONDS));
-            assertTrue(clientException instanceof SSLHandshakeException);
+            // When using TLSv1.3 the handshake is NOT sent in an extra round trip which means there will be
+            // no exception reported in this case but just the channel will be closed.
+            assertTrue(clientException instanceof SSLHandshakeException || clientException == null);
         }
     } catch (SkipTestException e) {
         // ALPN availability is dependent on the java version. If ALPN is not available because of
@@ -358,4 +364,16 @@ private static final class SkipTestException extends RuntimeException {
             super(message);
         }
     }
+
+    private final class TestDelegatingSslContext extends DelegatingSslContext {
+        TestDelegatingSslContext(SslContext ctx) {
+            super(ctx);
+        }
+
+        @Override
+        protected void initEngine(SSLEngine engine) {
+            engine.setEnabledProtocols(protocols());
+            engine.setEnabledCipherSuites(ciphers().toArray(EmptyArrays.EMPTY_STRINGS));
+        }
+    }
 }
diff --git a/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java
index 99fb8fa82e2..630de226d6d 100644
--- a/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java
@@ -67,17 +67,21 @@ public class OpenSslEngineTest extends SSLEngineTest {
     private static final String PREFERRED_APPLICATION_LEVEL_PROTOCOL = "my-protocol-http2";
     private static final String FALLBACK_APPLICATION_LEVEL_PROTOCOL = "my-protocol-http1_1";
 
-    @Parameterized.Parameters(name = "{index}: bufferType = {0}")
-    public static Collection<Object> data() {
-        List<Object> params = new ArrayList<Object>();
+    @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}")
+    public static Collection<Object[]> data() {
+        List<Object[]> params = new ArrayList<Object[]>();
         for (BufferType type: BufferType.values()) {
-            params.add(type);
+            params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()});
+
+            if (PlatformDependent.javaVersion() >= 11 && OpenSsl.isTlsv13Supported()) {
+                params.add(new Object[] { type, ProtocolCipherCombo.tlsv13() });
+            }
         }
         return params;
     }
 
-    public OpenSslEngineTest(BufferType type) {
-        super(type);
+    public OpenSslEngineTest(BufferType type, ProtocolCipherCombo cipherCombo) {
+        super(type, cipherCombo);
     }
 
     @BeforeClass
@@ -206,13 +210,17 @@ public void testEnablingAnAlreadyDisabledSslProtocol() throws Exception {
     @Test
     public void testWrapBuffersNoWritePendingError() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
         try {
@@ -240,13 +248,17 @@ public void testWrapBuffersNoWritePendingError() throws Exception {
     @Test
     public void testOnlySmallBufferNeededForWrap() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
         try {
@@ -291,13 +303,17 @@ public void testOnlySmallBufferNeededForWrap() throws Exception {
     @Test
    public void testNeededDstCapacityIsCorrectlyCalculated() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
         try {
@@ -327,13 +343,17 @@ public void testNeededDstCapacityIsCorrectlyCalculated() throws Exception {
     @Test
     public void testSrcsLenOverFlowCorrectlyHandled() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
         try {
@@ -374,9 +394,11 @@ public void testSrcsLenOverFlowCorrectlyHandled() throws Exception {
     @Test
     public void testCalculateOutNetBufSizeOverflow() throws SSLException {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         try {
             clientEngine = clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT);
@@ -390,9 +412,11 @@ public void testCalculateOutNetBufSizeOverflow() throws SSLException {
     @Test
     public void testCalculateOutNetBufSize0() throws SSLException {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         try {
             clientEngine = clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT);
@@ -415,13 +439,17 @@ public void testCorrectlyCalculateSpaceForAlertJDKCompatabilityModeOff() throws
     private void testCorrectlyCalculateSpaceForAlert(boolean jdkCompatabilityMode) throws Exception {
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
         try {
@@ -473,13 +501,13 @@ protected void mySetupMutualAuthServerInitSslHandler(SslHandler handler) {
     @Test
     public void testWrapWithDifferentSizesTLSv1() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .build();
 
         testWrapWithDifferentSizes(PROTOCOL_TLS_V1, "AES128-SHA");
         testWrapWithDifferentSizes(PROTOCOL_TLS_V1, "ECDHE-RSA-AES128-SHA");
@@ -504,13 +532,13 @@ public void testWrapWithDifferentSizesTLSv1() throws Exception {
     @Test
     public void testWrapWithDifferentSizesTLSv1_1() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .build();
 
         testWrapWithDifferentSizes(PROTOCOL_TLS_V1_1, "ECDHE-RSA-AES256-SHA");
         testWrapWithDifferentSizes(PROTOCOL_TLS_V1_1, "AES256-SHA");
@@ -613,12 +641,16 @@ public void testMultipleRecordsInOneBufferWithNonZeroPositionJDKCompatabilityMod
                 .forClient()
                 .trustManager(cert.cert())
                 .sslProvider(sslClientProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine client = wrapEngine(clientSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
         serverSslCtx = SslContextBuilder
                 .forServer(cert.certificate(), cert.privateKey())
                 .sslProvider(sslServerProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine server = wrapEngine(serverSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
@@ -690,12 +722,16 @@ public void testInputTooBigAndFillsUpBuffersJDKCompatabilityModeOff() throws Exc
                 .forClient()
                 .trustManager(cert.cert())
                 .sslProvider(sslClientProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine client = wrapEngine(clientSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
         serverSslCtx = SslContextBuilder
                 .forServer(cert.certificate(), cert.privateKey())
                 .sslProvider(sslServerProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine server = wrapEngine(serverSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
@@ -774,12 +810,16 @@ public void testPartialPacketUnwrapJDKCompatabilityModeOff() throws Exception {
                 .forClient()
                 .trustManager(cert.cert())
                 .sslProvider(sslClientProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine client = wrapEngine(clientSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
         serverSslCtx = SslContextBuilder
                 .forServer(cert.certificate(), cert.privateKey())
                 .sslProvider(sslServerProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine server = wrapEngine(serverSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
@@ -849,12 +889,16 @@ public void testBufferUnderFlowAvoidedIfJDKCompatabilityModeOff() throws Excepti
                 .forClient()
                 .trustManager(cert.cert())
                 .sslProvider(sslClientProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine client = wrapEngine(clientSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
         serverSslCtx = SslContextBuilder
                 .forServer(cert.certificate(), cert.privateKey())
                 .sslProvider(sslServerProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine server = wrapEngine(serverSslCtx.newHandler(UnpooledByteBufAllocator.DEFAULT).engine());
 
@@ -982,8 +1026,10 @@ public void testSNIMatchersDoesNotThrow() throws Exception {
         assumeTrue(PlatformDependent.javaVersion() >= 8);
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
         SSLEngine engine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT));
         try {
@@ -1002,8 +1048,10 @@ public void testSNIMatchersWithSNINameWithUnderscore() throws Exception {
         byte[] name = "rb8hx3pww30y3tvw0mwy.v1_1".getBytes(CharsetUtil.UTF_8);
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
         SSLEngine engine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT));
         try {
@@ -1022,8 +1070,10 @@ public void testSNIMatchersWithSNINameWithUnderscore() throws Exception {
     public void testAlgorithmConstraintsThrows() throws Exception {
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .build();
+                        .sslProvider(sslServerProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
         SSLEngine engine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT));
         try {
diff --git a/handler/src/test/java/io/netty/handler/ssl/OpenSslJdkSslEngineInteroptTest.java b/handler/src/test/java/io/netty/handler/ssl/OpenSslJdkSslEngineInteroptTest.java
index d2a00c5432c..bc1106e7a34 100644
--- a/handler/src/test/java/io/netty/handler/ssl/OpenSslJdkSslEngineInteroptTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/OpenSslJdkSslEngineInteroptTest.java
@@ -15,6 +15,7 @@
  */
 package io.netty.handler.ssl;
 
+import io.netty.util.internal.PlatformDependent;
 import org.junit.BeforeClass;
 import org.junit.Ignore;
 import org.junit.Test;
@@ -34,17 +35,21 @@
 @RunWith(Parameterized.class)
 public class OpenSslJdkSslEngineInteroptTest extends SSLEngineTest {
 
-    @Parameterized.Parameters(name = "{index}: bufferType = {0}")
-    public static Collection<Object> data() {
-        List<Object> params = new ArrayList<Object>();
+    @Parameterized.Parameters(name = "{index}: bufferType = {0}, combo = {1}")
+    public static Collection<Object[]> data() {
+        List<Object[]> params = new ArrayList<Object[]>();
         for (BufferType type: BufferType.values()) {
-            params.add(type);
+            params.add(new Object[] { type, ProtocolCipherCombo.tlsv12()});
+
+            if (PlatformDependent.javaVersion() >= 11 && OpenSsl.isTlsv13Supported()) {
+                params.add(new Object[] { type, ProtocolCipherCombo.tlsv13() });
+            }
         }
         return params;
     }
 
-    public OpenSslJdkSslEngineInteroptTest(BufferType type) {
-        super(type);
+    public OpenSslJdkSslEngineInteroptTest(BufferType type, ProtocolCipherCombo combo) {
+        super(type, combo);
     }
 
     @BeforeClass
diff --git a/handler/src/test/java/io/netty/handler/ssl/OpenSslTestUtils.java b/handler/src/test/java/io/netty/handler/ssl/OpenSslTestUtils.java
index e8c46ed7a05..0a3ff97f324 100644
--- a/handler/src/test/java/io/netty/handler/ssl/OpenSslTestUtils.java
+++ b/handler/src/test/java/io/netty/handler/ssl/OpenSslTestUtils.java
@@ -15,6 +15,12 @@
  */
 package io.netty.handler.ssl;
 
+import io.netty.util.internal.PlatformDependent;
+
+import java.util.Arrays;
+import java.util.Collections;
+
+import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_2;
 import static org.junit.Assume.assumeTrue;
 
 final class OpenSslTestUtils {
@@ -28,4 +34,17 @@ static void checkShouldUseKeyManagerFactory() {
     static boolean isBoringSSL() {
         return "BoringSSL".equals(OpenSsl.versionString());
     }
+
+    static SslContextBuilder configureProtocolForMutualAuth(
+            SslContextBuilder ctx, SslProvider sslClientProvider, SslProvider sslServerProvider) {
+        if (PlatformDependent.javaVersion() >= 11
+                && sslClientProvider == SslProvider.JDK && sslServerProvider != SslProvider.JDK) {
+            // Make sure we do not use TLSv1.3 as there seems to be a bug currently in the JDK TLSv1.3 implementation.
+            // See:
+            //  - http://mail.openjdk.java.net/pipermail/security-dev/2018-September/018191.html
+            //  - https://bugs.openjdk.java.net/projects/JDK/issues/JDK-8210846
+            ctx.protocols(PROTOCOL_TLS_V1_2).ciphers(Collections.singleton("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"));
+        }
+        return ctx;
+    }
 }
diff --git a/handler/src/test/java/io/netty/handler/ssl/ParameterizedSslHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/ParameterizedSslHandlerTest.java
index 2abc33e8a31..780ddf670ee 100644
--- a/handler/src/test/java/io/netty/handler/ssl/ParameterizedSslHandlerTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/ParameterizedSslHandlerTest.java
@@ -381,12 +381,21 @@ private void testCloseNotify(final long closeNotifyReadTimeout, final boolean ti
         SelfSignedCertificate ssc = new SelfSignedCertificate();
 
         final SslContext sslServerCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(serverProvider)
-                .build();
+                        .sslProvider(serverProvider)
+                        // Use TLSv1.2 as we depend on the fact that the handshake
+                        // is done in an extra round trip in the test which
+                        // is not true in TLSv1.3
+                        .protocols(SslUtils.PROTOCOL_TLS_V1_2)
+                        .build();
 
         final SslContext sslClientCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(clientProvider).build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(clientProvider)
+                        // Use TLSv1.2 as we depend on the fact that the handshake
+                        // is done in an extra round trip in the test which
+                        // is not true in TLSv1.3
+                        .protocols(SslUtils.PROTOCOL_TLS_V1_2)
+                        .build();
 
         EventLoopGroup group = new NioEventLoopGroup();
         Channel sc = null;
diff --git a/handler/src/test/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngineTest.java
index ddbd0b16ded..588619d3a7c 100644
--- a/handler/src/test/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/ReferenceCountedOpenSslEngineTest.java
@@ -23,8 +23,8 @@
 
 public class ReferenceCountedOpenSslEngineTest extends OpenSslEngineTest {
 
-    public ReferenceCountedOpenSslEngineTest(BufferType type) {
-        super(type);
+    public ReferenceCountedOpenSslEngineTest(BufferType type, ProtocolCipherCombo combo) {
+        super(type, combo);
     }
 
     @Override
@@ -60,9 +60,11 @@ protected void cleanupServerSslEngine(SSLEngine engine) {
     @Test(expected = NullPointerException.class)
     public void testNotLeakOnException() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
-                .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                .sslProvider(sslClientProvider())
-                .build();
+                        .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                        .sslProvider(sslClientProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
         clientSslCtx.newEngine(null);
     }
diff --git a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java
index 4248e1fca48..0094c92d3ec 100644
--- a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java
@@ -33,6 +33,7 @@
 import io.netty.channel.socket.SocketChannel;
 import io.netty.channel.socket.nio.NioServerSocketChannel;
 import io.netty.channel.socket.nio.NioSocketChannel;
+import io.netty.handler.ssl.ApplicationProtocolConfig.Protocol;
 import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
 import io.netty.handler.ssl.util.SelfSignedCertificate;
 import io.netty.handler.ssl.util.SimpleTrustManagerFactory;
@@ -67,6 +68,7 @@
 import java.security.cert.Certificate;
 import java.security.cert.CertificateException;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
@@ -79,6 +81,7 @@
 import javax.net.ssl.SNIHostName;
 import javax.net.ssl.SSLEngine;
 import javax.net.ssl.SSLEngineResult;
+import javax.net.ssl.SSLEngineResult.Status;
 import javax.net.ssl.SSLException;
 import javax.net.ssl.SSLHandshakeException;
 import javax.net.ssl.SSLParameters;
@@ -89,14 +92,7 @@
 import javax.net.ssl.X509TrustManager;
 import javax.security.cert.X509Certificate;
 
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_SSL_V2;
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_SSL_V2_HELLO;
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_SSL_V3;
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1;
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_1;
-import static io.netty.handler.ssl.SslUtils.PROTOCOL_TLS_V1_2;
-import static io.netty.handler.ssl.SslUtils.SSL_RECORD_HEADER_LENGTH;
-
+import static io.netty.handler.ssl.SslUtils.*;
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -220,10 +216,42 @@ enum BufferType {
         Mixed
     }
 
+    static final class ProtocolCipherCombo {
+        private static final ProtocolCipherCombo TLSV12 = new ProtocolCipherCombo(
+                PROTOCOL_TLS_V1_2, "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256");
+        private static final ProtocolCipherCombo TLSV13 = new ProtocolCipherCombo(
+                PROTOCOL_TLS_V1_3, "TLS_AES_128_GCM_SHA256");
+        final String protocol;
+        final String cipher;
+
+        private ProtocolCipherCombo(String protocol, String cipher) {
+            this.protocol = protocol;
+            this.cipher = cipher;
+        }
+
+        static ProtocolCipherCombo tlsv12() {
+            return TLSV12;
+        }
+
+        static ProtocolCipherCombo tlsv13() {
+            return TLSV13;
+        }
+
+        @Override
+        public String toString() {
+            return "ProtocolCipherCombo{" +
+                   "protocol='" + protocol + '\'' +
+                   ", cipher='" + cipher + '\'' +
+                   '}';
+        }
+    }
+
     private final BufferType type;
+    private final ProtocolCipherCombo protocolCipherCombo;
 
-    protected SSLEngineTest(BufferType type) {
+    protected SSLEngineTest(BufferType type, ProtocolCipherCombo protocolCipherCombo) {
         this.type = type;
+        this.protocolCipherCombo = protocolCipherCombo;
     }
 
     protected ByteBuffer allocateBuffer(int len) {
@@ -620,36 +648,46 @@ protected boolean mySetupMutualAuthServerIsValidClientException(Throwable cause)
     }
 
     protected boolean mySetupMutualAuthServerIsValidException(Throwable cause) {
-        return cause instanceof SSLHandshakeException || cause instanceof ClosedChannelException;
+        // As in TLSv1.3 the handshake is sent without an extra roundtrip an SSLException is valid as well.
+        return cause instanceof SSLException || cause instanceof ClosedChannelException;
     }
 
     protected void mySetupMutualAuthServerInitSslHandler(SslHandler handler) {
     }
 
+    private SslContextBuilder configureProtocolForMutualAuth(SslContextBuilder ctx) {
+        return OpenSslTestUtils.configureProtocolForMutualAuth(ctx, sslClientProvider(), sslServerProvider());
+    }
+
     private void mySetupMutualAuth(KeyManagerFactory serverKMF, final File serverTrustManager,
                                    KeyManagerFactory clientKMF, File clientTrustManager,
                                    ClientAuth clientAuth, final boolean failureExpected,
                                    final boolean serverInitEngine)
            throws SSLException, InterruptedException {
-        serverSslCtx = SslContextBuilder.forServer(serverKMF)
-                .sslProvider(sslServerProvider())
-                .sslContextProvider(serverSslContextProvider())
-                .trustManager(serverTrustManager)
-                .clientAuth(clientAuth)
-                .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                .sessionCacheSize(0)
-                .sessionTimeout(0)
-                .build();
+        serverSslCtx = configureProtocolForMutualAuth(
+                SslContextBuilder.forServer(serverKMF)
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .sslProvider(sslServerProvider())
+                        .sslContextProvider(serverSslContextProvider())
+                        .trustManager(serverTrustManager)
+                        .clientAuth(clientAuth)
+                        .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                        .sessionCacheSize(0)
+                        .sessionTimeout(0)).build();
+
+        clientSslCtx = configureProtocolForMutualAuth(
+                SslContextBuilder.forClient()
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .sslProvider(sslClientProvider())
+                        .sslContextProvider(clientSslContextProvider())
+                        .trustManager(clientTrustManager)
+                        .keyManager(clientKMF)
+                        .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                        .sessionCacheSize(0)
+                        .sessionTimeout(0)).build();
 
-        clientSslCtx = SslContextBuilder.forClient()
-                .sslProvider(sslClientProvider())
-                .sslContextProvider(clientSslContextProvider())
-                .trustManager(clientTrustManager)
-                .keyManager(clientKMF)
-                .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                .sessionCacheSize(0)
-                .sessionTimeout(0)
-                .build();
         serverConnectedChannel = null;
         sb = new ServerBootstrap();
         cb = new Bootstrap();
@@ -711,10 +749,11 @@ protected void initChannel(Channel ch) throws Exception {
                     @Override
                     public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                         if (evt == SslHandshakeCompletionEvent.SUCCESS) {
-                            if (failureExpected) {
-                                clientException = new IllegalStateException("handshake complete. expected failure");
+                            // With TLS1.3 a mutal auth error will not be propagated as a handshake error most of the
+                            // time as the handshake needs NO extra roundtrip.
+                            if (!failureExpected) {
+                                clientLatch.countDown();
                             }
-                            clientLatch.countDown();
                         } else if (evt instanceof SslHandshakeCompletionEvent) {
                             clientException = ((SslHandshakeCompletionEvent) evt).cause();
                             clientLatch.countDown();
@@ -724,7 +763,7 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc
 
                     @Override
                     public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
-                        if (cause.getCause() instanceof SSLHandshakeException) {
+                        if (cause.getCause() instanceof SSLException) {
                             clientException = cause.getCause();
                             clientLatch.countDown();
                         } else {
@@ -735,7 +774,7 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
             }
         });
 
-        serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
+        serverChannel = sb.bind(new InetSocketAddress(8443)).sync().channel();
         int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
 
         ChannelFuture ccf = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port));
@@ -776,6 +815,8 @@ private void mySetupClientHostnameValidation(File serverCrtFile, File serverKeyF
         final String expectedHost = "localhost";
         serverSslCtx = SslContextBuilder.forServer(serverCrtFile, serverKeyFile, null)
                 .sslProvider(sslServerProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .sslContextProvider(serverSslContextProvider())
                 .trustManager(InsecureTrustManagerFactory.INSTANCE)
                 .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
@@ -785,12 +826,15 @@ private void mySetupClientHostnameValidation(File serverCrtFile, File serverKeyF
 
         clientSslCtx = SslContextBuilder.forClient()
                 .sslProvider(sslClientProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .sslContextProvider(clientSslContextProvider())
                 .trustManager(clientTrustCrtFile)
                 .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
                 .sessionCacheSize(0)
                 .sessionTimeout(0)
                 .build();
+
         serverConnectedChannel = null;
         sb = new ServerBootstrap();
         cb = new Bootstrap();
@@ -897,24 +941,28 @@ private void mySetupMutualAuth(
             File servertTrustCrtFile, File serverKeyFile, final File serverCrtFile, String serverKeyPassword,
             File clientTrustCrtFile, File clientKeyFile, File clientCrtFile, String clientKeyPassword)
             throws InterruptedException, SSLException {
-        serverSslCtx = SslContextBuilder.forServer(serverCrtFile, serverKeyFile, serverKeyPassword)
-                .sslProvider(sslServerProvider())
-                .sslContextProvider(serverSslContextProvider())
-                .trustManager(servertTrustCrtFile)
-                .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                .sessionCacheSize(0)
-                .sessionTimeout(0)
-                .build();
+        serverSslCtx = configureProtocolForMutualAuth(
+                SslContextBuilder.forServer(serverCrtFile, serverKeyFile, serverKeyPassword)
+                        .sslProvider(sslServerProvider())
+                        .sslContextProvider(serverSslContextProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .trustManager(servertTrustCrtFile)
+                        .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                        .sessionCacheSize(0)
+                        .sessionTimeout(0)).build();
+        clientSslCtx = configureProtocolForMutualAuth(
+                SslContextBuilder.forClient()
+                        .sslProvider(sslClientProvider())
+                        .sslContextProvider(clientSslContextProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .trustManager(clientTrustCrtFile)
+                        .keyManager(clientCrtFile, clientKeyFile, clientKeyPassword)
+                        .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                        .sessionCacheSize(0)
+                        .sessionTimeout(0)).build();
 
-        clientSslCtx = SslContextBuilder.forClient()
-                .sslProvider(sslClientProvider())
-                .sslContextProvider(clientSslContextProvider())
-                .trustManager(clientTrustCrtFile)
-                .keyManager(clientCrtFile, clientKeyFile, clientKeyPassword)
-                .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                .sessionCacheSize(0)
-                .sessionTimeout(0)
-                .build();
         serverConnectedChannel = null;
         sb = new ServerBootstrap();
         cb = new Bootstrap();
@@ -930,6 +978,7 @@ protected void initChannel(Channel ch) {
                 SSLEngine engine = serverSslCtx.newEngine(ch.alloc());
                 engine.setUseClientMode(false);
                 engine.setNeedClientAuth(true);
+
                 p.addLast(new SslHandler(engine));
                 p.addLast(new MessageDelegatorChannelHandler(serverReceiver, serverLatch));
                 p.addLast(new ChannelInboundHandlerAdapter() {
@@ -986,13 +1035,14 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc
             protected void initChannel(Channel ch) throws Exception {
                 ch.config().setAllocator(new TestByteBufAllocator(ch.config().getAllocator(), type));
 
+                SslHandler handler = clientSslCtx.newHandler(ch.alloc());
+                handler.engine().setNeedClientAuth(true);
                 ChannelPipeline p = ch.pipeline();
-                p.addLast(clientSslCtx.newHandler(ch.alloc()));
+                p.addLast(handler);
                 p.addLast(new MessageDelegatorChannelHandler(clientReceiver, clientLatch));
                 p.addLast(new ChannelInboundHandlerAdapter() {
                     @Override
                     public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
-                        cause.printStackTrace();
                         if (cause.getCause() instanceof SSLHandshakeException) {
                             clientException = cause.getCause();
                             clientLatch.countDown();
@@ -1081,11 +1131,15 @@ public void testSessionInvalidate() throws Exception {
                 .trustManager(InsecureTrustManagerFactory.INSTANCE)
                 .sslProvider(sslClientProvider())
                 .sslContextProvider(clientSslContextProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
                 .sslProvider(sslServerProvider())
                 .sslContextProvider(serverSslContextProvider())
+                .protocols(protocols())
+                .ciphers(ciphers())
                 .build();
         SSLEngine clientEngine = null;
         SSLEngine serverEngine = null;
@@ -1110,11 +1164,15 @@ public void testSSLSessionId() throws Exception {
         clientSslCtx = SslContextBuilder.forClient()
                 .trustManager(InsecureTrustManagerFactory.INSTANCE)
                 .sslProvider(sslClientProvider())
+                // This test only works for non TLSv1.3 for now
+                .protocols(PROTOCOL_TLS_V1_2)
                 .sslContextProvider(clientSslContextProvider())
                 .build();
         SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
                 .sslProvider(sslServerProvider())
+                // This test only works for non TLSv1.3 for now
+                .protocols(PROTOCOL_TLS_V1_2)
                 .sslContextProvider(serverSslContextProvider())
                 .build();
         SSLEngine clientEngine = null;
@@ -1145,8 +1203,11 @@ public void clientInitiatedRenegotiationWithFatalAlertDoesNotInfiniteLoopServer(
             throws CertificateException, SSLException, InterruptedException, ExecutionException {
         final SelfSignedCertificate ssc = new SelfSignedCertificate();
         serverSslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey())
-                .sslProvider(sslServerProvider())
-                .sslContextProvider(serverSslContextProvider()).build();
+                        .sslProvider(sslServerProvider())
+                        .sslContextProvider(serverSslContextProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
         sb = new ServerBootstrap()
                 .group(new NioEventLoopGroup(1))
                 .channel(NioServerSocketChannel.class)
@@ -1196,8 +1257,12 @@ public void channelInactive(ChannelHandlerContext ctx) {
         serverChannel = sb.bind(new InetSocketAddress(0)).syncUninterruptibly().channel();
 
         clientSslCtx = SslContextBuilder.forClient()
-                .sslProvider(SslProvider.JDK) // OpenSslEngine doesn't support renegotiation on client side
-                .trustManager(InsecureTrustManagerFactory.INSTANCE).build();
+                // OpenSslEngine doesn't support renegotiation on client side
+                .sslProvider(SslProvider.JDK)
+                .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                .protocols(protocols())
+                .ciphers(ciphers())
+                .build();
 
         cb = new Bootstrap();
         cb.group(new NioEventLoopGroup(1))
@@ -1257,9 +1322,11 @@ protected void testEnablingAnAlreadyDisabledSslProtocol(String[] protocols1, Str
             File serverKeyFile = new File(getClass().getResource("test_unencrypted.pem").getFile());
             File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
             serverSslCtx = SslContextBuilder.forServer(serverCrtFile, serverKeyFile)
-                    .sslProvider(sslServerProvider())
-                    .sslContextProvider(serverSslContextProvider())
-                    .build();
+                        .sslProvider(sslServerProvider())
+                        .sslContextProvider(serverSslContextProvider())
+                        .protocols(protocols())
+                        .ciphers(ciphers())
+                        .build();
 
             sslEngine = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT));
 
@@ -1338,7 +1405,10 @@ protected void handshake(SSLEngine clientEngine, SSLEngine serverEngine) throws
             cTOsPos = cTOs.position();
             sTOcPos = sTOc.position();
 
-            if (!clientHandshakeFinished) {
+            if (!clientHandshakeFinished ||
+                    // After the handshake completes it is possible we have more data that was send by the server as
+                    // the server will send session updates after the handshake. In this case continue to unwrap.
+                    SslUtils.PROTOCOL_TLS_V1_3.equals(clientEngine.getSession().getProtocol())) {
                 int clientAppReadBufferPos = clientAppReadBuffer.position();
                 clientResult = clientEngine.unwrap(sTOc, clientAppReadBuffer);
 
@@ -1350,7 +1420,7 @@ protected void handshake(SSLEngine clientEngine, SSLEngine serverEngine) throws
                     clientHandshakeFinished = true;
                 }
             } else {
-                assertFalse(sTOc.hasRemaining());
+                assertEquals(0, sTOc.remaining());
             }
 
             if (!serverHandshakeFinished) {
@@ -1433,24 +1503,35 @@ protected void setupHandlers(ApplicationProtocolConfig serverApn, ApplicationPro
         SelfSignedCertificate ssc = new SelfSignedCertificate();
 
         try {
-            setupHandlers(SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey(), null)
-                            .sslProvider(sslServerProvider())
-                            .sslContextProvider(serverSslContextProvider())
-                            .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                            .applicationProtocolConfig(serverApn)
-                            .sessionCacheSize(0)
-                            .sessionTimeout(0)
-                            .build(),
-
-                    SslContextBuilder.forClient()
-                            .sslProvider(sslClientProvider())
-                            .sslContextProvider(clientSslContextProvider())
-                            .applicationProtocolConfig(clientApn)
-                            .trustManager(InsecureTrustManagerFactory.INSTANCE)
-                            .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
-                            .sessionCacheSize(0)
-                            .sessionTimeout(0)
-                            .build());
+            SslContextBuilder serverCtxBuilder = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey(), null)
+                    .sslProvider(sslServerProvider())
+                    .sslContextProvider(serverSslContextProvider())
+                    .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                    .applicationProtocolConfig(serverApn)
+                    .sessionCacheSize(0)
+                    .sessionTimeout(0);
+            if (serverApn.protocol() == Protocol.NPN || serverApn.protocol() == Protocol.NPN_AND_ALPN) {
+                // NPN is not really well supported with TLSv1.3 so force to use TLSv1.2
+                // See https://github.com/openssl/openssl/issues/3665
+                serverCtxBuilder.protocols(PROTOCOL_TLS_V1_2);
+            }
+
+            SslContextBuilder clientCtxBuilder = SslContextBuilder.forClient()
+                    .sslProvider(sslClientProvider())
+                    .sslContextProvider(clientSslContextProvider())
+                    .applicationProtocolConfig(clientApn)
+                    .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                    .ciphers(null, IdentityCipherSuiteFilter.INSTANCE)
+                    .sessionCacheSize(0)
+                    .sessionTimeout(0);
+
+            if (clientApn.protocol() == Protocol.NPN || clientApn.protocol() == Protocol.NPN_AND_ALPN) {
+                // NPN is not really well supported with TLSv1.3 so force to use TLSv1.2
+                // See https://github.com/openssl/openssl/issues/3665
+                clientCtxBuilder.protocols(PROTOCOL_TLS_V1_2);
+            }
+
+            setupHandlers(serverCtxBuilder.build(), clientCtxBuilder.build());
         } finally {
             ssc.delete();
         }
@@ -1511,6 +1592,11 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
                             ctx.fireExceptionCaught(cause);
                         }
                     }
+
+                    @Override
+                    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+                        clientLatch.countDown();
+                    }
                 });
             }
         });
@@ -1524,12 +1610,15 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
 
     @Test(timeout = 30000)
     public void testMutualAuthSameCertChain() throws Exception {
-        serverSslCtx = SslContextBuilder.forServer(
-                new ByteArrayInputStream(X509_CERT_PEM.getBytes(CharsetUtil.UTF_8)),
-                new
ByteArrayInputStream(PRIVATE_KEY_PEM.getBytes(CharsetUtil.UTF_8))) - .trustManager(new ByteArrayInputStream(X509_CERT_PEM.getBytes(CharsetUtil.UTF_8))) - .clientAuth(ClientAuth.REQUIRE).sslProvider(sslServerProvider()) - .sslContextProvider(serverSslContextProvider()).build(); + serverSslCtx = configureProtocolForMutualAuth( + SslContextBuilder.forServer( + new ByteArrayInputStream(X509_CERT_PEM.getBytes(CharsetUtil.UTF_8)), + new ByteArrayInputStream(PRIVATE_KEY_PEM.getBytes(CharsetUtil.UTF_8))) + .trustManager(new ByteArrayInputStream(X509_CERT_PEM.getBytes(CharsetUtil.UTF_8))) + .clientAuth(ClientAuth.REQUIRE).sslProvider(sslServerProvider()) + .sslContextProvider(serverSslContextProvider()) + .protocols(protocols()) + .ciphers(ciphers())).build(); sb = new ServerBootstrap(); sb.group(new NioEventLoopGroup(), new NioEventLoopGroup()); @@ -1580,13 +1669,14 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc } }).bind(new InetSocketAddress(0)).syncUninterruptibly().channel(); - clientSslCtx = SslContextBuilder.forClient() - .keyManager( + clientSslCtx = configureProtocolForMutualAuth( + SslContextBuilder.forClient().keyManager( new ByteArrayInputStream(CLIENT_X509_CERT_CHAIN_PEM.getBytes(CharsetUtil.UTF_8)), new ByteArrayInputStream(CLIENT_PRIVATE_KEY_PEM.getBytes(CharsetUtil.UTF_8))) .trustManager(new ByteArrayInputStream(X509_CERT_PEM.getBytes(CharsetUtil.UTF_8))) .sslProvider(sslClientProvider()) - .sslContextProvider(clientSslContextProvider()).build(); + .sslContextProvider(clientSslContextProvider()) + .protocols(protocols()).ciphers(ciphers())).build(); cb = new Bootstrap(); cb.group(new NioEventLoopGroup()); cb.channel(NioSocketChannel.class); @@ -1610,12 +1700,16 @@ public void testUnwrapBehavior() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = 
wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1790,12 +1884,16 @@ public void testPacketBufferSizeLimit() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1829,6 +1927,8 @@ public void testSSLEngineUnwrapNoSslRecord() throws Exception { clientSslCtx = SslContextBuilder .forClient() .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1857,6 +1957,8 @@ public void testBeginHandshakeAfterEngineClosed() throws SSLException { clientSslCtx = SslContextBuilder .forClient() .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1881,12 +1983,16 @@ public void testBeginHandshakeCloseOutbound() throws Exception { clientSslCtx = SslContextBuilder .forClient() .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + 
.ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1928,12 +2034,16 @@ public void testCloseInboundAfterBeginHandshake() throws Exception { clientSslCtx = SslContextBuilder .forClient() .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -1965,12 +2075,16 @@ public void testCloseNotifySequence() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + // This test only works for non TLSv1.3 for now + .protocols(PROTOCOL_TLS_V1_2) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + // This test only works for non TLSv1.3 for now + .protocols(PROTOCOL_TLS_V1_2) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2032,6 +2146,7 @@ public void testCloseNotifySequence() throws Exception { result = server.wrap(empty, encryptedServerToClient); encryptedServerToClient.flip(); + assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); // UNWRAP/WRAP are not expected after this point assertEquals(SSLEngineResult.HandshakeStatus.NOT_HANDSHAKING, result.getHandshakeStatus()); @@ -2046,6 +2161,7 @@ public void testCloseNotifySequence() throws Exception { assertTrue(server.isInboundDone()); result = client.unwrap(encryptedServerToClient, plainClientOut); + plainClientOut.flip(); assertEquals(SSLEngineResult.Status.CLOSED, result.getStatus()); // UNWRAP/WRAP are 
not expected after this point @@ -2106,12 +2222,16 @@ public void testWrapAfterCloseOutbound() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2145,12 +2265,16 @@ public void testMultipleRecordsInOneBufferWithNonZeroPosition() throws Exception .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2220,12 +2344,16 @@ public void testMultipleRecordsInOneBufferBiggerThenPacketBufferSize() throws Ex .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2240,21 +2368,35 @@ public void testMultipleRecordsInOneBufferBiggerThenPacketBufferSize() throws Ex int srcLen = plainClientOut.remaining(); SSLEngineResult result; - while (encClientToServer.position() <= 
server.getSession().getPacketBufferSize()) { + int count = 0; + do { + int plainClientOutPosition = plainClientOut.position(); + int encClientToServerPosition = encClientToServer.position(); result = client.wrap(plainClientOut, encClientToServer); + if (result.getStatus() == Status.BUFFER_OVERFLOW) { + // We did not have enough room to wrap + assertEquals(plainClientOutPosition, plainClientOut.position()); + assertEquals(encClientToServerPosition, encClientToServer.position()); + break; + } assertEquals(SSLEngineResult.Status.OK, result.getStatus()); assertEquals(srcLen, result.bytesConsumed()); assertTrue(result.bytesProduced() > 0); plainClientOut.clear(); - } + ++count; + } while (encClientToServer.position() < server.getSession().getPacketBufferSize()); + + // Check that we were able to wrap multiple times. + assertTrue(count >= 2); encClientToServer.flip(); result = server.unwrap(encClientToServer, plainServerOut); assertEquals(SSLEngineResult.Status.OK, result.getStatus()); assertTrue(result.bytesConsumed() > 0); assertTrue(result.bytesProduced() > 0); + assertTrue(encClientToServer.hasRemaining()); } finally { cert.delete(); cleanupClientSslEngine(client); @@ -2270,12 +2412,16 @@ public void testBufferUnderFlow() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2341,12 +2487,16 @@ public void testWrapDoesNotZeroOutSrc() throws Exception { .forClient() .trustManager(cert.cert()) .sslProvider(sslClientProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine client = 
wrapEngine(clientSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); serverSslCtx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(serverSslCtx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2393,6 +2543,8 @@ private void testDisableProtocols(String protocol, String... disabledProtocols) SslContext ctx = SslContextBuilder .forServer(cert.certificate(), cert.privateKey()) .sslProvider(sslServerProvider()) + .protocols(protocols()) + .ciphers(ciphers()) .build(); SSLEngine server = wrapEngine(ctx.newEngine(UnpooledByteBufAllocator.DEFAULT)); @@ -2495,4 +2647,12 @@ protected void engineInit(ManagerFactoryParameters managerFactoryParameters) { protected SSLEngine wrapEngine(SSLEngine engine) { return engine; } + + protected List<String> ciphers() { + return Collections.singletonList(protocolCipherCombo.cipher); + } + + protected String[] protocols() { + return new String[] { protocolCipherCombo.protocol }; + } } diff --git a/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java b/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java index 0a9429e9c1a..20f2ccbb145 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SslContextBuilderTest.java @@ -15,18 +15,19 @@ */ package io.netty.handler.ssl; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; - import io.netty.buffer.UnpooledByteBufAllocator; import io.netty.handler.ssl.util.SelfSignedCertificate; import org.junit.Assume; +import org.junit.Ignore; +import org.junit.Rule; import org.junit.Test; import javax.net.ssl.SSLEngine; import javax.net.ssl.SSLException; import java.util.Collections; +import static org.junit.Assert.*; + public class SslContextBuilderTest { @Test @@ -79,10 +80,19 @@ public void testInvalidCipherJdk() throws 
Exception { testInvalidCipher(SslProvider.JDK); } - @Test(expected = SSLException.class) + @Test public void testInvalidCipherOpenSSL() throws Exception { Assume.assumeTrue(OpenSsl.isAvailable()); - testInvalidCipher(SslProvider.OPENSSL); + try { + // This may fail or not depending on the OpenSSL version used + // See https://github.com/openssl/openssl/issues/7196 + testInvalidCipher(SslProvider.OPENSSL); + if (!OpenSsl.versionString().contains("1.1.1")) { + fail(); + } + } catch (SSLException expected) { + // ok + } } private static void testInvalidCipher(SslProvider provider) throws Exception { diff --git a/handler/src/test/java/io/netty/handler/ssl/SslErrorTest.java b/handler/src/test/java/io/netty/handler/ssl/SslErrorTest.java index d935cdf683d..ab7de93787c 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SslErrorTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SslErrorTest.java @@ -124,9 +124,10 @@ public void testCorrectAlert() throws Exception { Assume.assumeTrue(OpenSsl.isAvailable()); SelfSignedCertificate ssc = new SelfSignedCertificate(); - final SslContext sslServerCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()) - .sslProvider(serverProvider) - .trustManager(new SimpleTrustManagerFactory() { + final SslContext sslServerCtx = OpenSslTestUtils.configureProtocolForMutualAuth( + SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()) + .sslProvider(serverProvider) + .trustManager(new SimpleTrustManagerFactory() { @Override protected void engineInit(KeyStore keyStore) { } @Override @@ -154,13 +155,13 @@ public X509Certificate[] getAcceptedIssuers() { } } }; } - }).clientAuth(ClientAuth.REQUIRE).build(); + }).clientAuth(ClientAuth.REQUIRE), clientProvider, serverProvider).build(); - final SslContext sslClientCtx = SslContextBuilder.forClient() + final SslContext sslClientCtx = OpenSslTestUtils.configureProtocolForMutualAuth(SslContextBuilder.forClient() .trustManager(InsecureTrustManagerFactory.INSTANCE) 
.keyManager(new File(getClass().getResource("test.crt").getFile()), new File(getClass().getResource("test_unencrypted.pem").getFile())) - .sslProvider(clientProvider).build(); + .sslProvider(clientProvider), clientProvider, serverProvider).build(); Channel serverChannel = null; Channel clientChannel = null; diff --git a/handler/src/test/java/io/netty/handler/ssl/SslUtilsTest.java b/handler/src/test/java/io/netty/handler/ssl/SslUtilsTest.java index c1de33dd6e3..ce9a22d717f 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SslUtilsTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SslUtilsTest.java @@ -28,6 +28,7 @@ import static io.netty.handler.ssl.SslUtils.getEncryptedPacketLength; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; public class SslUtilsTest { @@ -63,4 +64,15 @@ private static SSLEngine newEngine() throws SSLException, NoSuchAlgorithmExcepti engine.beginHandshake(); return engine; } + + @Test + public void testIsTLSv13Cipher() { + assertTrue(SslUtils.isTLSv13Cipher("TLS_AES_128_GCM_SHA256")); + assertTrue(SslUtils.isTLSv13Cipher("TLS_AES_256_GCM_SHA384")); + assertTrue(SslUtils.isTLSv13Cipher("TLS_CHACHA20_POLY1305_SHA256")); + assertTrue(SslUtils.isTLSv13Cipher("TLS_AES_128_CCM_SHA256")); + assertTrue(SslUtils.isTLSv13Cipher("TLS_AES_128_CCM_8_SHA256")); + assertFalse(SslUtils.isTLSv13Cipher("TLS_DHE_RSA_WITH_AES_128_GCM_SHA256")); + } + } diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslClientRenegotiateTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslClientRenegotiateTest.java index 1a49bde78bb..8036f081f4a 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslClientRenegotiateTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslClientRenegotiateTest.java @@ -138,7 +138,8 @@ public void 
testSslRenegotiationRejected(ServerBootstrap sb, Bootstrap cb) throw public void initChannel(Channel sch) throws Exception { serverChannel = sch; serverSslHandler = serverCtx.newHandler(sch.alloc()); - + // As we test renegotiation we should use a protocol that support it. + serverSslHandler.engine().setEnabledProtocols(new String[] { "TLSv1.2" }); sch.pipeline().addLast("ssl", serverSslHandler); sch.pipeline().addLast("handler", serverHandler); } @@ -150,7 +151,8 @@ public void initChannel(Channel sch) throws Exception { public void initChannel(Channel sch) throws Exception { clientChannel = sch; clientSslHandler = clientCtx.newHandler(sch.alloc()); - + // As we test renegotiation we should use a protocol that support it. + clientSslHandler.engine().setEnabledProtocols(new String[] { "TLSv1.2" }); sch.pipeline().addLast("ssl", clientSslHandler); sch.pipeline().addLast("handler", clientHandler); } diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslEchoTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslEchoTest.java index 7c94b5a6710..4cdeae98beb 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslEchoTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslEchoTest.java @@ -123,17 +123,33 @@ public String toString() { "autoRead = {5}, useChunkedWriteHandler = {6}, useCompositeByteBuf = {7}") public static Collection<Object[]> data() throws Exception { List<SslContext> serverContexts = new ArrayList<SslContext>(); - serverContexts.add(SslContextBuilder.forServer(CERT_FILE, KEY_FILE).sslProvider(SslProvider.JDK).build()); + serverContexts.add(SslContextBuilder.forServer(CERT_FILE, KEY_FILE) + .sslProvider(SslProvider.JDK) + // As we test renegotiation we should use a protocol that support it. 
+ .protocols("TLSv1.2") + .build()); List<SslContext> clientContexts = new ArrayList<SslContext>(); - clientContexts.add(SslContextBuilder.forClient().sslProvider(SslProvider.JDK).trustManager(CERT_FILE).build()); + clientContexts.add(SslContextBuilder.forClient() + .sslProvider(SslProvider.JDK) + .trustManager(CERT_FILE) + // As we test renegotiation we should use a protocol that support it. + .protocols("TLSv1.2") + .build()); boolean hasOpenSsl = OpenSsl.isAvailable(); if (hasOpenSsl) { serverContexts.add(SslContextBuilder.forServer(CERT_FILE, KEY_FILE) - .sslProvider(SslProvider.OPENSSL).build()); - clientContexts.add(SslContextBuilder.forClient().sslProvider(SslProvider.OPENSSL) - .trustManager(CERT_FILE).build()); + .sslProvider(SslProvider.OPENSSL) + // As we test renegotiation we should use a protocol that support it. + .protocols("TLSv1.2") + .build()); + clientContexts.add(SslContextBuilder.forClient() + .sslProvider(SslProvider.OPENSSL) + .trustManager(CERT_FILE) + // As we test renegotiation we should use a protocol that support it. 
+ .protocols("TLSv1.2") + .build()); } else { logger.warn("OpenSSL is unavailable and thus will not be tested.", OpenSsl.unavailabilityCause()); } diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslSessionReuseTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslSessionReuseTest.java index 4071c846104..5d0fd0a5e42 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslSessionReuseTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketSslSessionReuseTest.java @@ -98,7 +98,7 @@ public void testSslSessionReuse() throws Throwable { public void testSslSessionReuse(ServerBootstrap sb, Bootstrap cb) throws Throwable { final ReadAndDiscardHandler sh = new ReadAndDiscardHandler(true, true); final ReadAndDiscardHandler ch = new ReadAndDiscardHandler(false, true); - final String[] protocols = new String[]{ "TLSv1", "TLSv1.1", "TLSv1.2" }; + final String[] protocols = { "TLSv1", "TLSv1.1", "TLSv1.2" }; sb.childHandler(new ChannelInitializer<SocketChannel>() { @Override
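The repeated `protocols("TLSv1.2")` pinning in the test patch above exists because renegotiation was removed from TLSv1.3, so renegotiation tests must stay on TLSv1.2. The same restriction can be expressed with the plain JDK `SSLEngine` API, independent of Netty (a minimal sketch; the bare `ctx.init(null, null, null)` default initialization is our simplification):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class ProtocolPinning {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        // Default key/trust managers and SecureRandom are fine for this demo.
        ctx.init(null, null, null);
        SSLEngine engine = ctx.createSSLEngine();
        // Restrict the engine to TLSv1.2, mirroring what the patch does via
        // SslContextBuilder.protocols("TLSv1.2") / engine.setEnabledProtocols(...).
        engine.setEnabledProtocols(new String[] { "TLSv1.2" });
        System.out.println(engine.getEnabledProtocols()[0]);
    }
}
```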
train
val
"2018-10-16T07:05:45"
"2018-07-17T13:56:18Z"
re-thc
val
netty/netty/8299_8305
netty/netty
netty/netty/8299
netty/netty/8305
[ "timestamp(timedelta=105685.0, similarity=0.8497947028522919)" ]
a80c49828f8d52b70e68b8a4c80f677ca44af0af
9a3be347af35107201f7d4a72416e76cf6b39a82
[ "Are you interested in providing a PR?", "@swrdlgc ", "@swrdlgc actually this can never happen as `deflate(...)` will return the number of bytes written to the output buffer which will be 0 if it was not writable.", "The good thing is because of this bug report I think I spotted a bug which is unrelated to the reported problem tho ;)", "thanks for reply, you are right, this can never happen.\r\n" ]
[]
"2018-09-21T22:14:20Z"
[]
Infinite loop in JdkZlibEncoder.deflate()
<pre> private void deflate(ByteBuf out) { int numBytes; do { int writerIndex = out.writerIndex(); numBytes = deflater.deflate( out.array(), out.arrayOffset() + writerIndex, out.writableBytes(), Deflater.SYNC_FLUSH); out.writerIndex(writerIndex + numBytes); } while (numBytes > 0); } </pre> if out.writableBytes() is ZERO and numBytes>0, the function will never return!
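The maintainers' follow-up in the hints above can be checked with plain JDK code: `Deflater.deflate` returns the number of bytes it actually wrote, which is 0 when the output region has zero length, so the `do/while` loop exits rather than spinning. A minimal standalone sketch (the class name is ours, not Netty's):

```java
import java.util.zip.Deflater;

public class DeflateZeroRoom {
    public static void main(String[] args) {
        Deflater deflater = new Deflater();
        deflater.setInput("hello".getBytes());
        byte[] out = new byte[16];
        // Zero writable bytes: deflate() reports 0 bytes written, so the
        // loop condition `numBytes > 0` in JdkZlibEncoder.deflate() fails
        // and the method returns instead of looping forever.
        int numBytes = deflater.deflate(out, 0, 0, Deflater.SYNC_FLUSH);
        System.out.println(numBytes); // 0
        deflater.end();
    }
}
```

The real (unrelated) problem the maintainers spotted — the output buffer never being grown when deflate cannot make progress — is what the merged patch below addresses with `ensureWritable`.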
[ "codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java" ]
[ "codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java" ]
[]
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java index f039fa66e84..276d7f86b0c 100644 --- a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java +++ b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibEncoder.java @@ -225,8 +225,18 @@ protected void encode(ChannelHandlerContext ctx, ByteBuf uncompressed, ByteBuf o } deflater.setInput(inAry, offset, len); - while (!deflater.needsInput()) { + for (;;) { deflate(out); + if (deflater.needsInput()) { + // Consumed everything + break; + } else { + if (!out.isWritable()) { + // We did not consume everything but the buffer is not writable anymore. Increase the capacity to + // make more room. + out.ensureWritable(out.writerIndex()); + } + } } }
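The grow-on-full loop the patch adds can be sketched in pure JDK terms without Netty's `ByteBuf` (a sketch under that substitution; `Arrays.copyOf` stands in for `ensureWritable`):

```java
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class GrowAndDeflate {
    public static void main(String[] args) throws Exception {
        byte[] input = new byte[64 * 1024]; // highly compressible zeros
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();

        byte[] out = new byte[32];
        int written = 0;
        while (!deflater.finished()) {
            if (written == out.length) {
                // Analogue of the patch's out.ensureWritable(...): make more
                // room instead of calling deflate() with no writable bytes.
                out = Arrays.copyOf(out, out.length * 2);
            }
            written += deflater.deflate(out, written, out.length - written);
        }
        deflater.end();

        // Round-trip to confirm nothing was lost while growing the buffer.
        Inflater inflater = new Inflater();
        inflater.setInput(out, 0, written);
        byte[] back = new byte[input.length];
        int n = 0;
        while (!inflater.finished()) {
            n += inflater.inflate(back, n, back.length - n);
        }
        inflater.end();
        System.out.println(n);
    }
}
```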
null
test
val
"2018-09-21T17:04:24"
"2018-09-20T03:09:11Z"
swrdlgc
val
netty/netty/8331_8335
netty/netty
netty/netty/8331
netty/netty/8335
[ "keyword_pr_to_issue" ]
59973e93dd7da715eee709788573e3515cc50238
6138541033ee20fbef864dc5535bcd0db370593e
[ "@rkapsi doh... Want to send over a pr ?", "@normanmaurer consider it done (hopefully tomorrow morning)" ]
[]
"2018-09-28T14:07:13Z"
[ "cleanup" ]
EpollSocketChannelConfig is repeating the channel field 3x
The `EpollSocketChannelConfig` class is repeating the `channel` field 3x through inheritance. The field also has protected and package-private visibility in the super classes, which is potentially bug-prone. ![screenshot from 2018-09-27 13-46-22](https://user-images.githubusercontent.com/191635/46164564-542fd200-c25c-11e8-988b-0fa7f3d14eea.png) ### Expected behavior ### Actual behavior ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
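The pattern this issue describes — each subclass caching its own copy of the channel reference — is ordinary Java field hiding. A minimal self-contained illustration (class and field names are hypothetical stand-ins, not Netty's actual hierarchy):

```java
class Base {
    protected final Object channel;

    Base(Object channel) {
        this.channel = channel;
    }
}

class Sub extends Base {
    // Re-declaring the field hides Base.channel: every Sub instance now
    // carries two separate references that merely happen to point at the
    // same object — redundant storage and easy to get out of sync.
    final Object channel;

    Sub(Object channel) {
        super(channel);
        this.channel = channel;
    }
}

public class FieldShadowing {
    public static void main(String[] args) {
        Sub s = new Sub("ch");
        // Field access is resolved statically: s.channel reads Sub's copy,
        // ((Base) s).channel reads Base's copy.
        System.out.println(s.channel == ((Base) s).channel);
    }
}
```

The merged fix keeps only the single inherited field and downcasts at the use sites instead, e.g. `((AbstractEpollChannel) channel).setFlag(...)` as in the patch below.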
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java" ]
[]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java index c0cceb255af..2d2610c0e27 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java @@ -29,12 +29,10 @@ import static io.netty.channel.unix.Limits.SSIZE_MAX; public class EpollChannelConfig extends DefaultChannelConfig { - final AbstractEpollChannel channel; private volatile long maxBytesPerGatheringWrite = SSIZE_MAX; EpollChannelConfig(AbstractEpollChannel channel) { super(channel); - this.channel = channel; } @Override @@ -136,7 +134,7 @@ public EpollChannelConfig setMessageSizeEstimator(MessageSizeEstimator estimator * {@link EpollMode#LEVEL_TRIGGERED}. */ public EpollMode getEpollMode() { - return channel.isFlagSet(Native.EPOLLET) + return ((AbstractEpollChannel) channel).isFlagSet(Native.EPOLLET) ? 
EpollMode.EDGE_TRIGGERED : EpollMode.LEVEL_TRIGGERED; } @@ -156,11 +154,11 @@ public EpollChannelConfig setEpollMode(EpollMode mode) { switch (mode) { case EDGE_TRIGGERED: checkChannelNotRegistered(); - channel.setFlag(Native.EPOLLET); + ((AbstractEpollChannel) channel).setFlag(Native.EPOLLET); break; case LEVEL_TRIGGERED: checkChannelNotRegistered(); - channel.clearFlag(Native.EPOLLET); + ((AbstractEpollChannel) channel).clearFlag(Native.EPOLLET); break; default: throw new Error(); @@ -179,7 +177,7 @@ private void checkChannelNotRegistered() { @Override protected final void autoReadCleared() { - channel.clearEpollIn(); + ((AbstractEpollChannel) channel).clearEpollIn(); } final void setMaxBytesPerGatheringWrite(long maxBytesPerGatheringWrite) { diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java index fbc44c1bcc1..f3de6ac5947 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java @@ -31,12 +31,10 @@ public final class EpollDatagramChannelConfig extends EpollChannelConfig implements DatagramChannelConfig { private static final RecvByteBufAllocator DEFAULT_RCVBUF_ALLOCATOR = new FixedRecvByteBufAllocator(2048); - private final EpollDatagramChannel datagramChannel; private boolean activeOnOpen; EpollDatagramChannelConfig(EpollDatagramChannel channel) { super(channel); - datagramChannel = channel; setRecvByteBufAllocator(DEFAULT_RCVBUF_ALLOCATOR); } @@ -219,7 +217,7 @@ public EpollDatagramChannelConfig setMaxMessagesPerRead(int maxMessagesPerRead) @Override public int getSendBufferSize() { try { - return datagramChannel.socket.getSendBufferSize(); + return ((EpollDatagramChannel) channel).socket.getSendBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ 
-228,7 +226,7 @@ public int getSendBufferSize() { @Override public EpollDatagramChannelConfig setSendBufferSize(int sendBufferSize) { try { - datagramChannel.socket.setSendBufferSize(sendBufferSize); + ((EpollDatagramChannel) channel).socket.setSendBufferSize(sendBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -238,7 +236,7 @@ public EpollDatagramChannelConfig setSendBufferSize(int sendBufferSize) { @Override public int getReceiveBufferSize() { try { - return datagramChannel.socket.getReceiveBufferSize(); + return ((EpollDatagramChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -247,7 +245,7 @@ public int getReceiveBufferSize() { @Override public EpollDatagramChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - datagramChannel.socket.setReceiveBufferSize(receiveBufferSize); + ((EpollDatagramChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -257,7 +255,7 @@ public EpollDatagramChannelConfig setReceiveBufferSize(int receiveBufferSize) { @Override public int getTrafficClass() { try { - return datagramChannel.socket.getTrafficClass(); + return ((EpollDatagramChannel) channel).socket.getTrafficClass(); } catch (IOException e) { throw new ChannelException(e); } @@ -266,7 +264,7 @@ public int getTrafficClass() { @Override public EpollDatagramChannelConfig setTrafficClass(int trafficClass) { try { - datagramChannel.socket.setTrafficClass(trafficClass); + ((EpollDatagramChannel) channel).socket.setTrafficClass(trafficClass); return this; } catch (IOException e) { throw new ChannelException(e); @@ -276,7 +274,7 @@ public EpollDatagramChannelConfig setTrafficClass(int trafficClass) { @Override public boolean isReuseAddress() { try { - return datagramChannel.socket.isReuseAddress(); + return ((EpollDatagramChannel) channel).socket.isReuseAddress(); } catch (IOException e) { 
throw new ChannelException(e); } @@ -285,7 +283,7 @@ public boolean isReuseAddress() { @Override public EpollDatagramChannelConfig setReuseAddress(boolean reuseAddress) { try { - datagramChannel.socket.setReuseAddress(reuseAddress); + ((EpollDatagramChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -295,7 +293,7 @@ public EpollDatagramChannelConfig setReuseAddress(boolean reuseAddress) { @Override public boolean isBroadcast() { try { - return datagramChannel.socket.isBroadcast(); + return ((EpollDatagramChannel) channel).socket.isBroadcast(); } catch (IOException e) { throw new ChannelException(e); } @@ -304,7 +302,7 @@ public boolean isBroadcast() { @Override public EpollDatagramChannelConfig setBroadcast(boolean broadcast) { try { - datagramChannel.socket.setBroadcast(broadcast); + ((EpollDatagramChannel) channel).socket.setBroadcast(broadcast); return this; } catch (IOException e) { throw new ChannelException(e); @@ -362,7 +360,7 @@ public EpollDatagramChannelConfig setEpollMode(EpollMode mode) { */ public boolean isReusePort() { try { - return datagramChannel.socket.isReusePort(); + return ((EpollDatagramChannel) channel).socket.isReusePort(); } catch (IOException e) { throw new ChannelException(e); } @@ -377,7 +375,7 @@ public boolean isReusePort() { */ public EpollDatagramChannelConfig setReusePort(boolean reusePort) { try { - datagramChannel.socket.setReusePort(reusePort); + ((EpollDatagramChannel) channel).socket.setReusePort(reusePort); return this; } catch (IOException e) { throw new ChannelException(e); @@ -390,7 +388,7 @@ public EpollDatagramChannelConfig setReusePort(boolean reusePort) { */ public boolean isIpTransparent() { try { - return datagramChannel.socket.isIpTransparent(); + return ((EpollDatagramChannel) channel).socket.isIpTransparent(); } catch (IOException e) { throw new ChannelException(e); } @@ -402,7 +400,7 @@ public boolean isIpTransparent() { */ public 
EpollDatagramChannelConfig setIpTransparent(boolean ipTransparent) { try { - datagramChannel.socket.setIpTransparent(ipTransparent); + ((EpollDatagramChannel) channel).socket.setIpTransparent(ipTransparent); return this; } catch (IOException e) { throw new ChannelException(e); @@ -415,7 +413,7 @@ public EpollDatagramChannelConfig setIpTransparent(boolean ipTransparent) { */ public boolean isIpRecvOrigDestAddr() { try { - return datagramChannel.socket.isIpRecvOrigDestAddr(); + return ((EpollDatagramChannel) channel).socket.isIpRecvOrigDestAddr(); } catch (IOException e) { throw new ChannelException(e); } @@ -427,7 +425,7 @@ public boolean isIpRecvOrigDestAddr() { */ public EpollDatagramChannelConfig setIpRecvOrigDestAddr(boolean ipTransparent) { try { - datagramChannel.socket.setIpRecvOrigDestAddr(ipTransparent); + ((EpollDatagramChannel) channel).socket.setIpRecvOrigDestAddr(ipTransparent); return this; } catch (IOException e) { throw new ChannelException(e); diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java index 5d6394b1428..514b6c34cc4 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerChannelConfig.java @@ -32,13 +32,11 @@ import static io.netty.channel.ChannelOption.SO_REUSEADDR; public class EpollServerChannelConfig extends EpollChannelConfig implements ServerSocketChannelConfig { - protected final AbstractEpollChannel channel; private volatile int backlog = NetUtil.SOMAXCONN; private volatile int pendingFastOpenRequestsThreshold; EpollServerChannelConfig(AbstractEpollChannel channel) { super(channel); - this.channel = channel; } @Override @@ -85,7 +83,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { public boolean isReuseAddress() { try { - return 
channel.socket.isReuseAddress(); + return ((AbstractEpollChannel) channel).socket.isReuseAddress(); } catch (IOException e) { throw new ChannelException(e); } @@ -93,7 +91,7 @@ public boolean isReuseAddress() { public EpollServerChannelConfig setReuseAddress(boolean reuseAddress) { try { - channel.socket.setReuseAddress(reuseAddress); + ((AbstractEpollChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -102,7 +100,7 @@ public EpollServerChannelConfig setReuseAddress(boolean reuseAddress) { public int getReceiveBufferSize() { try { - return channel.socket.getReceiveBufferSize(); + return ((AbstractEpollChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -110,7 +108,7 @@ public int getReceiveBufferSize() { public EpollServerChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - channel.socket.setReceiveBufferSize(receiveBufferSize); + ((AbstractEpollChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java index dfccb199c98..91861f0c137 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java @@ -191,7 +191,7 @@ public EpollServerSocketChannelConfig setTcpMd5Sig(Map<InetAddress, byte[]> keys */ public boolean isReusePort() { try { - return channel.socket.isReusePort(); + return ((EpollServerSocketChannel) channel).socket.isReusePort(); } catch (IOException e) { throw new ChannelException(e); } @@ -206,7 +206,7 @@ public boolean isReusePort() { */ public EpollServerSocketChannelConfig 
setReusePort(boolean reusePort) { try { - channel.socket.setReusePort(reusePort); + ((EpollServerSocketChannel) channel).socket.setReusePort(reusePort); return this; } catch (IOException e) { throw new ChannelException(e); @@ -219,7 +219,7 @@ public EpollServerSocketChannelConfig setReusePort(boolean reusePort) { */ public boolean isFreeBind() { try { - return channel.socket.isIpFreeBind(); + return ((EpollServerSocketChannel) channel).socket.isIpFreeBind(); } catch (IOException e) { throw new ChannelException(e); } @@ -231,7 +231,7 @@ public boolean isFreeBind() { */ public EpollServerSocketChannelConfig setFreeBind(boolean freeBind) { try { - channel.socket.setIpFreeBind(freeBind); + ((EpollServerSocketChannel) channel).socket.setIpFreeBind(freeBind); return this; } catch (IOException e) { throw new ChannelException(e); @@ -244,7 +244,7 @@ public EpollServerSocketChannelConfig setFreeBind(boolean freeBind) { */ public boolean isIpTransparent() { try { - return channel.socket.isIpTransparent(); + return ((EpollServerSocketChannel) channel).socket.isIpTransparent(); } catch (IOException e) { throw new ChannelException(e); } @@ -256,7 +256,7 @@ public boolean isIpTransparent() { */ public EpollServerSocketChannelConfig setIpTransparent(boolean transparent) { try { - channel.socket.setIpTransparent(transparent); + ((EpollServerSocketChannel) channel).socket.setIpTransparent(transparent); return this; } catch (IOException e) { throw new ChannelException(e); @@ -268,7 +268,7 @@ public EpollServerSocketChannelConfig setIpTransparent(boolean transparent) { */ public EpollServerSocketChannelConfig setTcpDeferAccept(int deferAccept) { try { - channel.socket.setTcpDeferAccept(deferAccept); + ((EpollServerSocketChannel) channel).socket.setTcpDeferAccept(deferAccept); return this; } catch (IOException e) { throw new ChannelException(e); @@ -280,7 +280,7 @@ public EpollServerSocketChannelConfig setTcpDeferAccept(int deferAccept) { */ public int getTcpDeferAccept() { try { - 
return channel.socket.getTcpDeferAccept(); + return ((EpollServerSocketChannel) channel).socket.getTcpDeferAccept(); } catch (IOException e) { throw new ChannelException(e); } diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java index d61bf19e27e..f3d04dd3f95 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java @@ -38,7 +38,6 @@ import static io.netty.channel.ChannelOption.TCP_NODELAY; public final class EpollSocketChannelConfig extends EpollChannelConfig implements SocketChannelConfig { - private final EpollSocketChannel channel; private volatile boolean allowHalfClosure; /** @@ -47,7 +46,6 @@ public final class EpollSocketChannelConfig extends EpollChannelConfig implement EpollSocketChannelConfig(EpollSocketChannel channel) { super(channel); - this.channel = channel; if (PlatformDependent.canEnableTcpNoDelayByDefault()) { setTcpNoDelay(true); } @@ -179,7 +177,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { @Override public int getReceiveBufferSize() { try { - return channel.socket.getReceiveBufferSize(); + return ((EpollSocketChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -188,7 +186,7 @@ public int getReceiveBufferSize() { @Override public int getSendBufferSize() { try { - return channel.socket.getSendBufferSize(); + return ((EpollSocketChannel) channel).socket.getSendBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -197,7 +195,7 @@ public int getSendBufferSize() { @Override public int getSoLinger() { try { - return channel.socket.getSoLinger(); + return ((EpollSocketChannel) channel).socket.getSoLinger(); } catch (IOException e) { throw new ChannelException(e); } @@ 
-206,7 +204,7 @@ public int getSoLinger() { @Override public int getTrafficClass() { try { - return channel.socket.getTrafficClass(); + return ((EpollSocketChannel) channel).socket.getTrafficClass(); } catch (IOException e) { throw new ChannelException(e); } @@ -215,7 +213,7 @@ public int getTrafficClass() { @Override public boolean isKeepAlive() { try { - return channel.socket.isKeepAlive(); + return ((EpollSocketChannel) channel).socket.isKeepAlive(); } catch (IOException e) { throw new ChannelException(e); } @@ -224,7 +222,7 @@ public boolean isKeepAlive() { @Override public boolean isReuseAddress() { try { - return channel.socket.isReuseAddress(); + return ((EpollSocketChannel) channel).socket.isReuseAddress(); } catch (IOException e) { throw new ChannelException(e); } @@ -233,7 +231,7 @@ public boolean isReuseAddress() { @Override public boolean isTcpNoDelay() { try { - return channel.socket.isTcpNoDelay(); + return ((EpollSocketChannel) channel).socket.isTcpNoDelay(); } catch (IOException e) { throw new ChannelException(e); } @@ -244,7 +242,7 @@ public boolean isTcpNoDelay() { */ public boolean isTcpCork() { try { - return channel.socket.isTcpCork(); + return ((EpollSocketChannel) channel).socket.isTcpCork(); } catch (IOException e) { throw new ChannelException(e); } @@ -255,7 +253,7 @@ public boolean isTcpCork() { */ public int getSoBusyPoll() { try { - return channel.socket.getSoBusyPoll(); + return ((EpollSocketChannel) channel).socket.getSoBusyPoll(); } catch (IOException e) { throw new ChannelException(e); } @@ -267,7 +265,7 @@ public int getSoBusyPoll() { */ public long getTcpNotSentLowAt() { try { - return channel.socket.getTcpNotSentLowAt(); + return ((EpollSocketChannel) channel).socket.getTcpNotSentLowAt(); } catch (IOException e) { throw new ChannelException(e); } @@ -278,7 +276,7 @@ public long getTcpNotSentLowAt() { */ public int getTcpKeepIdle() { try { - return channel.socket.getTcpKeepIdle(); + return ((EpollSocketChannel) 
channel).socket.getTcpKeepIdle(); } catch (IOException e) { throw new ChannelException(e); } @@ -289,7 +287,7 @@ public int getTcpKeepIdle() { */ public int getTcpKeepIntvl() { try { - return channel.socket.getTcpKeepIntvl(); + return ((EpollSocketChannel) channel).socket.getTcpKeepIntvl(); } catch (IOException e) { throw new ChannelException(e); } @@ -300,7 +298,7 @@ public int getTcpKeepIntvl() { */ public int getTcpKeepCnt() { try { - return channel.socket.getTcpKeepCnt(); + return ((EpollSocketChannel) channel).socket.getTcpKeepCnt(); } catch (IOException e) { throw new ChannelException(e); } @@ -311,7 +309,7 @@ public int getTcpKeepCnt() { */ public int getTcpUserTimeout() { try { - return channel.socket.getTcpUserTimeout(); + return ((EpollSocketChannel) channel).socket.getTcpUserTimeout(); } catch (IOException e) { throw new ChannelException(e); } @@ -320,7 +318,7 @@ public int getTcpUserTimeout() { @Override public EpollSocketChannelConfig setKeepAlive(boolean keepAlive) { try { - channel.socket.setKeepAlive(keepAlive); + ((EpollSocketChannel) channel).socket.setKeepAlive(keepAlive); return this; } catch (IOException e) { throw new ChannelException(e); @@ -336,7 +334,7 @@ public EpollSocketChannelConfig setPerformancePreferences( @Override public EpollSocketChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - channel.socket.setReceiveBufferSize(receiveBufferSize); + ((EpollSocketChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -346,7 +344,7 @@ public EpollSocketChannelConfig setReceiveBufferSize(int receiveBufferSize) { @Override public EpollSocketChannelConfig setReuseAddress(boolean reuseAddress) { try { - channel.socket.setReuseAddress(reuseAddress); + ((EpollSocketChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -356,7 +354,7 @@ public EpollSocketChannelConfig 
setReuseAddress(boolean reuseAddress) { @Override public EpollSocketChannelConfig setSendBufferSize(int sendBufferSize) { try { - channel.socket.setSendBufferSize(sendBufferSize); + ((EpollSocketChannel) channel).socket.setSendBufferSize(sendBufferSize); calculateMaxBytesPerGatheringWrite(); return this; } catch (IOException e) { @@ -367,7 +365,7 @@ public EpollSocketChannelConfig setSendBufferSize(int sendBufferSize) { @Override public EpollSocketChannelConfig setSoLinger(int soLinger) { try { - channel.socket.setSoLinger(soLinger); + ((EpollSocketChannel) channel).socket.setSoLinger(soLinger); return this; } catch (IOException e) { throw new ChannelException(e); @@ -377,7 +375,7 @@ public EpollSocketChannelConfig setSoLinger(int soLinger) { @Override public EpollSocketChannelConfig setTcpNoDelay(boolean tcpNoDelay) { try { - channel.socket.setTcpNoDelay(tcpNoDelay); + ((EpollSocketChannel) channel).socket.setTcpNoDelay(tcpNoDelay); return this; } catch (IOException e) { throw new ChannelException(e); @@ -389,7 +387,7 @@ public EpollSocketChannelConfig setTcpNoDelay(boolean tcpNoDelay) { */ public EpollSocketChannelConfig setTcpCork(boolean tcpCork) { try { - channel.socket.setTcpCork(tcpCork); + ((EpollSocketChannel) channel).socket.setTcpCork(tcpCork); return this; } catch (IOException e) { throw new ChannelException(e); @@ -401,7 +399,7 @@ public EpollSocketChannelConfig setTcpCork(boolean tcpCork) { */ public EpollSocketChannelConfig setSoBusyPoll(int loopMicros) { try { - channel.socket.setSoBusyPoll(loopMicros); + ((EpollSocketChannel) channel).socket.setSoBusyPoll(loopMicros); return this; } catch (IOException e) { throw new ChannelException(e); @@ -414,7 +412,7 @@ public EpollSocketChannelConfig setSoBusyPoll(int loopMicros) { */ public EpollSocketChannelConfig setTcpNotSentLowAt(long tcpNotSentLowAt) { try { - channel.socket.setTcpNotSentLowAt(tcpNotSentLowAt); + ((EpollSocketChannel) channel).socket.setTcpNotSentLowAt(tcpNotSentLowAt); return this; } 
catch (IOException e) { throw new ChannelException(e); @@ -424,7 +422,7 @@ public EpollSocketChannelConfig setTcpNotSentLowAt(long tcpNotSentLowAt) { @Override public EpollSocketChannelConfig setTrafficClass(int trafficClass) { try { - channel.socket.setTrafficClass(trafficClass); + ((EpollSocketChannel) channel).socket.setTrafficClass(trafficClass); return this; } catch (IOException e) { throw new ChannelException(e); @@ -436,7 +434,7 @@ public EpollSocketChannelConfig setTrafficClass(int trafficClass) { */ public EpollSocketChannelConfig setTcpKeepIdle(int seconds) { try { - channel.socket.setTcpKeepIdle(seconds); + ((EpollSocketChannel) channel).socket.setTcpKeepIdle(seconds); return this; } catch (IOException e) { throw new ChannelException(e); @@ -448,7 +446,7 @@ public EpollSocketChannelConfig setTcpKeepIdle(int seconds) { */ public EpollSocketChannelConfig setTcpKeepIntvl(int seconds) { try { - channel.socket.setTcpKeepIntvl(seconds); + ((EpollSocketChannel) channel).socket.setTcpKeepIntvl(seconds); return this; } catch (IOException e) { throw new ChannelException(e); @@ -468,7 +466,7 @@ public EpollSocketChannelConfig setTcpKeepCntl(int probes) { */ public EpollSocketChannelConfig setTcpKeepCnt(int probes) { try { - channel.socket.setTcpKeepCnt(probes); + ((EpollSocketChannel) channel).socket.setTcpKeepCnt(probes); return this; } catch (IOException e) { throw new ChannelException(e); @@ -480,7 +478,7 @@ public EpollSocketChannelConfig setTcpKeepCnt(int probes) { */ public EpollSocketChannelConfig setTcpUserTimeout(int milliseconds) { try { - channel.socket.setTcpUserTimeout(milliseconds); + ((EpollSocketChannel) channel).socket.setTcpUserTimeout(milliseconds); return this; } catch (IOException e) { throw new ChannelException(e); @@ -493,7 +491,7 @@ public EpollSocketChannelConfig setTcpUserTimeout(int milliseconds) { */ public boolean isIpTransparent() { try { - return channel.socket.isIpTransparent(); + return ((EpollSocketChannel) 
channel).socket.isIpTransparent(); } catch (IOException e) { throw new ChannelException(e); } @@ -505,7 +503,7 @@ public boolean isIpTransparent() { */ public EpollSocketChannelConfig setIpTransparent(boolean transparent) { try { - channel.socket.setIpTransparent(transparent); + ((EpollSocketChannel) channel).socket.setIpTransparent(transparent); return this; } catch (IOException e) { throw new ChannelException(e); @@ -519,7 +517,7 @@ public EpollSocketChannelConfig setIpTransparent(boolean transparent) { */ public EpollSocketChannelConfig setTcpMd5Sig(Map<InetAddress, byte[]> keys) { try { - channel.setTcpMd5Sig(keys); + ((EpollSocketChannel) channel).setTcpMd5Sig(keys); return this; } catch (IOException e) { throw new ChannelException(e); @@ -532,7 +530,7 @@ public EpollSocketChannelConfig setTcpMd5Sig(Map<InetAddress, byte[]> keys) { */ public EpollSocketChannelConfig setTcpQuickAck(boolean quickAck) { try { - channel.socket.setTcpQuickAck(quickAck); + ((EpollSocketChannel) channel).socket.setTcpQuickAck(quickAck); return this; } catch (IOException e) { throw new ChannelException(e); @@ -545,7 +543,7 @@ public EpollSocketChannelConfig setTcpQuickAck(boolean quickAck) { */ public boolean isTcpQuickAck() { try { - return channel.socket.isTcpQuickAck(); + return ((EpollSocketChannel) channel).socket.isTcpQuickAck(); } catch (IOException e) { throw new ChannelException(e); } @@ -559,7 +557,7 @@ public boolean isTcpQuickAck() { */ public EpollSocketChannelConfig setTcpFastOpenConnect(boolean fastOpenConnect) { try { - channel.socket.setTcpFastOpenConnect(fastOpenConnect); + ((EpollSocketChannel) channel).socket.setTcpFastOpenConnect(fastOpenConnect); return this; } catch (IOException e) { throw new ChannelException(e); @@ -571,7 +569,7 @@ public EpollSocketChannelConfig setTcpFastOpenConnect(boolean fastOpenConnect) { */ public boolean isTcpFastOpenConnect() { try { - return channel.socket.isTcpFastOpenConnect(); + return ((EpollSocketChannel) 
channel).socket.isTcpFastOpenConnect(); } catch (IOException e) { throw new ChannelException(e); } diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java index 878663c5e74..c2b1debe983 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueChannelConfig.java @@ -31,13 +31,11 @@ @UnstableApi public class KQueueChannelConfig extends DefaultChannelConfig { - final AbstractKQueueChannel channel; private volatile boolean transportProvidesGuess; private volatile long maxBytesPerGatheringWrite = SSIZE_MAX; KQueueChannelConfig(AbstractKQueueChannel channel) { super(channel); - this.channel = channel; } @Override @@ -154,7 +152,7 @@ public KQueueChannelConfig setMessageSizeEstimator(MessageSizeEstimator estimato @Override protected final void autoReadCleared() { - channel.clearReadFilter(); + ((AbstractKQueueChannel) channel).clearReadFilter(); } final void setMaxBytesPerGatheringWrite(long maxBytesPerGatheringWrite) { diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java index c64417485b9..478d5544d16 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueDatagramChannelConfig.java @@ -45,12 +45,10 @@ @UnstableApi public final class KQueueDatagramChannelConfig extends KQueueChannelConfig implements DatagramChannelConfig { private static final RecvByteBufAllocator DEFAULT_RCVBUF_ALLOCATOR = new FixedRecvByteBufAllocator(2048); - private final KQueueDatagramChannel datagramChannel; private boolean activeOnOpen; 
KQueueDatagramChannelConfig(KQueueDatagramChannel channel) { super(channel); - this.datagramChannel = channel; setRecvByteBufAllocator(DEFAULT_RCVBUF_ALLOCATOR); } @@ -153,7 +151,7 @@ boolean getActiveOnOpen() { */ public boolean isReusePort() { try { - return datagramChannel.socket.isReusePort(); + return ((KQueueDatagramChannel) channel).socket.isReusePort(); } catch (IOException e) { throw new ChannelException(e); } @@ -168,7 +166,7 @@ public boolean isReusePort() { */ public KQueueDatagramChannelConfig setReusePort(boolean reusePort) { try { - datagramChannel.socket.setReusePort(reusePort); + ((KQueueDatagramChannel) channel).socket.setReusePort(reusePort); return this; } catch (IOException e) { throw new ChannelException(e); @@ -253,7 +251,7 @@ public KQueueDatagramChannelConfig setMaxMessagesPerRead(int maxMessagesPerRead) @Override public int getSendBufferSize() { try { - return datagramChannel.socket.getSendBufferSize(); + return ((KQueueDatagramChannel) channel).socket.getSendBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -262,7 +260,7 @@ public int getSendBufferSize() { @Override public KQueueDatagramChannelConfig setSendBufferSize(int sendBufferSize) { try { - datagramChannel.socket.setSendBufferSize(sendBufferSize); + ((KQueueDatagramChannel) channel).socket.setSendBufferSize(sendBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -272,7 +270,7 @@ public KQueueDatagramChannelConfig setSendBufferSize(int sendBufferSize) { @Override public int getReceiveBufferSize() { try { - return datagramChannel.socket.getReceiveBufferSize(); + return ((KQueueDatagramChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -281,7 +279,7 @@ public int getReceiveBufferSize() { @Override public KQueueDatagramChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - datagramChannel.socket.setReceiveBufferSize(receiveBufferSize); + 
((KQueueDatagramChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -291,7 +289,7 @@ public KQueueDatagramChannelConfig setReceiveBufferSize(int receiveBufferSize) { @Override public int getTrafficClass() { try { - return datagramChannel.socket.getTrafficClass(); + return ((KQueueDatagramChannel) channel).socket.getTrafficClass(); } catch (IOException e) { throw new ChannelException(e); } @@ -300,7 +298,7 @@ public int getTrafficClass() { @Override public KQueueDatagramChannelConfig setTrafficClass(int trafficClass) { try { - datagramChannel.socket.setTrafficClass(trafficClass); + ((KQueueDatagramChannel) channel).socket.setTrafficClass(trafficClass); return this; } catch (IOException e) { throw new ChannelException(e); @@ -310,7 +308,7 @@ public KQueueDatagramChannelConfig setTrafficClass(int trafficClass) { @Override public boolean isReuseAddress() { try { - return datagramChannel.socket.isReuseAddress(); + return ((KQueueDatagramChannel) channel).socket.isReuseAddress(); } catch (IOException e) { throw new ChannelException(e); } @@ -319,7 +317,7 @@ public boolean isReuseAddress() { @Override public KQueueDatagramChannelConfig setReuseAddress(boolean reuseAddress) { try { - datagramChannel.socket.setReuseAddress(reuseAddress); + ((KQueueDatagramChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -329,7 +327,7 @@ public KQueueDatagramChannelConfig setReuseAddress(boolean reuseAddress) { @Override public boolean isBroadcast() { try { - return datagramChannel.socket.isBroadcast(); + return ((KQueueDatagramChannel) channel).socket.isBroadcast(); } catch (IOException e) { throw new ChannelException(e); } @@ -338,7 +336,7 @@ public boolean isBroadcast() { @Override public KQueueDatagramChannelConfig setBroadcast(boolean broadcast) { try { - datagramChannel.socket.setBroadcast(broadcast); + 
((KQueueDatagramChannel) channel).socket.setBroadcast(broadcast); return this; } catch (IOException e) { throw new ChannelException(e); diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java index 7f878dc2577..09291f58dbf 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerChannelConfig.java @@ -34,12 +34,10 @@ @UnstableApi public class KQueueServerChannelConfig extends KQueueChannelConfig implements ServerSocketChannelConfig { - protected final AbstractKQueueChannel channel; private volatile int backlog = NetUtil.SOMAXCONN; KQueueServerChannelConfig(AbstractKQueueChannel channel) { super(channel); - this.channel = channel; } @Override @@ -81,7 +79,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { public boolean isReuseAddress() { try { - return channel.socket.isReuseAddress(); + return ((AbstractKQueueChannel) channel).socket.isReuseAddress(); } catch (IOException e) { throw new ChannelException(e); } @@ -89,7 +87,7 @@ public boolean isReuseAddress() { public KQueueServerChannelConfig setReuseAddress(boolean reuseAddress) { try { - channel.socket.setReuseAddress(reuseAddress); + ((AbstractKQueueChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -98,7 +96,7 @@ public KQueueServerChannelConfig setReuseAddress(boolean reuseAddress) { public int getReceiveBufferSize() { try { - return channel.socket.getReceiveBufferSize(); + return ((AbstractKQueueChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -106,7 +104,7 @@ public int getReceiveBufferSize() { public KQueueServerChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - 
channel.socket.setReceiveBufferSize(receiveBufferSize); + ((AbstractKQueueChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java index dce3e6e71b8..a743e039de6 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueServerSocketChannelConfig.java @@ -75,7 +75,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { public KQueueServerSocketChannelConfig setReusePort(boolean reusePort) { try { - channel.socket.setReusePort(reusePort); + ((KQueueServerSocketChannel) channel).socket.setReusePort(reusePort); return this; } catch (IOException e) { throw new ChannelException(e); @@ -84,7 +84,7 @@ public KQueueServerSocketChannelConfig setReusePort(boolean reusePort) { public boolean isReusePort() { try { - return channel.socket.isReusePort(); + return ((KQueueServerSocketChannel) channel).socket.isReusePort(); } catch (IOException e) { throw new ChannelException(e); } @@ -92,7 +92,7 @@ public boolean isReusePort() { public KQueueServerSocketChannelConfig setAcceptFilter(AcceptFilter acceptFilter) { try { - channel.socket.setAcceptFilter(acceptFilter); + ((KQueueServerSocketChannel) channel).socket.setAcceptFilter(acceptFilter); return this; } catch (IOException e) { throw new ChannelException(e); @@ -101,7 +101,7 @@ public KQueueServerSocketChannelConfig setAcceptFilter(AcceptFilter acceptFilter public AcceptFilter getAcceptFilter() { try { - return channel.socket.getAcceptFilter(); + return ((KQueueServerSocketChannel) channel).socket.getAcceptFilter(); } catch (IOException e) { throw new ChannelException(e); } diff --git 
a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java index 8662e55c7b9..b5c718b9113 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueSocketChannelConfig.java @@ -41,12 +41,10 @@ @UnstableApi public final class KQueueSocketChannelConfig extends KQueueChannelConfig implements SocketChannelConfig { - private final KQueueSocketChannel channel; private volatile boolean allowHalfClosure; KQueueSocketChannelConfig(KQueueSocketChannel channel) { super(channel); - this.channel = channel; if (PlatformDependent.canEnableTcpNoDelayByDefault()) { setTcpNoDelay(true); } @@ -131,7 +129,7 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { @Override public int getReceiveBufferSize() { try { - return channel.socket.getReceiveBufferSize(); + return ((KQueueSocketChannel) channel).socket.getReceiveBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -140,7 +138,7 @@ public int getReceiveBufferSize() { @Override public int getSendBufferSize() { try { - return channel.socket.getSendBufferSize(); + return ((KQueueSocketChannel) channel).socket.getSendBufferSize(); } catch (IOException e) { throw new ChannelException(e); } @@ -149,7 +147,7 @@ public int getSendBufferSize() { @Override public int getSoLinger() { try { - return channel.socket.getSoLinger(); + return ((KQueueSocketChannel) channel).socket.getSoLinger(); } catch (IOException e) { throw new ChannelException(e); } @@ -158,7 +156,7 @@ public int getSoLinger() { @Override public int getTrafficClass() { try { - return channel.socket.getTrafficClass(); + return ((KQueueSocketChannel) channel).socket.getTrafficClass(); } catch (IOException e) { throw new ChannelException(e); } @@ -167,7 +165,7 @@ public int getTrafficClass() { 
@Override public boolean isKeepAlive() { try { - return channel.socket.isKeepAlive(); + return ((KQueueSocketChannel) channel).socket.isKeepAlive(); } catch (IOException e) { throw new ChannelException(e); } @@ -176,7 +174,7 @@ public boolean isKeepAlive() { @Override public boolean isReuseAddress() { try { - return channel.socket.isReuseAddress(); + return ((KQueueSocketChannel) channel).socket.isReuseAddress(); } catch (IOException e) { throw new ChannelException(e); } @@ -185,7 +183,7 @@ public boolean isReuseAddress() { @Override public boolean isTcpNoDelay() { try { - return channel.socket.isTcpNoDelay(); + return ((KQueueSocketChannel) channel).socket.isTcpNoDelay(); } catch (IOException e) { throw new ChannelException(e); } @@ -193,7 +191,7 @@ public boolean isTcpNoDelay() { public int getSndLowAt() { try { - return channel.socket.getSndLowAt(); + return ((KQueueSocketChannel) channel).socket.getSndLowAt(); } catch (IOException e) { throw new ChannelException(e); } @@ -201,7 +199,7 @@ public int getSndLowAt() { public void setSndLowAt(int sndLowAt) { try { - channel.socket.setSndLowAt(sndLowAt); + ((KQueueSocketChannel) channel).socket.setSndLowAt(sndLowAt); } catch (IOException e) { throw new ChannelException(e); } @@ -209,7 +207,7 @@ public void setSndLowAt(int sndLowAt) { public boolean isTcpNoPush() { try { - return channel.socket.isTcpNoPush(); + return ((KQueueSocketChannel) channel).socket.isTcpNoPush(); } catch (IOException e) { throw new ChannelException(e); } @@ -217,7 +215,7 @@ public boolean isTcpNoPush() { public void setTcpNoPush(boolean tcpNoPush) { try { - channel.socket.setTcpNoPush(tcpNoPush); + ((KQueueSocketChannel) channel).socket.setTcpNoPush(tcpNoPush); } catch (IOException e) { throw new ChannelException(e); } @@ -226,7 +224,7 @@ public void setTcpNoPush(boolean tcpNoPush) { @Override public KQueueSocketChannelConfig setKeepAlive(boolean keepAlive) { try { - channel.socket.setKeepAlive(keepAlive); + ((KQueueSocketChannel) 
channel).socket.setKeepAlive(keepAlive); return this; } catch (IOException e) { throw new ChannelException(e); @@ -236,7 +234,7 @@ public KQueueSocketChannelConfig setKeepAlive(boolean keepAlive) { @Override public KQueueSocketChannelConfig setReceiveBufferSize(int receiveBufferSize) { try { - channel.socket.setReceiveBufferSize(receiveBufferSize); + ((KQueueSocketChannel) channel).socket.setReceiveBufferSize(receiveBufferSize); return this; } catch (IOException e) { throw new ChannelException(e); @@ -246,7 +244,7 @@ public KQueueSocketChannelConfig setReceiveBufferSize(int receiveBufferSize) { @Override public KQueueSocketChannelConfig setReuseAddress(boolean reuseAddress) { try { - channel.socket.setReuseAddress(reuseAddress); + ((KQueueSocketChannel) channel).socket.setReuseAddress(reuseAddress); return this; } catch (IOException e) { throw new ChannelException(e); @@ -256,7 +254,7 @@ public KQueueSocketChannelConfig setReuseAddress(boolean reuseAddress) { @Override public KQueueSocketChannelConfig setSendBufferSize(int sendBufferSize) { try { - channel.socket.setSendBufferSize(sendBufferSize); + ((KQueueSocketChannel) channel).socket.setSendBufferSize(sendBufferSize); calculateMaxBytesPerGatheringWrite(); return this; } catch (IOException e) { @@ -267,7 +265,7 @@ public KQueueSocketChannelConfig setSendBufferSize(int sendBufferSize) { @Override public KQueueSocketChannelConfig setSoLinger(int soLinger) { try { - channel.socket.setSoLinger(soLinger); + ((KQueueSocketChannel) channel).socket.setSoLinger(soLinger); return this; } catch (IOException e) { throw new ChannelException(e); @@ -277,7 +275,7 @@ public KQueueSocketChannelConfig setSoLinger(int soLinger) { @Override public KQueueSocketChannelConfig setTcpNoDelay(boolean tcpNoDelay) { try { - channel.socket.setTcpNoDelay(tcpNoDelay); + ((KQueueSocketChannel) channel).socket.setTcpNoDelay(tcpNoDelay); return this; } catch (IOException e) { throw new ChannelException(e); @@ -287,7 +285,7 @@ public 
KQueueSocketChannelConfig setTcpNoDelay(boolean tcpNoDelay) { @Override public KQueueSocketChannelConfig setTrafficClass(int trafficClass) { try { - channel.socket.setTrafficClass(trafficClass); + ((KQueueSocketChannel) channel).socket.setTrafficClass(trafficClass); return this; } catch (IOException e) { throw new ChannelException(e);
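The diff above removes the duplicated `KQueueSocketChannel channel` field from the config class and instead down-casts the `channel` reference inherited from the superclass at each use site. A minimal, self-contained sketch of that pattern (the class names here are simplified stand-ins, not Netty's real hierarchy):

```java
// Simplified stand-ins for the channel/config hierarchy in the diff above.
class Channel { }

class SocketChannel extends Channel {
    // Stand-in for the package-private `socket` state the config reads/writes.
    int receiveBufferSize = 65536;
}

class ChannelConfig {
    // The superclass already holds the channel; the subclass no longer
    // keeps a second, more narrowly typed copy of the same reference.
    protected final Channel channel;
    ChannelConfig(Channel channel) { this.channel = channel; }
}

class SocketChannelConfig extends ChannelConfig {
    SocketChannelConfig(SocketChannel channel) { super(channel); }

    int getReceiveBufferSize() {
        // Cast on use instead of storing the channel twice. The constructor
        // only accepts a SocketChannel, so the cast cannot fail here.
        return ((SocketChannel) channel).receiveBufferSize;
    }
}

public class CastOnUseDemo {
    public static void main(String[] args) {
        System.out.println(new SocketChannelConfig(new SocketChannel()).getReceiveBufferSize());
    }
}
```

Trading the extra field for a cast keeps a single source of truth for the channel reference, at the cost of one checked cast per accessor.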
null
train
val
"2018-09-28T11:34:38"
"2018-09-27T17:53:14Z"
rkapsi
val
netty/netty/8348_8350
netty/netty
netty/netty/8348
netty/netty/8350
[ "timestamp(timedelta=1.0, similarity=1.0000000000000002)" ]
0e4186c5525eacf8fd1f3ce706c5908546c307ec
5b3b8db07fdb66d05e1a44193ca7bcaa64420051
[ "I can fill in some of the blanks about the environment where we've seen the issue:\r\n\r\n### JVM version (e.g. java -version)\r\n\r\n```\r\nopenjdk version \"1.8.0_181\"\r\nOpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13)\r\nOpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)\r\n```\r\n\r\n### OS version (e.g. name -a)\r\n\r\n```\r\nLinux e4fff31d-4e97-44d3-5088-942369c43954 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 GNU/Linux\r\n```\r\n\r\n", "@wilkinsona this sounds right... would you be able to submit a PR to fix it with a unit test included.\r\n\r\n/cc @simonbasle \r\n\r\nAlso @carl-mastrangelo as he did the pr...", "I'm releasing reactor-netty today and will submit a PR for the fix right after, once I got a Linux VM set up to reproduce and verify the fix", "@simonbasle thanks a lot! ", "@simonbasle I can lend a hand, standby.", "@normanmaurer there's something funky with the caught exception, IDE doesn't show anything, and when you try and access it, it throws another invisible exception..", "![screenshot from 2018-10-11 12-13-30](https://user-images.githubusercontent.com/323497/46797313-20ba7080-cd4f-11e8-8326-ae7d34081c25.png)\r\n", "oh wait it uses the compiled class from the jar iirc, need to reinstall the artifact each time changes are made for the debugger to work correctly.", "Created PR https://github.com/netty/netty/pull/8350", "@simonbasle @wilkinsona could you verify https://github.com/netty/netty/pull/8350 ?", "@normanmaurer any easier way of obtaining a snapshot of that branch other than building locally? (knowing that the issue manifested in CI)", "@simonbasle I just pushed a snapshot with the fix included to oss.sonatype.org. 4.1.31.Final-SNAPSHOT should have it. ", "outstanding, thanks @normanmaurer we'll look at the snapshot to validate the fix", "Unfortunately, I'm still seeing the hang with 4.1.31.Final-SNAPSHOT. 
The log output produced by the test that hangs:\r\n\r\n```\r\n2018-10-19 11:36:37.884 INFO ${sys:PID} --- [ main] ationConfigReactiveWebApplicationContext : Registering annotated classes: [class org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfig\r\nuration,class org.springframework.boot.autoconfigure.jackson.JacksonAutoConfiguration,class org.springframework.boot.autoconfigure.web.reactive.WebFluxAutoConfiguration,class org.springframework.boot.actuate.autoconfigur\r\ne.health.HealthIndicatorAutoConfiguration,class org.springframework.boot.actuate.autoconfigure.health.HealthEndpointAutoConfiguration,class org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveClo\r\nudFoundryActuatorAutoConfiguration,class org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigurationTests$WebClientCustomizerConfig,class org.springframework.boot.ac\r\ntuate.autoconfigure.endpoint.EndpointAutoConfiguration,class org.springframework.boot.actuate.autoconfigure.endpoint.web.WebEndpointAutoConfiguration,class org.springframework.boot.autoconfigure.http.HttpMessageConverter\r\nsAutoConfiguration,class org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration,class org.springframework.boot.autoconfigure.security.reactive.ReactiveUserDetailsServiceAutoConfigurati\r\non,class org.springframework.boot.autoconfigure.web.reactive.function.client.WebClientAutoConfiguration,class org.springframework.boot.actuate.autoconfigure.web.server.ManagementContextAutoConfiguration]\r\n2018-10-19 11:36:38.379 INFO ${sys:PID} --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigu\r\nration$IgnoredPathsSecurityConfiguration' of type 
[org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfiguration$IgnoredPathsSecurityConfiguration$$EnhancerBySpringCGL\r\nIB$$fcdef00b] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)\r\n2018-10-19 11:36:39.646 INFO ${sys:PID} --- [ main] ctiveUserDetailsServiceAutoConfiguration : \r\n\r\nUsing generated security password: 94605c80-ffae-4448-8f75-ac3e6d06e1b4\r\n\r\n2018-10-19 11:36:39.689 INFO ${sys:PID} --- [ main] o.h.v.i.u.Version : HV000001: Hibernate Validator 6.0.13.Final\r\nSLF4J: Class path contains multiple SLF4J bindings.\r\nSLF4J: Found binding in [jar:file:/root/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]\r\nSLF4J: Found binding in [jar:file:/root/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.11.1/log4j-slf4j-impl-2.11.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]\r\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\r\nSLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]\r\n11:36:40.044 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework\r\n11:36:40.063 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false\r\n11:36:40.063 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8\r\n11:36:40.065 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available\r\n11:36:40.065 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available\r\n11:36:40.066 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available\r\n11:36:40.066 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: available\r\n11:36:40.067 [main] DEBUG io.netty.util.internal.PlatformDependent0 
- java.nio.Bits.unaligned: available, true\r\n11:36:40.067 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9\r\n11:36:40.067 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available\r\n11:36:40.067 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available\r\n11:36:40.068 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)\r\n11:36:40.068 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)\r\n11:36:40.069 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 954728448 bytes\r\n11:36:40.069 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1\r\n11:36:40.070 [main] DEBUG io.netty.util.internal.CleanerJava6 - java.nio.ByteBuffer.cleaner(): available\r\n11:36:40.070 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false\r\n11:36:40.081 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple\r\n11:36:40.081 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 9\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 9\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512\r\n11:36:40.084 [main] 
DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192\r\n11:36:40.084 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true\r\n11:36:40.088 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024\r\n11:36:40.088 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096\r\n11:36:40.096 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false\r\n11:36:40.096 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false\r\n11:36:40.097 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (lo, 127.0.0.1)\r\n11:36:40.098 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128\r\n11:36:40.102 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework\r\n11:36:40.132 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 12\r\n11:36:40.154 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false\r\n11:36:40.154 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512\r\n11:36:40.161 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available\r\n11:36:40.177 [main] WARN reactor.netty.tcp.TcpResources - [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=6, workerCount=6}\r\n11:36:40.177 [main] WARN reactor.netty.tcp.TcpResources - [http] resources will 
use the default ConnectionProvider: PooledConnectionProvider {name=http, poolFactory=reactor.netty.resources.ConnectionProvider$$Lambda$270/\r\n21063905@16a5c7e4}\r\n2018-10-19 11:36:40.282 INFO ${sys:PID} --- [ main] o.s.b.a.e.w.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'\r\n11:36:40.571 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)\r\n11:36:40.571 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true\r\n11:36:40.571 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true\r\n11:36:40.574 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Unable to load the library 'netty_transport_native_epoll_x86_64', trying other loading mechanism.\r\njava.lang.UnsatisfiedLinkError: no netty_transport_native_epoll_x86_64 in java.library.path\r\n\tat java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)\r\n\tat java.lang.Runtime.loadLibrary0(Runtime.java:870)\r\n\tat java.lang.System.loadLibrary(System.java:1122)\r\n\tat io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:336)\r\n\tat java.security.AccessController.doPrivileged(Native Method)\r\n\tat io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:328)\r\n\tat io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:306)\r\n\tat io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)\r\n\tat 
io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:198)\r\n\tat io.netty.channel.epoll.Native.<clinit>(Native.java:61)\r\n\tat io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)\r\n\tat java.lang.Class.forName0(Native Method)\r\n\tat java.lang.Class.forName(Class.java:264)\r\n\tat reactor.netty.resources.DefaultLoopEpoll.<clinit>(DefaultLoopEpoll.java:47)\r\n\tat reactor.netty.resources.LoopResources.preferNative(LoopResources.java:216)\r\n\tat reactor.netty.resources.DefaultLoopResources.onClient(DefaultLoopResources.java:156)\r\n\tat reactor.netty.tcp.TcpResources.onClient(TcpResources.java:168)\r\n\tat reactor.netty.http.client.HttpClientConnect$HttpTcpClient.connect(HttpClientConnect.java:141)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClient.connect(TcpClient.java:185)\r\n\tat reactor.netty.http.client.HttpClientFinalizer.connect(HttpClientFinalizer.java:68)\r\n\tat reactor.netty.http.client.HttpClientFinalizer.responseConnection(HttpClientFinalizer.java:85)\r\n\tat org.springframework.http.client.reactive.ReactorClientHttpConnector.connect(ReactorClientHttpConnector.java:111)\r\n\tat org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.exchange(ExchangeFunctions.java:103)\r\n\tat org.springframework.web.reactive.function.client.DefaultWebClient$DefaultRequestBodyUriSpec.exchange(DefaultWebClient.java:321)\r\n\tat 
org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigurationTests.lambda$sslValidationNotSkippedByDefault$13(ReactiveCloudFoundryActuatorAutoConfigurationT\r\nests.java:339)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.accept(AbstractApplicationContextRunner.java:346)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.lambda$null$0(AbstractApplicationContextRunner.java:280)\r\n\tat org.springframework.boot.test.util.TestPropertyValues.applyToSystemProperties(TestPropertyValues.java:130)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.lambda$run$1(AbstractApplicationContextRunner.java:278)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.withContextClassLoader(AbstractApplicationContextRunner.java:290)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.run(AbstractApplicationContextRunner.java:277)\r\n\tat org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigurationTests.sslValidationNotSkippedByDefault(ReactiveCloudFoundryActuatorAutoConfigurationTests.java:\r\n327)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)\r\n\tat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)\r\n\tat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)\r\n\tat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)\r\n\tat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)\r\n\tat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)\r\n\tat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)\r\n\tat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)\r\n\tat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)\r\n\tat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\r\n\tat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\r\n\tat org.junit.runners.ParentRunner.run(ParentRunner.java:363)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)\r\n11:36:40.575 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - netty_transport_native_epoll_x86_64 cannot be loaded from java.libary.path, now trying export to -Dio.netty.native.workdir: /tmp\r\njava.lang.UnsatisfiedLinkError: no netty_transport_native_epoll_x86_64 in java.library.path\r\n\tat java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)\r\n\tat java.lang.Runtime.loadLibrary0(Runtime.java:870)\r\n\tat java.lang.System.loadLibrary(System.java:1122)\r\n\tat 
io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)\r\n\tat io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:316)\r\n\tat io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)\r\n\tat io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:198)\r\n\tat io.netty.channel.epoll.Native.<clinit>(Native.java:61)\r\n\tat io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)\r\n\tat java.lang.Class.forName0(Native Method)\r\n\tat java.lang.Class.forName(Class.java:264)\r\n\tat reactor.netty.resources.DefaultLoopEpoll.<clinit>(DefaultLoopEpoll.java:47)\r\n\tat reactor.netty.resources.LoopResources.preferNative(LoopResources.java:216)\r\n\tat reactor.netty.resources.DefaultLoopResources.onClient(DefaultLoopResources.java:156)\r\n\tat reactor.netty.tcp.TcpResources.onClient(TcpResources.java:168)\r\n\tat reactor.netty.http.client.HttpClientConnect$HttpTcpClient.connect(HttpClientConnect.java:141)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClientOperator.connect(TcpClientOperator.java:43)\r\n\tat reactor.netty.tcp.TcpClient.connect(TcpClient.java:185)\r\n\tat reactor.netty.http.client.HttpClientFinalizer.connect(HttpClientFinalizer.java:68)\r\n\tat reactor.netty.http.client.HttpClientFinalizer.responseConnection(HttpClientFinalizer.java:85)\r\n\tat org.springframework.http.client.reactive.ReactorClientHttpConnector.connect(ReactorClientHttpConnector.java:111)\r\n\tat org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.exchange(ExchangeFunctions.java:103)\r\n\tat 
org.springframework.web.reactive.function.client.DefaultWebClient$DefaultRequestBodyUriSpec.exchange(DefaultWebClient.java:321)\r\n\tat org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigurationTests.lambda$sslValidationNotSkippedByDefault$13(ReactiveCloudFoundryActuatorAutoConfigurationT\r\nests.java:339)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.accept(AbstractApplicationContextRunner.java:346)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.lambda$null$0(AbstractApplicationContextRunner.java:280)\r\n\tat org.springframework.boot.test.util.TestPropertyValues.applyToSystemProperties(TestPropertyValues.java:130)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.lambda$run$1(AbstractApplicationContextRunner.java:278)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.withContextClassLoader(AbstractApplicationContextRunner.java:290)\r\n\tat org.springframework.boot.test.context.runner.AbstractApplicationContextRunner.run(AbstractApplicationContextRunner.java:277)\r\n\tat org.springframework.boot.actuate.autoconfigure.cloudfoundry.reactive.ReactiveCloudFoundryActuatorAutoConfigurationTests.sslValidationNotSkippedByDefault(ReactiveCloudFoundryActuatorAutoConfigurationTests.java:\r\n327)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)\r\n\tat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)\r\n\tat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)\r\n\tat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)\r\n\tat org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)\r\n\tat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)\r\n\tat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)\r\n\tat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)\r\n\tat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)\r\n\tat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)\r\n\tat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\r\n\tat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\r\n\tat org.junit.runners.ParentRunner.run(ParentRunner.java:363)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)\r\n\tat org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)\r\n\tat org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)\r\n\tSuppressed: java.lang.UnsatisfiedLinkError: no netty_transport_native_epoll_x86_64 in java.library.path\r\n\t\tat java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)\r\n\t\tat java.lang.Runtime.loadLibrary0(Runtime.java:870)\r\n\t\tat java.lang.System.loadLibrary(System.java:1122)\r\n\t\tat io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)\r\n\t\tat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\t\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\t\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\t\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\t\tat io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:336)\r\n\t\tat java.security.AccessController.doPrivileged(Native Method)\r\n\t\tat io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:328)\r\n\t\tat io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:306)\r\n\t\t... 56 common frames omitted\r\n11:36:40.586 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_641906979574480854256.so\r\n11:36:40.586 [main] DEBUG reactor.netty.resources.DefaultLoopEpoll - Default Epoll support : true\r\n11:36:40.587 [main] DEBUG reactor.netty.resources.DefaultLoopKQueue - Default KQueue support : false\r\n11:36:40.637 [main] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative not in the classpath; OpenSslEngine will be unavailable.\r\n11:36:40.756 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default protocols (JDK): [TLSv1.2, TLSv1.1, TLSv1] \r\n11:36:40.756 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_EC\r\nDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA]\r\n11:36:40.780 [main] DEBUG reactor.netty.resources.PooledConnectionProvider - Creating new client pool [http] for self-signed.badssl.com:443\r\n11:36:40.795 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 8426 (auto-detected)\r\n11:36:40.797 [main] DEBUG 
io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 02:42:ac:ff:fe:11:00:02 (auto-detected)\r\n11:36:40.820 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled\r\n11:36:40.820 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0\r\n11:36:40.820 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384\r\n11:36:40.835 [reactor-http-client-epoll-10] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x07b0bdab] Created new pooled channel, now 0 active connections and 1 inactive connections\r\n11:36:40.864 [reactor-http-client-epoll-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true\r\n11:36:40.864 [reactor-http-client-epoll-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true\r\n11:36:40.865 [reactor-http-client-epoll-10] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@5b2d4d1e\r\n11:36:40.883 [reactor-http-client-epoll-10] DEBUG reactor.netty.tcp.SslProvider - [id: 0x07b0bdab] SSL enabled using engine SSLEngineImpl and SNI self-signed.badssl.com:443\r\n11:36:40.890 [reactor-http-client-epoll-10] DEBUG reactor.netty.channel.BootstrapHandlers - [id: 0x07b0bdab] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (react\r\nor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (SimpleChannelPool$1#0 = io.nett\r\ny.channel.pool.SimpleChannelPool$1), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.decompressor = io.netty.handler.codec.http.HttpContentDecompressor), (reactor.right.reactiveBridg\r\ne = reactor.netty.channel.ChannelOperationsHandler)}\r\n11:36:41.028 [reactor-http-client-epoll-10] DEBUG 
io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096\r\n11:36:41.029 [reactor-http-client-epoll-10] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2\r\n11:36:41.029 [reactor-http-client-epoll-10] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16\r\n11:36:41.029 [reactor-http-client-epoll-10] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8\r\n11:36:41.041 [reactor-http-client-epoll-10] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x07b0bdab, L:/172.17.0.2:50472 - R:self-signed.badssl.com/104.154.89.105:443] Registering pool release on close e\r\nvent for channel\r\n11:36:41.042 [reactor-http-client-epoll-10] DEBUG reactor.netty.resources.PooledConnectionProvider - [id: 0x07b0bdab, L:/172.17.0.2:50472 - R:self-signed.badssl.com/104.154.89.105:443] Channel connected, now 1 active con\r\nnections and 0 inactive connections\r\n11:36:41.042 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:42.045 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:43.047 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:44.050 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:45.052 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:46.053 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:47.056 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n11:36:48.058 [reactor-http-client-epoll-10] WARN io.netty.channel.epoll.EpollEventLoop - Unexpected exception in the selector loop.\r\nio.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument\r\n\tat io.netty.channel.epoll.Native.epollWait0(Native Method)\r\n\tat io.netty.channel.epoll.Native.epollWait(Native.java:114)\r\n\tat 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:253)\r\n\tat io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:278)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n```\r\n\r\nI realise this isn't much to go on. I'll see if I can set up an environment where I can more easily attach a debugger and see exactly what's happening.", "I am doubting myself now. I have set up an Ubuntu VM and, while the test still hangs, I am not seeing the `io.netty.channel.ChannelException`. Please disregard the above for the time being at least.\r\n\r\nI still suspect that the hang is due to something that's changed in Netty as it does not occur with 4.1.29.Final but I don't yet know what the cause is.", "Ok, I've recreated the `io.netty.channel.ChannelException` with the debugger attached using 4.1.31.Final-SNAPSHOT. When it happens, `EpollEventLoop.epollWait(boolean)` calls `Native.epollWait(FileDescriptor, EpollEventArray, FileDescriptor, int, int)` passing in `-1` for both `timeoutSec` and `timeoutNs`. That points to #7816 again as it introduced [the if-branch](https://github.com/netty/netty/pull/7816/files#diff-db3e069239a403b954e3ebc024ba9507R244) that results in `-1` being used.", "@carl-mastrangelo can you take a look ?", "@normanmaurer I can take a look, but i'm in a crunch time for the next few days. I'll put it on my todo", "@normanmaurer Is it worth opening a separate issue for the on-going problem or perhaps re-opening this one?", "@wilkinsona I would open a new one as the error itself is gone (I was also not able to reproduce yet :( )." ]
[]
"2018-10-11T10:55:55Z"
[ "defect" ]
epoll_wait produces an EINVAL error since 4.1.30
### Expected behavior epoll_wait should work in 4.1.30 like it did in 4.1.29 ### Actual behavior Since switching to 4.1.30, `EpollEventLoop`'s `handleLoopException` is triggered with `io.netty.channel.ChannelException: timerfd_settime() failed: Invalid argument`, which points to `timerfd_settime`. This causes an epoll thread to be "blocked" sleeping. The issue is visible in a Spring Boot test dealing with bad SSL certificates, which uses reactor/reactor-netty. While investigating this remotely with limited resources (partial access to the logs and reproduction case, no local linux machine to test on), I found that the 4.1.30 suspiciously contained an issue related to epoll_wait. Looking at the PR I think I might have found the regression: https://github.com/netty/netty/pull/7816/files#diff-db3e069239a403b954e3ebc024ba9507R251 `Integer.MAX_VALUE` should be `MAX_SCHEDULED_TIMERFD_NS` (`999,999,999`) like it was before the PR, else `timerfd_settime` might return `EINVAL` if it is too large. ### Steps to reproduce The issue is triggered during tests of Spring Boot, but this is a smaller reproduction snippet that is using Spring Framework 5: ```java @Test public void strippedDown() { assertThatExceptionOfType(RuntimeException.class) .isThrownBy(() -> WebClient.create().get() .uri("https://" + "self-signed.badssl.com/").exchange() .block(Duration.ofSeconds(10))) .withCauseInstanceOf(SSLException.class); } ``` I can try to spin up a repository with a maven project that reproduces the issue and can be run without set up if you need. ### Netty version 4.1.30 ### JVM version (e.g. `java -version`) ?? ### OS version (e.g. `uname -a`) ??
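The arithmetic behind this regression can be illustrated outside Netty. The following minimal Java sketch (class and method names are illustrative, not Netty's actual `EpollEventLoop` code) splits a total delay into the (seconds, nanoseconds) pair that `timerfd_settime` expects; the nanosecond remainder must be capped at 999,999,999 as the issue describes, which is exactly what the `Integer.MAX_VALUE` cap broke:

```java
// Illustrative sketch of the delay-splitting arithmetic from the issue; not
// Netty's actual code. timerfd_settime rejects a nanosecond field of
// 1_000_000_000 or more with EINVAL, so the remainder must be capped at
// 999_999_999 rather than Integer.MAX_VALUE.
public final class TimerfdDelaySplit {

    static final long MAX_SCHEDULED_TIMERFD_NS = 999_999_999L;

    /** Returns {seconds, nanoseconds} for the given total delay in nanos. */
    static long[] split(long totalDelayNanos) {
        long seconds = Math.min(totalDelayNanos / 1_000_000_000L, Integer.MAX_VALUE);
        // After subtracting whole seconds the remainder can exceed
        // 999_999_999 only when 'seconds' was clamped (e.g. for
        // Long.MAX_VALUE delays), so cap it explicitly:
        long nanos = Math.min(totalDelayNanos - seconds * 1_000_000_000L,
                MAX_SCHEDULED_TIMERFD_NS);
        return new long[] { seconds, nanos };
    }

    public static void main(String[] args) {
        long[] parts = split(Long.MAX_VALUE);
        System.out.println(parts[0] + "s " + parts[1] + "ns");
    }
}
```

With a `Long.MAX_VALUE` delay (as in the `testScheduleBigDelayNotOverflow` test below), the clamped result stays within the range `timerfd_settime` accepts instead of producing the `-1`/overflowing values that triggered EINVAL.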
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java" ]
[ "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java index a2707d96a99..33adf862796 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java @@ -35,7 +35,6 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Queue; -import java.util.concurrent.Callable; import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; @@ -44,7 +43,7 @@ /** * {@link EventLoop} which uses epoll under the covers. Only works on Linux! */ -final class EpollEventLoop extends SingleThreadEventLoop { +class EpollEventLoop extends SingleThreadEventLoop { private static final InternalLogger logger = InternalLoggerFactory.getInstance(EpollEventLoop.class); private static final AtomicIntegerFieldUpdater<EpollEventLoop> WAKEN_UP_UPDATER = AtomicIntegerFieldUpdater.newUpdater(EpollEventLoop.class, "wakenUp"); @@ -75,6 +74,7 @@ public int get() throws Exception { return epollWaitNow(); } }; + @SuppressWarnings("unused") // AtomicIntegerFieldUpdater private volatile int wakenUp; private volatile int ioRatio = 50; @@ -248,7 +248,7 @@ private int epollWait(boolean oldWakeup) throws IOException { long totalDelay = delayNanos(System.nanoTime()); prevDeadlineNanos = curDeadlineNanos; delaySeconds = (int) min(totalDelay / 1000000000L, Integer.MAX_VALUE); - delayNanos = (int) min(totalDelay - delaySeconds * 1000000000L, Integer.MAX_VALUE); + delayNanos = (int) min(totalDelay - delaySeconds * 1000000000L, MAX_SCHEDULED_TIMERFD_NS); } return Native.epollWait(epollFd, events, timerFd, delaySeconds, delayNanos); } @@ -356,7 +356,10 @@ protected void run() { } } - private static void handleLoopException(Throwable t) { + /** + * Visible only for testing! 
+ */ + void handleLoopException(Throwable t) { logger.warn("Unexpected exception in the selector loop.", t); // Prevent possible consecutive immediate failures that lead to
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java index ebf529f7328..4e51114422b 100644 --- a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java @@ -15,32 +15,52 @@ */ package io.netty.channel.epoll; +import io.netty.channel.DefaultSelectStrategyFactory; import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.util.concurrent.DefaultThreadFactory; import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.RejectedExecutionHandlers; +import io.netty.util.concurrent.ThreadPerTaskExecutor; import org.junit.Test; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; public class EpollEventLoopTest { @Test public void testScheduleBigDelayNotOverflow() { - EventLoopGroup group = new EpollEventLoopGroup(1); + final AtomicReference<Throwable> capture = new AtomicReference<Throwable>(); - final EventLoop el = group.next(); - Future<?> future = el.schedule(new Runnable() { + final EventLoopGroup group = new EpollEventLoop(null, + new ThreadPerTaskExecutor(new DefaultThreadFactory(getClass())), 0, + DefaultSelectStrategyFactory.INSTANCE.newSelectStrategy(), RejectedExecutionHandlers.reject()) { @Override - public void run() { - // NOOP + void handleLoopException(Throwable t) { + capture.set(t); + super.handleLoopException(t); } - }, Long.MAX_VALUE, TimeUnit.MILLISECONDS); + }; - assertFalse(future.awaitUninterruptibly(1000)); - assertTrue(future.cancel(true)); - group.shutdownGracefully(); + try { + final EventLoop eventLoop = group.next(); + Future<?> future = eventLoop.schedule(new Runnable() { + @Override + 
public void run() { + // NOOP + } + }, Long.MAX_VALUE, TimeUnit.MILLISECONDS); + + assertFalse(future.awaitUninterruptibly(1000)); + assertTrue(future.cancel(true)); + assertNull(capture.get()); + } finally { + group.shutdownGracefully(); + } } }
val
val
"2018-10-11T08:59:47"
"2018-10-11T09:43:34Z"
simonbasle
val
netty/netty/8384_8389
netty/netty
netty/netty/8384
netty/netty/8389
[ "keyword_pr_to_issue" ]
04001fdad1ca3c72625cddb1b1c7789381cb5f30
9eebe7ed742e4ebeca17913782f327412babcf38
[ "@normanmaurer please let me know if I can send a PR", "@slandelle I wonder if you would not be better of to just create your `SSLContext` (the java one) directly and then construct the `JdkSslContext` from it (we have a constructor that can wrap an existing `SSLContext`) ?\r\n\r\n", "Oh right, I didn't notice you were keeping the `JdkSslContext` constructor public.\r\nYes, that would work for me, thanks!", "Actually, the \"full\" constructor is package protected, so there's no way to force the enabled protocols.", "@slandelle what about adding another public constructor that allows this ? Also in the meantime you can also just wrap the `JdkSslContext` that you created via a `DelegatingSslContext` and override `initEngine` where you can just set the protocols etc. ", "PR sent: #8389" ]
[ "@slandelle do we need to call `protocols.clone()` if these are not null ?", "@slandelle maybe mark this constructor as `@deprecated` ?", "sure", "right, that would be consistent with SslContextBuilder", "@slandelle please use java docs style via `{@link ... }`", "crossing fingers checkstyle won't complain on line length...\r\n\r\n" ]
"2018-10-16T12:31:36Z"
[]
Feature request: let one pass a SecureRandom instance when building SslContext
### Expected behavior When spawning multiple `JdkSslContext`, it should be possible to reuse a single SecureRandom instance and save allocations. ### Actual behavior `SslContextBuilder` won't let one pass a SecureRandom instance. Both `JdkSslServerContext#newSSLContext` and `JdkSslClientContext#newSSLContext` pass a null random value to `SSLContext#init` that allocates a new `SecureRandom` instance. ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) NA ### Netty version 4.1.30 ### JVM version (e.g. `java -version`) NA ### OS version (e.g. `uname -a`) NA
[ "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java index 2b61391b491..6aef52a246f 100644 --- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java @@ -140,7 +140,10 @@ public class JdkSslContext extends SslContext { * @param sslContext the {@link SSLContext} to use. * @param isClient {@code true} if this context should create {@link SSLEngine}s for client-side usage. * @param clientAuth the {@link ClientAuth} to use. This will only be used when {@param isClient} is {@code false}. + * @deprecated Use {@link #JdkSslContext(SSLContext, boolean, Iterable, CipherSuiteFilter, + * ApplicationProtocolConfig, ClientAuth, String[], boolean)} */ + @Deprecated public JdkSslContext(SSLContext sslContext, boolean isClient, ClientAuth clientAuth) { this(sslContext, isClient, null, IdentityCipherSuiteFilter.INSTANCE, @@ -156,11 +159,44 @@ public JdkSslContext(SSLContext sslContext, boolean isClient, * @param cipherFilter the filter to use. * @param apn the {@link ApplicationProtocolConfig} to use. * @param clientAuth the {@link ClientAuth} to use. This will only be used when {@param isClient} is {@code false}. + * @deprecated Use {@link #JdkSslContext(SSLContext, boolean, Iterable, CipherSuiteFilter, + * ApplicationProtocolConfig, ClientAuth, String[], boolean)} */ + @Deprecated public JdkSslContext(SSLContext sslContext, boolean isClient, Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn, ClientAuth clientAuth) { - this(sslContext, isClient, ciphers, cipherFilter, toNegotiator(apn, !isClient), clientAuth, null, false); + this(sslContext, isClient, ciphers, cipherFilter, apn, clientAuth, null, false); + } + + /** + * Creates a new {@link JdkSslContext} from a pre-configured {@link SSLContext}. + * + * @param sslContext the {@link SSLContext} to use. 
+ * @param isClient {@code true} if this context should create {@link SSLEngine}s for client-side usage. + * @param ciphers the ciphers to use or {@code null} if the standard should be used. + * @param cipherFilter the filter to use. + * @param apn the {@link ApplicationProtocolConfig} to use. + * @param clientAuth the {@link ClientAuth} to use. This will only be used when {@param isClient} is {@code false}. + * @param protocols the protocols to enable, or {@code null} to enable the default protocols. + * @param startTls {@code true} if the first write request shouldn't be encrypted + */ + public JdkSslContext(SSLContext sslContext, + boolean isClient, + Iterable<String> ciphers, + CipherSuiteFilter cipherFilter, + ApplicationProtocolConfig apn, + ClientAuth clientAuth, + String[] protocols, + boolean startTls) { + this(sslContext, + isClient, + ciphers, + cipherFilter, + toNegotiator(apn, !isClient), + clientAuth, + protocols == null ? null : protocols.clone(), + startTls); } @SuppressWarnings("deprecation")
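As suggested in the discussion above, the `SecureRandom` sharing can be achieved by pre-building the JDK `SSLContext` and wrapping it in `JdkSslContext`. A minimal sketch using only JDK classes (the wrapping step into the new public constructor is omitted so the snippet stays self-contained; the class name is made up):

```java
import java.security.SecureRandom;
import javax.net.ssl.SSLContext;

// Sketch of the approach from the issue discussion: initialize several JDK
// SSLContexts with one shared SecureRandom instead of letting each allocate
// its own. Each pre-configured context can then be handed to the public
// JdkSslContext(SSLContext, ...) constructor added by the PR above.
public final class SharedRandomContexts {

    static SSLContext newContext(SecureRandom sharedRandom) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, sharedRandom); // null managers -> JDK defaults
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        SecureRandom sharedRandom = new SecureRandom();
        SSLContext first = newContext(sharedRandom);
        SSLContext second = newContext(sharedRandom);
        System.out.println(first.getProtocol() + " " + second.getProtocol());
    }
}
```

Passing a non-null `SecureRandom` to `SSLContext.init(...)` avoids the per-context allocation described in the problem statement.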
null
val
val
"2018-10-16T07:05:45"
"2018-10-15T11:31:15Z"
slandelle
val
netty/netty/8400_8403
netty/netty
netty/netty/8400
netty/netty/8403
[ "timestamp(timedelta=1.0, similarity=0.9536387833107202)" ]
3a4a0432d309dd17de86bd1cf4742030174fc190
69545aedc444a380729c3b4cf441cf5b438f939d
[ "@atcurtis thanks for reporting... Should be fixed by https://github.com/netty/netty/pull/8403. PTAL" ]
[]
"2018-10-18T13:16:34Z"
[ "defect" ]
CompositeByteBuf.decompose() doesn't work as expected.
### Expected behavior It is expected that the arguments to CompositeByteBuf decompose() are the same as for slice() ### Actual behavior We get random garbage at the start. ### Steps to reproduce Call decompose with valid arguments. ### Minimal yet complete reproducer code (or URL to code) ByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT; ByteBuf buf = alloc.directBuffer(16384).setIndex(0, 16384); StringBuilder sb = new StringBuilder(16384); while (sb.length() < 16000) { sb.append(UUID.randomUUID()); } buf.setBytes(0, sb.toString().getBytes()); CompositeByteBuf composite = alloc.compositeBuffer(); composite.addComponents(true, buf.retainedSlice(100, 200), buf.retainedSlice(300, 400), buf.retainedSlice(10000, 1000)); ByteBuf test1 = composite.slice(150, 700); ByteBuf test2 = Unpooled.wrappedBuffer(composite.decompose(150, 700).toArray(new ByteBuf[0])); Assert.assertTrue(ByteBufUtil.equals(test1, test2)); // this equality test fails. ### Netty version 4.1.22 ### JVM version (e.g. `java -version`) java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) ### OS version (e.g. `uname -a`) Linux acurtis-ld1 3.10.0-514.36.5.el7.x86_64 #1 SMP Thu Dec 28 21:42:18 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java" ]
[ "buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java index 85160c79a6e..f987e48681c 100644 --- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java @@ -526,39 +526,31 @@ public List<ByteBuf> decompose(int offset, int length) { } int componentId = toComponentIndex(offset); - List<ByteBuf> slice = new ArrayList<ByteBuf>(components.size()); - + int bytesToSlice = length; // The first component Component firstC = components.get(componentId); - ByteBuf first = firstC.buf.duplicate(); - first.readerIndex(offset - firstC.offset); + int firstBufOffset = offset - firstC.offset; - ByteBuf buf = first; - int bytesToSlice = length; - do { - int readableBytes = buf.readableBytes(); - if (bytesToSlice <= readableBytes) { - // Last component - buf.writerIndex(buf.readerIndex() + bytesToSlice); - slice.add(buf); - break; - } else { - // Not the last component - slice.add(buf); - bytesToSlice -= readableBytes; - componentId ++; + ByteBuf slice = firstC.buf.slice(firstBufOffset + firstC.buf.readerIndex(), + Math.min(firstC.length - firstBufOffset, bytesToSlice)); + bytesToSlice -= slice.readableBytes(); - // Fetch the next component. - buf = components.get(componentId).buf.duplicate(); - } - } while (bytesToSlice > 0); - - // Slice all components because only readable bytes are interesting. - for (int i = 0; i < slice.size(); i ++) { - slice.set(i, slice.get(i).slice()); + if (bytesToSlice == 0) { + return Collections.singletonList(slice); } - return slice; + List<ByteBuf> sliceList = new ArrayList<ByteBuf>(components.size() - componentId); + sliceList.add(slice); + + // Add all the slices until there is nothing more left and then return the List. 
+ do { + Component component = components.get(++componentId); + slice = component.buf.slice(component.buf.readerIndex(), Math.min(component.length, bytesToSlice)); + bytesToSlice -= slice.readableBytes(); + sliceList.add(slice); + } while (bytesToSlice > 0); + + return sliceList; } @Override
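The index arithmetic of the fixed `decompose(...)` above can be illustrated with plain integer ranges instead of `ByteBuf`s. A self-contained sketch (the class name and the `{componentIndex, offsetInComponent, sliceLength}` triple encoding are made up for illustration, using the component lengths from the reproducer):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the offset arithmetic the fixed decompose(...) performs: locate
// the component containing 'offset', take a partial first slice, then take
// whole (or final partial) slices from subsequent components.
public final class DecomposeSketch {

    /** Each result entry is {componentIndex, offsetInComponent, sliceLength}. */
    static List<int[]> decompose(int[] componentLengths, int offset, int length) {
        List<int[]> slices = new ArrayList<int[]>();
        int start = 0;
        int i = 0;
        // Find the first component whose range contains 'offset'.
        while (offset >= start + componentLengths[i]) {
            start += componentLengths[i++];
        }
        int firstOffset = offset - start;
        int remaining = length;
        int take = Math.min(componentLengths[i] - firstOffset, remaining);
        slices.add(new int[] { i, firstOffset, take });
        remaining -= take;
        // Subsequent components are sliced from their beginning.
        while (remaining > 0) {
            i++;
            take = Math.min(componentLengths[i], remaining);
            slices.add(new int[] { i, 0, take });
            remaining -= take;
        }
        return slices;
    }

    public static void main(String[] args) {
        // Component lengths 200, 400, 1000 match the reproducer's slices.
        StringBuilder sb = new StringBuilder();
        for (int[] s : decompose(new int[] { 200, 400, 1000 }, 150, 700)) {
            sb.append(s[0]).append(':').append(s[1]).append('+').append(s[2]).append(' ');
        }
        System.out.println(sb.toString().trim());
    }
}
```

For `decompose(150, 700)` over components of lengths 200, 400, and 1000 this yields 50 bytes from component 0 (starting at offset 150), all 400 bytes of component 1, and 250 bytes of component 2 — matching what `slice(150, 700)` covers, which is the equality the reported bug violated.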
diff --git a/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java b/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java index 22ca0546aa9..03e55913a06 100644 --- a/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java @@ -16,6 +16,7 @@ package io.netty.buffer; import io.netty.util.ReferenceCountUtil; +import io.netty.util.internal.PlatformDependent; import org.junit.Assume; import org.junit.Test; @@ -1136,4 +1137,45 @@ private void testAllocatorIsSameWhenCopy(boolean withIndexAndLength) { buffer.release(); copy.release(); } + + @Test + public void testDecomposeMultiple() { + testDecompose(150, 500, 3); + } + + @Test + public void testDecomposeOne() { + testDecompose(310, 50, 1); + } + + @Test + public void testDecomposeNone() { + testDecompose(310, 0, 0); + } + + private static void testDecompose(int offset, int length, int expectedListSize) { + byte[] bytes = new byte[1024]; + PlatformDependent.threadLocalRandom().nextBytes(bytes); + ByteBuf buf = wrappedBuffer(bytes); + + CompositeByteBuf composite = compositeBuffer(); + composite.addComponents(true, + buf.retainedSlice(100, 200), + buf.retainedSlice(300, 400), + buf.retainedSlice(700, 100)); + + ByteBuf slice = composite.slice(offset, length); + List<ByteBuf> bufferList = composite.decompose(offset, length); + assertEquals(expectedListSize, bufferList.size()); + ByteBuf wrapped = wrappedBuffer(bufferList.toArray(new ByteBuf[0])); + + assertEquals(slice, wrapped); + composite.release(); + buf.release(); + + for (ByteBuf buffer: bufferList) { + assertEquals(0, buffer.refCnt()); + } + } + }
train
val
"2018-10-18T19:31:01"
"2018-10-18T04:37:58Z"
atcurtis
val
netty/netty/8398_8408
netty/netty
netty/netty/8398
netty/netty/8408
[ "timestamp(timedelta=365.0, similarity=0.9060999842022229)" ]
7f391426a2179d3c68e3a116567aa9a0aaf0524c
25f0450bd9ccc9407e2f6721454c52d8e078c60f
[ "Here is a tentative fix for the described problem: https://github.com/lutovich/netty/commit/bb6b371f0e974814ce1175283c0c06e660180202. I can make it into a PR if the change is acceptable.", "@lutovich I think this looks good from a quick look. Can you open a pr ?", "Thanks, @normanmaurer. I created a PR.", "Fixed in https://github.com/netty/netty/pull/9226." ]
[ "@lutovich wouldn't that throw and exception when called from within the `EventLoop` as it may possible deadlock ?", "This appears racy; more than one channel could see isEmpty() == true. So this should be trySuccess()", "It looks like the previous code called awaitUninterruptibly.", "Even when on the EventLoop ?", "Yeah, that's a problem. Will fix", "Previous code executed `channel.close().awaitUninterruptibly()` unconditionally. This can be a problem, I think. Should be enough to not call `awaitUninterruptibly()` when in event loop that is responsible for `Future` returned by `closeIdleChannels()`, right? Will fix this", "`sync...()` ?", "`trySuccess(...)` ?", "@normanmaurer, why was it suggested to swap to trySuccess here?", "@ejona86 couldn't this race ?", "A pet peeve of mine is when code behaves differently when running on the event loop...\r\n\r\nIn this code, implicitly no longer waiting for completion when on the event loop is broken, as the caller could have been expecting the blocking behavior (as that's what started this series of bug fixes!). When having one Channel interact with another Channel, it is random whether the two Channels share the same event loop. Code should either always block or never block, always schedule a task or never schedule a task, etc. I know that large portions of the Netty codebase don't agree with me on that :) (although that may change in Netty 5).\r\n\r\nThere's also a secondary fallacy involved here when checking inEventLoop(), since it uses await/syncUninterruptibly. Just because you aren't in _the_ event loop doesn't mean you aren't in _a_ event loop. It seems that issue was added in #7927 and it is possible to deadlock (if you are worried about being called from an event loop. 
Which is seems like you are concerned about that case since you checked inEventLoop to begin with).\r\n\r\nI agree we're between a rock and a hard place, but I wonder if this is going about it the wrong way.\r\n\r\nTo me either 1) the method should always awaitUninterruptibly, and we just say you shouldn't call it from an event loop or 2) we make the method async again and solve the original bug another way. For (2) we could maybe create a new method that lets you wait until the cleanup is done by returning a future, or awaiting the pool's termination.\r\n\r\nI may not understand these classes well enough, but it appears the code in this PR is even worse than the more general problem, because the `inEventLoop()` check is bonkers. It _looks_ like a normal check, but in actuality it doesn't protect anything. The `executor` was retrieved from `nextEventLoop()` which is gotten from `group.next()`. So it is a _random_ loop. Combined with the fact that we don't _directly_ schedule anything on that loop, it is sort of a senseless. Note that we do _indirectly_ schedule something on every event loop in the group: the listener. So if this was ever run from the event loop group it would deadlock (n-1)/n of the time (n is number of loops in group) and if run outside _this_ event loop group it would block still and thus could deadlock, but whether it actually deadlocks would depend on the full sum of behavior in the system.", "Doesn't look like it. It is a private method that is only used in one spot. The passed in promise is not used before this method and was not leaked to any other objects.", "@ejona86 I agree, changing method behavior depending on the caller thread is far from ideal. IMHO option (2) would be the best way to solve this and related problems. 
Not sure if it is fine wrt to backwards compatibility to restore the async behavior of `ChannelPool#close()` and add another interface method like `ChannelPool#closeFuture()`.\r\n\r\n@normanmaurer what do you think?", "This is scary to me. Modifying the list while you are iterating over it is not a good idea. Instead, you should keep track of how many operations have completed, and complete the future then. Consider using CountDownLatch instead.", "This is not a good idea either. If there were failures with closing the channel, this code throws away that potentially useful information. You need to keep track of the errors, and add them to a suppressed list is there is more than one.", "Sync uninterruptibly is not a good idea. When the program is trying to shutdown (which is presumably when this method would be called), getting an Interruption means give up. This should either declare it throws IE or wrap it in a runtime exception.", "Could you please elaborate on why is this a bad idea? Set of channels is thread-safe and channels are removed from it by event loop threads that execute close future callbacks. It could happen that channel is removed by the current thread but this should not cause any problems.\r\n\r\nI'm not sure how CountDownLatch can be used here. It only allows code to block waiting for the count to be zero. Blocking is not desirable here, I believe.\r\n\r\nIt could be an `AtomicInteger` for counting closed channels. However, I feel like the current code is a bit more explicit about what's going on. Set of channels to be closed is \"snapshotted\" and every closed channel is removed from the set.", "IMHO propagating idle channel closing errors from `SimpleChannelPool.close()` is not very caller-friendly. How would caller handle such error? Would it be useful to propagate an exception with 99 suppressed exceptions if pool had 100 idle channels and all of them failed to close?\r\n\r\nApache commons-pool swallows resource close errors when closed. 
Javadoc is [here](https://github.com/apache/commons-pool/blob/POOL_2_5_0/src/main/java/org/apache/commons/pool2/ObjectPool.java#L174) and impl is [here](https://github.com/apache/commons-pool/blob/POOL_2_5_0/src/main/java/org/apache/commons/pool2/impl/GenericObjectPool.java#L655-L667). HikariCP also just loggs such erros [here](https://github.com/brettwooldridge/HikariCP/blob/HikariCP-3.2.0/src/main/java/com/zaxxer/hikari/pool/HikariPool.java#L430-L447).\r\n\r\nCallers that care about channel close errors can always subscribe to close future of every created channel and gather the errors. Logging errors in the pool could be nice to have.", "This method could exit before all the channels are closed if it is responsive to interrupts. Event loop threads will still be running even though the current thread is free to go. I'm not sure if it's okay or not. Re-interrupt + runtime exception would be the best option wrt to backward compatibility - in my opinion.", "> Could you please elaborate on why is this a bad idea?\r\n\r\nSure, Most collections throw a ConcurrrentModification exception if you modify them while reading. The reason is because the iterator doesn't have a well defined meaning, or would be feasible to implement. Concurrent collections have to special case such modifications, but it's still not \"obviously\" correct. \r\n\r\nWhen I read this code, I had to do a double take to make sure there weren't bugs with the map being shared, or how the concurrent modifications were happening, how remove was implemented in CHM. It's not quick to understand.\r\n\r\nAlso, do you really need a map? Atomic Integer avoids that allocation.", "Since the Pool creates the channels, it's responsible for cleaning them up. It would not be a correct abstraction boundary for callers to watch the close future for a Channel they didn't create. \r\n\r\n", "SGTM" ]
"2018-10-19T09:29:03Z"
[]
FixedChannelPool#close() can return before idle channels are closed
### Expected behavior `FixedChannelPool#close()` should return only after all idle channels are closed. Such behavior is expected after https://github.com/netty/netty/pull/7927. Before this PR, method was asynchronous and provided no such guarantees. ### Actual behavior `FixedChannelPool#close()` offloads actual closing of idle channels to `GlobalEventExecutor` and can return before the submitted task completes. ### Steps to reproduce 1. Create a `FixedChannelPool` 2. Acquire and release a channel, but keep a reference to it 3. Call `#close()` on the pool 4. Assert channel close future is done ### Minimal yet complete reproducer code (or URL to code) The following test can be added to the `FixedChannelPoolTest` class to reproduce the problem: ```java @Test public void testClose() { LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID); Bootstrap cb = new Bootstrap(); cb.remoteAddress(addr); cb.group(group).channel(LocalChannel.class); ServerBootstrap sb = new ServerBootstrap(); sb.group(group) .channel(LocalServerChannel.class) .childHandler(new ChannelInitializer<LocalChannel>() { @Override public void initChannel(LocalChannel ch) throws Exception { ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); } }); // Start server Channel sc = sb.bind(addr).syncUninterruptibly().channel(); FixedChannelPool pool = new FixedChannelPool(cb, new TestChannelPoolHandler(), 2); Channel channel = pool.acquire().syncUninterruptibly().getNow(); pool.release(channel).syncUninterruptibly().getNow(); pool.close(); assertTrue(channel.closeFuture().isDone()); // <<-- failing assertion sc.close().syncUninterruptibly(); } ``` ### Netty version 4.1.26+ ### JVM version (e.g. `java -version`) ``` java version "1.8.0_181" Java(TM) SE Runtime Environment (build 1.8.0_181-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode) ``` ### OS version (e.g. 
`uname -a`) ``` Darwin MacBook-Pro.local 17.7.0 Darwin Kernel Version 17.7.0: Fri Jul 6 19:54:51 PDT 2018; root:xnu-4570.71.3~2/RELEASE_X86_64 x86_64 ``` ### More details We use Netty in a client app which hangs extremely infrequently during shutdown when updated from 4.1.22 to 4.1.26+. Shutdown sequence looks roughly like this: ```java fixedChannelPool.close(); eventLoopGroup.shutdownGracefully(); eventLoopGroup.terminationFuture().awaitUninterruptibly(); // this line can block forever ``` Unfortunately, I'm unable to reproduce such hanging with a test. Was only able to catch it once in a debugger with the following code: https://gist.github.com/lutovich/dc1fe9414e728d12b29b65c15727df93. Stacktraces of the two blocked threads look like: ``` "main@1" prio=5 tid=0x1 nid=NA waiting java.lang.Thread.State: WAITING at java.lang.Object.wait(Object.java:-1) at java.lang.Object.wait(Object.java:502) at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:253) at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:33) at io.netty.channel.pool. NettyPoolCloseTest.poolClose(NettyIT.java:62) ... 
``` ``` "globalEventExecutor-1-1@2059" prio=5 tid=0x1a nid=NA waiting java.lang.Thread.State: WAITING at java.lang.Object.wait(Object.java:-1) at java.lang.Object.wait(Object.java:502) at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:253) at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:137) at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:30) at io.netty.channel.pool.SimpleChannelPool.close(SimpleChannelPool.java:398) at io.netty.channel.pool.FixedChannelPool.access$1301(FixedChannelPool.java:40) at io.netty.channel.pool.FixedChannelPool$6.run(FixedChannelPool.java:480) at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:248) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) ``` The first one is blocked executing `clientGroup.terminationFuture().awaitUninterruptibly()`. The second one is blocked executing `channel.close().awaitUninterruptibly()` in `SimpleChannelPool#close()`. I also have a heap dump of a hanging test run and can upload it somewhere, if needed. I **think** the problem happens because `FixedChannelPool#close()` might exit before all pooled channels are closed and closing of the event loop group will race with closing of pooled channels. `FixedChannelPool#close()` offloads actual closing of idle channels to the `GlobalEventExecutor`, which executes `SimpleChannelPool#close()`. So `FixedChannelPool#close()` can return to the caller before idle channels are closed. This might be a scenario that causes hanging: 1. Given a `FixedChannelPool` with an idle connection 2. Code calls `FixedChannelPool#close()` which adds a task to close all idle channels to the `GlobalEventExecutor` and returns 3. Code calls `NioEventLoopGroup#close()` which now races with the `GlobalEventExecutor`, which tries to close the channel 4. 
Somehow, `GlobalEventExecutor` manages to add a close task to the event executor while it is shutting down and a message "An event executor terminated with non-empty task queue (1)" is logged. Tbh, I can't understand how this step is possible by looking at Netty code :( 5. `GlobalEventExecutor` is blocked in `SimpleChannelPool#close()` waiting for the channel close future which will never be notified because executor has been terminated 6. Main thread is blocked on `NioEventLoopGroup#terminationFuture()`. Termination future in `NioEventLoopGroup` relies on `GlobalEventExecutor` to get notifications but `GlobalEventExecutor` is blocked, as described in step 5. 7. `GlobalEventExecutor` is blocked and will never receive the notification about the channel being closed. It also contains a queued task to notify the event loop termination future, which will never get executed. If `FixedChannelPool#close()` returns only after all idle channels are closed then there will be no race in step 3 and the problem will hopefully be solved.
[ "transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java", "transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java" ]
[ "transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java", "transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java" ]
[ "transport/src/test/java/io/netty/channel/pool/FixedChannelPoolTest.java", "transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java index 5ca376f88d2..af927dbf7dd 100644 --- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java +++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java @@ -20,7 +20,6 @@ import io.netty.util.concurrent.EventExecutor; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; -import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.concurrent.Promise; import io.netty.util.internal.ObjectUtil; import io.netty.util.internal.ThrowableUtil; @@ -444,18 +443,21 @@ public void acquired() { @Override public void close() { if (executor.inEventLoop()) { - close0(); + failPendingAcquireOperations(); } else { executor.submit(new Runnable() { @Override public void run() { - close0(); + failPendingAcquireOperations(); } }).awaitUninterruptibly(); } + super.closeIdleChannels(executor); } - private void close0() { + private void failPendingAcquireOperations() { + assert executor.inEventLoop(); + if (!closed) { closed = true; for (;;) { @@ -471,15 +473,6 @@ private void close0() { } acquiredChannelCount.set(0); pendingAcquireCount = 0; - - // Ensure we dispatch this on another Thread as close0 will be called from the EventExecutor and we need - // to ensure we will not block in a EventExecutor. 
- GlobalEventExecutor.INSTANCE.execute(new Runnable() { - @Override - public void run() { - FixedChannelPool.super.close(); - } - }); } } } diff --git a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java index 6fcfd4443fa..f0741cf5199 100644 --- a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java +++ b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java @@ -22,13 +22,17 @@ import io.netty.channel.ChannelInitializer; import io.netty.channel.EventLoop; import io.netty.util.AttributeKey; +import io.netty.util.concurrent.EventExecutor; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; import io.netty.util.concurrent.Promise; import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.ThrowableUtil; +import java.util.Collections; import java.util.Deque; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; import static io.netty.util.internal.ObjectUtil.*; @@ -155,7 +159,7 @@ protected boolean releaseHealthCheck() { @Override public final Future<Channel> acquire() { - return acquire(bootstrap.config().group().next().<Channel>newPromise()); + return acquire(nextEventLoop().<Channel>newPromise()); } @Override @@ -389,13 +393,47 @@ protected boolean offerChannel(Channel channel) { @Override public void close() { + closeIdleChannels(nextEventLoop()); + } + + void closeIdleChannels(EventExecutor executor) { + Promise<Void> result = executor.newPromise(); + closeIdleChannels(result); + if (!executor.inEventLoop()) { + result.syncUninterruptibly(); + } + } + + private void closeIdleChannels(final Promise<Void> result) { + final Set<Channel> channelsToClose = Collections.newSetFromMap(new ConcurrentHashMap<Channel, Boolean>()); for (;;) { Channel channel = pollChannel(); if (channel == null) { break; } - // Just ignore any errors that are reported back from close(). 
- channel.close().awaitUninterruptibly(); + channelsToClose.add(channel); + } + if (channelsToClose.isEmpty()) { + // no idle channels in the pool - nothing to close + result.trySuccess(null); + } + + for (final Channel channel : channelsToClose) { + channel.close().addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + channelsToClose.remove(channel); + if (channelsToClose.isEmpty()) { + // last channel was closed - can complete the result promise + // it might already be completed by a concurrently executing listener so use trySuccess + result.trySuccess(null); + } + } + }); } } + + private EventLoop nextEventLoop() { + return bootstrap.config().group().next(); + } }
diff --git a/transport/src/test/java/io/netty/channel/pool/FixedChannelPoolTest.java b/transport/src/test/java/io/netty/channel/pool/FixedChannelPoolTest.java index bbf1debeac5..7b284f431cb 100644 --- a/transport/src/test/java/io/netty/channel/pool/FixedChannelPoolTest.java +++ b/transport/src/test/java/io/netty/channel/pool/FixedChannelPoolTest.java @@ -34,6 +34,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; +import static org.hamcrest.Matchers.*; import static org.junit.Assert.*; public class FixedChannelPoolTest { @@ -346,6 +347,110 @@ public void initChannel(LocalChannel ch) throws Exception { sc.close().syncUninterruptibly(); } + @Test + public void testCloseWithIdleChannels() throws Exception { + testCloseWithIdleChannels(false); + } + + @Test + public void testCloseWithIdleChannelsInEventLoop() throws Exception { + testCloseWithIdleChannels(true); + } + + @Test + public void testCloseWithOutstandingAcquireRequests() throws Exception { + testCloseWithOutstandingAcquireRequests(false); + } + + @Test + public void testCloseWithOutstandingAcquireRequestsInEventLoop() throws Exception { + testCloseWithOutstandingAcquireRequests(true); + } + + private void testCloseWithOutstandingAcquireRequests(boolean closeInEventLoop) throws Exception { + LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID); + Bootstrap cb = new Bootstrap(); + cb.remoteAddress(addr); + cb.group(group).channel(LocalChannel.class); + + ServerBootstrap sb = new ServerBootstrap(); + sb.group(group) + .channel(LocalServerChannel.class) + .childHandler(new ChannelInitializer<LocalChannel>() { + @Override + public void initChannel(LocalChannel ch) { + ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); + } + }); + + Channel sc = sb.bind(addr).syncUninterruptibly().channel(); + + FixedChannelPool pool = new FixedChannelPool(cb, new TestChannelPoolHandler(), 1); + pool.acquire().get(); // acquire the only available channel + + Future<Channel> 
acquireRequest1 = pool.acquire(); + Future<Channel> acquireRequest2 = pool.acquire(); + + assertFalse(acquireRequest1.isDone()); + assertFalse(acquireRequest2.isDone()); + + closePool(pool, closeInEventLoop); + + assertTrue(acquireRequest1.isDone()); + assertThat(acquireRequest1.cause(), instanceOf(IllegalStateException.class)); + assertTrue(acquireRequest2.isDone()); + assertThat(acquireRequest2.cause(), instanceOf(IllegalStateException.class)); + + sc.close().syncUninterruptibly(); + } + + private void testCloseWithIdleChannels(boolean closeInEventLoop) throws Exception { + LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID); + Bootstrap cb = new Bootstrap(); + cb.remoteAddress(addr); + cb.group(group).channel(LocalChannel.class); + + ServerBootstrap sb = new ServerBootstrap(); + sb.group(group) + .channel(LocalServerChannel.class) + .childHandler(new ChannelInitializer<LocalChannel>() { + @Override + public void initChannel(LocalChannel ch) { + ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); + } + }); + + Channel sc = sb.bind(addr).syncUninterruptibly().channel(); + + FixedChannelPool pool = new FixedChannelPool(cb, new TestChannelPoolHandler(), 2); + Channel channel1 = pool.acquire().get(); + Channel channel2 = pool.acquire().get(); + pool.release(channel1).get(); + pool.release(channel2).get(); + + closePool(pool, closeInEventLoop); + + assertTrue(channel1.closeFuture().isSuccess()); + assertTrue(channel2.closeFuture().isSuccess()); + + sc.close().syncUninterruptibly(); + } + + private void closePool(final FixedChannelPool pool, final boolean inEventLoop) throws Exception { + if (inEventLoop) { + EventLoopGroup eventLoopGroup = pool.bootstrap().config().group(); + Future<?> poolCloseFuture = eventLoopGroup.submit(new Runnable() { + @Override + public void run() { + pool.close(); + } + }); + poolCloseFuture.get(); + } else { + pool.close(); + } + } + private static final class TestChannelPoolHandler extends AbstractChannelPoolHandler { @Override 
public void channelCreated(Channel ch) throws Exception { diff --git a/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java b/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java index a91790c38be..8fe562dc810 100644 --- a/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java +++ b/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java @@ -20,6 +20,7 @@ import io.netty.channel.Channel; import io.netty.channel.ChannelInboundHandlerAdapter; import io.netty.channel.ChannelInitializer; +import io.netty.channel.DefaultEventLoopGroup; import io.netty.channel.EventLoopGroup; import io.netty.channel.local.LocalAddress; import io.netty.channel.local.LocalChannel; @@ -27,9 +28,7 @@ import io.netty.channel.local.LocalServerChannel; import io.netty.util.concurrent.Future; import org.hamcrest.CoreMatchers; -import org.junit.Rule; import org.junit.Test; -import org.junit.rules.ExpectedException; import java.util.Queue; import java.util.concurrent.LinkedBlockingQueue; @@ -237,7 +236,9 @@ public void initChannel(LocalChannel ch) throws Exception { @Test public void testBootstrap() { - final SimpleChannelPool pool = new SimpleChannelPool(new Bootstrap(), new CountingChannelPoolHandler()); + final EventLoopGroup group = new DefaultEventLoopGroup(); + final Bootstrap bootstrap = new Bootstrap().group(group); + final SimpleChannelPool pool = new SimpleChannelPool(bootstrap, new CountingChannelPoolHandler()); try { // Checking for the actual bootstrap object doesn't make sense here, since the pool uses a copy with a @@ -245,26 +246,32 @@ public void testBootstrap() { assertNotNull(pool.bootstrap()); } finally { pool.close(); + group.shutdownGracefully(); } } @Test public void testHandler() { final ChannelPoolHandler handler = new CountingChannelPoolHandler(); - final SimpleChannelPool pool = new SimpleChannelPool(new Bootstrap(), handler); + final EventLoopGroup group = new DefaultEventLoopGroup(); + final 
Bootstrap bootstrap = new Bootstrap().group(group); + final SimpleChannelPool pool = new SimpleChannelPool(bootstrap, handler); try { assertSame(handler, pool.handler()); } finally { pool.close(); + group.shutdownGracefully(); } } @Test public void testHealthChecker() { final ChannelHealthChecker healthChecker = ChannelHealthChecker.ACTIVE; + final EventLoopGroup group = new DefaultEventLoopGroup(); + final Bootstrap bootstrap = new Bootstrap().group(group); final SimpleChannelPool pool = new SimpleChannelPool( - new Bootstrap(), + bootstrap, new CountingChannelPoolHandler(), healthChecker); @@ -272,13 +279,16 @@ public void testHealthChecker() { assertSame(healthChecker, pool.healthChecker()); } finally { pool.close(); + group.shutdownGracefully(); } } @Test public void testReleaseHealthCheck() { + final EventLoopGroup group = new DefaultEventLoopGroup(); + final Bootstrap bootstrap = new Bootstrap().group(group); final SimpleChannelPool healthCheckOnReleasePool = new SimpleChannelPool( - new Bootstrap(), + bootstrap, new CountingChannelPoolHandler(), ChannelHealthChecker.ACTIVE, true); @@ -290,7 +300,7 @@ public void testReleaseHealthCheck() { } final SimpleChannelPool noHealthCheckOnReleasePool = new SimpleChannelPool( - new Bootstrap(), + bootstrap, new CountingChannelPoolHandler(), ChannelHealthChecker.ACTIVE, false); @@ -300,5 +310,59 @@ public void testReleaseHealthCheck() { } finally { noHealthCheckOnReleasePool.close(); } + + group.shutdownGracefully(); + } + + @Test + public void testCloseWithIdleChannels() throws Exception { + testCloseWithIdleChannelsInEventLoop(false); + } + + @Test + public void testCloseWithIdleChannelsInEventLoop() throws Exception { + testCloseWithIdleChannelsInEventLoop(true); + } + + private void testCloseWithIdleChannelsInEventLoop(boolean closeInEventLoop) throws Exception { + EventLoopGroup group = new DefaultEventLoopGroup(); + LocalAddress address = new LocalAddress(LOCAL_ADDR_ID); + Bootstrap cb = new 
Bootstrap().remoteAddress(address).group(group).channel(LocalChannel.class); + + ServerBootstrap sb = new ServerBootstrap(); + sb.group(group) + .channel(LocalServerChannel.class) + .childHandler(new ChannelInitializer<LocalChannel>() { + @Override + public void initChannel(LocalChannel ch) { + ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); + } + }); + + Channel sc = sb.bind(address).sync().channel(); + + final ChannelPool pool = new SimpleChannelPool(cb, new CountingChannelPoolHandler()); + + Channel channel1 = pool.acquire().get(); + Channel channel2 = pool.acquire().get(); + pool.release(channel1).get(); + pool.release(channel2).get(); + + if (closeInEventLoop) { + group.submit(new Runnable() { + @Override + public void run() { + pool.close(); + } + }).get(); + } else { + pool.close(); + } + + assertTrue(channel1.closeFuture().isSuccess()); + assertTrue(channel2.closeFuture().isSuccess()); + + sc.close().get(); + group.shutdownGracefully(); } }
test
val
"2019-09-12T12:54:25"
"2018-10-17T15:02:56Z"
lutovich
val
netty/netty/8429_8448
netty/netty
netty/netty/8429
netty/netty/8448
[ "timestamp(timedelta=167495.0, similarity=0.9443557595221874)" ]
d4b1202e62e52dc9b5619e666f5b0033d8a32bc9
2cdf28216a9cd3745b972f8b7ad250f81ce6173d
[ "@Bennett-Lynch that's a good question.. I like the idea of the `ChannelOption` but I would need to investigate if this is easily doable without breaking any API (I suspect it should be doable).", "@Bennett-Lynch https://github.com/netty/netty/pull/8448 PTAL", "@normanmaurer I would like to re-open this issue, as I believe that the current implementation of making users manually inject `ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE` to writes is still cumbersome, and problematic in more complex pipelines that might have writes triggered by multiple sources.\r\n\r\nThe `MessageToMessageEncoder` fix was helpful, but I am still encountering use cases where I would like to be able to default this behavior to ON for an entire pipeline.\r\n\r\nFor example, if I have a handler that detects some special case and writes a response object to a pipeline (think `100-Continue`, where this is not the default write entry point), I have to ensure that it writes to the `Channel`, and not the `ChannelHandlerContext`, in order to pass through my `FIRE_EXCEPTION_ON_FAILURE` injecting-handler. Otherwise, I would have to insert a `FIRE_EXCEPTION_ON_FAILURE` injecting-handler before and after every single handler, as writes may enter the pipeline from any point, and they mail fail at any point. This introduces some unnecessary coupling and is just generally awkward.\r\n\r\nI want to be able to know about all outbound failures (even if they're my own programming fault), in a simple and transparent manner. I'm okay paying a small performance cost for it, if it's opt-in.\r\n\r\n@trustin @bryce-anderson @ejona86 Can I convince you of the same? Is there a better solution that I am not thinking of? 
The motivation that I'm after is fairly simple, in that I want guaranteed visibility on any outbound failure.", "> I have to ensure that it writes to the Channel, and not the ChannelHandlerContext, in order to pass through my FIRE_EXCEPTION_ON_FAILURE injecting-handler\r\n\r\nThis is generally necessary. While I agree it isn't obvious to new users, any time you get it wrong is a recipe for weird things to happen.\r\n\r\nAlthough I'm also not quite sure why writing to the Channel would break things in this particular case; it would eventually go through the current Handler's ctx either way.\r\n\r\n> Otherwise, I would have to insert a FIRE_EXCEPTION_ON_FAILURE injecting-handler before and after every single handler, as writes may enter the pipeline from any point, and they mail fail at any point.\r\n\r\nIf you need it between ever handler, then I'm not sure how FIRE_EXCEPTION_ON_FAILURE would be implemented. It seems in both cases there's a problem with adding a listener for every handler in the pipeline, which would not be great.\r\n\r\n", "@ejona86 My concern is that I'm essentially *required* to write to the tail of the pipeline every time. And there are valid use cases where you don't *always* want to write to the tail end of the pipeline. A simple encoder-like handler would be one.\r\n\r\nIf I have a pipeline like so:\r\n\r\n`{Socket} - [ A ] - [ B ] - [ C ] - [ D ]`\r\n\r\nWhere `A` is the first inbound handler, and `D` is the tail.\r\n\r\nAssume that writes can insert the pipeline at any of the handlers. 
And assume that any of the handlers can have their write fail for some unpredictable reason.\r\n\r\nTo safely cover my \"requirements\" (guaranteed visibility on any outbound failure), I would need to insert a handler that injects `ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE` in between all of the handlers picture above.\r\n\r\nThis is actually doable today, but it entails incredibly clunky behavior in place of what seems like should be a simple channel option.\r\n\r\nThere's not strictly a problem with adding a listener for every handler. Every handler can simply remove-then-add to ensure they are idempotent:\r\n\r\n```\r\n@Override\r\npublic void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {\r\n promise.removeListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);\r\n promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);\r\n ctx.write(msg, promise);\r\n}\r\n```\r\n\r\nBesides being clunky, this probably carries unnecessary performance costs as well.", "Okay. I see what you were meaning now. In any case a \"ChannelOption for FIRE_EXCEPTION_ON_FAILURE\" wouldn't fix this behavior; the Channel has just as hard of a time managing the future failure as you are having now.\r\n\r\nThe only idea I have would be for {ctx, channel, pipeline}.newPromise() to return a promise with the listener already added. The main way to \"break\" that behavior would be to construct a DefaultChannelPromise manually, which would be pretty rare. But having a promise with an auto-installed listener seems a bit too magical.", "@ejona86 The ask isn't so much to magically auto-install a listener, but to simply offer a channel option that will revert the behavior back to how it used to behave in Netty 3: https://github.com/netty/netty/pull/8448" ]
[]
"2018-10-31T08:15:05Z"
[]
ChannelOption for FIRE_EXCEPTION_ON_FAILURE?
I want to be able to react to *any* outbound channel failure. I can add a handler to the tail of my pipeline that will call `promise.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE)` for every outbound event, but this is not sufficient. Specifically, in the case of a `MessageToMessageEncoder` that outputs more than one object, new promises are created for all but the last object: https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec/src/main/java/io/netty/handler/codec/MessageToMessageEncoder.java#L112-L127 If one of those objects then triggers a failure on a subsequent outbound handler, the exception is suppressed. There might be other scenarios where this can happen as well. How can I react to these failures when I don't have the opportunity to inject a `ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE`? Would it be reasonable to have a `ChannelOption` for `FIRE_EXCEPTION_ON_FAILURE`? ### Netty version 4.1.30.Final
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java", "transport/src/main/java/io/netty/channel/ChannelOption.java", "transport/src/main/java/io/netty/channel/DefaultChannelConfig.java", "transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java" ]
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java", "transport/src/main/java/io/netty/channel/ChannelOption.java", "transport/src/main/java/io/netty/channel/DefaultChannelConfig.java", "transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java index 0a81bd6b0f5..1b3058e4e4d 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java @@ -878,12 +878,13 @@ private static boolean inExceptionCaught(Throwable cause) { @Override public ChannelPromise newPromise() { - return new DefaultChannelPromise(channel(), executor()); + return DefaultChannelPipeline.fireExceptionOndFailure(new DefaultChannelPromise(channel(), executor())); } @Override public ChannelProgressivePromise newProgressivePromise() { - return new DefaultChannelProgressivePromise(channel(), executor()); + return DefaultChannelPipeline.fireExceptionOndFailure( + new DefaultChannelProgressivePromise(channel(), executor())); } @Override @@ -897,7 +898,7 @@ public ChannelFuture newSucceededFuture() { @Override public ChannelFuture newFailedFuture(Throwable cause) { - return new FailedChannelFuture(channel(), executor(), cause); + return DefaultChannelPipeline.fireExceptionOndFailure(new FailedChannelFuture(channel(), executor(), cause)); } private boolean isNotValidPromise(ChannelPromise promise, boolean allowVoidPromise) { diff --git a/transport/src/main/java/io/netty/channel/ChannelOption.java b/transport/src/main/java/io/netty/channel/ChannelOption.java index 97bf31545c4..626cf700423 100644 --- a/transport/src/main/java/io/netty/channel/ChannelOption.java +++ b/transport/src/main/java/io/netty/channel/ChannelOption.java @@ -129,6 +129,12 @@ public static <T> ChannelOption<T> newInstance(String name) { public static final ChannelOption<Boolean> SINGLE_EVENTEXECUTOR_PER_GROUP = valueOf("SINGLE_EVENTEXECUTOR_PER_GROUP"); + /** + * If {@code true} {@link ChannelPipeline#fireExceptionCaught(Throwable)} is called whenever a {@link ChannelFuture} + * is failed. 
+ */ + public static final ChannelOption<Boolean> FIRE_EXCEPTION_ON_FAILURE = valueOf("FIRE_EXCEPTION_ON_FAILURE"); + /** * Creates a new {@link ChannelOption} with the specified unique {@code name}. */ diff --git a/transport/src/main/java/io/netty/channel/DefaultChannelConfig.java b/transport/src/main/java/io/netty/channel/DefaultChannelConfig.java index 4118708637c..198e5580596 100644 --- a/transport/src/main/java/io/netty/channel/DefaultChannelConfig.java +++ b/transport/src/main/java/io/netty/channel/DefaultChannelConfig.java @@ -23,18 +23,7 @@ import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import static io.netty.channel.ChannelOption.ALLOCATOR; -import static io.netty.channel.ChannelOption.AUTO_CLOSE; -import static io.netty.channel.ChannelOption.AUTO_READ; -import static io.netty.channel.ChannelOption.CONNECT_TIMEOUT_MILLIS; -import static io.netty.channel.ChannelOption.MAX_MESSAGES_PER_READ; -import static io.netty.channel.ChannelOption.MESSAGE_SIZE_ESTIMATOR; -import static io.netty.channel.ChannelOption.RCVBUF_ALLOCATOR; -import static io.netty.channel.ChannelOption.SINGLE_EVENTEXECUTOR_PER_GROUP; -import static io.netty.channel.ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK; -import static io.netty.channel.ChannelOption.WRITE_BUFFER_LOW_WATER_MARK; -import static io.netty.channel.ChannelOption.WRITE_BUFFER_WATER_MARK; -import static io.netty.channel.ChannelOption.WRITE_SPIN_COUNT; +import static io.netty.channel.ChannelOption.*; import static io.netty.util.internal.ObjectUtil.checkNotNull; /** @@ -64,6 +53,7 @@ public class DefaultChannelConfig implements ChannelConfig { private volatile boolean autoClose = true; private volatile WriteBufferWaterMark writeBufferWaterMark = WriteBufferWaterMark.DEFAULT; private volatile boolean pinEventExecutor = true; + private volatile boolean fireExceptionOnFailure; public DefaultChannelConfig(Channel channel) { this(channel, new 
AdaptiveRecvByteBufAllocator()); @@ -82,7 +72,7 @@ public Map<ChannelOption<?>, Object> getOptions() { CONNECT_TIMEOUT_MILLIS, MAX_MESSAGES_PER_READ, WRITE_SPIN_COUNT, ALLOCATOR, AUTO_READ, AUTO_CLOSE, RCVBUF_ALLOCATOR, WRITE_BUFFER_HIGH_WATER_MARK, WRITE_BUFFER_LOW_WATER_MARK, WRITE_BUFFER_WATER_MARK, MESSAGE_SIZE_ESTIMATOR, - SINGLE_EVENTEXECUTOR_PER_GROUP); + SINGLE_EVENTEXECUTOR_PER_GROUP, FIRE_EXCEPTION_ON_FAILURE); } protected Map<ChannelOption<?>, Object> getOptions( @@ -156,6 +146,9 @@ public <T> T getOption(ChannelOption<T> option) { if (option == SINGLE_EVENTEXECUTOR_PER_GROUP) { return (T) Boolean.valueOf(getPinEventExecutorPerGroup()); } + if (option == FIRE_EXCEPTION_ON_FAILURE) { + return (T) Boolean.valueOf(getFireExceptionOnFailure()); + } return null; } @@ -188,6 +181,8 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { setMessageSizeEstimator((MessageSizeEstimator) value); } else if (option == SINGLE_EVENTEXECUTOR_PER_GROUP) { setPinEventExecutorPerGroup((Boolean) value); + } else if (option == FIRE_EXCEPTION_ON_FAILURE) { + setFireExceptionOnFailure((Boolean) value); } else { return false; } @@ -436,4 +431,13 @@ private boolean getPinEventExecutorPerGroup() { return pinEventExecutor; } + private ChannelConfig setFireExceptionOnFailure(boolean fireExceptionOnFailure) { + this.fireExceptionOnFailure = fireExceptionOnFailure; + return this; + } + + // Package-private to allow fast access without using ChannelOption in DefaultChannelPipeline. 
+ boolean getFireExceptionOnFailure() { + return fireExceptionOnFailure; + } } diff --git a/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java b/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java index 0d3307a5351..207d3cbdd0d 100644 --- a/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java +++ b/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java @@ -67,6 +67,7 @@ protected Map<Class<?>, String> initialValue() throws Exception { private final Channel channel; private final ChannelFuture succeededFuture; private final VoidChannelPromise voidPromise; + private final boolean touch = ResourceLeakDetector.isEnabled(); private Map<EventExecutorGroup, EventExecutor> childExecutors; @@ -1073,12 +1074,20 @@ public final ChannelFuture writeAndFlush(Object msg) { @Override public final ChannelPromise newPromise() { - return new DefaultChannelPromise(channel); + return fireExceptionOndFailure(new DefaultChannelPromise(channel)); + } + + static <F extends ChannelFuture> F fireExceptionOndFailure(F future) { + ChannelConfig config = future.channel().config(); + if (config instanceof DefaultChannelConfig && ((DefaultChannelConfig) config).getFireExceptionOnFailure()) { + future.addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE); + } + return future; } @Override public final ChannelProgressivePromise newProgressivePromise() { - return new DefaultChannelProgressivePromise(channel); + return fireExceptionOndFailure(new DefaultChannelProgressivePromise(channel)); } @Override @@ -1088,11 +1097,13 @@ public final ChannelFuture newSucceededFuture() { @Override public final ChannelFuture newFailedFuture(Throwable cause) { - return new FailedChannelFuture(channel, null, cause); + return fireExceptionOndFailure(new FailedChannelFuture(channel, null, cause)); } @Override public final ChannelPromise voidPromise() { + // The voidPromise always calls fireExceptionCaught(...) 
so we do not need to take + // ChannelOption.FIRE_EXCEPTION_ON_FAILURE into account. return voidPromise; }
null
test
val
"2018-10-30T19:38:02"
"2018-10-24T20:26:02Z"
Bennett-Lynch
val
netty/netty/8475_8476
netty/netty
netty/netty/8475
netty/netty/8476
[ "keyword_pr_to_issue", "timestamp(timedelta=0.0, similarity=0.8643144759902545)" ]
11ec7d892e8eca8dc6e05062d02a9f59b6883dce
845a65b31c93eadd86414e6d0a753cd7a93b04c1
[ "@vietj thanks! Will provide a fix soon. " ]
[ "Add a bit of javadoc about the meaning of the returned boolean? (especially as we no longer delegate a call to the queue which could have effectively provided that information via its javadoc). Ditto the other `removeTask` methods.", "Yes this makes things less racy. Why not use a CAS loop here to make even less racy (especially as we have a volatile field updater available)? Might be overkill I grant.", "I see the `shutdown` method is `@Deprecated`. Is that to remove it from the public API but it will still be called by something? A bit of javadoc about the deprecation would be great.", "Just noticed that `addTask` already has that `if (isShutdown()) reject()` logic in it!", "@davidmoten sure.. ", "oh duh! you are right... ", "Will? :laughing: ", "ah I meant what does true mean and what does false mean. Does true mean it was found on the queue and removed? Does false mean it is either not supported or not found on the queue?", "Another thing we could do is just catch `UnsupportedOperationException`. WDYT ?", "Why is this code even caring about `isShutdown()`? It looks like this block should just be deleted. `isShutdown()` is checked by `addTask()`/`offerTask()`. If we added the entry to the queue, then it seems we should commit ourselves to it; attempting to remove it (which is not guaranteed to be successful, since the task may have already run) seems weird.\r\n\r\nIf this `isShutdown()` check was atomic with `startThread()`, I could sort of understand the removal since we know we didn't start the thread, but as it is it just seems like useless effort." ]
"2018-11-07T13:38:31Z"
[]
NioEventLoop task execution might throw UnsupportedOperationException on shutdown
### Expected behavior Execution instead throws `RejectedExecutionException`. ### Actual behavior There is a racy `UnsupportedOperationException` instead because the task removal is delegated to `MpscChunkedArrayQueue` that does not support removal. This happens with `SingleThreadEventExecutor` that overrides the `newTaskQueue` to return an MPSC queue instead of the `LinkedBlockingQueue` returned by the base class such as `NioEventLoop`, `EpollEventLoop` and `KQueueEventLoop`. ### Steps to reproduce Schedule task to an event loop concurrently to an event loop shutdown. ### Minimal yet complete reproducer code (or URL to code) ``` @Test public void testTaskRemovalOnShutdownThrowsUnsupportedOperationException() throws Exception { NioEventLoopGroup group = new NioEventLoopGroup(1); final NioEventLoop loop = (NioEventLoop) group.next(); final Runnable task = new Runnable() { @Override public void run() { try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } } }; new Thread() { @Override public void run() { while (true) { loop.execute(task); } } }.start(); Thread.sleep(10); loop.shutdownNow(); Thread.sleep(100000); } ``` this is a racy test, the event loop shutdown race against the the task producing thread, if you run it enough it will fail: ``` Exception in thread "Thread-0" java.lang.UnsupportedOperationException at org.jctools.queues.BaseMpscLinkedArrayQueue.iterator(BaseMpscLinkedArrayQueue.java:201) at java.util.AbstractCollection.remove(AbstractCollection.java:282) at io.netty.util.concurrent.SingleThreadEventExecutor.removeTask(SingleThreadEventExecutor.java:340) at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:781) at io.netty.channel.nio.NioEventLoopTest$8.run(NioEventLoopTest.java:195) ``` ### Netty version At least 4.1.x ### JVM version (e.g. `java -version`) N/A ### OS version (e.g. `uname -a`) N/A
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[ "transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java" ]
diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java index 67959016152..dab7e9500d0 100644 --- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java @@ -778,8 +778,20 @@ public void execute(Runnable task) { addTask(task); if (!inEventLoop) { startThread(); - if (isShutdown() && removeTask(task)) { - reject(); + if (isShutdown()) { + boolean reject = false; + try { + if (removeTask(task)) { + reject = true; + } + } catch (UnsupportedOperationException e) { + // The task queue does not support removal so the best thing we can do is to just move on and + // hope we will be able to pick-up the task before its completely terminated. + // In worst case we will log on termination. + } + if (reject) { + reject(); + } } }
diff --git a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java index d3412c2d996..15fcb212442 100644 --- a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java +++ b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java @@ -22,6 +22,7 @@ import io.netty.channel.socket.ServerSocketChannel; import io.netty.channel.socket.nio.NioServerSocketChannel; import io.netty.util.concurrent.Future; +import org.hamcrest.core.IsInstanceOf; import org.junit.Test; import java.net.InetSocketAddress; @@ -29,7 +30,9 @@ import java.nio.channels.Selector; import java.nio.channels.SocketChannel; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; import static org.junit.Assert.*; @@ -171,4 +174,41 @@ public void channelUnregistered(SocketChannel ch, Throwable cause) { group.shutdownGracefully(); } } + + @SuppressWarnings("deprecation") + @Test + public void testTaskRemovalOnShutdownThrowsNoUnsupportedOperationException() throws Exception { + final AtomicReference<Throwable> error = new AtomicReference<Throwable>(); + final Runnable task = new Runnable() { + @Override + public void run() { + // NOOP + } + }; + // Just run often enough to trigger it normally. + for (int i = 0; i < 1000; i++) { + NioEventLoopGroup group = new NioEventLoopGroup(1); + final NioEventLoop loop = (NioEventLoop) group.next(); + + Thread t = new Thread(new Runnable() { + @Override + public void run() { + try { + for (;;) { + loop.execute(task); + } + } catch (Throwable cause) { + error.set(cause); + } + } + }); + t.start(); + group.shutdownNow(); + t.join(); + group.terminationFuture().syncUninterruptibly(); + assertThat(error.get(), IsInstanceOf.instanceOf(RejectedExecutionException.class)); + error.set(null); + } + } + }
train
val
"2018-11-14T08:19:06"
"2018-11-07T12:25:19Z"
vietj
val
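The race in the record above — a task queue (JCTools' `BaseMpscLinkedArrayQueue`) whose `iterator()` is unsupported, so `Collection.remove(...)` throws `UnsupportedOperationException` during shutdown — can be illustrated with a minimal, Netty-free Java sketch. `NoRemovalQueue` and `removeTaskSafely` are hypothetical stand-ins for the MPSC queue and the patched `removeTask` guard, not Netty's actual classes:

```java
import java.util.AbstractQueue;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class RemoveTaskSketch {
    // Stand-in for an MPSC queue: iterator() is unsupported, so the inherited
    // AbstractCollection.remove(Object) throws UnsupportedOperationException.
    static final class NoRemovalQueue<T> extends AbstractQueue<T> {
        private final Queue<T> delegate = new ConcurrentLinkedQueue<T>();
        @Override public boolean offer(T t) { return delegate.offer(t); }
        @Override public T poll() { return delegate.poll(); }
        @Override public T peek() { return delegate.peek(); }
        @Override public int size() { return delegate.size(); }
        @Override public Iterator<T> iterator() { throw new UnsupportedOperationException(); }
    }

    // Mirrors the shape of the gold patch: catch the UnsupportedOperationException
    // and move on instead of letting it escape to the caller of execute(...).
    static boolean removeTaskSafely(Queue<Runnable> taskQueue, Runnable task) {
        try {
            return taskQueue.remove(task);
        } catch (UnsupportedOperationException e) {
            // Queue does not support removal; best effort only.
            return false;
        }
    }

    public static void main(String[] args) {
        Queue<Runnable> q = new NoRemovalQueue<Runnable>();
        Runnable task = new Runnable() { public void run() { } };
        q.offer(task);
        // Without the guard, remove(...) would throw here, as in the reported trace.
        System.out.println(removeTaskSafely(q, task)); // prints false
    }
}
```

With the guard, a racing `execute(...)` during shutdown falls through to `reject()` (raising `RejectedExecutionException`) or lets the task be drained on termination, instead of surfacing the queue's internal `UnsupportedOperationException`.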
netty/netty/8480_8482
netty/netty
netty/netty/8480
netty/netty/8482
[ "timestamp(timedelta=0.0, similarity=0.9107569309733597)" ]
8a24df88a4ff5519176767aee01bf6186015d471
a140e6dcad0c95b8867f4c6ff1e14537d5fd6cab
[ "@bryce-anderson ouch... yes this is not good :( That said I suspect there will most likely not a lot of problems in real world use-cases as you either use SSL or not for all Channel (sharing between different parent Channels seems very unlikely).\r\n\r\nThat said we should fix. I guess an attribute on the parent channel should do. ", "I also don't think it's a serious problem. In Finagle we simply make one per-channel. The only reason I found this is what I was about to cache instances of these and happened to notice that the `scheme` field was mutated after construction.\r\n\r\nI'll put up a patch." ]
[ "nit: you can remove the `()`", "nit: you can remove the `()`" ]
"2018-11-08T21:31:50Z"
[]
Http2StreamFrameToHttpObjectCodec is marked @Sharable but is mutated by channel specific state
### Expected behavior The `Http2StreamFrameToHttpObjectCodec` is either _not_ `@Sharable` _or_ it doesn't mutate its `scheme` field when added to a pipeline as it currently does [here](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java#L221). ### Netty version 4.1 as of SHA 8a24df88a4ff55.
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java index 36a7fba1c0c..e13a45fe015 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodec.java @@ -41,6 +41,8 @@ import io.netty.handler.codec.http.HttpVersion; import io.netty.handler.codec.http.LastHttpContent; import io.netty.handler.ssl.SslHandler; +import io.netty.util.Attribute; +import io.netty.util.AttributeKey; import io.netty.util.internal.UnstableApi; import java.util.List; @@ -57,16 +59,17 @@ @UnstableApi @Sharable public class Http2StreamFrameToHttpObjectCodec extends MessageToMessageCodec<Http2StreamFrame, HttpObject> { + + private static final AttributeKey<HttpScheme> SCHEME_ATTR_KEY = + AttributeKey.valueOf(HttpScheme.class, "STREAMFRAMECODEC_SCHEME"); + private final boolean isServer; private final boolean validateHeaders; - private HttpScheme scheme; - public Http2StreamFrameToHttpObjectCodec(final boolean isServer, final boolean validateHeaders) { this.isServer = isServer; this.validateHeaders = validateHeaders; - scheme = HttpScheme.HTTP; } public Http2StreamFrameToHttpObjectCodec(final boolean isServer) { @@ -154,7 +157,7 @@ protected void encode(ChannelHandlerContext ctx, HttpObject obj, List<Object> ou final HttpResponse res = (HttpResponse) obj; if (res.status().equals(HttpResponseStatus.CONTINUE)) { if (res instanceof FullHttpResponse) { - final Http2Headers headers = toHttp2Headers(res); + final Http2Headers headers = toHttp2Headers(ctx, res); out.add(new DefaultHttp2HeadersFrame(headers, false)); return; } else { @@ -165,7 +168,7 @@ protected void encode(ChannelHandlerContext ctx, HttpObject obj, List<Object> ou } if (obj instanceof HttpMessage) { - Http2Headers headers = 
toHttp2Headers((HttpMessage) obj); + Http2Headers headers = toHttp2Headers(ctx, (HttpMessage) obj); boolean noMoreFrames = false; if (obj instanceof FullHttpMessage) { FullHttpMessage full = (FullHttpMessage) obj; @@ -184,11 +187,11 @@ protected void encode(ChannelHandlerContext ctx, HttpObject obj, List<Object> ou } } - private Http2Headers toHttp2Headers(final HttpMessage msg) { + private Http2Headers toHttp2Headers(final ChannelHandlerContext ctx, final HttpMessage msg) { if (msg instanceof HttpRequest) { msg.headers().set( HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), - scheme.name()); + connectionScheme(ctx)); } return HttpConversionUtil.toHttp2Headers(msg, validateHeaders); @@ -213,17 +216,35 @@ private FullHttpMessage newFullMessage(final int id, public void handlerAdded(final ChannelHandlerContext ctx) throws Exception { super.handlerAdded(ctx); - // this handler is typically used on an Http2StreamChannel. at this + // this handler is typically used on an Http2StreamChannel. At this // stage, ssl handshake should've been established. checking for the // presence of SslHandler in the parent's channel pipeline to // determine the HTTP scheme should suffice, even for the case where // SniHandler is used. - scheme = isSsl(ctx) ? HttpScheme.HTTPS : HttpScheme.HTTP; + final Attribute<HttpScheme> schemeAttribute = connectionSchemeAttribute(ctx); + if (schemeAttribute.get() == null) { + final HttpScheme scheme = isSsl(ctx) ? HttpScheme.HTTPS : HttpScheme.HTTP; + schemeAttribute.set(scheme); + } } protected boolean isSsl(final ChannelHandlerContext ctx) { - final Channel ch = ctx.channel(); - final Channel connChannel = (ch instanceof Http2StreamChannel) ? ch.parent() : ch; + final Channel connChannel = connectionChannel(ctx); return null != connChannel.pipeline().get(SslHandler.class); } + + private static HttpScheme connectionScheme(ChannelHandlerContext ctx) { + final HttpScheme scheme = connectionSchemeAttribute(ctx).get(); + return scheme == null ? 
HttpScheme.HTTP : scheme; + } + + private static Attribute<HttpScheme> connectionSchemeAttribute(ChannelHandlerContext ctx) { + final Channel ch = connectionChannel(ctx); + return ch.attr(SCHEME_ATTR_KEY); + } + + private static Channel connectionChannel(ChannelHandlerContext ctx) { + final Channel ch = ctx.channel(); + return ch instanceof Http2StreamChannel ? ch.parent() : ch; + } }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodecTest.java index 45e781c86c4..393a4060ef4 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2StreamFrameToHttpObjectCodecTest.java @@ -19,6 +19,7 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; import io.netty.buffer.Unpooled; +import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelOutboundHandlerAdapter; import io.netty.channel.ChannelPromise; @@ -871,4 +872,65 @@ public void testPassThroughOtherAsClient() throws Exception { frame.release(); } } + + @Test + public void testIsSharableBetweenChannels() throws Exception { + final Queue<Http2StreamFrame> frames = new ConcurrentLinkedQueue<Http2StreamFrame>(); + final ChannelHandler sharedHandler = new Http2StreamFrameToHttpObjectCodec(false); + + final SslContext ctx = SslContextBuilder.forClient().sslProvider(SslProvider.JDK).build(); + EmbeddedChannel tlsCh = new EmbeddedChannel(ctx.newHandler(ByteBufAllocator.DEFAULT), + new ChannelOutboundHandlerAdapter() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + if (msg instanceof Http2StreamFrame) { + frames.add((Http2StreamFrame) msg); + promise.setSuccess(); + } else { + ctx.write(msg, promise); + } + } + }, sharedHandler); + + EmbeddedChannel plaintextCh = new EmbeddedChannel( + new ChannelOutboundHandlerAdapter() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + if (msg instanceof Http2StreamFrame) { + frames.add((Http2StreamFrame) msg); + promise.setSuccess(); + } else { + ctx.write(msg, promise); + } + } + }, sharedHandler); + + FullHttpRequest req = new 
DefaultFullHttpRequest( + HttpVersion.HTTP_1_1, HttpMethod.GET, "/hello/world"); + assertTrue(tlsCh.writeOutbound(req)); + assertTrue(tlsCh.finishAndReleaseAll()); + + Http2HeadersFrame headersFrame = (Http2HeadersFrame) frames.poll(); + Http2Headers headers = headersFrame.headers(); + + assertThat(headers.scheme().toString(), is("https")); + assertThat(headers.method().toString(), is("GET")); + assertThat(headers.path().toString(), is("/hello/world")); + assertTrue(headersFrame.isEndStream()); + assertNull(frames.poll()); + + // Run the plaintext channel + req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/hello/world"); + assertFalse(plaintextCh.writeOutbound(req)); + assertFalse(plaintextCh.finishAndReleaseAll()); + + headersFrame = (Http2HeadersFrame) frames.poll(); + headers = headersFrame.headers(); + + assertThat(headers.scheme().toString(), is("http")); + assertThat(headers.method().toString(), is("GET")); + assertThat(headers.path().toString(), is("/hello/world")); + assertTrue(headersFrame.isEndStream()); + assertNull(frames.poll()); + } }
test
val
"2018-11-08T15:22:33"
"2018-11-08T19:19:38Z"
bryce-anderson
val
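The defect in the record above — a `@Sharable` handler keeping per-channel state (`scheme`) in an instance field — can be shown without Netty. In this illustrative sketch, `Channel`, `FieldHandler`, and `AttrHandler` are hypothetical stand-ins; the map keyed by channel mirrors the `AttributeKey`-on-the-parent-channel approach taken by the fix:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SharableSketch {
    static final class Channel {
        final boolean ssl;
        Channel(boolean ssl) { this.ssl = ssl; }
    }

    // Buggy: one field shared by every channel, overwritten on each handlerAdded.
    static final class FieldHandler {
        private String scheme = "http";
        void handlerAdded(Channel ch) { scheme = ch.ssl ? "https" : "http"; }
        String scheme(Channel ch) { return scheme; }
    }

    // Fixed: state keyed by channel, analogous to storing the scheme in a
    // channel Attribute so the handler instance stays stateless and sharable.
    static final class AttrHandler {
        private final Map<Channel, String> schemes = new ConcurrentHashMap<Channel, String>();
        void handlerAdded(Channel ch) { schemes.putIfAbsent(ch, ch.ssl ? "https" : "http"); }
        String scheme(Channel ch) { return schemes.getOrDefault(ch, "http"); }
    }

    public static void main(String[] args) {
        Channel tls = new Channel(true);
        Channel plain = new Channel(false);

        FieldHandler buggy = new FieldHandler();
        buggy.handlerAdded(tls);
        buggy.handlerAdded(plain);
        // The later channel clobbered the earlier one's scheme.
        System.out.println(buggy.scheme(tls));   // prints http (wrong)

        AttrHandler fixed = new AttrHandler();
        fixed.handlerAdded(tls);
        fixed.handlerAdded(plain);
        System.out.println(fixed.scheme(tls));   // prints https
        System.out.println(fixed.scheme(plain)); // prints http
    }
}
```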
netty/netty/8479_8486
netty/netty
netty/netty/8479
netty/netty/8486
[ "timestamp(timedelta=0.0, similarity=0.9238030710950786)" ]
8a24df88a4ff5519176767aee01bf6186015d471
c0dfb568a2e13feb0872243e6d1a57c5dab15fd5
[ "@tbrooks8 thanks... I will try to find some time to fix this or if you are interested in providing a PR I am happy to review as well :)", "I might have time tomorrow to put together a PR. I'll let you know.", "@tbrooks8 don't worry I am able to reproduce with a unit test... Seems to only happen on Java11 tho :) Should have a fix soon.", "Should be fixed by https://github.com/netty/netty/pull/8486" ]
[ "Could we include the condition under which we can expect this, specifically that the handshake must have already happened/started (and failed?) before the channelActive event was fired.", "@bryce-anderson yep.. Let me update the comment" ]
"2018-11-09T14:20:46Z"
[ "defect" ]
SSLHandler can fail if writes occur before `channelActive` fired
### Expected behavior Attempting to write to a channel with an SslHandler when the `channel.writeable.isWritable()` returns true should be allowed. ### Actual behavior If you attempt to write to a channel with an `SslHandler` prior to `channelActive` being called you can hit an assertion. In particular - if you write to a channel it forces some handshaking (through flush calls) to occur. If the handshake fails - the promise is set to failed. This happens around line `1573` in the `SslHandler.java`. ``` if (handshakePromise.tryFailure(cause) || alwaysFlushAndClose) { SslUtils.handleHandshakeFailure(ctx, cause, notify); } ``` I can tell that `channelActive` has not been called because I am in the debugger and the `handshakeStarted` field returns false. However, `ctx.channel().isWriteable()` returns true. So I do not think I am doing anything incorrect. Eventually netty gets around to calling `channelActive`. This call makes its way down to: ``` private void handshake(final Promise<Channel> newHandshakePromise) { final Promise<Channel> p; if (newHandshakePromise != null) { final Promise<Channel> oldHandshakePromise = handshakePromise; if (!oldHandshakePromise.isDone()) { // There's no need to handshake because handshake is in progress already. // Merge the new promise into the old one. oldHandshakePromise.addListener(new FutureListener<Channel>() { @Override public void operationComplete(Future<Channel> future) throws Exception { if (future.isSuccess()) { newHandshakePromise.setSuccess(future.getNow()); } else { newHandshakePromise.setFailure(future.cause()); } } }); return; } handshakePromise = p = newHandshakePromise; } else if (engine.getHandshakeStatus() != HandshakeStatus.NOT_HANDSHAKING) { // Not all SSLEngine implementations support calling beginHandshake multiple times while a handshake // is in progress. See https://github.com/netty/netty/issues/4718. return; } else { // Forced to reuse the old handshake. 
p = handshakePromise; assert !p.isDone(); } // Begin handshake. final ChannelHandlerContext ctx = this.ctx; try { engine.beginHandshake(); wrapNonAppData(ctx, false); } catch (Throwable e) { setHandshakeFailure(ctx, e); } finally { forceFlush(ctx); } applyHandshakeTimeout(p); } ``` via: ``` at io.netty.handler.ssl.SslHandler.handshake(SslHandler.java:1747) at io.netty.handler.ssl.SslHandler.startHandshakeProcessing(SslHandler.java:1666) at io.netty.handler.ssl.SslHandler.channelActive(SslHandler.java:1807) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:213) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:199) at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:192) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1422) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:213) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:199) at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:941) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:311) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) at java.base/java.lang.Thread.run(Thread.java:834) ``` There is an assertion at line 1747 that triggers because the 
handshake status is set to HandshakeStatus.NOT_HANDSHAKING and the handshake promise is done: ``` } else if (engine.getHandshakeStatus() != HandshakeStatus.NOT_HANDSHAKING) { // Not all SSLEngine implementations support calling beginHandshake multiple times while a handshake // is in progress. See https://github.com/netty/netty/issues/4718. return; } else { // Forced to reuse the old handshake. p = handshakePromise; assert !p.isDone(); } ``` ### Steps to reproduce I reliably reproduce this by attaching a listener to the netty connect future that initiates an application level write. When the connection is complete we start a write the does enough to cause the handshake to fail before the `channelActive` is called. The reason the handshake is failing is a specific scenario we are testing: ``` javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate) at sun.security.ssl.HandshakeContext.<init>(HandshakeContext.java:163) ~[?:?] at sun.security.ssl.ClientHandshakeContext.<init>(ClientHandshakeContext.java:95) ~[?:?] at sun.security.ssl.TransportContext.kickstart(TransportContext.java:217) ~[?:?] at sun.security.ssl.SSLEngineImpl.writeRecord(SSLEngineImpl.java:167) ~[?:?] at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:136) ~[?:?] at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:116) ~[?:?] ``` Which explains why the handshake is failing so fast. 
The stacktrace for the connection callback -> write process is: ``` at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1573) at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1542) at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:776) at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) at io.netty.handler.logging.LoggingHandler.flush(LoggingHandler.java:265) at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802) at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814) at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1066) at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:305) at org.elasticsearch.transport.netty4.Netty4TcpChannel.sendMessage(Netty4TcpChannel.java:142) <Irrelevant Elasticsearch code> org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$new$1(Netty4TcpChannel.java:67) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) at 
io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:103) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) at java.base/java.lang.Thread.run(Thread.java:834) ``` ### Netty version Netty 4.1.30-Final ### JVM version (e.g. `java -version`) openjdk version "11" 2018-09-25 ### OS version (e.g. `uname -a`) Darwin 18.2.0 Darwin Kernel Version 18.2.0
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java index f1de1360bfe..0e3db8d83a8 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java @@ -1745,9 +1745,16 @@ public void operationComplete(Future<Channel> future) throws Exception { // is in progress. See https://github.com/netty/netty/issues/4718. return; } else { + if (handshakePromise.isDone()) { + // If the handshake is done already lets just return directly as there is no need to trigger it again. + // This can happen if the handshake(...) was triggered before we called channelActive(...) by a + // flush() that was triggered by a ChannelFutureListener that was added to the ChannelFuture returned + // from the connect(...) method. In this case we will see the flush() happen before we had a chance to + // call fireChannelActive() on the pipeline. + return; + } // Forced to reuse the old handshake. p = handshakePromise; - assert !p.isDone(); } // Begin handshake.
diff --git a/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java index 75375008f78..ab7a63c908c 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java @@ -54,6 +54,7 @@ import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; import io.netty.util.concurrent.Promise; +import org.hamcrest.CoreMatchers; import org.junit.Test; import java.net.InetSocketAddress; @@ -63,6 +64,7 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; @@ -679,4 +681,77 @@ public void testOutboundClosedAfterChannelInactive() throws Exception { assertTrue(engine.isOutboundDone()); } + + @Test(timeout = 10000) + public void testHandshakeFailedByWriteBeforeChannelActive() throws Exception { + final SslContext sslClientCtx = SslContextBuilder.forClient() + .protocols(SslUtils.PROTOCOL_SSL_V3) + .trustManager(InsecureTrustManagerFactory.INSTANCE) + .sslProvider(SslProvider.JDK).build(); + + EventLoopGroup group = new NioEventLoopGroup(); + Channel sc = null; + Channel cc = null; + final CountDownLatch activeLatch = new CountDownLatch(1); + final AtomicReference<AssertionError> errorRef = new AtomicReference<AssertionError>(); + final SslHandler sslHandler = sslClientCtx.newHandler(UnpooledByteBufAllocator.DEFAULT); + try { + sc = new ServerBootstrap() + .group(group) + .channel(NioServerSocketChannel.class) + .childHandler(new ChannelInboundHandlerAdapter()) + .bind(new InetSocketAddress(0)).syncUninterruptibly().channel(); + + cc = new Bootstrap() + .group(group) + .channel(NioSocketChannel.class) + .handler(new ChannelInitializer<Channel>() { + @Override + protected void 
initChannel(Channel ch) throws Exception { + ch.pipeline().addLast(sslHandler); + ch.pipeline().addLast(new ChannelInboundHandlerAdapter() { + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) + throws Exception { + if (cause instanceof AssertionError) { + errorRef.set((AssertionError) cause); + } + } + + @Override + public void channelActive(ChannelHandlerContext ctx) throws Exception { + activeLatch.countDown(); + } + }); + } + }).connect(sc.localAddress()).addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + // Write something to trigger the handshake before fireChannelActive is called. + future.channel().writeAndFlush(wrappedBuffer(new byte [] { 1, 2, 3, 4 })); + } + }).syncUninterruptibly().channel(); + + // Ensure there is no AssertionError thrown by having the handshake failed by the writeAndFlush(...) before + // channelActive(...) was called. Let's first wait for the activeLatch countdown to happen and after this + // check if we saw and AssertionError (even if we timed out waiting). + activeLatch.await(5, TimeUnit.SECONDS); + AssertionError error = errorRef.get(); + if (error != null) { + throw error; + } + assertThat(sslHandler.handshakeFuture().await().cause(), + CoreMatchers.<Throwable>instanceOf(SSLException.class)); + } finally { + if (cc != null) { + cc.close().syncUninterruptibly(); + } + if (sc != null) { + sc.close().syncUninterruptibly(); + } + group.shutdownGracefully(); + + ReferenceCountUtil.release(sslClientCtx); + } + } }
train
val
"2018-11-08T15:22:33"
"2018-11-08T19:15:29Z"
Tim-Brooks
val
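The ordering bug in the record above — a `flush()` triggered by a connect-future listener fails the handshake promise before `channelActive()` runs, tripping `assert !p.isDone()` — reduces to a small state machine. This sketch is illustrative only: `Promise` and `Handler` are simplified stand-ins, not Netty's actual `SslHandler` fields, and the `channelActive` branch mirrors the early-return added by the gold patch:

```java
class HandshakeRaceSketch {
    static final class Promise {
        private boolean done;
        private boolean failed;
        boolean tryFailure() {
            if (done) { return false; }
            done = true;
            failed = true;
            return true;
        }
        boolean isDone() { return done; }
        boolean isFailed() { return failed; }
    }

    static final class Handler {
        final Promise handshakePromise = new Promise();

        // flush() before channelActive(): the handshake is attempted and fails
        // immediately (e.g. "No appropriate protocol" on JDK 11).
        void flush() { handshakePromise.tryFailure(); }

        // Patched handshake(): return early instead of assert !p.isDone().
        void channelActive() {
            if (handshakePromise.isDone()) {
                return; // already triggered (and here failed) by the early flush
            }
            // ...begin the handshake normally...
        }
    }

    public static void main(String[] args) {
        Handler h = new Handler();
        h.flush();          // listener on the connect future writes first
        h.channelActive();  // must not throw AssertionError
        System.out.println(h.handshakePromise.isFailed()); // prints true
    }
}
```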
netty/netty/8493_8494
netty/netty
netty/netty/8493
netty/netty/8494
[ "timestamp(timedelta=0.0, similarity=0.9500296752188567)" ]
c0dfb568a2e13feb0872243e6d1a57c5dab15fd5
4c73d24ea8f1bc4d51d2cd85580edbb48aaef5bb
[ "The same is also true when `starttls` is used." ]
[]
"2018-11-11T18:43:56Z"
[ "defect" ]
Handshake timeout may never be scheduled if handshake starts via a flush
Reported by @tbrooks8: A client (like my case) initiates the handshake in a flush call. The client hello message is generated and flushed. The SSLEngine now returns a handshake status of NEED_UNWRAP. When the channelActive call happens, since the handshake status is not HandshakeStatus.NOT_HANDSHAKING (line 1740), the method returns immediately without scheduling a timeout. There is now no timeout for this handshake. The flush route for starting a handshake does not call startHandshakeProcessing; that is only called when the handler is added and the channel is already active, or in the channelActive call. The flush route allows the SSLEngine handshake to start prior to these things happening, and the status of the engine then causes the handshake(final Promise<Channel> newHandshakePromise) method to exit early. This was reported as part of https://github.com/netty/netty/pull/8486
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
index 0e3db8d83a8..4e73796e98c 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
@@ -767,6 +767,9 @@ public void flush(ChannelHandlerContext ctx) throws Exception {
             sentFirstMessage = true;
             pendingUnencryptedWrites.writeAndRemoveAll(ctx);
             forceFlush(ctx);
+            // Explicit start handshake processing once we send the first message. This will also ensure
+            // we will schedule the timeout if needed.
+            startHandshakeProcessing();
             return;
         }
 
@@ -1661,14 +1664,16 @@ public void handlerAdded(final ChannelHandlerContext ctx) throws Exception {
     }
 
     private void startHandshakeProcessing() {
-        handshakeStarted = true;
-        if (engine.getUseClientMode()) {
-            // Begin the initial handshake.
-            // channelActive() event has been fired already, which means this.channelActive() will
-            // not be invoked. We have to initialize here instead.
-            handshake(null);
-        } else {
-            applyHandshakeTimeout(null);
+        if (!handshakeStarted) {
+            handshakeStarted = true;
+            if (engine.getUseClientMode()) {
+                // Begin the initial handshake.
+                // channelActive() event has been fired already, which means this.channelActive() will
+                // not be invoked. We have to initialize here instead.
+                handshake(null, true);
+            } else {
+                applyHandshakeTimeout(null);
+            }
         }
     }
 
@@ -1702,13 +1707,13 @@ public Future<Channel> renegotiate(final Promise<Channel> promise) {
             executor.execute(new Runnable() {
                 @Override
                 public void run() {
-                    handshake(promise);
+                    handshake(promise, false);
                 }
             });
             return promise;
         }
 
-        handshake(promise);
+        handshake(promise, false);
         return promise;
     }
 
@@ -1719,7 +1724,7 @@ public void run() {
      * assuming that the current negotiation has not been finished.
      * Currently, {@code null} is expected only for the initial handshake.
      */
-    private void handshake(final Promise<Channel> newHandshakePromise) {
+    private void handshake(final Promise<Channel> newHandshakePromise, boolean initialHandshake) {
         final Promise<Channel> p;
         if (newHandshakePromise != null) {
             final Promise<Channel> oldHandshakePromise = handshakePromise;
@@ -1741,6 +1746,11 @@ public void operationComplete(Future<Channel> future) throws Exception {
             handshakePromise = p = newHandshakePromise;
         } else if (engine.getHandshakeStatus() != HandshakeStatus.NOT_HANDSHAKING) {
+            if (initialHandshake) {
+                // This is the intial handshake either triggered by handlerAdded(...), channelActive(...) or
+                // flush(...) when starttls was used. In all the cases we need to ensure we schedule a timeout.
+                applyHandshakeTimeout(null);
+            }
             // Not all SSLEngine implementations support calling beginHandshake multiple times while a handshake
             // is in progress. See https://github.com/netty/netty/issues/4718.
             return;
diff --git a/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java
index ab7a63c908c..772a15f9ebc 100644
--- a/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/SslHandlerTest.java
@@ -74,9 +74,7 @@
 import javax.net.ssl.SSLProtocolException;
 
 import static io.netty.buffer.Unpooled.wrappedBuffer;
-import static org.hamcrest.CoreMatchers.instanceOf;
-import static org.hamcrest.CoreMatchers.is;
-import static org.hamcrest.CoreMatchers.nullValue;
+import static org.hamcrest.CoreMatchers.*;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertThat;
@@ -754,4 +752,76 @@ public void operationComplete(ChannelFuture future) throws Exception {
             ReferenceCountUtil.release(sslClientCtx);
         }
     }
+
+    @Test(timeout = 10000)
+    public void testHandshakeTimeoutFlushStartsHandshake() throws Exception {
+        testHandshakeTimeout0(false);
+    }
+
+    @Test(timeout = 10000)
+    public void testHandshakeTimeoutStartTLS() throws Exception {
+        testHandshakeTimeout0(true);
+    }
+
+    private static void testHandshakeTimeout0(final boolean startTls) throws Exception {
+        final SslContext sslClientCtx = SslContextBuilder.forClient()
+                .startTls(true)
+                .trustManager(InsecureTrustManagerFactory.INSTANCE)
+                .sslProvider(SslProvider.JDK).build();
+
+        EventLoopGroup group = new NioEventLoopGroup();
+        Channel sc = null;
+        Channel cc = null;
+        final SslHandler sslHandler = sslClientCtx.newHandler(UnpooledByteBufAllocator.DEFAULT);
+        sslHandler.setHandshakeTimeout(500, TimeUnit.MILLISECONDS);
+
+        try {
+            sc = new ServerBootstrap()
+                    .group(group)
+                    .channel(NioServerSocketChannel.class)
+                    .childHandler(new ChannelInboundHandlerAdapter())
+                    .bind(new InetSocketAddress(0)).syncUninterruptibly().channel();
+
+            ChannelFuture future = new Bootstrap()
+                    .group(group)
+                    .channel(NioSocketChannel.class)
+                    .handler(new ChannelInitializer<Channel>() {
+                        @Override
+                        protected void initChannel(Channel ch) throws Exception {
+                            ch.pipeline().addLast(sslHandler);
+                            if (startTls) {
+                                ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
+                                    @Override
+                                    public void channelActive(ChannelHandlerContext ctx) throws Exception {
+                                        ctx.writeAndFlush(wrappedBuffer(new byte[] { 1, 2, 3, 4 }));
+                                    }
+                                });
+                            }
+                        }
+                    }).connect(sc.localAddress());
+            if (!startTls) {
+                future.addListener(new ChannelFutureListener() {
+                    @Override
+                    public void operationComplete(ChannelFuture future) throws Exception {
+                        // Write something to trigger the handshake before fireChannelActive is called.
+                        future.channel().writeAndFlush(wrappedBuffer(new byte [] { 1, 2, 3, 4 }));
+                    }
+                });
+            }
+            cc = future.syncUninterruptibly().channel();
+
+            Throwable cause = sslHandler.handshakeFuture().await().cause();
+            assertThat(cause, CoreMatchers.<Throwable>instanceOf(SSLException.class));
+            assertThat(cause.getMessage(), containsString("timed out"));
+        } finally {
+            if (cc != null) {
+                cc.close().syncUninterruptibly();
+            }
+            if (sc != null) {
+                sc.close().syncUninterruptibly();
+            }
+            group.shutdownGracefully();
+            ReferenceCountUtil.release(sslClientCtx);
+        }
+    }
 }
train
val
"2018-11-11T07:23:08"
"2018-11-11T06:25:14Z"
normanmaurer
val
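The SslHandler patch in the record above boils down to two ideas: make the handshake-start path idempotent behind a `handshakeStarted` guard, and make sure the handshake timeout gets scheduled no matter which trigger fires first (`handlerAdded(...)`, `channelActive(...)`, or the first `flush(...)` when starttls is used). A minimal, Netty-free sketch of that guard-plus-timeout shape (all class and method names below are illustrative, not Netty API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch of the fix's shape: an idempotent "start handshake"
// entry point that always arms a timeout, regardless of which event
// (handlerAdded / channelActive / first flush) reached it first.
class HandshakeStarter {
    private final ScheduledExecutorService timer;
    private final CompletableFuture<Void> handshakeFuture = new CompletableFuture<>();
    private boolean started; // guard: start processing runs at most once

    HandshakeStarter(ScheduledExecutorService timer) {
        this.timer = timer;
    }

    // May be called from several triggers; only the first call does anything.
    synchronized void startHandshakeProcessing(long timeoutMillis) {
        if (started) {
            return;
        }
        started = true;
        // Always schedule the timeout, no matter which trigger fired first.
        timer.schedule(() -> handshakeFuture.completeExceptionally(
                new TimeoutException("handshake timed out")), timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Completing first wins; a later timeout firing is then a no-op.
    void finishHandshake() {
        handshakeFuture.complete(null);
    }

    CompletableFuture<Void> future() {
        return handshakeFuture;
    }
}
```

The bug being fixed was exactly the missing half of this pattern: a start path (first `flush(...)` in starttls mode) that began the handshake without arming the timeout, so a peer that never responded left the handshake future pending forever.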
netty/netty/8495_8497
netty/netty
netty/netty/8495
netty/netty/8497
[ "keyword_pr_to_issue" ]
4c73d24ea8f1bc4d51d2cd85580edbb48aaef5bb
804e1fa9ccf426603e6d478f63c63bc00baae0fa
[ "@rkapsi so just to clarify... this only happens with 4.1.32.Final-SNAPSHOT correct ?", "Correct, but I built the JAR myself instead of pulling it from maven central.", "@rkapsi I wonder what happens if you revert: 6563f23a9b72e0efa3b3ededd3bc2ee9911f7402 ?", "If this does not help my guess would be 44cca1a26f5c395420111fafa122bed5aeecfeb7 . Could you try both ?", "It's not 6563f23, trying the other change now.", "And it's also not 44cca1a. To summarize, reverting 6563f23 has no effect (error is present) and 44cca1a is good (no error, I had trouble reverting it and built from that commit instead).\r\n\r\n... going to bisect.\r\n\r\n", "My bet is on this one now https://github.com/netty/netty/commit/10539f4dc738c48e8d63c46b72ca32906d7f40ec", "Thanks and let me know once you find anything\n\n> Am 12.11.2018 um 18:33 schrieb Roger <notifications@github.com>:\n> \n> And it's also not 44cca1a. To summarize, reverting 6563f23 has no effect (error is present) and 44cca1a is good (no error).\n> \n> ... going to bisect that range.\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "There's also https://github.com/netty/netty/commit/28f9136824499d7bca318f4496339d82fb42c46a", "Bisect says...\r\n\r\n```bash\r\n10539f4dc738c48e8d63c46b72ca32906d7f40ec is the first bad commit\r\ncommit 10539f4dc738c48e8d63c46b72ca32906d7f40ec\r\nAuthor: Nick Hill <nickhill@us.ibm.com>\r\nDate: Sat Nov 3 02:37:07 2018 -0700\r\n\r\n Streamline CompositeByteBuf internals (#8437)\r\n \r\n Motivation:\r\n \r\n CompositeByteBuf is a powerful and versatile abstraction, allowing for\r\n manipulation of large data without copying bytes. There is still a\r\n non-negligible cost to reading/writing however relative to \"singular\"\r\n ByteBufs, and this can be mostly eliminated with some rework of the\r\n internals.\r\n \r\n My use case is message modification/transformation while zero-copy\r\n proxying. 
For example replacing a string within a large message with one\r\n of a different length\r\n \r\n Modifications:\r\n \r\n - No longer slice added buffers and unwrap added slices\r\n - Components store target buf offset relative to position in\r\n composite buf\r\n - Less allocations, object footprint, pointer indirection, offset\r\n arithmetic\r\n - Use Component[] rather than ArrayList<Component>\r\n - Avoid pointer indirection and duplicate bounds check, more\r\n efficient backing array growth\r\n - Facilitates optimization when doing bulk-inserts - inserting n\r\n ByteBufs behind m is now O(m + n) instead of O(mn)\r\n - Avoid unnecessary casting and method call indirection via superclass\r\n - Eliminate some duplicate range/ref checks via non-checking versions of\r\n toComponentIndex and findComponent\r\n - Add simple fast-path for toComponentIndex(0); add racy cache of\r\n last-accessed Component to findComponent(int)\r\n - Override forEachByte0(...) and forEachByteDesc0(...) methods\r\n - Make use of RecyclableArrayList in nioBuffers(int, int) (in line with\r\n FasterCompositeByteBuf impl)\r\n - Modify addComponents0(boolean,int,Iterable) to use the Iterable\r\n directly rather than copy to an array first (and possibly to an\r\n ArrayList before that)\r\n - Optimize addComponents0(boolean,int,ByteBuf[],int) to not perform\r\n repeated array insertions and avoid second loop for offset updates\r\n - Simplify other logic in various places, in particular the general\r\n pattern used where a sub-range is iterated over\r\n - Add benchmarks to demonstrate some improvements\r\n \r\n While refactoring I also came across a couple of clear bugs. They are\r\n fixed in these changes but I will open another PR with unit tests and\r\n fixes to the current version.\r\n \r\n Result:\r\n \r\n Much faster creation, manipulation, and access; many fewer allocations\r\n and smaller footprint. 
Benchmark results to follow.\r\n\r\n:040000 040000 1cb457424e3c5b6355c17161e9db1b98f38a524b bd4544c4c9901e5b2842afb0312606efa1c90dcb M\tbuffer\r\n:040000 040000 a82ea3094430c7080504cfb9a14edc93b9582615 85096540606c3ba74f60d9c25a5fd01f9ce908f4 M\tmicrobench\r\n```", "@rkapsi thanks will check... @njhill FYI", "Thanks @rkapsi @normanmaurer ... I'm looking now too, and apologies in advance for the likely introduced bug!", "Current hypothesis is that the problem is with the unwrapping optimization done specifically in the `PooledSlicedByteBuf` case. I think there may be ref-counting behaviour not properly preserved. I will try to do simple repro based on this and if it _is_ the problem then an easy worst-case fix is just to remove this `else if` block: https://github.com/netty/netty/blob/88e4817cefb4b4d092e4f6a12f0599c54646db18/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java#L319-L323", "@normanmaurer @rkapsi PTAL at #8497", "Thanks a lot!" ]
[ "I think this would be easier to read like this:\r\n\r\n```\r\nByteBuf buffer = s == null ? buf : s;\r\nbuffer.release();\r\n```", "Can you add a comment that using the PooledByteBufAllocator is important here ?", "same as above", "just use `composite.release()`", "just use `buffer.release()`", "@normanmaurer Actually I don't think this one would make a difference since it's only used if/when the composite buffer gets expanded right?", "@njhill ah yeah true... then fix it please to use Unpooled. ", "@njhill I am confused... why did you need to to have `s` and `toRelease` here ? Which not just do what I suggested ? \r\n\r\nWhich would be:\r\n\r\n```\r\nByteBuf buffer = slice == null ? buf : slice;\r\nbuffer.release();\r\n```", "@normanmaurer oh, sorry... I misinterpreted your original suggestion since it used `s`. Maybe I am being overly cautious, but just based on the fact that `slice` gets nulled after the component is released I thought it would be safer to do the check on a local var (I ack that it probably is only possible to hit an NPE problem if there is another bug or incorrect use though).\r\n\r\nHappy to change it to use `slice` directly if you think that's fine...", "yeah I think I would go for the easier code or would at least not write it so compact to make it easier to understand ", "done - changed to use if/else, hopefully this is the most readable of all :)" ]
"2018-11-12T23:08:37Z"
[]
Some new leak/reference counting busted in 4.1.32
I don't have much info yet and no local repro. I basically attempted to open up TLS 1.3 to a broader audience within our corp network and instantly started getting `IllegalReferenceCountException`s thrown and ERRORs from the `ResourceLeakDetector`. I don't think it's related to TLS 1.3 though.

1. I started out with your [ssl_cipher](https://github.com/netty/netty/pull/8485) branch. Broken.
2. I then took your branch and rebased it onto the latest 4.1. Still broken.
3. Disabled TLS 1.3. Still broken.

I think it's some change between the 4.1.31 release and the ssl_cipher branch's branch point.

```java
2018-11-12 16:11:17,328 16928 [ConnectorThread-7] ResourceLeakDetector ERROR: LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:331)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
    io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1919)
    io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1292)
    io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1211)
    io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1245)
    io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
    io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433)
    io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    java.lang.Thread.run(Thread.java:745)
```

```java
2018-11-12 15:48:37,180 54359 [HttpWorkerThread-4] ResourceLeakDetector ERROR: LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:331)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
    io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
    io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
    io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:77)
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:784)
    io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433)
    io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    java.lang.Thread.run(Thread.java:745)
```

```java
2018-11-12 15:48:35,677 54359 [HttpsWorkerThread-5] ReferenceCountUtil WARN : Failed to release a message: UnpooledSlicedByteBuf(freed)
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release0(AbstractReferenceCountedByteBuf.java:124) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:107) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractDerivedByteBuf.release0(AbstractDerivedByteBuf.java:89) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractDerivedByteBuf.release(AbstractDerivedByteBuf.java:85) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:88) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.util.ReferenceCountUtil.safeRelease(ReferenceCountUtil.java:113) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractCoalescingBufferQueue.releaseAndCompleteAll(AbstractCoalescingBufferQueue.java:338) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractCoalescingBufferQueue.releaseAndFailAll(AbstractCoalescingBufferQueue.java:207) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.releaseAndFailAll(SslHandler.java:1586) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1580) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1545) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:201) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.codec.http2.Http2ConnectionHandler.channelWritabilityChanged(Http2ConnectionHandler.java:440) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelWritabilityChanged(DefaultChannelPipeline.java:1457) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.DefaultChannelPipeline.fireChannelWritabilityChanged(DefaultChannelPipeline.java:977) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelOutboundBuffer.fireChannelWritabilityChanged(ChannelOutboundBuffer.java:607) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelOutboundBuffer.setWritable(ChannelOutboundBuffer.java:573) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelOutboundBuffer.decrementPendingOutboundBytes(ChannelOutboundBuffer.java:194) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:259) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelOutboundBuffer.removeBytes(ChannelOutboundBuffer.java:338) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytesMultiple(AbstractEpollStreamChannel.java:319) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteMultiple(AbstractEpollStreamChannel.java:522) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:525) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:423) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
```

```java
2018-11-12 15:48:40,119 54359 [HttpsWorkerThread-4] ReferenceCountUtil WARN : Failed to release a message: UnpooledSlicedByteBuf(freed)
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release0(AbstractReferenceCountedByteBuf.java:124) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:107) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.CompositeByteBuf$Component.freeIfNecessary(CompositeByteBuf.java:1818) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.CompositeByteBuf.deallocate(CompositeByteBuf.java:2124) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release0(AbstractReferenceCountedByteBuf.java:118) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:107) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractDerivedByteBuf.release0(AbstractDerivedByteBuf.java:89) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.buffer.AbstractDerivedByteBuf.release(AbstractDerivedByteBuf.java:85) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:88) ~[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.util.ReferenceCountUtil.safeRelease(ReferenceCountUtil.java:113) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractCoalescingBufferQueue.releaseAndCompleteAll(AbstractCoalescingBufferQueue.java:338) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractCoalescingBufferQueue.releaseAndFailAll(AbstractCoalescingBufferQueue.java:207) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.releaseAndFailAll(SslHandler.java:1586) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1580) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1545) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:201) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.handler.codec.http2.Http2ConnectionHandler.channelWritabilityChanged(Http2ConnectionHandler.java:440) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelWritabilityChanged(ChannelInboundHandlerAdapter.java:119) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.fireChannelWritabilityChanged(AbstractChannelHandlerContext.java:409) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelWritabilityChanged(DefaultChannelPipeline.java:1457) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:416) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.DefaultChannelPipeline.fireChannelWritabilityChanged(DefaultChannelPipeline.java:977) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundBuffer.fireChannelWritabilityChanged(ChannelOutboundBuffer.java:607) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundBuffer.setWritable(ChannelOutboundBuffer.java:573) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundBuffer.decrementPendingOutboundBytes(ChannelOutboundBuffer.java:194) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:259) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundBuffer.removeBytes(ChannelOutboundBuffer.java:338) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytesMultiple(AbstractEpollStreamChannel.java:319) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteMultiple(AbstractEpollStreamChannel.java:522) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:434) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:512) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:901) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1396) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.handler.ssl.SslHandler.forceFlush(SslHandler.java:1808) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:797) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:774) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:201) 
[netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:1013) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:248) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at com.squarespace.echo.http2.streams.HttpToHttp2Handler.flush(HttpToHttp2Handler.java:92) [echo-http-2.37-20181112-rkapsi_netty-4.1.31-tcn-2.0.19-tls-1.3-30.jar:?] 
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at com.squarespace.echo.core.handler.scope.ScopeHandler.flush(ScopeHandler.java:321) [echo-core-2.37-20181112-rkapsi_netty-4.1.31-tcn-2.0.19-tls-1.3-30.jar:?] 
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at com.squarespace.echo.core.handler.scope.ScopeHandler.flush(ScopeHandler.java:321) [echo-core-2.37-20181112-rkapsi_netty-4.1.31-tcn-2.0.19-tls-1.3-30.jar:?] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT] at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.access$1500(AbstractChannelHandlerContext.java:38) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext$16.run(AbstractChannelHandlerContext.java:756) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:335) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.32.Final-SQSP-20181112-1.jar:4.1.32.Final-SNAPSHOT]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
```
... and many many more.

### Expected behavior

### Actual behavior

### Steps to reproduce

### Minimal yet complete reproducer code (or URL to code)

### Netty version

### JVM version (e.g. `java -version`)

### OS version (e.g. `uname -a`)
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java" ]
[ "buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java index 77ba13fcaec..c9212d5c6e7 100644 --- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java @@ -513,11 +513,11 @@ private void updateComponentOffsets(int cIndex) { public CompositeByteBuf removeComponent(int cIndex) { checkComponentIndex(cIndex); Component comp = components[cIndex]; - removeComp(cIndex); if (lastAccessed == comp) { lastAccessed = null; } comp.freeIfNecessary(); + removeComp(cIndex); if (comp.length() > 0) { // Only need to call updateComponentOffsets if the length was > 0 updateComponentOffsets(cIndex); @@ -1815,7 +1815,15 @@ ByteBuf duplicate() { } void freeIfNecessary() { - buf.release(); // We should not get a NPE here. If so, it must be a bug. + // Release the slice if present since it may have a different + // refcount to the unwrapped buf if it is a PooledSlicedByteBuf + ByteBuf buffer = slice; + if (buffer != null) { + buffer.release(); + } else { + buf.release(); + } + // null out in either case since it could be racy slice = null; } } @@ -2181,7 +2189,6 @@ private void removeCompRange(int from, int to) { } int newSize = size - to + from; for (int i = newSize; i < size; i++) { - components[i].slice = null; components[i] = null; } componentCount = newSize;
diff --git a/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java b/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java index 363b25d8c74..8e0f148cdf0 100644 --- a/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java +++ b/buffer/src/test/java/io/netty/buffer/AbstractCompositeByteBufTest.java @@ -1144,6 +1144,40 @@ public void testReleasesItsComponents() { assertEquals(0, buffer.refCnt()); } + @Test + public void testReleasesItsComponents2() { + // It is important to use a pooled allocator here to ensure + // the slices returned by readRetainedSlice are of type + // PooledSlicedByteBuf, which maintains an independent refcount + // (so that we can be sure to cover this case) + ByteBuf buffer = PooledByteBufAllocator.DEFAULT.buffer(); // 1 + + buffer.writeBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}); + + // use readRetainedSlice this time - produces different kind of slices + ByteBuf s1 = buffer.readRetainedSlice(2); // 2 + ByteBuf s2 = s1.readRetainedSlice(2); // 3 + ByteBuf s3 = s2.readRetainedSlice(2); // 4 + ByteBuf s4 = s3.readRetainedSlice(2); // 5 + + ByteBuf composite = Unpooled.compositeBuffer() + .addComponent(s1) + .addComponents(s2, s3, s4) + .order(ByteOrder.LITTLE_ENDIAN); + + assertEquals(1, composite.refCnt()); + assertEquals(2, buffer.refCnt()); + + // releasing composite should release the 4 components + composite.release(); + assertEquals(0, composite.refCnt()); + assertEquals(1, buffer.refCnt()); + + // last remaining ref to buffer + buffer.release(); + assertEquals(0, buffer.refCnt()); + } + @Test public void testReleasesOnShrink() {
val
val
"2018-11-13T19:22:38"
"2018-11-12T16:26:57Z"
rkapsi
val
netty/netty/8501_8502
netty/netty
netty/netty/8501
netty/netty/8502
[ "timestamp(timedelta=3652.0, similarity=0.8621364628325043)" ]
88e4817cefb4b4d092e4f6a12f0599c54646db18
9d5f8035f269325ebdb87777d4ebf75c2d50ce2f
[ "This is not actually a bug, because the spec states that the method returns `null` if the channel is not connected:\r\n\r\n> {@code null} if this channel is not connected.\r\n\r\nBut could be an improvement because it is useful to have this information in some error handlers.", "Agree. If `channelInactive` cannot see remote address, how could it reconnect?" ]
[]
"2018-11-13T07:29:17Z"
[]
In case of connection issue channel.remoteAddress() returns null
### Expected behavior
It should return the requested address, which is correctly passed into the `AnnotatedConnectException`:
`protected final Throwable annotateConnectException(Throwable cause, SocketAddress remoteAddress) { ... }`

### Actual behavior
It returns `null` when trying to get it from the channel on receiving an exception.

### Steps to reproduce

### Minimal yet complete reproducer code (or URL to code)

### Netty version
4.x.x

### JVM version (e.g. `java -version`)

### OS version (e.g. `uname -a`)
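The merged fix for this record overrides `remoteAddress()` in `AbstractNioChannel` so that it falls back to the requested remote address when the channel never connected. The following is an illustrative, self-contained sketch of that fallback idea only — `FakeChannel` is a hypothetical stand-in, not Netty's actual channel hierarchy:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;

// Illustrative stand-in for a channel: remoteAddress() falls back to the
// address connect() was asked to reach when the connection never completed.
public class RemoteAddressFallback {
    static class FakeChannel {
        SocketAddress connected;               // stays null if the connect fails
        SocketAddress requestedRemoteAddress;  // recorded when connect() is called

        void connect(SocketAddress remote) {
            requestedRemoteAddress = remote;
            // pretend the connect attempt failed, so 'connected' stays null
        }

        SocketAddress remoteAddress() {
            // Mirrors the fix: prefer the live peer address, otherwise
            // report the address that was requested.
            return connected != null ? connected : requestedRemoteAddress;
        }
    }

    public static void main(String[] args) {
        FakeChannel ch = new FakeChannel();
        // createUnresolved avoids any DNS lookup in this demo
        ch.connect(InetSocketAddress.createUnresolved("example.com", 443));
        System.out.println("remote = " + ch.remoteAddress());
    }
}
```

With this fallback an error handler that only has the `Channel` can still log which address the failed connect was aimed at.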
[ "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java" ]
[ "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java index 01467b98361..74f798e7aca 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java @@ -515,4 +515,13 @@ protected void doClose() throws Exception { connectTimeoutFuture = null; } } + + @Override + public SocketAddress remoteAddress() { + final SocketAddress remoteAddress = super.remoteAddress(); + if (remoteAddress == null) { + return requestedRemoteAddress; + } + return remoteAddress; + } }
null
train
val
"2018-11-12T20:59:44"
"2018-11-13T07:24:17Z"
Andremoniy
val
netty/netty/8504_8505
netty/netty
netty/netty/8504
netty/netty/8505
[ "timestamp(timedelta=357.0, similarity=0.907145332590514)" ]
88e4817cefb4b4d092e4f6a12f0599c54646db18
d42fee2b85d051e2dfd75539ee452cd1bcd04fd7
[]
[]
"2018-11-13T11:07:38Z"
[]
Expose requestedRemoteAddress in the Channel interface (improvement)
### Expected behavior
Most of the abstract `Channel` implementations contain a `requestedRemoteAddress` object variable, which might be useful for debugging purposes. When a channel fails to connect, in certain circumstances we lose information about which address it was requested to connect to. The `Channel` interface should expose a `requestedRemoteAddress()` getter method.

### Actual behavior
`requestedRemoteAddress` is kept private in the abstract implementations and is not accessible through the `Channel` interface.

### Steps to reproduce
n/a

### Minimal yet complete reproducer code (or URL to code)

### Netty version

### JVM version (e.g. `java -version`)

### OS version (e.g. `uname -a`)
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java", "transport/src/main/java/io/netty/channel/AbstractChannel.java", "transport/src/main/java/io/netty/channel/Channel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java", "transport/src/main/java/io/netty/channel/AbstractChannel.java", "transport/src/main/java/io/netty/channel/Channel.java", "transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java" ]
[]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java index a7ca0bb8311..dabd0c1970a 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2MultiplexCodec.java @@ -460,6 +460,8 @@ private final class DefaultHttp2StreamChannel extends DefaultAttributeMap implem // We start with the writability of the channel when creating the StreamChannel. private volatile boolean writable; + private volatile SocketAddress requestedRemoteAddress; + private boolean outboundClosed; /** * This variable represents if a read is in progress for the current channel. Note that depending upon the @@ -567,6 +569,11 @@ public SocketAddress remoteAddress() { return parent().remoteAddress(); } + @Override + public SocketAddress requestedRemoteAddress() { + return requestedRemoteAddress; + } + @Override public ChannelFuture closeFuture() { return closePromise; @@ -618,11 +625,13 @@ public ChannelFuture bind(SocketAddress localAddress) { @Override public ChannelFuture connect(SocketAddress remoteAddress) { + this.requestedRemoteAddress = remoteAddress; return pipeline().connect(remoteAddress); } @Override public ChannelFuture connect(SocketAddress remoteAddress, SocketAddress localAddress) { + this.requestedRemoteAddress = remoteAddress; return pipeline().connect(remoteAddress, localAddress); } diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java index 25ae95b2d4d..0e4ad67394d 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java @@ -67,7 +67,6 @@ abstract class AbstractEpollChannel extends AbstractChannel 
implements UnixChann */ private ChannelPromise connectPromise; private ScheduledFuture<?> connectTimeoutFuture; - private SocketAddress requestedRemoteAddress; private volatile SocketAddress local; private volatile SocketAddress remote; @@ -556,7 +555,6 @@ public void connect( fulfillConnectPromise(promise, wasActive); } else { connectPromise = promise; - requestedRemoteAddress = remoteAddress; // Schedule connect timeout. int connectTimeoutMillis = config().getConnectTimeoutMillis(); @@ -645,7 +643,7 @@ private void finishConnect() { } fulfillConnectPromise(connectPromise, wasActive); } catch (Throwable t) { - fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress)); + fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress())); } finally { if (!connectStillInProgress) { // Check for null as the connectTimeoutFuture is only created if a connectTimeoutMillis > 0 is used @@ -664,10 +662,11 @@ private void finishConnect() { private boolean doFinishConnect() throws Exception { if (socket.finishConnect()) { clearFlag(Native.EPOLLOUT); + SocketAddress requestedRemoteAddress = requestedRemoteAddress(); if (requestedRemoteAddress instanceof InetSocketAddress) { remote = computeRemoteAddr((InetSocketAddress) requestedRemoteAddress, socket.remoteAddress()); } - requestedRemoteAddress = null; + invalidateRequestedRemoteAddress(); return true; } diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java index 6073fa9ca12..87c48142c0e 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueChannel.java @@ -62,7 +62,6 @@ abstract class AbstractKQueueChannel extends AbstractChannel implements UnixChan */ private ChannelPromise connectPromise; private 
ScheduledFuture<?> connectTimeoutFuture; - private SocketAddress requestedRemoteAddress; final BsdSocket socket; private boolean readFilterEnabled; @@ -562,7 +561,6 @@ public void connect( fulfillConnectPromise(promise, wasActive); } else { connectPromise = promise; - requestedRemoteAddress = remoteAddress; // Schedule connect timeout. int connectTimeoutMillis = config().getConnectTimeoutMillis(); @@ -651,7 +649,7 @@ private void finishConnect() { } fulfillConnectPromise(connectPromise, wasActive); } catch (Throwable t) { - fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress)); + fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress())); } finally { if (!connectStillInProgress) { // Check for null as the connectTimeoutFuture is only created if a connectTimeoutMillis > 0 is used @@ -667,10 +665,11 @@ private void finishConnect() { private boolean doFinishConnect() throws Exception { if (socket.finishConnect()) { writeFilter(false); + SocketAddress requestedRemoteAddress = requestedRemoteAddress(); if (requestedRemoteAddress instanceof InetSocketAddress) { remote = computeRemoteAddr((InetSocketAddress) requestedRemoteAddress, socket.remoteAddress()); } - requestedRemoteAddress = null; + invalidateRequestedRemoteAddress(); return true; } writeFilter(true); diff --git a/transport/src/main/java/io/netty/channel/AbstractChannel.java b/transport/src/main/java/io/netty/channel/AbstractChannel.java index ad671423df9..3b16f8335e8 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannel.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannel.java @@ -64,6 +64,7 @@ public abstract class AbstractChannel extends DefaultAttributeMap implements Cha private volatile SocketAddress localAddress; private volatile SocketAddress remoteAddress; + private volatile SocketAddress requestedRemoteAddress; private volatile EventLoop eventLoop; private volatile boolean registered; private boolean 
closeInitiated; @@ -220,11 +221,13 @@ public ChannelFuture bind(SocketAddress localAddress) { @Override public ChannelFuture connect(SocketAddress remoteAddress) { + this.requestedRemoteAddress = remoteAddress; return pipeline.connect(remoteAddress); } @Override public ChannelFuture connect(SocketAddress remoteAddress, SocketAddress localAddress) { + this.requestedRemoteAddress = remoteAddress; return pipeline.connect(remoteAddress, localAddress); } @@ -1196,4 +1199,14 @@ public Throwable fillInStackTrace() { return this; } } + + protected void invalidateRequestedRemoteAddress() { + requestedRemoteAddress = null; + } + + @Override + public SocketAddress requestedRemoteAddress() { + return requestedRemoteAddress; + } + } diff --git a/transport/src/main/java/io/netty/channel/Channel.java b/transport/src/main/java/io/netty/channel/Channel.java index 6acb59d3b31..4890087cd83 100644 --- a/transport/src/main/java/io/netty/channel/Channel.java +++ b/transport/src/main/java/io/netty/channel/Channel.java @@ -146,6 +146,17 @@ public interface Channel extends AttributeMap, ChannelOutboundInvoker, Comparabl */ SocketAddress remoteAddress(); + /** + * Returns the remote address where this channel has been requested connected to. The + * returned {@link SocketAddress} is supposed to be down-cast into more + * concrete type such as {@link InetSocketAddress} to retrieve the detailed + * information. + * + * @return the requested remote address of this channel. + * {@code null} if this channel has not been requested to connect yet. + */ + SocketAddress requestedRemoteAddress(); + /** * Returns the {@link ChannelFuture} which will be notified when this * channel is closed. This method always returns the same future instance. 
diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java index 01467b98361..71185d9ed42 100644 --- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java +++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java @@ -71,7 +71,6 @@ public void run() { */ private ChannelPromise connectPromise; private ScheduledFuture<?> connectTimeoutFuture; - private SocketAddress requestedRemoteAddress; /** * Create a new instance @@ -255,7 +254,6 @@ public final void connect( fulfillConnectPromise(promise, wasActive); } else { connectPromise = promise; - requestedRemoteAddress = remoteAddress; // Schedule connect timeout. int connectTimeoutMillis = config().getConnectTimeoutMillis(); @@ -340,7 +338,7 @@ public final void finishConnect() { doFinishConnect(); fulfillConnectPromise(connectPromise, wasActive); } catch (Throwable t) { - fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress)); + fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress())); } finally { // Check for null as the connectTimeoutFuture is only created if a connectTimeoutMillis > 0 is used // See https://github.com/netty/netty/issues/1770
null
val
val
"2018-11-12T20:59:44"
"2018-11-13T10:13:16Z"
Andremoniy
val
netty/netty/8566_8569
netty/netty
netty/netty/8566
netty/netty/8569
[ "timestamp(timedelta=0.0, similarity=0.8615711780694038)" ]
63dc1f5aaac474f0bac60db861bfd6089d7fb688
278b49b2a791968c6b80ed0995ef25771b3fd654
[ "@qinliujie Please check https://github.com/netty/netty/pull/8569", "@normanmaurer ok, thx~" ]
[ "fix indenting?" ]
"2018-11-16T16:58:11Z"
[]
NioEventLoop selector crash and can't recover
### Expected behavior
```
WARN [HSF-Worker-2-thread-5:t.h.remoting] [] [] [] Unexpected exception in the selector loop.
java.io.IOException: Broken pipe
	at sun.nio.ch.EPollArrayWrapper.interrupt(Native Method)
	at sun.nio.ch.EPollArrayWrapper.interrupt(EPollArrayWrapper.java:323)
	at sun.nio.ch.EPollSelectorImpl.wakeup(EPollSelectorImpl.java:207)
	at io.netty.channel.nio.SelectedSelectionKeySetSelector.wakeup(SelectedSelectionKeySetSelector.java:73)
	at io.netty.channel.nio.NioEventLoop.selectNow(NioEventLoop.java:722)
	at io.netty.channel.nio.NioEventLoop$1.get(NioEventLoop.java:71)
	at io.netty.channel.DefaultSelectStrategy.calculateStrategy(DefaultSelectStrategy.java:30)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:405)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at com.taobao.hsf.io.netty.util.PooledThreadFactory$PooledByteBufRunnable.run(PooledThreadFactory.java:37)
	at java.lang.Thread.run(Thread.java:882)
```
NioEventLoop should recover from the exception or rebuild a new selector.

### Actual behavior
NioEventLoop dies completely.

### Steps to reproduce
Can't reproduce this case, but we can mock it; see the code below:
```
protected void run() {
    for (;;) {
        try {
            switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
            case SelectStrategy.CONTINUE:
                continue;
            case SelectStrategy.SELECT:
                .....
        } catch (Throwable t) {
            handleLoopException(t);
        }
```
Notice that `selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())` can throw an exception on every loop iteration. Now look at `selectNowSupplier`:
```
private final IntSupplier selectNowSupplier = new IntSupplier() {
    @Override
    public int get() throws Exception {
        return selectNow();
    }
};
```
And the `Selector` API documentation says:
```
/**
 * Selects a set of keys whose corresponding channels are ready for I/O
 * operations.
 *
 * <p> This method performs a non-blocking <a href="#selop">selection
 * operation</a>. If no channels have become selectable since the previous
 * selection operation then this method immediately returns zero.
 *
 * <p> Invoking this method clears the effect of any previous invocations
 * of the {@link #wakeup wakeup} method. </p>
 *
 * @return The number of keys, possibly zero, whose ready-operation sets
 *         were updated by the selection operation
 *
 * @throws IOException
 *         If an I/O error occurs
 *
 * @throws ClosedSelectorException
 *         If this selector is closed
 */
public abstract int selectNow() throws IOException;
```
`java.nio.channels.Selector#selectNow` may throw an exception, but we don't handle it!

### Minimal yet complete reproducer code (or URL to code)

### Netty version
4.1.16.Final

### JVM version (e.g. `java -version`)
1.8

### OS version (e.g. `uname -a`)
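The direction of the merged fix for this record is to catch the `IOException` inside the event loop's `run()` method, rebuild the selector, and continue the loop instead of dying. Below is a minimal, self-contained simulation of just that control flow — `SelectorRebuildSketch` is hypothetical illustration code, not Netty's `NioEventLoop`:

```java
import java.io.IOException;

// Simulates the fixed control flow: an IOException from the select strategy
// triggers a "rebuild" (stand-in for rebuildSelector0()) and the loop goes on.
public class SelectorRebuildSketch {
    interface SelectStrategy {
        int calculate() throws IOException;
    }

    static int rebuilds; // counts how often we had to "rebuild the selector"

    static int runLoop(SelectStrategy strategy, int iterations) {
        int completed = 0;
        for (int i = 0; i < iterations; i++) {
            try {
                strategy.calculate();   // may throw, e.g. "Broken pipe"
                completed++;
            } catch (IOException e) {
                rebuilds++;             // rebuild and retry instead of dying
            }
        }
        return completed;
    }

    public static void main(String[] args) {
        // A strategy that fails once and then recovers, like a broken
        // Selector that gets replaced by a working one.
        final boolean[] thrown = {false};
        int ok = runLoop(() -> {
            if (!thrown[0]) {
                thrown[0] = true;
                throw new IOException("Broken pipe");
            }
            return -1;
        }, 3);
        System.out.println("completed=" + ok + " rebuilds=" + rebuilds);
    }
}
```

The key point the sketch demonstrates is that the `catch` sits inside the `for (;;)` body, so a single bad `selectNow()` call costs one iteration rather than the whole event loop.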
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index 187e1ec1a0b..f1aed11bc46 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -401,7 +401,8 @@ private void rebuildSelector0() { protected void run() { for (;;) { try { - switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) { + try { + switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) { case SelectStrategy.CONTINUE: continue; @@ -444,6 +445,13 @@ protected void run() { } // fall through default: + } + } catch (IOException e) { + // If we receive an IOException here its because the Selector is messed up. Let's rebuild + // the selector and retry. https://github.com/netty/netty/issues/8566 + rebuildSelector0(); + handleLoopException(e); + continue; } cancelledKeys = 0;
diff --git a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java index 15fcb212442..8b176bc71c0 100644 --- a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java +++ b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java @@ -19,16 +19,22 @@ import io.netty.channel.Channel; import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.SelectStrategy; +import io.netty.channel.SelectStrategyFactory; import io.netty.channel.socket.ServerSocketChannel; import io.netty.channel.socket.nio.NioServerSocketChannel; +import io.netty.util.IntSupplier; +import io.netty.util.concurrent.DefaultThreadFactory; import io.netty.util.concurrent.Future; import org.hamcrest.core.IsInstanceOf; import org.junit.Test; +import java.io.IOException; import java.net.InetSocketAddress; import java.nio.channels.SelectionKey; import java.nio.channels.Selector; import java.nio.channels.SocketChannel; +import java.nio.channels.spi.SelectorProvider; import java.util.concurrent.CountDownLatch; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.TimeUnit; @@ -211,4 +217,45 @@ public void run() { } } + @Test + public void testRebuildSelectorOnIOException() { + SelectStrategyFactory selectStrategyFactory = new SelectStrategyFactory() { + @Override + public SelectStrategy newSelectStrategy() { + return new SelectStrategy() { + + private boolean thrown; + + @Override + public int calculateStrategy(IntSupplier selectSupplier, boolean hasTasks) throws Exception { + if (!thrown) { + thrown = true; + throw new IOException(); + } + return -1; + } + }; + } + }; + + EventLoopGroup group = new NioEventLoopGroup(1, new DefaultThreadFactory("ioPool"), + SelectorProvider.provider(), selectStrategyFactory); + final NioEventLoop loop = (NioEventLoop) group.next(); + try { + Channel channel = new NioServerSocketChannel(); + Selector 
selector = loop.unwrappedSelector(); + + loop.register(channel).syncUninterruptibly(); + + Selector newSelector = ((NioEventLoop) channel.eventLoop()).unwrappedSelector(); + assertTrue(newSelector.isOpen()); + assertNotSame(selector, newSelector); + assertFalse(selector.isOpen()); + + channel.close().syncUninterruptibly(); + } finally { + group.shutdownGracefully(); + } + } + }
train
val
"2018-11-16T17:22:03"
"2018-11-16T12:56:19Z"
qinliujie
val
netty/netty/8483_8595
netty/netty
netty/netty/8483
netty/netty/8595
[ "keyword_issue_to_pr" ]
af636267772da6dcde43618b20c5e1be8e418d5f
f4e4147df85b4684cbc8800a8be5ff0fe8bfc58e
[ "Should be fixed by #8595" ]
[ "Presumably there's no need to also pass the args now? If so it would be better to pass null for the last arg to avoid varargs array allocation.", "from other implementations I think you still should pass the args. ", "I still don't follow (sorry I'm probably missing something) ... the additional args are only passed for the purpose of later substitution into the message, but now the substitution is done upfront. So the message passed won't have `{}` corresponding to the args..?", "@njhill when looking at other implementation they all forwarded the \"args\", so I did the same as I thought maybe they are used somehow later on. I think at worse you have one more alloc but not passing these down may produce other side-effects depending on the underlying implementations.\r\n\r\nThat said maybe you are right and we should just not do it. \r\n\r\n\r\n", "@njhill PTAL again... I think this addresses your concern and is also cleaner :)", "@normanmaurer LGTM, I was about to suggest something similar :)", "Not part of this PR but I think methods like this one don't need the `if (isXxxEnabled())` right?" ]
"2018-11-26T12:55:22Z"
[ "defect" ]
some problems with netty debug log
### Expected behavior
Netty can print the right debug log with its parameters.

### Actual behavior
It cannot print its parameters:

[DEBUG][2018-11-08 16:26:58,872][io.netty.util.internal.logging.InternalLoggerFactory]Using SLF4J as the default logging framework
[DEBUG][2018-11-08 16:26:58,882][io.netty.channel.MultithreadEventLoopGroup]-Dio.netty.eventLoopThreads: {}
[DEBUG][2018-11-08 16:26:58,913][io.netty.channel.nio.NioEventLoop]-Dio.netty.noKeySetOptimization: {}
[DEBUG][2018-11-08 16:26:58,913][io.netty.channel.nio.NioEventLoop]-Dio.netty.selectorAutoRebuildThreshold: {}

### Steps to reproduce
The problem happened when using slf4j-log4j12 for the logger. When I just use log4j it is OK. How to solve?

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
</dependency>

### Minimal yet complete reproducer code (or URL to code)

### Netty version
4.1.31

### JVM version (e.g. `java -version`)
1.8

### OS version (e.g. `uname -a`)
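Per the merged fix for this record, `LocationAwareSlf4JLogger` was handing the raw `{}` pattern plus args to a backend path that never substituted them; the fix formats the message upfront with `org.slf4j.helpers.MessageFormatter` before calling `LocationAwareLogger.log(...)`. The tiny helper below is a hypothetical stand-in for that substitution (not the real slf4j class), just to show what formatting upfront produces compared to the literal `{}` output above:

```java
// Minimal stand-in for MessageFormatter-style {} substitution.
public class FormatUpfront {
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int arg = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || arg >= args.length) {
                // no placeholder left (or no argument left): copy the rest verbatim
                out.append(pattern, i, pattern.length());
                break;
            }
            out.append(pattern, i, j).append(args[arg++]);
            i = j + 2; // skip past the "{}" we just replaced
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String pattern = "-Dio.netty.eventLoopThreads: {}";
        // A backend that does no substitution prints the pattern literally:
        System.out.println(pattern);
        // Formatting before handing the message off prints the real value:
        System.out.println(format(pattern, 16));
    }
}
```

This is why the bug only showed with slf4j-log4j12: that binding goes through the location-aware path, which (before the fix) relied on the backend to do a substitution it never performed.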
[ "common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java" ]
[ "common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java" ]
[ "common/src/test/java/io/netty/util/internal/logging/Slf4JLoggerFactoryTest.java" ]
diff --git a/common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java b/common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java index 13073b02b41..33eb705a8f0 100644 --- a/common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java +++ b/common/src/main/java/io/netty/util/internal/logging/LocationAwareSlf4JLogger.java @@ -28,7 +28,7 @@ final class LocationAwareSlf4JLogger extends AbstractInternalLogger { // IMPORTANT: All our log methods first check if the log level is enabled before call the wrapped // LocationAwareLogger.log(...) method. This is done to reduce GC creation that is caused by varargs. - private static final String FQCN = LocationAwareSlf4JLogger.class.getName(); + static final String FQCN = LocationAwareSlf4JLogger.class.getName(); private static final long serialVersionUID = -8292030083201538180L; private final transient LocationAwareLogger logger; @@ -38,12 +38,16 @@ final class LocationAwareSlf4JLogger extends AbstractInternalLogger { this.logger = logger; } - private void log(final int level, final String message, final Object... params) { - logger.log(null, FQCN, level, message, params, null); + private void log(final int level, final String message) { + logger.log(null, FQCN, level, message, null, null); } - private void log(final int level, final String message, Throwable throwable, final Object... 
params) { - logger.log(null, FQCN, level, message, params, throwable); + private void log(final int level, final String message, Throwable cause) { + logger.log(null, FQCN, level, message, null, cause); + } + + private void log(final int level, final org.slf4j.helpers.FormattingTuple tuple) { + logger.log(null, FQCN, level, tuple.getMessage(), tuple.getArgArray(), tuple.getThrowable()); } @Override @@ -54,28 +58,28 @@ public boolean isTraceEnabled() { @Override public void trace(String msg) { if (isTraceEnabled()) { - log(TRACE_INT, msg, null); + log(TRACE_INT, msg); } } @Override public void trace(String format, Object arg) { if (isTraceEnabled()) { - log(TRACE_INT, format, arg); + log(TRACE_INT, org.slf4j.helpers.MessageFormatter.format(format, arg)); } } @Override public void trace(String format, Object argA, Object argB) { if (isTraceEnabled()) { - log(TRACE_INT, format, argA, argB); + log(TRACE_INT, org.slf4j.helpers.MessageFormatter.format(format, argA, argB)); } } @Override public void trace(String format, Object... argArray) { if (isTraceEnabled()) { - log(TRACE_INT, format, argArray); + log(TRACE_INT, org.slf4j.helpers.MessageFormatter.format(format, argArray)); } } @@ -101,21 +105,21 @@ public void debug(String msg) { @Override public void debug(String format, Object arg) { if (isDebugEnabled()) { - log(DEBUG_INT, format, arg); + log(DEBUG_INT, org.slf4j.helpers.MessageFormatter.format(format, arg)); } } @Override public void debug(String format, Object argA, Object argB) { if (isDebugEnabled()) { - log(DEBUG_INT, format, argA, argB); + log(DEBUG_INT, org.slf4j.helpers.MessageFormatter.format(format, argA, argB)); } } @Override public void debug(String format, Object... 
argArray) { if (isDebugEnabled()) { - log(DEBUG_INT, format, argArray); + log(DEBUG_INT, org.slf4j.helpers.MessageFormatter.format(format, argArray)); } } @@ -141,21 +145,21 @@ public void info(String msg) { @Override public void info(String format, Object arg) { if (isInfoEnabled()) { - log(INFO_INT, format, arg); + log(INFO_INT, org.slf4j.helpers.MessageFormatter.format(format, arg)); } } @Override public void info(String format, Object argA, Object argB) { if (isInfoEnabled()) { - log(INFO_INT, format, argA, argB); + log(INFO_INT, org.slf4j.helpers.MessageFormatter.format(format, argA, argB)); } } @Override public void info(String format, Object... argArray) { if (isInfoEnabled()) { - log(INFO_INT, format, argArray); + log(INFO_INT, org.slf4j.helpers.MessageFormatter.format(format, argArray)); } } @@ -181,21 +185,21 @@ public void warn(String msg) { @Override public void warn(String format, Object arg) { if (isWarnEnabled()) { - log(WARN_INT, format, arg); + log(WARN_INT, org.slf4j.helpers.MessageFormatter.format(format, arg)); } } @Override public void warn(String format, Object... argArray) { if (isWarnEnabled()) { - log(WARN_INT, format, argArray); + log(WARN_INT, org.slf4j.helpers.MessageFormatter.format(format, argArray)); } } @Override public void warn(String format, Object argA, Object argB) { if (isWarnEnabled()) { - log(WARN_INT, format, argA, argB); + log(WARN_INT, org.slf4j.helpers.MessageFormatter.format(format, argA, argB)); } } @@ -221,21 +225,21 @@ public void error(String msg) { @Override public void error(String format, Object arg) { if (isErrorEnabled()) { - log(ERROR_INT, format, arg); + log(ERROR_INT, org.slf4j.helpers.MessageFormatter.format(format, arg)); } } @Override public void error(String format, Object argA, Object argB) { if (isErrorEnabled()) { - log(ERROR_INT, format, argA, argB); + log(ERROR_INT, org.slf4j.helpers.MessageFormatter.format(format, argA, argB)); } } @Override public void error(String format, Object... 
argArray) { if (isErrorEnabled()) { - log(ERROR_INT, format, argArray); + log(ERROR_INT, org.slf4j.helpers.MessageFormatter.format(format, argArray)); } }
diff --git a/common/src/test/java/io/netty/util/internal/logging/Slf4JLoggerFactoryTest.java b/common/src/test/java/io/netty/util/internal/logging/Slf4JLoggerFactoryTest.java index 6b3cd850158..8cb2f84f7a9 100644 --- a/common/src/test/java/io/netty/util/internal/logging/Slf4JLoggerFactoryTest.java +++ b/common/src/test/java/io/netty/util/internal/logging/Slf4JLoggerFactoryTest.java @@ -16,13 +16,19 @@ package io.netty.util.internal.logging; import org.junit.Test; +import org.mockito.ArgumentCaptor; +import org.mockito.ArgumentMatchers; import org.slf4j.Logger; +import org.slf4j.Marker; import org.slf4j.spi.LocationAwareLogger; +import java.util.Iterator; + import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; +import static org.mockito.ArgumentMatchers.*; +import static org.mockito.Mockito.*; public class Slf4JLoggerFactoryTest { @@ -50,4 +56,61 @@ public void testCreationLocationAwareLogger() { assertTrue(internalLogger instanceof LocationAwareSlf4JLogger); assertEquals("testlogger", internalLogger.name()); } + + @Test + public void testFormatMessage() { + ArgumentCaptor<String> captor = ArgumentCaptor.forClass(String.class); + LocationAwareLogger logger = mock(LocationAwareLogger.class); + when(logger.isDebugEnabled()).thenReturn(true); + when(logger.isErrorEnabled()).thenReturn(true); + when(logger.isInfoEnabled()).thenReturn(true); + when(logger.isTraceEnabled()).thenReturn(true); + when(logger.isWarnEnabled()).thenReturn(true); + when(logger.getName()).thenReturn("testlogger"); + + InternalLogger internalLogger = Slf4JLoggerFactory.wrapLogger(logger); + internalLogger.debug("{}", "debug"); + internalLogger.debug("{} {}", "debug1", "debug2"); + + internalLogger.error("{}", "error"); + internalLogger.error("{} {}", "error1", "error2"); + + internalLogger.info("{}", "info"); + internalLogger.info("{} {}", 
"info1", "info2"); + + internalLogger.trace("{}", "trace"); + internalLogger.trace("{} {}", "trace1", "trace2"); + + internalLogger.warn("{}", "warn"); + internalLogger.warn("{} {}", "warn1", "warn2"); + + verify(logger, times(2)).log(ArgumentMatchers.<Marker>isNull(), eq(LocationAwareSlf4JLogger.FQCN), + eq(LocationAwareLogger.DEBUG_INT), captor.capture(), any(Object[].class), + ArgumentMatchers.<Throwable>isNull()); + verify(logger, times(2)).log(ArgumentMatchers.<Marker>isNull(), eq(LocationAwareSlf4JLogger.FQCN), + eq(LocationAwareLogger.ERROR_INT), captor.capture(), any(Object[].class), + ArgumentMatchers.<Throwable>isNull()); + verify(logger, times(2)).log(ArgumentMatchers.<Marker>isNull(), eq(LocationAwareSlf4JLogger.FQCN), + eq(LocationAwareLogger.INFO_INT), captor.capture(), any(Object[].class), + ArgumentMatchers.<Throwable>isNull()); + verify(logger, times(2)).log(ArgumentMatchers.<Marker>isNull(), eq(LocationAwareSlf4JLogger.FQCN), + eq(LocationAwareLogger.TRACE_INT), captor.capture(), any(Object[].class), + ArgumentMatchers.<Throwable>isNull()); + verify(logger, times(2)).log(ArgumentMatchers.<Marker>isNull(), eq(LocationAwareSlf4JLogger.FQCN), + eq(LocationAwareLogger.WARN_INT), captor.capture(), any(Object[].class), + ArgumentMatchers.<Throwable>isNull()); + + Iterator<String> logMessages = captor.getAllValues().iterator(); + assertEquals("debug", logMessages.next()); + assertEquals("debug1 debug2", logMessages.next()); + assertEquals("error", logMessages.next()); + assertEquals("error1 error2", logMessages.next()); + assertEquals("info", logMessages.next()); + assertEquals("info1 info2", logMessages.next()); + assertEquals("trace", logMessages.next()); + assertEquals("trace1 trace2", logMessages.next()); + assertEquals("warn", logMessages.next()); + assertEquals("warn1 warn2", logMessages.next()); + assertFalse(logMessages.hasNext()); + } }
val
val
"2018-11-25T21:46:14"
"2018-11-09T01:56:08Z"
strongmanwj
val
netty/netty/6473_8611
netty/netty
netty/netty/6473
netty/netty/8611
[ "keyword_pr_to_issue" ]
a0c3081d8264b3e16d98a87a90250f26c6d9ed53
d05666ae2d2068da7ee031a8bfc1ca572dbcc3f8
[ "thanks for reporting @ddossot !", "@ddossot <3" ]
[ "@madgnome can we please keep the original constructor as well ?", "please store this as non-mutable set ", "You always use the opposite `!canBeCombined` so IMHO it would be better to have a `cantBeCombined` instead (I would expect IntelliJ to complain as well).", "If I'm not missing something, this should be case insensitive. You should probably be using the hasher instead.", "Thats a very good point... @madgnome please also add a unit test for this.", "Could we just instead special-case this one key (with a plain condition)?", "Adding this constructor seems like an overkill, to let users provide their own new headers-that-are-broken since [HTTP itself only specifies Set-Cookie as an exception](https://tools.ietf.org/html/rfc7230#section-3.2.2).", "Oh yeah good point, at some point I was using `canBeCombined` without the negation and wanted to avoid the double negative but it's gone now, so will change it.", "I think you could just use `SET_COOKIE.contentEqualsIgnoreCase(...)`.", "Fixed!" ]
"2018-11-30T08:46:04Z"
[ "defect" ]
CombinedHttpHeaders indiscriminately combines all headers
### Expected behavior `CombinedHttpHeaders` combines all headers per name, even if they shouldn't be combined, like for example the `Set-Cookie` header (per [RFC-7230, section 3.2.2](https://tools.ietf.org/html/rfc7230#section-3.2.2)). In fact, there's only a limited set of headers that can be combined (see this [SO thread](http://stackoverflow.com/a/29550711/387927)). ### Actual behavior `CombinedHttpHeaders` should support a configurable set of header names that can safely be combined (or a set of headers that should never be combined, if we want to combine be default). A default set should be available to in order to provide a sensible behaviour out-of-the-box. ### Netty version `4.1.x`
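The problem statement above says repeated headers must not be blindly comma-combined, because RFC 7230 §3.2.2 exempts `Set-Cookie` (cookie strings can themselves contain commas). A minimal, self-contained Java sketch of that rule — the `SimpleCombinedHeaders` class below is a hypothetical illustration, not Netty's actual `CombinedHttpHeaders` implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the behavior this issue asks for: combine repeated
// header values with a comma, EXCEPT for Set-Cookie, whose values must stay
// as separate entries (RFC 7230 section 3.2.2).
public class SimpleCombinedHeaders {
    private final Map<String, List<String>> store = new LinkedHashMap<>();

    public void add(String name, String value) {
        String key = name.toLowerCase();                 // header names are case-insensitive
        List<String> values = store.computeIfAbsent(key, k -> new ArrayList<>());
        if (values.isEmpty() || "set-cookie".equals(key)) {
            values.add(value);                           // never combine Set-Cookie
        } else {
            values.set(0, values.get(0) + "," + value);  // combine everything else
        }
    }

    public List<String> getAll(String name) {
        return store.getOrDefault(name.toLowerCase(), List.of());
    }

    public static void main(String[] args) {
        SimpleCombinedHeaders headers = new SimpleCombinedHeaders();
        headers.add("Accept", "text/html");
        headers.add("accept", "application/json");
        headers.add("Set-Cookie", "a=1");
        headers.add("SET-COOKIE", "b=2");
        System.out.println(headers.getAll("accept"));     // one combined value
        System.out.println(headers.getAll("set-cookie")); // two separate values
    }
}
```

The gold patch in this record takes the same approach inside Netty's `CombinedHttpHeadersImpl`: a `cannotBeCombined(name)` check (case-insensitive match on `Set-Cookie`) guards `addEscapedValue`, `getAll`, and `valueIterator`.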
[ "codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java index 2d43b7ad04a..ae494934d7c 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java @@ -26,6 +26,7 @@ import java.util.List; import java.util.Map; +import static io.netty.handler.codec.http.HttpHeaderNames.SET_COOKIE; import static io.netty.util.AsciiString.CASE_INSENSITIVE_HASHER; import static io.netty.util.internal.StringUtil.COMMA; import static io.netty.util.internal.StringUtil.unescapeCsvFields; @@ -87,7 +88,7 @@ public CombinedHttpHeadersImpl(HashingStrategy<CharSequence> nameHashingStrategy @Override public Iterator<CharSequence> valueIterator(CharSequence name) { Iterator<CharSequence> itr = super.valueIterator(name); - if (!itr.hasNext()) { + if (!itr.hasNext() || cannotBeCombined(name)) { return itr; } Iterator<CharSequence> unescapedItr = unescapeCsvFields(itr.next()).iterator(); @@ -100,7 +101,7 @@ public Iterator<CharSequence> valueIterator(CharSequence name) { @Override public List<CharSequence> getAll(CharSequence name) { List<CharSequence> values = super.getAll(name); - if (values.isEmpty()) { + if (values.isEmpty() || cannotBeCombined(name)) { return values; } if (values.size() != 1) { @@ -213,9 +214,13 @@ public CombinedHttpHeadersImpl setObject(CharSequence name, Iterable<?> values) return this; } + private static boolean cannotBeCombined(CharSequence name) { + return SET_COOKIE.contentEqualsIgnoreCase(name); + } + private CombinedHttpHeadersImpl addEscapedValue(CharSequence name, CharSequence escapedValue) { CharSequence currentValue = super.get(name); - if (currentValue == null) { + if (currentValue == null || cannotBeCombined(name)) { super.add(name, escapedValue); } else { super.set(name, commaSeparateEscapedValues(currentValue, escapedValue));
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java index e0433efb97c..c63c885a955 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java @@ -22,9 +22,12 @@ import java.util.Collections; import java.util.Iterator; +import static io.netty.handler.codec.http.HttpHeaderNames.SET_COOKIE; import static io.netty.util.AsciiString.contentEquals; +import static org.hamcrest.Matchers.hasSize; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; public class CombinedHttpHeadersTest { @@ -66,6 +69,28 @@ public void addCombinedHeadersWhenNotEmpty() { assertEquals("a,b,c", headers.get(HEADER_NAME).toString()); } + @Test + public void dontCombineSetCookieHeaders() { + final CombinedHttpHeaders headers = newCombinedHttpHeaders(); + headers.add(SET_COOKIE, "a"); + final CombinedHttpHeaders otherHeaders = newCombinedHttpHeaders(); + otherHeaders.add(SET_COOKIE, "b"); + otherHeaders.add(SET_COOKIE, "c"); + headers.add(otherHeaders); + assertThat(headers.getAll(SET_COOKIE), hasSize(3)); + } + + @Test + public void dontCombineSetCookieHeadersRegardlessOfCase() { + final CombinedHttpHeaders headers = newCombinedHttpHeaders(); + headers.add("Set-Cookie", "a"); + final CombinedHttpHeaders otherHeaders = newCombinedHttpHeaders(); + otherHeaders.add("set-cookie", "b"); + otherHeaders.add("SET-COOKIE", "c"); + headers.add(otherHeaders); + assertThat(headers.getAll(SET_COOKIE), hasSize(3)); + } + @Test public void setCombinedHeadersWhenNotEmpty() { final CombinedHttpHeaders headers = newCombinedHttpHeaders(); @@ -274,6 +299,15 @@ public void testGetAll() { assertEquals(Arrays.asList("a,b,c"), headers.getAll(HEADER_NAME)); 
} + @Test + public void getAllDontCombineSetCookie() { + final CombinedHttpHeaders headers = newCombinedHttpHeaders(); + headers.add(SET_COOKIE, "a"); + headers.add(SET_COOKIE, "b"); + assertThat(headers.getAll(SET_COOKIE), hasSize(2)); + assertEquals(Arrays.asList("a", "b"), headers.getAll(SET_COOKIE)); + } + @Test public void owsTrimming() { final CombinedHttpHeaders headers = newCombinedHttpHeaders(); @@ -314,6 +348,22 @@ public void valueIterator() { assertValueIterator(headers.valueCharSequenceIterator(HEADER_NAME)); } + @Test + public void nonCombinableHeaderIterator() { + final CombinedHttpHeaders headers = newCombinedHttpHeaders(); + headers.add(SET_COOKIE, "c"); + headers.add(SET_COOKIE, "b"); + headers.add(SET_COOKIE, "a"); + + final Iterator<String> strItr = headers.valueStringIterator(SET_COOKIE); + assertTrue(strItr.hasNext()); + assertEquals("a", strItr.next()); + assertTrue(strItr.hasNext()); + assertEquals("b", strItr.next()); + assertTrue(strItr.hasNext()); + assertEquals("c", strItr.next()); + } + private static void assertValueIterator(Iterator<? extends CharSequence> strItr) { assertTrue(strItr.hasNext()); assertEquals("a", strItr.next());
test
val
"2018-11-29T19:45:52"
"2017-02-28T18:19:41Z"
ddossot
val
netty/netty/5761_8619
netty/netty
netty/netty/5761
netty/netty/8619
[ "keyword_pr_to_issue" ]
aa3c57508b629ebc4aa26f2cf356b7d0a2cc727b
ffc3b2da72c09bb17fd1eb284e1b59bb4ed23b2a
[ "@buchgr thanks for having a look! I think I should write a maven plugin to do this as part of our build just as we do with the autobahn test suite (web sockets)\n", "@normanmaurer ... and first fix all the errors 😜 \n", "@buchgr great, thanks for looking at this! Yes ... fixing errors sgtm :stuck_out_tongue_winking_eye:\n", "Working on a fix. Most errors seem pretty straightfoward... \n", "Thanks a lot!\n\n> Am 29.08.2016 um 16:01 schrieb Jakob Buchgraber notifications@github.com:\n> \n> Working on a fix. Most errors seem pretty straightfoward...\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Everybody here, my platform in addition to use asynchronous circuit also used in the synchronous circuit, and I did not find netty synchronous architecture, thinking, I realized a framework, called net Rio, it to NiO based synchronization response mechanism, basic to meet the needs of our platform. The project was open source, hope you want to join\n", "thanks for uncovering and patching @buchgr !\n", "Running `h2spec` now: 70 tests, 48 passed, 1 skipped, 21 failed\n\nWhen increasing the `DEFAULT_MAX_HEADER_SIZE` to 64KiB, 6 fewer tests fail. Specfically those in `6.10 CONTINUATION`, as this test sends large HEADERS / CONTINUATION frames. It's totally legit for a HTTP/2 implementation to not support such large HEADERS though https://github.com/netty/netty/issues/5774#issuecomment-243618794 ...\n", "May I ask that if I could help on this issue?\r\nI will pick up some simple case to fix and create issue and PR for that.\r\nIt seems that most failed test are occurred at http2 validation.\r\nThanks!", "@chhsiao90 sure just open PRs, we love contributions " ]
[ "why this change ?", "This will allocate a new Iterator on each loop. Consider using an old style for loop.", "should we move this check above the for loop as the check is less expensive. ", "toString() is not needed here.", "use method chaining ?", "should we maybe rename tis this `newHttp2HeadersWithPseudoHeaders()`", "Oops will change that back", "Do we need to check for multiple `:method` headers?", "This is going to be relatively poor performing considering we're going to make a list for each mandatory header. Since the spec clearly defines which ones are mandatory could we just iterate through the header list in order and count them up, eg have `boolean methodObserved, pathObserved, ...` and make sure we don't do more than one and that we got all three?", "Do we worry about the client side validation somewhere else?", "Ah yeah good point! I should add that to h2spec as well", "Iterating through the whole list would scale linearly with the number of headers whereas this version has a constant run time. If we assume the number of headers is most of the time small iterating would be more performant.\r\n\r\nIdeally we would have a `count(name)` method on headers that would just count the headers with a given name, not convinced this usecase justify adding that to the API though.", "If we use my strategy of having an `observed` variable for each critical header we should count no more than 4 as at 4 we'd hit an exceptional condition. This actually gets to another conformance requirement of the RFC ([section 8.1.2.1](https://http2.github.io/http2-spec/#rfc.section.8.1.2.1)) which is that we should make sure we don't have bonus pseudo headers: `Pseudo-header fields defined for requests MUST NOT appear in responses; pseudo-header fields defined for responses MUST NOT appear in requests.`", "Nice, forgot about this requirement, will change it. \r\n(BTW 8.1.2.1 requirement is already checked in the HPackDecoder) ", "Ah, sure enough. 
Do you think it would be good to consolidate the header validation somewhere? Maybe it's too late, but I find it a bit surprising that the `HpackDecoder` takes it upon itself to enforce session semantics, and it seems that it can only do a subset of them. (consolidation could be a different PR if done at all, I'm just throwing half baked ideas around so fft tell me why my suggestions are ridiculous 😄)", "They validate at different levels: in `HPackdecoder` we check if the headerBlock is valid, here we check if all the headers together make sense, that being said it would be nice if it was done at a single place.\r\n\r\nThe problem is that `HPackDecoder` level validation needs to know the order of the headers, we lose the order once we return Http2Headers and `Http2HeadersValidation` needs to know more about the context: is this an initial header or trailing one? It didn't seem right to push that down to hpackdecoder.", "That's true regarding the order validation in hpack. Maybe we could shift request/response validation to where you're adding more validation logic, but probably not all of it and at a minimum this isn't obviously right.", "Baby steps, h2spec only validate server specification for now.", "This check was moved to `Http2HeadersValidator` as it doesn't rely on the ordering", "I'm not sure we want to put this check behind the `validateHeaders` check as it was always performed before. 
Someone else may have a different opinion though.", "Since this was moved out of HpackDecoder I think we should now validate the request pseudo headers as it leaves a hole otherwise.", "Since we're only validating the request specific headers I think we can make this another `else` clause in the branches below.", "H2 headers must always have lower case keys and pseudo-headers should be no exception, so we should be able to omit the `IngoreCase`.", "Even if we don't plan on doing the client side now we should probably prepare for that by making the name reflect that it is request specific.", "Oh yeah good point, will wait on other opinion for now", "This comment and the next one make me think we should probably leave this check in HPackDecoder. 🤔 ", "I moved that out of the condition in the end.", "spelling: specific" ]
"2018-12-04T02:20:57Z"
[]
HTTP/2: Many specification compliancy issues
I ran [h2spec](https://github.com/summerwind/h2spec) against the hello world HTTP/2 server (4.1 branch, with removed HTTP upgrade logic). The result: 70 tests, 37 passed, 1 skipped, **32 failed**. From skimming over the list of errors, the failed tests seem to be mostly due to Netty responding with wrong error codes. [Here](https://gist.github.com/buchgr/9b8bc81592258008641e20e6d96f61c2) is a detailed list of what went wrong. cc: @Scottmitch @nmittler @louiscryan @ejona86
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeadersValidator.java", "codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/HpackDecoderTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2HeadersValidatorTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilderTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java", "codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java", "testsuite-http2/pom.xml" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java index f262b114f09..557341c1897 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandlerBuilder.java @@ -535,8 +535,8 @@ private T buildFromConnection(Http2Connection connection) { encoder = new StreamBufferingEncoder(encoder); } - DefaultHttp2ConnectionDecoder decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader, - promisedRequestVerifier(), isAutoAckSettingsFrame(), isAutoAckPingFrame()); + Http2ConnectionDecoder decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader, + promisedRequestVerifier(), isAutoAckSettingsFrame(), isAutoAckPingFrame(), isValidateHeaders()); return buildFromCodec(decoder, encoder); } diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java index 09f16c0a4bd..01a5f3831cb 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java @@ -31,6 +31,9 @@ import static io.netty.handler.codec.http2.Http2Error.STREAM_CLOSED; import static io.netty.handler.codec.http2.Http2Exception.connectionError; import static io.netty.handler.codec.http2.Http2Exception.streamError; +import static io.netty.handler.codec.http2.Http2HeadersValidator.validateConnectionSpecificHeaders; +import static io.netty.handler.codec.http2.Http2HeadersValidator.validateRequestPseudoHeaders; +import static io.netty.handler.codec.http2.Http2HeadersValidator.validateResponsePseudoHeaders; import static 
io.netty.handler.codec.http2.Http2PromisedRequestVerifier.ALWAYS_VERIFY; import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED; import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE; @@ -59,6 +62,7 @@ public class DefaultHttp2ConnectionDecoder implements Http2ConnectionDecoder { private final Http2PromisedRequestVerifier requestVerifier; private final Http2SettingsReceivedConsumer settingsReceivedConsumer; private final boolean autoAckPing; + private final boolean validateHeaders; public DefaultHttp2ConnectionDecoder(Http2Connection connection, Http2ConnectionEncoder encoder, @@ -93,6 +97,15 @@ public DefaultHttp2ConnectionDecoder(Http2Connection connection, this(connection, encoder, frameReader, requestVerifier, autoAckSettings, true); } + public DefaultHttp2ConnectionDecoder(Http2Connection connection, + Http2ConnectionEncoder encoder, + Http2FrameReader frameReader, + Http2PromisedRequestVerifier requestVerifier, + boolean autoAckSettings, + boolean autoAckPing) { + this(connection, encoder, frameReader, requestVerifier, autoAckSettings, autoAckPing, false); + } + /** * Create a new instance. * @param connection The {@link Http2Connection} associated with this decoder. 
@@ -113,7 +126,8 @@ public DefaultHttp2ConnectionDecoder(Http2Connection connection, Http2FrameReader frameReader, Http2PromisedRequestVerifier requestVerifier, boolean autoAckSettings, - boolean autoAckPing) { + boolean autoAckPing, + boolean validateHeaders) { this.autoAckPing = autoAckPing; if (autoAckSettings) { settingsReceivedConsumer = null; @@ -128,6 +142,7 @@ public DefaultHttp2ConnectionDecoder(Http2Connection connection, this.frameReader = checkNotNull(frameReader, "frameReader"); this.encoder = checkNotNull(encoder, "encoder"); this.requestVerifier = checkNotNull(requestVerifier, "requestVerifier"); + this.validateHeaders = validateHeaders; if (connection.local().flowController() == null) { connection.local().flowController(new DefaultHttp2LocalFlowController(connection)); } @@ -344,6 +359,10 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers streamId, endOfStream, stream.state()); } + if (validateHeaders) { + validateHeaders(streamId, headers, stream); + } + switch (stream.state()) { case RESERVED_REMOTE: stream.open(endOfStream); @@ -631,6 +650,18 @@ private void verifyStreamMayHaveExisted(int streamId) throws Http2Exception { throw connectionError(PROTOCOL_ERROR, "Stream %d does not exist", streamId); } } + + private void validateHeaders(int streamId, Http2Headers headers, Http2Stream stream) throws Http2Exception { + if (connection.isServer()) { + if (!stream.isHeadersReceived() || stream.state() == HALF_CLOSED_REMOTE) { + validateRequestPseudoHeaders(headers, streamId); + } + } else { + validateResponsePseudoHeaders(headers, streamId); + } + + validateConnectionSpecificHeaders(headers, streamId); + } } private final class PrefaceFrameListener implements Http2FrameListener { diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java index 6070bc5f98b..1499d807527 100644 --- 
a/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HpackDecoder.java @@ -393,13 +393,8 @@ private static HeaderType validate(int streamId, CharSequence name, throw streamError(streamId, PROTOCOL_ERROR, "Invalid HTTP/2 pseudo-header '%s' encountered.", name); } - final HeaderType currentHeaderType = pseudoHeader.isRequestOnly() ? + return pseudoHeader.isRequestOnly() ? HeaderType.REQUEST_PSEUDO_HEADER : HeaderType.RESPONSE_PSEUDO_HEADER; - if (previousHeaderType != null && currentHeaderType != previousHeaderType) { - throw streamError(streamId, PROTOCOL_ERROR, "Mix of request and response pseudo-headers."); - } - - return currentHeaderType; } return HeaderType.REGULAR_HEADER; diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeadersValidator.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeadersValidator.java new file mode 100644 index 00000000000..fa4ed9e115c --- /dev/null +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2HeadersValidator.java @@ -0,0 +1,159 @@ +/* + * Copyright 2018 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package io.netty.handler.codec.http2; + +import io.netty.handler.codec.http.HttpMethod; +import io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName; +import io.netty.util.AsciiString; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Map.Entry; + +import static io.netty.handler.codec.http.HttpHeaderNames.CONNECTION; +import static io.netty.handler.codec.http.HttpHeaderNames.KEEP_ALIVE; +import static io.netty.handler.codec.http.HttpHeaderNames.TE; +import static io.netty.handler.codec.http.HttpHeaderNames.TRANSFER_ENCODING; +import static io.netty.handler.codec.http.HttpHeaderNames.UPGRADE; +import static io.netty.handler.codec.http.HttpHeaderValues.TRAILERS; +import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR; +import static io.netty.handler.codec.http2.Http2Exception.streamError; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.METHOD; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.PATH; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.SCHEME; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.hasPseudoHeaderFormat; + +final class Http2HeadersValidator { + + private static final List<AsciiString> connectionSpecificHeaders = Collections.unmodifiableList( + Arrays.asList(CONNECTION, TRANSFER_ENCODING, KEEP_ALIVE, UPGRADE)); + + private Http2HeadersValidator() { + } + + /** + * Validates connection-specific headers according to + * <a href="https://tools.ietf.org/html/rfc7540#section-8.1.2.2">RFC7540, section-8.1.2.2</a> + */ + static void validateConnectionSpecificHeaders(Http2Headers headers, int streamId) throws Http2Exception { + for (int i = 0; i < connectionSpecificHeaders.size(); i++) { + final AsciiString header = connectionSpecificHeaders.get(i); + if (headers.contains(header)) { + throw streamError(streamId, PROTOCOL_ERROR, + "Connection-specific headers like [%s] must not be 
used with HTTP/2.", header); + } + } + + final CharSequence teHeader = headers.get(TE); + if (teHeader != null && !AsciiString.contentEqualsIgnoreCase(teHeader, TRAILERS)) { + throw streamError(streamId, PROTOCOL_ERROR, + "TE header must not contain any value other than \"%s\"", TRAILERS); + } + } + + /** + * Validates response pseudo-header fields + */ + static void validateResponsePseudoHeaders(Http2Headers headers, int streamId) throws Http2Exception { + for (Entry<CharSequence, CharSequence> entry : headers) { + final CharSequence key = entry.getKey(); + if (!hasPseudoHeaderFormat(key)) { + // We know that pseudo header appears first so we can stop + // looking once we get to the first non pseudo headers. + break; + } + + final PseudoHeaderName pseudoHeader = PseudoHeaderName.getPseudoHeader(key); + if (pseudoHeader.isRequestOnly()) { + throw streamError(streamId, PROTOCOL_ERROR, + "Request pseudo-header [%s] is not allowed in a response.", key); + } + } + } + + /** + * Validates request pseudo-header fields according to + * <a href="https://tools.ietf.org/html/rfc7540#section-8.1.2.3">RFC7540, section-8.1.2.3</a> + */ + static void validateRequestPseudoHeaders(Http2Headers headers, int streamId) throws Http2Exception { + final CharSequence method = headers.get(METHOD.value()); + if (method == null) { + throw streamError(streamId, PROTOCOL_ERROR, + "Mandatory header [:method] is missing."); + } + + if (HttpMethod.CONNECT.asciiName().contentEqualsIgnoreCase(method)) { + if (headers.contains(SCHEME.value())) { + throw streamError(streamId, PROTOCOL_ERROR, + "Header [:scheme] must be omitted when using CONNECT method."); + } + + if (headers.contains(PATH.value())) { + throw streamError(streamId, PROTOCOL_ERROR, + "Header [:path] must be omitted when using CONNECT method."); + } + + if (headers.getAll(METHOD.value()).size() > 1) { + throw streamError(streamId, PROTOCOL_ERROR, + "Header [:method] should have a unique value."); + } + } else { + final CharSequence 
path = headers.get(PATH.value()); + if (path != null && path.length() == 0) { + throw streamError(streamId, PROTOCOL_ERROR, "[:path] header cannot be empty."); + } + + int methodHeadersCount = 0; + int pathHeadersCount = 0; + int schemeHeadersCount = 0; + for (Entry<CharSequence, CharSequence> entry : headers) { + final CharSequence key = entry.getKey(); + if (!hasPseudoHeaderFormat(key)) { + // We know that pseudo header appears first so we can stop + // looking once we get to the first non pseudo headers. + break; + } + + final PseudoHeaderName pseudoHeader = PseudoHeaderName.getPseudoHeader(key); + if (METHOD.value().contentEquals(key)) { + methodHeadersCount++; + } else if (PATH.value().contentEquals(key)) { + pathHeadersCount++; + } else if (SCHEME.value().contentEquals(key)) { + schemeHeadersCount++; + } else if (!pseudoHeader.isRequestOnly()) { + throw streamError(streamId, PROTOCOL_ERROR, + "Response pseudo-header [%s] is not allowed in a request.", key); + } + } + + validatePseudoHeaderCount(streamId, methodHeadersCount, METHOD); + validatePseudoHeaderCount(streamId, pathHeadersCount, PATH); + validatePseudoHeaderCount(streamId, schemeHeadersCount, SCHEME); + } + } + + private static void validatePseudoHeaderCount(int streamId, int valueCount, PseudoHeaderName headerName) + throws Http2Exception { + if (valueCount == 0) { + throw streamError(streamId, PROTOCOL_ERROR, + "Mandatory header [%s] is missing.", headerName.value()); + } else if (valueCount > 1) { + throw streamError(streamId, PROTOCOL_ERROR, + "Header [%s] should have a unique value.", headerName.value()); + } + } +} diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java index 8ffbd285338..3aadce28ce9 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java @@ 
-391,9 +391,14 @@ public static Http2Headers toHttp2Headers(HttpMessage in, boolean validateHeader if (in instanceof HttpRequest) { HttpRequest request = (HttpRequest) in; URI requestTargetUri = URI.create(request.uri()); - out.path(toHttp2Path(requestTargetUri)); out.method(request.method().asciiName()); - setHttp2Scheme(inHeaders, requestTargetUri, out); + + // According to the spec https://tools.ietf.org/html/rfc7540#section-8.3 scheme and path + // should be omitted for CONNECT method + if (request.method() != HttpMethod.CONNECT) { + setHttp2Scheme(inHeaders, requestTargetUri, out); + out.path(toHttp2Path(requestTargetUri)); + } if (!isOriginForm(requestTargetUri) && !isAsteriskForm(requestTargetUri)) { // Attempt to take from HOST header before taking from the request-line
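The gold patch above introduces `Http2HeadersValidator` to enforce the RFC 7540 request pseudo-header rules (§8.1.2.3: exactly one `:method`, `:path`, `:scheme`; §8.3: `:scheme` and `:path` omitted for CONNECT). A standalone Java sketch of the same checks — `RequestPseudoHeaderCheck` is a hypothetical helper for illustration, not part of the Netty API:

```java
import java.util.List;
import java.util.Map;

// Hypothetical standalone version of the checks the gold patch adds in
// Http2HeadersValidator.validateRequestPseudoHeaders: a request needs exactly
// one :method, :path and :scheme -- except CONNECT, which must omit both
// :scheme and :path (RFC 7540 sections 8.1.2.3 and 8.3).
public class RequestPseudoHeaderCheck {
    static void validate(Map<String, List<String>> headers) {
        List<String> method = headers.getOrDefault(":method", List.of());
        if (method.size() != 1) {
            throw new IllegalArgumentException(":method must appear exactly once");
        }
        boolean connect = "CONNECT".equals(method.get(0));
        for (String name : new String[] {":scheme", ":path"}) {
            int count = headers.getOrDefault(name, List.of()).size();
            if (connect && count != 0) {
                throw new IllegalArgumentException(name + " must be omitted for CONNECT");
            }
            if (!connect && count != 1) {
                throw new IllegalArgumentException(name + " must appear exactly once");
            }
        }
    }

    public static void main(String[] args) {
        validate(Map.of(":method", List.of("GET"),
                        ":scheme", List.of("https"),
                        ":path", List.of("/")));         // valid ordinary request
        validate(Map.of(":method", List.of("CONNECT"))); // valid CONNECT request
        System.out.println("ok");
    }
}
```

Netty's actual validator additionally rejects connection-specific headers (`Connection`, `Transfer-Encoding`, `Keep-Alive`, `Upgrade`), restricts `TE` to `trailers`, and rejects response-only pseudo-headers in requests, per §8.1.2.2.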
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java index 1d815a330cf..f3b5edffb9b 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java @@ -70,6 +70,7 @@ public class DataCompressionHttp2Test { private static final AsciiString GET = new AsciiString("GET"); private static final AsciiString POST = new AsciiString("POST"); private static final AsciiString PATH = new AsciiString("/some/path"); + private static final AsciiString SCHEME = new AsciiString("http"); @Mock private Http2FrameListener serverListener; @@ -144,7 +145,7 @@ public void teardown() throws InterruptedException { @Test public void justHeadersNoData() throws Exception { bootstrapEnv(0); - final Http2Headers headers = new DefaultHttp2Headers().method(GET).path(PATH) + final Http2Headers headers = new DefaultHttp2Headers().method(GET).path(PATH).scheme(SCHEME) .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP); runInChannel(clientChannel, new Http2Runnable() { @@ -165,7 +166,7 @@ public void gzipEncodingSingleEmptyMessage() throws Exception { final ByteBuf data = Unpooled.copiedBuffer(text.getBytes()); bootstrapEnv(data.readableBytes()); try { - final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH) + final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH).scheme(SCHEME) .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP); runInChannel(clientChannel, new Http2Runnable() { @@ -189,7 +190,7 @@ public void gzipEncodingSingleMessage() throws Exception { final ByteBuf data = Unpooled.copiedBuffer(text.getBytes()); bootstrapEnv(data.readableBytes()); try { - final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH) + final Http2Headers headers = new 
DefaultHttp2Headers().method(POST).path(PATH).scheme(SCHEME) .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP); runInChannel(clientChannel, new Http2Runnable() { @@ -215,7 +216,7 @@ public void gzipEncodingMultipleMessages() throws Exception { final ByteBuf data2 = Unpooled.copiedBuffer(text2.getBytes()); bootstrapEnv(data1.readableBytes() + data2.readableBytes()); try { - final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH) + final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH).scheme(SCHEME) .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP); runInChannel(clientChannel, new Http2Runnable() { @@ -243,7 +244,7 @@ public void deflateEncodingWriteLargeMessage() throws Exception { bootstrapEnv(BUFFER_SIZE); final ByteBuf data = Unpooled.wrappedBuffer(bytes); try { - final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH) + final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH).scheme(SCHEME) .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.DEFLATE); runInChannel(clientChannel, new Http2Runnable() { diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java index 7e87d52893c..75cfbf43ea5 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java @@ -22,6 +22,7 @@ import io.netty.channel.ChannelPromise; import io.netty.channel.DefaultChannelPromise; import io.netty.handler.codec.http.HttpResponseStatus; +import io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName; import junit.framework.AssertionFailedError; import org.junit.Before; import org.junit.Test; @@ -38,9 +39,11 @@ import static io.netty.buffer.Unpooled.wrappedBuffer; import static 
io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT; import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR; +import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE; import static io.netty.handler.codec.http2.Http2Stream.State.IDLE; import static io.netty.handler.codec.http2.Http2Stream.State.OPEN; import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE; +import static io.netty.handler.codec.http2.Http2TestUtil.newHttp2HeadersWithRequestPseudoHeaders; import static io.netty.util.CharsetUtil.UTF_8; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -192,7 +195,8 @@ public Http2Stream answer(InvocationOnMock in) throws Throwable { when(ctx.newPromise()).thenReturn(promise); when(ctx.write(any())).thenReturn(future); - decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader); + decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader, + Http2PromisedRequestVerifier.ALWAYS_VERIFY, true, true, true); decoder.lifecycleManager(lifecycleManager); decoder.frameListener(listener); @@ -492,6 +496,64 @@ public void headersReadForPromisedStreamShouldHalfOpenStream() throws Exception eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(false)); } + @Test(expected = Http2Exception.class) + public void requestPseudoHeadersInResponseThrows() throws Exception { + when(connection.isServer()).thenReturn(false); + when(connection.stream(STREAM_ID)).thenReturn(null); + when(connection.streamMayHaveExisted(STREAM_ID)).thenReturn(false); + when(remote.createStream(eq(STREAM_ID), anyBoolean())).thenReturn(stream); + when(stream.state()).thenReturn(HALF_CLOSED_REMOTE); + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + decode().onHeadersRead(ctx, STREAM_ID, headers, 0, false); + } + + @Test(expected = Http2Exception.class) + public void missingPseudoHeadersInLeadingHeaderThrows() throws Exception { + 
when(connection.isServer()).thenReturn(true); + when(connection.stream(STREAM_ID)).thenReturn(null); + when(connection.streamMayHaveExisted(STREAM_ID)).thenReturn(false); + when(remote.createStream(eq(STREAM_ID), anyBoolean())).thenReturn(stream); + when(stream.state()).thenReturn(HALF_CLOSED_REMOTE); + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(PseudoHeaderName.METHOD.value()); + decode().onHeadersRead(ctx, STREAM_ID, headers, 0, false); + } + + @Test + public void missingPseudoHeadersInLeadingHeaderShouldNotThrowsIfValidationDisabled() throws Exception { + when(connection.isServer()).thenReturn(true); + when(connection.stream(STREAM_ID)).thenReturn(null); + when(connection.streamMayHaveExisted(STREAM_ID)).thenReturn(false); + when(remote.createStream(eq(STREAM_ID), anyBoolean())).thenReturn(stream); + when(stream.state()).thenReturn(HALF_CLOSED_REMOTE); + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(PseudoHeaderName.METHOD.value()); + + decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader, + Http2PromisedRequestVerifier.ALWAYS_VERIFY, true, true, false); + decoder.lifecycleManager(lifecycleManager); + decoder.frameListener(listener); + + // Simulate receiving the initial settings from the remote endpoint. + decode().onSettingsRead(ctx, new Http2Settings()); + // Simulate receiving the SETTINGS ACK for the initial settings. 
+ decode().onSettingsAckRead(ctx); + + decode().onHeadersRead(ctx, STREAM_ID, headers, 0, false); + } + + @Test + public void missingPseudoHeadersInTrailerHeaderDoesNotThrow() throws Exception { + when(connection.isServer()).thenReturn(true); + when(connection.stream(STREAM_ID)).thenReturn(stream); + + decode().onHeadersRead(ctx, STREAM_ID, newHttp2HeadersWithRequestPseudoHeaders(), 0, false); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(PseudoHeaderName.METHOD.value()); + decode().onHeadersRead(ctx, STREAM_ID, headers, 0, true); + } + @Test(expected = Http2Exception.class) public void trailersDoNotEndStreamThrows() throws Exception { decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, false); diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/HpackDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/HpackDecoderTest.java index 7b26d90d170..7abbbe4598b 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/HpackDecoderTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/HpackDecoderTest.java @@ -575,46 +575,6 @@ public void disableHeaderValidation() throws Exception { } } - @Test - public void requestPseudoHeaderInResponse() throws Exception { - ByteBuf in = Unpooled.buffer(200); - try { - HpackEncoder hpackEncoder = new HpackEncoder(true); - - Http2Headers toEncode = new DefaultHttp2Headers(); - toEncode.add(":status", "200"); - toEncode.add(":method", "GET"); - hpackEncoder.encodeHeaders(1, in, toEncode, NEVER_SENSITIVE); - - Http2Headers decoded = new DefaultHttp2Headers(); - - expectedException.expect(Http2Exception.StreamException.class); - hpackDecoder.decode(1, in, decoded, true); - } finally { - in.release(); - } - } - - @Test - public void responsePseudoHeaderInRequest() throws Exception { - ByteBuf in = Unpooled.buffer(200); - try { - HpackEncoder hpackEncoder = new HpackEncoder(true); - - Http2Headers toEncode = new 
DefaultHttp2Headers(); - toEncode.add(":method", "GET"); - toEncode.add(":status", "200"); - hpackEncoder.encodeHeaders(1, in, toEncode, NEVER_SENSITIVE); - - Http2Headers decoded = new DefaultHttp2Headers(); - - expectedException.expect(Http2Exception.StreamException.class); - hpackDecoder.decode(1, in, decoded, true); - } finally { - in.release(); - } - } - @Test public void pseudoHeaderAfterRegularHeader() throws Exception { ByteBuf in = Unpooled.buffer(200); @@ -644,7 +604,7 @@ public void failedValidationDoesntCorruptHpack() throws Exception { Http2Headers toEncode = new DefaultHttp2Headers(); toEncode.add(":method", "GET"); - toEncode.add(":status", "200"); + toEncode.add(":unknownpseudoheader", "200"); toEncode.add("foo", "bar"); hpackEncoder.encodeHeaders(1, in1, toEncode, NEVER_SENSITIVE); @@ -664,7 +624,7 @@ public void failedValidationDoesntCorruptHpack() throws Exception { assertEquals(3, decoded.size()); assertEquals("GET", decoded.method().toString()); - assertEquals("200", decoded.status().toString()); + assertEquals("200", decoded.get(":unknownpseudoheader").toString()); assertEquals("bar", decoded.get("foo").toString()); } finally { in1.release(); diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2HeadersValidatorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2HeadersValidatorTest.java new file mode 100644 index 00000000000..3f323a6ef05 --- /dev/null +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2HeadersValidatorTest.java @@ -0,0 +1,174 @@ +/* + * Copyright 2018 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, version 2.0 (the + * "License"); you may not use this file except in compliance with the License. 
You may obtain a + * copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ +package io.netty.handler.codec.http2; + +import io.netty.handler.codec.http2.Http2Exception.StreamException; +import org.junit.Rule; +import org.junit.Test; +import org.junit.rules.ExpectedException; + +import static io.netty.handler.codec.http.HttpHeaderNames.CONNECTION; +import static io.netty.handler.codec.http.HttpHeaderNames.TE; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.METHOD; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.PATH; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.SCHEME; +import static io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName.STATUS; +import static io.netty.handler.codec.http2.Http2HeadersValidator.validateConnectionSpecificHeaders; +import static io.netty.handler.codec.http2.Http2HeadersValidator.validateRequestPseudoHeaders; +import static io.netty.handler.codec.http2.Http2TestUtil.newHttp2HeadersWithRequestPseudoHeaders; + +public class Http2HeadersValidatorTest { + + private static final int STREAM_ID = 3; + + @Rule + public final ExpectedException expectedException = ExpectedException.none(); + + @Test + public void validateConnectionSpecificHeadersShouldThrowIfConnectionHeaderPresent() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Connection-speficic headers like [connection] must not be used with HTTP"); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(CONNECTION, "keep-alive"); + 
validateConnectionSpecificHeaders(headers, STREAM_ID); + } + + @Test + public void validateConnectionSpecificHeadersShouldThrowIfTeHeaderValueIsNotTrailers() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("TE header must not contain any value other than \"trailers\""); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(TE, "trailers, deflate"); + validateConnectionSpecificHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowWhenMethodHeaderIsMissing() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Mandatory header [:method] is missing."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(METHOD.value()); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowWhenPathHeaderIsMissing() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Mandatory header [:path] is missing."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(PATH.value()); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowWhenPathHeaderIsEmpty() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("[:path] header cannot be empty."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.set(PATH.value(), ""); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowWhenSchemeHeaderIsMissing() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Mandatory header [:scheme] is missing."); + + final Http2Headers headers = 
newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(SCHEME.value()); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfMethodHeaderIsNotUnique() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:method] should have a unique value."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(METHOD.value(), "GET"); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfPathHeaderIsNotUnique() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:path] should have a unique value."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(PATH.value(), "/"); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfSchemeHeaderIsNotUnique() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:scheme] should have a unique value."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(SCHEME.value(), "/"); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfMethodHeaderIsNotUniqueWhenMethodIsConnect() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:method] should have a unique value."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.remove(SCHEME.value()); + headers.remove(PATH.value()); + headers.set(METHOD.value(), "CONNECT"); + headers.add(METHOD.value(), "CONNECT"); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void 
validatePseudoHeadersShouldThrowIfPathHeaderIsPresentWhenMethodIsConnect() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:path] must be omitted when using CONNECT method."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.set(METHOD.value(), "CONNECT"); + headers.remove(SCHEME.value()); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfSchemeHeaderIsPresentWhenMethodIsConnect() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Header [:scheme] must be omitted when using CONNECT method."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.set(METHOD.value(), "CONNECT"); + headers.remove(PATH.value()); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + + @Test + public void validatePseudoHeadersShouldThrowIfResponseHeaderInRequest() throws Http2Exception { + expectedException.expect(StreamException.class); + expectedException.expectMessage("Response pseudo-header [:status] is not allowed in a request."); + + final Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); + headers.add(STATUS.value(), "200"); + validateRequestPseudoHeaders(headers, STREAM_ID); + } + +} diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilderTest.java index c12f5f8b7cc..17a548f5c31 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilderTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2MultiplexCodecBuilderTest.java @@ -42,6 +42,7 @@ import java.util.concurrent.CountDownLatch; import static io.netty.handler.codec.http2.Http2CodecUtil.isStreamIdValid; +import static 
io.netty.handler.codec.http2.Http2TestUtil.newHttp2HeadersWithRequestPseudoHeaders; import static java.util.concurrent.TimeUnit.SECONDS; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; @@ -157,8 +158,8 @@ public void multipleOutboundStreams() throws Exception { assertTrue(childChannel2.isActive()); assertFalse(isStreamIdValid(childChannel2.stream().id())); - Http2Headers headers1 = new DefaultHttp2Headers(); - Http2Headers headers2 = new DefaultHttp2Headers(); + Http2Headers headers1 = newHttp2HeadersWithRequestPseudoHeaders(); + Http2Headers headers2 = newHttp2HeadersWithRequestPseudoHeaders(); // Test that streams can be made active (headers sent) in different order than the corresponding channels // have been created. childChannel2.writeAndFlush(new DefaultHttp2HeadersFrame(headers2)); @@ -187,7 +188,7 @@ public void createOutboundStream() throws Exception { assertTrue(childChannel.isRegistered()); assertTrue(childChannel.isActive()); - Http2Headers headers = new DefaultHttp2Headers(); + Http2Headers headers = newHttp2HeadersWithRequestPseudoHeaders(); childChannel.writeAndFlush(new DefaultHttp2HeadersFrame(headers)); ByteBuf data = Unpooled.buffer(100).writeZero(100); childChannel.writeAndFlush(new DefaultHttp2DataFrame(data, true)); diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java index 6fa3449709a..a8ba26ba7fa 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java @@ -98,6 +98,13 @@ public static byte[] randomBytes(int size) { return data; } + public static Http2Headers newHttp2HeadersWithRequestPseudoHeaders() { + return new DefaultHttp2Headers(true) + .method("GET") + .path("/") + .scheme("https"); + } + /** * Returns an {@link AsciiString} that wraps a randomly-filled byte array. 
*/ diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java index 0e4f3e95202..a66ed56f3a3 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java @@ -242,8 +242,8 @@ public void testAuthorityFormRequestTargetHandled() throws Exception { final HttpHeaders httpHeaders = request.headers(); httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 5); final Http2Headers http2Headers = - new DefaultHttp2Headers().method(new AsciiString("CONNECT")).path(new AsciiString("/")) - .scheme(new AsciiString("http")).authority(new AsciiString("www.example.com:80")); + new DefaultHttp2Headers().method(new AsciiString("CONNECT")) + .authority(new AsciiString("www.example.com:80")); ChannelPromise writePromise = newPromise(); verifyHeadersOnly(http2Headers, writePromise, clientChannel.writeAndFlush(request, writePromise)); diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java index 8fb4ebfc678..6e53d0e489c 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java @@ -43,6 +43,7 @@ import io.netty.handler.codec.http.HttpResponseStatus; import io.netty.handler.codec.http.HttpVersion; import io.netty.handler.codec.http2.Http2TestUtil.Http2Runnable; +import io.netty.handler.codec.http2.HttpConversionUtil.ExtensionHeaderNames; import io.netty.util.AsciiString; import io.netty.util.CharsetUtil; import io.netty.util.concurrent.Future; @@ -268,8 +269,11 @@ public void clientRequestOneDataFrame() throws 
Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length()); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); - final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).path( - new AsciiString("/some/path/resource2")); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); + final Http2Headers http2Headers = new DefaultHttp2Headers() + .method(new AsciiString("GET")) + .scheme(new AsciiString("http")) + .path(new AsciiString("/some/path/resource2")); runInChannel(clientChannel, new Http2Runnable() { @Override public void run() throws Http2Exception { @@ -301,8 +305,11 @@ public void clientRequestMultipleDataFrames() throws Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length()); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); - final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).path( - new AsciiString("/some/path/resource2")); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); + final Http2Headers http2Headers = new DefaultHttp2Headers() + .method(new AsciiString("GET")) + .scheme(new AsciiString("http")) + .path(new AsciiString("/some/path/resource2")); final int midPoint = text.length() / 2; runInChannel(clientChannel, new Http2Runnable() { @Override @@ -338,8 +345,11 @@ public void clientRequestMultipleEmptyDataFrames() throws Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length()); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); - final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).path( - 
new AsciiString("/some/path/resource2")); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); + final Http2Headers http2Headers = new DefaultHttp2Headers() + .method(new AsciiString("GET")) + .scheme("http") + .path(new AsciiString("/some/path/resource2")); runInChannel(clientChannel, new Http2Runnable() { @Override public void run() throws Http2Exception { @@ -372,12 +382,15 @@ public void clientRequestTrailingHeaders() throws Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length()); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); HttpHeaders trailingHeaders = request.trailingHeaders(); trailingHeaders.set(of("Foo"), of("goo")); trailingHeaders.set(of("fOo2"), of("goo2")); trailingHeaders.add(of("foO2"), of("goo3")); - final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).path( - new AsciiString("/some/path/resource2")); + final Http2Headers http2Headers = new DefaultHttp2Headers() + .method(new AsciiString("GET")) + .scheme(new AsciiString("http")) + .path(new AsciiString("/some/path/resource2")); final Http2Headers http2Headers2 = new DefaultHttp2Headers() .set(new AsciiString("foo"), new AsciiString("goo")) .set(new AsciiString("foo2"), new AsciiString("goo2")) @@ -418,15 +431,21 @@ public void clientRequestStreamDependencyInHttpMessageFlow() throws Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length()); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); HttpHeaders httpHeaders2 = request2.headers(); httpHeaders2.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 5); 
httpHeaders2.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_DEPENDENCY_ID.text(), 3); httpHeaders2.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 123); httpHeaders2.setInt(HttpHeaderNames.CONTENT_LENGTH, text2.length()); - final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("PUT")).path( - new AsciiString("/some/path/resource")); - final Http2Headers http2Headers2 = new DefaultHttp2Headers().method(new AsciiString("PUT")).path( - new AsciiString("/some/path/resource2")); + httpHeaders2.set(ExtensionHeaderNames.SCHEME.text(), "http"); + final Http2Headers http2Headers = new DefaultHttp2Headers() + .method(new AsciiString("PUT")) + .scheme(new AsciiString("http")) + .path(new AsciiString("/some/path/resource")); + final Http2Headers http2Headers2 = new DefaultHttp2Headers() + .method(new AsciiString("PUT")) + .scheme(new AsciiString("http")) + .path(new AsciiString("/some/path/resource2")); runInChannel(clientChannel, new Http2Runnable() { @Override public void run() throws Http2Exception { @@ -482,7 +501,10 @@ public void serverRequestPushPromise() throws Exception { httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0); httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); - final Http2Headers http2Headers3 = new DefaultHttp2Headers().method(new AsciiString("GET")) + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); + final Http2Headers http2Headers3 = new DefaultHttp2Headers() + .method(new AsciiString("GET")) + .scheme("http") .path(new AsciiString("/push/test")); runInChannel(clientChannel, new Http2Runnable() { @Override @@ -540,9 +562,11 @@ public void serverResponseHeaderInformational() throws Exception { httpHeaders.set(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE); httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0); 
httpHeaders.setShort(HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16); + httpHeaders.set(ExtensionHeaderNames.SCHEME.text(), "http"); final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("PUT")) .path(new AsciiString("/info/test")) + .scheme(new AsciiString("http")) .set(new AsciiString(HttpHeaderNames.EXPECT.toString()), new AsciiString(HttpHeaderValues.CONTINUE.toString())); final FullHttpMessage response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE); diff --git a/testsuite-http2/pom.xml b/testsuite-http2/pom.xml index 71c40af60db..1c405150d6a 100644 --- a/testsuite-http2/pom.xml +++ b/testsuite-http2/pom.xml @@ -94,15 +94,6 @@ <excludeSpec>5.1 - half closed (remote): Sends a HEADERS frame</excludeSpec> <excludeSpec>5.1 - closed: Sends a HEADERS frame</excludeSpec> <excludeSpec>5.1.1 - Sends stream identifier that is numerically smaller than previous</excludeSpec> - <excludeSpec>8.1.2.2 - Sends a HEADERS frame that contains the connection-specific header field</excludeSpec> - <excludeSpec>8.1.2.2 - Sends a HEADERS frame that contains the TE header field with any value other than "trailers"</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame with empty ":path" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame that omits ":method" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame that omits ":scheme" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame that omits ":path" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame with duplicated ":method" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame with duplicated ":method" pseudo-header field</excludeSpec> - <excludeSpec>8.1.2.3 - Sends a HEADERS frame with duplicated ":scheme" pseudo-header field</excludeSpec> <excludeSpec>8.1.2.6 - Sends a HEADERS frame with the 
"content-length" header field which does not equal the DATA frame payload length</excludeSpec> <excludeSpec>8.1.2.6 - Sends a HEADERS frame with the "content-length" header field which does not equal the sum of the multiple DATA frames payload length</excludeSpec> </excludeSpecs>
train
val
"2019-10-25T20:15:34"
"2016-08-29T13:22:29Z"
buchgr
val
netty/netty/8616_8620
netty/netty
netty/netty/8616
netty/netty/8620
[ "timestamp(timedelta=0.0, similarity=0.8848753518566707)" ]
268035742317ced8e8d43b9998244f4f0e53316a
8331248671b9c0ea07cf8dbdfa5d8d2f89fdf459
[ "@atcurtis can you please try netty 4.1.32.Final and let me know if the same problem happens there as well? I think I remember that we fixed something related to this at some point. \r\n\r\nPlease let me know if after the upgrade the problem still persists.", "I have tried with 4.1.32.Final and it fails exactly the same except with slightly different line numbers:\r\n```\r\n\tat io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:115)\r\n\tat io.netty.channel.ChannelInitializer.channelRegistered(ChannelInitializer.java:76)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRegistered(AbstractChannelHandlerContext.java:149)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.access$000(AbstractChannelHandlerContext.java:38)\r\n\tat io.netty.channel.AbstractChannelHandlerContext$1.run(AbstractChannelHandlerContext.java:140)\r\n```", "I will have a look\n\n> Am 03.12.2018 um 21:08 schrieb Antony T Curtis <notifications@github.com>:\n> \n> I have tried with 4.1.32.Final and it fails exactly the same except with slightly different line numbers:\n> \n> \tat io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:115)\n> \tat io.netty.channel.ChannelInitializer.channelRegistered(ChannelInitializer.java:76)\n> \tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRegistered(AbstractChannelHandlerContext.java:149)\n> \tat io.netty.channel.AbstractChannelHandlerContext.access$000(AbstractChannelHandlerContext.java:38)\n> \tat io.netty.channel.AbstractChannelHandlerContext$1.run(AbstractChannelHandlerContext.java:140)\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@atcurtis check https://github.com/netty/netty/pull/8620" ]
[ "What's the point of this line? Wouldn't we only get here if `handlerRemoved()` was called? Is the concern that someone could override `handlerRemoved()`?", "This will retain `ctx` until the channel is closed, even if `handlerRemoved()` is called. If this is a one-off initializer (not reused) then that could greatly increase the lifetime of some objects used during handshakes and the like.\r\n\r\nIs this seriously the only way we can clean up properly? Is the problem that the methods can be overridden? If so, then maybe we should fix that in Netty 5? Or maybe we could avoid the check-for-double-register problem by having an alternative approach to 26aa34853a8974d212e12b98e708790606bea5fa in Netty 5?\r\n\r\nIt is possible to cancel the future to clean up, but the amount of effort necessary to do something simple makes me question if this is more of systemic problem that should be addressed in Netty 5.", "yes :/", "@ejona86 yeah I think its the best we can do for netty 4 to ensure we not leak if the user overrides `handlerRemoved(...)`.Thats also why I check `isRemoved()` first. ", "Perhaps we could check handlerRemoved() was overridden at instantiation time using reflection and take the fast path?", "@trustin we could ... that said I think it does not really worth it as `ctx.isRemoved()` should be true if the `EventExecutor` is not some funky custom implemention that is is used with `ChannelInitializer`. So I would prefer to keep things simple for now.", "Agreed." ]
"2018-12-04T14:38:50Z"
[ "defect" ]
ChannelInitializer.initChannel() executed more than once when used with context executor
### Expected behavior Expected ChannelInitializer.initChannel() to be executed one. ### Actual behavior A subsequent execution causes exception to be thrown because it can't find itself or remove itself again. ``` 2018-12-01 01:22:02,527 [WARN] [pool-2-thread-1] ChannelInitializer ? Failed to initialize a channel. Closing: [id: 0xce54469f, L:local:test - R:local:E:d49bc512] java.lang.AssertionError: expected object to not be null at org.testng.Assert.fail(Assert.java:93) ~[testng-6.11.jar:?] at org.testng.Assert.assertNotNull(Assert.java:422) ~[testng-6.11.jar:?] at org.testng.Assert.assertNotNull(Assert.java:407) ~[testng-6.11.jar:?] at test.TestChannelInitializer$2.initChannel(TestChannelInitializer.java:93) ~[espresso-router-impl/:?] at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113) [netty-all-4.1.22.Final.jar:4.1.22.Final] at io.netty.channel.ChannelInitializer.channelRegistered(ChannelInitializer.java:76) [netty-all-4.1.22.Final.jar:4.1.22.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRegistered(AbstractChannelHandlerContext.java:149) [netty-all-4.1.22.Final.jar:4.1.22.Final] at io.netty.channel.AbstractChannelHandlerContext.access$000(AbstractChannelHandlerContext.java:38) [netty-all-4.1.22.Final.jar:4.1.22.Final] at io.netty.channel.AbstractChannelHandlerContext$1.run(AbstractChannelHandlerContext.java:140) [netty-all-4.1.22.Final.jar:4.1.22.Final] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_121] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_121] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_121] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121] 2018-12-01 01:22:02,533 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] READ: Hello World 2018-12-01 01:22:02,533 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] READ COMPLETE 2018-12-01 01:22:02,533 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] ACTIVE 2018-12-01 01:22:02,533 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] REGISTERED 2018-12-01 01:22:02,534 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] INACTIVE 2018-12-01 01:22:02,534 [WARN] [pool-2-thread-1] LoggingHandler - [id: 0xce54469f, L:local:test ! R:local:E:d49bc512] UNREGISTERED java.lang.AssertionError: expected [1] but found [2] Expected :1 Actual :2 at org.testng.Assert.fail(Assert.java:93) at org.testng.Assert.failNotEquals(Assert.java:512) at org.testng.Assert.assertEqualsImpl(Assert.java:134) at org.testng.Assert.assertEquals(Assert.java:115) at org.testng.Assert.assertEquals(Assert.java:388) at org.testng.Assert.assertEquals(Assert.java:398) at test.TestChannelInitializer.test(TestChannelInitializer.java:137) ``` ### Steps to reproduce See repro code below. 
### Minimal yet complete reproducer code (or URL to code) ```java package test; import io.netty.bootstrap.Bootstrap; import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.Channel; import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInitializer; import io.netty.channel.DefaultEventLoopGroup; import io.netty.channel.local.LocalAddress; import io.netty.channel.local.LocalChannel; import io.netty.channel.local.LocalServerChannel; import io.netty.handler.logging.LogLevel; import io.netty.handler.logging.LoggingHandler; import io.netty.util.concurrent.AbstractEventExecutor; import io.netty.util.concurrent.EventExecutor; import io.netty.util.concurrent.Future; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.testng.Assert; import org.testng.annotations.Test; public class TestChannelInitializer { @Test public void test() throws InterruptedException { AtomicInteger invokeCount = new AtomicInteger(); AtomicInteger completeCount = new AtomicInteger(); LocalAddress addr = new LocalAddress("test"); ChannelHandler logger = new LoggingHandler(LogLevel.WARN); ScheduledExecutorService execService = Executors.newSingleThreadScheduledExecutor(); EventExecutor exec = new AbstractEventExecutor() { @Override public void shutdown() { throw new IllegalStateException(); } @Override public boolean inEventLoop(Thread thread) { return false; } @Override public boolean isShuttingDown() { return false; } @Override public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit unit) { throw new IllegalStateException(); } @Override public Future<?> terminationFuture() { throw new IllegalStateException(); } @Override public boolean isShutdown() { return execService.isShutdown(); } @Override public boolean isTerminated() { return execService.isTerminated(); } 
@Override public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException { return execService.awaitTermination(timeout, unit); } @Override public void execute(Runnable command) { execService.schedule(command, 1, TimeUnit.NANOSECONDS); } }; ChannelInitializer<Channel> otherInitializer = new ChannelInitializer<Channel>() { @Override protected void initChannel(Channel ch) throws Exception { invokeCount.incrementAndGet(); ChannelHandlerContext ctx = ch.pipeline().context(this); Assert.assertNotNull(ctx); // FAILS HERE ch.pipeline().addAfter(ctx.executor(), ctx.name(), null, logger); completeCount.decrementAndGet(); } }; DefaultEventLoopGroup group = new DefaultEventLoopGroup(1); ServerBootstrap serverBootstrap = new ServerBootstrap() .channel(LocalServerChannel.class) .group(group) .localAddress(addr) .childHandler(new ChannelInitializer<LocalChannel>() { @Override protected void initChannel(LocalChannel ch) throws Exception { ch.pipeline().addLast(exec, otherInitializer); } }); Channel server = serverBootstrap.bind().sync().channel(); Bootstrap clientBootstrap = new Bootstrap() .channel(LocalChannel.class) .group(group) .remoteAddress(addr) .handler(new ChannelInitializer<LocalChannel>() { @Override protected void initChannel(LocalChannel ch) throws Exception { } }); Channel client = clientBootstrap.connect().sync().channel(); client.writeAndFlush("Hello World").sync(); Thread.sleep(1000); client.close().sync(); server.close().sync(); execService.shutdown(); execService.awaitTermination(10, TimeUnit.SECONDS); Assert.assertEquals(invokeCount.get(), 1); // This seems to be 2 instead of 1 Assert.assertEquals(invokeCount.get(), completeCount.get()); } } ``` ### Netty version 4.1.22.Final ### JVM version (e.g. `java -version`) ``` java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) ``` ### OS version (e.g. 
`uname -a`) RHEL7 3.10.0-514.55.4.el7.x86_64 #1 SMP Fri Aug 10 17:03:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[ "transport/src/main/java/io/netty/channel/ChannelInitializer.java" ]
[ "transport/src/main/java/io/netty/channel/ChannelInitializer.java" ]
[ "transport/src/test/java/io/netty/channel/ChannelInitializerTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/ChannelInitializer.java b/transport/src/main/java/io/netty/channel/ChannelInitializer.java index 9ea1b182219..18344d200fa 100644 --- a/transport/src/main/java/io/netty/channel/ChannelInitializer.java +++ b/transport/src/main/java/io/netty/channel/ChannelInitializer.java @@ -18,11 +18,12 @@ import io.netty.bootstrap.Bootstrap; import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.ChannelHandler.Sharable; -import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; -import java.util.concurrent.ConcurrentMap; +import java.util.Collections; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; /** * A special {@link ChannelInboundHandler} which offers an easy way to initialize a {@link Channel} once it was @@ -53,9 +54,10 @@ public abstract class ChannelInitializer<C extends Channel> extends ChannelInboundHandlerAdapter { private static final InternalLogger logger = InternalLoggerFactory.getInstance(ChannelInitializer.class); - // We use a ConcurrentMap as a ChannelInitializer is usually shared between all Channels in a Bootstrap / + // We use a Set as a ChannelInitializer is usually shared between all Channels in a Bootstrap / // ServerBootstrap. This way we can reduce the memory usage compared to use Attributes. - private final ConcurrentMap<ChannelHandlerContext, Boolean> initMap = PlatformDependent.newConcurrentHashMap(); + private final Set<ChannelHandlerContext> initMap = Collections.newSetFromMap( + new ConcurrentHashMap<ChannelHandlerContext, Boolean>()); /** * This method will be called once the {@link Channel} was registered. 
After the method returns this instance @@ -108,9 +110,14 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception { } } + @Override + public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { + initMap.remove(ctx); + } + @SuppressWarnings("unchecked") private boolean initChannel(ChannelHandlerContext ctx) throws Exception { - if (initMap.putIfAbsent(ctx, Boolean.TRUE) == null) { // Guard against re-entrance. + if (initMap.add(ctx)) { // Guard against re-entrance. try { initChannel((C) ctx.channel()); } catch (Throwable cause) { @@ -125,14 +132,25 @@ private boolean initChannel(ChannelHandlerContext ctx) throws Exception { return false; } - private void remove(ChannelHandlerContext ctx) { + private void remove(final ChannelHandlerContext ctx) { try { ChannelPipeline pipeline = ctx.pipeline(); if (pipeline.context(this) != null) { pipeline.remove(this); } } finally { - initMap.remove(ctx); + // The removal may happen in an async fashion if the EventExecutor we use does something funky. + if (ctx.isRemoved()) { + initMap.remove(ctx); + } else { + // Ensure we always remove from the Map in all cases to not produce a memory leak. + ctx.channel().closeFuture().addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + initMap.remove(ctx); + } + }); + } } } }
diff --git a/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java b/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java index 26b5e4e9fcf..2ac1bcdefa9 100644 --- a/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java +++ b/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java @@ -21,12 +21,16 @@ import io.netty.channel.local.LocalAddress; import io.netty.channel.local.LocalChannel; import io.netty.channel.local.LocalServerChannel; +import io.netty.util.concurrent.EventExecutor; +import io.netty.util.concurrent.Future; import org.junit.After; import org.junit.Before; import org.junit.Test; import java.util.Iterator; import java.util.Map; +import java.util.concurrent.Executors; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; @@ -35,6 +39,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertSame; public class ChannelInitializerTest { @@ -249,6 +254,127 @@ private void testChannelRegisteredEventPropagation(ChannelInitializer<LocalChann } } + @SuppressWarnings("deprecation") + @Test(timeout = 10000) + public void testChannelInitializerEventExecutor() throws Throwable { + final AtomicInteger invokeCount = new AtomicInteger(); + final AtomicInteger completeCount = new AtomicInteger(); + final AtomicReference<Throwable> errorRef = new AtomicReference<Throwable>(); + LocalAddress addr = new LocalAddress("test"); + + final EventExecutor executor = new DefaultEventLoop() { + private final ScheduledExecutorService execService = Executors.newSingleThreadScheduledExecutor(); + + @Override + public void shutdown() { + execService.shutdown(); + } + + @Override + public boolean inEventLoop(Thread thread) { + // 
Always return false which will ensure we always call execute(...) + return false; + } + + @Override + public boolean isShuttingDown() { + return false; + } + + @Override + public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit unit) { + throw new IllegalStateException(); + } + + @Override + public Future<?> terminationFuture() { + throw new IllegalStateException(); + } + + @Override + public boolean isShutdown() { + return execService.isShutdown(); + } + + @Override + public boolean isTerminated() { + return execService.isTerminated(); + } + + @Override + public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException { + return execService.awaitTermination(timeout, unit); + } + + @Override + public void execute(Runnable command) { + execService.execute(command); + } + }; + + ServerBootstrap serverBootstrap = new ServerBootstrap() + .channel(LocalServerChannel.class) + .group(group) + .localAddress(addr) + .childHandler(new ChannelInitializer<LocalChannel>() { + @Override + protected void initChannel(LocalChannel ch) { + ch.pipeline().addLast(executor, new ChannelInitializer<Channel>() { + @Override + protected void initChannel(Channel ch) { + invokeCount.incrementAndGet(); + ChannelHandlerContext ctx = ch.pipeline().context(this); + assertNotNull(ctx); + ch.pipeline().addAfter(ctx.executor(), + ctx.name(), null, new ChannelInboundHandlerAdapter() { + @Override + public void channelRead(ChannelHandlerContext ctx, Object msg) { + // just drop on the floor. 
+ } + }); + completeCount.incrementAndGet(); + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { + errorRef.set(cause); + } + }); + } + }); + + Channel server = serverBootstrap.bind().sync().channel(); + + Bootstrap clientBootstrap = new Bootstrap() + .channel(LocalChannel.class) + .group(group) + .remoteAddress(addr) + .handler(new ChannelInboundHandlerAdapter()); + + Channel client = clientBootstrap.connect().sync().channel(); + client.writeAndFlush("Hello World").sync(); + + client.close().sync(); + server.close().sync(); + + client.closeFuture().sync(); + server.closeFuture().sync(); + + // Give some time to execute everything that was submitted before. + Thread.sleep(1000); + + executor.shutdown(); + assertTrue(executor.awaitTermination(5, TimeUnit.SECONDS)); + + assertEquals(invokeCount.get(), 1); + assertEquals(invokeCount.get(), completeCount.get()); + + Throwable cause = errorRef.get(); + if (cause != null) { + throw cause; + } + } + private static void closeChannel(Channel c) { if (c != null) { c.close().syncUninterruptibly();
val
val
"2018-12-04T15:26:05"
"2018-12-01T09:28:21Z"
atcurtis
val
netty/netty/8654_8656
netty/netty
netty/netty/8654
netty/netty/8656
[ "timestamp(timedelta=0.0, similarity=0.9126785535287549)" ]
db6d94f82a0f860131392aa483f4d1e21dc74249
a3844da10bb3ae788017b4c0699373284d60b058
[ "@mrniko thanks for reporting... should be fixed by https://github.com/netty/netty/pull/8656" ]
[ "Optional: logging fine the the NamingExceptions was useful when developing gRPC's DNS support." ]
"2018-12-13T19:19:19Z"
[ "defect", "android" ]
NoClassDefFoundError on Android platform
### Expected behavior No exception ### Actual behavior Exception arise during invocation DnsServerAddressStreamProviders.platformDefault method in Android application: ```java java.lang.NoClassDefFoundError: Failed resolution of: Ljavax/naming/directory/InitialDirContext; at io.netty.resolver.dns.DefaultDnsServerAddressStreamProvider.<clinit>(DefaultDnsServerAddressStreamProvider.java:68) at io.netty.resolver.dns.UnixResolverDnsServerAddressStreamProvider.parseSilently(UnixResolverDnsServerAddressStreamProvider.java:76) at io.netty.resolver.dns.DnsServerAddressStreamProviders.<clinit>(DnsServerAddressStreamProviders.java:32) at io.netty.resolver.dns.DnsServerAddressStreamProviders.platformDefault(DnsServerAddressStreamProviders.java:45) ``` ### Steps to reproduce execute `DnsServerAddressStreamProviders.platformDefault` method ### Netty version 4.1.31.Final
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DirContextUtils.java" ]
[]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java index a5ca38d368b..00be0722952 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DefaultDnsServerAddressStreamProvider.java @@ -16,23 +16,17 @@ package io.netty.resolver.dns; import io.netty.util.NetUtil; +import io.netty.util.internal.PlatformDependent; import io.netty.util.internal.SocketUtils; import io.netty.util.internal.UnstableApi; import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; -import javax.naming.Context; -import javax.naming.NamingException; -import javax.naming.directory.DirContext; -import javax.naming.directory.InitialDirContext; import java.lang.reflect.Method; import java.net.Inet6Address; import java.net.InetSocketAddress; -import java.net.URI; -import java.net.URISyntaxException; import java.util.ArrayList; import java.util.Collections; -import java.util.Hashtable; import java.util.List; import static io.netty.resolver.dns.DnsServerAddresses.sequential; @@ -55,41 +49,10 @@ public final class DefaultDnsServerAddressStreamProvider implements DnsServerAdd static { final List<InetSocketAddress> defaultNameServers = new ArrayList<InetSocketAddress>(2); - - // Using jndi-dns to obtain the default name servers. 
- // - // See: - // - http://docs.oracle.com/javase/8/docs/technotes/guides/jndi/jndi-dns.html - // - http://mail.openjdk.java.net/pipermail/net-dev/2017-March/010695.html - Hashtable<String, String> env = new Hashtable<String, String>(); - env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory"); - env.put("java.naming.provider.url", "dns://"); - try { - DirContext ctx = new InitialDirContext(env); - String dnsUrls = (String) ctx.getEnvironment().get("java.naming.provider.url"); - // Only try if not empty as otherwise we will produce an exception - if (dnsUrls != null && !dnsUrls.isEmpty()) { - String[] servers = dnsUrls.split(" "); - for (String server : servers) { - try { - URI uri = new URI(server); - String host = new URI(server).getHost(); - - if (host == null || host.isEmpty()) { - logger.debug( - "Skipping a nameserver URI as host portion could not be extracted: {}", server); - // If the host portion can not be parsed we should just skip this entry. - continue; - } - int port = uri.getPort(); - defaultNameServers.add(SocketUtils.socketAddress(uri.getHost(), port == -1 ? DNS_PORT : port)); - } catch (URISyntaxException e) { - logger.debug("Skipping a malformed nameserver URI: {}", server, e); - } - } - } - } catch (NamingException ignore) { - // Will try reflection if this fails. 
+ if (!PlatformDependent.isAndroid()) { + // Only try to use when not on Android as the classes not exists there: + // See https://github.com/netty/netty/issues/8654 + DirContextUtils.addNameServers(defaultNameServers, DNS_PORT); } if (defaultNameServers.isEmpty()) { diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DirContextUtils.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DirContextUtils.java new file mode 100644 index 00000000000..45c1ac433f2 --- /dev/null +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DirContextUtils.java @@ -0,0 +1,77 @@ +/* + * Copyright 2018 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.resolver.dns; + +import io.netty.util.internal.SocketUtils; +import io.netty.util.internal.logging.InternalLogger; +import io.netty.util.internal.logging.InternalLoggerFactory; + +import javax.naming.Context; +import javax.naming.NamingException; +import javax.naming.directory.DirContext; +import javax.naming.directory.InitialDirContext; +import java.net.InetSocketAddress; +import java.net.URI; +import java.net.URISyntaxException; +import java.util.Hashtable; +import java.util.List; + +final class DirContextUtils { + private static final InternalLogger logger = + InternalLoggerFactory.getInstance(DirContextUtils.class); + + private DirContextUtils() { } + + static void addNameServers(List<InetSocketAddress> defaultNameServers, int defaultPort) { + // Using jndi-dns to obtain the default name servers. + // + // See: + // - http://docs.oracle.com/javase/8/docs/technotes/guides/jndi/jndi-dns.html + // - http://mail.openjdk.java.net/pipermail/net-dev/2017-March/010695.html + Hashtable<String, String> env = new Hashtable<String, String>(); + env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory"); + env.put("java.naming.provider.url", "dns://"); + + try { + DirContext ctx = new InitialDirContext(env); + String dnsUrls = (String) ctx.getEnvironment().get("java.naming.provider.url"); + // Only try if not empty as otherwise we will produce an exception + if (dnsUrls != null && !dnsUrls.isEmpty()) { + String[] servers = dnsUrls.split(" "); + for (String server : servers) { + try { + URI uri = new URI(server); + String host = new URI(server).getHost(); + + if (host == null || host.isEmpty()) { + logger.debug( + "Skipping a nameserver URI as host portion could not be extracted: {}", server); + // If the host portion can not be parsed we should just skip this entry. + continue; + } + int port = uri.getPort(); + defaultNameServers.add(SocketUtils.socketAddress(uri.getHost(), port == -1 ? 
+ defaultPort : port)); + } catch (URISyntaxException e) { + logger.debug("Skipping a malformed nameserver URI: {}", server, e); + } + } + } + } catch (NamingException ignore) { + // Will try reflection if this fails. + } + } +}
null
test
val
"2018-12-14T14:08:03"
"2018-12-13T10:12:41Z"
mrniko
val
netty/netty/8575_8660
netty/netty
netty/netty/8575
netty/netty/8660
[ "timestamp(timedelta=20.0, similarity=0.9711539373903925)" ]
873988676a2b1bb9cc6e5c1a80e5b27725b1d75c
52891baae169a47997928b4b228eb0e0a74f93c1
[ "@ssserj yes this looks like a bug... are you interested in providing a fix ?", "@normanmaurer I still don’t known how to fix it. I need investigate problem, find proper fix and prepare PR. I’ll try..", "@ssserj @normanmaurer Hi, could I ask for the state of this, as we sometimes also encounter this in our production server?", "\r\n@DATuan91 Any request of this type (see test above) produces an error. I'm looking for a fix. I note that in 4.1.16 there was no such error. In the free time I will try to compare these two branches and find the bug.", "Thank you, @ssserj. Because I'm not familiar with netty codebase so I couldn't find a proper fix myself.", "I have also encountered this problem when using Netty 4.1.31.Final. However, it does not occurs on some old versions like 4.1.13.Final.\r\nI compare the two versions and try to find out the difference. The issue may be enrolled by https://github.com/netty/netty/commit/2b4f6677917cbe4121c2b748ba7002f8ba4bb8b5.\r\nI've made some comments on this commit.\r\n\r\n", "FWIW this breaks our (unfiltered) tests and I’m not sure we can do anything on our end to fix it." ]
[ "can we also add a `assertEquals(0, req.refCnt());` after the `destroy()` call ?", "use `CharsetUtil.UTF_8`", "nit: `FullHttpRequest req = ...`", "Most probably, should be assertEquals(1, req.refCnt()) ?", "nit: please revert... we usually not do wildcard imports for this kind of things " ]
"2018-12-14T15:22:16Z"
[]
Unexpected IllegalReferenceCountException on decode multipart request
### Actual behavior ``` io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1 at io.netty.util.AbstractReferenceCounted.release0(AbstractReferenceCounted.java:87) at io.netty.util.AbstractReferenceCounted.release(AbstractReferenceCounted.java:71) at io.netty.handler.codec.http.multipart.MixedAttribute.release(MixedAttribute.java:320) at io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.destroy(HttpPostMultipartRequestDecoder.java:947) at io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.destroy(HttpPostRequestDecoder.java:247) ``` ### Steps to reproduce ```java import io.netty.buffer.ByteBuf; import io.netty.buffer.Unpooled; import io.netty.handler.codec.http.DefaultFullHttpRequest; import io.netty.handler.codec.http.HttpMethod; import io.netty.handler.codec.http.HttpVersion; import io.netty.handler.codec.http.multipart.Attribute; import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory; import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder; import io.netty.handler.codec.http.multipart.InterfaceHttpData; import org.junit.Test; import java.io.IOException; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; public class HttpMultipartRequestTest { String getPostParam(HttpPostRequestDecoder requestDecoder, String name) { final InterfaceHttpData data = requestDecoder.getBodyHttpData(name); if (data != null && InterfaceHttpData.HttpDataType.Attribute.equals(data.getHttpDataType())) { final Attribute attr = (Attribute)data; try { return attr.getValue(); } catch (IOException e) { return null; } } return null; } @Test public void testMultipartRequest() { ByteBuf byteBuf = Unpooled.wrappedBuffer(("------------------------------01f136d9282f\n" + "Content-Disposition: form-data; name=\"msg_id\"\n" + "\n" + "15200\n" + "------------------------------01f136d9282f\n" + "Content-Disposition: form-data; name=\"msg\"\n" + "\n" + "test message\n" + 
"------------------------------01f136d9282f--").getBytes()); DefaultFullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.POST, "/push_data", byteBuf); req.headers().add("Content-Type", "multipart/form-data; boundary=----------------------------01f136d9282f"); HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(new DefaultHttpDataFactory(DefaultHttpDataFactory.MINSIZE), req, java.nio.charset.Charset.forName("utf-8")); assertEquals("test message", getPostParam(decoder, "msg")); assertEquals("15200", getPostParam(decoder, "msg_id")); assertTrue(decoder.isMultipart()); decoder.destroy(); } } ``` Decoder will produce IllegalReferenceCountException. ### Netty version 4.1.31.Final ### JVM version (e.g. `java -version`) java version "1.8.0_161" ### OS version (e.g. `uname -a`) Linux
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java index a065b879e11..c3404d9780d 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java @@ -931,19 +931,15 @@ protected InterfaceHttpData getFileUpload(String delimiter) { */ @Override public void destroy() { - checkDestroyed(); + // Release all data items, including those not yet pulled cleanFiles(); + destroyed = true; if (undecodedChunk != null && undecodedChunk.refCnt() > 0) { undecodedChunk.release(); undecodedChunk = null; } - - // release all data which was not yet pulled - for (int i = bodyListHttpDataRank; i < bodyListHttpData.size(); i++) { - bodyListHttpData.get(i).release(); - } } /**
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java index d29f6b3cd44..fc258cc8316 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java @@ -20,16 +20,7 @@ import io.netty.buffer.Unpooled; import io.netty.buffer.UnpooledByteBufAllocator; import io.netty.handler.codec.DecoderResult; -import io.netty.handler.codec.http.DefaultFullHttpRequest; -import io.netty.handler.codec.http.DefaultHttpContent; -import io.netty.handler.codec.http.DefaultHttpRequest; -import io.netty.handler.codec.http.DefaultLastHttpContent; -import io.netty.handler.codec.http.HttpContent; -import io.netty.handler.codec.http.HttpHeaderNames; -import io.netty.handler.codec.http.HttpHeaderValues; -import io.netty.handler.codec.http.HttpMethod; -import io.netty.handler.codec.http.HttpVersion; -import io.netty.handler.codec.http.LastHttpContent; +import io.netty.handler.codec.http.*; import io.netty.util.CharsetUtil; import org.junit.Test; @@ -689,4 +680,37 @@ public void testDecodeMalformedEmptyContentTypeFieldParameters() throws Exceptio assertEquals("tmp-0.txt", fileUpload.getFilename()); decoder.destroy(); } + + // https://github.com/netty/netty/issues/8575 + @Test + public void testMultipartRequest() throws Exception { + String BOUNDARY = "01f136d9282f"; + + ByteBuf byteBuf = Unpooled.wrappedBuffer(("--" + BOUNDARY + "\n" + + "Content-Disposition: form-data; name=\"msg_id\"\n" + + "\n" + + "15200\n" + + "--" + BOUNDARY + "\n" + + "Content-Disposition: form-data; name=\"msg\"\n" + + "\n" + + "test message\n" + + "--" + BOUNDARY + "--").getBytes()); + + FullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.POST, "/up", byteBuf); + 
req.headers().add(HttpHeaderNames.CONTENT_TYPE, "multipart/form-data; boundary=" + BOUNDARY); + + HttpPostRequestDecoder decoder = + new HttpPostRequestDecoder(new DefaultHttpDataFactory(DefaultHttpDataFactory.MINSIZE), + req, + CharsetUtil.UTF_8); + + assertTrue(decoder.isMultipart()); + assertFalse(decoder.getBodyHttpDatas().isEmpty()); + assertEquals(2, decoder.getBodyHttpDatas().size()); + assertEquals("test message", ((Attribute) decoder.getBodyHttpData("msg")).getValue()); + assertEquals("15200", ((Attribute) decoder.getBodyHttpData("msg_id")).getValue()); + + decoder.destroy(); + assertEquals(1, req.refCnt()); + } }
val
val
"2019-08-19T08:24:42"
"2018-11-20T09:59:30Z"
ssserj
val
netty/netty/8430_8671
netty/netty
netty/netty/8430
netty/netty/8671
[ "timestamp(timedelta=33.0, similarity=0.8961730063417848)" ]
26e14118976e37ecc3a6c0c52237fa4f68c54591
44b919b698889254959c7d94a327e495c5f8ad31
[ "For the reference, this is us unsetting the limit in Finagle: https://github.com/twitter/finagle/commit/990c8650366e5374ea062c753a4628c5971fc40e", "@bryce-anderson @vkostyukov agree... @Scottmitch also told me at some point he wanted to remove it. ", "@bryce-anderson want to do a PR against the master branch ?", "@normanmaurer, I'll take this one, probably early next week.", "@bryce-anderson thanks a lot!" ]
[ "Aren't `chunkSize` already an `int`?", "No, it's a `long`. The name is kind of misleading, it might be better as `remainingContent` or something, though that then becomes misleading for `transfer-encoding: chunked` content..." ]
"2018-12-18T17:23:07Z"
[ "cleanup" ]
[Netty 5]: Remove the HttpObjectDecoder `maxChunkSize` parameter in Netty 5.x
This is a cleanup request for Netty 5.x. The family of HTTP codecs passes a parameter called `maxChunkSize` to `HttpObjectDecoder` which limits the size of chunks that the decoder will emit. Note that this doesn't cause the HTTP dispatch to switch from a `FullHttpMessage` to a regular message or to avoid buffering data; it only means that any already buffered content data may be split into smaller messages if its size exceeds the limit. For example, if the raw `buffer` has 16kb of message content but the `maxChunkSize` is 2kb, it will simply split the content into 8*2kb `HttpContent` instances instead of a single 16kb instance. The current usefulness of this parameter is questionable; it was likely more valuable in the Netty3 model when the decoder may have emitted fully buffered messages. ### Netty version 4.1.x
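The splitting arithmetic described in the problem statement can be sketched as follows. This is a hypothetical simulation of the old `maxChunkSize` behavior, not Netty's actual decoder loop: with 16 KiB of already-buffered content and a `maxChunkSize` of 2 KiB, eight 2 KiB chunks are emitted instead of one 16 KiB chunk.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitSketch {
    // Hypothetical helper mirroring the removed behavior: already-buffered
    // content is emitted in slices of at most maxChunkSize bytes.
    static List<Integer> splitContent(int readableBytes, int maxChunkSize) {
        List<Integer> chunkSizes = new ArrayList<>();
        while (readableBytes > 0) {
            int toRead = Math.min(readableBytes, maxChunkSize);
            chunkSizes.add(toRead);
            readableBytes -= toRead;
        }
        return chunkSizes;
    }

    public static void main(String[] args) {
        // 16 KiB of buffered content with maxChunkSize = 2 KiB.
        List<Integer> chunks = splitContent(16 * 1024, 2 * 1024);
        if (chunks.size() != 8 || chunks.get(0) != 2 * 1024) {
            throw new AssertionError("unexpected split: " + chunks);
        }
        System.out.println(chunks.size() + " chunks of " + chunks.get(0) + " bytes");
    }
}
```

After the cleanup in the patch below, `toRead` is simply `buffer.readableBytes()` (capped by the remaining `chunkSize` for fixed-length content), so the extra splitting disappears.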
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java", "codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java", "codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpClientCodecTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecoderTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java index dd1da34742b..8770202d0d1 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java @@ -61,40 +61,40 @@ public final class HttpClientCodec extends CombinedChannelDuplexHandler<HttpResp * {@code maxChunkSize (8192)}). */ public HttpClientCodec() { - this(4096, 8192, 8192, false); + this(4096, 8192, false); } /** * Creates a new instance with the specified decoder options. */ - public HttpClientCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - this(maxInitialLineLength, maxHeaderSize, maxChunkSize, false); + public HttpClientCodec(int maxInitialLineLength, int maxHeaderSize) { + this(maxInitialLineLength, maxHeaderSize, false); } /** * Creates a new instance with the specified decoder options. */ public HttpClientCodec( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean failOnMissingResponse) { - this(maxInitialLineLength, maxHeaderSize, maxChunkSize, failOnMissingResponse, true); + int maxInitialLineLength, int maxHeaderSize, boolean failOnMissingResponse) { + this(maxInitialLineLength, maxHeaderSize, failOnMissingResponse, true); } /** * Creates a new instance with the specified decoder options. */ public HttpClientCodec( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean failOnMissingResponse, + int maxInitialLineLength, int maxHeaderSize, boolean failOnMissingResponse, boolean validateHeaders) { - this(maxInitialLineLength, maxHeaderSize, maxChunkSize, failOnMissingResponse, validateHeaders, false); + this(maxInitialLineLength, maxHeaderSize, failOnMissingResponse, validateHeaders, false); } /** * Creates a new instance with the specified decoder options. 
*/ public HttpClientCodec( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean failOnMissingResponse, + int maxInitialLineLength, int maxHeaderSize, boolean failOnMissingResponse, boolean validateHeaders, boolean parseHttpAfterConnectRequest) { - init(new Decoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders), new Encoder()); + init(new Decoder(maxInitialLineLength, maxHeaderSize, validateHeaders), new Encoder()); this.failOnMissingResponse = failOnMissingResponse; this.parseHttpAfterConnectRequest = parseHttpAfterConnectRequest; } @@ -115,7 +115,7 @@ public HttpClientCodec( public HttpClientCodec( int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean failOnMissingResponse, boolean validateHeaders, int initialBufferSize, boolean parseHttpAfterConnectRequest) { - init(new Decoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize), + init(new Decoder(maxInitialLineLength, maxHeaderSize, validateHeaders, initialBufferSize), new Encoder()); this.parseHttpAfterConnectRequest = parseHttpAfterConnectRequest; this.failOnMissingResponse = failOnMissingResponse; @@ -177,13 +177,13 @@ protected void encode( } private final class Decoder extends HttpResponseDecoder { - Decoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders); + Decoder(int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders) { + super(maxInitialLineLength, maxHeaderSize, validateHeaders); } - Decoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders, + Decoder(int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders, int initialBufferSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize); + super(maxInitialLineLength, maxHeaderSize, validateHeaders, initialBufferSize); } @Override diff --git 
a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java index af1d642a039..d3546d82164 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java @@ -102,7 +102,6 @@ public abstract class HttpObjectDecoder extends ByteToMessageDecoder { private static final String EMPTY_VALUE = ""; - private final int maxChunkSize; private final boolean chunkedSupported; protected final boolean validateHeaders; private final HeaderParser headerParser; @@ -145,28 +144,28 @@ private enum State { * {@code maxChunkSize (8192)}. */ protected HttpObjectDecoder() { - this(4096, 8192, 8192, true); + this(4096, 8192, true); } /** * Creates a new instance with the specified parameters. */ protected HttpObjectDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean chunkedSupported) { - this(maxInitialLineLength, maxHeaderSize, maxChunkSize, chunkedSupported, true); + int maxInitialLineLength, int maxHeaderSize, boolean chunkedSupported) { + this(maxInitialLineLength, maxHeaderSize, chunkedSupported, true); } /** * Creates a new instance with the specified parameters. 
*/ protected HttpObjectDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + int maxInitialLineLength, int maxHeaderSize, boolean chunkedSupported, boolean validateHeaders) { - this(maxInitialLineLength, maxHeaderSize, maxChunkSize, chunkedSupported, validateHeaders, 128); + this(maxInitialLineLength, maxHeaderSize, chunkedSupported, validateHeaders, 128); } protected HttpObjectDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + int maxInitialLineLength, int maxHeaderSize, boolean chunkedSupported, boolean validateHeaders, int initialBufferSize) { if (maxInitialLineLength <= 0) { throw new IllegalArgumentException( @@ -178,15 +177,9 @@ protected HttpObjectDecoder( "maxHeaderSize must be a positive integer: " + maxHeaderSize); } - if (maxChunkSize <= 0) { - throw new IllegalArgumentException( - "maxChunkSize must be a positive integer: " + - maxChunkSize); - } AppendableCharSequence seq = new AppendableCharSequence(initialBufferSize); lineParser = new LineParser(seq, maxInitialLineLength); headerParser = new HeaderParser(seq, maxHeaderSize); - this.maxChunkSize = maxChunkSize; this.chunkedSupported = chunkedSupported; this.validateHeaders = validateHeaders; } @@ -278,7 +271,7 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> ou } case READ_VARIABLE_LENGTH_CONTENT: { // Keep reading data as a chunk until the end of connection is reached. - int toRead = Math.min(buffer.readableBytes(), maxChunkSize); + int toRead = buffer.readableBytes(); if (toRead > 0) { ByteBuf content = buffer.readRetainedSlice(toRead); out.add(new DefaultHttpContent(content)); @@ -286,7 +279,7 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> ou return; } case READ_FIXED_LENGTH_CONTENT: { - int readLimit = buffer.readableBytes(); + int toRead = buffer.readableBytes(); // Check if the buffer is readable first as we use the readable byte count // to create the HttpChunk. 
This is needed as otherwise we may end up with @@ -294,14 +287,14 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> ou // handled like it is the last HttpChunk. // // See https://github.com/netty/netty/issues/433 - if (readLimit == 0) { + if (toRead == 0) { return; } - int toRead = Math.min(readLimit, maxChunkSize); if (toRead > chunkSize) { toRead = (int) chunkSize; } + ByteBuf content = buffer.readRetainedSlice(toRead); chunkSize -= toRead; @@ -337,7 +330,7 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf buffer, List<Object> ou } case READ_CHUNKED_CONTENT: { assert chunkSize <= Integer.MAX_VALUE; - int toRead = Math.min((int) chunkSize, maxChunkSize); + int toRead = (int) chunkSize; toRead = Math.min(toRead, buffer.readableBytes()); if (toRead == 0) { return; diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java index 24252c73587..1be6cf52e36 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java @@ -66,19 +66,19 @@ public HttpRequestDecoder() { * Creates a new instance with the specified parameters. 
*/ public HttpRequestDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true); + int maxInitialLineLength, int maxHeaderSize) { + super(maxInitialLineLength, maxHeaderSize, true); } public HttpRequestDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders); + int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders) { + super(maxInitialLineLength, maxHeaderSize, true, validateHeaders); } public HttpRequestDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders, + int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders, int initialBufferSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders, initialBufferSize); + super(maxInitialLineLength, maxHeaderSize, true, validateHeaders, initialBufferSize); } @Override diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java index 21491971ef3..ede22f4eeb0 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java @@ -97,19 +97,19 @@ public HttpResponseDecoder() { * Creates a new instance with the specified parameters. 
*/ public HttpResponseDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true); + int maxInitialLineLength, int maxHeaderSize) { + super(maxInitialLineLength, maxHeaderSize, true); } public HttpResponseDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders); + int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders) { + super(maxInitialLineLength, maxHeaderSize, true, validateHeaders); } public HttpResponseDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders, + int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders, int initialBufferSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders, initialBufferSize); + super(maxInitialLineLength, maxHeaderSize, true, validateHeaders, initialBufferSize); } @Override diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java index 4e8d61361b2..f72484e4f21 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerCodec.java @@ -41,32 +41,32 @@ public final class HttpServerCodec extends CombinedChannelDuplexHandler<HttpRequ * {@code maxChunkSize (8192)}). */ public HttpServerCodec() { - this(4096, 8192, 8192); + this(4096, 8192); } /** * Creates a new instance with the specified decoder options. 
*/ - public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize), + public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize) { + init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize), new HttpServerResponseEncoder()); } /** * Creates a new instance with the specified decoder options. */ - public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders) { - init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders), + public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders) { + init(new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, validateHeaders), new HttpServerResponseEncoder()); } /** * Creates a new instance with the specified decoder options. */ - public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders, + public HttpServerCodec(int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders, int initialBufferSize) { init( - new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, maxChunkSize, + new HttpServerRequestDecoder(maxInitialLineLength, maxHeaderSize, validateHeaders, initialBufferSize), new HttpServerResponseEncoder()); } @@ -81,18 +81,18 @@ public void upgradeFrom(ChannelHandlerContext ctx) { } private final class HttpServerRequestDecoder extends HttpRequestDecoder { - public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize); + public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize) { + super(maxInitialLineLength, maxHeaderSize); } - public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + public HttpServerRequestDecoder(int maxInitialLineLength, 
int maxHeaderSize, boolean validateHeaders) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders); + super(maxInitialLineLength, maxHeaderSize, validateHeaders); } - public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, + public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders, int initialBufferSize) { - super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize); + super(maxInitialLineLength, maxHeaderSize, validateHeaders, initialBufferSize); } @Override diff --git a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java index acc028978f0..cbab65e7f68 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspDecoder.java @@ -106,7 +106,7 @@ public RtspDecoder() { public RtspDecoder(final int maxInitialLineLength, final int maxHeaderSize, final int maxContentLength) { - super(maxInitialLineLength, maxHeaderSize, maxContentLength * 2, false); + super(maxInitialLineLength, maxHeaderSize, false); } /** @@ -122,7 +122,6 @@ public RtspDecoder(final int maxInitialLineLength, final boolean validateHeaders) { super(maxInitialLineLength, maxHeaderSize, - maxContentLength * 2, false, validateHeaders); } diff --git a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java index e52c0ce51e5..e19472193b3 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspObjectDecoder.java @@ -55,23 +55,22 @@ public abstract class RtspObjectDecoder extends HttpObjectDecoder { /** * Creates a new instance with the default - * {@code maxInitialLineLength (4096)}, {@code maxHeaderSize 
(8192)}, and - * {@code maxContentLength (8192)}. + * {@code maxInitialLineLength (4096)}, {@code maxHeaderSize (8192)}. */ protected RtspObjectDecoder() { - this(4096, 8192, 8192); + this(4096, 8192); } /** * Creates a new instance with the specified parameters. */ - protected RtspObjectDecoder(int maxInitialLineLength, int maxHeaderSize, int maxContentLength) { - super(maxInitialLineLength, maxHeaderSize, maxContentLength * 2, false); + protected RtspObjectDecoder(int maxInitialLineLength, int maxHeaderSize) { + super(maxInitialLineLength, maxHeaderSize, false); } protected RtspObjectDecoder( - int maxInitialLineLength, int maxHeaderSize, int maxContentLength, boolean validateHeaders) { - super(maxInitialLineLength, maxHeaderSize, maxContentLength * 2, false, validateHeaders); + int maxInitialLineLength, int maxHeaderSize, boolean validateHeaders) { + super(maxInitialLineLength, maxHeaderSize, false, validateHeaders); } @Override
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientCodecTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientCodecTest.java index 16a6eff29de..416e4d6749e 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientCodecTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpClientCodecTest.java @@ -59,7 +59,7 @@ public class HttpClientCodecTest { @Test public void testConnectWithResponseContent() { - HttpClientCodec codec = new HttpClientCodec(4096, 8192, 8192, true); + HttpClientCodec codec = new HttpClientCodec(4096, 8192, true); EmbeddedChannel ch = new EmbeddedChannel(codec); sendRequestAndReadResponse(ch, HttpMethod.CONNECT, RESPONSE); @@ -68,7 +68,7 @@ public void testConnectWithResponseContent() { @Test public void testFailsNotOnRequestResponseChunked() { - HttpClientCodec codec = new HttpClientCodec(4096, 8192, 8192, true); + HttpClientCodec codec = new HttpClientCodec(4096, 8192, true); EmbeddedChannel ch = new EmbeddedChannel(codec); sendRequestAndReadResponse(ch, HttpMethod.GET, CHUNKED_RESPONSE); @@ -77,7 +77,7 @@ public void testFailsNotOnRequestResponseChunked() { @Test public void testFailsOnMissingResponse() { - HttpClientCodec codec = new HttpClientCodec(4096, 8192, 8192, true); + HttpClientCodec codec = new HttpClientCodec(4096, 8192, true); EmbeddedChannel ch = new EmbeddedChannel(codec); assertTrue(ch.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, @@ -95,7 +95,7 @@ public void testFailsOnMissingResponse() { @Test public void testFailsOnIncompleteChunkedResponse() { - HttpClientCodec codec = new HttpClientCodec(4096, 8192, 8192, true); + HttpClientCodec codec = new HttpClientCodec(4096, 8192, true); EmbeddedChannel ch = new EmbeddedChannel(codec); ch.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "http://localhost/")); @@ -130,7 +130,7 @@ public void testServerCloseSocketInputProvidesData() throws 
InterruptedException @Override protected void initChannel(Channel ch) throws Exception { // Don't use the HttpServerCodec, because we don't want to have content-length or anything added. - ch.pipeline().addLast(new HttpRequestDecoder(4096, 8192, 8192, true)); + ch.pipeline().addLast(new HttpRequestDecoder(4096, 8192, true)); ch.pipeline().addLast(new HttpObjectAggregator(4096)); ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() { @Override @@ -174,7 +174,7 @@ public void operationComplete(ChannelFuture future) throws Exception { cb.handler(new ChannelInitializer<Channel>() { @Override protected void initChannel(Channel ch) throws Exception { - ch.pipeline().addLast(new HttpClientCodec(4096, 8192, 8192, true, true)); + ch.pipeline().addLast(new HttpClientCodec(4096, 8192, true, true)); ch.pipeline().addLast(new HttpObjectAggregator(4096)); ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpResponse>() { @Override @@ -212,7 +212,7 @@ public void testPassThroughAfterConnect() throws Exception { } private static void testAfterConnect(final boolean parseAfterConnect) throws Exception { - EmbeddedChannel ch = new EmbeddedChannel(new HttpClientCodec(4096, 8192, 8192, true, true, parseAfterConnect)); + EmbeddedChannel ch = new EmbeddedChannel(new HttpClientCodec(4096, 8192, true, true, parseAfterConnect)); Consumer connectResponseConsumer = new Consumer(); sendRequestAndReadResponse(ch, HttpMethod.CONNECT, EMPTY_RESPONSE, connectResponseConsumer); @@ -284,7 +284,7 @@ public void testDecodesFinalResponseAfterSwitchingProtocols() { "Connection: Upgrade\r\n" + "Upgrade: TLS/1.2, HTTP/1.1\r\n\r\n"; - HttpClientCodec codec = new HttpClientCodec(4096, 8192, 8192, true); + HttpClientCodec codec = new HttpClientCodec(4096, 8192, true); EmbeddedChannel ch = new EmbeddedChannel(codec, new HttpObjectAggregator(1024)); HttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "http://localhost/"); diff --git 
a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecoderTest.java index 7a27a4c08ac..b7ea6df628e 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecoderTest.java @@ -284,7 +284,7 @@ public void testRequestContentLength1() { // or removes it completely (handlers down the chain must rely on LastHttpContent object) // force content to be in more than one chunk (5 bytes/chunk) - HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096, 5); + HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096); HttpContentDecoder decompressor = new HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor); String headers = "POST / HTTP/1.1\r\n" + @@ -313,7 +313,7 @@ public void testRequestContentLength2() { // case 2: if HttpObjectAggregator is down the chain, then correct Content-Length header must be set // force content to be in more than one chunk (5 bytes/chunk) - HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096, 5); + HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096); HttpContentDecoder decompressor = new HttpContentDecompressor(); HttpObjectAggregator aggregator = new HttpObjectAggregator(1024); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor, aggregator); @@ -345,7 +345,7 @@ public void testResponseContentLength1() { // or removes it completely (handlers down the chain must rely on LastHttpContent object) // force content to be in more than one chunk (5 bytes/chunk) - HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096, 5); + HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096); HttpContentDecoder decompressor = new HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor); String headers = "HTTP/1.1 
200 OK\r\n" + @@ -377,7 +377,7 @@ public void testResponseContentLength2() { // case 2: if HttpObjectAggregator is down the chain, then correct Content-Length header must be set // force content to be in more than one chunk (5 bytes/chunk) - HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096, 5); + HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096); HttpContentDecoder decompressor = new HttpContentDecompressor(); HttpObjectAggregator aggregator = new HttpObjectAggregator(1024); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor, aggregator); @@ -405,7 +405,7 @@ public void testResponseContentLength2() { @Test public void testFullHttpRequest() { // test that ContentDecoder can be used after the ObjectAggregator - HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096, 5); + HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 4096); HttpObjectAggregator aggregator = new HttpObjectAggregator(1024); HttpContentDecoder decompressor = new HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, aggregator, decompressor); @@ -432,7 +432,7 @@ public void testFullHttpRequest() { @Test public void testFullHttpResponse() { // test that ContentDecoder can be used after the ObjectAggregator - HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096, 5); + HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096); HttpObjectAggregator aggregator = new HttpObjectAggregator(1024); HttpContentDecoder decompressor = new HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, aggregator, decompressor); @@ -460,7 +460,7 @@ public void testFullHttpResponse() { @Test public void testFullHttpResponseEOF() { // test that ContentDecoder can be used after the ObjectAggregator - HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096, 5); + HttpResponseDecoder decoder = new HttpResponseDecoder(4096, 4096); HttpContentDecoder decompressor = new 
HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor); String headers = "HTTP/1.1 200 OK\r\n" + diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java index 45720631c40..7005bfab4dd 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java @@ -297,7 +297,7 @@ public void testMessagesSplitBetweenMultipleBuffers() { @Test public void testTooLargeInitialLine() { - EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder(10, 1024, 1024)); + EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder(10, 1024)); String requestStr = "GET /some/path HTTP/1.1\r\n" + "Host: localhost1\r\n\r\n"; @@ -310,7 +310,7 @@ public void testTooLargeInitialLine() { @Test public void testTooLargeHeaders() { - EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder(1024, 10, 1024)); + EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder(1024, 10)); String requestStr = "GET /some/path HTTP/1.1\r\n" + "Host: localhost1\r\n\r\n"; diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java index 017dbd5ff94..99707a7e1f7 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java @@ -49,7 +49,7 @@ public class HttpResponseDecoderTest { public void testMaxHeaderSize1() { final int maxHeaderSize = 8192; - final EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(4096, maxHeaderSize, 8192)); + final EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(4096, maxHeaderSize)); final char[] bytes = new 
char[maxHeaderSize / 2 - 2]; Arrays.fill(bytes, 'a'); @@ -81,7 +81,7 @@ public void testMaxHeaderSize1() { public void testMaxHeaderSize2() { final int maxHeaderSize = 8192; - final EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(4096, maxHeaderSize, 8192)); + final EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(4096, maxHeaderSize)); final char[] bytes = new char[maxHeaderSize / 2 - 2]; Arrays.fill(bytes, 'a'); @@ -146,55 +146,6 @@ public void testResponseChunked() { assertNull(ch.readInbound()); } - @Test - public void testResponseChunkedExceedMaxChunkSize() { - EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(4096, 8192, 32)); - ch.writeInbound( - Unpooled.copiedBuffer("HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n", CharsetUtil.US_ASCII)); - - HttpResponse res = ch.readInbound(); - assertThat(res.protocolVersion(), sameInstance(HttpVersion.HTTP_1_1)); - assertThat(res.status(), is(HttpResponseStatus.OK)); - - byte[] data = new byte[64]; - for (int i = 0; i < data.length; i++) { - data[i] = (byte) i; - } - - for (int i = 0; i < 10; i++) { - assertFalse(ch.writeInbound(Unpooled.copiedBuffer(Integer.toHexString(data.length) + "\r\n", - CharsetUtil.US_ASCII))); - assertTrue(ch.writeInbound(Unpooled.wrappedBuffer(data))); - - byte[] decodedData = new byte[data.length]; - HttpContent content = ch.readInbound(); - assertEquals(32, content.content().readableBytes()); - content.content().readBytes(decodedData, 0, 32); - content.release(); - - content = ch.readInbound(); - assertEquals(32, content.content().readableBytes()); - - content.content().readBytes(decodedData, 32, 32); - - assertArrayEquals(data, decodedData); - content.release(); - - assertFalse(ch.writeInbound(Unpooled.copiedBuffer("\r\n", CharsetUtil.US_ASCII))); - } - - // Write the last chunk. - ch.writeInbound(Unpooled.copiedBuffer("0\r\n\r\n", CharsetUtil.US_ASCII)); - - // Ensure the last chunk was decoded. 
- LastHttpContent content = ch.readInbound(); - assertFalse(content.content().isReadable()); - content.release(); - - ch.finish(); - assertNull(ch.readInbound()); - } - @Test public void testClosureWithoutContentLength1() throws Exception { EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder()); diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java index 17570b758f9..8346112ba24 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpServerCodecTest.java @@ -33,7 +33,7 @@ public class HttpServerCodecTest { public void testUnfinishedChunkedHttpRequestIsLastFlag() throws Exception { int maxChunkSize = 2000; - HttpServerCodec httpServerCodec = new HttpServerCodec(1000, 1000, maxChunkSize); + HttpServerCodec httpServerCodec = new HttpServerCodec(1000, 1000); EmbeddedChannel decoderEmbedder = new EmbeddedChannel(httpServerCodec); int totalContentLength = maxChunkSize * 5;
train
val
"2018-12-19T12:56:14"
"2018-10-25T16:09:06Z"
bryce-anderson
val
netty/netty/8687_8691
netty/netty
netty/netty/8687
netty/netty/8691
[ "timestamp(timedelta=0.0, similarity=0.8459687228988724)" ]
6fdd7fcddbe964b2f30d7492a926f4f0bf0f083f
82ec6ba815adcb3ab057ef8a65f6f620a1ad5611
[ "@slggamerTrue thanks for reporting. Let me come up with a unit test and fix.", "@slggamerTrue can you test https://github.com/netty/netty/pull/8691 ?", "Thanks @normanmaurer. The fix is perfect. Looking forward to the new release. ", "@slggamerTrue thanks for checking and reporting" ]
[ "Use `nameWithDot` and `mappingWithDot` here?" ]
"2018-12-27T07:49:17Z"
[ "defect" ]
Dead Loop in DNS resolver
### Problem There is a dead loop in DnsResolveContext.onExpectedResponse when there is a loop in the CNAMES ![image](https://user-images.githubusercontent.com/20608396/50428555-8ac8c780-08f3-11e9-908c-5ea0c8cf1d6a.png) We found this problem because one DNS meet the problem ![image](https://user-images.githubusercontent.com/20608396/50428579-ca8faf00-08f3-11e9-829c-8927ebf72baf.png) ### Steps to reproduce There is a small chance that the "www.altura.org" would response the loop CNAME. It's easy to reproduce the problem by your own DNS server. The same case can be handled by the JDK DNS Resolver. ### Netty version 4.1.27 ### JVM version (e.g. `java -version`) JDK 1.8.181 ### OS version (e.g. `uname -a`) Centos 7.0
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java" ]
[ "resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java" ]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java index e75f49c8d46..847bfb5b9f3 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java @@ -48,6 +48,7 @@ import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.IdentityHashMap; import java.util.Iterator; import java.util.List; @@ -651,10 +652,11 @@ private void onExpectedResponse( // Make sure the record is for the questioned domain. if (!recordName.equals(questionName)) { + Map<String, String> cnamesCopy = new HashMap<String, String>(cnames); // Even if the record's name is not exactly same, it might be an alias defined in the CNAME records. String resolved = questionName; do { - resolved = cnames.get(resolved); + resolved = cnamesCopy.remove(resolved); if (recordName.equals(resolved)) { break; } @@ -749,8 +751,12 @@ private static Map<String, String> buildAliasMap(DnsResponse response, DnsCnameC String mapping = domainName.toLowerCase(Locale.US); // Cache the CNAME as well. - cache.cache(hostnameWithDot(name), hostnameWithDot(mapping), r.timeToLive(), loop); - cnames.put(name, mapping); + String nameWithDot = hostnameWithDot(name); + String mappingWithDot = hostnameWithDot(mapping); + if (!nameWithDot.equalsIgnoreCase(mappingWithDot)) { + cache.cache(nameWithDot, mappingWithDot, r.timeToLive(), loop); + cnames.put(name, mapping); + } } return cnames != null? 
cnames : Collections.<String, String>emptyMap(); @@ -875,12 +881,21 @@ private DnsServerAddressStream getNameServers(String hostname) { private void followCname(DnsQuestion question, String cname, DnsQueryLifecycleObserver queryLifecycleObserver, Promise<List<T>> promise) { + Set<String> cnames = null; for (;;) { // Resolve from cnameCache() until there is no more cname entry cached. String mapping = cnameCache().get(hostnameWithDot(cname)); if (mapping == null) { break; } + if (cnames == null) { + // Detect loops. + cnames = new HashSet<String>(2); + } + if (!cnames.add(cname)) { + // Follow CNAME from cache would loop. Lets break here. + break; + } cname = mapping; }
diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java index 70c8b6b52e0..1326a7975e8 100644 --- a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java +++ b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java @@ -96,6 +96,7 @@ import static io.netty.handler.codec.dns.DnsRecordType.CNAME; import static io.netty.resolver.dns.DnsServerAddresses.sequential; import static java.util.Collections.singletonList; +import static java.util.Collections.singletonMap; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.hasSize; import static org.hamcrest.Matchers.instanceOf; @@ -2135,6 +2136,52 @@ public Set<ResourceRecord> getRecords(QuestionRecord question) { } } + @Test + public void testFollowCNAMELoop() throws IOException { + expectedException.expect(UnknownHostException.class); + TestDnsServer dnsServer2 = new TestDnsServer(new RecordStore() { + + @Override + public Set<ResourceRecord> getRecords(QuestionRecord question) { + Set<ResourceRecord> records = new LinkedHashSet<ResourceRecord>(4); + + records.add(new TestDnsServer.TestResourceRecord("x." 
+ question.getDomainName(), + RecordType.A, Collections.<String, Object>singletonMap( + DnsAttribute.IP_ADDRESS.toLowerCase(), "10.0.0.99"))); + records.add(new TestDnsServer.TestResourceRecord( + "cname2.netty.io", RecordType.CNAME, + Collections.<String, Object>singletonMap( + DnsAttribute.DOMAIN_NAME.toLowerCase(), "cname.netty.io"))); + records.add(new TestDnsServer.TestResourceRecord( + "cname.netty.io", RecordType.CNAME, + Collections.<String, Object>singletonMap( + DnsAttribute.DOMAIN_NAME.toLowerCase(), "cname2.netty.io"))); + records.add(new TestDnsServer.TestResourceRecord( + question.getDomainName(), RecordType.CNAME, + Collections.<String, Object>singletonMap( + DnsAttribute.DOMAIN_NAME.toLowerCase(), "cname.netty.io"))); + return records; + } + }); + dnsServer2.start(); + DnsNameResolver resolver = null; + try { + DnsNameResolverBuilder builder = newResolver() + .recursionDesired(false) + .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY) + .maxQueriesPerResolve(16) + .nameServerProvider(new SingletonDnsServerAddressStreamProvider(dnsServer2.localAddress())); + + resolver = builder.build(); + resolver.resolveAll("somehost.netty.io").syncUninterruptibly().getNow(); + } finally { + dnsServer2.stop(); + if (resolver != null) { + resolver.close(); + } + } + } + @Test public void testSearchDomainQueryFailureForSingleAddressTypeCompletes() { expectedException.expect(UnknownHostException.class);
val
val
"2019-01-14T07:24:34"
"2018-12-26T01:56:12Z"
slggamerTrue
val
netty/netty/8700_8716
netty/netty
netty/netty/8700
netty/netty/8716
[ "timestamp(timedelta=0.0, similarity=0.8804482505339712)" ]
c424599593b0feee6bb76735cd60ad8f151d21fb
7988cfec0a32a44a879b60e65ad4a9cabfd3c4d7
[ "Another somewhat related question.\r\n\r\nWhen using `HttpChunkedInput` with `ChunkedWriteHandler`, `lastHttpContent` would be written/flushed even in case write promise is already reported as a failed one. We rely on `endOfInput` [being](https://github.com/netty/netty/blob/fa84e2b3af45ec7fd47909eff0aa7d2be5a54972/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L222) `true` after `closeInput` was called [here](https://github.com/netty/netty/blob/fa84e2b3af45ec7fd47909eff0aa7d2be5a54972/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L277) or [here](https://github.com/netty/netty/blob/fa84e2b3af45ec7fd47909eff0aa7d2be5a54972/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L289), which [doesn't work](https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java#L72-L74) for `HttpChunkedInput`.", "@kachayev yes this is a bug... would you mind open a PR with a fix ?", "@normanmaurer Sure! Will do shortly." ]
[ "do we also need to release the message ?", "Not here, I think... I assume someone who resolved the promise (either with success or with failure) have already done this by calling `closeInput`. That's how the code is organized now. As we're not allocating any new chunks (calling `readChunk`), everything else should be the responsibility of a specific input implementation.", "good point. Maybe add a comment ?", "Will do!", "You also need to call `ReferenceCountUtil.release(msg)` before failing the promise. ", "just use `ch.finish()` and assert the return value.", "just use `ch.finish()` and assert the return value.", "You also need to call `ReferenceCountUtil.release(msg)` before failing the promise.", "just use `ch.finish()` and assert the return value.", "Call `ReferenceCountUtil.release(msg)` before failing the promise.", "just use `ch.finish()` and assert the return value.", "call `ReferenceCountUtil.release(msg)` before failing the promise.", "just use `ch.finish()` and assert the return value.", "@normanmaurer `ch.finish` would return `false` because we effectively don't do any writes here. All messages are still in `ChunkedWriteHandler` queue, until `flush`. Meaning both `inboundMessages` and `outboundMessages` queues are empty.", "@kachayev sure which is fine just use `assertFalse(...)` as this is what you expect, or what I am missing ?", "@normanmaurer Oh, yeah. You're right! 🤦‍♂️It's just mentally hard to read `assertFalse(ch.finish())` as \"all good, just no messages\" :) I'll leave a comment why this is expected.", "@normanmaurer Updated once again. I also \"fixed\" a couple of older tests to be consistent in terms of `ch.finish()` usage." ]
"2019-01-14T12:05:56Z"
[ "defect" ]
ChunkedWriteHandler may report a failed write as a successful one
### Expected behavior If the writing of the last chunk of the chunked input message failed, the corresponding promise should be failed with an appropriate exception. ### Actual behavior `operationComplete` callback [reports success](https://github.com/netty/netty/blob/fa84e2b3af45ec7fd47909eff0aa7d2be5a54972/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L268) with no check of the `future.isSuccesful()`. If I'm not missing anything and this was not done on purpose, I'll submit a PR with a fix. ### Steps to reproduce n/a ### Minimal yet complete reproducer code (or URL to code) n/a ### Netty version 4.1.32 ### JVM version (e.g. `java -version`) n/a ### OS version (e.g. `uname -a`) n/a
[ "handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java" ]
[ "handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java" ]
[ "handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java index b4805d98ef0..f39328dff7a 100644 --- a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java +++ b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java @@ -209,6 +209,21 @@ private void doFlush(final ChannelHandlerContext ctx) { if (currentWrite == null) { break; } + + if (currentWrite.promise.isDone()) { + // This might happen e.g. in the case when a write operation + // failed, but there're still unconsumed chunks left. + // Most chunked input sources would stop generating chunks + // and report end of input, but this doesn't work with any + // source wrapped in HttpChunkedInput. + // Note, that we're not trying to release the message/chunks + // as this had to be done already by someone who resolved the + // promise (using ChunkedInput.close method). + // See https://github.com/netty/netty/issues/8700. + this.currentWrite = null; + continue; + } + final PendingWrite currentWrite = this.currentWrite; final Object pendingMessage = currentWrite.msg; @@ -264,9 +279,13 @@ private void doFlush(final ChannelHandlerContext ctx) { f.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { - currentWrite.progress(chunks.progress(), chunks.length()); - currentWrite.success(chunks.length()); - closeInput(chunks); + if (!future.isSuccess()) { + closeInput(chunks); + currentWrite.fail(future.cause()); + } else { + currentWrite.progress(chunks.progress(), chunks.length()); + currentWrite.success(chunks.length()); + } } }); } else if (channel.isWritable()) {
diff --git a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java index 66b69516fc0..5b03048ba69 100644 --- a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java @@ -21,8 +21,11 @@ import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelPromise; +import io.netty.channel.ChannelOutboundHandlerAdapter; import io.netty.channel.embedded.EmbeddedChannel; import io.netty.util.CharsetUtil; +import io.netty.util.ReferenceCountUtil; import org.junit.Test; import java.io.ByteArrayInputStream; @@ -31,10 +34,9 @@ import java.io.IOException; import java.nio.channels.Channels; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; +import static org.junit.Assert.*; public class ChunkedWriteHandlerTest { private static final byte[] BYTES = new byte[1024 * 64]; @@ -162,8 +164,7 @@ public void operationComplete(ChannelFuture future) throws Exception { EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); ch.writeAndFlush(input).addListener(listener).syncUninterruptibly(); - ch.checkException(); - ch.finish(); + assertTrue(ch.finish()); // the listener should have been notified assertTrue(listenerNotified.get()); @@ -220,13 +221,218 @@ public long progress() { EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); ch.writeAndFlush(input).syncUninterruptibly(); - ch.checkException(); assertTrue(ch.finish()); assertEquals(0, ch.readOutbound()); assertNull(ch.readOutbound()); } + @Test + public void testWriteFailureChunkedStream() throws IOException { + checkFirstFailed(new 
ChunkedStream(new ByteArrayInputStream(BYTES))); + } + + @Test + public void testWriteFailureChunkedNioStream() throws IOException { + checkFirstFailed(new ChunkedNioStream(Channels.newChannel(new ByteArrayInputStream(BYTES)))); + } + + @Test + public void testWriteFailureChunkedFile() throws IOException { + checkFirstFailed(new ChunkedFile(TMP)); + } + + @Test + public void testWriteFailureChunkedNioFile() throws IOException { + checkFirstFailed(new ChunkedNioFile(TMP)); + } + + @Test + public void testWriteFailureUnchunkedData() throws IOException { + checkFirstFailed(Unpooled.wrappedBuffer(BYTES)); + } + + @Test + public void testSkipAfterFailedChunkedStream() throws IOException { + checkSkipFailed(new ChunkedStream(new ByteArrayInputStream(BYTES)), + new ChunkedStream(new ByteArrayInputStream(BYTES))); + } + + @Test + public void testSkipAfterFailedChunkedNioStream() throws IOException { + checkSkipFailed(new ChunkedNioStream(Channels.newChannel(new ByteArrayInputStream(BYTES))), + new ChunkedNioStream(Channels.newChannel(new ByteArrayInputStream(BYTES)))); + } + + @Test + public void testSkipAfterFailedChunkedFile() throws IOException { + checkSkipFailed(new ChunkedFile(TMP), new ChunkedFile(TMP)); + } + + @Test + public void testSkipAfterFailedChunkedNioFile() throws IOException { + checkSkipFailed(new ChunkedNioFile(TMP), new ChunkedFile(TMP)); + } + + // See https://github.com/netty/netty/issues/8700. 
+ @Test + public void testFailureWhenLastChunkFailed() throws IOException { + ChannelOutboundHandlerAdapter failLast = new ChannelOutboundHandlerAdapter() { + private int passedWrites; + + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + if (++this.passedWrites < 4) { + ctx.write(msg, promise); + } else { + ReferenceCountUtil.release(msg); + promise.tryFailure(new RuntimeException()); + } + } + }; + + EmbeddedChannel ch = new EmbeddedChannel(failLast, new ChunkedWriteHandler()); + ChannelFuture r = ch.writeAndFlush(new ChunkedFile(TMP, 1024 * 16)); // 4 chunks + assertTrue(ch.finish()); + + assertFalse(r.isSuccess()); + assertTrue(r.cause() instanceof RuntimeException); + + // 3 out of 4 chunks were already written + int read = 0; + for (;;) { + ByteBuf buffer = ch.readOutbound(); + if (buffer == null) { + break; + } + read += buffer.readableBytes(); + buffer.release(); + } + + assertEquals(1024 * 16 * 3, read); + } + + @Test + public void testDiscardPendingWritesOnInactive() throws IOException { + + final AtomicBoolean closeWasCalled = new AtomicBoolean(false); + + ChunkedInput<ByteBuf> notifiableInput = new ChunkedInput<ByteBuf>() { + private boolean done; + private final ByteBuf buffer = Unpooled.copiedBuffer("Test", CharsetUtil.ISO_8859_1); + + @Override + public boolean isEndOfInput() throws Exception { + return done; + } + + @Override + public void close() throws Exception { + buffer.release(); + closeWasCalled.set(true); + } + + @Deprecated + @Override + public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception { + return readChunk(ctx.alloc()); + } + + @Override + public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception { + if (done) { + return null; + } + done = true; + return buffer.retainedDuplicate(); + } + + @Override + public long length() { + return -1; + } + + @Override + public long progress() { + return 1; + } + }; + + EmbeddedChannel ch = new EmbeddedChannel(new 
ChunkedWriteHandler()); + + // Write 3 messages and close channel before flushing + ChannelFuture r1 = ch.write(new ChunkedFile(TMP)); + ChannelFuture r2 = ch.write(new ChunkedNioFile(TMP)); + ch.write(notifiableInput); + + // Should be `false` as we do not expect any messages to be written + assertFalse(ch.finish()); + + assertFalse(r1.isSuccess()); + assertFalse(r2.isSuccess()); + assertTrue(closeWasCalled.get()); + } + + // See https://github.com/netty/netty/issues/8700. + @Test + public void testStopConsumingChunksWhenFailed() { + final ByteBuf buffer = Unpooled.copiedBuffer("Test", CharsetUtil.ISO_8859_1); + final AtomicInteger chunks = new AtomicInteger(0); + + ChunkedInput<ByteBuf> nonClosableInput = new ChunkedInput<ByteBuf>() { + @Override + public boolean isEndOfInput() throws Exception { + return chunks.get() >= 5; + } + + @Override + public void close() throws Exception { + // no-op + } + + @Deprecated + @Override + public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception { + return readChunk(ctx.alloc()); + } + + @Override + public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception { + chunks.incrementAndGet(); + return buffer.retainedDuplicate(); + } + + @Override + public long length() { + return -1; + } + + @Override + public long progress() { + return 1; + } + }; + + ChannelOutboundHandlerAdapter noOpWrites = new ChannelOutboundHandlerAdapter() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + ReferenceCountUtil.release(msg); + promise.tryFailure(new RuntimeException()); + } + }; + + EmbeddedChannel ch = new EmbeddedChannel(noOpWrites, new ChunkedWriteHandler()); + ch.writeAndFlush(nonClosableInput).awaitUninterruptibly(); + // Should be `false` as we do not expect any messages to be written + assertFalse(ch.finish()); + buffer.release(); + + // We should expect only single chunked being read from the input. 
+ // It's possible to get a race condition here between resolving a promise and + // allocating a new chunk, but should be fine when working with embedded channels. + assertEquals(1, chunks.get()); + } + private static void check(Object... inputs) { EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); @@ -255,4 +461,67 @@ private static void check(Object... inputs) { assertEquals(BYTES.length * inputs.length, read); } + + private static void checkFirstFailed(Object input) { + ChannelOutboundHandlerAdapter noOpWrites = new ChannelOutboundHandlerAdapter() { + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + ReferenceCountUtil.release(msg); + promise.tryFailure(new RuntimeException()); + } + }; + + EmbeddedChannel ch = new EmbeddedChannel(noOpWrites, new ChunkedWriteHandler()); + ChannelFuture r = ch.writeAndFlush(input); + + // Should be `false` as we do not expect any messages to be written + assertFalse(ch.finish()); + assertTrue(r.cause() instanceof RuntimeException); + } + + private static void checkSkipFailed(Object input1, Object input2) { + ChannelOutboundHandlerAdapter failFirst = new ChannelOutboundHandlerAdapter() { + private boolean alreadyFailed; + + @Override + public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) { + if (alreadyFailed) { + ctx.write(msg, promise); + } else { + this.alreadyFailed = true; + ReferenceCountUtil.release(msg); + promise.tryFailure(new RuntimeException()); + } + } + }; + + EmbeddedChannel ch = new EmbeddedChannel(failFirst, new ChunkedWriteHandler()); + ChannelFuture r1 = ch.write(input1); + ChannelFuture r2 = ch.writeAndFlush(input2).awaitUninterruptibly(); + assertTrue(ch.finish()); + + assertTrue(r1.cause() instanceof RuntimeException); + assertTrue(r2.isSuccess()); + + // note, that after we've "skipped" the first write, + // we expect to see the second message, chunk by chunk + int i = 0; + int read = 0; + for (;;) { + ByteBuf 
buffer = ch.readOutbound(); + if (buffer == null) { + break; + } + while (buffer.isReadable()) { + assertEquals(BYTES[i++], buffer.readByte()); + read++; + if (i == BYTES.length) { + i = 0; + } + } + buffer.release(); + } + + assertEquals(BYTES.length, read); + } }
train
val
"2019-01-15T08:38:13"
"2019-01-02T17:23:19Z"
kachayev
val
netty/netty/8727_8728
netty/netty
netty/netty/8727
netty/netty/8728
[ "timestamp(timedelta=240.0, similarity=0.991717390100074)" ]
c893939bd89243a1d6423ed86419bcf7748659ec
1e4481e551696a57c79d3f1acd588171e12e082b
[ "@yulianoifa-mobius thanks a lot for the patch :)" ]
[ "@yulianoifa-mobius please change to 2019", "Add a CRLF.", "i guess you meaned to remove CRLF" ]
"2019-01-17T20:08:51Z"
[ "defect" ]
Allow IP_FREEBIND option for UDP epoll
### Expected behavior while adding IP_FREEBIND option to boostrap it should work with EpollDatagramChannel ### Actual behavior notification is logged that IP_FREEBIND is unknown option for this type of channel ### Steps to reproduce set EpollChannelOption.IP_FREEBIND to true on ConnectionlessBootstrap ### Minimal yet complete reproducer code (or URL to code) group = new EpollEventLoopGroup(poolSize); connectionlessBootstrap=new Bootstrap(); connectionlessBootstrap.option(EpollChannelOption.SO_REUSEPORT, true); connectionlessBootstrap.option(EpollChannelOption.IP_RECVORIGDSTADDR, true); connectionlessBootstrap.option(EpollChannelOption.IP_FREEBIND, true); connectionlessBootstrap.channel(EpollDatagramChannel.class); connectionlessBootstrap.group(group); connectionlessBootstrap.bind(new InetSocketAddress("0.0.0.0", port)); ### Netty version 4.1.25 ### JVM version (e.g. `java -version`) 1.8.0 ### OS version (e.g. `uname -a`) ubuntu
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java" ]
[ "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramChannelConfigTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java index f3de6ac5947..778b555fa19 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java @@ -47,7 +47,7 @@ public Map<ChannelOption<?>, Object> getOptions() { ChannelOption.SO_REUSEADDR, ChannelOption.IP_MULTICAST_LOOP_DISABLED, ChannelOption.IP_MULTICAST_ADDR, ChannelOption.IP_MULTICAST_IF, ChannelOption.IP_MULTICAST_TTL, ChannelOption.IP_TOS, ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, - EpollChannelOption.SO_REUSEPORT, EpollChannelOption.IP_TRANSPARENT, + EpollChannelOption.SO_REUSEPORT, EpollChannelOption.IP_FREEBIND, EpollChannelOption.IP_TRANSPARENT, EpollChannelOption.IP_RECVORIGDSTADDR); } @@ -90,6 +90,9 @@ public <T> T getOption(ChannelOption<T> option) { if (option == EpollChannelOption.IP_TRANSPARENT) { return (T) Boolean.valueOf(isIpTransparent()); } + if (option == EpollChannelOption.IP_FREEBIND) { + return (T) Boolean.valueOf(isFreeBind()); + } if (option == EpollChannelOption.IP_RECVORIGDSTADDR) { return (T) Boolean.valueOf(isIpRecvOrigDestAddr()); } @@ -123,6 +126,8 @@ public <T> boolean setOption(ChannelOption<T> option, T value) { setActiveOnOpen((Boolean) value); } else if (option == EpollChannelOption.SO_REUSEPORT) { setReusePort((Boolean) value); + } else if (option == EpollChannelOption.IP_FREEBIND) { + setFreeBind((Boolean) value); } else if (option == EpollChannelOption.IP_TRANSPARENT) { setIpTransparent((Boolean) value); } else if (option == EpollChannelOption.IP_RECVORIGDSTADDR) { @@ -407,6 +412,31 @@ public EpollDatagramChannelConfig setIpTransparent(boolean ipTransparent) { } } + /** + * Returns {@code true} if <a href="http://man7.org/linux/man-pages/man7/ip.7.html">IP_FREEBIND</a> is 
enabled, + * {@code false} otherwise. + */ + public boolean isFreeBind() { + try { + return ((EpollDatagramChannel) channel).socket.isIpFreeBind(); + } catch (IOException e) { + throw new ChannelException(e); + } + } + + /** + * If {@code true} is used <a href="http://man7.org/linux/man-pages/man7/ip.7.html">IP_FREEBIND</a> is enabled, + * {@code false} for disable it. Default is disabled. + */ + public EpollDatagramChannelConfig setFreeBind(boolean freeBind) { + try { + ((EpollDatagramChannel) channel).socket.setIpFreeBind(freeBind); + return this; + } catch (IOException e) { + throw new ChannelException(e); + } + } + /** * Returns {@code true} if <a href="http://man7.org/linux/man-pages/man7/ip.7.html">IP_RECVORIGDSTADDR</a> is * enabled, {@code false} otherwise.
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramChannelConfigTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramChannelConfigTest.java new file mode 100644 index 00000000000..39cf7acd064 --- /dev/null +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramChannelConfigTest.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel.epoll; + +import org.junit.Test; + +import static org.junit.Assert.assertTrue; + +public class EpollDatagramChannelConfigTest { + + @Test + public void testIpFreeBind() throws Exception { + Epoll.ensureAvailability(); + EpollDatagramChannel channel = new EpollDatagramChannel(); + assertTrue(channel.config().setOption(EpollChannelOption.IP_FREEBIND, true)); + assertTrue(channel.config().getOption(EpollChannelOption.IP_FREEBIND)); + channel.fd().close(); + } +}
test
val
"2019-01-17T09:14:27"
"2019-01-17T19:58:56Z"
yulianoifa-mobius
val
netty/netty/8261_8731
netty/netty
netty/netty/8261
netty/netty/8731
[ "keyword_issue_to_pr" ]
c893939bd89243a1d6423ed86419bcf7748659ec
df5eb060f729a18582d659f868f764edc9a7277e
[ "@johnjaylward would it be possible to try different netty versions and see at which point it breaks ? This would help me a lot. ", "Are there any breaking points in the API between 4.1.13 and 4.1.20? If not, I don't see an issue with trying on my end.", "@johnjaylward as the dns resolver is marked as unstable there were a few, but I think most people should not be affected as the scope is small. So I would give it a try.", "well, now that I've played around with the versions a little more, I'm not sure what's up.\r\nUsing Redisson 3.5.0 with any version of Netty and the DNS query seems to work. \r\n\r\nHowever using Redisson 3.6.0 with any version of Netty, the query fails. I'm guessing Redisson must have changed some configuration option that it is passing to the Netty resolver? Is that a possibility? If so, I'll open this on the Redisson side.", "@johnjaylward sure thats possible... I dont know enough about Redission to tell you what they do.", "Thanks, I'll check with them.", "@johnjaylward please let me know how it goes :)", "@johnjaylward\r\n\r\nI would reopen this issue. Issue appeared once Redisson switched to netty based resolver from JDK's `InetAddress.getByName` method.", "Thanks for looking @mrniko .\r\n\r\n@normanmaurer looks like it's an error in how netty is resolving.", "Updated title since it's not a regression.", "Can we somehow have a testcase to reproduce which does not involve setup an own dnsserver etc?\n\n> Am 10.09.2018 um 17:17 schrieb johnjaylward <notifications@github.com>:\n> \n> Reopened #8261.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "I'm not sure if it's environment related or not. It appears to be environment related as it seems to resolve some host names just fine, it's just this custom top level domain one it has issues with (i.e. 
not localhost/.com/.edu etc which work fine).\r\n\r\nThe TLD is just \"companyname\" not \"companyname.com\" . The resolver appears to be looking at \"host.companyname\" and not liking the TLD, so it appends the DNS Search Domain to it (host.companyname.someOther.searchDomain), which is invalid.\r\n\r\nShort of setting up your own DNS stack with a custom TLD to test it, I'm not sure what else to do.", "Maybe you can test against a public TLD like .amazon or .google or something? I'm not sure if there would be a difference there though as I have 3 resolvers configured, but only 1 will return a valid result for the custom TLD.", "So you say it only happens for custom TLDs?\n\n> Am 10.09.2018 um 17:31 schrieb johnjaylward <notifications@github.com>:\n> \n> Maybe you can test against a public TLD like .amazon or .google or something? I'm not sure if there would be a difference there though as I have 3 resolvers configured, but only 1 will return a valid result for the custom TLD.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@johnjaylward could you test to only include the dnsserver that handles the custom domainname and see if it resolves in this case ? \r\n\r\nAlso would it be possible to run the following command against each of the servers (using the domain you want to resolve) and add the output here:\r\n\r\n```\r\ndig @dnsserverip host.toplevel A\r\n```", "Personal local network:\r\n```\r\n$ dig @192.168.55.5 host.domain A\r\n\r\n; <<>> DiG 9.12.2-P1 <<>> @192.168.55.5 host.domain A\r\n; (1 server found)\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 58575\r\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 4096\r\n;; QUESTION SECTION:\r\n;host.domain. IN A\r\n\r\n;; AUTHORITY SECTION:\r\n. 10619 IN SOA a.root-servers.net. 
nstld.verisign-grs.com. 2018091101 1800 900 604800 86400\r\n\r\n;; Query time: 0 msec\r\n;; SERVER: 192.168.55.5#53(192.168.55.5)\r\n;; WHEN: Tue Sep 11 15:05:55 Eastern Daylight Time 2018\r\n;; MSG SIZE rcvd: 115\r\n```\r\n\r\nGlobal public network:\r\n```\r\n$ dig @8.8.8.8 host.domain A\r\n\r\n; <<>> DiG 9.12.2-P1 <<>> @8.8.8.8 host.domain A\r\n; (1 server found)\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 18400\r\n;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 512\r\n;; QUESTION SECTION:\r\n;host.domain. IN A\r\n\r\n;; AUTHORITY SECTION:\r\n. 86362 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2018091101 1800 900 604800 86400\r\n\r\n;; Query time: 31 msec\r\n;; SERVER: 8.8.8.8#53(8.8.8.8)\r\n;; WHEN: Tue Sep 11 15:04:32 Eastern Daylight Time 2018\r\n;; MSG SIZE rcvd: 115\r\n```\r\n\r\nCompany DNS over VPN\r\n```\r\n$ dig @10.253.48.2 host.domain A\r\n\r\n; <<>> DiG 9.12.2-P1 <<>> @10.253.48.2 host.domain A\r\n; (1 server found)\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42724\r\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 4000\r\n; COOKIE: 065a3fc0ed8df40e (echoed)\r\n;; QUESTION SECTION:\r\n;host.domain. IN A\r\n\r\n;; ANSWER SECTION:\r\nhost.domain. 
3600 IN A 10.253.50.30\r\n\r\n;; Query time: 46 msec\r\n;; SERVER: 10.253.48.2#53(10.253.48.2)\r\n;; WHEN: Tue Sep 11 15:08:19 Eastern Daylight Time 2018\r\n;; MSG SIZE rcvd: 68\r\n```\r\n\r\n```\r\n$ dig @10.253.48.3 host.domain A\r\n\r\n; <<>> DiG 9.12.2-P1 <<>> @10.253.48.3 host.domain A\r\n; (1 server found)\r\n;; global options: +cmd\r\n;; Got answer:\r\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19221\r\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\r\n\r\n;; OPT PSEUDOSECTION:\r\n; EDNS: version: 0, flags:; udp: 4000\r\n; COOKIE: 5e8dd4d92c46330d (echoed)\r\n;; QUESTION SECTION:\r\n;host.domain. IN A\r\n\r\n;; ANSWER SECTION:\r\nhost.domain. 3600 IN A 10.253.50.30\r\n\r\n;; Query time: 31 msec\r\n;; SERVER: 10.253.48.3#53(10.253.48.3)\r\n;; WHEN: Tue Sep 11 15:08:38 Eastern Daylight Time 2018\r\n;; MSG SIZE rcvd: 68\r\n\r\n```\r\n", "@normanmaurer I'm not sure I can configure Redisson to use a specific DNS server...", "@johnjaylward interesting so at least one server returns `NXDOMAIN`. This is on what OS ? Can you also show me the contents of /etc/resolv.conf", "it's on windows 10 in this instance. When I was stepping through the calls the server list contained all 3 primary servers. I can't remember if it contained the secondary over the VPN. 
If I have time today, I'll try to step through again and see which servers were listed.", "I am getting similar kind of error with redisson version 3.7.2 and netty version 4.1.25.Final\r\n\r\n`Unable to resolve xxxx.redis.cache.windows.net - java.net.UnknownHostException: failed to resolve xxxx.redis.cache.windows.net' after 4 queries at io.netty.resolver.dns.DnsNameResolverContext.finishResolve(DnsNameResolverContext.java:721) at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:663) at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:306) at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:295) at io.netty.resolver.dns.DnsNameResolverContext.access$700(DnsNameResolverContext.java:60) at io.netty.resolver.dns.DnsNameResolverContext$3.operationComplete(DnsNameResolverContext.java:339) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122) at io.netty.resolver.dns.DnsQueryContext.setFailure(DnsQueryContext.java:223) at io.netty.resolver.dns.DnsQueryContext.access$300(DnsQueryContext.java:42) at io.netty.resolver.dns.DnsQueryContext$4.run(DnsQueryContext.java:162) at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:745) Caused by: io.netty.resolver.dns.DnsNameResolverTimeoutException: [/8.8.4.4:53] query timed out after 5000 milliseconds (no stack trace available)`\r\n\r\n\r\n", "@normanmaurer \r\n\r\nCould this error https://github.com/redisson/redisson/issues/1646 related to this issue either?", "@johnjaylward @utsavchanda any more details ?", "I don't at this time. I was mainly looking at upgrading the libraries as part of a larger task and have since put down the upgrade due to this issue and moved onto the rest of the task. I'm not sure when I'll have time to investigate it further.", "Sorry but without more infos I suspect I will not be able to help :(", "@normanmaurer \r\n\r\nHere is yet another case with AWS https://github.com/redisson/redisson/issues/1486#issuecomment-426783405", "@normanmaurer @johnjaylward\r\n\r\nBingo! Finally managed to reproduce the issue. Below is the test code:\r\n```java\r\n NioEventLoopGroup niogroup = new NioEventLoopGroup();\r\n \r\n DnsAddressResolverGroup group = new DnsAddressResolverGroup(NioDatagramChannel.class, DnsServerAddressStreamProviders.platformDefault());\r\n ExecutorService es = Executors.newFixedThreadPool(2);\r\n CountDownLatch latch = new CountDownLatch(100);\r\n for (int i = 0; i < 100; i++) {\r\n es.execute(new Runnable() {\r\n @Override\r\n public void run() {\r\n AddressResolver<InetSocketAddress> resolver = group.getResolver(niogroup.next());\r\n try {\r\n URI uri = new URI(\"redis://dev.myredis.com:6379\");\r\n Future<List<InetSocketAddress>> allNodes = resolver.resolveAll(InetSocketAddress.createUnresolved(uri.getHost(), uri.getPort()));\r\n List<InetSocketAddress> list = allNodes.syncUninterruptibly().getNow();\r\n System.out.println(list);\r\n } catch (URISyntaxException e) {\r\n // TODO Auto-generated catch block\r\n e.printStackTrace();\r\n } finally {\r\n latch.countDown();\r\n }\r\n }\r\n 
});\r\n }\r\n latch.await();\r\n group.close();\r\n es.shutdown();\r\n niogroup.shutdownGracefully();\r\n```\r\n\r\nMaraDNS (http://maradns.samiam.org/) running on 127.0.0.1:53\r\nHere is the config for it https://github.com/netty/netty/files/1941275/maradns-config.zip\r\n\r\nIf server stands first in list of dns then all works fine:\r\n\r\nDefault DNS servers: [/127.0.0.1:53, /8.8.8.8:53, /87.98.175.85:53, /51.254.25.115:53]\r\nLog without errors: https://gist.github.com/mrniko/8ea3153888aa3dd8315b4549a77c34bb\r\n\r\nBut if it's not, then lot of errors arise:\r\n\r\nDefault DNS servers: [/8.8.8.8:53, /127.0.0.1:53, /87.98.175.85:53, /51.254.25.115:53]\r\nLog with errors: https://gist.github.com/mrniko/59529c03450c69094d7b379515919ea2\r\n\r\nNetty 4.1.30.Final\r\nJDK 11", "Will check once back at work\n\n> Am 05.10.2018 um 20:35 schrieb Nikita Koksharov <notifications@github.com>:\n> \n> @normanmaurer @johnjaylward\n> \n> Bingo! Finally managed to reproduce the issue. Below is the test code:\n> \n> NioEventLoopGroup niogroup = new NioEventLoopGroup();\n> \n> DnsAddressResolverGroup group = new DnsAddressResolverGroup(NioDatagramChannel.class, DnsServerAddressStreamProviders.platformDefault());\n> ExecutorService es = Executors.newFixedThreadPool(2);\n> CountDownLatch latch = new CountDownLatch(100);\n> for (int i = 0; i < 100; i++) {\n> es.execute(new Runnable() {\n> @Override\n> public void run() {\n> AddressResolver<InetSocketAddress> resolver = group.getResolver(niogroup.next());\n> try {\n> URI uri = new URI(\"redis://dev.myredis.com:6379\");\n> Future<List<InetSocketAddress>> allNodes = resolver.resolveAll(InetSocketAddress.createUnresolved(uri.getHost(), uri.getPort()));\n> List<InetSocketAddress> list = allNodes.syncUninterruptibly().getNow();\n> System.out.println(list);\n> } catch (URISyntaxException e) {\n> // TODO Auto-generated catch block\n> e.printStackTrace();\n> } finally {\n> latch.countDown();\n> }\n> }\n> });\n> }\n> latch.await();\n> 
group.close();\n> es.shutdown();\n> niogroup.shutdownGracefully();\n> MaraDNS (http://maradns.samiam.org/) running on 127.0.0.1:53\n> Here is the config for it https://github.com/netty/netty/files/1941275/maradns-config.zip\n> \n> If server stands first in list of dns then all works fine:\n> \n> Default DNS servers: [/127.0.0.1:53, /8.8.8.8:53, /87.98.175.85:53, /51.254.25.115:53]\n> Log without errors: https://gist.github.com/mrniko/8ea3153888aa3dd8315b4549a77c34bb\n> \n> But if it's not, then lot of errors arise:\n> \n> Default DNS servers: [/8.8.8.8:53, /127.0.0.1:53, /87.98.175.85:53, /51.254.25.115:53]\n> Log with errors: https://gist.github.com/mrniko/59529c03450c69094d7b379515919ea2\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer \r\n\r\nHere is workaround I implemented in Redisson\r\nhttps://github.com/redisson/redisson/commit/14eeed15f6b7b508dd206d40dec7c80377e6c9db", "So you basically create one resolver per dns server address and try all of them before fail the query ?\n\n> Am 06.10.2018 um 16:05 schrieb Nikita Koksharov <notifications@github.com>:\n> \n> @normanmaurer\n> \n> Here is workaround I implemented in Redisson\n> redisson/redisson@14eeed1\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer \r\n\r\nThat's correct", "@mrniko ok I just returned from vacation and looking at this again... I am still trying to wrap my head around this. \r\n\r\nFrom my understanding of the logs what we do is correct. The nameservers we query first returns `NXDOMAIN` which indicates that the domain does not exist and so we bail out. I need to check what exactly the JDK implementation does tho. But from my understanding this is a \"sane thing\" to do.", "Hmm, no, that is the opposite of what you want to do. 
By bailing early you are preventing any internal/private domains from ever resolving. That's the entire point of allowing multiple resolvers, if one fails or can't find the domain, you try others. Not every domain name or sub domain is on a public resolver. If 8.8.8.8 returns first with \"no result\" and that's the only result you use, then private domains will never work.", "I'm curious if there is any RFC section that tells us what to do in this case, i.e. try other name server on NXDOMAIN. Such a behavior would increase the load on secondary name server and total lookup time for a non-existent domain name, and it doesn't sound like a good idea to me.", "@trustin I did not find anything in the RFC that said I agree with you but I will need to do some more research tomorrow. ", "Is it possible to add custom implementation to netty for this case like I did in Redisson?", "@mrniko sure we could but I would first like to better understand the whole thing before adding anything. Thats why I want to do more research here before taking any action", "The case of glibc resolver:\r\n\r\n- 13 years ago: https://bugzilla.redhat.com/show_bug.cgi?id=160914#c6 (No retry on NXDOMAIN)\r\n- 5 years ago: https://serverfault.com/questions/501739/dns-resolv-conf-issue-dns-doesnt-resolve-for-certain-internal-addresses-despi (No retry on NXDOMAIN)", "I know what you mean, but the entire point of private domains is that a public query would return NXDOMAIN. \r\n\r\nHere's my setup: \r\n* Ethernet connection: IP Address in the 192.168.55.X single DNS Resolver at 192.168.55.5\r\n* VPN IP Address in the 10.X.X.X range, Primary DNS 10.x.x.3 Secondary DNS 10.x.x.2\r\n\r\nThe regular ethernet connections DNS resolver is a standard recusive resolver with cache. It will never be able to resolve things that are private to the company (i.e. dig someHost@myCompany or dig someHost@internal.mycomapany.com will always return NXDOMAIN). 
This is expected because I don't have access to those resolvers that know about those domains when off the VPN.\r\n\r\nThe VPN dns servers (both primary and secondary) can resolve those as they know of the internal domain names.\r\n\r\nIn this setup I effectively have 2 primary resolvers, one for standard ethernet, and another for the VPN. If netty only takes my public resolving DNS as the primary, then the internal domain names over the company VPN will never get picked up.", "The case of musl resolver (this year) https://github.com/gliderlabs/docker-alpine/issues/402", "@trustin thanks for digging these up... This is also interesting : https://bugzilla.redhat.com/show_bug.cgi?id=162625\r\n \r\n@johnjaylward just to confirm all works when you use the JDK based resolver with netty tho ?", "When you are connected to VPN, the VPN software should update the resolve.conf so that only the internal DNS servers are used. On disconnection, the VPN software should update the resolve.conf again back to the public DNS servers. You should not mix public and internal DNS server. If you would, you have to put the private ones *before* the public ones.\r\n\r\nIs your VPN connection established *after* you start up your Netty application? Then Netty may use the outdated public DNS servers to send queries, but otherwise, all is working as expected in my opinion.", "@trustin yeah I wonder if the problem is that we not \"watch\" for updates of resolve.conf and so may get into trouble when it is altered afterwards. ", "@trustin as far as I know the JDK implementation does have a background thread that takes care of taking updates into account.", "@normanmaurer correct. It was working fine in the old version of Redisson that was using the JDK resolver. It wasn't until the switch to the Netty resolver that it became an issue.\r\n\r\nAlso, this is running from windows. There is no resolve.conf. 
Windows binds the DNS resolver configurations to the interfaces, so the standard Ethernet connection keeps it's resolver, and the new VPN connection gets it's own resolvers. There is no overwriting of the old resolver like in Linux. I can only assume that windows uses all 3 resolvers when making queries.", "@johnjaylward you should be able to verify by using Wireshark here and see what resolvers it uses. ", "> thanks for digging these up... This is also interesting : https://bugzilla.redhat.com/show_bug.cgi?id=162625\r\n\r\n~I guess this is when the first server disallows a recursive query.~\r\n\r\nAh, you mean Windows resolver?", "This comment explains the difference between glibc and Windows resolver: https://bugzilla.redhat.com/show_bug.cgi?id=160914#c5", "Also I checked the JDK impl again and it reloads the dns server config from time to time. We may should do the same...\n\n> Am 11.10.2018 um 19:45 schrieb Trustin Lee <notifications@github.com>:\n> \n> This comment explains the difference between glibc and Windows resolver: https://bugzilla.redhat.com/show_bug.cgi?id=160914#c5\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Given that JDK uses OS's native resolver, it makes sense that @johnjaylward is seeing different behavior between JDK and Netty resolver on Windows. However, the original report mentions that this issue appears on OS X and Linux as well. Could someone confirm if it really affects OS X and Linux?\r\n\r\nI'm curious if we should tune our resolver so it matches the behavior of the Windows resolver, although I think it's rather close to misconfiguration in VPN.", "@trustin the initial report may have been misleading. My development machine I saw this issue on is Windows. 
The final application runs on Ubuntu and CentOS, but I did not get to testing that far down the cycle since it was failing in DEV.", "TBH though, I don't expect this to ever be an issue in **my** production environments since there is no VPN there, and the only DNS resolvers configured are the company owned ones. I'm not sure I have a linux machine available I can test this on myself.", "Also, here's a good explanation of the Windows resolver path: https://serverfault.com/questions/84291/how-does-windows-decides-which-dns-server-to-use-when-resolving-names#84293", "So, I ran a wireshark on port 53 as suggested. \r\n\r\nLooks like the preferred order for nslookup when using git-bash on windows is to use the VPN DNS first. It appears that the priority is correct at the OS level. I only see queries against \"default\" server (since nslookup only uses a single server).\r\n\r\nI then tested it using Firefox and it used all 3 DNS servers. It actually tried resolving against the non-VPN server first, which failed: ```24\t2.420892\t192.168.55.5\t192.168.55.169\tDNS\t146\tStandard query response 0x5e05 No such name AAAA host.domain SOA a.root-servers.net```. Firefox then followed up with subsequent queries to the other DNS servers on the VPN.\r\n\r\nThe actual order of the request/responses from Firefox looked like this:\r\n1. Parallel request to the 2 primary DNS interfaces (VPN and Non-VPN. I assume if I had 3 interfaces up, it would have sent 3 requests, one for each primary... maybe not though)\r\n2. Primary non-VPN request failed\r\n3. Request sent to secondary DNS on the VPN\r\n4. Primary DNS on the VPN responded with a success\r\n5. 
Secondary DNS on the VPN responded with a success.\r\n\r\nRunning wireshark against Chrome provided a similar workflow (querying all the DNS servers), but instead ran against the VPN DNs servers first.\r\n\r\nTechnically from a traffic perspective Chrome is worse as it makes lots of calls out to google services that are really just unnecessary.\r\n\r\nI guess the end result is what you are trying to accomplish with this resolver. If you want it to match the specs as close as possible and work more like a \"dig\" or \"nslookup\", then no changes would be needed. It would be doing it's job as-designed. If your expectation is that this resolver is used for end-user applications, then it should probably ignore NXDOMAIN and try against other servers in order to try and get a working record.\r\n", "@trustin I am still not 100 % sure what to do about this in general but that said what you think about refresh the DNS servers / search domains on regular basis just as the JDK is doing:\r\n\r\nhttp://hg.openjdk.java.net/jdk/jdk11/file/76072a077ee1/src/java.base/unix/classes/sun/net/dns/ResolverConfigurationImpl.java#l124 \r\n\r\n ?", "I think there are two action items:\r\n\r\n- Refreshing the System DNS resolver configuration periodically\r\n - This may or may not be relevant to this issue, but we should fix this.\r\n- Changing the behavior of our DNS resolver so that it's behavior is on par with that of the host OS.\r\n - This probably will fix this issue but I'm not 100% sure if it's worth fixing it given the complexity and its platform-dependent nature. It'd be natural to behave in the same way with host OS's behavior from Windows users' standpoint, though. 
Would love to listen to what other folks think about this.", "Ok let me do a pr for the refresh in the meantime\n\n> Am 14.10.2018 um 09:01 schrieb Trustin Lee <notifications@github.com>:\n> \n> I think there are two action items:\n> \n> Refreshing the System DNS resolver configuration periodically\n> This may or may not be relevant to this issue, but we should fix this.\n> Changing the behavior of our DNS resolver so that it's behavior is on par with that of the host OS.\n> This probably will fix this issue but I'm not 100% sure if it's worth fixing it given the complexity and its platform-dependent nature. It'd be natural to behave in the same way with host OS's behavior from Windows users' standpoint, though. Would love to listen to what other folks think about this.\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer , @trustin , @johnjaylward ,\r\n\r\nWe bumped into this issue.\r\n\r\nWould vote for this approach:\r\n\r\n> If your expectation is that this resolver is used for end-user applications, then it should probably ignore NXDOMAIN and try against other servers in order to try and get a working record.", "Any idea when this will be fixed?", "@trustin Hi, I confirm that we can see the same behaviour on Centos7 ", "Ping...\r\n@trustin , @normanmaurer , @Scottmitch , @fredericBregier , @nmittler , ...", "Hello, \r\n\r\nif it's any help, we worked this pb around by extending original Netty Client and retrieving its original Bootstrap object, setting resolver to 'null'. 
\r\n\r\nBy doing so, Netty will eventually use io.netty.resolver.DefaultNameResolver, and so /etc/resolv.conf for DNS name resolutions.\r\n\r\n```java\r\n ...\r\n // Setting resolver to null will default it to io.netty.resolver.DefaultNameResolver.\r\n this.getBootstrap().resolver(null);\r\n ...\r\n```\r\n ", "@fmazoyer \r\n\r\nThe problem is that `io.netty.resolver.DefaultNameResolver` executes name lookup and blocks the caller thread", "@mrniko \r\nWe didn't run in such an issue.\r\nDid you check the call stack with jstack or kill -3?\r\nSorry I can't help more :-( ", "@fmazoyer @mrniko means that the lookup is a blocking call instead of \"event driven\" like the other nio calls, not that it locks up the application. If you are expecting the execution to continue while waiting for the DNS lookup, then you would need to wrap your resolver calls in threads (like a `java.util.concurrent.Executors.newSingleThreadExecutor()` ) when using the `io.netty.resolver.DefaultNameResolver`", "I will have a look again this week. Sorry been super busy lately. ", "@mrniko sorry it took me so long to come back to you. 
I just tried to reproduce it with the code and config you provided but no luck :(\r\n\r\n```java\r\n @Test\r\n public void test() throws Throwable {\r\n final NioEventLoopGroup niogroup = new NioEventLoopGroup();\r\n\r\n final DnsNameResolverBuilder builder = new DnsNameResolverBuilder();\r\n builder.channelType(NioDatagramChannel.class).nameServerProvider(\r\n new SequentialDnsServerAddressStreamProvider(new InetSocketAddress(\"8.8.8.8\", 53),\r\n new InetSocketAddress(\"127.0.0.1\", 53), new InetSocketAddress(\"87.98.175.85\", 53), new InetSocketAddress(\"51.254.25.115\", 53)))\r\n .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY);\r\n\r\n final DnsAddressResolverGroup group = new DnsAddressResolverGroup(builder);\r\n ExecutorService es = Executors.newFixedThreadPool(2);\r\n final CountDownLatch latch = new CountDownLatch(100);\r\n final Queue<Object> results = new ConcurrentLinkedQueue<Object>();\r\n for (int i = 0; i < 100; i++) {\r\n es.execute(new Runnable() {\r\n @Override\r\n public void run() {\r\n AddressResolver<InetSocketAddress> resolver = group.getResolver(niogroup.next());\r\n try {\r\n URI uri = new URI(\"redis://dev.myredis.com:6379\");\r\n Future<List<InetSocketAddress>> allNodes = resolver.resolveAll(InetSocketAddress.createUnresolved(uri.getHost(), uri.getPort()));\r\n //Thread.sleep(100);\r\n List<InetSocketAddress> list = allNodes.syncUninterruptibly().getNow();\r\n results.offer(list);\r\n } catch (URISyntaxException e) {\r\n // TODO Auto-generated catch block\r\n e.printStackTrace();\r\n } catch (Throwable cause) {\r\n results.offer(cause);\r\n } finally {\r\n latch.countDown();\r\n }\r\n }\r\n });\r\n }\r\n latch.await();\r\n group.close();\r\n es.shutdown();\r\n niogroup.shutdownGracefully();\r\n\r\n for (;;) {\r\n Object o = results.poll();\r\n if (o == null) {\r\n break;\r\n }\r\n if (o instanceof Throwable) {\r\n throw (Throwable) o;\r\n }\r\n }\r\n }\r\n```\r\n\r\nI used the same dns config that you attached. This in on Xubuntu. 
Any idea ?", "Also @johnjaylward ", "This was on latest 4.1 branch btw.", "@normanmaurer \r\n\r\nThanks for taking the time!\r\n\r\nCould you try exactly the same code I use? without using DnsNameResolverBuilder. ", "@mrniko I did... same result :/", "@normanmaurer \r\n\r\nI run it on windows 10", "@mrniko unfortunately I have no access to windows. Can you try to reproduce on a linux vm ?", "@mrniko maybe you could capture a dump with Wireshark that includes the queries ? Best would be if you adjust it to only to one resolveAll(...) call.", "@mrniko also even if I can not reproduce by now I would be happy to review a PR that fix the problem for you. ", "@normanmaurer\r\n\r\nI'm testing with Netty 4.1.32 and 64-bit JDK 1.8.0_191 on win 10 and noticed strange behaviour:\r\n\r\nMy previous test **doesn't work** with:\r\n\r\n```java\r\nDnsAddressResolverGroup group = new DnsAddressResolverGroup(NioDatagramChannel.class,\r\n DnsServerAddressStreamProviders.platformDefault());\r\n```\r\nand dns sequence got from `DnsServerAddressStreamProviders.platformDefault().nameServerAddressStream(\"\")` is \r\n`/51.254.25.115:53, /127.0.0.1:53, /8.8.8.8:53, dns: /87.98.175.85:53`\r\n\r\n**But it works** if I create group object like in your test with the same dns sequence:\r\n\r\n```java\r\nfinal DnsNameResolverBuilder builder = new DnsNameResolverBuilder();\r\nbuilder.channelType(NioDatagramChannel.class).nameServerProvider(\r\n new SequentialDnsServerAddressStreamProvider(new InetSocketAddress(\"51.254.25.115\", 53),\r\n new InetSocketAddress(\"127.0.0.1\", 53), new InetSocketAddress(\"8.8.8.8\", 53), new InetSocketAddress(\"87.98.175.85\", 53)))\r\n .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY);\r\n\r\nfinal DnsAddressResolverGroup group = new DnsAddressResolverGroup(builder);\r\n```\r\n\r\nHow it could be?", "@johnjaylward @mrniko after re-reading the RFC closely and inspecting our code again I think I found the bug. 
Can you please check if https://github.com/netty/netty/pull/8731 works for you ?\r\n\r\nAlso thanks to @Lukasa to discuss this with me :)", "@mrniko also can you test removing ` .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY);` ?", "@normanmaurer \r\n\r\nIssue came back if I comment out this code `.resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY);`", "@mrniko ok cool... With #8731 the problem is gone also when `.resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY)` is NOT used (at least for me). Can you try it out ?", "@normanmaurer \r\nEverything works fine with group object:\r\n\r\n```java\r\nDnsAddressResolverGroup group = new DnsAddressResolverGroup(new DnsNameResolverBuilder()\r\n .channelType(NioDatagramChannel.class)\r\n .nameServerProvider(DnsServerAddressStreamProviders.platformDefault())\r\n .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY));\r\n```\r\nand doesn't work if group object is follow:\r\n\r\n```java\r\nDnsAddressResolverGroup group = new DnsAddressResolverGroup(new DnsNameResolverBuilder()\r\n .channelType(NioDatagramChannel.class)\r\n .nameServerProvider(DnsServerAddressStreamProviders.platformDefault()));\r\n```", "@normanmaurer\r\nDo you have netty-all.jar for the latest version?", "@mrniko let me build one for you with the pr included... one sec", "@mrniko https://drive.google.com/open?id=11pNwvCkl3ECB3CpjlUzDnuG0m3Td27yg please try with this jar and report back.", "@normanmaurer \r\n\r\nThis page by this link reports 404", "@mrniko just fixed the link... please try again", "@normanmaurer \r\n\r\nNow it works. Thank you!", "@mrniko the code or the link, or both ? ;)", "@normanmaurer \r\n\r\nBoth :) ", "Yeah :) ... Please note in the PR as well... 
", "@normanmaurer ,\r\nThanks for fixing this.\r\n\r\nWe attempted to workaround this by setting [dnsMonitoringInterval](https://github.com/redisson/redisson/wiki/2.-Configuration#dnsmonitoringinterval) to -1.\r\n`\r\nconfig.useSingleServer().setDnsMonitoringInterval(-1);\r\n`\r\nand surprisingly, it seems to work.\r\n\r\nHowever, JAVA_HOME\\jre\\lib\\security\\java.security had the following parameters:\r\n```\r\n# caching forever\r\nnetworkaddress.cache.ttl=-1 \r\n\r\n# 10 seconds.\r\nnetworkaddress.cache.negative.ttl=10 \r\n```\r\n\r\nDo you think it is a legitimate workaround?\r\n\r\nAlso posted the question in another issue: https://github.com/redisson/redisson/issues/1486#issuecomment-456277039", "@hlms I have no idea how exactly redission uses it so you will need to ask there.", "![Captureh](https://user-images.githubusercontent.com/62762887/102698760-143fd580-4240-11eb-99b2-c9d93c9a001b.PNG)\r\nAny Help Plz!", "Add this propertie for your custom services:\r\n\r\neureka.instance.prefer-ip-address=true" ]
[ "Why this change?", "@ejona86 it made the test easier to adjust as it also change the number of queries :)", "Ah. Makes sense." ]
"2019-01-18T13:28:29Z"
[ "defect" ]
DNS resolver failing to find valid DNS record
### Expected behavior The DNS resolver should find valid DNS records. ### Actual behavior Exception thrown: ``` Caused by: io.netty.resolver.dns.DnsNameResolverContext$SearchDomainUnknownHostException: Search domain query failed. Original hostname: 'host.toplevel' failed to resolve 'host.toplevel.search.domain' after 7 queries at io.netty.resolver.dns.DnsNameResolverContext.finishResolve(DnsNameResolverContext.java:721) at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:663) at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:306) at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:295) at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:636) at io.netty.resolver.dns.DnsNameResolverContext$3.operationComplete(DnsNameResolverContext.java:342) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.resolver.dns.DnsQueryContext.setSuccess(DnsQueryContext.java:197) at io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:180) at io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:969) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943) at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) ``` ### Steps to reproduce 1. Configure a top level domain `someDomain` on a DNS server you own 1. Configure a host under the new top level domain `someHost.someDomain` 1. Configure multiple resolvers on the DNS client machine that will run the Netty code. i.e. 8.8.8.8, 192.168.1.1, and 10.0.0.1 (I have 3 resolvers configured, each pointing to different DNS masters - global DNS, local personal private network, company private network over a VPN) 1. Configure the search domain to not match the top level domain, i.e. `search.otherDomain` on the DNS client machine that will run the Netty code 1. 
Ask netty to resolve `someHost.someDomain` 1. failure. ### Minimal yet complete reproducer code (or URL to code) I'm not using Netty directly so I'm not sure what to put here. Do you want my Redisson code? ### Netty version Breaks when I upgrade to Redisson 3.6+ which pulls in Netty 4.1.20+ When forcing downgrade to Netty 4.1.13 the problem still shows, but with a slightly different stack trace. ### JVM version (e.g. `java -version`) java version "1.8.0_162" Java(TM) SE Runtime Environment (build 1.8.0_162-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode) ### OS version (e.g. `uname -a`) Windows 10, Centos 7, Ubuntu 16.04
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java" ]
[ "resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java" ]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java index 847bfb5b9f3..f625be975ce 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsResolveContext.java @@ -497,6 +497,29 @@ private void onResponse(final DnsServerAddressStream nameServerAddrStream, final queryLifecycleObserver.queryNoAnswer(code), true, promise, null); } else { queryLifecycleObserver.queryFailed(NXDOMAIN_QUERY_FAILED_EXCEPTION); + + // Try with the next server if is not authoritative for the domain. + // + // From https://tools.ietf.org/html/rfc1035 : + // + // RCODE Response code - this 4 bit field is set as part of + // responses. The values have the following + // interpretation: + // + // .... + // .... + // + // 3 Name Error - Meaningful only for + // responses from an authoritative name + // server, this code signifies that the + // domain name referenced in the query does + // not exist. + // .... + // .... + if (!res.isAuthoritativeAnswer()) { + query(nameServerAddrStream, nameServerAddrStreamIndex + 1, question, + newDnsQueryLifecycleObserver(question), true, promise, null); + } } } finally { ReferenceCountUtil.safeRelease(envelope);
diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java index 1326a7975e8..d03ee870775 100644 --- a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java +++ b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java @@ -1143,7 +1143,7 @@ public void aAndAAAAQueryShouldTryFirstDnsServerBeforeSecond() throws IOExceptio new TestRecursiveCacheDnsQueryLifecycleObserverFactory(); DnsNameResolverBuilder builder = new DnsNameResolverBuilder(group.next()) - .resolvedAddressTypes(ResolvedAddressTypes.IPV6_PREFERRED) + .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY) .dnsQueryLifecycleObserverFactory(lifecycleObserverFactory) .channelType(NioDatagramChannel.class) .optResourceEnabled(false) @@ -1156,18 +1156,12 @@ public void aAndAAAAQueryShouldTryFirstDnsServerBeforeSecond() throws IOExceptio TestDnsQueryLifecycleObserver observer = lifecycleObserverFactory.observers.poll(); assertNotNull(observer); - assertEquals(2, lifecycleObserverFactory.observers.size()); + assertEquals(1, lifecycleObserverFactory.observers.size()); assertEquals(2, observer.events.size()); QueryWrittenEvent writtenEvent = (QueryWrittenEvent) observer.events.poll(); assertEquals(dnsServer1.localAddress(), writtenEvent.dnsServerAddress); QueryFailedEvent failedEvent = (QueryFailedEvent) observer.events.poll(); - observer = lifecycleObserverFactory.observers.poll(); - assertEquals(2, observer.events.size()); - writtenEvent = (QueryWrittenEvent) observer.events.poll(); - assertEquals(dnsServer1.localAddress(), writtenEvent.dnsServerAddress); - failedEvent = (QueryFailedEvent) observer.events.poll(); - observer = lifecycleObserverFactory.observers.poll(); assertEquals(2, observer.events.size()); writtenEvent = (QueryWrittenEvent) observer.events.poll();
test
val
"2019-01-17T09:14:27"
"2018-09-04T18:47:33Z"
johnjaylward
val
netty/netty/8743_8744
netty/netty
netty/netty/8743
netty/netty/8744
[ "timestamp(timedelta=1452.0, similarity=1.0)" ]
cf03ed0478c12a7ed400dc00670351d46e0c48e3
b57364d86c763265639ae812469b3725e155bb75
[]
[]
"2019-01-22T07:46:38Z"
[]
Add logs for allocating large bytebuf in PooledByteBufAllocator
### Expected behavior Print logs for allocating large bytebuf in PooledByteBufAllocator, because many applications may care about the memory consumption, and want to know more info about the memory allocation in case of OOM (for instance, Direct Memory OOM). ### Actual behavior No logs ### Steps to reproduce NA ### Minimal yet complete reproducer code (or URL to code) ``` protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) { PoolThreadCache cache = threadCache.get(); PoolArena<ByteBuffer> directArena = cache.directArena; final ByteBuf buf; if (directArena != null) { buf = directArena.allocate(cache, initialCapacity, maxCapacity); } else { buf = PlatformDependent.hasUnsafe() ? UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity) : new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity); } return toLeakAwareBuffer(buf); } ``` ### Netty version 4.1.30 ### JVM version (e.g. `java -version`) NA ### OS version (e.g. `uname -a`) NA
[ "buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java" ]
[ "buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java b/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java index de6eee1d54d..440b42cef3c 100644 --- a/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java +++ b/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java @@ -307,6 +307,11 @@ private static int validateAndCalculateChunkSize(int pageSize, int maxOrder) { @Override protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) { + if (initialCapacity > 1024 * 1024) { + logger.info("Try to allocate heap buffer with initialCapacity: " + + initialCapacity + " maxCapacity: " + maxCapacity + ", current usedHeapMemory: " + + metric.usedHeapMemory()); + } PoolThreadCache cache = threadCache.get(); PoolArena<byte[]> heapArena = cache.heapArena; @@ -324,6 +329,11 @@ protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) { @Override protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) { + if (initialCapacity > 1024 * 1024) { + logger.info("Try to allocate direct buffer with initialCapacity: " + + initialCapacity + " maxCapacity: " + maxCapacity + ", current usedDirectMemory: " + + metric.usedDirectMemory()); + } PoolThreadCache cache = threadCache.get(); PoolArena<ByteBuffer> directArena = cache.directArena;
null
test
val
"2019-01-21T13:26:44"
"2019-01-22T07:15:18Z"
liupc
val
netty/netty/8772_8793
netty/netty
netty/netty/8772
netty/netty/8793
[ "keyword_pr_to_issue" ]
948d4a9ec58aef092c84eb76e672fd06e81cc13c
32563bfcc129ef9332f175c277e4f6b59fd37d8c
[ "@rkapsi hmmm good question. For me it sounds more like a bug. WDYT ?", "@normanmaurer same, I'll create a PR" ]
[ "nit: remove empty line", "assert return value", "release message ", "assert return value ", "ensure to release stuff ", "call `channel.finish()` and assert return value", "call `channel.finish()` and assert return value", "assert return value", "release message", "assert return value" ]
"2019-01-28T20:56:44Z"
[]
HttpObjectAggregator w/ isStartMessage: Bug or Feature?
Let's say you want to be very selective when and if to aggregate HttpRequests and Responses. Say you want to only aggregate responses with a content-type of application/json. Naturally I went with this: ```java public class MyHttpObjectAggregator extends HttpObjectAggregator { public MyHttpObjectAggregator(int maxContentLength) { super(maxContentLength); } @Override protected boolean isStartMessage(HttpObject msg) throws Exception { if (msg instanceof HttpResponse) { HttpResponse response = (HttpResponse) msg; HttpHeaders headers = response.headers(); String contentType = headers.get(HttpHeaderNames.CONTENT_TYPE); if (AsciiString.contentEqualsIgnoreCase(contentType, HttpHeaderValues.APPLICATION_JSON)) { return true; } } return false; } } ``` ... but it will not work due to the following two conditions 1) isContentMessage returns true for any HttpContent https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java#L99 2) Subsequently we enter decode(), isContentMessage returns again true but `currentMessage` is null and early exits. Leaving the aggregator in a bad state. 
https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java#L261-L265 My workaround is to introduce a toggle flag in my aggregator that looks like this: ```java public class MyHttpObjectAggregator extends HttpObjectAggregator { private boolean aggregating; public MyHttpObjectAggregator(int maxContentLength) { super(maxContentLength); } @Override protected boolean isStartMessage(HttpObject msg) throws Exception { if (msg instanceof HttpResponse) { HttpResponse response = (HttpResponse) msg; HttpHeaders headers = response.headers(); String contentType = headers.get(HttpHeaderNames.CONTENT_TYPE); if (AsciiString.contentEqualsIgnoreCase(contentType, HttpHeaderValues.APPLICATION_JSON)) { aggregating = true; return true; } else { aggregating = false; return false; } } return false; } @Override protected boolean isContentMessage(HttpObject msg) throws Exception { return aggregating && super.isContentMessage(msg); } } ``` I'm wondering if this is intended behavior or a bug? ### Expected behavior ### Actual behavior ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
[ "codec/src/main/java/io/netty/handler/codec/MessageAggregator.java" ]
[ "codec/src/main/java/io/netty/handler/codec/MessageAggregator.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java b/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java index 2cdb880c993..ca43f1d797a 100644 --- a/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java +++ b/codec/src/main/java/io/netty/handler/codec/MessageAggregator.java @@ -61,6 +61,8 @@ public abstract class MessageAggregator<I, S, C extends ByteBufHolder, O extends private ChannelHandlerContext ctx; private ChannelFutureListener continueResponseWriteListener; + private boolean aggregating; + /** * Creates a new instance. * @@ -96,7 +98,20 @@ public boolean acceptInboundMessage(Object msg) throws Exception { @SuppressWarnings("unchecked") I in = (I) msg; - return (isContentMessage(in) || isStartMessage(in)) && !isAggregated(in); + if (isAggregated(in)) { + return false; + } + + // NOTE: It's tempting to make this check only if aggregating is false. There are however + // side conditions in decode(...) in respect to large messages. 
+ if (isStartMessage(in)) { + aggregating = true; + return true; + } else if (aggregating && isContentMessage(in)) { + return true; + } + + return false; } /** @@ -192,6 +207,8 @@ protected final ChannelHandlerContext ctx() { @Override protected void decode(final ChannelHandlerContext ctx, I msg, List<Object> out) throws Exception { + assert aggregating; + if (isStartMessage(msg)) { handlingOversizedMessage = false; if (currentMessage != null) { @@ -246,7 +263,7 @@ public void operationComplete(ChannelFuture future) throws Exception { } else { aggregated = beginAggregation(m, EMPTY_BUFFER); } - finishAggregation(aggregated); + finishAggregation0(aggregated); out.add(aggregated); return; } @@ -301,7 +318,7 @@ public void operationComplete(ChannelFuture future) throws Exception { } if (last) { - finishAggregation(currentMessage); + finishAggregation0(currentMessage); // All done out.add(currentMessage); @@ -371,6 +388,11 @@ protected abstract Object newContinueResponse(S start, int maxContentLength, Cha */ protected void aggregate(O aggregated, C content) throws Exception { } + private void finishAggregation0(O aggregated) throws Exception { + aggregating = false; + finishAggregation(aggregated); + } + /** * Invoked when the specified {@code aggregated} message is about to be passed to the next handler in the pipeline. */ @@ -378,6 +400,7 @@ protected void finishAggregation(O aggregated) throws Exception { } private void invokeHandleOversizedMessage(ChannelHandlerContext ctx, S oversized) throws Exception { handlingOversizedMessage = true; + aggregating = false; currentMessage = null; try { handleOversizedMessage(ctx, oversized); @@ -441,6 +464,7 @@ private void releaseCurrentMessage() { currentMessage.release(); currentMessage = null; handlingOversizedMessage = false; + aggregating = false; } } }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java index 7dc0ac5a519..503c93339f0 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java @@ -23,7 +23,10 @@ import io.netty.handler.codec.DecoderResult; import io.netty.handler.codec.DecoderResultProvider; import io.netty.handler.codec.TooLongFrameException; +import io.netty.util.AsciiString; import io.netty.util.CharsetUtil; +import io.netty.util.ReferenceCountUtil; + import org.junit.Test; import org.mockito.Mockito; @@ -40,6 +43,7 @@ import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; +import static org.junit.Assert.assertSame; public class HttpObjectAggregatorTest { @@ -517,4 +521,123 @@ public void testReplaceAggregatedResponse() { aggregatedRep.release(); replacedRep.release(); } + + @Test + public void testSelectiveRequestAggregation() { + HttpObjectAggregator myPostAggregator = new HttpObjectAggregator(1024 * 1024) { + @Override + protected boolean isStartMessage(HttpObject msg) throws Exception { + if (msg instanceof HttpRequest) { + HttpRequest request = (HttpRequest) msg; + HttpMethod method = request.method(); + + if (method.equals(HttpMethod.POST)) { + return true; + } + } + + return false; + } + }; + + EmbeddedChannel channel = new EmbeddedChannel(myPostAggregator); + + try { + // Aggregate: POST + HttpRequest request1 = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST, "/"); + HttpContent content1 = new DefaultHttpContent(Unpooled.copiedBuffer("Hello, World!", CharsetUtil.UTF_8)); + request1.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN); + + assertTrue(channel.writeInbound(request1, content1, LastHttpContent.EMPTY_LAST_CONTENT)); + + // 
Getting an aggregated response out + Object msg1 = channel.readInbound(); + try { + assertTrue(msg1 instanceof FullHttpRequest); + } finally { + ReferenceCountUtil.release(msg1); + } + + // Don't aggregate: non-POST + HttpRequest request2 = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "/"); + HttpContent content2 = new DefaultHttpContent(Unpooled.copiedBuffer("Hello, World!", CharsetUtil.UTF_8)); + request2.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN); + + try { + assertTrue(channel.writeInbound(request2, content2, LastHttpContent.EMPTY_LAST_CONTENT)); + + // Getting the same response objects out + assertSame(request2, channel.readInbound()); + assertSame(content2, channel.readInbound()); + assertSame(LastHttpContent.EMPTY_LAST_CONTENT, channel.readInbound()); + } finally { + ReferenceCountUtil.release(request2); + ReferenceCountUtil.release(content2); + } + + assertFalse(channel.finish()); + } finally { + channel.close(); + } + } + + @Test + public void testSelectiveResponseAggregation() { + HttpObjectAggregator myTextAggregator = new HttpObjectAggregator(1024 * 1024) { + @Override + protected boolean isStartMessage(HttpObject msg) throws Exception { + if (msg instanceof HttpResponse) { + HttpResponse response = (HttpResponse) msg; + HttpHeaders headers = response.headers(); + + String contentType = headers.get(HttpHeaderNames.CONTENT_TYPE); + if (AsciiString.contentEqualsIgnoreCase(contentType, HttpHeaderValues.TEXT_PLAIN)) { + return true; + } + } + + return false; + } + }; + + EmbeddedChannel channel = new EmbeddedChannel(myTextAggregator); + + try { + // Aggregate: text/plain + HttpResponse response1 = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + HttpContent content1 = new DefaultHttpContent(Unpooled.copiedBuffer("Hello, World!", CharsetUtil.UTF_8)); + response1.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN); + + assertTrue(channel.writeInbound(response1, content1, 
LastHttpContent.EMPTY_LAST_CONTENT)); + + // Getting an aggregated response out + Object msg1 = channel.readInbound(); + try { + assertTrue(msg1 instanceof FullHttpResponse); + } finally { + ReferenceCountUtil.release(msg1); + } + + // Don't aggregate: application/json + HttpResponse response2 = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK); + HttpContent content2 = new DefaultHttpContent(Unpooled.copiedBuffer("{key: 'value'}", CharsetUtil.UTF_8)); + response2.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.APPLICATION_JSON); + + try { + assertTrue(channel.writeInbound(response2, content2, LastHttpContent.EMPTY_LAST_CONTENT)); + + // Getting the same response objects out + assertSame(response2, channel.readInbound()); + assertSame(content2, channel.readInbound()); + assertSame(LastHttpContent.EMPTY_LAST_CONTENT, channel.readInbound()); + } finally { + ReferenceCountUtil.release(response2); + ReferenceCountUtil.release(content2); + } + + assertFalse(channel.finish()); + } finally { + channel.close(); + } + } }
val
val
"2019-01-28T19:45:38"
"2019-01-23T20:19:21Z"
rkapsi
val
netty/netty/8796_8798
netty/netty
netty/netty/8796
netty/netty/8798
[ "timestamp(timedelta=0.0, similarity=0.883875586006625)" ]
948d4a9ec58aef092c84eb76e672fd06e81cc13c
a6e6a9151f09e48d92ed96df9b3f4437d78856dc
[ "@aimozg sounds like a bug... Can you submit a PR with a test and fix ?", "@normanmaurer I'd rather someone more experienced with netty internals do it, really.\r\nI'm unsure how to fix it - \r\n1) Just return an min-size empty ASC?\r\n2) Allow 0-size in constructors? (Is that used somewhere)\r\n3) Something else?\r\n\r\nquick-and-dirty 1) would be \r\n```diff\r\n @Override\r\n public AppendableCharSequence subSequence(int start, int end) {\r\n+ if (start == end) {\r\n+ return new AppendableCharSequence(1);\r\n+ }\r\n return new AppendableCharSequence(Arrays.copyOfRange(chars, start, end));\r\n }\r\n```", "@aimozg please check https://github.com/netty/netty/pull/8798 ... That said you should not use this class as its in the \"internal\" package and so may be removed / changed etc without notice." ]
[]
"2019-01-29T14:20:03Z"
[ "defect" ]
AppendableCharSequence.subSequence throws instead of returning empty subsequence
I am writing message decoder using as a base approach of `HttpRequestDecoder` (`ByteBuf -> AppendableCharSequence`) and encountered following problem: `AppendableCharSequence.subSequence(int start, int end);` violates the `CharSequence` interface contract when `start == end`. ### Expected behavior Return an empty sequence, as specified in `CharSequence.subSequence` doc: > _if <tt>start == end</tt> then an empty sequence is returned._ ### Actual behavior `IllegalArgumentException: length: 0 (length: >= 1)` ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ``` AppendableCharSequence asc = new AppendableCharSequence(128); asc.append('/'); Matcher m = Pattern.compile("/([^ ]*+)").matcher(asc); if (m.matches()) m.group(1); ``` or https://gist.github.com/aimozg/075074c6e6362f800bb1fff21bf6a047 ### Netty version 4.124.Final ### Java version 1.8
[ "common/src/main/java/io/netty/util/internal/AppendableCharSequence.java" ]
[ "common/src/main/java/io/netty/util/internal/AppendableCharSequence.java" ]
[ "common/src/test/java/io/netty/util/internal/AppendableCharSequenceTest.java" ]
diff --git a/common/src/main/java/io/netty/util/internal/AppendableCharSequence.java b/common/src/main/java/io/netty/util/internal/AppendableCharSequence.java index 408c32f3802..e8e6abf5661 100644 --- a/common/src/main/java/io/netty/util/internal/AppendableCharSequence.java +++ b/common/src/main/java/io/netty/util/internal/AppendableCharSequence.java @@ -63,6 +63,12 @@ public char charAtUnsafe(int index) { @Override public AppendableCharSequence subSequence(int start, int end) { + if (start == end) { + // If start and end index is the same we need to return an empty sequence to conform to the interface. + // As our expanding logic depends on the fact that we have a char[] with length > 0 we need to construct + // an instance for which this is true. + return new AppendableCharSequence(Math.min(16, chars.length)); + } return new AppendableCharSequence(Arrays.copyOfRange(chars, start, end)); }
diff --git a/common/src/test/java/io/netty/util/internal/AppendableCharSequenceTest.java b/common/src/test/java/io/netty/util/internal/AppendableCharSequenceTest.java index 2d7bab4ec84..9d08c3ee577 100644 --- a/common/src/test/java/io/netty/util/internal/AppendableCharSequenceTest.java +++ b/common/src/test/java/io/netty/util/internal/AppendableCharSequenceTest.java @@ -64,6 +64,16 @@ public void testSubSequence() { assertEquals("abcdefghij", master.subSequence(0, 10).toString()); } + @Test + public void testEmptySubSequence() { + AppendableCharSequence master = new AppendableCharSequence(26); + master.append("abcdefghijlkmonpqrstuvwxyz"); + AppendableCharSequence sub = master.subSequence(0, 0); + assertEquals(0, sub.length()); + sub.append('b'); + assertEquals('b', sub.charAt(0)); + } + private static void testSimpleAppend0(AppendableCharSequence seq) { String text = "testdata"; for (int i = 0; i < text.length(); i++) {
train
val
"2019-01-28T19:45:38"
"2019-01-29T12:45:31Z"
aimozg
val
netty/netty/8736_8799
netty/netty
netty/netty/8736
netty/netty/8799
[ "timestamp(timedelta=1.0, similarity=0.9442260863139813)" ]
948d4a9ec58aef092c84eb76e672fd06e81cc13c
91d3920aa298ea536be7b196f16b32b6ddd27f8d
[]
[ "This flow is a bit convoluted. How about we remove the surrounding if and make it clear why the condition is there:\r\n\r\n```\r\nif (line.length() == 0 && this.trailer == null) {\r\n return LastHttpContent.EMPTY_LAST_CONTENT; // optimization\r\n}\r\n// everything from the if today\r\nLastHttpContent trailer = this.trailer;\r\nif (trailer == null) {\r\n...\r\nreturn trailer;\r\n```\r\n\r\nAlternatively the if could be changed to `line.length() > 0 || this.trailer != null`, but it's non-obvious why it would be that way. (It's still more obvious than this code, in my mind, though.)" ]
"2019-01-29T15:29:55Z"
[ "defect" ]
HttpObjectDecoder ignores HTTP trailer header.
Hi Netty team! ### Behavior When I was writing a test for https://github.com/netty/netty/pull/8721 in **io.netty.handler.codec.http.HttpContentDecoderTest** I encountered a problem with parsing a trailer header using HttpObjectDecoder. HttpObjectDecoder is not able to handle the following bytes sequence ```java assertTrue(channel.writeInbound(Unpooled.copiedBuffer("My-Trailer: 42", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n\r\n", CharsetUtil.US_ASCII))); ``` but is able to deal with that ```java assertTrue(channel.writeInbound(Unpooled.copiedBuffer("My-Trailer: 42\r\n\r\n\r\n", CharsetUtil.US_ASCII))); ``` After a small debugging session, I noticed that HttpObjectDecoder is actually parsing the trailer but that line of code (**io.netty.handler.codec.http.HttpObjectDecoder:674**) ```java line = headerParser.parse(buffer); if (line == null) { return null; } ``` causes my trailer to be ignored, because **headerParser.parse(buffer)** on an empty **buffer** will return null instead of the parsed trailer.
### Minimal yet complete reproducer code (or URL to code) Didn't want to make pull-request (almost exact test is already pushed here https://github.com/netty/netty/pull/8721), but following test (class **io.netty.handler.codec.http.HttpContentDecoderTest**) will reproduce the problem ```java @Test public void testChunkedRequestDecompression() { HttpResponseDecoder decoder = new HttpResponseDecoder(); HttpContentDecoder decompressor = new HttpContentDecompressor(); EmbeddedChannel channel = new EmbeddedChannel(decoder, decompressor, null); String headers = "HTTP/1.1 200 OK\r\n" + "Transfer-Encoding: chunked\r\n" + "Trailer: My-Trailer\r\n" + "Content-Encoding: gzip\r\n\r\n"; channel.writeInbound(Unpooled.copiedBuffer((headers.getBytes(CharsetUtil.US_ASCII)))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer(Integer.toHexString(GZ_HELLO_WORLD.length) + "\r\n", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer(GZ_HELLO_WORLD))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n".getBytes(CharsetUtil.US_ASCII)))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("0\r\n", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("My-Trailer: 42", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n", CharsetUtil.US_ASCII))); assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n\r\n", CharsetUtil.US_ASCII))); Object ob1 = channel.readInbound(); assertThat(ob1, is(instanceOf(DefaultHttpResponse.class))); Object ob2 = channel.readInbound(); assertThat(ob1, is(instanceOf(DefaultHttpResponse.class))); HttpContent content = (HttpContent) ob2; assertEquals(HELLO_WORLD, content.content().toString(CharsetUtil.US_ASCII)); Object ob3 = channel.readInbound(); assertThat(ob1, is(instanceOf(DefaultHttpResponse.class))); LastHttpContent lastContent = (LastHttpContent) ob3; assertFalse(lastContent.trailingHeaders().isEmpty()); assertEquals("42", 
lastContent.trailingHeaders().get("My-Trailer")); assertHasInboundMessages(channel, false); assertHasOutboundMessages(channel, false); assertFalse(channel.finish()); } ``` ### Netty version 4.1.32.Final
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java index af1d642a039..f8220cdb04e 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java @@ -640,49 +640,50 @@ private LastHttpContent readTrailingHeaders(ByteBuf buffer) { if (line == null) { return null; } + LastHttpContent trailer = this.trailer; + if (line.length() == 0 && trailer == null) { + // We have received the empty line which signals the trailer is complete and did not parse any trailers + // before. Just return an empty last content to reduce allocations. + return LastHttpContent.EMPTY_LAST_CONTENT; + } + CharSequence lastHeader = null; - if (line.length() > 0) { - LastHttpContent trailer = this.trailer; - if (trailer == null) { - trailer = this.trailer = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, validateHeaders); - } - do { - char firstChar = line.charAt(0); - if (lastHeader != null && (firstChar == ' ' || firstChar == '\t')) { - List<String> current = trailer.trailingHeaders().getAll(lastHeader); - if (!current.isEmpty()) { - int lastPos = current.size() - 1; - //please do not make one line from below code - //as it breaks +XX:OptimizeStringConcat optimization - String lineTrimmed = line.toString().trim(); - String currentLastPos = current.get(lastPos); - current.set(lastPos, currentLastPos + lineTrimmed); - } - } else { - splitHeader(line); - CharSequence headerName = name; - if (!HttpHeaderNames.CONTENT_LENGTH.contentEqualsIgnoreCase(headerName) && + if (trailer == null) { + trailer = this.trailer = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, validateHeaders); + } + while (line.length() > 0) { + char firstChar = line.charAt(0); + if (lastHeader != null && (firstChar == ' ' || firstChar == '\t')) { + List<String> current = 
trailer.trailingHeaders().getAll(lastHeader); + if (!current.isEmpty()) { + int lastPos = current.size() - 1; + //please do not make one line from below code + //as it breaks +XX:OptimizeStringConcat optimization + String lineTrimmed = line.toString().trim(); + String currentLastPos = current.get(lastPos); + current.set(lastPos, currentLastPos + lineTrimmed); + } + } else { + splitHeader(line); + CharSequence headerName = name; + if (!HttpHeaderNames.CONTENT_LENGTH.contentEqualsIgnoreCase(headerName) && !HttpHeaderNames.TRANSFER_ENCODING.contentEqualsIgnoreCase(headerName) && !HttpHeaderNames.TRAILER.contentEqualsIgnoreCase(headerName)) { - trailer.trailingHeaders().add(headerName, value); - } - lastHeader = name; - // reset name and value fields - name = null; - value = null; - } - - line = headerParser.parse(buffer); - if (line == null) { - return null; + trailer.trailingHeaders().add(headerName, value); } - } while (line.length() > 0); - - this.trailer = null; - return trailer; + lastHeader = name; + // reset name and value fields + name = null; + value = null; + } + line = headerParser.parse(buffer); + if (line == null) { + return null; + } } - return LastHttpContent.EMPTY_LAST_CONTENT; + this.trailer = null; + return trailer; } protected abstract boolean isDecodingRequest();
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java index 017dbd5ff94..f062da2bf1f 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java @@ -683,4 +683,33 @@ public void testConnectionClosedBeforeHeadersReceived() { assertThat(message.decoderResult().cause(), instanceOf(PrematureChannelClosureException.class)); assertNull(channel.readInbound()); } + + @Test + public void testTrailerWithEmptyLineInSeparateBuffer() { + HttpResponseDecoder decoder = new HttpResponseDecoder(); + EmbeddedChannel channel = new EmbeddedChannel(decoder); + + String headers = "HTTP/1.1 200 OK\r\n" + + "Transfer-Encoding: chunked\r\n" + + "Trailer: My-Trailer\r\n"; + assertFalse(channel.writeInbound(Unpooled.copiedBuffer(headers.getBytes(CharsetUtil.US_ASCII)))); + assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n".getBytes(CharsetUtil.US_ASCII)))); + + assertTrue(channel.writeInbound(Unpooled.copiedBuffer("0\r\n", CharsetUtil.US_ASCII))); + assertTrue(channel.writeInbound(Unpooled.copiedBuffer("My-Trailer: 42\r\n", CharsetUtil.US_ASCII))); + assertTrue(channel.writeInbound(Unpooled.copiedBuffer("\r\n", CharsetUtil.US_ASCII))); + + HttpResponse response = channel.readInbound(); + assertEquals(2, response.headers().size()); + assertEquals("chunked", response.headers().get(HttpHeaderNames.TRANSFER_ENCODING)); + assertEquals("My-Trailer", response.headers().get(HttpHeaderNames.TRAILER)); + + LastHttpContent lastContent = channel.readInbound(); + assertEquals(1, lastContent.trailingHeaders().size()); + assertEquals("42", lastContent.trailingHeaders().get("My-Trailer")); + assertEquals(0, lastContent.content().readableBytes()); + lastContent.release(); + + assertFalse(channel.finish()); + } }
train
val
"2019-01-28T19:45:38"
"2019-01-19T22:19:45Z"
KowalczykBartek
val
netty/netty/7756_8800
netty/netty
netty/netty/7756
netty/netty/8800
[ "keyword_pr_to_issue", "timestamp(timedelta=19.0, similarity=0.866272799167824)" ]
8e72071d7648df705fc2372dc63b4840d33846f1
4b7e5c96b4e8f0ba5cabd6667fe203002f0390b2
[ "Yes, this is the excellent feature. Also, I would like to have ElementType.TYPE_PARAMETER as well.\r\n\r\nThe main problem that netty currently support Java 6.", "This makes perfect sense and we are planing to start work on the next major version of netty very soon which will hopefully be able to fix this.\r\n\r\nStay tuned, I will start a proposal for all of this soon.", "@raner / @kiril-me if you are interested you may want to contribute a PR against master which is Java8+ now.", "I can work on that.", "@kiril-me thanks a lot for all the help!" ]
[ "2019", "nit: final", "nit: `new @Sharable ChannelHandlerAdapter() { };`", "nit: `new ChannelHandlerAdapter() {};`", "Is this correct? Child class could be nonsharable, while it extends `@Sharable`.\r\nI think explicit `@Sharable` annotation for sharable classes is a good idea.", "@doom369 not sure I understand you correctly.. but `@Sharable` uses `@Inherited` so if the super-class is sharable the child class should be as well. ", "@normanmaurer missed that. Anyway, in that case:\r\n\r\n```\r\n * Indicates that an annotation type is automatically inherited. If\r\n * an Inherited meta-annotation is present on an annotation type\r\n * declaration, and the user queries the annotation type on a class\r\n * declaration, and the class declaration has no annotation for this type,\r\n * then the class's superclass will automatically be queried for the\r\n * annotation type. This process will be repeated until an annotation for this\r\n * type is found, or the top of the class hierarchy (Object)\r\n * is reached. If no superclass has an annotation for this type, then\r\n * the query will indicate that the class in question has no such annotation.\r\n```\r\n\r\n```\r\n @ChannelHandler.Sharable\r\n private static class A {\r\n }\r\n private static class B extends A {\r\n }\r\n @Test\r\n public void aaa() {\r\n System.out.println(B.class.isAnnotationPresent(ChannelHandler.Sharable.class));\r\n //prints true\r\n }\r\n```", "@doom369 agree... so you are saying we do not need the extra \"query\" ? If so I think you are right... @kiril-me did you check if you really need it ?", "@normanmaurer this check is useful only for the case when the class implements interface marked with `@Sharable`. I don't think this is real use case, however, for this at least 1 more additional test case should be added. Not a big deal.", "@kiril-me can you do this ?", "@normanmaurer There are differences between annotation of the class and inner anonymous class annotation. 
\r\n\r\n> @Inherited JavaDoc\r\n> Note that this meta-annotation type has no effect if the annotated\r\n> type is used to annotate anything other than a class. Note also\r\n> that this meta-annotation only causes annotations to be inherited\r\n> from superclasses; annotations on implemented interfaces have no\r\n> effect.\r\n\r\nAs @doom369 mentioned, isAnnotationPresent(Sharable.class) looking for @Sharable annotation starting from current class till Object.\r\n\r\nBut this scenario doesn't work for isAnnotationPresent, because @Inherit does not apply directly to the class:\r\n```\r\nChannelInboundHandler handler = new @Sharable ChannelInboundHandlerAdapter() {\r\n @Override\r\n public void channelRead(ChannelHandlerContext context, Object message) {\r\n context.write(message);\r\n }\r\n};\r\n```\r\nTests checking this. If you comment out following code, test will fails:\r\n```\r\nif (!sharable) { \r\n AnnotatedType annotatedType = clazz.getAnnotatedSuperclass();\r\n sharable = annotatedType.isAnnotationPresent(Sharable.class);\r\n }\r\n```\r\n\r\n", "Thanks for the explanation." ]
"2019-01-29T20:00:21Z"
[]
@Sharable should also support @Target(ElementType.TYPE_USE)
### Expected behavior `@Sharable` should also be usable with anonymous inner types, so that it is no longer necessary to make named classes (top-level or nested) for sharable handlers. Since Java 8, this is possible with the new `ElementType` `TYPE_USE`. The practical problem here is that either (a) Netty would have to, at some point, *require* Java 8 (don't think that's the case yet), or (b) an additional Java-8-specific `@Sharable` annotation would have to be introduced. ### Actual behavior `@Sharable` only targets `ElementType` `TYPE` and requires a proper (i.e., named) class. ### Steps to reproduce Add `@Sharable` annotation to inner class, observe compilation error. ### Minimal yet complete reproducer code (or URL to code) ``` ChannelInboundHandler echoHandler = new @Sharable ChannelInboundHandlerAdapter() { @Override public void channelRead(ChannelHandlerContext context, Object message) { context.write(message); } } ``` ### Netty version 4.1.22.Final ### JVM version (e.g. `java -version`) 1.8.0_152 ### OS version (e.g. `uname -a`) Darwin Kernel Version 16.7.0: Wed Oct 4 00:17:00 PDT 2017; root:xnu-3789.71.6~1/RELEASE_X86_64 x86_64 (MacOS 10.12.6)
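The reflection behavior this record's fix relies on can be shown in isolation. The following standalone sketch (all names here are illustrative stand-ins, not Netty's actual classes) demonstrates why a `TYPE_USE`-targeted `@Sharable` placed on an anonymous class is invisible to `Class#isAnnotationPresent` and must instead be read from `getAnnotatedSuperclass()`, which is exactly the extra lookup the gold patch adds:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.AnnotatedType;

// Plain top-level stand-in for a handler base class (illustrative, not Netty's).
class Handler { }

// Stand-in for ChannelHandler.Sharable, widened to TYPE_USE as the issue requests.
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.TYPE_USE})
@interface Sharable { }

public class SharableTypeUseDemo {

    static String check() {
        Handler h = new @Sharable Handler() { };
        Class<?> clazz = h.getClass();
        // The TYPE_USE annotation is not a declaration annotation of the
        // anonymous class, so the plain lookup misses it...
        boolean onClass = clazz.isAnnotationPresent(Sharable.class);
        // ...it is attached to the annotated supertype of the anonymous class,
        // which is why isSharable() must also consult getAnnotatedSuperclass().
        AnnotatedType superType = clazz.getAnnotatedSuperclass();
        boolean onSuperUse = superType.isAnnotationPresent(Sharable.class);
        return onClass + " " + onSuperUse;
    }

    public static void main(String[] args) {
        System.out.println(check()); // false true
    }
}
```

This mirrors the `testInnerClassSharable` case in the record's test patch: without the `getAnnotatedSuperclass()` query, the anonymous-class case would report not-sharable.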
[ "transport/src/main/java/io/netty/channel/ChannelHandler.java", "transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java" ]
[ "transport/src/main/java/io/netty/channel/ChannelHandler.java", "transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java" ]
[ "transport/src/test/java/io/netty/channel/ChannelHandlerAdapterTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/ChannelHandler.java b/transport/src/main/java/io/netty/channel/ChannelHandler.java index aca5901a8b4..1efdca492cf 100644 --- a/transport/src/main/java/io/netty/channel/ChannelHandler.java +++ b/transport/src/main/java/io/netty/channel/ChannelHandler.java @@ -210,7 +210,7 @@ public interface ChannelHandler { */ @Inherited @Documented - @Target(ElementType.TYPE) + @Target({ElementType.TYPE, ElementType.TYPE_USE}) @Retention(RetentionPolicy.RUNTIME) @interface Sharable { // no value diff --git a/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java b/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java index 722261ed32c..2b35591172c 100644 --- a/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java +++ b/transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java @@ -18,6 +18,7 @@ import io.netty.util.internal.InternalThreadLocalMap; +import java.lang.reflect.AnnotatedType; import java.util.Map; /** @@ -55,6 +56,10 @@ public boolean isSharable() { Boolean sharable = cache.get(clazz); if (sharable == null) { sharable = clazz.isAnnotationPresent(Sharable.class); + if (!sharable) { + AnnotatedType annotatedType = clazz.getAnnotatedSuperclass(); + sharable = annotatedType.isAnnotationPresent(Sharable.class); + } cache.put(clazz, sharable); } return sharable;
diff --git a/transport/src/test/java/io/netty/channel/ChannelHandlerAdapterTest.java b/transport/src/test/java/io/netty/channel/ChannelHandlerAdapterTest.java new file mode 100644 index 00000000000..06123dbb56d --- /dev/null +++ b/transport/src/test/java/io/netty/channel/ChannelHandlerAdapterTest.java @@ -0,0 +1,47 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel; + +import static org.junit.Assert.assertEquals; + +import org.junit.Test; + +import io.netty.channel.ChannelHandler.Sharable; + +public class ChannelHandlerAdapterTest { + + @Sharable + private static final class SharableChannelHandlerAdapter extends ChannelHandlerAdapter { + } + + @Test + public void testSharable() { + ChannelHandlerAdapter handler = new SharableChannelHandlerAdapter(); + assertEquals(true, handler.isSharable()); + } + + @Test + public void testInnerClassSharable() { + ChannelHandlerAdapter handler = new @Sharable ChannelHandlerAdapter() { }; + assertEquals(true, handler.isSharable()); + } + + @Test + public void testWithoutSharable() { + ChannelHandlerAdapter handler = new ChannelHandlerAdapter() { }; + assertEquals(false, handler.isSharable()); + } +}
train
val
"2019-01-30T13:41:42"
"2018-03-01T00:03:09Z"
raner
val
netty/netty/8846_8848
netty/netty
netty/netty/8846
netty/netty/8848
[ "timestamp(timedelta=1.0, similarity=0.9396118449762707)" ]
737519314153f9146eaf532cd4e945a621cf3fa5
c68e85b749b5433634dab59823a9748cea72fdf5
[ "@arukshani sounds about right... Are you interested in providing a PR to fix it ? /cc @bryce-anderson ", "Yup. Will send a PR with the fix @normanmaurer ", "Sent the PR https://github.com/netty/netty/pull/8848 to fix this." ]
[ "This requires that Upgrade exists in the same Connection header value as requiredHeaders. But each value could be put in its own Connection header. It would be better to join all the values together with a `,` and then splitHeader(). @normanmaurer, is there not a utility to do that?", "@ejona86 thats a good point... I think there is no such utility yet (or at least I could not remember and also could not find any). /cc @arukshani can you do the adjustment and also add a unit test for this ?", "@ejona86 and @normanmaurer I have done the necessary changes and added a unit test as well for that. Please review.", "call `concatenatedConnectionValue.setLength(concatenatedConnectionValue.length() -1)` to strip of the last comma", "@arukshani seems like the test code is completely identical except the different `upgradeString`. Consider moving all the code into an extra method and just pass in the `upgradeString` from both of the test methods to remove all the code-duplication", "Done", "@normanmaurer Done", "nit: you could also move all the `setUpServerChannel()` calls", "Done" ]
"2019-02-06T15:11:03Z"
[ "defect" ]
When more than one connection header is present in h2c upgrade request, upgrade fails
### Expected behavior $subject occurs, with this change https://github.com/netty/netty/pull/7824. (@bryce-anderson and @normanmaurer ) Now that more than one connection header is allowed with this https://github.com/netty/netty/pull/7824, line https://github.com/netty/netty/blob/737519314153f9146eaf532cd4e945a621cf3fa5/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java#L295 returns false and the upgrade fails since the connection header value returned from this is not the upgrade but some other connection header value. If we are allowing multiple connection headers, then in HttpServerUpgradeHandler I think we need to check whether any of the connection header value is upgrade and then remove other connection headers. https://tools.ietf.org/html/rfc7540#section-8.1.2.2 ### Steps to reproduce Send an upgrade request with multiple conection headers to HTTP/2 server and the request hangs. Example request: GET / HTTP/1.1 Host: localhost:9000 connection: keep-alive forwarded: by=127.0.0.1 content-length: 0 upgrade: h2c HTTP2-Settings: AAEAABAAAAIAAAABAAN_____AAQAAP__AAUAAEAAAAYAACAA connection: HTTP2-Settings,upgrade ### Netty version From 4.1.33.Final (This issue started to occur from From 4.1.23.Final onwards)
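The fix merged for this record joins all `Connection` header values before tokenizing, so "upgrade" is found no matter which header line carried it. A simplified, runnable sketch of that idea (Netty itself uses `AsciiString` helpers and `splitHeader`; plain `String` handling is used here as an assumption for a self-contained demo):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class ConnectionHeaderJoinDemo {

    // Simplified stand-in for the approach in HttpServerUpgradeHandler's fix:
    // concatenate every Connection header value with commas, then split into
    // trimmed, case-normalized tokens before looking for the required values.
    static List<String> splitJoined(List<String> connectionValues) {
        StringBuilder joined = new StringBuilder();
        for (String value : connectionValues) {
            joined.append(value).append(',');
        }
        if (joined.length() > 0) {
            joined.setLength(joined.length() - 1); // strip the trailing comma
        }
        List<String> tokens = new ArrayList<String>();
        for (String token : joined.toString().split(",")) {
            tokens.add(token.trim().toLowerCase(Locale.ROOT));
        }
        return tokens;
    }

    public static void main(String[] args) {
        // The failing request from the issue: the required values arrive on a
        // different Connection header line than "keep-alive".
        List<String> values = Arrays.asList("keep-alive", "HTTP2-Settings,upgrade");
        List<String> tokens = splitJoined(values);
        System.out.println(tokens.contains("upgrade") + " " + tokens.contains("http2-settings"));
        // true true
    }
}
```

Checking only the first `Connection` header (the pre-fix behavior) would see `keep-alive`, miss `upgrade`, and reject the h2c upgrade, which is the hang described above.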
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/CleartextHttp2ServerUpgradeHandlerTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java index f1f3efcb1de..2b54b0e4b21 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java @@ -14,9 +14,6 @@ */ package io.netty.handler.codec.http; -import static io.netty.util.AsciiString.containsContentEqualsIgnoreCase; -import static io.netty.util.AsciiString.containsAllContentEqualsIgnoreCase; - import io.netty.buffer.Unpooled; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; @@ -30,7 +27,10 @@ import static io.netty.handler.codec.http.HttpResponseStatus.SWITCHING_PROTOCOLS; import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1; +import static io.netty.util.AsciiString.containsAllContentEqualsIgnoreCase; +import static io.netty.util.AsciiString.containsContentEqualsIgnoreCase; import static io.netty.util.internal.ObjectUtil.checkNotNull; +import static io.netty.util.internal.StringUtil.COMMA; /** * A server-side handler that receives HTTP requests and optionally performs a protocol switch if @@ -284,16 +284,23 @@ private boolean upgrade(final ChannelHandlerContext ctx, final FullHttpRequest r } // Make sure the CONNECTION header is present. 
- CharSequence connectionHeader = request.headers().get(HttpHeaderNames.CONNECTION); - if (connectionHeader == null) { + List<String> connectionHeaderValues = request.headers().getAll(HttpHeaderNames.CONNECTION); + + if (connectionHeaderValues == null) { return false; } + final StringBuilder concatenatedConnectionValue = new StringBuilder(connectionHeaderValues.size() * 10); + for (CharSequence connectionHeaderValue : connectionHeaderValues) { + concatenatedConnectionValue.append(connectionHeaderValue).append(COMMA); + } + concatenatedConnectionValue.setLength(concatenatedConnectionValue.length() - 1); + // Make sure the CONNECTION header contains UPGRADE as well as all protocol-specific headers. Collection<CharSequence> requiredHeaders = upgradeCodec.requiredUpgradeHeaders(); - List<CharSequence> values = splitHeader(connectionHeader); + List<CharSequence> values = splitHeader(concatenatedConnectionValue); if (!containsContentEqualsIgnoreCase(values, HttpHeaderNames.UPGRADE) || - !containsAllContentEqualsIgnoreCase(values, requiredHeaders)) { + !containsAllContentEqualsIgnoreCase(values, requiredHeaders)) { return false; }
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/CleartextHttp2ServerUpgradeHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/CleartextHttp2ServerUpgradeHandlerTest.java index a544054b668..129f62d7574 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/CleartextHttp2ServerUpgradeHandlerTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/CleartextHttp2ServerUpgradeHandlerTest.java @@ -42,8 +42,15 @@ import java.util.ArrayList; import java.util.List; -import static org.junit.Assert.*; -import static org.mockito.Mockito.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; /** * Tests for {@link CleartextHttp2ServerUpgradeHandler} @@ -112,47 +119,35 @@ public void priorKnowledge() throws Exception { @Test public void upgrade() throws Exception { - setUpServerChannel(); - String upgradeString = "GET / HTTP/1.1\r\n" + - "Host: example.com\r\n" + - "Connection: Upgrade, HTTP2-Settings\r\n" + - "Upgrade: h2c\r\n" + - "HTTP2-Settings: AAMAAABkAAQAAP__\r\n\r\n"; - ByteBuf upgrade = Unpooled.copiedBuffer(upgradeString, CharsetUtil.US_ASCII); - - assertFalse(channel.writeInbound(upgrade)); - - assertEquals(1, userEvents.size()); - - Object userEvent = userEvents.get(0); - assertTrue(userEvent instanceof UpgradeEvent); - assertEquals("h2c", ((UpgradeEvent) userEvent).protocol()); - ReferenceCountUtil.release(userEvent); - - assertEquals(100, http2ConnectionHandler.connection().local().maxActiveStreams()); - assertEquals(65535, http2ConnectionHandler.connection().local().flowController().initialWindowSize()); - - assertEquals(1, 
http2ConnectionHandler.connection().numActiveStreams()); - assertNotNull(http2ConnectionHandler.connection().stream(1)); - - Http2Stream stream = http2ConnectionHandler.connection().stream(1); - assertEquals(State.HALF_CLOSED_REMOTE, stream.state()); - assertFalse(stream.isHeadersSent()); - - String expectedHttpResponse = "HTTP/1.1 101 Switching Protocols\r\n" + - "connection: upgrade\r\n" + - "upgrade: h2c\r\n\r\n"; - ByteBuf responseBuffer = channel.readOutbound(); - assertEquals(expectedHttpResponse, responseBuffer.toString(CharsetUtil.UTF_8)); - responseBuffer.release(); + "Host: example.com\r\n" + + "Connection: Upgrade, HTTP2-Settings\r\n" + + "Upgrade: h2c\r\n" + + "HTTP2-Settings: AAMAAABkAAQAAP__\r\n\r\n"; + validateClearTextUpgrade(upgradeString); + } - // Check that the preface was send (a.k.a the settings frame) - ByteBuf settingsBuffer = channel.readOutbound(); - assertNotNull(settingsBuffer); - settingsBuffer.release(); + @Test + public void upgradeWithMultipleConnectionHeaders() { + String upgradeString = "GET / HTTP/1.1\r\n" + + "Host: example.com\r\n" + + "Connection: keep-alive\r\n" + + "Connection: Upgrade, HTTP2-Settings\r\n" + + "Upgrade: h2c\r\n" + + "HTTP2-Settings: AAMAAABkAAQAAP__\r\n\r\n"; + validateClearTextUpgrade(upgradeString); + } - assertNull(channel.readOutbound()); + @Test + public void requiredHeadersInSeparateConnectionHeaders() { + String upgradeString = "GET / HTTP/1.1\r\n" + + "Host: example.com\r\n" + + "Connection: keep-alive\r\n" + + "Connection: HTTP2-Settings\r\n" + + "Connection: Upgrade\r\n" + + "Upgrade: h2c\r\n" + + "HTTP2-Settings: AAMAAABkAAQAAP__\r\n\r\n"; + validateClearTextUpgrade(upgradeString); } @Test @@ -254,4 +249,43 @@ private static ByteBuf settingsFrameBuf() { private static Http2Settings expectedSettings() { return new Http2Settings().maxConcurrentStreams(100).initialWindowSize(65535); } + + private void validateClearTextUpgrade(String upgradeString) { + setUpServerChannel(); + + ByteBuf upgrade = 
Unpooled.copiedBuffer(upgradeString, CharsetUtil.US_ASCII); + + assertFalse(channel.writeInbound(upgrade)); + + assertEquals(1, userEvents.size()); + + Object userEvent = userEvents.get(0); + assertTrue(userEvent instanceof UpgradeEvent); + assertEquals("h2c", ((UpgradeEvent) userEvent).protocol()); + ReferenceCountUtil.release(userEvent); + + assertEquals(100, http2ConnectionHandler.connection().local().maxActiveStreams()); + assertEquals(65535, http2ConnectionHandler.connection().local().flowController().initialWindowSize()); + + assertEquals(1, http2ConnectionHandler.connection().numActiveStreams()); + assertNotNull(http2ConnectionHandler.connection().stream(1)); + + Http2Stream stream = http2ConnectionHandler.connection().stream(1); + assertEquals(State.HALF_CLOSED_REMOTE, stream.state()); + assertFalse(stream.isHeadersSent()); + + String expectedHttpResponse = "HTTP/1.1 101 Switching Protocols\r\n" + + "connection: upgrade\r\n" + + "upgrade: h2c\r\n\r\n"; + ByteBuf responseBuffer = channel.readOutbound(); + assertEquals(expectedHttpResponse, responseBuffer.toString(CharsetUtil.UTF_8)); + responseBuffer.release(); + + // Check that the preface was send (a.k.a the settings frame) + ByteBuf settingsBuffer = channel.readOutbound(); + assertNotNull(settingsBuffer); + settingsBuffer.release(); + + assertNull(channel.readOutbound()); + } }
train
val
"2019-02-04T19:07:42"
"2019-02-06T09:10:46Z"
arukshani
val
netty/netty/8862_8863
netty/netty
netty/netty/8862
netty/netty/8863
[ "timestamp(timedelta=0.0, similarity=0.8443789738886954)" ]
c68e85b749b5433634dab59823a9748cea72fdf5
f176384a729c1d9352c9ed878b9b967ca2f31bf8
[ "+1" ]
[ "this will spuriously succeed on some machines won't it?", "It does not matter what we pass here as I implement `AbstractUnsafe.connect(...)` which will just notify the promise. Remember this is no socket etc. " ]
"2019-02-12T23:58:47Z"
[ "improvement" ]
Provide more information with the ClosedChannelException when fail operation with it
If we close a Channel because of a failed write we should store the "original" exception in the `ClosedChannelException` that will be used to fail further operations to make it easier to understand why the Channel was closed before. This came up yesterday in a discussion with @Scottmitch @ejona86 @carl-mastrangelo .
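The mechanism the gold patch introduces can be reproduced in a few lines. This standalone copy (same class name as the one added in the patch, but independent code, not the Netty source) shows a `ClosedChannelException` that carries the original I/O failure as its cause while skipping stack-trace capture so it stays cheap to create:

```java
import java.io.IOException;
import java.nio.channels.ClosedChannelException;

public class ExtendedCceDemo {

    // Sketch of the patch's ExtendedClosedChannelException: remember the
    // "original" exception that closed the channel, and avoid the cost of
    // filling in a stack trace for every subsequent failed operation.
    static final class ExtendedClosedChannelException extends ClosedChannelException {
        ExtendedClosedChannelException(Throwable cause) {
            if (cause != null) {
                initCause(cause);
            }
        }

        @Override
        public Throwable fillInStackTrace() {
            return this;
        }
    }

    static boolean carriesCause() {
        IOException original = new IOException("connection reset by peer");
        ClosedChannelException cce = new ExtendedClosedChannelException(original);
        // Operations failed with this exception now reveal *why* the channel
        // was closed, instead of surfacing a bare ClosedChannelException.
        return cce.getCause() == original;
    }

    public static void main(String[] args) {
        System.out.println(carriesCause()); // true
    }
}
```

In the patch itself, `AbstractChannel` records the failing write's throwable in `initialCloseCause` and wraps it this way for later `write`, `flush0`, and `ensureOpen` failures.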
[ "transport/src/main/java/io/netty/channel/AbstractChannel.java" ]
[ "transport/src/main/java/io/netty/channel/AbstractChannel.java", "transport/src/main/java/io/netty/channel/ExtendedClosedChannelException.java" ]
[ "transport/src/test/java/io/netty/channel/AbstractChannelTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/AbstractChannel.java b/transport/src/main/java/io/netty/channel/AbstractChannel.java index 11c14a9624b..8f8fe2f787a 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannel.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannel.java @@ -44,14 +44,14 @@ public abstract class AbstractChannel extends DefaultAttributeMap implements Cha private static final InternalLogger logger = InternalLoggerFactory.getInstance(AbstractChannel.class); - private static final ClosedChannelException FLUSH0_CLOSED_CHANNEL_EXCEPTION = ThrowableUtil.unknownStackTrace( - new ClosedChannelException(), AbstractUnsafe.class, "flush0()"); private static final ClosedChannelException ENSURE_OPEN_CLOSED_CHANNEL_EXCEPTION = ThrowableUtil.unknownStackTrace( - new ClosedChannelException(), AbstractUnsafe.class, "ensureOpen(...)"); + new ExtendedClosedChannelException(null), AbstractUnsafe.class, "ensureOpen(...)"); private static final ClosedChannelException CLOSE_CLOSED_CHANNEL_EXCEPTION = ThrowableUtil.unknownStackTrace( new ClosedChannelException(), AbstractUnsafe.class, "close(...)"); private static final ClosedChannelException WRITE_CLOSED_CHANNEL_EXCEPTION = ThrowableUtil.unknownStackTrace( - new ClosedChannelException(), AbstractUnsafe.class, "write(...)"); + new ExtendedClosedChannelException(null), AbstractUnsafe.class, "write(...)"); + private static final ClosedChannelException FLUSH0_CLOSED_CHANNEL_EXCEPTION = ThrowableUtil.unknownStackTrace( + new ExtendedClosedChannelException(null), AbstractUnsafe.class, "flush0()"); private static final NotYetConnectedException FLUSH0_NOT_YET_CONNECTED_EXCEPTION = ThrowableUtil.unknownStackTrace( new NotYetConnectedException(), AbstractUnsafe.class, "flush0()"); @@ -67,6 +67,7 @@ public abstract class AbstractChannel extends DefaultAttributeMap implements Cha private volatile EventLoop eventLoop; private volatile boolean registered; private boolean closeInitiated; + 
private Throwable initialCloseCause; /** Cache for the string representation of this channel */ private boolean strValActive; @@ -870,7 +871,7 @@ public final void write(Object msg, ChannelPromise promise) { // need to fail the future right away. If it is not null the handling of the rest // will be done in flush0() // See https://github.com/netty/netty/issues/2362 - safeSetFailure(promise, WRITE_CLOSED_CHANNEL_EXCEPTION); + safeSetFailure(promise, newWriteException(initialCloseCause)); // release message now to prevent resource-leak ReferenceCountUtil.release(msg); return; @@ -926,7 +927,7 @@ protected void flush0() { outboundBuffer.failFlushed(FLUSH0_NOT_YET_CONNECTED_EXCEPTION, true); } else { // Do not trigger channelWritabilityChanged because the channel is closed already. - outboundBuffer.failFlushed(FLUSH0_CLOSED_CHANNEL_EXCEPTION, false); + outboundBuffer.failFlushed(newFlush0Exception(initialCloseCause), false); } } finally { inFlush0 = false; @@ -946,12 +947,14 @@ protected void flush0() { * This is needed as otherwise {@link #isActive()} , {@link #isOpen()} and {@link #isWritable()} * may still return {@code true} even if the channel should be closed as result of the exception. 
*/ - close(voidPromise(), t, FLUSH0_CLOSED_CHANNEL_EXCEPTION, false); + initialCloseCause = t; + close(voidPromise(), t, newFlush0Exception(t), false); } else { try { shutdownOutput(voidPromise(), t); } catch (Throwable t2) { - close(voidPromise(), t2, FLUSH0_CLOSED_CHANNEL_EXCEPTION, false); + initialCloseCause = t; + close(voidPromise(), t2, newFlush0Exception(t), false); } } } finally { @@ -959,6 +962,30 @@ protected void flush0() { } } + private ClosedChannelException newWriteException(Throwable cause) { + if (cause == null) { + return WRITE_CLOSED_CHANNEL_EXCEPTION; + } + return ThrowableUtil.unknownStackTrace( + new ExtendedClosedChannelException(cause), AbstractUnsafe.class, "write(...)"); + } + + private ClosedChannelException newFlush0Exception(Throwable cause) { + if (cause == null) { + return FLUSH0_CLOSED_CHANNEL_EXCEPTION; + } + return ThrowableUtil.unknownStackTrace( + new ExtendedClosedChannelException(cause), AbstractUnsafe.class, "flush0()"); + } + + private ClosedChannelException newEnsureOpenException(Throwable cause) { + if (cause == null) { + return ENSURE_OPEN_CLOSED_CHANNEL_EXCEPTION; + } + return ThrowableUtil.unknownStackTrace( + new ExtendedClosedChannelException(cause), AbstractUnsafe.class, "ensureOpen(...)"); + } + @Override public final ChannelPromise voidPromise() { assertEventLoop(); @@ -971,7 +998,7 @@ protected final boolean ensureOpen(ChannelPromise promise) { return true; } - safeSetFailure(promise, ENSURE_OPEN_CLOSED_CHANNEL_EXCEPTION); + safeSetFailure(promise, newEnsureOpenException(initialCloseCause)); return false; } diff --git a/transport/src/main/java/io/netty/channel/ExtendedClosedChannelException.java b/transport/src/main/java/io/netty/channel/ExtendedClosedChannelException.java new file mode 100644 index 00000000000..3b908cd1930 --- /dev/null +++ b/transport/src/main/java/io/netty/channel/ExtendedClosedChannelException.java @@ -0,0 +1,32 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this 
file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.channel; + +import java.nio.channels.ClosedChannelException; + +final class ExtendedClosedChannelException extends ClosedChannelException { + + ExtendedClosedChannelException(Throwable cause) { + if (cause != null) { + initCause(cause); + } + } + + @Override + public Throwable fillInStackTrace() { + return this; + } +}
diff --git a/transport/src/test/java/io/netty/channel/AbstractChannelTest.java b/transport/src/test/java/io/netty/channel/AbstractChannelTest.java index fc5fb9b066d..9d5110ea8d8 100644 --- a/transport/src/test/java/io/netty/channel/AbstractChannelTest.java +++ b/transport/src/test/java/io/netty/channel/AbstractChannelTest.java @@ -15,8 +15,12 @@ */ package io.netty.channel; +import java.io.IOException; +import java.net.InetSocketAddress; import java.net.SocketAddress; +import java.nio.channels.ClosedChannelException; +import io.netty.util.NetUtil; import org.junit.Test; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; @@ -82,6 +86,69 @@ public void ensureDefaultChannelId() { assertTrue(channelId instanceof DefaultChannelId); } + @Test + public void testClosedChannelExceptionCarryIOException() throws Exception { + final IOException ioException = new IOException(); + final Channel channel = new TestChannel() { + private boolean open = true; + private boolean active; + + @Override + protected AbstractUnsafe newUnsafe() { + return new AbstractUnsafe() { + @Override + public void connect( + SocketAddress remoteAddress, SocketAddress localAddress, ChannelPromise promise) { + active = true; + promise.setSuccess(); + } + }; + } + + @Override + protected void doClose() { + active = false; + open = false; + } + + @Override + protected void doWrite(ChannelOutboundBuffer in) throws Exception { + throw ioException; + } + + @Override + public boolean isOpen() { + return open; + } + + @Override + public boolean isActive() { + return active; + } + }; + + EventLoop loop = new DefaultEventLoop(); + try { + registerChannel(loop, channel); + channel.connect(new InetSocketAddress(NetUtil.LOCALHOST, 8888)).sync(); + assertSame(ioException, channel.writeAndFlush("").await().cause()); + + assertClosedChannelException(channel.writeAndFlush(""), ioException); + assertClosedChannelException(channel.write(""), ioException); + 
assertClosedChannelException(channel.bind(new InetSocketAddress(NetUtil.LOCALHOST, 8888)), ioException); + } finally { + channel.close(); + loop.shutdownGracefully(); + } + } + + private static void assertClosedChannelException(ChannelFuture future, IOException expected) + throws InterruptedException { + Throwable cause = future.await().cause(); + assertTrue(cause instanceof ClosedChannelException); + assertSame(expected, cause.getCause()); + } + private static void registerChannel(EventLoop eventLoop, Channel channel) throws Exception { DefaultChannelPromise future = new DefaultChannelPromise(channel); channel.unsafe().register(eventLoop, future); @@ -90,11 +157,8 @@ private static void registerChannel(EventLoop eventLoop, Channel channel) throws private static class TestChannel extends AbstractChannel { private static final ChannelMetadata TEST_METADATA = new ChannelMetadata(false); - private class TestUnsafe extends AbstractUnsafe { - @Override - public void connect(SocketAddress remoteAddress, SocketAddress localAddress, ChannelPromise promise) { } - } + private final ChannelConfig config = new DefaultChannelConfig(this); TestChannel() { super(null); @@ -102,7 +166,7 @@ public void connect(SocketAddress remoteAddress, SocketAddress localAddress, Cha @Override public ChannelConfig config() { - return new DefaultChannelConfig(this); + return config; } @Override @@ -122,7 +186,12 @@ public ChannelMetadata metadata() { @Override protected AbstractUnsafe newUnsafe() { - return new TestUnsafe(); + return new AbstractUnsafe() { + @Override + public void connect(SocketAddress remoteAddress, SocketAddress localAddress, ChannelPromise promise) { + promise.setFailure(new UnsupportedOperationException()); + } + }; } @Override @@ -141,16 +210,16 @@ protected SocketAddress remoteAddress0() { } @Override - protected void doBind(SocketAddress localAddress) throws Exception { } + protected void doBind(SocketAddress localAddress) { } @Override - protected void doDisconnect() 
throws Exception { } + protected void doDisconnect() { } @Override - protected void doClose() throws Exception { } + protected void doClose() { } @Override - protected void doBeginRead() throws Exception { } + protected void doBeginRead() { } @Override protected void doWrite(ChannelOutboundBuffer in) throws Exception { }
train
val
"2019-02-12T17:05:30"
"2019-02-12T16:02:43Z"
normanmaurer
val
netty/netty/8875_8876
netty/netty
netty/netty/8875
netty/netty/8876
[ "timestamp(timedelta=0.0, similarity=0.8729818833690185)" ]
f176384a729c1d9352c9ed878b9b967ca2f31bf8
e609b5eeb7daba819f530fea500991c9cfc18412
[ "@lutovich fix looks good. Please submit a PR :)", "@normanmaurer done, thank you." ]
[ "please also call `channel.finish()` and assert return value.", "replace by `writeOutbound(...)` and assert return value", "replace by `writeOutbound(...)` ", "please also call `channel.finish()` and assert return value.", "nit: final", "private", "private ", "private ", "final ", "@lutovich Can we move this call out of the `if...else` block?", "@lutovich seems like this was missed ?", "I changed it back on purpose in the second commit to make sure listener only invokes `ChunkedInput#close()` on failure. On success it also needs to read progress and length from the input. Does this make sense?", "ah ok yeah.", "@lutovich We assume here that only `in.isEndOfInput` or `in.length` were able to fail, right? I would probably move this try/catch upper to be more \"targeted\" and specific. Otherwise, it's not obvious from the first glance why do we try to close the input once again here. ", "@lutovich Not related to this specific change, but... I wonder if there's a chance that the write was already set to success/failure 🤔 Here's [a comment](https://github.com/netty/netty/blob/33c18469349d45b139506446f356e2b4f3af7086/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L217-L225) from `doFlush`, not sure if the same is applicable here.", "@kachayev yeah, I think this try-catch is to handle failures in `#isEndOfInput()` and `#length()`. Added a commit with a tighter try-catch as you suggest. It makes code a bit longer and adds a `continue`. Not sure it's an improvement but please take a look :)", "@kachayev I'm not sure, to be honest. `currentWrite` is a `PendingWrite` and uses `trySuccess/tryFailure` to complete the underlying promise. It shouldn't be a problem to invoke `success/failure` more than one, right?", "We need Python's try/catch/**else** :) We can mark that first block failed in a separate variable, and test its value instead of continue.", "Right, should not be a problem. I'm just trying to understand if closing the input (potentially?) 
twice will cause any troubles.", "I think it should be ok... if not we can do another PR." ]
"2019-02-19T15:49:30Z"
[]
ChunkedWriteHandler does not close successful ChunkedInputs
### Expected behavior `ChunkedWriteHandler` invokes `ChunkedInput#close()` after the input is fully consumed and `ChunkedInput#isEndOfInput()` returns `true.` ### Actual behavior `ChunkedInput#close()` is not invoked when the input completes successfully. This can result in resource leaks when the input is based on an external resource, like a file handle. ### Steps to reproduce Run the attached test. ### Minimal yet complete reproducer code (or URL to code) ```java @Test public void testCloseSuccessfulChunkedInput() throws Exception { final int totalChunks = 10; final AtomicBoolean closeInvoked = new AtomicBoolean(); ChunkedInput<ByteBuf> input = new ChunkedInput<ByteBuf>() { int chunksProduced = 0; @Override public boolean isEndOfInput() throws Exception { return chunksProduced >= totalChunks; } @Override public void close() throws Exception { closeInvoked.set(true); } @Override public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception { return readChunk(ctx.alloc()); } @Override public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception { ByteBuf buf = allocator.buffer(4); buf.writeInt(chunksProduced); chunksProduced++; return buf; } @Override public long length() { return totalChunks; } @Override public long progress() { return chunksProduced; } }; EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); ch.writeAndFlush(input).sync(); for (int i = 0; i < totalChunks; i++) { ByteBuf buf = ch.readOutbound(); assertEquals(i, buf.readInt()); } assertTrue(closeInvoked.get()); // this assertion fails } ``` ### Netty version Problem exists only in `4.1.33.Final`. ### JVM version (e.g. `java -version`) ``` java version "1.8.0_201" Java(TM) SE Runtime Environment (build 1.8.0_201-b09) Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode) ``` ### OS version (e.g. 
`uname -a`) ``` Darwin MacBook-Pro.local 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64 ``` This issue can be fixed by closing the input in a write future listener [here](https://github.com/netty/netty/blob/netty-4.1.33.Final/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java#L286-L287) after the input is consumed. I have a branch with a tentative fix https://github.com/lutovich/netty/commit/1534f7778ad8c14d488fe17916c5baed5aaedc60 and can create a PR if it looks okay.
[ "handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java" ]
[ "handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java" ]
[ "handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java index f39328dff7a..1a1822b5973 100644 --- a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java +++ b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java @@ -166,22 +166,28 @@ private void discard(Throwable cause) { Object message = currentWrite.msg; if (message instanceof ChunkedInput) { ChunkedInput<?> in = (ChunkedInput<?>) message; + boolean endOfInput; + long inputLength; try { - if (!in.isEndOfInput()) { - if (cause == null) { - cause = new ClosedChannelException(); - } - currentWrite.fail(cause); - } else { - currentWrite.success(in.length()); - } + endOfInput = in.isEndOfInput(); + inputLength = in.length(); closeInput(in); } catch (Exception e) { + closeInput(in); currentWrite.fail(e); if (logger.isWarnEnabled()) { - logger.warn(ChunkedInput.class.getSimpleName() + ".isEndOfInput() failed", e); + logger.warn(ChunkedInput.class.getSimpleName() + " failed", e); } - closeInput(in); + continue; + } + + if (!endOfInput) { + if (cause == null) { + cause = new ClosedChannelException(); + } + currentWrite.fail(cause); + } else { + currentWrite.success(inputLength); } } else { if (cause == null) { @@ -249,8 +255,8 @@ private void doFlush(final ChannelHandlerContext ctx) { ReferenceCountUtil.release(message); } - currentWrite.fail(t); closeInput(chunks); + currentWrite.fail(t); break; } @@ -283,8 +289,12 @@ public void operationComplete(ChannelFuture future) throws Exception { closeInput(chunks); currentWrite.fail(future.cause()); } else { - currentWrite.progress(chunks.progress(), chunks.length()); - currentWrite.success(chunks.length()); + // read state of the input in local variables before closing it + long inputProgress = chunks.progress(); + long inputLength = chunks.length(); + closeInput(chunks); + currentWrite.progress(inputProgress, inputLength); + 
currentWrite.success(inputLength); } } }); @@ -293,7 +303,7 @@ public void operationComplete(ChannelFuture future) throws Exception { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { - closeInput((ChunkedInput<?>) pendingMessage); + closeInput(chunks); currentWrite.fail(future.cause()); } else { currentWrite.progress(chunks.progress(), chunks.length()); @@ -305,7 +315,7 @@ public void operationComplete(ChannelFuture future) throws Exception { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { - closeInput((ChunkedInput<?>) pendingMessage); + closeInput(chunks); currentWrite.fail(future.cause()); } else { currentWrite.progress(chunks.progress(), chunks.length());
diff --git a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java index 5b03048ba69..be6951d88bd 100644 --- a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java @@ -21,8 +21,8 @@ import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelPromise; import io.netty.channel.ChannelOutboundHandlerAdapter; +import io.netty.channel.ChannelPromise; import io.netty.channel.embedded.EmbeddedChannel; import io.netty.util.CharsetUtil; import io.netty.util.ReferenceCountUtil; @@ -33,9 +33,11 @@ import java.io.FileOutputStream; import java.io.IOException; import java.nio.channels.Channels; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; +import static java.util.concurrent.TimeUnit.*; import static org.junit.Assert.*; public class ChunkedWriteHandlerTest { @@ -433,6 +435,142 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) assertEquals(1, chunks.get()); } + @Test + public void testCloseSuccessfulChunkedInput() { + int chunks = 10; + TestChunkedInput input = new TestChunkedInput(chunks); + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + assertTrue(ch.writeOutbound(input)); + + for (int i = 0; i < chunks; i++) { + ByteBuf buf = ch.readOutbound(); + assertEquals(i, buf.readInt()); + buf.release(); + } + + assertTrue(input.isClosed()); + assertFalse(ch.finish()); + } + + @Test + public void testCloseFailedChunkedInput() { + Exception error = new Exception("Unable to produce a chunk"); + ThrowingChunkedInput input = new ThrowingChunkedInput(error); + + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + 
try { + ch.writeOutbound(input); + fail("Exception expected"); + } catch (Exception e) { + assertEquals(error, e); + } + + assertTrue(input.isClosed()); + assertFalse(ch.finish()); + } + + @Test + public void testWriteListenerInvokedAfterSuccessfulChunkedInputClosed() throws Exception { + final TestChunkedInput input = new TestChunkedInput(2); + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + final AtomicBoolean inputClosedWhenListenerInvoked = new AtomicBoolean(); + final CountDownLatch listenerInvoked = new CountDownLatch(1); + + ChannelFuture writeFuture = ch.write(input); + writeFuture.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + inputClosedWhenListenerInvoked.set(input.isClosed()); + listenerInvoked.countDown(); + } + }); + ch.flush(); + + assertTrue(listenerInvoked.await(10, SECONDS)); + assertTrue(writeFuture.isSuccess()); + assertTrue(inputClosedWhenListenerInvoked.get()); + assertTrue(ch.finishAndReleaseAll()); + } + + @Test + public void testWriteListenerInvokedAfterFailedChunkedInputClosed() throws Exception { + final ThrowingChunkedInput input = new ThrowingChunkedInput(new RuntimeException()); + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + final AtomicBoolean inputClosedWhenListenerInvoked = new AtomicBoolean(); + final CountDownLatch listenerInvoked = new CountDownLatch(1); + + ChannelFuture writeFuture = ch.write(input); + writeFuture.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + inputClosedWhenListenerInvoked.set(input.isClosed()); + listenerInvoked.countDown(); + } + }); + ch.flush(); + + assertTrue(listenerInvoked.await(10, SECONDS)); + assertFalse(writeFuture.isSuccess()); + assertTrue(inputClosedWhenListenerInvoked.get()); + assertFalse(ch.finish()); + } + + @Test + public void testWriteListenerInvokedAfterChannelClosedAndInputFullyConsumed() throws Exception { + 
// use empty input which has endOfInput = true + final TestChunkedInput input = new TestChunkedInput(0); + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + final AtomicBoolean inputClosedWhenListenerInvoked = new AtomicBoolean(); + final CountDownLatch listenerInvoked = new CountDownLatch(1); + + ChannelFuture writeFuture = ch.write(input); + writeFuture.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + inputClosedWhenListenerInvoked.set(input.isClosed()); + listenerInvoked.countDown(); + } + }); + ch.close(); // close channel to make handler discard the input on subsequent flush + ch.flush(); + + assertTrue(listenerInvoked.await(10, SECONDS)); + assertTrue(writeFuture.isSuccess()); + assertTrue(inputClosedWhenListenerInvoked.get()); + assertFalse(ch.finish()); + } + + @Test + public void testWriteListenerInvokedAfterChannelClosedAndInputNotFullyConsumed() throws Exception { + // use non-empty input which has endOfInput = false + final TestChunkedInput input = new TestChunkedInput(42); + EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); + + final AtomicBoolean inputClosedWhenListenerInvoked = new AtomicBoolean(); + final CountDownLatch listenerInvoked = new CountDownLatch(1); + + ChannelFuture writeFuture = ch.write(input); + writeFuture.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + inputClosedWhenListenerInvoked.set(input.isClosed()); + listenerInvoked.countDown(); + } + }); + ch.close(); // close channel to make handler discard the input on subsequent flush + ch.flush(); + + assertTrue(listenerInvoked.await(10, SECONDS)); + assertFalse(writeFuture.isSuccess()); + assertTrue(inputClosedWhenListenerInvoked.get()); + assertFalse(ch.finish()); + } + private static void check(Object... 
inputs) { EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler()); @@ -524,4 +662,96 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) assertEquals(BYTES.length, read); } + + private static final class TestChunkedInput implements ChunkedInput<ByteBuf> { + private final int chunksToProduce; + + private int chunksProduced; + private volatile boolean closed; + + TestChunkedInput(int chunksToProduce) { + this.chunksToProduce = chunksToProduce; + } + + @Override + public boolean isEndOfInput() { + return chunksProduced >= chunksToProduce; + } + + @Override + public void close() { + closed = true; + } + + @Override + public ByteBuf readChunk(ChannelHandlerContext ctx) { + return readChunk(ctx.alloc()); + } + + @Override + public ByteBuf readChunk(ByteBufAllocator allocator) { + ByteBuf buf = allocator.buffer(); + buf.writeInt(chunksProduced); + chunksProduced++; + return buf; + } + + @Override + public long length() { + return chunksToProduce; + } + + @Override + public long progress() { + return chunksProduced; + } + + boolean isClosed() { + return closed; + } + } + + private static final class ThrowingChunkedInput implements ChunkedInput<ByteBuf> { + private final Exception error; + + private volatile boolean closed; + + ThrowingChunkedInput(Exception error) { + this.error = error; + } + + @Override + public boolean isEndOfInput() { + return false; + } + + @Override + public void close() { + closed = true; + } + + @Override + public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception { + return readChunk(ctx.alloc()); + } + + @Override + public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception { + throw error; + } + + @Override + public long length() { + return -1; + } + + @Override + public long progress() { + return -1; + } + + boolean isClosed() { + return closed; + } + } }
train
val
"2019-02-15T22:13:17"
"2019-02-19T15:25:39Z"
lutovich
val
netty/netty/8868_8885
netty/netty
netty/netty/8868
netty/netty/8885
[ "timestamp(timedelta=0.0, similarity=0.9519903605496893)" ]
d02b51965f6b4906ca1f8f9648c2eb801df7b94a
81e43d50880fbe007c27e897034fdfa0c844a88b
[ "@georgeOsdDev what you suggest we should do here ? You are saying we should check in the constructor that you do not construct something \"invalid\" ?", "I'm not sure to check `count` in the constructor.\r\n`sun.nio.ch.FileChannelImpl` does not throws Exception even if `count ` is larger than real file size.\r\nhttps://docs.oracle.com/javase/8/docs/api/java/nio/channels/FileChannel.html#transferTo-long-long-java.nio.channels.WritableByteChannel-\r\n\r\n---\r\nWhen CPU usage increase, It seems `NioSocketChannel.doWrite` is called repeatedly. So it seems to be a matter of eventLoop handling\r\n```\r\nio.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:375)\r\nio.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934)\r\nio.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:367)\r\nio.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:639)\r\nio.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)\r\nio.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)\r\nio.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)\r\nio.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)\r\nio.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\r\n```\r\n\r\nWhen `count` is *not* larger than realSize, `in.remove();` is called and doWrite is called just once.\r\nhttps://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java#L238", "Looking into it", "@georgeOsdDev PTAL https://github.com/netty/netty/pull/8885" ]
[]
"2019-02-25T09:59:48Z"
[ "defect" ]
DefaultFileRegion.transferTo with invalid count may cause hang
### Expected behavior `DefaultFileRegion.transferTo` will either complete successfully or throw an exception. ### Actual behavior The thread hangs and CPU usage increases to 100%. ### Steps to reproduce Initialize `DefaultFileRegion` with `count` larger than the actual `FileChannel` size. ``` new DefaultFileRegion(raf.getChannel(), 0, fileLength) ``` ### Minimal yet complete reproducer code (or URL to code) https://github.com/netty/netty/blob/4.1/example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java#L175-L193 Truncate the file from another process between L175 and L192. Related issue: https://github.com/xitrum-framework/xitrum/issues/675 ### Netty version 4.1 ### JVM version (e.g. `java -version`) java version "1.8.0_202" Java(TM) SE Runtime Environment (build 1.8.0_202-b08) Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode) ### OS version (e.g. `uname -a`) Darwin OshidaIMac.local 17.7.0 Darwin Kernel Version 17.7.0: Wed Oct 10 23:06:14 PDT 2018; root:xnu-4570.71.13~1/RELEASE_X86_64 x86_64
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java", "transport/src/main/java/io/netty/channel/AbstractChannel.java", "transport/src/main/java/io/netty/channel/DefaultFileRegion.java" ]
[ "transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java", "transport/src/main/java/io/netty/channel/AbstractChannel.java", "transport/src/main/java/io/netty/channel/DefaultFileRegion.java" ]
[ "testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java", "transport/src/test/java/io/netty/channel/DefaultFileRegionTest.java" ]
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java index 8d993b48e74..e2f2c88cb72 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java @@ -367,13 +367,13 @@ private int writeBytesMultiple( * </ul> */ private int writeDefaultFileRegion(ChannelOutboundBuffer in, DefaultFileRegion region) throws Exception { + final long offset = region.transferred(); final long regionCount = region.count(); - if (region.transferred() >= regionCount) { + if (offset >= regionCount) { in.remove(); return 0; } - final long offset = region.transferred(); final long flushedAmount = socket.sendFile(region, region.position(), offset, regionCount - offset); if (flushedAmount > 0) { in.progress(flushedAmount); @@ -381,6 +381,8 @@ private int writeDefaultFileRegion(ChannelOutboundBuffer in, DefaultFileRegion r in.remove(); } return 1; + } else if (flushedAmount == 0) { + validateFileRegion(region, offset); } return WRITE_STATUS_SNDBUF_FULL; } diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java index 7d604f150ac..4b7f4e06e5b 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/AbstractKQueueStreamChannel.java @@ -210,12 +210,13 @@ private int writeBytesMultiple( */ private int writeDefaultFileRegion(ChannelOutboundBuffer in, DefaultFileRegion region) throws Exception { final long regionCount = region.count(); - if (region.transferred() >= regionCount) { + final long offset = region.transferred(); + + if (offset >= regionCount) { 
in.remove(); return 0; } - final long offset = region.transferred(); final long flushedAmount = socket.sendFile(region, region.position(), offset, regionCount - offset); if (flushedAmount > 0) { in.progress(flushedAmount); @@ -223,6 +224,8 @@ private int writeDefaultFileRegion(ChannelOutboundBuffer in, DefaultFileRegion r in.remove(); } return 1; + } else if (flushedAmount == 0) { + validateFileRegion(region, offset); } return WRITE_STATUS_SNDBUF_FULL; } diff --git a/transport/src/main/java/io/netty/channel/AbstractChannel.java b/transport/src/main/java/io/netty/channel/AbstractChannel.java index 8f8fe2f787a..fcd596563ef 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannel.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannel.java @@ -1149,6 +1149,10 @@ protected Object filterOutboundMessage(Object msg) throws Exception { return msg; } + protected void validateFileRegion(DefaultFileRegion region, long position) throws IOException { + DefaultFileRegion.validate(region, position); + } + static final class CloseFuture extends DefaultChannelPromise { CloseFuture(AbstractChannel ch) { diff --git a/transport/src/main/java/io/netty/channel/DefaultFileRegion.java b/transport/src/main/java/io/netty/channel/DefaultFileRegion.java index 2435b20a313..2f6bea95a1c 100644 --- a/transport/src/main/java/io/netty/channel/DefaultFileRegion.java +++ b/transport/src/main/java/io/netty/channel/DefaultFileRegion.java @@ -139,6 +139,12 @@ public long transferTo(WritableByteChannel target, long position) throws IOExcep long written = file.transferTo(this.position + position, count, target); if (written > 0) { transferred += written; + } else if (written == 0) { + // If the amount of written data is 0 we need to check if the requested count is bigger then the + // actual file itself as it may have been truncated on disk. 
+ // + // See https://github.com/netty/netty/issues/8868 + validate(this, position); } return written; } @@ -182,4 +188,16 @@ public FileRegion touch() { public FileRegion touch(Object hint) { return this; } + + static void validate(DefaultFileRegion region, long position) throws IOException { + // If the amount of written data is 0 we need to check if the requested count is bigger then the + // actual file itself as it may have been truncated on disk. + // + // See https://github.com/netty/netty/issues/8868 + long size = region.file.size(); + long count = region.count - position; + if (region.position + count + position > size) { + throw new IOException("Underlying file size " + size + " smaller then requested count " + region.count); + } + } }
diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java index 53deb6c6743..881acd1bcff 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/socket/SocketFileRegionTest.java @@ -22,11 +22,13 @@ import io.netty.channel.Channel; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInboundHandler; +import io.netty.channel.ChannelInboundHandlerAdapter; import io.netty.channel.ChannelOption; import io.netty.channel.DefaultFileRegion; import io.netty.channel.FileRegion; import io.netty.channel.SimpleChannelInboundHandler; import io.netty.util.internal.PlatformDependent; +import org.hamcrest.CoreMatchers; import org.junit.Test; import java.io.File; @@ -73,6 +75,11 @@ public void testFileRegionVoidPromiseNotAutoRead() throws Throwable { run(); } + @Test + public void testFileRegionCountLargerThenFile() throws Throwable { + run(); + } + public void testFileRegion(ServerBootstrap sb, Bootstrap cb) throws Throwable { testFileRegion0(sb, cb, false, true, true); } @@ -93,6 +100,34 @@ public void testFileRegionVoidPromiseNotAutoRead(ServerBootstrap sb, Bootstrap c testFileRegion0(sb, cb, true, false, true); } + public void testFileRegionCountLargerThenFile(ServerBootstrap sb, Bootstrap cb) throws Throwable { + File file = File.createTempFile("netty-", ".tmp"); + file.deleteOnExit(); + + final FileOutputStream out = new FileOutputStream(file); + out.write(data); + out.close(); + + sb.childHandler(new SimpleChannelInboundHandler<ByteBuf>() { + @Override + protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) { + // Just drop the message. 
+ } + }); + cb.handler(new ChannelInboundHandlerAdapter()); + + Channel sc = sb.bind().sync().channel(); + Channel cc = cb.connect(sc.localAddress()).sync().channel(); + + // Request file region which is bigger then the underlying file. + FileRegion region = new DefaultFileRegion( + new FileInputStream(file).getChannel(), 0, data.length + 1024); + + assertThat(cc.writeAndFlush(region).await().cause(), CoreMatchers.<Throwable>instanceOf(IOException.class)); + cc.close().sync(); + sc.close().sync(); + } + private static void testFileRegion0( ServerBootstrap sb, Bootstrap cb, boolean voidPromise, final boolean autoRead, boolean defaultFileRegion) throws Throwable { diff --git a/transport/src/test/java/io/netty/channel/DefaultFileRegionTest.java b/transport/src/test/java/io/netty/channel/DefaultFileRegionTest.java new file mode 100644 index 00000000000..e416bccba59 --- /dev/null +++ b/transport/src/test/java/io/netty/channel/DefaultFileRegionTest.java @@ -0,0 +1,120 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.channel; + +import io.netty.util.internal.PlatformDependent; +import org.junit.Test; + +import java.io.ByteArrayOutputStream; +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.RandomAccessFile; +import java.nio.channels.Channels; +import java.nio.channels.WritableByteChannel; + +import static org.junit.Assert.assertArrayEquals; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; + +public class DefaultFileRegionTest { + + private static final byte[] data = new byte[1048576 * 10]; + + static { + PlatformDependent.threadLocalRandom().nextBytes(data); + } + + private static File newFile() throws IOException { + File file = File.createTempFile("netty-", ".tmp"); + file.deleteOnExit(); + + final FileOutputStream out = new FileOutputStream(file); + out.write(data); + out.close(); + return file; + } + + @Test + public void testCreateFromFile() throws IOException { + File file = newFile(); + try { + testFileRegion(new DefaultFileRegion(file, 0, data.length)); + } finally { + file.delete(); + } + } + + @Test + public void testCreateFromFileChannel() throws IOException { + File file = newFile(); + RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r"); + try { + testFileRegion(new DefaultFileRegion(randomAccessFile.getChannel(), 0, data.length)); + } finally { + randomAccessFile.close(); + file.delete(); + } + } + + private static void testFileRegion(FileRegion region) throws IOException { + ByteArrayOutputStream outputStream = new ByteArrayOutputStream(); + WritableByteChannel channel = Channels.newChannel(outputStream); + + try { + assertEquals(data.length, region.count()); + assertEquals(0, region.transferred()); + assertEquals(data.length, region.transferTo(channel, 0)); + assertEquals(data.length, region.count()); + assertEquals(data.length, region.transferred()); + assertArrayEquals(data, outputStream.toByteArray()); + } finally { + channel.close(); + 
} + } + + @Test + public void testTruncated() throws IOException { + File file = newFile(); + ByteArrayOutputStream outputStream = new ByteArrayOutputStream(); + WritableByteChannel channel = Channels.newChannel(outputStream); + RandomAccessFile randomAccessFile = new RandomAccessFile(file, "rw"); + + try { + FileRegion region = new DefaultFileRegion(randomAccessFile.getChannel(), 0, data.length); + + randomAccessFile.getChannel().truncate(data.length - 1024); + + assertEquals(data.length, region.count()); + assertEquals(0, region.transferred()); + + assertEquals(data.length - 1024, region.transferTo(channel, 0)); + assertEquals(data.length, region.count()); + assertEquals(data.length - 1024, region.transferred()); + try { + region.transferTo(channel, data.length - 1024); + fail(); + } catch (IOException expected) { + // expected + } + } finally { + channel.close(); + + randomAccessFile.close(); + file.delete(); + } + } +}
train
val
"2019-02-25T08:55:55"
"2019-02-15T02:34:21Z"
georgeOsdDev
val
netty/netty/8883_8896
netty/netty
netty/netty/8883
netty/netty/8896
[ "timestamp(timedelta=0.0, similarity=0.9404264630616554)" ]
f8c89e2e055aeb3070d05e6b9425bcff6a8013fd
ee351ef8bcd3d2ee3dcf957741a11429fd67b883
[ "@kachayev this sounds good... Its also inline with what we do in SslHandler. ", "@normanmaurer I'll do a PR for WebSockets first, as I know this code quite well. Will check `SslHandler` later ;) ", "Sorry maybe I was not clear here... we have this kind of code already in SslHandler. I just wanted to mention it as it may gives you some good examples ", "Ah, okay! Now it's clear. I thought you meant that we need to do the same for SSL. Thanks! " ]
[ "why is this volatile ?", "private final ?", "remove public", "why do you retain here ? This looks like a possible memory leak.", "assert both return values", "Previously, I had an API to change this value after a handshaker is already created. As far as I removed that, I can remove `volatile` either.", "Yeah, that's a bug", "nit: you may also want to check `!forceCloseInit.get()`", "Use a timeout as we may never complete the loop here when there is a bug:\r\n\r\nhttps://github.com/netty/netty/pull/8896/files#diff-96a945171dadb844c094502b4d37149bR164", "nice!", "either remove the method or fix the styling to use:\r\n\r\n```java\r\nprivate void setForceCloseComplete() {\r\n ....\r\n}\r\n```\r\n\r\nI would personally just remove it.", "Removed.", "I think you should override this one in each implementation to return the correct type. ", "Done!", "nit: jut call `super.setForceCloseTimeoutMillis(...)` this will also ensure we do the correct thing if we ever change the super method content and will allow to make `forceCloseTimeMillis` private", "nit: jut call `super.setForceCloseTimeoutMillis(...)` this will also ensure we do the correct thing if we ever change the super method content and will allow to make `forceCloseTimeMillis` private", "nit: jut call `super.setForceCloseTimeoutMillis(...)` this will also ensure we do the correct thing if we ever change the super method content and will allow to make `forceCloseTimeMillis` private", "nit: jut call `super.setForceCloseTimeoutMillis(...)` this will also ensure we do the correct thing if we ever change the super method content and will allow to make `forceCloseTimeMillis` private", "nit: you can remove the java docs as it just overrides a method.", "nit: you can remove the java docs as it just overrides a method.", "nit: you can remove the java docs as it just overrides a method.", "nit: you can remove the java docs as it just overrides a method.", "as this is protected we will need to also retain the old constructor 
as otherwise its API breakage:\r\n\r\n```\r\n20:53:51 [ERROR] Failed to execute goal com.github.siom79.japicmp:japicmp-maven-plugin:0.13.1:cmp (default) on project netty-codec-http: There is at least one incompatibility: io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.WebSocketClientHandshaker(java.net.URI,io.netty.handler.codec.http.websocketx.WebSocketVersion,java.lang.String,io.netty.handler.codec.http.HttpHeaders,int):CONSTRUCTOR_REMOVED -> [Help 1]\r\n```", "Done!", "nit: add a whitespace after `,`", "Done!", "Could you use `Atomic*FieldUpdater` instead?", "Could extract the default value into a static final field.", "Could you add the Javadoc summary, because otherwise the generated Javadoc will have empty summary.", "How about:\r\n\r\n```\r\nCreates a new instance with the specified destination WebSocket location and version to initiate.\r\n```", "nit: do you really need `ScheduledFuture` instead of `Future` as you don't use any method from `Delayed`?", "nit: more of a coding style preference, but I'd rather have one single boolean operation wrapping the method's body: \r\n`if (future.isSuccess() && channel.isActive() && forceCloseInit.compareAndSet(false, true))`" ]
"2019-02-27T22:09:27Z"
[ "defect", "improvement", "feature" ]
Websocket client closing handshake to support "force close" after given timeout
RFC 6455 defines that, generally, a WebSocket client should not close a TCP connection as far as a server is the one who's responsible for doing that. In practice, when I do not control the server, this might lead to connection leaks if the server does not comply with the RFC. [RFC 6455#7.1.1](https://tools.ietf.org/html/rfc6455#section-7.1.1) says > In abnormal cases (such as not having received a TCP Close from the server after a reasonable amount of time) a client MAY initiate the TCP Close. I suggest to add a `forceCloseAfterMillis` option to `WebSocketClientHandshaker` (and to `WebSocketClientProtocolHandler` appropriately) and schedule a channel close when necessary after the `CloseWebSocketFrame` has been flushed [here](https://github.com/netty/netty/blob/f176384a729c1d9352c9ed878b9b967ca2f31bf8/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java#L430-L435). If there are no objections, I'm happy to provide a PR.
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketHandshakeHandOverTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java index 44c91445a2a..b293614c16d 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java @@ -40,6 +40,9 @@ import java.net.URI; import java.nio.channels.ClosedChannelException; import java.util.Locale; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; /** * Base class for web socket client handshake implementations @@ -50,6 +53,7 @@ public abstract class WebSocketClientHandshaker { private static final String HTTP_SCHEME_PREFIX = HttpScheme.HTTP + "://"; private static final String HTTPS_SCHEME_PREFIX = HttpScheme.HTTPS + "://"; + protected static final int DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS = 10000; private final URI uri; @@ -57,6 +61,15 @@ public abstract class WebSocketClientHandshaker { private volatile boolean handshakeComplete; + private volatile long forceCloseTimeoutMillis = DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS; + + private volatile int forceCloseInit; + + private static final AtomicIntegerFieldUpdater<WebSocketClientHandshaker> FORCE_CLOSE_INIT_UPDATER = + AtomicIntegerFieldUpdater.newUpdater(WebSocketClientHandshaker.class, "forceCloseInit"); + + private volatile boolean forceCloseComplete; + private final String expectedSubprotocol; private volatile String actualSubprotocol; @@ -82,11 +95,35 @@ public abstract class WebSocketClientHandshaker { */ protected WebSocketClientHandshaker(URI uri, WebSocketVersion version, String subprotocol, HttpHeaders customHeaders, int maxFramePayloadLength) { + this(uri, version, subprotocol, customHeaders, maxFramePayloadLength, DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); + } + + /** + * Base 
constructor + * + * @param uri + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + */ + protected WebSocketClientHandshaker(URI uri, WebSocketVersion version, String subprotocol, + HttpHeaders customHeaders, int maxFramePayloadLength, + long forceCloseTimeoutMillis) { this.uri = uri; this.version = version; expectedSubprotocol = subprotocol; this.customHeaders = customHeaders; this.maxFramePayloadLength = maxFramePayloadLength; + this.forceCloseTimeoutMillis = forceCloseTimeoutMillis; } /** @@ -140,6 +177,29 @@ private void setActualSubprotocol(String actualSubprotocol) { this.actualSubprotocol = actualSubprotocol; } + public long forceCloseTimeoutMillis() { + return forceCloseTimeoutMillis; + } + + /** + * Flag to indicate if the closing handshake was initiated because of timeout. + * For testing only. + */ + protected boolean isForceCloseComplete() { + return forceCloseComplete; + } + + /** + * Sets timeout to close the connection if it was not closed by the server. 
+ * + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + */ + public WebSocketClientHandshaker setForceCloseTimeoutMillis(long forceCloseTimeoutMillis) { + this.forceCloseTimeoutMillis = forceCloseTimeoutMillis; + return this; + } + /** * Begins the opening handshake * @@ -431,7 +491,46 @@ public ChannelFuture close(Channel channel, CloseWebSocketFrame frame, ChannelPr if (channel == null) { throw new NullPointerException("channel"); } - return channel.writeAndFlush(frame, promise); + channel.writeAndFlush(frame, promise); + applyForceCloseTimeout(channel, promise); + return promise; + } + + private void applyForceCloseTimeout(final Channel channel, ChannelFuture flushFuture) { + final long forceCloseTimeoutMillis = this.forceCloseTimeoutMillis; + final WebSocketClientHandshaker handshaker = this; + if (forceCloseTimeoutMillis <= 0 || !channel.isActive() || forceCloseInit != 0) { + return; + } + + flushFuture.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + // If flush operation failed, there is no reason to expect + // a server to receive CloseFrame. Thus this should be handled + // by the application separately. + // Also, close might be called twice from different threads. 
+ if (future.isSuccess() && channel.isActive() && + FORCE_CLOSE_INIT_UPDATER.compareAndSet(handshaker, 0, 1)) { + final Future<?> forceCloseFuture = channel.eventLoop().schedule(new Runnable() { + @Override + public void run() { + if (channel.isActive()) { + channel.close(); + forceCloseComplete = true; + } + } + }, forceCloseTimeoutMillis, TimeUnit.MILLISECONDS); + + channel.closeFuture().addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + forceCloseFuture.cancel(false); + } + }); + } + } + }); } /** diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java index f02626319f3..3b43bd3ae58 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java @@ -48,7 +48,7 @@ public class WebSocketClientHandshaker00 extends WebSocketClientHandshaker { private ByteBuf expectedChallengeResponseBytes; /** - * Constructor specifying the destination web socket location and version to initiate + * Creates a new instance with the specified destination WebSocket location and version to initiate. * * @param webSocketURL * URL for web socket communications. e.g "ws://myhost.com/mypath". 
Subsequent web socket frames will be @@ -64,7 +64,31 @@ public class WebSocketClientHandshaker00 extends WebSocketClientHandshaker { */ public WebSocketClientHandshaker00(URI webSocketURL, WebSocketVersion version, String subprotocol, HttpHeaders customHeaders, int maxFramePayloadLength) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength); + this(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, + DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); + } + + /** + * Creates a new instance with the specified destination WebSocket location and version to initiate. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + */ + public WebSocketClientHandshaker00(URI webSocketURL, WebSocketVersion version, String subprotocol, + HttpHeaders customHeaders, int maxFramePayloadLength, + long forceCloseTimeoutMillis) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); } /** @@ -243,4 +267,11 @@ protected WebSocketFrameDecoder newWebsocketDecoder() { protected WebSocketFrameEncoder newWebSocketEncoder() { return new WebSocket00FrameEncoder(); } + + @Override + public WebSocketClientHandshaker00 setForceCloseTimeoutMillis(long forceCloseTimeoutMillis) { + super.setForceCloseTimeoutMillis(forceCloseTimeoutMillis); + return this; + } + } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java 
b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java index 4632a4aecb6..c10132989f1 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java @@ -94,10 +94,43 @@ public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, S * When set to true, frames which are not masked properly according to the standard will still be * accepted. */ + public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch) { + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, performMasking, + allowMaskMismatch, DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. 
+ * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. + */ public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, - boolean performMasking, boolean allowMaskMismatch) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength); + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -216,4 +249,11 @@ protected WebSocketFrameDecoder newWebsocketDecoder() { protected WebSocketFrameEncoder newWebSocketEncoder() { return new WebSocket07FrameEncoder(performMasking); } + + @Override + public WebSocketClientHandshaker07 setForceCloseTimeoutMillis(long forceCloseTimeoutMillis) { + super.setForceCloseTimeoutMillis(forceCloseTimeoutMillis); + return this; + } + } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java index 1a11aa6358e..237d2f715ed 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java @@ -68,7 +68,8 @@ public class WebSocketClientHandshaker08 extends WebSocketClientHandshaker { */ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int 
maxFramePayloadLength) { - this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, true, false); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, true, + false, DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); } /** @@ -93,12 +94,45 @@ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, S * which doesn't require masking might set this to false to achieve a higher performance. * @param allowMaskMismatch * When set to true, frames which are not masked properly according to the standard will still be - * accepted. + * accepted + */ + public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch) { + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, performMasking, + allowMaskMismatch, DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. 
+ * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. */ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, - boolean performMasking, boolean allowMaskMismatch) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength); + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -217,4 +251,11 @@ protected WebSocketFrameDecoder newWebsocketDecoder() { protected WebSocketFrameEncoder newWebSocketEncoder() { return new WebSocket08FrameEncoder(performMasking); } + + @Override + public WebSocketClientHandshaker08 setForceCloseTimeoutMillis(long forceCloseTimeoutMillis) { + super.setForceCloseTimeoutMillis(forceCloseTimeoutMillis); + return this; + } + } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java index 808f7fc49ad..a96683f2d11 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java @@ -68,7 +68,8 @@ public class WebSocketClientHandshaker13 extends WebSocketClientHandshaker { */ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int 
maxFramePayloadLength) { - this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, true, false); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, + true, false); } /** @@ -98,7 +99,41 @@ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, S public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, boolean performMasking, boolean allowMaskMismatch) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, + performMasking, allowMaskMismatch, DEFAULT_FORCE_CLOSE_TIMEOUT_MILLIS); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. 
+ * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. + */ + public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, + long forceCloseTimeoutMillis) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -217,4 +252,11 @@ protected WebSocketFrameDecoder newWebsocketDecoder() { protected WebSocketFrameEncoder newWebSocketEncoder() { return new WebSocket13FrameEncoder(performMasking); } + + @Override + public WebSocketClientHandshaker13 setForceCloseTimeoutMillis(long forceCloseTimeoutMillis) { + super.setForceCloseTimeoutMillis(forceCloseTimeoutMillis); + return this; + } + } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java index b07825f4a17..22afc3bd764 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java @@ -107,24 +107,59 @@ public static WebSocketClientHandshaker newHandshaker( URI webSocketURL, WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, boolean performMasking, boolean allowMaskMismatch) { + return newHandshaker(webSocketURL, version, subprotocol, allowExtensions, customHeaders, + 
maxFramePayloadLength, true, false, -1); + } + + /** + * Creates a new handshaker. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". + * Subsequent web socket frames will be sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. Null if no sub-protocol support is required. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Custom HTTP headers to send during the handshake + * @param maxFramePayloadLength + * Maximum allowable frame payload length. Setting this value to your application's + * requirement may reduce denial of service attacks using long data frames. + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. + * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted. 
+ * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + */ + public static WebSocketClientHandshaker newHandshaker( + URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { if (version == V13) { return new WebSocketClientHandshaker13( webSocketURL, V13, subprotocol, allowExtensions, customHeaders, - maxFramePayloadLength, performMasking, allowMaskMismatch); + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis); } if (version == V08) { return new WebSocketClientHandshaker08( webSocketURL, V08, subprotocol, allowExtensions, customHeaders, - maxFramePayloadLength, performMasking, allowMaskMismatch); + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis); } if (version == V07) { return new WebSocketClientHandshaker07( webSocketURL, V07, subprotocol, allowExtensions, customHeaders, - maxFramePayloadLength, performMasking, allowMaskMismatch); + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis); } if (version == V00) { return new WebSocketClientHandshaker00( - webSocketURL, V00, subprotocol, customHeaders, maxFramePayloadLength); + webSocketURL, V00, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); } throw new WebSocketHandshakeException("Protocol version " + version + " not supported.");
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java index 2054af513f9..5cb1e0e7b62 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java @@ -217,7 +217,7 @@ private void testHttpResponseAndFrameInSameBuffer(boolean codec) { String url = "ws://localhost:9999/ws"; final WebSocketClientHandshaker shaker = newHandshaker(URI.create(url)); final WebSocketClientHandshaker handshaker = new WebSocketClientHandshaker( - shaker.uri(), shaker.version(), null, EmptyHttpHeaders.INSTANCE, Integer.MAX_VALUE) { + shaker.uri(), shaker.version(), null, EmptyHttpHeaders.INSTANCE, Integer.MAX_VALUE, -1) { @Override protected FullHttpRequest newHandshakeRequest() { return shaker.newHandshakeRequest(); diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketHandshakeHandOverTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketHandshakeHandOverTest.java index 10ad7749845..5583a27339e 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketHandshakeHandOverTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketHandshakeHandOverTest.java @@ -17,10 +17,13 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.Unpooled; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandler; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.SimpleChannelInboundHandler; import io.netty.channel.embedded.EmbeddedChannel; +import io.netty.handler.codec.http.EmptyHttpHeaders; import io.netty.handler.codec.http.HttpClientCodec; import 
io.netty.handler.codec.http.HttpObjectAggregator; import io.netty.handler.codec.http.HttpServerCodec; @@ -30,6 +33,7 @@ import org.junit.Test; import java.net.URI; +import java.util.List; import static org.junit.Assert.*; @@ -39,6 +43,23 @@ public class WebSocketHandshakeHandOverTest { private WebSocketServerProtocolHandler.HandshakeComplete serverHandshakeComplete; private boolean clientReceivedHandshake; private boolean clientReceivedMessage; + private boolean serverReceivedCloseHandshake; + private boolean clientForceClosed; + + private final class CloseNoOpServerProtocolHandler extends WebSocketServerProtocolHandler { + CloseNoOpServerProtocolHandler(String websocketPath) { + super(websocketPath, null, false); + } + + @Override + protected void decode(ChannelHandlerContext ctx, WebSocketFrame frame, List<Object> out) throws Exception { + if (frame instanceof CloseWebSocketFrame) { + serverReceivedCloseHandshake = true; + return; + } + super.decode(ctx, frame, out); + } + } @Before public void setUp() { @@ -46,6 +67,8 @@ public void setUp() { serverHandshakeComplete = null; clientReceivedHandshake = false; clientReceivedMessage = false; + serverReceivedCloseHandshake = false; + clientForceClosed = false; } @Test @@ -95,6 +118,64 @@ protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Except assertTrue(clientReceivedMessage); } + @Test(timeout = 10000) + public void testClientHandshakerForceClose() throws Exception { + final WebSocketClientHandshaker handshaker = WebSocketClientHandshakerFactory.newHandshaker( + new URI("ws://localhost:1234/test"), WebSocketVersion.V13, null, true, + EmptyHttpHeaders.INSTANCE, Integer.MAX_VALUE, true, false, 20); + + EmbeddedChannel serverChannel = createServerChannel( + new CloseNoOpServerProtocolHandler("/test"), + new SimpleChannelInboundHandler<Object>() { + @Override + protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception { + } + }); + + EmbeddedChannel clientChannel = 
createClientChannel(handshaker, new SimpleChannelInboundHandler<Object>() { + @Override + public void userEventTriggered(ChannelHandlerContext ctx, Object evt) { + if (evt == ClientHandshakeStateEvent.HANDSHAKE_COMPLETE) { + ctx.channel().closeFuture().addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) throws Exception { + clientForceClosed = true; + } + }); + handshaker.close(ctx.channel(), new CloseWebSocketFrame()); + } + } + @Override + protected void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception { + } + }); + + // Transfer the handshake from the client to the server + transferAllDataWithMerge(clientChannel, serverChannel); + // Transfer the handshake from the server to client + transferAllDataWithMerge(serverChannel, clientChannel); + + // Transfer closing handshake + transferAllDataWithMerge(clientChannel, serverChannel); + assertTrue(serverReceivedCloseHandshake); + // Should not be closed yet as we disabled closing the connection on the server + assertFalse(clientForceClosed); + + while (!clientForceClosed) { + Thread.sleep(10); + // We need to run all pending tasks as the force close timeout is scheduled on the EventLoop. + clientChannel.runPendingTasks(); + } + + // clientForceClosed would be set to TRUE after any close, + // so check here that force close timeout was actually fired + assertTrue(handshaker.isForceCloseComplete()); + + // Both should be empty + assertFalse(serverChannel.finishAndReleaseAll()); + assertFalse(clientChannel.finishAndReleaseAll()); + } + /** * Transfers all pending data from the source channel into the destination channel.<br> * Merges all data into a single buffer before transmission into the destination. 
@@ -137,6 +218,16 @@ private static EmbeddedChannel createClientChannel(ChannelHandler handler) throw handler); } + private static EmbeddedChannel createClientChannel(WebSocketClientHandshaker handshaker, + ChannelHandler handler) throws Exception { + return new EmbeddedChannel( + new HttpClientCodec(), + new HttpObjectAggregator(8192), + // Note that we're switching off close frames handling on purpose to test forced close on timeout. + new WebSocketClientProtocolHandler(handshaker, false, false), + handler); + } + private static EmbeddedChannel createServerChannel(ChannelHandler handler) { return new EmbeddedChannel( new HttpServerCodec(), @@ -144,4 +235,14 @@ private static EmbeddedChannel createServerChannel(ChannelHandler handler) { new WebSocketServerProtocolHandler("/test", "test-proto-1, test-proto-2", false), handler); } + + private static EmbeddedChannel createServerChannel(WebSocketServerProtocolHandler webSocketHandler, + ChannelHandler handler) { + return new EmbeddedChannel( + new HttpServerCodec(), + new HttpObjectAggregator(8192), + webSocketHandler, + handler); + } + }
train
val
"2019-04-01T21:02:36"
"2019-02-23T01:58:30Z"
kachayev
val
netty/netty/9018_9019
netty/netty
netty/netty/9018
netty/netty/9019
[ "keyword_pr_to_issue" ]
4b83be1cebab96cbe99dadb1688068ae668969d2
4373a1fba245b3db03cdf3321712d292e2de508b
[ "Sounds good 👌 " ]
[ "final", "final", "`getBytes(CharsetUtil.US_ASCII)`", "`getBytes(CharsetUtil.US_ASCII)`" ]
"2019-04-06T18:09:53Z"
[]
SelfSignedCertificate fails with FIPS enabled causing OpenSsl to fail
### Expected behavior For compliance reasons, we have BouncyCastle FIPS mode enabled and use Netty's OpenSSL bindings for TLS termination. I've noticed that Netty is failing to start up correctly since within the `OpenSsl.java` static constructor it creates a `SelfSignedCertificate` with **1024** bits -- as such it is failing to create a KeyManagerFactory. FIPS 140-2 encryption requires the key length to be 2048 bits or greater. I would like to submit a PR to bump the bits for SelfSignedCertificate default to 2048. ### JVM version (e.g. `java -version`) JRE 1.8 ### OS version (e.g. `uname -a`) darwin (OSX) & OracleLinux
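The resolution for this record bumps the default self-signed key strength from 1024 to 2048 bits. As a minimal standalone sketch (plain JDK only, not Netty's `SelfSignedCertificate` code; the class and method names here are hypothetical), this shows that a 2048-bit RSA key — the smallest size FIPS 140-2 accepts — is generated with exactly the requested modulus length:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class KeyLengthDemo {
    // Generate an RSA key pair of the requested strength and report the
    // actual modulus size in bits.
    static int generateKeyBits(int bits) {
        try {
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(bits);
            KeyPair pair = generator.generateKeyPair();
            return ((RSAPublicKey) pair.getPublic()).getModulus().bitLength();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // FIPS 140-2 requires RSA keys of at least 2048 bits, hence the new default.
        System.out.println(generateKeyBits(2048));
    }
}
```

The merged patch additionally makes the default overridable via a system property, so stricter deployments can raise it further without a code change.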
[ "handler/src/main/java/io/netty/handler/ssl/OpenSsl.java", "handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/OpenSsl.java", "handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java index 5d1dd2b7a0c..a12e5c7c94d 100644 --- a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java +++ b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java @@ -23,6 +23,7 @@ import io.netty.internal.tcnative.Library; import io.netty.internal.tcnative.SSL; import io.netty.internal.tcnative.SSLContext; +import io.netty.util.CharsetUtil; import io.netty.util.ReferenceCountUtil; import io.netty.util.ReferenceCounted; import io.netty.util.internal.NativeLibraryLoader; @@ -64,71 +65,47 @@ public final class OpenSsl { private static final boolean IS_BORINGSSL; static final Set<String> SUPPORTED_PROTOCOLS_SET; - // Bytes of self-signed certificate for netty.io and the matching private-key - private static byte[] CERT_BYTES = { - 48, -126, 1, -93, 48, -126, 1, 12, -96, 3, 2, 1, 2, 2, 8, 31, 127, -24, 79, 67, - -72, -128, 124, 48, 13, 6, 9, 42, -122, 72, -122, -9, 13, 1, 1, 11, 5, 0, 48, - 19, 49, 17, 48, 15, 6, 3, 85, 4, 3, 19, 8, 110, 101, 116, 116, 121, 46, 105, - 111, 48, 32, 23, 13, 49, 56, 48, 51, 50, 55, 49, 50, 52, 49, 50, 49, 90, 24, - 15, 57, 57, 57, 57, 49, 50, 51, 49, 50, 51, 53, 57, 53, 57, 90, 48, 19, 49, 17, - 48, 15, 6, 3, 85, 4, 3, 19, 8, 110, 101, 116, 116, 121, 46, 105, 111, 48, -127, - -97, 48, 13, 6, 9, 42, -122, 72, -122, -9, 13, 1, 1, 1, 5, 0, 3, -127, -115, 0, - 48, -127, -119, 2, -127, -127, 0, -105, 81, 76, -56, -118, -35, 54, -61, -39, - 69, 77, -56, 36, -126, 15, -35, -97, 126, -59, 2, -110, -39, -122, -116, -62, - -83, -43, -102, 98, 46, -33, 6, 33, 74, -68, -121, -64, -9, -3, 45, 102, -121, - 50, -86, 93, 125, -82, -110, -2, -22, -114, 18, -93, 51, -86, 63, -63, 46, 96, - -37, 16, 105, -11, 96, -97, -77, 98, -2, 117, -66, -118, 31, -62, -94, 109, -61, - -82, 31, -103, 29, -53, -6, 47, 13, -78, -30, -128, 95, -76, 18, 5, -43, -80, - 51, 22, 39, 11, -93, 101, -66, -105, -68, -110, -80, 89, -105, -116, 10, -42, - 16, 
51, 4, 113, -23, 69, -111, 85, -61, -59, -33, -83, 5, 114, -112, 34, 34, - -107, 79, 2, 3, 1, 0, 1, 48, 13, 6, 9, 42, -122, 72, -122, -9, 13, 1, 1, 11, 5, - 0, 3, -127, -127, 0, 8, -18, -42, -73, 54, 95, 39, -58, -98, 62, -26, 50, -3, - 71, -125, -128, -19, -87, -46, -85, 72, 17, 46, 75, -104, 125, -51, 27, 123, - 84, 34, 100, -112, 122, -28, 29, -33, 127, -20, -54, 30, -77, 109, -81, -3, - -73, -113, 17, 28, 98, 127, 77, 53, -76, -49, -119, 98, 113, 71, -107, -33, - -57, 37, -55, -60, 89, 65, 83, -96, -54, -22, 122, 10, -11, 11, -67, -58, -57, - 85, -119, 46, -26, -41, 15, -77, 19, 4, -32, -64, -12, 49, 104, -101, 42, 88, - 75, 27, 41, 122, 126, 70, -99, -91, -33, -36, -57, -63, -7, 94, -71, -15, -108, - 59, -32, 50, 47, -35, 71, 104, 47, 97, 43, 93, -128, -65, 11, 29, -88 - }; - private static byte[] KEY_BYTES = { - 48, -126, 2, 120, 2, 1, 0, 48, 13, 6, 9, 42, -122, 72, -122, -9, 13, 1, 1, 1, 5, - 0, 4, -126, 2, 98, 48, -126, 2, 94, 2, 1, 0, 2, -127, -127, 0, -105, 81, 76, -56, - -118, -35, 54, -61, -39, 69, 77, -56, 36, -126, 15, -35, -97, 126, -59, 2, -110, - -39, -122, -116, -62, -83, -43, -102, 98, 46, -33, 6, 33, 74, -68, -121, -64, -9, - -3, 45, 102, -121, 50, -86, 93, 125, -82, -110, -2, -22, -114, 18, -93, 51, -86, - 63, -63, 46, 96, -37, 16, 105, -11, 96, -97, -77, 98, -2, 117, -66, -118, 31, -62, - -94, 109, -61, -82, 31, -103, 29, -53, -6, 47, 13, -78, -30, -128, 95, -76, 18, 5, - -43, -80, 51, 22, 39, 11, -93, 101, -66, -105, -68, -110, -80, 89, -105, -116, 10, - -42, 16, 51, 4, 113, -23, 69, -111, 85, -61, -59, -33, -83, 5, 114, -112, 34, 34, - -107, 79, 2, 3, 1, 0, 1, 2, -127, -128, 68, 52, 93, 11, -73, -85, -26, 87, 120, -61, - -120, 63, -62, 84, -19, -103, -45, -98, 108, 102, -80, -110, 99, -41, 102, -104, - -68, 67, 14, 38, 90, 88, -123, 1, 14, -31, -111, -43, 53, -59, 21, 5, -77, -116, - -98, -1, 91, -124, -34, 106, 19, 7, -53, -112, 42, 24, -6, -106, 81, 9, -20, -24, - 21, -75, 119, -49, 70, 70, -106, -6, -56, -6, 28, 104, 33, -104, 27, 65, 
-75, -12, - -93, 75, 87, 82, -64, -70, -127, 60, 91, -60, -76, 13, -115, 19, -77, -16, -3, 119, - -88, 111, 96, 78, -103, -30, -87, -118, 106, -7, 97, -21, 20, -31, -43, 28, -18, - -2, 117, 63, 111, -71, 84, -77, -42, 78, 20, -28, -54, -63, 2, 65, 0, -23, 7, -72, - -18, -122, 34, 90, 107, -103, 119, 105, 46, -10, -109, -7, 3, 21, 16, 91, 110, -13, - 120, 95, 122, -77, -60, 18, -52, 103, -1, -90, 39, -3, 99, -10, 18, -14, 47, -104, - -87, -110, 7, -48, -23, -37, 104, -125, 97, 88, -1, -86, -90, -11, -79, -20, 41, - -128, 15, -35, -104, 60, 25, 121, -41, 2, 65, 0, -90, 59, -92, -31, -117, 35, -79, - 16, -76, 57, 90, 15, -6, 84, 47, -113, -42, 19, -56, 121, 123, -121, -91, 91, -37, - -71, 78, -40, 12, 82, -25, -125, -58, 115, -123, 97, 10, -99, -59, 38, -48, -103, - -128, -125, 36, 108, 18, -86, -85, -17, -40, 8, -14, -108, -24, -20, 63, -59, -81, - 5, 11, 35, 1, 73, 2, 65, 0, -30, 11, -8, -85, -128, 120, 80, -121, -15, -35, -80, - -83, -70, -55, 125, -109, 44, -38, -86, 39, 45, -116, 69, -22, 75, -7, 86, 86, -20, - 71, 68, -111, -92, 46, 84, 100, -70, -125, -53, 46, 42, -106, -28, 100, 5, -49, 19, - 42, -38, 95, 95, -42, 7, -99, -23, 61, -76, -103, 47, 86, -34, 109, -60, 15, 2, 65, - 0, -126, -72, -22, -101, 87, 0, -75, 80, 110, 121, -97, 98, 107, 55, -30, -61, 24, - -43, 43, -44, -92, -104, -14, 39, 127, 109, -123, 28, 14, -20, -17, 20, -56, 109, - -75, -40, -81, 49, -116, -123, 78, -117, 55, -19, 105, 41, -9, -81, -15, 79, -58, - 50, -101, 25, 16, -26, 31, -20, 68, 11, 18, 75, -17, -55, 2, 65, 0, -126, -11, 56, - -83, -60, 1, -125, 109, 74, 74, -1, -17, 54, 111, -111, 100, 125, 21, 77, 34, 119, - -33, 23, -13, 66, 74, -78, 80, -67, 57, -42, 65, 65, 58, 96, 0, 72, -122, 3, -78, - 119, 68, -76, 5, 50, 37, 51, 10, -54, 54, -102, 90, -6, 127, -93, 97, 53, 24, 57, - 77, 81, 53, -13, -127 - }; + // self-signed certificate for netty.io and the matching private-key + private static final String CERT = "-----BEGIN CERTIFICATE-----\n" + + 
"MIICrjCCAZagAwIBAgIIdSvQPv1QAZQwDQYJKoZIhvcNAQELBQAwFjEUMBIGA1UEAxMLZXhhbXBs\n" + + "ZS5jb20wIBcNMTgwNDA2MjIwNjU5WhgPOTk5OTEyMzEyMzU5NTlaMBYxFDASBgNVBAMTC2V4YW1w\n" + + "bGUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAggbWsmDQ6zNzRZ5AW8E3eoGl\n" + + "qWvOBDb5Fs1oBRrVQHuYmVAoaqwDzXYJ0LOwa293AgWEQ1jpcbZ2hpoYQzqEZBTLnFhMrhRFlH6K\n" + + "bJND8Y33kZ/iSVBBDuGbdSbJShlM+4WwQ9IAso4MZ4vW3S1iv5fGGpLgbtXRmBf/RU8omN0Gijlv\n" + + "WlLWHWijLN8xQtySFuBQ7ssW8RcKAary3pUm6UUQB+Co6lnfti0Tzag8PgjhAJq2Z3wbsGRnP2YS\n" + + "vYoaK6qzmHXRYlp/PxrjBAZAmkLJs4YTm/XFF+fkeYx4i9zqHbyone5yerRibsHaXZWLnUL+rFoe\n" + + "MdKvr0VS3sGmhQIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQADQi441pKmXf9FvUV5EHU4v8nJT9Iq\n" + + "yqwsKwXnr7AsUlDGHBD7jGrjAXnG5rGxuNKBQ35wRxJATKrUtyaquFUL6H8O6aGQehiFTk6zmPbe\n" + + "12Gu44vqqTgIUxnv3JQJiox8S2hMxsSddpeCmSdvmalvD6WG4NthH6B9ZaBEiep1+0s0RUaBYn73\n" + + "I7CCUaAtbjfR6pcJjrFk5ei7uwdQZFSJtkP2z8r7zfeANJddAKFlkaMWn7u+OIVuB4XPooWicObk\n" + + "NAHFtP65bocUYnDpTVdiyvn8DdqyZ/EO8n1bBKBzuSLplk2msW4pdgaFgY7Vw/0wzcFXfUXmL1uy\n" + + "G8sQD/wx\n" + + "-----END CERTIFICATE-----"; + + private static final String KEY = "-----BEGIN PRIVATE KEY-----\n" + + "MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCCBtayYNDrM3NFnkBbwTd6gaWp\n" + + "a84ENvkWzWgFGtVAe5iZUChqrAPNdgnQs7Brb3cCBYRDWOlxtnaGmhhDOoRkFMucWEyuFEWUfops\n" + + "k0PxjfeRn+JJUEEO4Zt1JslKGUz7hbBD0gCyjgxni9bdLWK/l8YakuBu1dGYF/9FTyiY3QaKOW9a\n" + + "UtYdaKMs3zFC3JIW4FDuyxbxFwoBqvLelSbpRRAH4KjqWd+2LRPNqDw+COEAmrZnfBuwZGc/ZhK9\n" + + "ihorqrOYddFiWn8/GuMEBkCaQsmzhhOb9cUX5+R5jHiL3OodvKid7nJ6tGJuwdpdlYudQv6sWh4x\n" + + "0q+vRVLewaaFAgMBAAECggEAP8tPJvFtTxhNJAkCloHz0D0vpDHqQBMgntlkgayqmBqLwhyb18pR\n" + + "i0qwgh7HHc7wWqOOQuSqlEnrWRrdcI6TSe8R/sErzfTQNoznKWIPYcI/hskk4sdnQ//Yn9/Jvnsv\n" + + "U/BBjOTJxtD+sQbhAl80JcA3R+5sArURQkfzzHOL/YMqzAsn5hTzp7HZCxUqBk3KaHRxV7NefeOE\n" + + "xlZuWSmxYWfbFIs4kx19/1t7h8CHQWezw+G60G2VBtSBBxDnhBWvqG6R/wpzJ3nEhPLLY9T+XIHe\n" + + "ipzdMOOOUZorfIg7M+pyYPji+ZIZxIpY5OjrOzXHciAjRtr5Y7l99K1CG1LguQKBgQDrQfIMxxtZ\n" + + 
"vxU/1cRmUV9l7pt5bjV5R6byXq178LxPKVYNjdZ840Q0/OpZEVqaT1xKVi35ohP1QfNjxPLlHD+K\n" + + "iDAR9z6zkwjIrbwPCnb5kuXy4lpwPcmmmkva25fI7qlpHtbcuQdoBdCfr/KkKaUCMPyY89LCXgEw\n" + + "5KTDj64UywKBgQCNfbO+eZLGzhiHhtNJurresCsIGWlInv322gL8CSfBMYl6eNfUTZvUDdFhPISL\n" + + "UljKWzXDrjw0ujFSPR0XhUGtiq89H+HUTuPPYv25gVXO+HTgBFZEPl4PpA+BUsSVZy0NddneyqLk\n" + + "42Wey9omY9Q8WsdNQS5cbUvy0uG6WFoX7wKBgQDZ1jpW8pa0x2bZsQsm4vo+3G5CRnZlUp+XlWt2\n" + + "dDcp5dC0xD1zbs1dc0NcLeGDOTDv9FSl7hok42iHXXq8AygjEm/QcuwwQ1nC2HxmQP5holAiUs4D\n" + + "WHM8PWs3wFYPzE459EBoKTxeaeP/uWAn+he8q7d5uWvSZlEcANs/6e77eQKBgD21Ar0hfFfj7mK8\n" + + "9E0FeRZBsqK3omkfnhcYgZC11Xa2SgT1yvs2Va2n0RcdM5kncr3eBZav2GYOhhAdwyBM55XuE/sO\n" + + "eokDVutNeuZ6d5fqV96TRaRBpvgfTvvRwxZ9hvKF4Vz+9wfn/JvCwANaKmegF6ejs7pvmF3whq2k\n" + + "drZVAoGAX5YxQ5XMTD0QbMAl7/6qp6S58xNoVdfCkmkj1ZLKaHKIjS/benkKGlySVQVPexPfnkZx\n" + + "p/Vv9yyphBoudiTBS9Uog66ueLYZqpgxlM/6OhYg86Gm3U2ycvMxYjBM1NFiyze21AqAhI+HX+Ot\n" + + "mraV2/guSgDgZAhukRZzeQ2RucI=\n" + + "-----END PRIVATE KEY-----"; static { Throwable cause = null; @@ -256,7 +233,7 @@ public final class OpenSsl { "AEAD-CHACHA20-POLY1305-SHA256"); } - PemEncoded privateKey = PemPrivateKey.toPEM(UnpooledByteBufAllocator.DEFAULT, true, KEY_BYTES); + PemEncoded privateKey = PemPrivateKey.valueOf(KEY.getBytes(CharsetUtil.US_ASCII)); try { X509Certificate certificate = selfSignedCertificate(); certBio = ReferenceCountedOpenSslContext.toBIO(ByteBufAllocator.DEFAULT, certificate); @@ -388,7 +365,9 @@ public Boolean run() { * Returns a self-signed {@link X509Certificate} for {@code netty.io}. 
*/ static X509Certificate selfSignedCertificate() throws CertificateException { - return (X509Certificate) SslContext.X509_CERT_FACTORY.generateCertificate(new ByteArrayInputStream(CERT_BYTES)); + return (X509Certificate) SslContext.X509_CERT_FACTORY.generateCertificate( + new ByteArrayInputStream(CERT.getBytes(CharsetUtil.US_ASCII)) + ); } private static boolean doesSupportOcsp() { diff --git a/handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java b/handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java index 9f010ce8ec7..259bd6bc204 100644 --- a/handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java +++ b/handler/src/main/java/io/netty/handler/ssl/util/SelfSignedCertificate.java @@ -67,6 +67,14 @@ public final class SelfSignedCertificate { private static final Date DEFAULT_NOT_AFTER = new Date(SystemPropertyUtil.getLong( "io.netty.selfSignedCertificate.defaultNotAfter", 253402300799000L)); + /** + * FIPS 140-2 encryption requires the key length to be 2048 bits or greater. + * Let's use that as a sane default but allow the default to be set dynamically + * for those that need more stringent security requirements. + */ + private static final int DEFAULT_KEY_LENGTH_BITS = + SystemPropertyUtil.getInt("io.netty.handler.ssl.util.selfSignedKeyStrength", 2048); + private final File certificate; private final File privateKey; private final X509Certificate cert; @@ -107,7 +115,7 @@ public SelfSignedCertificate(String fqdn) throws CertificateException { public SelfSignedCertificate(String fqdn, Date notBefore, Date notAfter) throws CertificateException { // Bypass entropy collection by using insecure random generator. // We just want to generate it without any delay because it's for testing purposes only. - this(fqdn, ThreadLocalInsecureRandom.current(), 1024, notBefore, notAfter); + this(fqdn, ThreadLocalInsecureRandom.current(), DEFAULT_KEY_LENGTH_BITS, notBefore, notAfter); } /**
null
val
val
"2019-04-08T15:20:14"
"2019-04-05T22:25:36Z"
fzakaria
val
netty/netty/8912_9020
netty/netty
netty/netty/8912
netty/netty/9020
[ "keyword_issue_to_pr", "keyword_pr_to_issue" ]
e63c596f24d2aa0a52f2a8132ad1283952b45aea
51112e2b36ec5550a73d72bfc59f4523f7b8ec27
[ "I mean the idleStateHandler triggers when a channel has not performed a read or write action([in the doc](https://netty.io/4.0/api/io/netty/handler/timeout/IdleStateHandler.html))!\r\n\r\nYou specify that when the channel is not read for 10 seconds then trigger and in the client, you only read all 10 seconds (without subtracting the delay that it took). Of curse it will trigger the IdleStarthandler I mean what did you expect?\r\n\r\nFor TCP keep alive the server can not really check if the connection is open (the client crashed) without pinging it. This is why [wikipedia](https://en.wikipedia.org/wiki/Keepalive#TCP_keepalive) sais: \"TCP keepalive period is required to be configurable and by default is set to no less than 2 hours.\"", "The underlying socket is still being used, the client is still receiving data. Therefor the idelState should not be triggered.\r\nIf you check my gist you can verify this your self. You can also use tcpdump to verify this behavior.", "I reproduced the issue, I found that if the client's message processing speed is very slow (TCP window size is 0), it will trigger multiple Idle messages.\r\n\r\nBy looking at the source code, I found that if the client's processing speed was very slow, `IdleStateHandler` would trigger an Idle event to notify the user to avoid an `OutOfMemoryError` error .\r\n\r\nBut actually in my tests, if I encounter a slow client, Idle events will be triggered many times, which I think is not reasonable.\r\n", "When using `IdleStateHandler` with `observeOutput=true`, `IdleStateHandler` detects the output data changes by executing method `hasOutputChanged`\r\n\r\n```java\r\nprivate boolean hasOutputChanged(ChannelHandlerContext ctx, boolean first) {\r\n if (observeOutput) {\r\n Channel channel = ctx.channel();\r\n Unsafe unsafe = channel.unsafe();\r\n ChannelOutboundBuffer buf = unsafe.outboundBuffer();\r\n\r\n if (buf != null) {\r\n int messageHashCode = System.identityHashCode(buf.current());\r\n // pending 
write bytes will remain unchanged when flushing a large byte buffer entry.\r\n long pendingWriteBytes = buf.totalPendingWriteBytes();\r\n\r\n if (messageHashCode != lastMessageHashCode || pendingWriteBytes != lastPendingWriteBytes) {\r\n lastMessageHashCode = messageHashCode;\r\n lastPendingWriteBytes = pendingWriteBytes;\r\n\r\n if (!first) {\r\n return true;\r\n }\r\n }\r\n }\r\n }\r\n return false;\r\n}\r\n```\r\nI noticed that `ChannelOutboundBuffer` total pending write bytes size will remain unchanged when flushing a large byte buffer entry. It doesn't change until all buffer data in entry is flushed into channel, so when IdleStateChannel encounters a slow client, hasOutputChanged returns false, triggering the IDLE event. \r\n\r\nI just submitted a related issue #9005\r\n", "Hi,\r\nI just rerun the test I attached to this issue with netty-all:4.1.35.Final and this bug has not been fixed in #9020 \r\n\r\nIf the server writes a response that has an `ByteBuf` that is larger then what the client can download in the `IdleStateHandler` idle timeout, it will trigger the timeout." ]
[ "private ?", "nit you can remove the `else` as you return in the if block", "nit you can remove the else as you return in the previous if block ", "...if nothing was flushed before for the current message or there is no current message", "Done", "Done", "Done", "Done", "I would perform the assignment to flushProgress and the check directly on the if expression to avoid the currentProgress() load to happen regardless the pendingWriteBytes value", "Do you mean to change it to this way ?\r\n```java\r\nif (messageHashCode != lastMessageHashCode ||\r\n pendingWriteBytes != lastPendingWriteBytes || buf.currentProgress() != lastFlushProgress)\r\n```\r\nThis helps to reduce calls to the `currentProgress()` method, but in the if block we need to assign the current progress to `lastFlushProgress `, so there will be an additional call.\r\n\r\nHow about changing it to this way? Although poor readability.\r\n\r\n```java\r\nlong flushProgress = lastFlushProgress;\r\nif (messageHashCode != lastMessageHashCode ||\r\n pendingWriteBytes != lastPendingWriteBytes || (flushProgress = buf.currentProgress()) != lastFlushPorgress) {\r\n lastFlushPorgress = flushProgress;\r\n}\r\n```", "Sorry for my bad explanation of the change request :P\r\nYes I was meaning exactly the second case you have shown, lgtm", "as the is not a volatile access I think I would just keep it as it is as it is easier to maintain .", "Or alternative (which may be better) just use a second if block in case the previous branch is true.", "@normanmaurer Just like this ?\r\n```java\r\nif (messageHashCode != lastMessageHashCode || pendingWriteBytes != lastPendingWriteBytes) {\r\n // some code\r\n if(!first){\r\n return true;\r\n }\r\n}\r\n\r\nlong flushProgress = buf.currentProgress();\r\nif ( lastFlushProgress != flushProgress) {\r\n lastFlushProgress = flushProgress;\r\n if(!first){\r\n return true;\r\n }\r\n}\r\n```", "@qeesung yes", "Okay, let me update it.", "thanks a lot! 
Just make sure you also make it pass in terms of check style as your code has some check style problems ;)", "Done :-)" ]
"2019-04-07T14:54:56Z"
[]
Handling slow clients / detecting idleness
### Expected behavior When using `IdleStateHandler` with `observeOutput=true` the `IdleStateEvent.WRITER_IDLE` or `IdleStateEvent.ALL_IDLE` should not trigger because of slow clients. This is same issue as in #6150. ### Actual behavior The `IdleStateHandler` triggers `IdleStateEvent.WRITER_IDLE` or `IdleStateEvent.ALL_IDLE` even though Netty is still writing to the client socket. ### Steps to reproduce Create an minimal server, use `IdleStateHandler` to detect idleness. Create an client that slowly downloads the content from the server. Watch the `IdleStateHandler` trigger before the download is complete. ### Minimal yet complete reproducer code (or URL to code) https://gist.github.com/magnus-gustafsson/c009c04aedf14a8b426dcb48450cc7d4 When this sample is run with the `IdleStateHandler` in the pipeline we get the following output: ``` Mon Mar 04 09:02:28 CET 2019: Server - listening on: http://localhost:8082/ Mon Mar 04 09:02:28 CET 2019: Client - connecting Mon Mar 04 09:02:28 CET 2019: Server - Received request, writing response with size 67108864 on : [id: 0xb4450a55, L:/127.0.0.1:8082 - R:/127.0.0.1:38182] Mon Mar 04 09:02:38 CET 2019: Server - Channel is idle, closing it: [id: 0xb4450a55, L:/127.0.0.1:8082 - R:/127.0.0.1:38182], ALL_IDLE Mon Mar 04 09:02:38 CET 2019: Server - : Write complete. Success was false on [id: 0xb4450a55, L:/127.0.0.1:8082 ! R:/127.0.0.1:38182] Mon Mar 04 09:02:41 CET 2019: Client - Done : read 20998517 ``` There we see that the `ALL_IDLE` event is triggered. This results in that the client only manage to download 20998517 bytes. However, if we remove `IdleStateHandler` from the pipeline we get: ``` Mon Mar 04 09:05:05 CET 2019: Server - listening on: http://localhost:8082/ Mon Mar 04 09:05:05 CET 2019: Client - connecting Mon Mar 04 09:05:05 CET 2019: Server - Received request, writing response with size 67108864 on : [id: 0xf8335944, L:/127.0.0.1:8082 - R:/127.0.0.1:38252] Mon Mar 04 09:05:44 CET 2019: Server - : Write complete. 
Success was true on [id: 0xf8335944, L:/127.0.0.1:8082 ! R:/127.0.0.1:38252] Mon Mar 04 09:05:47 CET 2019: Client - Done : read 67108949 ``` The client manage to download everything from the server. ### Netty version Same behavior in both 4.1.7.Final and 4.1.33.Final ### JVM version (e.g. `java -version`) java version "1.8.0_161" Java(TM) SE Runtime Environment (build 1.8.0_161-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode) ### OS version (e.g. `uname -a`) Linux magnusg 4.9.0-040900-generic #201612111631 SMP Sun Dec 11 21:33:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[ "handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java", "transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java" ]
[ "handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java", "transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java" ]
[ "handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java b/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java index 299e4c7ec79..ee80dcf7e0d 100644 --- a/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java +++ b/handler/src/main/java/io/netty/handler/timeout/IdleStateHandler.java @@ -129,6 +129,7 @@ public void operationComplete(ChannelFuture future) throws Exception { private long lastChangeCheckTimeStamp; private int lastMessageHashCode; private long lastPendingWriteBytes; + private long lastFlushProgress; /** * Creates a new instance firing {@link IdleStateEvent}s. @@ -399,6 +400,7 @@ private void initOutputChanged(ChannelHandlerContext ctx) { if (buf != null) { lastMessageHashCode = System.identityHashCode(buf.current()); lastPendingWriteBytes = buf.totalPendingWriteBytes(); + lastFlushProgress = buf.currentProgress(); } } } @@ -443,6 +445,15 @@ private boolean hasOutputChanged(ChannelHandlerContext ctx, boolean first) { return true; } } + + long flushProgress = buf.currentProgress(); + if (flushProgress != lastFlushProgress) { + lastFlushProgress = flushProgress; + + if (!first) { + return true; + } + } } } diff --git a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java index d3a934a8297..e042490f30b 100644 --- a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java +++ b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java @@ -220,6 +220,18 @@ public Object current() { return entry.msg; } + /** + * Return the current message flush progress. + * @return {@code 0} if nothing was flushed before for the current message or there is no current message + */ + public long currentProgress() { + Entry entry = flushedEntry; + if (entry == null) { + return 0; + } + return entry.progress; + } + /** * Notify the {@link ChannelPromise} of the current message about writing progress. 
*/ @@ -227,9 +239,9 @@ public void progress(long amount) { Entry e = flushedEntry; assert e != null; ChannelPromise p = e.promise; + long progress = e.progress + amount; + e.progress = progress; if (p instanceof ChannelProgressivePromise) { - long progress = e.progress + amount; - e.progress = progress; ((ChannelProgressivePromise) p).tryProgress(progress, e.total); } }
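The core idea of the patch above is that idleness detection now also watches per-message flush progress, not just the queued byte count. The following is a simplified standalone model of that check (it is not Netty's `IdleStateHandler`; the class name and fields are illustrative only): a large buffer draining slowly keeps the "changed" signal alive even while total pending bytes stay constant.

```java
public class OutputChangeModel {
    // Baselines from the previous idle check; -1 means "not initialized yet".
    private long lastPendingWriteBytes = -1;
    private long lastFlushProgress = -1;

    // Returns true when either the queued byte count or the per-message
    // flush progress moved since the previous check, i.e. output is not idle.
    boolean hasOutputChanged(long pendingWriteBytes, long flushProgress) {
        boolean changed = false;
        if (pendingWriteBytes != lastPendingWriteBytes) {
            lastPendingWriteBytes = pendingWriteBytes;
            changed = true;
        }
        if (flushProgress != lastFlushProgress) {
            lastFlushProgress = flushProgress;
            changed = true;
        }
        return changed;
    }

    public static void main(String[] args) {
        OutputChangeModel model = new OutputChangeModel();
        model.hasOutputChanged(100, 0); // establish a baseline
        // Pending bytes unchanged, but flush progress advanced: still active.
        System.out.println(model.hasOutputChanged(100, 25));
    }
}
```

Before the fix, only the pending-byte count (and the current message's identity hash) was compared, so a slow client draining one large `ByteBuf` looked idle and triggered spurious `IdleStateEvent`s.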
diff --git a/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java b/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java index a27364f4393..668f3f1ea67 100644 --- a/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/timeout/IdleStateHandlerTest.java @@ -236,6 +236,7 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 1 })); channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 2 })); channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[] { 3 })); + channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[5 * 1024])); // Establish a baseline. We're not consuming anything and let it idle once. idleStateHandler.tickRun(); @@ -283,6 +284,30 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc assertEquals(0, events.size()); assertEquals(26L, idleStateHandler.tick(TimeUnit.SECONDS)); // 23s + 2s + 1s + // Consume part of the message every 2 seconds, then be idle for 1 seconds, + // then run the task and we should get an IdleStateEvent because the first trigger + idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consumePart(1024)); + idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consumePart(1024)); + idleStateHandler.tickRun(1L, TimeUnit.SECONDS); + assertEquals(1, events.size()); + assertEquals(31L, idleStateHandler.tick(TimeUnit.SECONDS)); // 26s + 2s + 2s + 1s + events.clear(); + + // Consume part of the message every 2 seconds, then be idle for 1 seconds, + // then consume all the rest of the message, then run the task and we shouldn't + // get an IdleStateEvent because the data is flowing and we haven't been idle for long enough! 
+ idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consumePart(1024)); + idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consumePart(1024)); + idleStateHandler.tickRun(1L, TimeUnit.SECONDS); + assertEquals(0, events.size()); + assertEquals(36L, idleStateHandler.tick(TimeUnit.SECONDS)); // 31s + 2s + 2s + 1s + idleStateHandler.tick(2L, TimeUnit.SECONDS); + assertNotNullAndRelease(channel.consumePart(1024)); + // There are no messages left! Advance the ticker by 3 seconds, // attempt a consume() but it will be null, then advance the // ticker by an another 2 seconds and we should get an IdleStateEvent @@ -292,7 +317,7 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc idleStateHandler.tickRun(2L, TimeUnit.SECONDS); assertEquals(1, events.size()); - assertEquals(31L, idleStateHandler.tick(TimeUnit.SECONDS)); // 26s + 3s + 2s + assertEquals(43L, idleStateHandler.tick(TimeUnit.SECONDS)); // 36s + 2s + 3s + 2s // q.e.d. } finally { @@ -379,7 +404,7 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception { // the messages in the ChannelOutboundBuffer. } - public Object consume() { + private Object consume() { ChannelOutboundBuffer buf = unsafe().outboundBuffer(); if (buf != null) { Object msg = buf.current(); @@ -391,5 +416,24 @@ public Object consume() { } return null; } + + /** + * Consume the part of a message. + * + * @param byteCount count of byte to be consumed + * @return the message currently being consumed + */ + private Object consumePart(int byteCount) { + ChannelOutboundBuffer buf = unsafe().outboundBuffer(); + if (buf != null) { + Object msg = buf.current(); + if (msg != null) { + ReferenceCountUtil.retain(msg); + buf.removeBytes(byteCount); + return msg; + } + } + return null; + } } }
train
val
"2019-04-09T09:44:23"
"2019-03-04T08:10:53Z"
magnus-gustafsson
val
netty/netty/9031_9034
netty/netty
netty/netty/9031
netty/netty/9034
[ "keyword_pr_to_issue" ]
6ed203b7baf4c5797a6692975059b05d26821402
bedc8a6ea55af69eea382bb802b0b199bf1c9fad
[ "@Mr00Anderson ca. == approximately . So yep PRs welcome " ]
[]
"2019-04-11T19:55:18Z"
[]
Typo Documentation - MessageSizeEstimator.class
https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/transport/src/main/java/io/netty/channel/MessageSizeEstimator.java#L19 "Responsible to estimate size of a message. The size represent how much memory the message will ca. reserve in memory." I was going through the source and trying to determine where MessageSizeEstimator.class is used and what "ca" in the documentation means. I will make the pull request once I know what "ca" means, if it needs to be corrected.
[ "transport/src/main/java/io/netty/channel/MessageSizeEstimator.java" ]
[ "transport/src/main/java/io/netty/channel/MessageSizeEstimator.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/MessageSizeEstimator.java b/transport/src/main/java/io/netty/channel/MessageSizeEstimator.java index 5c84927781a..92f164dc5d5 100644 --- a/transport/src/main/java/io/netty/channel/MessageSizeEstimator.java +++ b/transport/src/main/java/io/netty/channel/MessageSizeEstimator.java @@ -16,8 +16,8 @@ package io.netty.channel; /** - * Responsible to estimate size of a message. The size represent how much memory the message will ca. reserve in - * memory. + * Responsible to estimate the size of a message. The size represents approximately how much memory the message will + * reserve in memory. */ public interface MessageSizeEstimator {
null
val
val
"2019-04-11T18:54:31"
"2019-04-11T17:14:39Z"
Mr00Anderson
val
netty/netty/9092_9098
netty/netty
netty/netty/9092
netty/netty/9098
[ "keyword_pr_to_issue" ]
ec62af01c7af372d853bd1a2d2d981a897263b6d
00a9a25f29cf07728794089affdd735af29209de
[ "Nope will look into it next week.\n\nDoes it work when calling channel.close() ?\n\n> Am 26.04.2019 um 14:40 schrieb Michael Nitschinger <notifications@github.com>:\n> \n> In one of my unit tests I have code like:\n> \n> final Throwable expectedCause = new Exception(\"something failed\");\n> EmbeddedChannel channel = new EmbeddedChannel(new ChannelOutboundHandlerAdapter() {\n> @Override\n> public void close(ChannelHandlerContext ctx, ChannelPromise promise) {\n> promise.tryFailure(expectedCause);\n> }\n> });\n> To simulate a failing channel close. This worked great up until 4.1.34.Final, but in the latest version 4.1.35.Final this test failed.\n> \n> Turns out when I debug into it, the close method of the ChannelOutboundHandlerAdapter is never called on channel.disconnect() and as a result my assertions fail since the channel close completes successfully instead of triggering the failure.\n> \n> Is this expected?\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Yes, when calling `channel.close()` my handler is triggered, when calling `channel.disconnect()` it is not.", "Ok thanks will check but not before Monday. Feel free to investigate in the meantime if you have cycles\n\n> Am 26.04.2019 um 14:56 schrieb Michael Nitschinger <notifications@github.com>:\n> \n> Yes, when calling channel.close() my handler is triggered, when calling channel.disconnect() it is not.\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer no rush. won't have time to dig into it before next week either - let's sync up then.", "@normanmaurer @daschl see #9098" ]
[]
"2019-04-27T10:44:21Z"
[ "defect", "regression" ]
EmbeddedChannel Close Regression with 4.1.35.Final
In one of my unit tests I have code like: ```java final Throwable expectedCause = new Exception("something failed"); EmbeddedChannel channel = new EmbeddedChannel(new ChannelOutboundHandlerAdapter() { @Override public void close(ChannelHandlerContext ctx, ChannelPromise promise) { promise.tryFailure(expectedCause); } }); channel.disconnect().addListener(future -> { // future should fail, but is successful since 4.1.35.Final }); ``` To simulate a failing channel close. This worked great up until 4.1.34.Final, but in the latest version 4.1.35.Final this test failed. Turns out when I debug into it, the close method of the `ChannelOutboundHandlerAdapter` is never called on `channel.disconnect()` and as a result my assertions fail since the channel close completes successfully instead of triggering the failure. Is this expected?
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java" ]
[ "transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java" ]
[ "transport/src/test/java/io/netty/channel/embedded/EmbeddedChannelTest.java" ]
diff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java index 7c68b507049..1ba26fc44ca 100644 --- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java +++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java @@ -555,6 +555,11 @@ private void invokeConnect(SocketAddress remoteAddress, SocketAddress localAddre @Override public ChannelFuture disconnect(final ChannelPromise promise) { + if (!channel().metadata().hasDisconnect()) { + // Translate disconnect to close if the channel has no notion of disconnect-reconnect. + // So far, UDP/IP is the only transport that has such behavior. + return close(promise); + } if (isNotValidPromise(promise, false)) { // cancelled return promise; @@ -563,22 +568,12 @@ public ChannelFuture disconnect(final ChannelPromise promise) { final AbstractChannelHandlerContext next = findContextOutbound(MASK_DISCONNECT); EventExecutor executor = next.executor(); if (executor.inEventLoop()) { - // Translate disconnect to close if the channel has no notion of disconnect-reconnect. - // So far, UDP/IP is the only transport that has such behavior. - if (!channel().metadata().hasDisconnect()) { - next.invokeClose(promise); - } else { - next.invokeDisconnect(promise); - } + next.invokeDisconnect(promise); } else { safeExecute(executor, new Runnable() { @Override public void run() { - if (!channel().metadata().hasDisconnect()) { - next.invokeClose(promise); - } else { - next.invokeDisconnect(promise); - } + next.invokeDisconnect(promise); } }, promise, null); }
diff --git a/transport/src/test/java/io/netty/channel/embedded/EmbeddedChannelTest.java b/transport/src/test/java/io/netty/channel/embedded/EmbeddedChannelTest.java index a5dae45afa4..7c715ef13a5 100644 --- a/transport/src/test/java/io/netty/channel/embedded/EmbeddedChannelTest.java +++ b/transport/src/test/java/io/netty/channel/embedded/EmbeddedChannelTest.java @@ -281,6 +281,17 @@ public void testHasNoDisconnect() { assertNull(handler.pollEvent()); } + @Test + public void testHasNoDisconnectSkipDisconnect() throws InterruptedException { + EmbeddedChannel channel = new EmbeddedChannel(false, new ChannelOutboundHandlerAdapter() { + @Override + public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception { + promise.tryFailure(new Throwable()); + } + }); + assertFalse(channel.disconnect().isSuccess()); + } + @Test public void testFinishAndReleaseAll() { ByteBuf in = Unpooled.buffer();
val
val
"2019-04-26T01:26:08"
"2019-04-26T13:40:12Z"
daschl
val
netty/netty/9099_9103
netty/netty
netty/netty/9099
netty/netty/9103
[ "keyword_issue_to_pr", "keyword_pr_to_issue" ]
b9c4e17291a5e507877b9c8490e62d65426bce35
cb85e03d728e8e24be6b9b05070643779948785a
[ "- Which version do you use?\r\n- -1 means no substring found.\r\n- 6 in array `bytes` of your method `test004()` is a byte which will be interpreted `\\u0006` in AsciiString, while \"6\" is a string whose ascii value is `\\u0036`. Combination of these two fact means the result is -1 even if the method `lastIndexOf` is bug free.\r\n- `byte[] bytes = new byte['1', '2', '3', '4', '5', '6', '7', '8', '9'] ` may be what you want.\r\n", "\r\n- i use netty-4.1.34 version.\r\n\r\n- Based on your comments, I added a test case:\r\n@Test\r\n public void test005(){\r\n AsciiString ascii = new AsciiString(\"12345678910\");\r\n int b = ascii.lastIndexOf(\"6\");\r\n System.out.println(b);\r\n\r\n }\r\n But the result is still -1, this is not what I expected.In my opinion, this should return 5.\r\n\r\n I looked at the source code,\r\n\r\n public int lastIndexOf(CharSequence string) {\r\n // Use count instead of count - 1 so lastIndexOf(\"\") answers count\r\n return lastIndexOf(string, length());\r\n }\r\n\r\n lastIndexOf(string, length()) should be modified to lastIndexOf(string, 0) ?\r\n", "@xiaoheng1 You're right. I also checked the source code. This should be an issue, I will do PR later.", "OK, Thanks!", "Thanks @xiaoheng1, have opened #9103 to address this.", "OK, Thanks!" ]
[]
"2019-04-29T07:14:15Z"
[ "defect" ]
The public int lastIndexOf(CharSequence string) method of the AsciiString class always returns -1
```java
public int lastIndexOf(CharSequence string) {
    // Use count instead of count - 1 so lastIndexOf("") answers count
    return lastIndexOf(string, length());
}
```

```java
@Test
public void test004() {
    byte[] bytes = new byte[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    AsciiString ascii = new AsciiString(bytes, 0, 10, false);
    int b = ascii.lastIndexOf("6");
    System.out.println(b);
}
```

the result is -1.
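The root cause: `lastIndexOf(string)` passes `length()` as the start index, and the old bounds check `subCount > length - start` then rejects every non-empty substring. The fix clamps `start` to `length - subCount` and scans backwards from there. A stand-alone sketch of the corrected search (plain Java over a byte array; not the Netty class itself):

```java
// Stand-alone sketch of the corrected lastIndexOf over ASCII bytes.
// Mirrors the intent of the fix: clamp start to (length - subCount) and
// scan backwards from there. Names are illustrative, not the Netty API.
public final class AsciiSearch {
    public static int lastIndexOf(byte[] value, String sub, int start) {
        int subCount = sub.length();
        start = Math.min(start, value.length - subCount);
        if (start < 0) {
            return -1; // substring longer than the searched region
        }
        if (subCount == 0) {
            return start; // empty string matches at the clamped position
        }
        for (int i = start; i >= 0; --i) {
            int o = 0;
            while (o < subCount && (char) (value[i + o] & 0xFF) == sub.charAt(o)) {
                ++o;
            }
            if (o == subCount) {
                return i;
            }
        }
        return -1;
    }
}
```

For the reproducer above, `lastIndexOf("12345678910".getBytes(), "6", 11)` now returns 5 instead of -1.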
[ "common/src/main/java/io/netty/util/AsciiString.java" ]
[ "common/src/main/java/io/netty/util/AsciiString.java" ]
[ "common/src/test/java/io/netty/util/AsciiStringCharacterTest.java" ]
diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java index 302b36e89bc..248ae88b214 100644 --- a/common/src/main/java/io/netty/util/AsciiString.java +++ b/common/src/main/java/io/netty/util/AsciiString.java @@ -742,7 +742,7 @@ public int indexOf(char ch, int start) { */ public int lastIndexOf(CharSequence string) { // Use count instead of count - 1 so lastIndexOf("") answers count - return lastIndexOf(string, length()); + return lastIndexOf(string, length); } /** @@ -757,23 +757,20 @@ public int lastIndexOf(CharSequence string) { */ public int lastIndexOf(CharSequence subString, int start) { final int subCount = subString.length(); + start = Math.min(start, length - subCount); if (start < 0) { - start = 0; - } - if (subCount <= 0) { - return start < length ? start : length; - } - if (subCount > length - start) { return INDEX_NOT_FOUND; } + if (subCount == 0) { + return start; + } final char firstChar = subString.charAt(0); if (firstChar > MAX_CHAR_VALUE) { return INDEX_NOT_FOUND; } final byte firstCharAsByte = c2b0(firstChar); - final int end = offset + start; - for (int i = offset + length - subCount; i >= end; --i) { + for (int i = offset + start; i >= 0; --i) { if (value[i] == firstCharAsByte) { int o1 = i, o2 = 0; while (++o2 < subCount && b2c(value[++o1]) == subString.charAt(o2)) {
diff --git a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java index c2a835df660..deff4267d65 100644 --- a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java +++ b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java @@ -356,15 +356,15 @@ public void testStaticIndexOfChar() { @Test public void testLastIndexOfCharSequence() { assertEquals(0, new AsciiString("abcd").lastIndexOf("abcd", 0)); - assertEquals(0, new AsciiString("abcd").lastIndexOf("abc", 0)); - assertEquals(1, new AsciiString("abcd").lastIndexOf("bcd", 0)); - assertEquals(1, new AsciiString("abcd").lastIndexOf("bc", 0)); - assertEquals(5, new AsciiString("abcdabcd").lastIndexOf("bcd", 0)); - assertEquals(0, new AsciiString("abcd", 1, 2).lastIndexOf("bc", 0)); - assertEquals(0, new AsciiString("abcd", 1, 3).lastIndexOf("bcd", 0)); - assertEquals(1, new AsciiString("abcdabcd", 4, 4).lastIndexOf("bcd", 0)); + assertEquals(0, new AsciiString("abcd").lastIndexOf("abc", 4)); + assertEquals(1, new AsciiString("abcd").lastIndexOf("bcd", 4)); + assertEquals(1, new AsciiString("abcd").lastIndexOf("bc", 4)); + assertEquals(5, new AsciiString("abcdabcd").lastIndexOf("bcd", 10)); + assertEquals(0, new AsciiString("abcd", 1, 2).lastIndexOf("bc", 2)); + assertEquals(0, new AsciiString("abcd", 1, 3).lastIndexOf("bcd", 3)); + assertEquals(1, new AsciiString("abcdabcd", 4, 4).lastIndexOf("bcd", 4)); assertEquals(3, new AsciiString("012345").lastIndexOf("345", 3)); - assertEquals(3, new AsciiString("012345").lastIndexOf("345", 0)); + assertEquals(3, new AsciiString("012345").lastIndexOf("345", 6)); // Test with empty string assertEquals(0, new AsciiString("abcd").lastIndexOf("", 0)); @@ -376,7 +376,7 @@ public void testLastIndexOfCharSequence() { assertEquals(-1, new AsciiString("abcdbc").lastIndexOf("bce", 0)); assertEquals(-1, new AsciiString("abcd", 1, 3).lastIndexOf("abc", 0)); assertEquals(-1, new 
AsciiString("abcd", 1, 2).lastIndexOf("bd", 0)); - assertEquals(-1, new AsciiString("012345").lastIndexOf("345", 4)); + assertEquals(-1, new AsciiString("012345").lastIndexOf("345", 2)); assertEquals(-1, new AsciiString("012345").lastIndexOf("abc", 3)); assertEquals(-1, new AsciiString("012345").lastIndexOf("abc", 0)); assertEquals(-1, new AsciiString("012345").lastIndexOf("abcdefghi", 0));
val
val
"2019-04-29T20:45:49"
"2019-04-28T08:36:48Z"
xiaoheng1
val
netty/netty/9131_9132
netty/netty
netty/netty/9131
netty/netty/9132
[ "keyword_pr_to_issue" ]
cb85e03d728e8e24be6b9b05070643779948785a
3221bf6854dcf16329dffe1fa28c6be5e30269b5
[ "Since ```ApplicationProtocolNegotiationHandler``` is just conceptually a special case of a ```Channelinitializer```, similar logic should be used when invoking ```configurePipeline()```, and patterns possible in ```ChannelInitializer``` should also be possible in ```ApplicationProtocolNegotiationHandler```. The only real difference is that the invocation is not triggered by the handler being added, but rather some event a bit further down the line.", "@RoganDawes sounds like a bug... are you interested in providing a PR ?", "Will do.\n\nOn Tue, 07 May 2019 at 10:59 Norman Maurer <notifications@github.com> wrote:\n\n> @RoganDawes <https://github.com/RoganDawes> sounds like a bug... are you\n> interested in providing a PR ?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/netty/netty/issues/9131#issuecomment-489995834>, or mute\n> the thread\n> <https://github.com/notifications/unsubscribe-auth/AABHBC4XOG674D6N6O5EWDLPUFAFHANCNFSM4HLGWTFQ>\n> .\n>\n" ]
[ "can you add `{ ...}` to match our code style ? ", "can you add `{ ...}` to match our code style ? ", "please use `await(...)` and not `getCount`. if you really not want to call `await(...)` just replace the latch usage with an `AtomicBoolean`. ", "I didn't want to use ```await(...)``` because it would introduce a 5 second delay on each invocation. An ```AtomicBoolean``` solves that problem nicely, thanks.", "Resolved by using ```AtomicBoolean```" ]
"2019-05-07T12:38:52Z"
[ "defect" ]
ApplicationProtocolNegotiationHandler loses its place in the context too soon
### Expected behavior ``` @Override protected void configurePipeline(ChannelHandlerContext ctx, String protocol) throws Exception { ctx.pipeline().replace(this, null, initializer); // or ctx.pipeline().addAfter(this, null, initializer); // or ctx.pipeline().addBefore(this, null, initializer); } ``` should work, similarly to how a ```ChannelInitializer``` works. ### Actual behavior An exception is thrown on all "relative" pipeline operations, because the ApplicationProtocolNegotiationHandler instance has already been removed from the pipeline BEFORE ```configurePipeline()``` is called: ```ApplicationProtocolNegotiationHandler.java @Override public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { if (evt instanceof SslHandshakeCompletionEvent) { ctx.pipeline().remove(this); // <-- Removed before calling configurePipeline :-( SslHandshakeCompletionEvent handshakeEvent = (SslHandshakeCompletionEvent) evt; if (handshakeEvent.isSuccess()) { SslHandler sslHandler = ctx.pipeline().get(SslHandler.class); if (sslHandler == null) { throw new IllegalStateException("cannot find a SslHandler in the pipeline (required for " + "application-level protocol negotiation)"); } String protocol = sslHandler.applicationProtocol(); configurePipeline(ctx, protocol != null? protocol : fallbackProtocol); } else { handshakeFailure(ctx, handshakeEvent.cause()); } } ctx.fireUserEventTriggered(evt); } ``` Compare that to ChannelInitializer: ```ChannelInitializer.java @SuppressWarnings("unchecked") private boolean initChannel(ChannelHandlerContext ctx) throws Exception { if (initMap.add(ctx)) { // Guard against re-entrance. try { initChannel((C) ctx.channel()); } catch (Throwable cause) { // Explicitly call exceptionCaught(...) as we removed the handler before calling initChannel(...). // We do so to prevent multiple calls to initChannel(...). 
exceptionCaught(ctx, cause); } finally { ChannelPipeline pipeline = ctx.pipeline(); if (pipeline.context(this) != null) { pipeline.remove(this); } } return true; } return false; } ``` Note that the comment regarding the handler already having been removed in ```ChannelInitializer``` is incorrect. See the following excerpt from current code: ```ChannelInitializer.java /** * {@inheritDoc} If override this method ensure you call super! */ @Override public void handlerAdded(ChannelHandlerContext ctx) throws Exception { if (ctx.channel().isRegistered()) { // This should always be true with our current DefaultChannelPipeline implementation. // The good thing about calling initChannel(...) in handlerAdded(...) is that there will be no ordering // surprises if a ChannelInitializer will add another ChannelInitializer. This is as all handlers // will be added in the expected order. if (initChannel(ctx)) { // We are done with init the Channel, removing the initializer now. removeState(ctx); } } } ``` ### Steps to reproduce Use ```ctx.replace(...)``` in the ```configurePipeline()``` method. ### Minimal yet complete reproducer code (or URL to code) Not available. ### Netty version netty-4.1.36.Final ### JVM version (e.g. `java -version`) java version "11.0.1" 2018-10-16 LTS Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS) Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.1+13-LTS, mixed mode) ### OS version (e.g. `uname -a`) Linux nemesis 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
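The fix brings the handler in line with `ChannelInitializer`: run the user callback first and remove the handler in a `finally` block, so relative pipeline operations (`replace`/`addAfter`/`addBefore` against `this`) can still find it. A stand-alone sketch of that ordering (illustrative types, not the real Netty pipeline):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of the ordering change: the handler must still be in the
// "pipeline" while the user callback runs, and is removed afterwards in a
// finally block. Types here are illustrative, not Netty's.
public final class Negotiator {
    public static List<String> events(boolean removeFirst) {
        List<String> pipeline = new ArrayList<>();
        pipeline.add("negotiator");
        List<String> log = new ArrayList<>();
        // The user callback checks whether it can locate itself, which is
        // what relative pipeline operations such as replace(this, ...) need.
        Consumer<List<String>> configure = p ->
                log.add(p.contains("negotiator") ? "found-self" : "missing-self");
        if (removeFirst) {
            // Old behavior: remove before configuring -> relative ops fail.
            pipeline.remove("negotiator");
            configure.accept(pipeline);
        } else {
            // Fixed behavior: configure first, then remove in finally.
            try {
                configure.accept(pipeline);
            } finally {
                pipeline.remove("negotiator");
            }
        }
        return log;
    }
}
```

Either way the handler ends up removed; only the fixed ordering lets the callback see itself in the pipeline.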
[ "handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java b/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java index 0e3ea0f39a3..3df8cf5cb46 100644 --- a/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java @@ -79,22 +79,29 @@ protected ApplicationProtocolNegotiationHandler(String fallbackProtocol) { @Override public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception { if (evt instanceof SslHandshakeCompletionEvent) { - ctx.pipeline().remove(this); - SslHandshakeCompletionEvent handshakeEvent = (SslHandshakeCompletionEvent) evt; - if (handshakeEvent.isSuccess()) { - SslHandler sslHandler = ctx.pipeline().get(SslHandler.class); - if (sslHandler == null) { - throw new IllegalStateException("cannot find a SslHandler in the pipeline (required for " + - "application-level protocol negotiation)"); + try { + SslHandshakeCompletionEvent handshakeEvent = (SslHandshakeCompletionEvent) evt; + if (handshakeEvent.isSuccess()) { + SslHandler sslHandler = ctx.pipeline().get(SslHandler.class); + if (sslHandler == null) { + throw new IllegalStateException("cannot find a SslHandler in the pipeline (required for " + + "application-level protocol negotiation)"); + } + String protocol = sslHandler.applicationProtocol(); + configurePipeline(ctx, protocol != null ? protocol : fallbackProtocol); + } else { + handshakeFailure(ctx, handshakeEvent.cause()); + } + } catch (Throwable cause) { + exceptionCaught(ctx, cause); + } finally { + ChannelPipeline pipeline = ctx.pipeline(); + if (pipeline.context(this) != null) { + pipeline.remove(this); } - String protocol = sslHandler.applicationProtocol(); - configurePipeline(ctx, protocol != null? 
protocol : fallbackProtocol); - } else { - handshakeFailure(ctx, handshakeEvent.cause()); } } - ctx.fireUserEventTriggered(evt); } @@ -119,6 +126,7 @@ protected void handshakeFailure(ChannelHandlerContext ctx, Throwable cause) thro @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { logger.warn("{} Failed to select the application-level protocol:", ctx.channel(), cause); + ctx.fireExceptionCaught(cause); ctx.close(); } }
diff --git a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java index 8d8003d745e..17d2a6e0e42 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java @@ -16,6 +16,32 @@ package io.netty.handler.ssl; +import static org.hamcrest.CoreMatchers.is; +import static org.hamcrest.CoreMatchers.nullValue; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; +import static org.junit.Assume.assumeTrue; + +import java.io.File; +import java.net.InetSocketAddress; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; + +import javax.net.ssl.SSLEngine; + +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + import io.netty.bootstrap.Bootstrap; import io.netty.bootstrap.ServerBootstrap; import io.netty.buffer.ByteBuf; @@ -44,28 +70,10 @@ import io.netty.util.Mapping; import io.netty.util.ReferenceCountUtil; import io.netty.util.ReferenceCounted; -import io.netty.util.internal.ResourcesUtil; import io.netty.util.concurrent.Promise; import io.netty.util.internal.ObjectUtil; +import io.netty.util.internal.ResourcesUtil; import io.netty.util.internal.StringUtil; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.junit.runners.Parameterized; - -import java.io.File; -import java.net.InetSocketAddress; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.CountDownLatch; -import 
java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicReference; - -import javax.net.ssl.SSLEngine; - -import static org.hamcrest.CoreMatchers.is; -import static org.hamcrest.CoreMatchers.nullValue; -import static org.junit.Assert.*; -import static org.junit.Assume.assumeTrue; @RunWith(Parameterized.class) public class SniHandlerTest { @@ -339,6 +347,8 @@ public void testSniWithApnHandler() throws Exception { SslContext sniContext = makeSslContext(provider, true); final SslContext clientContext = makeSslClientContext(provider, true); try { + final AtomicBoolean serverApnCtx = new AtomicBoolean(false); + final AtomicBoolean clientApnCtx = new AtomicBoolean(false); final CountDownLatch serverApnDoneLatch = new CountDownLatch(1); final CountDownLatch clientApnDoneLatch = new CountDownLatch(1); @@ -363,6 +373,8 @@ protected void initChannel(Channel ch) throws Exception { p.addLast(new ApplicationProtocolNegotiationHandler("foo") { @Override protected void configurePipeline(ChannelHandlerContext ctx, String protocol) { + // addresses issue #9131 + serverApnCtx.set(ctx.pipeline().context(this) != null); serverApnDoneLatch.countDown(); } }); @@ -381,6 +393,8 @@ protected void initChannel(Channel ch) throws Exception { ch.pipeline().addLast(new ApplicationProtocolNegotiationHandler("foo") { @Override protected void configurePipeline(ChannelHandlerContext ctx, String protocol) { + // addresses issue #9131 + clientApnCtx.set(ctx.pipeline().context(this) != null); clientApnDoneLatch.countDown(); } }); @@ -395,6 +409,8 @@ protected void configurePipeline(ChannelHandlerContext ctx, String protocol) { assertTrue(serverApnDoneLatch.await(5, TimeUnit.SECONDS)); assertTrue(clientApnDoneLatch.await(5, TimeUnit.SECONDS)); + assertTrue(serverApnCtx.get()); + assertTrue(clientApnCtx.get()); assertThat(handler.hostname(), is("sni.fake.site")); assertThat(handler.sslContext(), is(sniContext)); } finally {
train
val
"2019-05-13T07:03:32"
"2019-05-07T08:50:59Z"
RoganDawes
val
netty/netty/9159_9160
netty/netty
netty/netty/9159
netty/netty/9160
[ "keyword_pr_to_issue" ]
e348bd9217eb626aa20dfd15d7c8c860e1fd7596
52c53891900ad3d06d8556d5ad5876103fd9b3b2
[ "PR submitted: https://github.com/netty/netty/pull/9160" ]
[ "2019", "just use `assertEquals(expected.key(), actual.key()` .... same for the other lines ", "call `expected.release()` and `actual.release()` in a finally block. ", "2019", "see above", "see above", "package-private ? \r\n\r\nplease add javadocs", "Addressed", "Addressed", "Getting a ref count exception when running mvn test when I do that. The buffers are unpooled.", "Addressed", "Addressed", "Addressed", "Addressed", "hmm can I see the exception ?", "ah I now see why... you sometimes pass the same buffer / request .\r\n\r\nJust change it to when you pass in the same instance to use `retainedDuplicate()`. This will ensure you will also increment the reference count ", "Addressed.\r\nI've added some code to track what refs should be released and cleaned up in a junit `@After` method.", "missing `@Override` ?", "missing `@Override` ?", "honestly I think it would be a lot cleaner if you just release stuff in the tests itself. ", "See above... I would prefer if you just do it directly in the tests ", "no the signature of the methods are different:\r\n```\r\nsuper: void copyMeta(AbstractBinaryMemcacheMessage dst)\r\nthis : void copyMeta(DefaultBinaryMemcacheRequest dst)\r\n```", "no the signature of the methods are different:\r\n```\r\nsuper: void copyMeta(AbstractBinaryMemcacheMessage dst)\r\nthis : void copyMeta(DefaultBinaryMemcacheResponse dst)\r\n```", "sure...", "done." ]
"2019-05-19T23:40:26Z"
[]
codec-memcache: copy, duplicate, replace not copying full request/response metadata
### Expected behavior Calling `copy()`, `duplicate()` or `replace()` on `FullBinaryMemcacheResponse` or `FullBinaryMemcacheRequest` instances should copy status, opCode, etc. that are defined in `AbstractBinaryMemcacheMessage`. ### Actual behavior Calling these methods does not copy the data, and status, opCode, etc. get lost. ### Steps to reproduce 1. Create a `DefaultFullBinaryMemcacheRequest/Response`. 2. Set status, or opCode, ... 3. Call `copy()` 4. Notice the copied instance is missing status, opCode, etc. ### Minimal yet complete reproducer code (or URL to code) Unit test:

```java
@Test
public void test() {
    DefaultFullBinaryMemcacheResponse res = new DefaultFullBinaryMemcacheResponse(
        Unpooled.copiedBuffer("key", CharsetUtil.UTF_8),
        Unpooled.EMPTY_BUFFER, Unpooled.EMPTY_BUFFER);
    res.setStatus((short) 1);
    res.setOpcode((byte) 0x01);
    FullBinaryMemcacheResponse copy = res.copy();
    assertEquals(1, copy.status());
    assertEquals(0x01, copy.opcode());
}
```

### Netty version Latest master. ### JVM version (e.g. `java -version`) `java version "1.8.0_181"` ### OS version (e.g. `uname -a`) `Darwin computer-name 17.7.0 Darwin Kernel Version 17.7.0: Wed Feb 27 00:43:23 PST 2019; root:xnu-4570.71.35~1/RELEASE_X86_64 x86_64`
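The shape of the fix is to funnel `copy()`/`duplicate()`/`replace()` through a single factory that also calls a `copyMeta(...)` helper, instead of constructing the new message from the payload buffers alone. A minimal stand-alone illustration of that pattern (plain Java; `Message` here is a stand-in, not the Netty type):

```java
// Illustrative sketch of the fix's pattern: route every copy path through
// one factory method that also transfers header metadata, rather than
// only copying the payload. Not the actual Netty classes.
public final class Message {
    private final String content;
    byte opcode;
    short status;

    public Message(String content) {
        this.content = content;
    }

    public String content() {
        return content;
    }

    // Before the fix: a copy like this silently drops opcode/status.
    public Message copyContentOnly() {
        return new Message(content);
    }

    // After the fix: every copy path goes through newInstance(...).
    public Message copy() {
        return newInstance(content);
    }

    private Message newInstance(String content) {
        Message m = new Message(content);
        copyMeta(m);
        return m;
    }

    // Single place where header metadata is transferred to the new message.
    private void copyMeta(Message dst) {
        dst.opcode = opcode;
        dst.status = status;
    }
}
```

Centralizing the metadata transfer in one helper is what keeps `copy()`, `duplicate()` and `replace()` from drifting apart again.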
[ "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java" ]
[ "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java", "codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java" ]
[ "codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequestTest.java", "codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponseTest.java" ]
diff --git a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java index c40ddf48d10..c9417e9730d 100644 --- a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java +++ b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/AbstractBinaryMemcacheMessage.java @@ -78,7 +78,7 @@ public BinaryMemcacheMessage setKey(ByteBuf key) { this.key = key; short oldKeyLength = keyLength; keyLength = key == null ? 0 : (short) key.readableBytes(); - totalBodyLength = totalBodyLength + keyLength - oldKeyLength; + totalBodyLength = totalBodyLength + keyLength - oldKeyLength; return this; } @@ -232,4 +232,20 @@ public BinaryMemcacheMessage touch(Object hint) { } return this; } + + /** + * Copies special metadata hold by this instance to the provided instance + * + * @param dst The instance where to copy the metadata of this instance to + */ + void copyMeta(AbstractBinaryMemcacheMessage dst) { + dst.magic = magic; + dst.opcode = opcode; + dst.keyLength = keyLength; + dst.extrasLength = extrasLength; + dst.dataType = dataType; + dst.totalBodyLength = totalBodyLength; + dst.opaque = opaque; + dst.cas = cas; + } } diff --git a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java index 68feb03a30e..bac5845510a 100644 --- a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java +++ b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheRequest.java @@ -92,4 +92,14 @@ public BinaryMemcacheRequest touch(Object hint) { super.touch(hint); return this; } + + /** + * Copies special metadata hold by this instance to the provided instance + * + * @param 
dst The instance where to copy the metadata of this instance to + */ + void copyMeta(DefaultBinaryMemcacheRequest dst) { + super.copyMeta(dst); + dst.reserved = reserved; + } } diff --git a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java index b632dcea849..639913edd9a 100644 --- a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java +++ b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultBinaryMemcacheResponse.java @@ -41,7 +41,7 @@ public DefaultBinaryMemcacheResponse() { /** * Create a new {@link DefaultBinaryMemcacheResponse} with the header and key. * - * @param key the key to use + * @param key the key to use. */ public DefaultBinaryMemcacheResponse(ByteBuf key) { this(key, null); @@ -92,4 +92,14 @@ public BinaryMemcacheResponse touch(Object hint) { super.touch(hint); return this; } + + /** + * Copies special metadata hold by this instance to the provided instance + * + * @param dst The instance where to copy the metadata of this instance to + */ + void copyMeta(DefaultBinaryMemcacheResponse dst) { + super.copyMeta(dst); + dst.status = status; + } } diff --git a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java index dbc5bacbab4..8f6302dff29 100644 --- a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java +++ b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequest.java @@ -102,7 +102,7 @@ public FullBinaryMemcacheRequest copy() { if (extras != null) { extras = extras.copy(); } - return new DefaultFullBinaryMemcacheRequest(key, extras, content().copy()); + return 
newInstance(key, extras, content().copy()); } @Override @@ -115,7 +115,7 @@ public FullBinaryMemcacheRequest duplicate() { if (extras != null) { extras = extras.duplicate(); } - return new DefaultFullBinaryMemcacheRequest(key, extras, content().duplicate()); + return newInstance(key, extras, content().duplicate()); } @Override @@ -133,6 +133,12 @@ public FullBinaryMemcacheRequest replace(ByteBuf content) { if (extras != null) { extras = extras.retainedDuplicate(); } - return new DefaultFullBinaryMemcacheRequest(key, extras, content); + return newInstance(key, extras, content); + } + + private DefaultFullBinaryMemcacheRequest newInstance(ByteBuf key, ByteBuf extras, ByteBuf content) { + DefaultFullBinaryMemcacheRequest newInstance = new DefaultFullBinaryMemcacheRequest(key, extras, content); + copyMeta(newInstance); + return newInstance; } } diff --git a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java index 734cba8ffe9..a8853a58cca 100644 --- a/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java +++ b/codec-memcache/src/main/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponse.java @@ -102,7 +102,7 @@ public FullBinaryMemcacheResponse copy() { if (extras != null) { extras = extras.copy(); } - return new DefaultFullBinaryMemcacheResponse(key, extras, content().copy()); + return newInstance(key, extras, content().copy()); } @Override @@ -115,7 +115,7 @@ public FullBinaryMemcacheResponse duplicate() { if (extras != null) { extras = extras.duplicate(); } - return new DefaultFullBinaryMemcacheResponse(key, extras, content().duplicate()); + return newInstance(key, extras, content().duplicate()); } @Override @@ -133,6 +133,12 @@ public FullBinaryMemcacheResponse replace(ByteBuf content) { if (extras != null) { extras = 
extras.retainedDuplicate(); } - return new DefaultFullBinaryMemcacheResponse(key, extras, content); + return newInstance(key, extras, content); + } + + private FullBinaryMemcacheResponse newInstance(ByteBuf key, ByteBuf extras, ByteBuf content) { + DefaultFullBinaryMemcacheResponse newInstance = new DefaultFullBinaryMemcacheResponse(key, extras, content); + copyMeta(newInstance); + return newInstance; } }
diff --git a/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequestTest.java b/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequestTest.java new file mode 100644 index 00000000000..6b7e98502d9 --- /dev/null +++ b/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheRequestTest.java @@ -0,0 +1,99 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.handler.codec.memcache.binary; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.util.CharsetUtil; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotSame; + +public class DefaultFullBinaryMemcacheRequestTest { + + private DefaultFullBinaryMemcacheRequest request; + + @Before + public void setUp() { + request = new DefaultFullBinaryMemcacheRequest( + Unpooled.copiedBuffer("key", CharsetUtil.UTF_8), + Unpooled.wrappedBuffer(new byte[]{1, 3, 4, 9}), + Unpooled.copiedBuffer("some value", CharsetUtil.UTF_8)); + request.setReserved((short) 534); + request.setMagic((byte) 0x03); + request.setOpcode((byte) 0x02); + request.setKeyLength((short) 32); + request.setExtrasLength((byte) 34); + request.setDataType((byte) 43); + request.setTotalBodyLength(345); + request.setOpaque(3); + request.setCas(345345L); + } + + @Test + public void fullCopy() { + FullBinaryMemcacheRequest newInstance = request.copy(); + try { + assertCopy(request, request.content(), newInstance); + } finally { + request.release(); + newInstance.release(); + } + } + + @Test + public void fullDuplicate() { + FullBinaryMemcacheRequest newInstance = request.duplicate(); + try { + assertCopy(request, request.content(), newInstance); + } finally { + request.release(); + } + } + + @Test + public void fullReplace() { + ByteBuf newContent = Unpooled.copiedBuffer("new value", CharsetUtil.UTF_8); + FullBinaryMemcacheRequest newInstance = request.replace(newContent); + try { + assertCopy(request, newContent, newInstance); + } finally { + request.release(); + newInstance.release(); + } + } + + private void assertCopy(FullBinaryMemcacheRequest expected, ByteBuf expectedContent, + FullBinaryMemcacheRequest actual) { + assertNotSame(expected, actual); + + assertEquals(expected.key(), actual.key()); + assertEquals(expected.extras(), actual.extras()); + 
assertEquals(expectedContent, actual.content()); + + assertEquals(expected.reserved(), actual.reserved()); + assertEquals(expected.magic(), actual.magic()); + assertEquals(expected.opcode(), actual.opcode()); + assertEquals(expected.keyLength(), actual.keyLength()); + assertEquals(expected.extrasLength(), actual.extrasLength()); + assertEquals(expected.dataType(), actual.dataType()); + assertEquals(expected.totalBodyLength(), actual.totalBodyLength()); + assertEquals(expected.opaque(), actual.opaque()); + assertEquals(expected.cas(), actual.cas()); + } +} diff --git a/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponseTest.java b/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponseTest.java new file mode 100644 index 00000000000..f94989ba281 --- /dev/null +++ b/codec-memcache/src/test/java/io/netty/handler/codec/memcache/binary/DefaultFullBinaryMemcacheResponseTest.java @@ -0,0 +1,98 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ +package io.netty.handler.codec.memcache.binary; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; +import io.netty.util.CharsetUtil; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotSame; + +public class DefaultFullBinaryMemcacheResponseTest { + + private DefaultFullBinaryMemcacheResponse response; + + @Before + public void setUp() { + response = new DefaultFullBinaryMemcacheResponse( + Unpooled.copiedBuffer("key", CharsetUtil.UTF_8), + Unpooled.wrappedBuffer(new byte[]{1, 3, 4, 9}), + Unpooled.copiedBuffer("some value", CharsetUtil.UTF_8)); + response.setStatus((short) 1); + response.setMagic((byte) 0x03); + response.setOpcode((byte) 0x02); + response.setKeyLength((short) 32); + response.setExtrasLength((byte) 34); + response.setDataType((byte) 43); + response.setTotalBodyLength(345); + response.setOpaque(3); + response.setCas(345345L); + } + + @Test + public void fullCopy() { + FullBinaryMemcacheResponse newInstance = response.copy(); + try { + assertResponseEquals(response, response.content(), newInstance); + } finally { + response.release(); + newInstance.release(); + } + } + + @Test + public void fullDuplicate() { + try { + assertResponseEquals(response, response.content(), response.duplicate()); + } finally { + response.release(); + } + } + + @Test + public void fullReplace() { + ByteBuf newContent = Unpooled.copiedBuffer("new value", CharsetUtil.UTF_8); + FullBinaryMemcacheResponse newInstance = response.replace(newContent); + try { + assertResponseEquals(response, newContent, newInstance); + } finally { + response.release(); + newInstance.release(); + } + } + + private void assertResponseEquals(FullBinaryMemcacheResponse expected, ByteBuf expectedContent, + FullBinaryMemcacheResponse actual) { + assertNotSame(expected, actual); + + assertEquals(expected.key(), actual.key()); + assertEquals(expected.extras(), actual.extras()); + 
assertEquals(expectedContent, actual.content()); + + assertEquals(expected.status(), actual.status()); + assertEquals(expected.magic(), actual.magic()); + assertEquals(expected.opcode(), actual.opcode()); + assertEquals(expected.keyLength(), actual.keyLength()); + assertEquals(expected.extrasLength(), actual.extrasLength()); + assertEquals(expected.dataType(), actual.dataType()); + assertEquals(expected.totalBodyLength(), actual.totalBodyLength()); + assertEquals(expected.opaque(), actual.opaque()); + assertEquals(expected.cas(), actual.cas()); + } +}
test
val
"2019-05-22T09:23:09"
"2019-05-19T21:24:25Z"
fabienrenaud
val
netty/netty/9197_9202
netty/netty
netty/netty/9197
netty/netty/9202
[ "keyword_pr_to_issue" ]
e1a881fa2b420866e1987e2ad2bce18e0d9c0b94
ede7251ecb50b7b53a2fdd8a5152b2327e1ebf14
[ "I could provide a PR. You can use set Length instead of delete in String Builder btw ", "@SplotyCode please if you have time :)", "@SplotyCode Thanks for your suggestion, it's much better." ]
[ "please keep old \"for loop\" style to reduce GC. ", "Please keep using old \"for loop\" style to reduce GC" ]
"2019-05-30T17:49:30Z"
[]
The method toString() of MqttSubscribePayload throws ArrayIndexOutOfBoundsException
### Expected behavior
MqttSubscribePayload is printed correctly.

### Actual behavior
ArrayIndexOutOfBoundsException is thrown.

### Steps to reproduce
Construct a MqttSubscribePayload with no topicSubscriptions (the size of List<MqttTopicSubscription> is 0). Although an MqttSubscribePayload cannot be empty per the MQTT protocol, cases such as unit tests can still hit this situation, so the code below does not seem robust enough:

```
@Override
public String toString() {
    StringBuilder builder = new StringBuilder(StringUtil.simpleClassName(this)).append('[');
    for (int i = 0; i < topicSubscriptions.size() - 1; i++) {
        builder.append(topicSubscriptions.get(i)).append(", ");
    }
    builder.append(topicSubscriptions.get(topicSubscriptions.size() - 1));
    builder.append(']');
    return builder.toString();
}
```

Could we implement it like this instead?

```
@Override
public String toString() {
    StringBuilder builder = new StringBuilder(StringUtil.simpleClassName(this)).append('[');
    for (int i = 0; i <= topicSubscriptions.size() - 1; i++) {
        builder.append(topicSubscriptions.get(i)).append(", ");
    }
    if (builder.substring(builder.length() - 2).equals(", ")) {
        builder.delete(builder.length() - 2, builder.length());
    }
    builder.append(']');
    return builder.toString();
}
```

### Minimal yet complete reproducer code (or URL to code)
```
@Test
public void test_MqttSubscribePayload_toString() {
    List<MqttTopicSubscription> subscriptions = new ArrayList<>();
    MqttSubscribePayload payload = new MqttSubscribePayload(subscriptions);
    System.out.println(payload.toString());
}
```

### Netty version
4.1.32.Final

### JVM version (e.g. `java -version`)
java version "1.8.0_201" Java(TM) SE Runtime Environment (build 1.8.0_201-b09) Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

### OS version (e.g. `uname -a`)
windows
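The merged fix (per the hints above) avoids inspecting trailing characters: it appends the separator after every element, then trims the final `", "` with `setLength` only when the list is non-empty. A self-contained sketch of that joining logic — the class and method names here are illustrative, not Netty's:

```java
import java.util.List;

final class TopicListFormatter {
    private TopicListFormatter() { }

    // Joins items as "ClassName[a, b, c]". Safe for an empty list,
    // unlike the original loop, which unconditionally indexed
    // list.get(list.size() - 1) and threw on size 0.
    static String format(String className, List<String> items) {
        StringBuilder builder = new StringBuilder(className).append('[');
        for (int i = 0; i < items.size(); i++) {
            builder.append(items.get(i)).append(", ");
        }
        if (!items.isEmpty()) {
            builder.setLength(builder.length() - 2); // drop trailing ", "
        }
        return builder.append(']').toString();
    }
}
```

`setLength` simply truncates the builder in place, so this keeps the plain indexed loop (as requested in the review comments) while adding no per-iteration allocation or branching.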
[ "codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java", "codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java" ]
[ "codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java", "codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java" ]
[ "codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttConnectPayloadTest.java" ]
diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java index eb1b9c9c572..aa3a3244740 100644 --- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java +++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttSubscribePayload.java @@ -39,11 +39,12 @@ public List<MqttTopicSubscription> topicSubscriptions() { @Override public String toString() { StringBuilder builder = new StringBuilder(StringUtil.simpleClassName(this)).append('['); - for (int i = 0; i < topicSubscriptions.size() - 1; i++) { + for (int i = 0; i < topicSubscriptions.size(); i++) { builder.append(topicSubscriptions.get(i)).append(", "); } - builder.append(topicSubscriptions.get(topicSubscriptions.size() - 1)); - builder.append(']'); - return builder.toString(); + if (!topicSubscriptions.isEmpty()) { + builder.setLength(builder.length() - 2); + } + return builder.append(']').toString(); } } diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java index b032d12067a..9812bd9923c 100644 --- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java +++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttUnsubscribePayload.java @@ -39,11 +39,12 @@ public List<String> topics() { @Override public String toString() { StringBuilder builder = new StringBuilder(StringUtil.simpleClassName(this)).append('['); - for (int i = 0; i < topics.size() - 1; i++) { + for (int i = 0; i < topics.size(); i++) { builder.append("topicName = ").append(topics.get(i)).append(", "); } - builder.append("topicName = ").append(topics.get(topics.size() - 1)) - .append(']'); - return builder.toString(); + if (!topics.isEmpty()) { + builder.setLength(builder.length() - 2); + } + return builder.append("]").toString(); } }
diff --git a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttConnectPayloadTest.java b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttConnectPayloadTest.java index 5a929dc1e5e..f1d9dc0db10 100644 --- a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttConnectPayloadTest.java +++ b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttConnectPayloadTest.java @@ -21,6 +21,8 @@ import io.netty.util.CharsetUtil; import org.junit.Test; +import java.util.Collections; + public class MqttConnectPayloadTest { @Test @@ -88,4 +90,11 @@ public void testBuilderNullWillMessage() throws Exception { assertNull(mqttConnectPayload.willMessageInBytes()); assertNull(mqttConnectPayload.willMessage()); } + + /* See https://github.com/netty/netty/pull/9202 */ + @Test + public void testEmptyTopicsToString() { + new MqttSubscribePayload(Collections.<MqttTopicSubscription>emptyList()).toString(); + new MqttUnsubscribePayload(Collections.<String>emptyList()).toString(); + } }
train
val
"2019-05-27T16:05:40"
"2019-05-29T17:13:21Z"
xiangwangcheng
val
netty/netty/9205_9206
netty/netty
netty/netty/9205
netty/netty/9206
[ "keyword_pr_to_issue" ]
ac95ff8b631124f3e646a7550ea6b686cfcc092d
3c36ce6b5ca859f8939f09309e248e41bc3ef2bb
[ "@slandelle will you provide a PR ?", "FYI, it's currently very hard to workaround this issue by subclassing WebSocketClientHandshaker13: `expectedChallengeResponseString` is private, `WebSocketUtil` is package private.\r\n\r\nI would love to contribute a fix but I'd like your opinion:\r\n\r\n* is this too much of a corner case so we should move WebSocketClientHandshaker13's url computation into a protected method so user can subclass?\r\n* or should we add yet another constructor with a new boolean parameter to all WebSocketClientHandshakerXX classes and another factory method in WebSocketClientHandshakerFactory?", "@slandelle will it be a problem to always use the absolute url ?", "@normanmaurer Absolutely, most server implementations will rightfully blow up because of the malformed request. The only case you use absolute urls is with HTTP proxy over clear HTTP.", "hmm I see... I am not sure what is better (protected vs static method). WDYT ?", "I'd say to go with the additional parameter/constructors/factory for Netty 4.1.\r\n\r\nBut this really needs to be cleaned up for Netty 5: drop early specs, make single constructor package protected and introduce a Builder a la `DnsNameResolverBuilder`.\r\n\r\nWDYT?", "@slandelle sounds fine to me... " ]
[ "we use 4 spaces...", "can the constructors be package-private for now ?", "It's very hard to break parameters lists with IntelliJ because it displays parameter names... It would be great to have a maven plugin for formatting the code automatically.", "@normanmaurer Where can I find the formatting rules? I can find them neither in CONTRIBUTING.MD nor in https://github.com/netty/netty-build/blob/master/src/main/resources/io/netty/checkstyle.xml.", "odd indent", "same comment" ]
"2019-05-31T13:26:34Z"
[]
WebSocketClientHandshaker13 doesn't work with HTTP proxies over clear HTTP
### Expected behavior
When talking to an HTTP proxy over clear HTTP, a user agent should send requests with an **absolute** URL.

### Actual behavior
`WebSocketClientHandshaker13::newHandshakeRequest` generates a request with a [relative url](https://github.com/netty/netty/blob/netty-4.1.36.Final/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java#L164).

### Steps to reproduce
NA

### Minimal yet complete reproducer code (or URL to code)
NA

### Netty version
4.1.36.Final

### JVM version (e.g. `java -version`)
irrelevant

### OS version (e.g. `uname -a`)
irrelevant
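The fix in the PR below adds an `absoluteUpgradeUrl` flag and routes request-target computation through a single `upgradeUrl` helper. A standalone sketch of that selection logic using only `java.net.URI` — the method body mirrors the patch, while the enclosing class is illustrative:

```java
import java.net.URI;

final class UpgradeUrl {
    private UpgradeUrl() { }

    // Mirrors WebSocketClientHandshaker#upgradeUrl from the fix:
    // absolute form for HTTP-proxied clear-HTTP upgrades,
    // origin form (path plus optional query) otherwise.
    static String upgradeUrl(URI wsURL, boolean absoluteUpgradeUrl) {
        if (absoluteUpgradeUrl) {
            return wsURL.toString();
        }
        String path = wsURL.getRawPath();
        String query = wsURL.getRawQuery();
        if (query != null && !query.isEmpty()) {
            path = path + '?' + query;
        }
        // A URI like "ws://host" has an empty path; "/" is the valid request-target.
        return path == null || path.isEmpty() ? "/" : path;
    }
}
```

Keeping both forms behind one helper lets every handshaker version (00/07/08/13) opt into absolute URLs via a constructor flag without duplicating the path/query handling.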
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java index 79d499d583e..1ec598bf4a4 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java @@ -75,6 +75,8 @@ public abstract class WebSocketClientHandshaker { private final int maxFramePayloadLength; + private final boolean absoluteUpgradeUrl; + /** * Base constructor * @@ -115,12 +117,39 @@ protected WebSocketClientHandshaker(URI uri, WebSocketVersion version, String su protected WebSocketClientHandshaker(URI uri, WebSocketVersion version, String subprotocol, HttpHeaders customHeaders, int maxFramePayloadLength, long forceCloseTimeoutMillis) { + this(uri, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, false); + } + + /** + * Base constructor + * + * @param uri + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. 
+ * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + protected WebSocketClientHandshaker(URI uri, WebSocketVersion version, String subprotocol, + HttpHeaders customHeaders, int maxFramePayloadLength, + long forceCloseTimeoutMillis, boolean absoluteUpgradeUrl) { this.uri = uri; this.version = version; expectedSubprotocol = subprotocol; this.customHeaders = customHeaders; this.maxFramePayloadLength = maxFramePayloadLength; this.forceCloseTimeoutMillis = forceCloseTimeoutMillis; + this.absoluteUpgradeUrl = absoluteUpgradeUrl; } /** @@ -535,7 +564,11 @@ public void operationComplete(ChannelFuture future) throws Exception { /** * Return the constructed raw path for the give {@link URI}. 
*/ - static String rawPath(URI wsURL) { + protected String upgradeUrl(URI wsURL) { + if (absoluteUpgradeUrl) { + return wsURL.toString(); + } + String path = wsURL.getRawPath(); String query = wsURL.getRawQuery(); if (query != null && !query.isEmpty()) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java index 3b43bd3ae58..c41ba6f0542 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java @@ -88,7 +88,34 @@ public WebSocketClientHandshaker00(URI webSocketURL, WebSocketVersion version, S public WebSocketClientHandshaker00(URI webSocketURL, WebSocketVersion version, String subprotocol, HttpHeaders customHeaders, int maxFramePayloadLength, long forceCloseTimeoutMillis) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); + this(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, false); + } + + /** + * Creates a new instance with the specified destination WebSocket location and version to initiate. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. 
+ * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + WebSocketClientHandshaker00(URI webSocketURL, WebSocketVersion version, String subprotocol, + HttpHeaders customHeaders, int maxFramePayloadLength, + long forceCloseTimeoutMillis, boolean absoluteUpgradeUrl) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, + absoluteUpgradeUrl); } /** @@ -148,12 +175,10 @@ protected FullHttpRequest newHandshakeRequest() { System.arraycopy(key3, 0, challenge, 8, 8); expectedChallengeResponseBytes = Unpooled.wrappedBuffer(WebSocketUtil.md5(challenge)); - // Get path URI wsURL = uri(); - String path = rawPath(wsURL); // Format request - FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); + FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, upgradeUrl(wsURL)); HttpHeaders headers = request.headers(); if (customHeaders != null) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java index c10132989f1..b7f55d8dbe7 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java @@ -130,7 +130,45 @@ public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, S public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, String subprotocol, 
boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, performMasking, + allowMaskMismatch, forceCloseTimeoutMillis, false); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. + * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. 
+ * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis, + boolean absoluteUpgradeUrl) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, + absoluteUpgradeUrl); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -156,9 +194,7 @@ public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, S */ @Override protected FullHttpRequest newHandshakeRequest() { - // Get path URI wsURL = uri(); - String path = rawPath(wsURL); // Get 16 bit nonce and base 64 encode it byte[] nonce = WebSocketUtil.randomBytes(16); @@ -175,7 +211,7 @@ protected FullHttpRequest newHandshakeRequest() { } // Format request - FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); + FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, upgradeUrl(wsURL)); HttpHeaders headers = request.headers(); if (customHeaders != null) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java index 237d2f715ed..33f7d2b883d 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java @@ -132,7 +132,45 @@ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, S public WebSocketClientHandshaker08(URI webSocketURL, 
WebSocketVersion version, String subprotocol, boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, performMasking, + allowMaskMismatch, forceCloseTimeoutMillis, false); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. + * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. 
+ * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis, + boolean absoluteUpgradeUrl) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, + absoluteUpgradeUrl); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -158,9 +196,7 @@ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, S */ @Override protected FullHttpRequest newHandshakeRequest() { - // Get path URI wsURL = uri(); - String path = rawPath(wsURL); // Get 16 bit nonce and base 64 encode it byte[] nonce = WebSocketUtil.randomBytes(16); @@ -177,7 +213,7 @@ protected FullHttpRequest newHandshakeRequest() { } // Format request - FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); + FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, upgradeUrl(wsURL)); HttpHeaders headers = request.headers(); if (customHeaders != null) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java index a96683f2d11..b3cf60432cc 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java @@ -133,7 +133,45 @@ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, S boolean allowExtensions, HttpHeaders customHeaders, int 
maxFramePayloadLength, boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis) { - super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis); + this(webSocketURL, version, subprotocol, allowExtensions, customHeaders, maxFramePayloadLength, performMasking, + allowMaskMismatch, forceCloseTimeoutMillis, false); + } + + /** + * Creates a new instance. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". Subsequent web socket frames will be + * sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Map of custom headers to add to the client request + * @param maxFramePayloadLength + * Maximum length of a frame's payload + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. + * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted + * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified. 
+ * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, + long forceCloseTimeoutMillis, boolean absoluteUpgradeUrl) { + super(webSocketURL, version, subprotocol, customHeaders, maxFramePayloadLength, forceCloseTimeoutMillis, + absoluteUpgradeUrl); this.allowExtensions = allowExtensions; this.performMasking = performMasking; this.allowMaskMismatch = allowMaskMismatch; @@ -159,9 +197,7 @@ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, S */ @Override protected FullHttpRequest newHandshakeRequest() { - // Get path URI wsURL = uri(); - String path = rawPath(wsURL); // Get 16 bit nonce and base 64 encode it byte[] nonce = WebSocketUtil.randomBytes(16); @@ -178,7 +214,7 @@ protected FullHttpRequest newHandshakeRequest() { } // Format request - FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path); + FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, upgradeUrl(wsURL)); HttpHeaders headers = request.headers(); if (customHeaders != null) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java index 22afc3bd764..8038920ad84 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerFactory.java @@ -164,4 +164,62 @@ public static WebSocketClientHandshaker newHandshaker( throw new WebSocketHandshakeException("Protocol version " + 
version + " not supported."); } + + /** + * Creates a new handshaker. + * + * @param webSocketURL + * URL for web socket communications. e.g "ws://myhost.com/mypath". + * Subsequent web socket frames will be sent to this URL. + * @param version + * Version of web socket specification to use to connect to the server + * @param subprotocol + * Sub protocol request sent to the server. Null if no sub-protocol support is required. + * @param allowExtensions + * Allow extensions to be used in the reserved bits of the web socket frame + * @param customHeaders + * Custom HTTP headers to send during the handshake + * @param maxFramePayloadLength + * Maximum allowable frame payload length. Setting this value to your application's + * requirement may reduce denial of service attacks using long data frames. + * @param performMasking + * Whether to mask all written websocket frames. This must be set to true in order to be fully compatible + * with the websocket specifications. Client applications that communicate with a non-standard server + * which doesn't require masking might set this to false to achieve a higher performance. + * @param allowMaskMismatch + * When set to true, frames which are not masked properly according to the standard will still be + * accepted. 
+ * @param forceCloseTimeoutMillis + * Close the connection if it was not closed by the server after timeout specified + * @param absoluteUpgradeUrl + * Use an absolute url for the Upgrade request, typically when connecting through an HTTP proxy over + * clear HTTP + */ + public static WebSocketClientHandshaker newHandshaker( + URI webSocketURL, WebSocketVersion version, String subprotocol, + boolean allowExtensions, HttpHeaders customHeaders, int maxFramePayloadLength, + boolean performMasking, boolean allowMaskMismatch, long forceCloseTimeoutMillis, boolean absoluteUpgradeUrl) { + if (version == V13) { + return new WebSocketClientHandshaker13( + webSocketURL, V13, subprotocol, allowExtensions, customHeaders, + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis, absoluteUpgradeUrl); + } + if (version == V08) { + return new WebSocketClientHandshaker08( + webSocketURL, V08, subprotocol, allowExtensions, customHeaders, + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis, absoluteUpgradeUrl); + } + if (version == V07) { + return new WebSocketClientHandshaker07( + webSocketURL, V07, subprotocol, allowExtensions, customHeaders, + maxFramePayloadLength, performMasking, allowMaskMismatch, forceCloseTimeoutMillis, absoluteUpgradeUrl); + } + if (version == V00) { + return new WebSocketClientHandshaker00( + webSocketURL, V00, subprotocol, customHeaders, + maxFramePayloadLength, forceCloseTimeoutMillis, absoluteUpgradeUrl); + } + + throw new WebSocketHandshakeException("Protocol version " + version + " not supported."); + } }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java index 33c6ce6847e..9b0432a1999 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java @@ -22,8 +22,10 @@ public class WebSocketClientHandshaker00Test extends WebSocketClientHandshakerTest { @Override - protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers) { - return new WebSocketClientHandshaker00(uri, WebSocketVersion.V00, subprotocol, headers, 1024); + protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, + boolean absoluteUpgradeUrl) { + return new WebSocketClientHandshaker00(uri, WebSocketVersion.V00, subprotocol, headers, + 1024, 10000, absoluteUpgradeUrl); } @Override diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java index 9ff3e8485b9..01acaf92b51 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java @@ -22,8 +22,11 @@ public class WebSocketClientHandshaker07Test extends WebSocketClientHandshakerTest { @Override - protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers) { - return new WebSocketClientHandshaker07(uri, WebSocketVersion.V07, subprotocol, false, headers, 1024); + protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, + boolean absoluteUpgradeUrl) { + return new 
WebSocketClientHandshaker07(uri, WebSocketVersion.V07, subprotocol, false, headers, + 1024, true, false, 10000, + absoluteUpgradeUrl); } @Override diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java index 1efb6821b9b..79c6dd46bfd 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java @@ -21,7 +21,10 @@ public class WebSocketClientHandshaker08Test extends WebSocketClientHandshaker07Test { @Override - protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers) { - return new WebSocketClientHandshaker08(uri, WebSocketVersion.V08, subprotocol, false, headers, 1024); + protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, + boolean absoluteUpgradeUrl) { + return new WebSocketClientHandshaker08(uri, WebSocketVersion.V08, subprotocol, false, headers, + 1024, true, true, 10000, + absoluteUpgradeUrl); } } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java index 1727178831d..9a72e2feb1a 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java @@ -21,7 +21,10 @@ public class WebSocketClientHandshaker13Test extends WebSocketClientHandshaker07Test { @Override - protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers) { - return new WebSocketClientHandshaker13(uri, WebSocketVersion.V13, subprotocol, 
false, headers, 1024); + protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, + boolean absoluteUpgradeUrl) { + return new WebSocketClientHandshaker13(uri, WebSocketVersion.V13, subprotocol, false, headers, + 1024, true, true, 10000, + absoluteUpgradeUrl); } } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java index 5cb1e0e7b62..7e09031441d 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java @@ -40,10 +40,11 @@ import static org.junit.Assert.*; public abstract class WebSocketClientHandshakerTest { - protected abstract WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers); + protected abstract WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, + boolean absoluteUpgradeUrl); protected WebSocketClientHandshaker newHandshaker(URI uri) { - return newHandshaker(uri, null, null); + return newHandshaker(uri, null, null, false); } protected abstract CharSequence getOriginHeaderName(); @@ -180,7 +181,7 @@ protected void testHeaderDefaultHttp(String uri, CharSequence header, String exp @Test @SuppressWarnings("deprecation") - public void testRawPath() { + public void testUpgradeUrl() { URI uri = URI.create("ws://localhost:9999/path%20with%20ws"); WebSocketClientHandshaker handshaker = newHandshaker(uri); FullHttpRequest request = handshaker.newHandshakeRequest(); @@ -192,7 +193,7 @@ public void testRawPath() { } @Test - public void testRawPathWithQuery() { + public void testUpgradeUrlWithQuery() { URI uri = URI.create("ws://localhost:9999/path%20with%20ws?a=b%20c"); WebSocketClientHandshaker handshaker = newHandshaker(uri); 
FullHttpRequest request = handshaker.newHandshakeRequest(); @@ -203,6 +204,18 @@ public void testRawPathWithQuery() { } } + @Test + public void testAbsoluteUpgradeUrlWithQuery() { + URI uri = URI.create("ws://localhost:9999/path%20with%20ws?a=b%20c"); + WebSocketClientHandshaker handshaker = newHandshaker(uri, null, null, true); + FullHttpRequest request = handshaker.newHandshakeRequest(); + try { + assertEquals("ws://localhost:9999/path%20with%20ws?a=b%20c", request.uri()); + } finally { + request.release(); + } + } + @Test(timeout = 3000) public void testHttpResponseAndFrameInSameBuffer() { testHttpResponseAndFrameInSameBuffer(false); @@ -317,7 +330,7 @@ public void testDuplicateWebsocketHandshakeHeaders() { inputHeaders.add(getProtocolHeaderName(), bogusSubProtocol); String realSubProtocol = "realSubProtocol"; - WebSocketClientHandshaker handshaker = newHandshaker(uri, realSubProtocol, inputHeaders); + WebSocketClientHandshaker handshaker = newHandshaker(uri, realSubProtocol, inputHeaders, false); FullHttpRequest request = handshaker.newHandshakeRequest(); HttpHeaders outputHeaders = request.headers();
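The patch above switches the handshake request line from the raw path to `upgradeUrl(wsURL)`, gated by the new `absoluteUpgradeUrl` flag, so that a client connecting through a clear-HTTP proxy can send the full `ws://host/path?query` URL. The following is a hypothetical standalone sketch of that selection logic (not the actual Netty method, whose implementation lives in `WebSocketClientHandshaker`):

```java
import java.net.URI;

/**
 * Hypothetical sketch of the upgrade-URL selection introduced by the patch:
 * with absoluteUpgradeUrl=true the full ws:// URL goes into the request
 * line (useful behind clear-HTTP proxies); otherwise only the raw path
 * plus query string is sent, defaulting to "/" when the path is empty.
 */
public class UpgradeUrlSketch {
    static String upgradeUrl(URI wsURL, boolean absoluteUpgradeUrl) {
        if (absoluteUpgradeUrl) {
            return wsURL.toString();
        }
        String path = wsURL.getRawPath();
        path = path == null || path.isEmpty() ? "/" : path;
        String query = wsURL.getRawQuery();
        return query == null || query.isEmpty() ? path : path + '?' + query;
    }

    public static void main(String[] args) {
        URI uri = URI.create("ws://localhost:9999/path%20with%20ws?a=b%20c");
        System.out.println(upgradeUrl(uri, false)); // relative form
        System.out.println(upgradeUrl(uri, true));  // absolute form
    }
}
```

The expected strings match the assertions in `testUpgradeUrlWithQuery` and `testAbsoluteUpgradeUrlWithQuery` in the test patch above.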
train
val
"2019-06-07T22:51:25"
"2019-05-31T12:38:12Z"
slandelle
val
netty/netty/9208_9211
netty/netty
netty/netty/9208
netty/netty/9211
[ "keyword_issue_to_pr" ]
ec69da9afb8388c9ff7e25b2a6bc78c9bf91fb07
b91889c3db5f1ad90d2061df003bb95fe7510de4
[ "There is no `fireHandlerXXX` method in `ChannelHandlerContext`, only `fireChannelXXX` method. If you want to propagate handler Removed event, you can try `fireUserEventTriggered`.\r\n\r\n", "My understanding of channelReadComplete () may not be sufficient.\r\n\r\nIf it is a valid specification that channelReadComplete () is called when deleting a handler in port unification, close this issue.\r\nThe behavior was unexpected just for me.", "Thanks @lifeinwild, I think you're right that it would be better for the removal not to trigger this unless the handler is mid-read. I see that @normanmaurer has opened #9211 to address it.", "Yes I did fix it... closing" ]
[]
"2019-06-01T16:31:27Z"
[]
readComplete when handlerRemoved
### Expected behavior

dont call fireChannelReadComplete
call fireHandlerRemoved or nothing

### Actual behavior

call fireChannelReadComplete

https://github.com/netty/netty/blob/a33200ca38990b88315a48637e5ac5da398b100d/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java line 255

>ctx.fireChannelReadComplete();

### Steps to reproduce

handler is removed in port unification https://netty.io/4.1/xref/io/netty/example/portunification/PortUnificationServerHandler.html

>p.remove(this);

the removing handler has nothing to do with readComplete.

### Minimal yet complete reproducer code (or URL to code)

you can reproduce it anytime by removing handler.

### Netty version

4.1.31

### JVM version (e.g. `java -version`)

corretto 8

### OS version (e.g. `uname -a`)

win 10
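The fix that resolved this (see the gold patch in this record) moves `fireChannelReadComplete()` inside the branch that actually forwards leftover bytes, so removing an idle decoder fires nothing. A toy model of that control flow, using no Netty types, assuming a hypothetical event list in place of the real pipeline:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model (not Netty code) of ByteToMessageDecoder.handlerRemoved()
 * after the fix: channelReadComplete is only fired when leftover
 * cumulation bytes were actually forwarded as a channelRead.
 */
public class HandlerRemovedSketch {
    static List<String> handlerRemoved(int readableBytes) {
        List<String> events = new ArrayList<>();
        if (readableBytes > 0) {
            events.add("channelRead(" + readableBytes + " bytes)");
            events.add("channelReadComplete"); // only after a real read
        }
        // Before the fix, channelReadComplete fired here unconditionally,
        // surprising handlers removed outside of any read (e.g. port
        // unification's p.remove(this)).
        return events;
    }

    public static void main(String[] args) {
        System.out.println(handlerRemoved(0));
        System.out.println(handlerRemoved(16));
    }
}
```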
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java" ]
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java" ]
[]
diff --git a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java index bed1efc211c..0457a93abdc 100644 --- a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java @@ -241,18 +241,16 @@ public final void handlerRemoved(ChannelHandlerContext ctx) throws Exception { if (buf != null) { // Directly set this to null so we are sure we not access it in any other method here anymore. cumulation = null; - + numReads = 0; int readable = buf.readableBytes(); if (readable > 0) { ByteBuf bytes = buf.readBytes(readable); buf.release(); ctx.fireChannelRead(bytes); + ctx.fireChannelReadComplete(); } else { buf.release(); } - - numReads = 0; - ctx.fireChannelReadComplete(); } handlerRemoved0(ctx); }
null
train
val
"2019-05-31T07:04:03"
"2019-05-31T22:26:45Z"
lifeinwild
val
netty/netty/9105_9247
netty/netty
netty/netty/9105
netty/netty/9247
[ "keyword_issue_to_pr" ]
6381d0766ab8438b5702791cd826d79ade4adf52
c9aaa93d83b5b571dbc733d2632232db82b3d884
[ "@nitsanw FYI... ", "AFAIK the only misbehaviours could happen by using offer of a single-producer JCTools queue or just relaxedPoll instead of poll.\r\nThe first one seems not the case here, while on consumer side just using a mix of relaxedPoll()/drain() with poll()/isEmpty() is enough to guarantee strict empty recognition a-la CLQ.\r\nIMO just using poll() could be a simple fix that would force a JCTools queue to use the same semantic of CLQ::poll.", "@franz1981 I'm afraid I don't quite follow what you're saying, but we currently _do_ use `poll()` and this has the problems I describe.\r\n\r\nHowever, I have just taken another look at `wakeup()`, and since we _always_ perform a `compareAndSet`, we actually could use `relaxedPoll` to solve the busy-spin issue. However, we probably _shouldn't_ be doing an unconditional `compareAndSet`, since this is costly - it should ordinarily be guarded by a cheap test of its current value.", "@belliottsmith when you say we should guard you think of doing something like this:\r\n\r\n```\r\nif (!inEventLoop && !wakenUp.get() && wakenUp.compareAndSet(false, true)) {\r\n selector.wakeup();\r\n}\r\n```\r\n?", "@belliottsmith \r\n\r\nThe poll() always return null if there is no pending offering and the only (internal) spinning I'm aware of in JCTools is while awaiting the message slot to be filled by the producer (because JCTools's offer happen in 2 stages: producer sequence increment AND message slot fill).\r\nIf the event loop logic fall into a spinning i don't see how it can be related to JCTools unless:\r\n\r\n1. the queue used is single producer\r\n2. 
the event loop logic that perform the sleep/awake is not correct (assuming is using poll() as you've suggested)\r\n\r\nI think that the awake logic could use a different strategy too, that would avoid using compareAndSwap on offer side: https://github.com/JCTools/JCTools/blob/master/jctools-experimental/src/main/java/org/jctools/queues/blocking/ScParkTakeStrategy.java#L19\r\n\r\n\r\n", "@normanmaurer right", "@franz1981 the two stages are not atomic, and a thread can be suspended by the operating system in between these two steps\r\n\r\nalso, the EventLoop should definitely not be parking; it should only be voluntarily suspending its execution when entering `select()`", "@belliottsmith Correct: that means that you would likely to see a spin into the queue::poll call: that's the issue you're seeing? Because I cannot see how using a putVolatile while setting the message slot could make any difference...@nitsanw can confirm/reject it", "@belliottsmith I think I would even argue that we may not need the CAS at all and could just use a volatile. At worse we would call `wakeup` multiple times. That said maybe just add a `get` before is good enough as a failed CAS should be at least cheaper then an extra selector wakeup. WDYT ?", "@normanmaurer I think the get + CAS is probably safest for now, the problem being the negotiation between the consumer resetting it to `false` before going to sleep, and a producer seeing the `true` before this happens and failing to prevent it going to sleep. It _would_ be possible to avoid a CAS here, but it would require a bit more complex surgery and thought to modify all of the places we actually go to sleep. 
On modern chips a CAS isn't so costly anyway, and probably less costly than actually triggering the `wakeup` (though I don't know how actually costly this is, the code comments suggest it is costly) - it might be something to consider for 5.0 as part of wider refactors, but doesn't intuitively feel like it fits the risk/reward tradeoff for 4.x to me.", "@belliottsmith @normanmaurer I think this can be closed now that #9247 is merged?", "@njhill yes!", "@belliottsmith \"Specifically, it is not non-blocking, and can lead to the EventLoop busy-spinning waiting for a task that may not be provided promptly because the offering thread's execution has been suspended by the operating system.\" - I assume you are referring to the following:\r\nhttps://github.com/JCTools/JCTools/blob/58a08a14928ca966d55543d3e4f6f40a44a65518/jctools-core/src/main/java/org/jctools/queues/BaseMpscLinkedArrayQueue.java#L265\r\n\r\n```\r\n if (casProducerIndex(pIndex, pIndex + 2))\r\n {\r\n break;\r\n }\r\n }\r\n // INDEX visible before ELEMENT\r\n/* Producer thread is suspended here */\r\n final long offset = modifiedCalcElementOffset(pIndex, mask);\r\n soElement(buffer, offset, e); // release element e\r\n return true;\r\n```\r\n\r\nIf this happens the consumer thread will spin-wait for the element to appear in `poll()`:\r\n\r\n```\r\n Object e = lvElement(buffer, offset);// LoadLoad\r\n if (e == null)\r\n {\r\n if (index != lvProducerIndex())\r\n {\r\n/* index is visible, but element is not visible until the producer is able to make progress */\r\n do\r\n {\r\n e = lvElement(buffer, offset);\r\n }\r\n while (e == null);\r\n }\r\n else\r\n {\r\n return null;\r\n }\r\n }\r\n```\r\n\r\nThis makes the queue blocking. What is worse, successful `offer` calls from other producers cannot overtake the \"stuck producer\" and the consumer cannot attend to them (I've always thought of this as a bubble in the queue). 
The producers can make progress, but the consumer can't skip ahead and deq the items until the \"stuck producer\" releases it.\r\n\r\nIf this is the issue under discussion, I'm not sure what the following suggestion means: \"A queue identical to the JCTools one but using a putVolatile to set the item in the queue's backing array would suffice to avoid the busy-spin blocking of the EventLoop\"\r\n\r\nThe problem is that the index is visible before the element. Making the element \"more visible\" (by using a stronger barrier) will not make a difference. The index and the element are not atomically published (as @belliottsmith points out later). Using CLQ will provide better progress guarantees here AFAIK (IIRC the node is only published when fully initialized, so the \"bubble\" in the queue cannot happen) and I think the feature request is sensible.", "@belliottsmith @franz1981 reading through the discussion and the java doc I realize JCTools should do a better job documenting the progress guarantees on this and other queues. 
I've filed a bug: https://github.com/JCTools/JCTools/issues/259", "> If this is the issue under discussion, I'm not sure what the following suggestion means: \"A queue identical to the JCTools one but using a putVolatile to set the item in the queue's backing array would suffice to avoid the busy-spin blocking of the EventLoop\"\r\n\r\nIt means the `eventLoop` (by using `relaxedPoll()`) would at least be able to proceed with other work such as reading/writing to a socket, or could stop if it had no other work to do, freeing up a thread for the producer to complete its work.\r\n\r\nObviously a preferable solution is to use a truly non-blocking queue, but blocking the `eventLoop` is worse than just blocking the external work queue.", "I was referring in particular to: \"using a **putVolatile** to set the item in the queue's backing array would suffice to avoid the busy-spin blocking of the EventLoop\".\r\nI agree that using a `relaxedPoll` will return control to the EventLoop, giving you more options in how to utilize time until an element is visible. I don't see how `putVolatile` makes a difference here.\r\nThanks.", "Well, in combination with #9109 you might appreciate my reasoning. Long story short, it depends on the behaviour of the code that wraps it to provide blocking behaviour. `putVolatile` instead of `putOrdered` would permit negotiating if a wake-up is _not_ necessary with only `LoadLoad` barriers. 
However, _today_, Netty imposes a full CAS in each side of that transaction, so it wouldn't actually be necessary.\r\n\r\n(It's been a while since I had the context to think about this, so I may bow out of further discussion for now, as I don't really have time to re-research my answers/thoughts)", "Reading through the discussion, I understand you have some reservations regarding the semantics/guarantees of JCTools queues, but I can't see how they add up to anything actionable.\r\n\r\nIf you have a concern with a particular detail please file a bug with JCTools and I'll be happy to address it.\r\n\r\nMaking the element write a volatile write makes no difference to either ticket AFAICT. It imposes further ordering constraints on the offer, which are irrelevant to the offer and do not help the `poll` code where these tickets seems to focus. The `offer` method as a whole already has volatile write semantics (in this implementation) due to the successful CAS operation.", "I have no concerns about the semantics, that would imply I thought they were overall problematic. Every concurrent structure's semantics matter only in the context they're used.\r\n\r\nIMO, this project should be using a truly non-blocking queue, but that doesn't mean the algorithm employed by JCTools is of general concern. \r\n\r\n(I will concede that I consider the default spin-lock blocking behaviour to be a bug, for any real-time application, but that's a personal stance on the matter of unbounded user-space spin locks and priority inversion)", "\"IMO, this project should be using a truly non-blocking queue\" - The trade offs are between higher per-element costs (allocation + barriers) and the JCTools option which is lock-less but still risks blocking. The configurable queue impl solves this issue and offers choice, so I think this is solved.\r\n\r\n\"I consider the default spin-lock blocking behaviour to be a bug\" - Would you prefer to put a yield in the loop instead? 
If you have suggestion I'm happy to hear it.\r\n\r\n\r\n", "Given that a relaxed poll can return NULL either due to emptiness and a \"not yet\" published element I believe that it can be improved by make these 2 cases explicit and let a user to choose what to do while awaiting element publication, but it would mean an additional xxxPoll method: it could be useful, but most JCtools users love and uses it because qs are \"kinda\" replacement for j.u.c. queues and I'm not sure that Netty would love to be bounded to use MessagePassingQueue API instead of the more general java Queue one.", "I've been down enough of these rabbit holes to know any answer will only lead to another, so I hope you don't mind if I bow out for real for now. I only provided that qualifier as a post-script because I felt I had been dishonest by disclaiming _any_ concerns about JCTools in response to your claim I had expressed general concerns (which I do not feel I had, I was only discussing their semantics in the context of this project). JCTools is a great library, that you should be proud of; it would be surprising if nobody had any concerns or criticisms. Perhaps we will have some time to discuss them in person one day.", "\"JCTools is a great library, that you should be proud of\" - Thanks :-)\r\n\r\n\"it would be surprising if nobody had any concerns or criticisms.\" - Indeed, and I am keen to address these concerns where I can. So please, if possible, make time to submit issues if appropriate. Even if they remain unresolved they will be informing other users of the risks/rewards/tradeoffs/decision making etc.\r\n\r\nThanks for your time, I hope we have an opportunity to discuss in person in the near future." ]
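The two-stage offer discussed in this thread (CAS the producer index, then store the element) can be shown deterministically without threads. The sketch below is a toy model, not the JCTools implementation: after stage 1 the queue "looks" non-empty although the slot is still null, which is exactly where a strict `poll()` would spin, while a `relaxedPoll()` returns null and lets the event loop do other work.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

/**
 * Deterministic toy model of a two-stage MPSC offer: the producer first
 * advances the producer index, then stores the element. A producer
 * suspended between the two steps leaves a "bubble": index says
 * non-empty, slot is still null.
 */
public class TwoStageOfferSketch {
    final AtomicReferenceArray<Object> buffer = new AtomicReferenceArray<>(8);
    final AtomicLong producerIndex = new AtomicLong();
    long consumerIndex;

    void offerStage1() { producerIndex.incrementAndGet(); }     // index visible first
    void offerStage2(Object e) { buffer.set((int) (producerIndex.get() - 1), e); }

    /** relaxedPoll: a null slot just returns null, no spinning. */
    Object relaxedPoll() {
        Object e = buffer.get((int) consumerIndex);
        if (e != null) {
            consumerIndex++;
        }
        return e;
    }

    /** A strict poll would spin here while this is true but the slot is null. */
    boolean looksNonEmpty() { return producerIndex.get() != consumerIndex; }

    public static void main(String[] args) {
        TwoStageOfferSketch q = new TwoStageOfferSketch();
        q.offerStage1();                       // producer suspended mid-offer
        System.out.println(q.looksNonEmpty()); // index advanced, slot empty
        System.out.println(q.relaxedPoll());   // null: consumer can move on
        q.offerStage2("task");                 // producer resumes
        System.out.println(q.relaxedPoll());   // task now visible
    }
}
```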
[]
"2019-06-17T15:21:22Z"
[]
Support pluggable EventLoop task queue
The default task queue used by EPoll and NIO EventLoop implementations is provided by JCTools, and while it is efficient it has some undesirable properties. Specifically, it is not non-blocking, and can lead to the EventLoop busy-spinning waiting for a task that may not be provided promptly because the offering thread's execution has been suspended by the operating system. This is a rare scenario, but it will happen and some application maintainers might like to avoid it.

Unfortunately, there is no simple fix in the use of the queue; the `relaxedPoll()` method appears to be a candidate, but does not provide volatile visibility guarantees to a preceding `offer()`, meaning a wakeup could be missed (although we could write to a dummy volatile variable after offering to provide this). A queue identical to the JCTools one but using a `putVolatile` to set the item in the queue's backing array would suffice to avoid the busy-spin blocking of the EventLoop, but the issue of future tasks being unreachable until the thread completes would remain.

Ideally, the queue used by these event loops would be pluggable, so that users can choose their ideal point on the spectrum of tradeoffs. A simple replacement that guarantees progress is the JDK `ConcurrentLinkedQueue`, but at the cost of slightly more expensive `offer()` and `poll()`; the above mentioned modification to the JCTools `offer()` could be mixed with `relaxedPoll()` to simply avoid busy-spin blocking, and a custom Mpsc queue implementation could guarantee forward progress with only marginally higher cost. Permitting the application maintainer to pick their poison would be tremendously helpful.
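The resolving PR (the gold patch in this record) threads a `MultithreadEventLoopGroup.EventLoopTaskQueueFactory` through the event loop constructors. The sketch below defines a local stand-in for that factory interface (the real one lives in `io.netty.channel` and is not reproduced here) and shows the `ConcurrentLinkedQueue` choice the issue proposes: lock-free forward progress at the cost of per-offer node allocation.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Local stand-in for Netty's EventLoopTaskQueueFactory, illustrating how
 * an application could plug in a truly non-blocking queue. CLQ is
 * unbounded and lock-free: a producer suspended mid-offer can never make
 * the consuming event loop spin, unlike the default two-stage MPSC offer.
 */
public class ClqTaskQueueFactory {
    interface TaskQueueFactory {
        Queue<Runnable> newTaskQueue(int maxCapacity);
    }

    // maxCapacity is ignored: ConcurrentLinkedQueue is unbounded.
    static final TaskQueueFactory NON_BLOCKING =
            maxCapacity -> new ConcurrentLinkedQueue<>();

    public static void main(String[] args) {
        Queue<Runnable> q = NON_BLOCKING.newTaskQueue(1024);
        q.offer(() -> System.out.println("task ran"));
        Runnable task = q.poll();
        if (task != null) {
            task.run();
        }
    }
}
```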
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java", "transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java", "transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java" ]
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java", "transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java", "transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java", "transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java", "transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java", "transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java" ]
[ "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java", "transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java" ]
diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java index 92b648889e7..a7aee9a4c6b 100644 --- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java @@ -167,6 +167,17 @@ protected SingleThreadEventExecutor(EventExecutorGroup parent, Executor executor rejectedExecutionHandler = ObjectUtil.checkNotNull(rejectedHandler, "rejectedHandler"); } + protected SingleThreadEventExecutor(EventExecutorGroup parent, Executor executor, + boolean addTaskWakesUp, Queue<Runnable> taskQueue, + RejectedExecutionHandler rejectedHandler) { + super(parent); + this.addTaskWakesUp = addTaskWakesUp; + this.maxPendingTasks = DEFAULT_MAX_PENDING_EXECUTOR_TASKS; + this.executor = ThreadExecutorMap.apply(executor, this); + this.taskQueue = ObjectUtil.checkNotNull(taskQueue, "taskQueue"); + rejectedExecutionHandler = ObjectUtil.checkNotNull(rejectedHandler, "rejectedHandler"); + } + /** * @deprecated Please use and override {@link #newTaskQueue(int)}. 
*/ diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java index d99f4a5d48c..29d71d66a32 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java @@ -17,6 +17,7 @@ import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.MultithreadEventLoopGroup; import io.netty.channel.SelectStrategy; import io.netty.channel.SingleThreadEventLoop; import io.netty.channel.epoll.AbstractEpollChannel.AbstractEpollUnsafe; @@ -80,8 +81,10 @@ public int get() throws Exception { private static final long MAX_SCHEDULED_TIMERFD_NS = 999999999; EpollEventLoop(EventLoopGroup parent, Executor executor, int maxEvents, - SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler) { - super(parent, executor, false, DEFAULT_MAX_PENDING_TASKS, rejectedExecutionHandler); + SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler, + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + super(parent, executor, false, newTaskQueue(queueFactory), newTaskQueue(queueFactory), + rejectedExecutionHandler); selectStrategy = ObjectUtil.checkNotNull(strategy, "strategy"); if (maxEvents == 0) { allowGrowing = true; @@ -140,6 +143,14 @@ public int get() throws Exception { } } + private static Queue<Runnable> newTaskQueue( + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + if (queueFactory == null) { + return newTaskQueue0(DEFAULT_MAX_PENDING_TASKS); + } + return queueFactory.newTaskQueue(DEFAULT_MAX_PENDING_TASKS); + } + /** * Return a cleared {@link IovArray} that can be used for writes in this {@link EventLoop}. 
*/ @@ -217,9 +228,13 @@ void remove(AbstractEpollChannel ch) throws IOException { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { + return newTaskQueue0(maxPendingTasks); + } + + private static Queue<Runnable> newTaskQueue0(int maxPendingTasks) { // This event loop never calls takeTask() return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.<Runnable>newMpscQueue() - : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } /** diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java index acb4212fe97..1a383a78c35 100644 --- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java +++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java @@ -119,6 +119,13 @@ public EpollEventLoopGroup(int nThreads, Executor executor, EventExecutorChooser super(nThreads, executor, chooserFactory, 0, selectStrategyFactory, rejectedExecutionHandler); } + public EpollEventLoopGroup(int nThreads, Executor executor, EventExecutorChooserFactory chooserFactory, + SelectStrategyFactory selectStrategyFactory, + RejectedExecutionHandler rejectedExecutionHandler, + EventLoopTaskQueueFactory queueFactory) { + super(nThreads, executor, chooserFactory, 0, selectStrategyFactory, rejectedExecutionHandler, queueFactory); + } + /** * Sets the percentage of the desired amount of time spent for I/O in the child event loops. The default value is * {@code 50}, which means the event loop will try to spend the same amount of time for I/O as for non-I/O tasks. @@ -131,7 +138,9 @@ public void setIoRatio(int ioRatio) { @Override protected EventLoop newChild(Executor executor, Object... args) throws Exception { + EventLoopTaskQueueFactory queueFactory = args.length == 4 ? 
(EventLoopTaskQueueFactory) args[3] : null; return new EpollEventLoop(this, executor, (Integer) args[0], - ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]); + ((SelectStrategyFactory) args[1]).newSelectStrategy(), + (RejectedExecutionHandler) args[2], queueFactory); } } diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java index a968118a671..567dd67c315 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoop.java @@ -17,6 +17,7 @@ import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.MultithreadEventLoopGroup; import io.netty.channel.SelectStrategy; import io.netty.channel.SingleThreadEventLoop; import io.netty.channel.kqueue.AbstractKQueueChannel.AbstractKQueueUnsafe; @@ -71,8 +72,10 @@ public int get() throws Exception { private volatile int ioRatio = 50; KQueueEventLoop(EventLoopGroup parent, Executor executor, int maxEvents, - SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler) { - super(parent, executor, false, DEFAULT_MAX_PENDING_TASKS, rejectedExecutionHandler); + SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler, + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + super(parent, executor, false, newTaskQueue(queueFactory), newTaskQueue(queueFactory), + rejectedExecutionHandler); selectStrategy = ObjectUtil.checkNotNull(strategy, "strategy"); this.kqueueFd = Native.newKQueue(); if (maxEvents == 0) { @@ -90,6 +93,14 @@ public int get() throws Exception { } } + private static Queue<Runnable> newTaskQueue( + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + if (queueFactory == null) { + return newTaskQueue0(DEFAULT_MAX_PENDING_TASKS); + } + 
return queueFactory.newTaskQueue(DEFAULT_MAX_PENDING_TASKS); + } + void add(AbstractKQueueChannel ch) { assert inEventLoop(); AbstractKQueueChannel old = channels.put(ch.fd().intValue(), ch); @@ -305,9 +316,13 @@ protected void run() { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { + return newTaskQueue0(maxPendingTasks); + } + + private static Queue<Runnable> newTaskQueue0(int maxPendingTasks) { // This event loop never calls takeTask() return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.<Runnable>newMpscQueue() - : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } /** diff --git a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java index fe32aa5a7b9..77325c4c235 100644 --- a/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java +++ b/transport-native-kqueue/src/main/java/io/netty/channel/kqueue/KQueueEventLoopGroup.java @@ -116,6 +116,14 @@ public KQueueEventLoopGroup(int nThreads, Executor executor, EventExecutorChoose super(nThreads, executor, chooserFactory, 0, selectStrategyFactory, rejectedExecutionHandler); } + public KQueueEventLoopGroup(int nThreads, Executor executor, EventExecutorChooserFactory chooserFactory, + SelectStrategyFactory selectStrategyFactory, + RejectedExecutionHandler rejectedExecutionHandler, + EventLoopTaskQueueFactory queueFactory) { + super(nThreads, executor, chooserFactory, 0, selectStrategyFactory, + rejectedExecutionHandler, queueFactory); + } + /** * Sets the percentage of the desired amount of time spent for I/O in the child event loops. The default value is * {@code 50}, which means the event loop will try to spend the same amount of time for I/O as for non-I/O tasks. 
@@ -128,7 +136,10 @@ public void setIoRatio(int ioRatio) { @Override protected EventLoop newChild(Executor executor, Object... args) throws Exception { + EventLoopTaskQueueFactory queueFactory = args.length == 4 ? (EventLoopTaskQueueFactory) args[3] : null; + return new KQueueEventLoop(this, executor, (Integer) args[0], - ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]); + ((SelectStrategyFactory) args[1]).newSelectStrategy(), + (RejectedExecutionHandler) args[2], queueFactory); } } diff --git a/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java b/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java index a9bc23dfe96..ee322184719 100644 --- a/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java @@ -23,6 +23,7 @@ import io.netty.util.internal.logging.InternalLogger; import io.netty.util.internal.logging.InternalLoggerFactory; +import java.util.Queue; import java.util.concurrent.Executor; import java.util.concurrent.ThreadFactory; @@ -96,4 +97,21 @@ public ChannelFuture register(ChannelPromise promise) { public ChannelFuture register(Channel channel, ChannelPromise promise) { return next().register(channel, promise); } + + /** + * Factory used to create {@link Queue} instances that will be used to store tasks for an {@link EventLoop}. + * + * Generally speaking the returned {@link Queue} MUST be thread-safe and depending on the {@link EventLoop} + * implementation must be of type {@link java.util.concurrent.BlockingQueue}. + */ + public interface EventLoopTaskQueueFactory { + + /** + * Returns a new {@link Queue} to use. + * @param maxCapacity the maximum amount of elements that can be stored in the {@link Queue} at a given point + * in time. + * @return the new queue. 
+ */ + Queue<Runnable> newTaskQueue(int maxCapacity); + } } diff --git a/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java b/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java index 1fe2d3fe2b5..9abb39fc389 100644 --- a/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java +++ b/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java @@ -59,6 +59,13 @@ protected SingleThreadEventLoop(EventLoopGroup parent, Executor executor, tailTasks = newTaskQueue(maxPendingTasks); } + protected SingleThreadEventLoop(EventLoopGroup parent, Executor executor, + boolean addTaskWakesUp, Queue<Runnable> taskQueue, Queue<Runnable> tailTaskQueue, + RejectedExecutionHandler rejectedExecutionHandler) { + super(parent, executor, addTaskWakesUp, taskQueue, rejectedExecutionHandler); + tailTasks = ObjectUtil.checkNotNull(tailTaskQueue, "tailTaskQueue"); + } + @Override public EventLoopGroup parent() { return (EventLoopGroup) super.parent(); diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index bb9a1e25cf6..edd41a57905 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -19,6 +19,7 @@ import io.netty.channel.ChannelException; import io.netty.channel.EventLoop; import io.netty.channel.EventLoopException; +import io.netty.channel.MultithreadEventLoopGroup; import io.netty.channel.SelectStrategy; import io.netty.channel.SingleThreadEventLoop; import io.netty.util.IntSupplier; @@ -131,8 +132,10 @@ public Void run() { private boolean needsToSelectAgain; NioEventLoop(NioEventLoopGroup parent, Executor executor, SelectorProvider selectorProvider, - SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler) { - super(parent, executor, false, DEFAULT_MAX_PENDING_TASKS, rejectedExecutionHandler); + SelectStrategy strategy, 
RejectedExecutionHandler rejectedExecutionHandler, + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + super(parent, executor, false, newTaskQueue(queueFactory), newTaskQueue(queueFactory), + rejectedExecutionHandler); if (selectorProvider == null) { throw new NullPointerException("selectorProvider"); } @@ -146,6 +149,14 @@ public Void run() { selectStrategy = strategy; } + private static Queue<Runnable> newTaskQueue( + MultithreadEventLoopGroup.EventLoopTaskQueueFactory queueFactory) { + if (queueFactory == null) { + return newTaskQueue0(DEFAULT_MAX_PENDING_TASKS); + } + return queueFactory.newTaskQueue(DEFAULT_MAX_PENDING_TASKS); + } + private static final class SelectorTuple { final Selector unwrappedSelector; final Selector selector; @@ -265,9 +276,13 @@ public SelectorProvider selectorProvider() { @Override protected Queue<Runnable> newTaskQueue(int maxPendingTasks) { + return newTaskQueue0(maxPendingTasks); + } + + private static Queue<Runnable> newTaskQueue0(int maxPendingTasks) { // This event loop never calls takeTask() return maxPendingTasks == Integer.MAX_VALUE ? 
PlatformDependent.<Runnable>newMpscQueue() - : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); + : PlatformDependent.<Runnable>newMpscQueue(maxPendingTasks); } /** diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java index 833b7549785..8598e010439 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoopGroup.java @@ -101,6 +101,15 @@ public NioEventLoopGroup(int nThreads, Executor executor, EventExecutorChooserFa super(nThreads, executor, chooserFactory, selectorProvider, selectStrategyFactory, rejectedExecutionHandler); } + public NioEventLoopGroup(int nThreads, Executor executor, EventExecutorChooserFactory chooserFactory, + final SelectorProvider selectorProvider, + final SelectStrategyFactory selectStrategyFactory, + final RejectedExecutionHandler rejectedExecutionHandler, + final EventLoopTaskQueueFactory taskQueueFactory) { + super(nThreads, executor, chooserFactory, selectorProvider, selectStrategyFactory, + rejectedExecutionHandler, taskQueueFactory); + } + /** * Sets the percentage of the desired amount of time spent for I/O in the child event loops. The default value is * {@code 50}, which means the event loop will try to spend the same amount of time for I/O as for non-I/O tasks. @@ -123,7 +132,8 @@ public void rebuildSelectors() { @Override protected EventLoop newChild(Executor executor, Object... args) throws Exception { + EventLoopTaskQueueFactory queueFactory = args.length == 4 ? (EventLoopTaskQueueFactory) args[3] : null; return new NioEventLoop(this, executor, (SelectorProvider) args[0], - ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]); + ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2], queueFactory); } }
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java index f250c272e64..71c56b6d863 100644 --- a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java @@ -51,7 +51,7 @@ public void testScheduleBigDelayNotOverflow() { final EventLoopGroup group = new EpollEventLoop(null, new ThreadPerTaskExecutor(new DefaultThreadFactory(getClass())), 0, - DefaultSelectStrategyFactory.INSTANCE.newSelectStrategy(), RejectedExecutionHandlers.reject()) { + DefaultSelectStrategyFactory.INSTANCE.newSelectStrategy(), RejectedExecutionHandlers.reject(), null) { @Override void handleLoopException(Throwable t) { capture.set(t); diff --git a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java index 7db891afc62..99c949b0600 100644 --- a/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java +++ b/transport/src/test/java/io/netty/channel/nio/NioEventLoopTest.java @@ -17,15 +17,20 @@ import io.netty.channel.AbstractEventLoopTest; import io.netty.channel.Channel; +import io.netty.channel.DefaultSelectStrategyFactory; import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.MultithreadEventLoopGroup; import io.netty.channel.SelectStrategy; import io.netty.channel.SelectStrategyFactory; import io.netty.channel.socket.ServerSocketChannel; import io.netty.channel.socket.nio.NioServerSocketChannel; import io.netty.util.IntSupplier; +import io.netty.util.concurrent.DefaultEventExecutorChooserFactory; import io.netty.util.concurrent.DefaultThreadFactory; import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.RejectedExecutionHandlers; +import io.netty.util.concurrent.ThreadPerTaskExecutor; import 
org.hamcrest.core.IsInstanceOf; import org.junit.Ignore; import org.junit.Test; @@ -36,9 +41,12 @@ import java.nio.channels.Selector; import java.nio.channels.SocketChannel; import java.nio.channels.spi.SelectorProvider; +import java.util.Queue; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import static org.junit.Assert.*; @@ -281,4 +289,35 @@ public void testChannelsRegistered() { group.shutdownGracefully(); } } + + @Test + public void testCustomQueue() { + final AtomicBoolean called = new AtomicBoolean(); + NioEventLoopGroup group = new NioEventLoopGroup(1, + new ThreadPerTaskExecutor(new DefaultThreadFactory(NioEventLoopGroup.class)), + DefaultEventExecutorChooserFactory.INSTANCE, SelectorProvider.provider(), + DefaultSelectStrategyFactory.INSTANCE, RejectedExecutionHandlers.reject(), + new MultithreadEventLoopGroup.EventLoopTaskQueueFactory() { + @Override + public Queue<Runnable> newTaskQueue(int maxCapacity) { + called.set(true); + return new LinkedBlockingQueue<Runnable>(maxCapacity); + } + }); + + final NioEventLoop loop = (NioEventLoop) group.next(); + + try { + loop.submit(new Runnable() { + @Override + public void run() { + // NOOP. + } + }).syncUninterruptibly(); + assertTrue(called.get()); + } finally { + group.shutdownGracefully(); + } + } + }
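The gold patch above centers on a new `EventLoopTaskQueueFactory` callback for supplying the event loop's task queue. As a quick illustration, here is a self-contained sketch of that interface (its shape is taken from the patch; the enclosing class name is hypothetical) with the same `LinkedBlockingQueue`-backed implementation that the patched `testCustomQueue` test uses:

```java
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

class TaskQueueSketch {
    // Interface shape as added to MultithreadEventLoopGroup by the patch.
    interface EventLoopTaskQueueFactory {
        Queue<Runnable> newTaskQueue(int maxCapacity);
    }

    // Bounded blocking queue, mirroring the factory passed in testCustomQueue.
    static final EventLoopTaskQueueFactory BLOCKING = new EventLoopTaskQueueFactory() {
        @Override
        public Queue<Runnable> newTaskQueue(int maxCapacity) {
            return new LinkedBlockingQueue<Runnable>(maxCapacity);
        }
    };

    public static void main(String[] args) {
        Queue<Runnable> queue = BLOCKING.newTaskQueue(2);
        if (!queue.isEmpty()) {
            throw new AssertionError("new task queue should start empty");
        }
        System.out.println("ok");
    }
}
```

When no factory is supplied, the patch falls back to the existing MPSC queue via `newTaskQueue0`, so passing `null` keeps the previous behavior.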
train
val
"2019-06-19T20:50:27"
"2019-04-29T12:00:47Z"
belliottsmith
val
netty/netty/9201_9250
netty/netty
netty/netty/9201
netty/netty/9250
[ "keyword_pr_to_issue" ]
6381d0766ab8438b5702791cd826d79ade4adf52
712077cdef050bc9aa211d301a1bdc3e763b6097
[ "@shevah yes this sounds like a bug... Would you be willing to create a PR to fix it ?", "Yes" ]
[ "imho we should call `retain` as last method to ensure we can not leak ", "`toString(...)` will never return null so this does not look correct. Maybe it should check if the buffer is empty ?", "why not an `ArrayList` to reduce the GC ?", "why not an `ArrayList` to reduce the GC ?", "why not an `ArrayList` to reduce the GC ?", "Should return `HAProxyMessage `.", "Should return `HAProxyMessage `.", "Done :-)", "Done", "Done", "Done", "Done", "Done", "Done", "imho we should not extend `DefaultByteBufHolder` but `AbstractReferenceCounted`. There is no need to hold a reference to the \"original\" `ByteBuf` here.", "I like this idea, I think if `HAProxyMessage` inherits `AbstractReferenceCounted`, then maybe we can make the implementation a bit simpler. Just overwrite the `AbstractReferenceCounted.deallocate()` method and release the TLV list in this method, like this:\r\n```java\r\nclass HAProxyMessage extends AbstractReferenceCounted {\r\n @Override\r\n public HAProxyMessage retain() {\r\n return (HAProxyMessage) super.retain();\r\n }\r\n // overwrite other methods...\r\n\r\n @Override\r\n protected void deallocate() {\r\n for (HAProxyTLV tlv : this.tlvs){\r\n tlv.release();\r\n }\r\n }\r\n}\r\n```\r\nIn this case, the TLV list will be released together when `HAProxyMessage` is released, but when `HAProxyMessage refCnt` is increased, the TLV `refCnt` will not increase with `HAProxyMessage`.\r\n\r\n@normanmaurer What do you think of this? Will it cause other additional problems?\r\n\r\nBy the way, I found that this is also the case in DNSMessage:\r\nhttps://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-dns/src/main/java/io/netty/handler/codec/dns/AbstractDnsMessage.java#L378-L388", "@qeesung good question...Let me think about it but in general I would say it is fine.", "@normanmaurer It's done in this way. 
Could you please review it again :-)?", "I think you also want to call `leak.record` in the other `touch`, `retain` and `release` methods.", "put all of this in a finally block, just in case.", "Done", "Done", "you can move this to a method and re-use....\r\n\r\n\r\n```\r\nprivate void tryRecord() {\r\n if (leak != null) {\r\n leak.record();\r\n }\r\n}\r\n```", "Done!", "for this you need to duplicate the code as you should pass down the `hint`.", "I am sorry for my carelessness. Done!", "nit... you can move this out of this block and so remove the same code above.", "Done" ]
"2019-06-17T16:20:31Z"
[]
HAProxyMessage memory leak due to unreleased TLVs
`HAProxyMessage` isn't a `ReferenceCounted` object. A component consuming an `HAProxyMessage` object and not passing it further might not release it, while it should be released as it contains a list of TLVs created in the static function `readNextTLV`: https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java#L235-L241 Notice that this function retains a slice of the original header ByteBuf increasing its refCnt and causing a possible memory leak if one forgets to release them explicitly: https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java#L296 I would expect `HAProxyMessage` to implement `ReferenceCounted` and release the TLVs in a similar fashion to: `haProxyMessage.tlvs().forEach(tlv -> tlv.release());` We have found this issue only after we had leaks. Furthermore, it seems that slices are purposely retained as seen in Netty tests, am I missing something? https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java#L642-L650 ### Netty version 4.1.36
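The fix that was eventually merged (the gold patch below) makes `HAProxyMessage` extend `AbstractReferenceCounted` and releases the TLVs in `deallocate()`. A minimal, dependency-free sketch of that release-cascade pattern (class names here are illustrative stand-ins, not Netty's API):

```java
import java.util.ArrayList;
import java.util.List;

class RefCountCascadeSketch {
    // Stand-in for a retained HAProxyTLV slice.
    static class Tlv {
        int refCnt = 1;
        void release() { refCnt--; }
    }

    // Stand-in for HAProxyMessage: releasing the message releases its TLVs.
    static class Message {
        final List<Tlv> tlvs = new ArrayList<Tlv>();
        int refCnt = 1;

        void release() {
            if (--refCnt == 0) {
                deallocate();
            }
        }

        // Mirrors the deallocate() override in the merged fix.
        void deallocate() {
            for (Tlv tlv : tlvs) {
                tlv.release();
            }
        }
    }

    public static boolean demo() {
        Message msg = new Message();
        msg.tlvs.add(new Tlv());
        msg.tlvs.add(new Tlv());
        msg.release();
        for (Tlv tlv : msg.tlvs) {
            if (tlv.refCnt != 0) {
                return false;
            }
        }
        return msg.refCnt == 0;
    }

    public static void main(String[] args) {
        if (!demo()) {
            throw new AssertionError("TLVs must reach refCnt 0 with the message");
        }
        System.out.println("ok");
    }
}
```

Note the trade-off discussed in the PR comments: retaining the message does not retain the TLVs; only the final release cascades down to them.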
[ "codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java" ]
[ "codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java" ]
[ "codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java" ]
diff --git a/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java b/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java index b40bf42df26..bc623855915 100644 --- a/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java +++ b/codec-haproxy/src/main/java/io/netty/handler/codec/haproxy/HAProxyMessage.java @@ -17,9 +17,13 @@ import io.netty.buffer.ByteBuf; import io.netty.handler.codec.haproxy.HAProxyProxiedProtocol.AddressFamily; +import io.netty.util.AbstractReferenceCounted; import io.netty.util.ByteProcessor; import io.netty.util.CharsetUtil; import io.netty.util.NetUtil; +import io.netty.util.ResourceLeakDetector; +import io.netty.util.ResourceLeakDetectorFactory; +import io.netty.util.ResourceLeakTracker; import java.util.ArrayList; import java.util.Collections; @@ -28,29 +32,11 @@ /** * Message container for decoded HAProxy proxy protocol parameters */ -public final class HAProxyMessage { - - /** - * Version 1 proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is - * 'UNKNOWN' we must discard all other header values. - */ - private static final HAProxyMessage V1_UNKNOWN_MSG = new HAProxyMessage( - HAProxyProtocolVersion.V1, HAProxyCommand.PROXY, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0); - - /** - * Version 2 proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is - * 'UNKNOWN' we must discard all other header values. - */ - private static final HAProxyMessage V2_UNKNOWN_MSG = new HAProxyMessage( - HAProxyProtocolVersion.V2, HAProxyCommand.PROXY, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0); - - /** - * Version 2 proxy protocol message for local requests. Per spec, we should use an unspecified protocol and family - * for 'LOCAL' commands. Per spec, when the proxied protocol is 'UNKNOWN' we must discard all other header values. 
- */ - private static final HAProxyMessage V2_LOCAL_MSG = new HAProxyMessage( - HAProxyProtocolVersion.V2, HAProxyCommand.LOCAL, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0); +public final class HAProxyMessage extends AbstractReferenceCounted { + private static final ResourceLeakDetector<HAProxyMessage> leakDetector = + ResourceLeakDetectorFactory.instance().newResourceLeakDetector(HAProxyMessage.class); + private final ResourceLeakTracker<HAProxyMessage> leak; private final HAProxyProtocolVersion protocolVersion; private final HAProxyCommand command; private final HAProxyProxiedProtocol proxiedProtocol; @@ -108,6 +94,8 @@ private HAProxyMessage( this.sourcePort = sourcePort; this.destinationPort = destinationPort; this.tlvs = Collections.unmodifiableList(tlvs); + + leak = leakDetector.track(this); } /** @@ -150,7 +138,7 @@ static HAProxyMessage decodeHeader(ByteBuf header) { } if (cmd == HAProxyCommand.LOCAL) { - return V2_LOCAL_MSG; + return unknownMsg(HAProxyProtocolVersion.V2, HAProxyCommand.LOCAL); } // Per spec, the 14th byte is the protocol and address family byte @@ -162,7 +150,7 @@ static HAProxyMessage decodeHeader(ByteBuf header) { } if (protAndFam == HAProxyProxiedProtocol.UNKNOWN) { - return V2_UNKNOWN_MSG; + return unknownMsg(HAProxyProtocolVersion.V2, HAProxyCommand.PROXY); } int addressInfoLen = header.readUnsignedShort(); @@ -337,7 +325,7 @@ static HAProxyMessage decodeHeader(String header) { } if (protAndFam == HAProxyProxiedProtocol.UNKNOWN) { - return V1_UNKNOWN_MSG; + return unknownMsg(HAProxyProtocolVersion.V1, HAProxyCommand.PROXY); } if (numParts != 6) { @@ -349,6 +337,14 @@ static HAProxyMessage decodeHeader(String header) { protAndFam, parts[2], parts[3], parts[4], parts[5]); } + /** + * Proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is + * 'UNKNOWN' we must discard all other header values. 
+ */ + private static HAProxyMessage unknownMsg(HAProxyProtocolVersion version, HAProxyCommand command) { + return new HAProxyMessage(version, command, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0); + } + /** * Convert ip address bytes to string representation * @@ -358,31 +354,20 @@ static HAProxyMessage decodeHeader(String header) { */ private static String ipBytesToString(ByteBuf header, int addressLen) { StringBuilder sb = new StringBuilder(); - if (addressLen == 4) { - sb.append(header.readByte() & 0xff); - sb.append('.'); - sb.append(header.readByte() & 0xff); - sb.append('.'); - sb.append(header.readByte() & 0xff); - sb.append('.'); - sb.append(header.readByte() & 0xff); + final int ipv4Len = 4; + final int ipv6Len = 8; + if (addressLen == ipv4Len) { + for (int i = 0; i < ipv4Len; i++) { + sb.append(header.readByte() & 0xff); + sb.append('.'); + } } else { - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); - sb.append(':'); - sb.append(Integer.toHexString(header.readUnsignedShort())); + for (int i = 0; i < ipv6Len; i++) { + sb.append(Integer.toHexString(header.readUnsignedShort())); + sb.append(':'); + } } + sb.setLength(sb.length() - 1); return sb.toString(); } @@ -519,4 +504,63 @@ public int destinationPort() { public List<HAProxyTLV> tlvs() { return tlvs; } + + @Override + public HAProxyMessage touch() { + tryRecord(); + return (HAProxyMessage) super.touch(); + } + + @Override + public HAProxyMessage touch(Object hint) { + if (leak != null) { + leak.record(hint); + } + 
return this; + } + + @Override + public HAProxyMessage retain() { + tryRecord(); + return (HAProxyMessage) super.retain(); + } + + @Override + public HAProxyMessage retain(int increment) { + tryRecord(); + return (HAProxyMessage) super.retain(increment); + } + + @Override + public boolean release() { + tryRecord(); + return super.release(); + } + + @Override + public boolean release(int decrement) { + tryRecord(); + return super.release(decrement); + } + + private void tryRecord() { + if (leak != null) { + leak.record(); + } + } + + @Override + protected void deallocate() { + try { + for (HAProxyTLV tlv : tlvs) { + tlv.release(); + } + } finally { + final ResourceLeakTracker<HAProxyMessage> leak = this.leak; + if (leak != null) { + boolean closed = leak.close(this); + assert closed; + } + } + } }
diff --git a/codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java b/codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java index 2d4039de3d0..02ff285565c 100644 --- a/codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java +++ b/codec-haproxy/src/test/java/io/netty/handler/codec/haproxy/HAProxyMessageDecoderTest.java @@ -58,6 +58,7 @@ public void testIPV4Decode() { assertEquals(443, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -78,6 +79,7 @@ public void testIPV6Decode() { assertEquals(443, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -98,6 +100,7 @@ public void testUnknownProtocolDecode() { assertEquals(0, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test(expected = HAProxyProtocolException.class) @@ -264,6 +267,7 @@ public void testV2IPV4Decode() { assertEquals(443, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -319,6 +323,7 @@ public void testV2UDPDecode() { assertEquals(443, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -398,6 +403,7 @@ public void testv2IPV6Decode() { assertEquals(443, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -476,6 +482,7 @@ public void testv2UnixDecode() { assertEquals(0, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -531,6 +538,7 @@ public void testV2LocalProtocolDecode() { assertEquals(0, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -586,6 +594,7 @@ 
public void testV2UnknownProtocolDecode() { assertEquals(0, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test @@ -642,9 +651,7 @@ public void testV2WithSslTLVs() throws Exception { assertTrue(0 < firstTlv.refCnt()); assertTrue(0 < secondTlv.refCnt()); assertTrue(0 < thirdTLV.refCnt()); - assertFalse(thirdTLV.release()); - assertFalse(secondTlv.release()); - assertTrue(firstTlv.release()); + assertTrue(msg.release()); assertEquals(0, firstTlv.refCnt()); assertEquals(0, secondTlv.refCnt()); assertEquals(0, thirdTLV.refCnt()); @@ -653,6 +660,51 @@ public void testV2WithSslTLVs() throws Exception { assertFalse(ch.finish()); } + @Test + public void testReleaseHAProxyMessage() { + ch = new EmbeddedChannel(new HAProxyMessageDecoder()); + + final byte[] bytes = { + 13, 10, 13, 10, 0, 13, 10, 81, 85, 73, 84, 10, 33, 17, 0, 35, 127, 0, 0, 1, 127, 0, 0, 1, + -55, -90, 7, 89, 32, 0, 20, 5, 0, 0, 0, 0, 33, 0, 5, 84, 76, 83, 118, 49, 34, 0, 4, 76, 69, 65, 70 + }; + + int startChannels = ch.pipeline().names().size(); + assertTrue(ch.writeInbound(copiedBuffer(bytes))); + Object msgObj = ch.readInbound(); + assertEquals(startChannels - 1, ch.pipeline().names().size()); + HAProxyMessage msg = (HAProxyMessage) msgObj; + + final List<HAProxyTLV> tlvs = msg.tlvs(); + assertEquals(3, tlvs.size()); + + assertEquals(1, msg.refCnt()); + for (HAProxyTLV tlv : tlvs) { + assertEquals(3, tlv.refCnt()); + } + + // Retain the haproxy message + msg.retain(); + assertEquals(2, msg.refCnt()); + for (HAProxyTLV tlv : tlvs) { + assertEquals(3, tlv.refCnt()); + } + + // Decrease the haproxy message refCnt + msg.release(); + assertEquals(1, msg.refCnt()); + for (HAProxyTLV tlv : tlvs) { + assertEquals(3, tlv.refCnt()); + } + + // Release haproxy message, TLVs will be released with it + msg.release(); + assertEquals(0, msg.refCnt()); + for (HAProxyTLV tlv : tlvs) { + assertEquals(0, tlv.refCnt()); + } + } + @Test public void 
testV2WithTLV() { ch = new EmbeddedChannel(new HAProxyMessageDecoder(4)); @@ -738,6 +790,7 @@ public void testV2WithTLV() { assertEquals(0, msg.destinationPort()); assertNull(ch.readInbound()); assertFalse(ch.finish()); + assertTrue(msg.release()); } @Test(expected = HAProxyProtocolException.class)
train
val
"2019-06-19T20:50:27"
"2019-05-30T15:52:03Z"
shevah
val
netty/netty/9278_9280
netty/netty
netty/netty/9278
netty/netty/9280
[ "keyword_pr_to_issue" ]
039087ed47a482a08ded357b047850830b65e8b9
18b7bdff12afb3dd6c6d579d4ade7159079afd36
[ "The issue is that on this line\r\n\r\nhttps://github.com/netty/netty/commit/8f7ef1cabb5442584e56a9e79bcb9696bc572a94#diff-006d8822b20e6bcf0e9f5f2470153807R166\r\n\r\nThe call to retrieve the method is returning `null` on GraalVM. The issue has a knock on effect as previous versions of Netty allows handlers to not be registered for reflection, but since the change all handlers used by netty have to be registered for reflective access. It would be better if this was not the case and instead a null check is added to the method call.", "@graemerocher sure we can add a null check but still I wonder if this is even \"valid\" to return null in terms of API. Shouldn't you better throw an exception if you can not find the method (for whatever reason). ", "@graemerocher also please feel free to send a PR for the extra null check. ", "Will do thanks ", "@graemerocher <3", "Thanks!", "@graemerocher I'm also running into this issue. I'm not sure I fully understand why this `NoSuchMethodException` is being thrown and why it's fine to ignore it. In a vert.x based application I compiled with graalvm native, I'm getting hundreds of these reflective errors when starting up because I have the `io.netty` level set to `FINE`.\r\n\r\nAre these errors expected? I'm using `4.1.49` so the netty version I'm using includes this fix. Thanks in advance!" ]
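The hint above proposes guarding the reflective lookup with a null check. A small, self-contained sketch of that guard (the helper name is hypothetical; the real change lands in `io.netty.channel.ChannelHandlerMask`):

```java
import java.lang.reflect.Method;

class ReflectiveLookupSketch {
    // Returns the method, or null when it is absent - which covers both the
    // normal NoSuchMethodException path and the null result observed under
    // GraalVM native images.
    public static Method findMethod(Class<?> type, String name, Class<?>... params) {
        try {
            return type.getMethod(name, params);
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        if (findMethod(String.class, "length") == null) {
            throw new AssertionError("String.length() should resolve");
        }
        if (findMethod(String.class, "definitelyMissing") != null) {
            throw new AssertionError("missing methods should yield null");
        }
        System.out.println("ok");
    }
}
```

Callers then branch on `null` (e.g. treat the handler method as present and not skippable) instead of letting the lookup failure abort pipeline construction.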
[]
"2019-06-25T06:12:12Z"
[]
Micronaut Graal native-images don't work anymore with Netty 4.1.35+
### Expected behavior I'm trying to upgrade Micronaut to use Netty 4.1.35 or later and it should be possible to build Micronaut GraalVM native-images with that Netty version. I know that Netty introduced Graal support in 4.1.36 but it uses the old approach, meaning that it's compatible with Graal pre-19.0.0 version. I was hoping to send a PR here to upgrade Netty to support Graal 19.0.0 (and 19.0.2 now), but first this need to be fixed. ### Actual behavior The application fails when starting the native image. ### Steps to reproduce It's not easy to reproduce the issue because you need to upgrade Netty version in Micronaut, build it locally and then create a native-image using that Micronaut version. When staring the application the error is: ```$ ./basic-app 00:00:28.269 [main] ERROR i.m.h.server.netty.NettyHttpServer - Error starting Micronaut server: io.netty.bootstrap.ServerBootstrap$1.channelRegistered(io.netty.channel.ChannelHandlerContext) java.lang.NoSuchMethodException: io.netty.bootstrap.ServerBootstrap$1.channelRegistered(io.netty.channel.ChannelHandlerContext) at java.lang.Class.getMethod(DynamicHub.java:987) at io.netty.channel.ChannelHandlerMask$2.run(ChannelHandlerMask.java:166) at io.netty.channel.ChannelHandlerMask$2.run(ChannelHandlerMask.java:163) at java.security.AccessController.doPrivileged(AccessController.java:83) at io.netty.channel.ChannelHandlerMask.isSkippable(ChannelHandlerMask.java:163) at io.netty.channel.ChannelHandlerMask.mask0(ChannelHandlerMask.java:91) at io.netty.channel.ChannelHandlerMask.mask(ChannelHandlerMask.java:76) at io.netty.channel.AbstractChannelHandlerContext.<init>(AbstractChannelHandlerContext.java:104) at io.netty.channel.DefaultChannelHandlerContext.<init>(DefaultChannelHandlerContext.java:26) at io.netty.channel.DefaultChannelPipeline.newContext(DefaultChannelPipeline.java:120) at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:204) at 
io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:385) at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:372) at io.netty.bootstrap.ServerBootstrap.init(ServerBootstrap.java:169) at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:321) at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:282) at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:278) at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:253) at io.micronaut.http.server.netty.NettyHttpServer.bindServerToHost(NettyHttpServer.java:421) at io.micronaut.http.server.netty.NettyHttpServer.start(NettyHttpServer.java:318) at io.micronaut.http.server.netty.NettyHttpServer.start(NettyHttpServer.java:96) at io.micronaut.runtime.Micronaut.lambda$start$2(Micronaut.java:75) at java.util.Optional.ifPresent(Optional.java:159) at io.micronaut.runtime.Micronaut.start(Micronaut.java:73) at io.micronaut.runtime.Micronaut.run(Micronaut.java:303) at io.micronaut.runtime.Micronaut.run(Micronaut.java:289) at example.micronaut.Application.main(Application.java:8) 00:00:28.272 [main] ERROR io.micronaut.runtime.Micronaut - Error starting Micronaut server: Unable to start Micronaut server on port: 8080 io.micronaut.http.server.exceptions.ServerStartupException: Unable to start Micronaut server on port: 8080 at io.micronaut.http.server.netty.NettyHttpServer.bindServerToHost(NettyHttpServer.java:446) at io.micronaut.http.server.netty.NettyHttpServer.start(NettyHttpServer.java:318) at io.micronaut.http.server.netty.NettyHttpServer.start(NettyHttpServer.java:96) at io.micronaut.runtime.Micronaut.lambda$start$2(Micronaut.java:75) at java.util.Optional.ifPresent(Optional.java:159) at io.micronaut.runtime.Micronaut.start(Micronaut.java:73) at io.micronaut.runtime.Micronaut.run(Micronaut.java:303) at io.micronaut.runtime.Micronaut.run(Micronaut.java:289) at 
example.micronaut.Application.main(Application.java:8) Caused by: java.lang.NoSuchMethodException: io.netty.bootstrap.ServerBootstrap$1.channelRegistered(io.netty.channel.ChannelHandlerContext) at java.lang.Class.getMethod(DynamicHub.java:987) at io.netty.channel.ChannelHandlerMask$2.run(ChannelHandlerMask.java:166) at io.netty.channel.ChannelHandlerMask$2.run(ChannelHandlerMask.java:163) at java.security.AccessController.doPrivileged(AccessController.java:83) at io.netty.channel.ChannelHandlerMask.isSkippable(ChannelHandlerMask.java:163) at io.netty.channel.ChannelHandlerMask.mask0(ChannelHandlerMask.java:91) at io.netty.channel.ChannelHandlerMask.mask(ChannelHandlerMask.java:76) at io.netty.channel.AbstractChannelHandlerContext.<init>(AbstractChannelHandlerContext.java:104) at io.netty.channel.DefaultChannelHandlerContext.<init>(DefaultChannelHandlerContext.java:26) at io.netty.channel.DefaultChannelPipeline.newContext(DefaultChannelPipeline.java:120) at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:204) at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:385) at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:372) at io.netty.bootstrap.ServerBootstrap.init(ServerBootstrap.java:169) at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:321) at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:282) at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:278) at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:253) at io.micronaut.http.server.netty ``` As Graeme Rocher mentioned in our issue https://github.com/micronaut-projects/micronaut-core/pull/1802#issuecomment-504642431 there is a breaking change introduced in Netty 4.1.35 here https://github.com/netty/netty/commit/8f7ef1cabb5442584e56a9e79bcb9696bc572a94#diff-006d8822b20e6bcf0e9f5f2470153807R166: 
![image](https://user-images.githubusercontent.com/559192/60031863-c4616c00-96a5-11e9-844d-5530156d21db.png) ### Minimal yet complete reproducer code (or URL to code) Not simple to reproduce. If you can't figure it out we can publish a specific snapshot version with the upgraded Netty dependency and the Micronaut Graal application that reproduces the issue. ### Netty version 4.1.35 and later. ### JVM version (e.g. `java -version`) ``` openjdk version "1.8.0_212" OpenJDK Runtime Environment (build 1.8.0_212-20190523183340.buildslave.jdk8u-src-tar--b03) OpenJDK 64-Bit GraalVM CE 19.0.2 (build 25.212-b03-jvmci-19-b04, mixed mode) ``` ### OS version (e.g. `uname -a`) ``` Linux nobita 4.18.0-20-generic #21~18.04.1-Ubuntu SMP Wed May 8 08:43:37 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux ```
[ "transport/src/main/java/io/netty/channel/ChannelHandlerMask.java" ]
[ "transport/src/main/java/io/netty/channel/ChannelHandlerMask.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/ChannelHandlerMask.java b/transport/src/main/java/io/netty/channel/ChannelHandlerMask.java index ef384433280..526006a84e3 100644 --- a/transport/src/main/java/io/netty/channel/ChannelHandlerMask.java +++ b/transport/src/main/java/io/netty/channel/ChannelHandlerMask.java @@ -175,7 +175,7 @@ public Boolean run() throws Exception { "Class {} missing method {}, assume we can not skip execution", handlerType, methodName, e); return false; } - return m.isAnnotationPresent(Skip.class); + return m != null && m.isAnnotationPresent(Skip.class); } }); }
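The patched line above guards the reflective lookup with a null check. A minimal stand-alone sketch of that pattern (hypothetical class `SkipCheck` with its own `Skip` annotation, since Netty's `ChannelHandlerMask.Skip` is internal): a failed `Class#getMethod` lookup yields `null`, and the added null check turns that into "not skippable" instead of a `NullPointerException`.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class SkipCheck {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Skip { }

    static class Handler {
        @Skip
        public void channelRegistered() { }
    }

    // Mirrors the patched isSkippable: on environments where the reflective
    // lookup fails (e.g. methods not registered for reflection on a
    // native-image), m stays null and we report "cannot skip" instead of
    // throwing from m.isAnnotationPresent(...).
    public static boolean isSkippable(Class<?> type, String methodName) {
        Method m;
        try {
            m = type.getMethod(methodName);
        } catch (NoSuchMethodException e) {
            m = null; // lookup failed; treat as not skippable
        }
        return m != null && m.isAnnotationPresent(Skip.class);
    }
}
```

This is only an illustration of the defensive pattern; the real fix lives in `ChannelHandlerMask` and uses Netty's own `Skip` annotation.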
null
train
val
"2019-06-24T23:11:24"
"2019-06-24T15:34:21Z"
ilopmar
val
netty/netty/9301_9304
netty/netty
netty/netty/9301
netty/netty/9304
[ "keyword_pr_to_issue" ]
262ced7ce4a9254fb55046aee89f0ff4d15e17f1
4596f9e1397ffbc7d1a68b670e529dcf3a5d6036
[ "This should be an issue. The way to find the remainder is either `length % 16` or `length & 15`. `length & 15` performs better.\r\n\r\n", "@qeesung yes,i think so.", "PR welcome\n\n> Am 29.06.2019 um 08:15 schrieb xiaoheng1 <notifications@github.com>:\n> \n> @qeesung yes,i think so.\n> \n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer Ok, I will submit a PR.", "@qeesung \r\n> length & 15 performs better.\r\n\r\nI'm not that sure: last time I've checked length % 16, the JIT, given that 16 is constant, is able to perform the same optimization you'll perform by hand. Please check the assembly or perform a benchmark to be sure of it\r\n", "@franz1981 I will check it more deeply. :-)", "I am pretty sure that HashMap uses &. As the HashMap is very optimized I think & might be faster ", "> I am pretty sure that HashMap uses &. As the HashMap is very optimized I think & might be faster\r\n\r\nI found the code in the JDK7 source code\r\n\r\nhttps://github.com/openjdk-mirror/jdk7u-jdk/blob/f4d80957e89a19a29bb9f9807d2a28351ed7f7df/src/share/classes/java/util/HashMap.java#L272-L277\r\n\r\n```java\r\n/**\r\n* Returns index for hash code h.\r\n*/\r\nstatic int indexFor(int h, int length) {\r\n return h & (length-1);\r\n}\r\n```", "Maby they also used this method because length may not be constant? ", "@SplotyCode @qeesung The JVM doesn't trust final (non static) fields unless for specific internal java util (invoke) classes ie length cannot be assumed (from the JIT) as constant on both `ArrayDeque` and `IdentityHashMap` or other impls, that need to use the \"clever\" mask bit-trick instead. \r\n`prettyHexDump` is hardcoding as constant 16 so this kind of trick is not needed and should be avoided as \"premature\" optimization unless there are strong proof that the JVM has failed to optimize it." ]
[]
"2019-06-29T13:05:10Z"
[]
ByteBufUtil.prettyHexDump may have a problem when calculating the initial capacity of StringBuilder
I use netty 4.1.34. Source code in netty 4.1.34: ``` private static String prettyHexDump(ByteBuf buffer, int offset, int length) { if (length == 0) { return StringUtil.EMPTY_STRING; } else { int rows = length / 16 + (length % 15 == 0? 0 : 1) + 4; StringBuilder buf = new StringBuilder(rows * 80); appendPrettyHexDump(buf, buffer, offset, length); return buf.toString(); } } ``` In my opinion it should be `rows = length / 16 + ((length & 15) == 0 ? 0 : 1) + 4;`. With `int rows = length / 16 + (length % 15 == 0 ? 0 : 1) + 4;`, `length % 15` is 0 whenever length is a multiple of 15 (e.g. length = 15), so the partial sixteen-byte row is not counted and the computed capacity is wrong (the length = 0 case is already handled by the early return).
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBufUtil.java" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java index ae8d9ed3ea2..b8fb5b446f7 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java @@ -1095,7 +1095,7 @@ private static String prettyHexDump(ByteBuf buffer, int offset, int length) { if (length == 0) { return StringUtil.EMPTY_STRING; } else { - int rows = length / 16 + (length % 15 == 0? 0 : 1) + 4; + int rows = length / 16 + ((length & 15) == 0? 0 : 1) + 4; StringBuilder buf = new StringBuilder(rows * 80); appendPrettyHexDump(buf, buffer, offset, length); return buf.toString();
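The merged fix above can be checked with a small stand-alone sketch (hypothetical class `RowCountCheck`) comparing the buggy modulo-15 row count against the fixed bitmask variant; note that `(length & 15)` needs the parentheses because `==` binds tighter than `&` in Java.

```java
public class RowCountCheck {
    // Buggy variant from the issue: tests the remainder against 15, so any
    // length that is a multiple of 15 (but not of 16) loses its partial row.
    public static int rowsBuggy(int length) {
        return length / 16 + (length % 15 == 0 ? 0 : 1) + 4;
    }

    // Fixed variant from the patch: (length & 15) equals length % 16 for
    // non-negative lengths, so a partial final row is always counted.
    public static int rowsFixed(int length) {
        return length / 16 + ((length & 15) == 0 ? 0 : 1) + 4;
    }
}
```

For length = 15 the buggy variant yields 4 rows while the fixed one yields 5, confirming the off-by-one the issue describes.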
null
val
val
"2019-06-29T12:16:58"
"2019-06-29T03:21:41Z"
xiaoheng1
val
netty/netty/9305_9306
netty/netty
netty/netty/9305
netty/netty/9306
[ "keyword_issue_to_pr" ]
262ced7ce4a9254fb55046aee89f0ff4d15e17f1
f8c1f350dbe6424036693bf5c0a62dfa39512818
[ "@xiaoheng1 good point... are you willing to open a PR ?", "@normanmaurer I am glad to,i will open a PR.", "#9306 Fix public int read() throws IOException method exceeds the limit of length" ]
[ "nit: merge both lines ", "Call buf.release() and in.close()", "Call buf2.release() and in2.close()", "@normanmaurer ok, i add it." ]
"2019-06-29T17:34:06Z"
[]
ByteBufInputStream.read() may have a problem
I use netty 4.1.34. Source code: ``` public int read() throws IOException { if (!buffer.isReadable()) { return -1; } return buffer.readByte() & 0xff; } ``` In my opinion, the check here should use the available() method rather than buffer.isReadable(). ByteBufInputStream is given a length when it is constructed, so if buffer.isReadable() is used for the check, read() can keep consuming bytes beyond that length limit, which is unreasonable.
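The reported semantics can be illustrated without Netty by a minimal length-bounded reader (hypothetical class `BoundedReader`): the stream must stop at its constructor-supplied limit even though more bytes remain readable in the backing array, which is what checking `available() == 0` instead of the buffer's readability achieves.

```java
public class BoundedReader {
    private final byte[] data;
    private final int limit; // bytes this stream may expose
    private int pos;

    public BoundedReader(byte[] data, int limit) {
        this.data = data;
        this.limit = Math.min(limit, data.length);
    }

    public int available() {
        return limit - pos;
    }

    // Correct behaviour: consult available(), not whether the underlying
    // buffer still has readable bytes; the backing array may be longer than
    // the limit, just like a ByteBuf wrapped with an explicit length.
    public int read() {
        if (available() == 0) {
            return -1;
        }
        return data[pos++] & 0xff;
    }
}
```

With a six-byte backing array and limit 3, reads return 1, 2, 3 and then -1, matching the behaviour the merged test patch asserts for `ByteBufInputStream`.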
[ "buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java" ]
[ "buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java" ]
[ "buffer/src/test/java/io/netty/buffer/ByteBufStreamTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java b/buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java index 038cd8db459..c0b116c1483 100644 --- a/buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java +++ b/buffer/src/main/java/io/netty/buffer/ByteBufInputStream.java @@ -163,7 +163,8 @@ public boolean markSupported() { @Override public int read() throws IOException { - if (!buffer.isReadable()) { + int available = available(); + if (available == 0) { return -1; } return buffer.readByte() & 0xff;
diff --git a/buffer/src/test/java/io/netty/buffer/ByteBufStreamTest.java b/buffer/src/test/java/io/netty/buffer/ByteBufStreamTest.java index 222ddcf7ee7..989e160b22a 100644 --- a/buffer/src/test/java/io/netty/buffer/ByteBufStreamTest.java +++ b/buffer/src/test/java/io/netty/buffer/ByteBufStreamTest.java @@ -212,4 +212,39 @@ public void testReadLine() throws Exception { assertEquals(charCount, count); in.close(); } + + @Test + public void testRead() throws Exception { + // case1 + ByteBuf buf = Unpooled.buffer(16); + buf.writeBytes(new byte[]{1, 2, 3, 4, 5, 6}); + + ByteBufInputStream in = new ByteBufInputStream(buf, 3); + + assertEquals(1, in.read()); + assertEquals(2, in.read()); + assertEquals(3, in.read()); + assertEquals(-1, in.read()); + assertEquals(-1, in.read()); + assertEquals(-1, in.read()); + + buf.release(); + in.close(); + + // case2 + ByteBuf buf2 = Unpooled.buffer(16); + buf2.writeBytes(new byte[]{1, 2, 3, 4, 5, 6}); + + ByteBufInputStream in2 = new ByteBufInputStream(buf2, 4); + + assertEquals(1, in2.read()); + assertEquals(2, in2.read()); + assertEquals(3, in2.read()); + assertEquals(4, in2.read()); + assertNotEquals(5, in2.read()); + assertEquals(-1, in2.read()); + + buf2.release(); + in2.close(); + } }
val
val
"2019-06-29T12:16:58"
"2019-06-29T15:59:41Z"
xiaoheng1
val
netty/netty/8962_9311
netty/netty
netty/netty/8962
netty/netty/9311
[ "keyword_pr_to_issue" ]
f8c1f350dbe6424036693bf5c0a62dfa39512818
18e412195256190f3d4a1d88c0f44608d4fce6ba
[ "I suspect it has to do with the maximal datagram packet size. Try to use `new FixedRecvByteBufAllocator(4096)` when you bootstrap your server. ", "I tried this:\r\n```\r\nNioDatagramChannel channel = new NioDatagramChannel();\r\nchannel.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(4096));\r\nBootstrap bootstrap = new Bootstrap();\r\nbootstrap.group(eventLoopGroup)\r\n .channelFactory(() -> channel)\r\n ...\r\n```\r\nNo luck. Tried 8192 as well.", "I tried to reproduce the problem. This seems to be a bug in `DatagramDnsResponseDecoder`, if the server receives the compressed DNS response and decodes it, it will be found that if the compression pointer is included in the `RADATA` of `ANSWER SECTION`, this problem will occur.\r\n\r\nThe reason is that `DatagramDnsResponseDecoder` does not decompress the compression pointer contained in `RADATA` of `ANSWER SECTION`.\r\n\r\nThe `ANSWER SECTION` has a resource record format:\r\n```\r\n0 1 2 3 4 5 6 7 8 9 A B C D E F \r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n| |\r\n/ /\r\n/ NAME /\r\n| |\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n| TYPE |\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n| CLASS |\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n| TTL |\r\n| |\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n| RDLENGTH |\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--|\r\n/ RDATA /\r\n/ /\r\n+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\r\n```\r\n\r\nFor Example:\r\n\r\nSend `www.apple.com` DNS query request.\r\n\r\nExpect `ANSWER SECTION`:\r\n```\r\nwww.apple.com. 1376 IN CNAME www.apple.com.edgekey.net.\r\nwww.apple.com.edgekey.net. 134 IN CNAME www.apple.com.edgekey.net.globalredir.akadns.net.\r\nwww.apple.com.edgekey.net.globalredir.akadns.net. 2663 IN CNAME e6858.e19.s.tl88.net.\r\ne6858.e19.s.tl88.net. 5 IN A 222.163.207.56\r\n```\r\n\r\nActually decoded by Netty\r\n```\r\nwww.apple.com. 
1376 IN CNAME .www.apple.com.edgekey.net.\r\n +------------------------+\r\n |\r\n +--------------------------v----------------------+\r\n | 0 1 2 3 4 5 6 7 8 9 a b c d e f |\r\n+---------------------------------------------------------------------------+\r\n|00000000| 03 77 77 77 05 61 70 70 6c 65 03 63 6f 6d 07 65 |.www.apple.com.e|\r\n|00000010| 64 67 65 6b 65 79 03 6e 65 74 00 |dgekey.net. |\r\n+---------------------------------------------------------------------------+\r\n\r\n```\r\n\r\n```\r\nwww.apple.com.edgekey.net. 134 IN CNAME www.apple.com.edgekey.net.globalredir.akadns.A\r\n +------------------------------+\r\n |\r\n +--------------------------v----------------------+\r\n | 0 1 2 3 4 5 6 7 8 9 a b c d e f |\r\n+---------------------------------------------------------------------------+\r\n|00000000| 03 77 77 77 05 61 70 70 6c 65 03 63 6f 6d 07 65 |.www.apple.com.e|\r\n|00000010| 64 67 65 6b 65 79 03 6e 65 74 0b 67 6c 6f 62 61 |dgekey.net.globa|\r\n|00000020| 6c 72 65 64 69 72 06 61 6b 61 64 6e 73 c0 41 |lredir.akadns.A |\r\n+---------------------------------------------------------------------------+\r\n +---+\r\n |\r\n v\r\n Undecompressed Compression Pointer\r\n```\r\n\r\n```\r\nwww.apple.com.edgekey.net.globalredir.akadns.net. 2663 IN CNAME e6858.e19.s.tl88.AA\r\n +-----------------------------------------+\r\n |\r\n +--------------------------v----------------------+\r\n | 0 1 2 3 4 5 6 7 8 9 a b c d e f |\r\n+---------------------------------------------------------------------------+\r\n|00000000| 05 65 36 38 35 38 03 65 31 39 01 73 04 74 6c 38 |.e6858.e19.s.tl8|\r\n|00000010| 38 c0 41 |8.A |\r\n+---------------------------------------------------------------------------+\r\n +---+\r\n |\r\n v\r\n Undecompressed Compression Pointer\r\n```\r\n\r\n```\r\ne6858.e19.s.tl88.net. 
5 IN A 222.163.207.56\r\n +-------------+\r\n |\r\n +--------------------------v----------------------+\r\n | 0 1 2 3 4 5 6 7 8 9 a b c d e f |\r\n+---------------------------------------------------------------------------+\r\n|00000000| da 3a 65 e5 |.:e. |\r\n+---------------------------------------------------------------------------+\r\n```\r\n\r\nWhen all ANSWER Records are combined in dns-proxy server and sent to the client, the original compression pointer will point to the illegal position, resulting in `bad label type` error.\r\n\r\n I'll do a PR.", "@qeesung can you provide a focused fix for the issue without changing the whole class hierarchy etc and breaking the API or is this not possible at all ?", "@normanmaurer Okay, I'll give it a try. :-)\r\n\r\n", "@qeesung thanks a lot", "@normanmaurer The reason for this problem is that rdata of DNS record cannot be parsed correctly when it contains compressed pointers. Generally, compression pointers are included only when rdata type is text type, such as `NS`, `CNAME` type. In other words, rdata can be parsed correctly only if Dns record type is known in advance.\r\n\r\nIn the original implementation, `DnsRecord` has only three subclasses, `DnsRawRecord`, `DnsPtrRecord` and `DnsOptPseudoRecord`. The text type of `DnsRecord` is currently hold in `DnsRawRecord`. To solve the problem of bad label, there are two solutions:\r\n\r\nThe original implementation of method `decodeRecord` of class `DefaultDnsRecordDecoder`\r\n\r\nhttps://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java#L89-L103\r\n\r\n**Solution1**: In the `decodeRecord` method of class `DefaultDnsRecordDecoder`, a new if branch is added to decode the record of text type. This approach has minimal impact on current implementations, but it should not be a good design. 
If you need to decode other types of records later, you need to add a new if branch.\r\n```java\r\nprotected DnsRecord decodeRecord(\r\n String name, DnsRecordType type, int dnsClass, long timeToLive,\r\n ByteBuf in, int offset, int length) throws Exception {\r\n\tif (type == DnsRecordType.PTR) {\r\n\t\treturn new DefaultDnsPtrRecord(\r\n\t\t name, dnsClass, timeToLive, decodeName0(in.duplicate().setIndex(offset, offset + length)));\r\n\t}\r\n \tif (type == DnsRecordType.NS || type == DnsRecordType.CNAME /** || more types... */) {\r\n // decode normal text rdata or compressed pointer text rdata\r\n // return DnsRawRecord\r\n }\r\n\treturn new DefaultDnsRawRecord(\r\n\t name, type, dnsClass, timeToLive, in.retainedDuplicate().setIndex(offset, offset + length));\r\n}\r\n```\r\n\r\n**Solution2**: Add an RData decoder for each type of `DnsRecord`, This method has been implemented in PR #9084 . If a new type of RData needs to be decoded, then only one decoder needs to be added and registered. But this will break some of the APIs and implementations\r\n\r\n```java\r\nprotected DnsRecord decodeRecord(\r\n String name, DnsRecordType type, int dnsClass, long timeToLive,\r\n ByteBuf in, int offset, int length) throws Exception {\r\n\tByteBuf rData = in.duplicate().setIndex(offset, offset + length);\r\n\t// Get the corresponding RData decoder by type\r\n\tDnsRDataDecoder<? extends DnsRecord> rDataDecoder = DnsRDataCodecs.rDataDecoder(type);\r\n\tif (rDataDecoder != null) {\r\n\t\treturn rDataDecoder.decodeRData(name, dnsClass, timeToLive, rData);\r\n\t}\r\n\treturn new DefaultDnsRawRecord(name, type, dnsClass, timeToLive, rData.retain());\r\n}\r\n```\r\nWDYT?Which way is better?\r\n", "@qeesung for 4.1 I think we should do 1) for master we can check 2)", "@normanmaurer Okay, I'll submit a new PR for 1)" ]
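Independently of which solution is chosen, the underlying mechanism is RFC 1035 name decoding with compression pointers: a length byte whose two high bits are set (0xC0) introduces a 14-bit offset back into the message. A minimal stand-alone sketch on a plain byte[] (hypothetical class `DnsNameDecoder`, loosely mirroring what the decode logic in the patch does, including the loop guard):

```java
import java.nio.charset.StandardCharsets;

public class DnsNameDecoder {
    // Decodes a domain name starting at `offset` within a DNS message,
    // following compression pointers (0xC0-prefixed two-byte offsets).
    public static String decode(byte[] msg, int offset) {
        StringBuilder name = new StringBuilder();
        int pos = offset;
        int jumps = 0;
        while (true) {
            int len = msg[pos++] & 0xff;
            if ((len & 0xC0) == 0xC0) { // compression pointer
                int target = ((len & 0x3f) << 8) | (msg[pos] & 0xff);
                if (++jumps > msg.length) { // crude loop guard
                    throw new IllegalStateException("name contains a loop");
                }
                pos = target;
            } else if (len != 0) { // ordinary label of `len` bytes
                name.append(new String(msg, pos, len, StandardCharsets.US_ASCII)).append('.');
                pos += len;
            } else { // len == 0 marks the end of the name
                break;
            }
        }
        return name.length() == 0 ? "." : name.toString();
    }
}
```

The merged fix effectively runs this decompression over the RDATA of CNAME/NS records and re-encodes the result, so pointers no longer leak into a re-assembled response where their offsets would be invalid.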
[ "Package private ?", "Package private", "Package private", "Package private", "Done!", "Done!", "Done!", "Done", "use `compression.alloc().buffer(...)` to ensure the same allocator is used (which may be a pooled allocated) ", "Done", "nit: typo... should be \"uncompressed\"", "nit: should be \"uncompressed\"", "Done", "Done" ]
"2019-07-01T16:50:53Z"
[]
netty-resolver-dns "Got bad packet: bad label type"
I have built a simple DNS proxy server using the Netty DNS resolver library. [See the code here.](https://github.com/nzhenry/dns-proxy) It works for most queries but not all. For example, when I try to query the A records for "www.apple.com" using nslookup or dig, it fails intermittently with this error: ```;; Got bad packet: bad label type 305 bytes ca 20 80 00 00 01 00 04 00 00 00 01 03 77 77 77 .............www 05 61 70 70 6c 65 03 63 6f 6d 00 00 01 00 01 03 .apple.com...... 77 77 77 05 61 70 70 6c 65 03 63 6f 6d 00 00 05 www.apple.com... 00 01 00 00 00 37 00 1b 03 77 77 77 05 61 70 70 .....7...www.app 6c 65 03 63 6f 6d 07 65 64 67 65 6b 65 79 03 6e le.com.edgekey.n 65 74 00 03 77 77 77 05 61 70 70 6c 65 03 63 6f et..www.apple.co 6d 07 65 64 67 65 6b 65 79 03 6e 65 74 00 00 05 m.edgekey.net... 00 01 00 00 30 2b 00 2f 03 77 77 77 05 61 70 70 ....0+./.www.app 6c 65 03 63 6f 6d 07 65 64 67 65 6b 65 79 03 6e le.com.edgekey.n 65 74 0b 67 6c 6f 62 61 6c 72 65 64 69 72 06 61 et.globalredir.a 6b 61 64 6e 73 c0 41 03 77 77 77 05 61 70 70 6c kadns.A.www.appl 65 03 63 6f 6d 07 65 64 67 65 6b 65 79 03 6e 65 e.com.edgekey.ne 74 0b 67 6c 6f 62 61 6c 72 65 64 69 72 06 61 6b t.globalredir.ak 61 64 6e 73 03 6e 65 74 00 00 05 00 01 00 00 0c adns.net........ b7 00 19 05 65 36 38 35 38 05 64 73 63 65 39 0a ....e6858.dsce9. 61 6b 61 6d 61 69 65 64 67 65 c0 41 05 65 36 38 akamaiedge.A.e68 35 38 05 64 73 63 65 39 0a 61 6b 61 6d 61 69 65 58.dsce9.akamaie 64 67 65 03 6e 65 74 00 00 01 00 01 00 00 00 11 dge.net......... 00 04 68 5f a0 7e 00 00 29 10 00 00 00 00 00 00 ..h_....)....... 00 . ``` When it fails the debug log looks like this: ```DefaultDnsQuestion(www.apple.com. IN A) DefaultDnsRawRecord(www.apple.com. 885 IN CNAME 27B) DefaultDnsRawRecord(www.apple.com.edgekey.net. 10239 IN CNAME 47B) DefaultDnsRawRecord(www.apple.com.edgekey.net.globalredir.akadns.net. 441 IN CNAME 25B) DefaultDnsRawRecord(e6858.dsce9.akamaiedge.net. 
9 IN A 4B)] RECEIVED: [[[id: 0xc9dafb10], 36804, /192.168.1.1:53, DatagramDnsResponse(from: /192.168.1.1:53, to: /0:0:0:0:0:0:0:0:51678, 36804, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(www.apple.com. IN A) DefaultDnsRawRecord(www.apple.com. 885 IN CNAME 27B) DefaultDnsRawRecord(www.apple.com.edgekey.net. 10239 IN CNAME 47B) DefaultDnsRawRecord(www.apple.com.edgekey.net.globalredir.akadns.net. 441 IN CNAME 25B) DefaultDnsRawRecord(e6858.dsce9.akamaiedge.net. 9 IN A 4B)]: {}], {} ``` When it succeeds the log looks like this: ```DefaultDnsQuestion(www.apple.com. IN A) DefaultDnsRawRecord(www.apple.com. 881 IN CNAME 27B) DefaultDnsRawRecord(www.apple.com.edgekey.net. 10235 IN CNAME 50B) DefaultDnsRawRecord(www.apple.com.edgekey.net.globalredir.akadns.net. 437 IN CNAME 28B) DefaultDnsRawRecord(e6858.dsce9.akamaiedge.net. 5 IN A 4B) DefaultDnsRawRecord(OPT flags:0 udp:4096 0B)] RECEIVED: [[[id: 0xc9dafb10], 37086, /192.168.1.1:53, DatagramDnsResponse(from: /192.168.1.1:53, to: /0:0:0:0:0:0:0:0:51678, 37086, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(www.apple.com. IN A) DefaultDnsRawRecord(www.apple.com. 881 IN CNAME 27B) DefaultDnsRawRecord(www.apple.com.edgekey.net. 10235 IN CNAME 50B) DefaultDnsRawRecord(www.apple.com.edgekey.net.globalredir.akadns.net. 437 IN CNAME 28B) DefaultDnsRawRecord(e6858.dsce9.akamaiedge.net. 5 IN A 4B) DefaultDnsRawRecord(OPT flags:0 udp:4096 0B)]: {}], {} ``` Notice the additional OPT record when it succeeds. This appears to be the key. I just don't know how I can fix the problem when it fails. I've discovered that if I remove the CNAME records from the response, then the query resolves with no errors. I would like to return a proper copy of the server response including the CNAME records. Any help would be greatly appreciated!
[ "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java", "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java" ]
[ "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java", "codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java", "codec-dns/src/main/java/io/netty/handler/codec/dns/DnsCodecUtil.java" ]
[ "codec-dns/src/test/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoderTest.java" ]
diff --git a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java index b4e50ff402e..e61d46cc204 100644 --- a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java +++ b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoder.java @@ -16,8 +16,6 @@ package io.netty.handler.codec.dns; import io.netty.buffer.ByteBuf; -import io.netty.handler.codec.CorruptedFrameException; -import io.netty.util.CharsetUtil; import io.netty.util.internal.UnstableApi; /** @@ -98,6 +96,11 @@ protected DnsRecord decodeRecord( return new DefaultDnsPtrRecord( name, dnsClass, timeToLive, decodeName0(in.duplicate().setIndex(offset, offset + length))); } + if (type == DnsRecordType.CNAME || type == DnsRecordType.NS) { + return new DefaultDnsRawRecord(name, type, dnsClass, timeToLive, + DnsCodecUtil.decompressDomainName( + in.duplicate().setIndex(offset, offset + length))); + } return new DefaultDnsRawRecord( name, type, dnsClass, timeToLive, in.retainedDuplicate().setIndex(offset, offset + length)); } @@ -123,69 +126,6 @@ protected String decodeName0(ByteBuf in) { * @return the domain name for an entry */ public static String decodeName(ByteBuf in) { - int position = -1; - int checked = 0; - final int end = in.writerIndex(); - final int readable = in.readableBytes(); - - // Looking at the spec we should always have at least enough readable bytes to read a byte here but it seems - // some servers do not respect this for empty names. So just workaround this and return an empty name in this - // case. 
- // - // See: - // - https://github.com/netty/netty/issues/5014 - // - https://www.ietf.org/rfc/rfc1035.txt , Section 3.1 - if (readable == 0) { - return ROOT; - } - - final StringBuilder name = new StringBuilder(readable << 1); - while (in.isReadable()) { - final int len = in.readUnsignedByte(); - final boolean pointer = (len & 0xc0) == 0xc0; - if (pointer) { - if (position == -1) { - position = in.readerIndex() + 1; - } - - if (!in.isReadable()) { - throw new CorruptedFrameException("truncated pointer in a name"); - } - - final int next = (len & 0x3f) << 8 | in.readUnsignedByte(); - if (next >= end) { - throw new CorruptedFrameException("name has an out-of-range pointer"); - } - in.readerIndex(next); - - // check for loops - checked += 2; - if (checked >= end) { - throw new CorruptedFrameException("name contains a loop."); - } - } else if (len != 0) { - if (!in.isReadable(len)) { - throw new CorruptedFrameException("truncated label in a name"); - } - name.append(in.toString(in.readerIndex(), len, CharsetUtil.UTF_8)).append('.'); - in.skipBytes(len); - } else { // len == 0 - break; - } - } - - if (position != -1) { - in.readerIndex(position); - } - - if (name.length() == 0) { - return ROOT; - } - - if (name.charAt(name.length() - 1) != '.') { - name.append('.'); - } - - return name.toString(); + return DnsCodecUtil.decodeDomainName(in); } } diff --git a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java index 48f60bcc951..45914ea0656 100644 --- a/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java +++ b/codec-dns/src/main/java/io/netty/handler/codec/dns/DefaultDnsRecordEncoder.java @@ -16,14 +16,11 @@ package io.netty.handler.codec.dns; import io.netty.buffer.ByteBuf; -import io.netty.buffer.ByteBufUtil; import io.netty.channel.socket.InternetProtocolFamily; import io.netty.handler.codec.UnsupportedMessageTypeException; import 
io.netty.util.internal.StringUtil; import io.netty.util.internal.UnstableApi; -import static io.netty.handler.codec.dns.DefaultDnsRecordDecoder.ROOT; - /** * The default {@link DnsRecordEncoder} implementation. * @@ -141,25 +138,7 @@ private void encodeRawRecord(DnsRawRecord record, ByteBuf out) throws Exception } protected void encodeName(String name, ByteBuf buf) throws Exception { - if (ROOT.equals(name)) { - // Root domain - buf.writeByte(0); - return; - } - - final String[] labels = name.split("\\."); - for (String label : labels) { - final int labelLen = label.length(); - if (labelLen == 0) { - // zero-length label means the end of the name. - break; - } - - buf.writeByte(labelLen); - ByteBufUtil.writeAscii(buf, label); - } - - buf.writeByte(0); // marks end of name field + DnsCodecUtil.encodeDomainName(name, buf); } private static byte padWithZeros(byte b, int lowOrderBitsToPreserve) { diff --git a/codec-dns/src/main/java/io/netty/handler/codec/dns/DnsCodecUtil.java b/codec-dns/src/main/java/io/netty/handler/codec/dns/DnsCodecUtil.java new file mode 100644 index 00000000000..8804cf748e4 --- /dev/null +++ b/codec-dns/src/main/java/io/netty/handler/codec/dns/DnsCodecUtil.java @@ -0,0 +1,132 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. 
+ */ + +package io.netty.handler.codec.dns; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufUtil; +import io.netty.buffer.Unpooled; +import io.netty.handler.codec.CorruptedFrameException; +import io.netty.util.CharsetUtil; + +import static io.netty.handler.codec.dns.DefaultDnsRecordDecoder.*; + +final class DnsCodecUtil { + private DnsCodecUtil() { + // Util class + } + + static void encodeDomainName(String name, ByteBuf buf) { + if (ROOT.equals(name)) { + // Root domain + buf.writeByte(0); + return; + } + + final String[] labels = name.split("\\."); + for (String label : labels) { + final int labelLen = label.length(); + if (labelLen == 0) { + // zero-length label means the end of the name. + break; + } + + buf.writeByte(labelLen); + ByteBufUtil.writeAscii(buf, label); + } + + buf.writeByte(0); // marks end of name field + } + + static String decodeDomainName(ByteBuf in) { + int position = -1; + int checked = 0; + final int end = in.writerIndex(); + final int readable = in.readableBytes(); + + // Looking at the spec we should always have at least enough readable bytes to read a byte here but it seems + // some servers do not respect this for empty names. So just workaround this and return an empty name in this + // case. 
+ // + // See: + // - https://github.com/netty/netty/issues/5014 + // - https://www.ietf.org/rfc/rfc1035.txt , Section 3.1 + if (readable == 0) { + return ROOT; + } + + final StringBuilder name = new StringBuilder(readable << 1); + while (in.isReadable()) { + final int len = in.readUnsignedByte(); + final boolean pointer = (len & 0xc0) == 0xc0; + if (pointer) { + if (position == -1) { + position = in.readerIndex() + 1; + } + + if (!in.isReadable()) { + throw new CorruptedFrameException("truncated pointer in a name"); + } + + final int next = (len & 0x3f) << 8 | in.readUnsignedByte(); + if (next >= end) { + throw new CorruptedFrameException("name has an out-of-range pointer"); + } + in.readerIndex(next); + + // check for loops + checked += 2; + if (checked >= end) { + throw new CorruptedFrameException("name contains a loop."); + } + } else if (len != 0) { + if (!in.isReadable(len)) { + throw new CorruptedFrameException("truncated label in a name"); + } + name.append(in.toString(in.readerIndex(), len, CharsetUtil.UTF_8)).append('.'); + in.skipBytes(len); + } else { // len == 0 + break; + } + } + + if (position != -1) { + in.readerIndex(position); + } + + if (name.length() == 0) { + return ROOT; + } + + if (name.charAt(name.length() - 1) != '.') { + name.append('.'); + } + + return name.toString(); + } + + /** + * Decompress pointer data. + * @param compression comporession data + * @return decompressed data + */ + static ByteBuf decompressDomainName(ByteBuf compression) { + String domainName = decodeDomainName(compression); + ByteBuf result = compression.alloc().buffer(domainName.length() << 1); + encodeDomainName(domainName, result); + return result; + } +}
diff --git a/codec-dns/src/test/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoderTest.java b/codec-dns/src/test/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoderTest.java index 6de6ce5d724..a90acaa2833 100644 --- a/codec-dns/src/test/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoderTest.java +++ b/codec-dns/src/test/java/io/netty/handler/codec/dns/DefaultDnsRecordDecoderTest.java @@ -16,10 +16,11 @@ package io.netty.handler.codec.dns; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufUtil; import io.netty.buffer.Unpooled; import org.junit.Test; -import static org.junit.Assert.assertEquals; +import static org.junit.Assert.*; public class DefaultDnsRecordDecoderTest { @@ -89,6 +90,81 @@ public void testDecodePtrRecord() throws Exception { } } + @Test + public void testdecompressCompressPointer() { + byte[] compressionPointer = { + 5, 'n', 'e', 't', 't', 'y', 2, 'i', 'o', 0, + (byte) 0xC0, 0 + }; + ByteBuf buffer = Unpooled.wrappedBuffer(compressionPointer); + ByteBuf uncompressed = null; + try { + uncompressed = DnsCodecUtil.decompressDomainName(buffer.duplicate().setIndex(10, 12)); + assertEquals(0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), uncompressed)); + } finally { + buffer.release(); + if (uncompressed != null) { + uncompressed.release(); + } + } + } + + @Test + public void testdecompressNestedCompressionPointer() { + byte[] nestedCompressionPointer = { + 6, 'g', 'i', 't', 'h', 'u', 'b', 2, 'i', 'o', 0, // github.io + 5, 'n', 'e', 't', 't', 'y', (byte) 0xC0, 0, // netty.github.io + (byte) 0xC0, 11, // netty.github.io + }; + ByteBuf buffer = Unpooled.wrappedBuffer(nestedCompressionPointer); + ByteBuf uncompressed = null; + try { + uncompressed = DnsCodecUtil.decompressDomainName(buffer.duplicate().setIndex(19, 21)); + assertEquals(0, ByteBufUtil.compare( + Unpooled.wrappedBuffer(new byte[] { + 5, 'n', 'e', 't', 't', 'y', 6, 'g', 'i', 't', 'h', 'u', 'b', 2, 'i', 'o', 0 + }), uncompressed)); + } finally { + 
buffer.release(); + if (uncompressed != null) { + uncompressed.release(); + } + } + } + + @Test + public void testDecodeCompressionRDataPointer() throws Exception { + DefaultDnsRecordDecoder decoder = new DefaultDnsRecordDecoder(); + byte[] compressionPointer = { + 5, 'n', 'e', 't', 't', 'y', 2, 'i', 'o', 0, + (byte) 0xC0, 0 + }; + ByteBuf buffer = Unpooled.wrappedBuffer(compressionPointer); + DefaultDnsRawRecord cnameRecord = null; + DefaultDnsRawRecord nsRecord = null; + try { + cnameRecord = (DefaultDnsRawRecord) decoder.decodeRecord( + "netty.github.io", DnsRecordType.CNAME, DnsRecord.CLASS_IN, 60, buffer, 10, 2); + assertEquals("The rdata of CNAME-type record should be decompressed in advance", + 0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), cnameRecord.content())); + assertEquals("netty.io.", DnsCodecUtil.decodeDomainName(cnameRecord.content())); + nsRecord = (DefaultDnsRawRecord) decoder.decodeRecord( + "netty.github.io", DnsRecordType.NS, DnsRecord.CLASS_IN, 60, buffer, 10, 2); + assertEquals("The rdata of NS-type record should be decompressed in advance", + 0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), nsRecord.content())); + assertEquals("netty.io.", DnsCodecUtil.decodeDomainName(nsRecord.content())); + } finally { + buffer.release(); + if (cnameRecord != null) { + cnameRecord.release(); + } + + if (nsRecord != null) { + nsRecord.release(); + } + } + } + @Test public void testDecodeMessageCompression() throws Exception { // See https://www.ietf.org/rfc/rfc1035 [4.1.4. Message compression]
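The `DnsCodecUtil` patch above works with the RFC 1035 (§3.1) name wire format: each label is prefixed by its length, the name ends with a zero byte, and a byte whose top two bits are set (`0xc0`) introduces a compression pointer. A minimal standalone sketch of the plain, pointer-free encoding — illustrative only, not the netty implementation:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class DnsNameWireFormat {
    // Plain (pointer-free) RFC 1035 name encoding: length-prefixed labels
    // terminated by a zero byte. Illustrative sketch, not the netty code.
    static byte[] encode(String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (String label : name.split("\\.")) {
            if (label.isEmpty()) {
                break; // a zero-length label ends the name
            }
            byte[] ascii = label.getBytes(StandardCharsets.US_ASCII);
            out.write(ascii.length);
            out.write(ascii, 0, ascii.length);
        }
        out.write(0); // end-of-name marker
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Produces {5,'n','e','t','t','y',2,'i','o',0} — the same byte
        // sequence used as the "netty.io" test vector in the patch above.
        System.out.println(java.util.Arrays.toString(encode("netty.io")));
    }
}
```

Running `encode("netty.io")` reproduces the uncompressed form that `decompressDomainName` is expected to emit for the compression-pointer test vectors in the test patch.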
test
val
"2019-07-01T15:57:34"
"2019-03-20T23:37:47Z"
nzhenry
val
netty/netty/9134_9312
netty/netty
netty/netty/9134
netty/netty/9312
[ "keyword_pr_to_issue" ]
b02ee1106f81a97334c076e5510d7f90d4f4e224
be26f4e00fe1c7aeaec0356f4f16ea643ca2f6da
[ "@davydotcom do you have a reproducer ?", "Use websocket client in netty in any way and it’s very easily reproduced code is wrong in handshaker\n\nSent from my iPhone\n\n> On May 8, 2019, at 3:12 AM, Norman Maurer <notifications@github.com> wrote:\n> \n> @davydotcom do you have a reproducer ?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Maybe just open a pr if it’s so easy to fix. Sorry I am very busy atm\n\n> Am 08.05.2019 um 13:17 schrieb David Estes <notifications@github.com>:\n> \n> Use websocket client in netty in any way and it’s very easily reproduced code is wrong in handshaker\n> \n> Sent from my iPhone\n> \n> > On May 8, 2019, at 3:12 AM, Norman Maurer <notifications@github.com> wrote:\n> > \n> > @davydotcom do you have a reproducer ?\n> > \n> > —\n> > You are receiving this because you were mentioned.\n> > Reply to this email directly, view it on GitHub, or mute the thread.\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "We use Netty through Vert.x for a proxy application. For some of our websocket tests we modify the \"origin\" header of the client requests. This does not work anymore due to the changes here. Also we need to pass the \"origin\" header downstream. This is not possible anymore and we are stuck here. Maybe you can change the behaviour and set the \"origin\" header only if it is not already present?", "Resolved in #9435. Sorry for the duplicate comment." ]
[]
"2019-07-01T20:12:41Z"
[]
WebSocketClientHandshaker13 Invalid Handshake
### Expected behavior Origin Header should be sent on handshake ### Actual behavior Sec-WebSocket-Origin Header is sent instead which is not a client handshake but rather a server to client origin handshake per the Specification
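A hedged sketch of the header choice at issue: per RFC 6455, a version-13 client handshake carries `Origin`, while the older hybi-07/08 drafts used `Sec-WebSocket-Origin`. The version-dispatch helper below is illustrative only, not netty's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HandshakeHeaders {
    // Illustrative helper (hypothetical name): pick the origin header name
    // by WebSocket draft version. v13 (RFC 6455) and v00 send "Origin";
    // the v07/v08 drafts sent "Sec-WebSocket-Origin".
    static String originHeaderName(int wsVersion) {
        return (wsVersion == 13 || wsVersion == 0) ? "Origin" : "Sec-WebSocket-Origin";
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Upgrade", "websocket");
        headers.put("Connection", "Upgrade");
        headers.put(originHeaderName(13), "http://example.com");
        System.out.println(headers); // v13 handshake uses "Origin"
    }
}
```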
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketRequestBuilder.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java index b3cf60432cc..f0ea38c0d89 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java @@ -189,7 +189,7 @@ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, S * Upgrade: websocket * Connection: Upgrade * Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== - * Sec-WebSocket-Origin: http://example.com + * Origin: http://example.com * Sec-WebSocket-Protocol: chat, superchat * Sec-WebSocket-Version: 13 * </pre> @@ -225,7 +225,7 @@ protected FullHttpRequest newHandshakeRequest() { .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE) .set(HttpHeaderNames.SEC_WEBSOCKET_KEY, key) .set(HttpHeaderNames.HOST, websocketHostValue(wsURL)) - .set(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, websocketOriginValue(wsURL)); + .set(HttpHeaderNames.ORIGIN, websocketOriginValue(wsURL)); String expectedSubprotocol = expectedSubprotocol(); if (expectedSubprotocol != null && !expectedSubprotocol.isEmpty()) { @@ -251,7 +251,7 @@ protected FullHttpRequest newHandshakeRequest() { * * @param response * HTTP response returned from the server for the request sent by beginOpeningHandshake00(). - * @throws WebSocketHandshakeException + * @throws WebSocketHandshakeException if handshake response is invalid. 
*/ @Override protected void verify(FullHttpResponse response) { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java index af139c7616b..a21143f8f2d 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker13.java @@ -115,7 +115,7 @@ public WebSocketServerHandshaker13( * Upgrade: websocket * Connection: Upgrade * Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== - * Sec-WebSocket-Origin: http://example.com + * Origin: http://example.com * Sec-WebSocket-Protocol: chat, superchat * Sec-WebSocket-Version: 13 * </pre>
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java index 01acaf92b51..acc10d7c244 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java @@ -46,7 +46,7 @@ protected CharSequence[] getHandshakeHeaderNames() { HttpHeaderNames.CONNECTION, HttpHeaderNames.SEC_WEBSOCKET_KEY, HttpHeaderNames.HOST, - HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, + getOriginHeaderName(), HttpHeaderNames.SEC_WEBSOCKET_VERSION, }; } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java index 9a72e2feb1a..cdd9bd71ba5 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java @@ -15,11 +15,13 @@ */ package io.netty.handler.codec.http.websocketx; +import io.netty.handler.codec.http.HttpHeaderNames; import io.netty.handler.codec.http.HttpHeaders; import java.net.URI; public class WebSocketClientHandshaker13Test extends WebSocketClientHandshaker07Test { + @Override protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, HttpHeaders headers, boolean absoluteUpgradeUrl) { @@ -27,4 +29,10 @@ protected WebSocketClientHandshaker newHandshaker(URI uri, String subprotocol, H 1024, true, true, 10000, absoluteUpgradeUrl); } + + @Override + protected CharSequence getOriginHeaderName() { + return HttpHeaderNames.ORIGIN; + } + } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketRequestBuilder.java 
b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketRequestBuilder.java index fd199b864fb..65ef489edb2 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketRequestBuilder.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketRequestBuilder.java @@ -138,7 +138,11 @@ public FullHttpRequest build() { headers.set(HttpHeaderNames.SEC_WEBSOCKET_KEY, key); } if (origin != null) { - headers.set(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, origin); + if (version == WebSocketVersion.V13 || version == WebSocketVersion.V00) { + headers.set(HttpHeaderNames.ORIGIN, origin); + } else { + headers.set(HttpHeaderNames.SEC_WEBSOCKET_ORIGIN, origin); + } } if (version != null) { headers.set(HttpHeaderNames.SEC_WEBSOCKET_VERSION, version.toHttpHeaderValue());
test
val
"2019-07-10T12:19:15"
"2019-05-07T20:56:15Z"
davydotcom
val
netty/netty/1802_9427
netty/netty
netty/netty/1802
netty/netty/9427
[ "keyword_pr_to_issue" ]
3f0f322562699a65b4133fdb328075da6670bec3
4db38b4e0c734b9cdaf0591dd0c5b3f4d4f11ea4
[ "@trustin after thinking more about I think we should disallow to expand a duplicated buffer. WDYT ?\n", "Please see issue #1800. Data loss can happen with others, as well, because currently only data from the reader index on is copied, even if a mark exists before that. I think that as long as you can arbitrarily set the reader index in the buffer, all buffer data should always be copied on expansion.\n", "And the reasoning about your thought?\n\nOn 30/08/13 23:07, Norman Maurer wrote:\n\n> @trustin https://github.com/trustin after thinking more about I \n> think we should disallow to expand a duplicated buffer. WDYT ?\n> \n> —\n> Reply to this email directly or view it on GitHub \n> https://github.com/netty/netty/issues/1802#issuecomment-23563314.\n\n## \n\nhttps://twitter.com/trustin\nhttps://twitter.com/trustin_ko\nhttps://twitter.com/netty_project\n", "#1800 is unrelated with this issue.\n\nOn 30/08/13 23:40, jhutton wrote:\n\n> Please see issue #1800 https://github.com/netty/netty/issues/1800. \n> Data loss can happen with others, as well, because currently only data \n> from the reader index on is copied, even if a mark exists before that. \n> I think that as long as you can arbitrarily set the reader index in \n> the buffer, all buffer data should always be copied on expansion.\n> \n> —\n> Reply to this email directly or view it on GitHub \n> https://github.com/netty/netty/issues/1802#issuecomment-23565551.\n\n## \n\nhttps://twitter.com/trustin\nhttps://twitter.com/trustin_ko\nhttps://twitter.com/netty_project\n", "@normanmaurer Now I see what you mean. Yeah, all derived buffers should be unallowed to perform expansion.\n", "@trustin good :) can we do it in 4.0.9 as it is a change in behavior but I would argue it was a bug to allow it before and broken anyway\n", "Sure. We could probably override `ensureWritable()` and `capacity()` at `AbstractDerivedByteBuf`? 
We still cannot prevent a user from expanding the original buffer, right?\n", "Now we finally found a place to use `BufferOverflowException`. :-)\n", "@trustin after thinking moe about it we need to copy the whole old buffer when adjust the capacity if a derived buffer was obtained as otherwise we may corrupt the data for a obtained data if the readerIndex of the original buffer is bigger then the one of the obtained one.\n", "@trustin what you think about this ? See my last comment\n", "Yeah, let's copy the whole region.\n", "Always copy the whole region.\n" ]
[ "Then why not use it ?", "Was going to write the same...", "Wouldn't it break subclasses that rely on the use of `allocateArray()` e.g. to [track buffer heap usage](https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/UnpooledByteBufAllocator.java#L157) or impl [custom pooling](https://github.com/netty/netty/pull/8015)?\r\n\r\nThough I noticed that the `copy()` method currently violates this, I can open another PR to fix that.", "See #9440 for the other mentioned fix", "I see.... Makes sense. Can you remove TODO: then ?", "@normanmaurer done" ]
"2019-08-05T05:57:25Z"
[ "defect" ]
DuplicatedByteBuf not correctly expand capacity
The problem is that it delegates the call to the wrapped buffer, which uses its own readerIndex to decide which data needs to be copied during expansion. This leads to data loss.
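A dependency-free sketch of the data-loss pattern described above: if expansion copies only from the backing buffer's own readerIndex onward, a duplicate whose readerIndex is smaller sees zeros where its unread bytes used to be. The array names and the `growFromReaderIndex` helper are illustrative, not netty code:

```java
public class ExpandCopyBug {
    // Illustrative helper (not netty code): grow a backing array but copy
    // only from the backing buffer's readerIndex onward, as the buggy
    // expansion path did.
    static byte[] growFromReaderIndex(byte[] data, int readerIndex, int newCapacity) {
        byte[] grown = new byte[newCapacity];
        System.arraycopy(data, readerIndex, grown, readerIndex, data.length - readerIndex);
        return grown;
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};
        // The wrapped buffer has already read two bytes (readerIndex == 2)...
        byte[] grown = growFromReaderIndex(data, 2, 8);
        // ...so a duplicate whose readerIndex is still 0 now reads a zero
        // where byte 1 used to be:
        System.out.println(grown[0]); // 0 instead of 1
    }
}
```

This is why the fix resolved on always copying the whole old region on reallocation.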
[ "buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java", "buffer/src/main/java/io/netty/buffer/PoolArena.java", "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java", "buffer/src/main/java/io/netty/buffer/PoolArena.java", "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java", "buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java" ]
[]
diff --git a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java index 3a19f85c21a..254d40480ae 100644 --- a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java @@ -268,6 +268,13 @@ protected final void adjustMarkers(int decrement) { } } + // Called after a capacity reduction + protected final void trimIndicesToCapacity(int newCapacity) { + if (writerIndex() > newCapacity) { + setIndex0(Math.min(readerIndex(), newCapacity), newCapacity); + } + } + @Override public ByteBuf ensureWritable(int minWritableBytes) { checkPositiveOrZero(minWritableBytes, "minWritableBytes"); diff --git a/buffer/src/main/java/io/netty/buffer/PoolArena.java b/buffer/src/main/java/io/netty/buffer/PoolArena.java index 67955ce6d4d..2005e9a0ec3 100644 --- a/buffer/src/main/java/io/netty/buffer/PoolArena.java +++ b/buffer/src/main/java/io/netty/buffer/PoolArena.java @@ -379,9 +379,7 @@ int alignCapacity(int reqCapacity) { } void reallocate(PooledByteBuf<T> buf, int newCapacity, boolean freeOldMemory) { - if (newCapacity < 0 || newCapacity > buf.maxCapacity()) { - throw new IllegalArgumentException("newCapacity: " + newCapacity); - } + assert newCapacity >= 0 && newCapacity <= buf.maxCapacity(); int oldCapacity = buf.length; if (oldCapacity == newCapacity) { @@ -394,29 +392,17 @@ void reallocate(PooledByteBuf<T> buf, int newCapacity, boolean freeOldMemory) { T oldMemory = buf.memory; int oldOffset = buf.offset; int oldMaxLength = buf.maxLength; - int readerIndex = buf.readerIndex(); - int writerIndex = buf.writerIndex(); + // This does not touch buf's reader/writer indices allocate(parent.threadCache(), buf, newCapacity); + int bytesToCopy; if (newCapacity > oldCapacity) { - memoryCopy( - oldMemory, oldOffset, - buf.memory, buf.offset, oldCapacity); - } else if (newCapacity < oldCapacity) { - if (readerIndex < newCapacity) { - if (writerIndex > newCapacity) { - 
writerIndex = newCapacity; - } - memoryCopy( - oldMemory, oldOffset + readerIndex, - buf.memory, buf.offset + readerIndex, writerIndex - readerIndex); - } else { - readerIndex = writerIndex = newCapacity; - } + bytesToCopy = oldCapacity; + } else { + buf.trimIndicesToCapacity(newCapacity); + bytesToCopy = newCapacity; } - - buf.setIndex(readerIndex, writerIndex); - + memoryCopy(oldMemory, oldOffset, buf.memory, buf.offset, bytesToCopy); if (freeOldMemory) { free(oldChunk, oldNioBuffer, oldHandle, oldMaxLength, buf.cache); } diff --git a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java index 8c2d54674db..5b984eb7e78 100644 --- a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java @@ -110,7 +110,7 @@ public final ByteBuf capacity(int newCapacity) { (maxLength > 512 || newCapacity > maxLength - 16)) { // here newCapacity < length length = newCapacity; - setIndex(Math.min(readerIndex(), newCapacity), Math.min(writerIndex(), newCapacity)); + trimIndicesToCapacity(newCapacity); return this; } } diff --git a/buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java b/buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java index af465c2cd9c..daad1408e7b 100644 --- a/buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/UnpooledDirectByteBuf.java @@ -146,35 +146,23 @@ public int capacity() { @Override public ByteBuf capacity(int newCapacity) { checkNewCapacity(newCapacity); - - int readerIndex = readerIndex(); - int writerIndex = writerIndex(); - int oldCapacity = capacity; + if (newCapacity == oldCapacity) { + return this; + } + int bytesToCopy; if (newCapacity > oldCapacity) { - ByteBuffer oldBuffer = buffer; - ByteBuffer newBuffer = allocateDirect(newCapacity); - oldBuffer.position(0).limit(oldBuffer.capacity()); - newBuffer.position(0).limit(oldBuffer.capacity()); - 
newBuffer.put(oldBuffer); - newBuffer.clear(); - setByteBuffer(newBuffer, true); - } else if (newCapacity < oldCapacity) { - ByteBuffer oldBuffer = buffer; - ByteBuffer newBuffer = allocateDirect(newCapacity); - if (readerIndex < newCapacity) { - if (writerIndex > newCapacity) { - writerIndex(writerIndex = newCapacity); - } - oldBuffer.position(readerIndex).limit(writerIndex); - newBuffer.position(readerIndex).limit(writerIndex); - newBuffer.put(oldBuffer); - newBuffer.clear(); - } else { - setIndex(newCapacity, newCapacity); - } - setByteBuffer(newBuffer, true); + bytesToCopy = oldCapacity; + } else { + trimIndicesToCapacity(newCapacity); + bytesToCopy = newCapacity; } + ByteBuffer oldBuffer = buffer; + ByteBuffer newBuffer = allocateDirect(newCapacity); + oldBuffer.position(0).limit(bytesToCopy); + newBuffer.position(0).limit(bytesToCopy); + newBuffer.put(oldBuffer).clear(); + setByteBuffer(newBuffer, true); return this; } diff --git a/buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java b/buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java index 0e1ff3488da..e2ec10ea7ba 100644 --- a/buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/UnpooledHeapByteBuf.java @@ -120,29 +120,23 @@ public int capacity() { @Override public ByteBuf capacity(int newCapacity) { checkNewCapacity(newCapacity); - - int oldCapacity = array.length; byte[] oldArray = array; + int oldCapacity = oldArray.length; + if (newCapacity == oldCapacity) { + return this; + } + + int bytesToCopy; if (newCapacity > oldCapacity) { - byte[] newArray = allocateArray(newCapacity); - System.arraycopy(oldArray, 0, newArray, 0, oldArray.length); - setArray(newArray); - freeArray(oldArray); - } else if (newCapacity < oldCapacity) { - byte[] newArray = allocateArray(newCapacity); - int readerIndex = readerIndex(); - if (readerIndex < newCapacity) { - int writerIndex = writerIndex(); - if (writerIndex > newCapacity) { - 
writerIndex(writerIndex = newCapacity); - } - System.arraycopy(oldArray, readerIndex, newArray, readerIndex, writerIndex - readerIndex); - } else { - setIndex(newCapacity, newCapacity); - } - setArray(newArray); - freeArray(oldArray); + bytesToCopy = oldCapacity; + } else { + trimIndicesToCapacity(newCapacity); + bytesToCopy = newCapacity; } + byte[] newArray = allocateArray(newCapacity); + System.arraycopy(oldArray, 0, newArray, 0, bytesToCopy); + setArray(newArray); + freeArray(oldArray); return this; } diff --git a/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java b/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java index 3b9c05b83b1..cc00e865adb 100644 --- a/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeNoCleanerDirectByteBuf.java @@ -48,18 +48,8 @@ public ByteBuf capacity(int newCapacity) { return this; } - ByteBuffer newBuffer = reallocateDirect(buffer, newCapacity); - - if (newCapacity < oldCapacity) { - if (readerIndex() < newCapacity) { - if (writerIndex() > newCapacity) { - writerIndex(newCapacity); - } - } else { - setIndex(newCapacity, newCapacity); - } - } - setByteBuffer(newBuffer, false); + trimIndicesToCapacity(newCapacity); + setByteBuffer(reallocateDirect(buffer, newCapacity), false); return this; } }
null
train
val
"2019-08-07T09:56:28"
"2013-08-30T09:19:08Z"
normanmaurer
val
netty/netty/9475_9477
netty/netty
netty/netty/9475
netty/netty/9477
[ "keyword_pr_to_issue" ]
97361fa2c89da57e88762aaca9e2b186e8c148f5
8a082532f2c9464aa23b4edfd6caa427cc4dd779
[ "\r\n[asciistring.diff.gz](https://github.com/netty/netty/files/3511592/asciistring.diff.gz)\r\n", "Thanks @atcurtis, good catch! Could you submit a PR with your fix and unit test?" ]
[]
"2019-08-17T03:25:10Z"
[ "defect" ]
AsciiString contentEqualsIgnoreCase fails when arrayOffset is non-zero
### Expected behavior AsciiString.contentEqualsIgnoreCase is expected to work. ### Actual behavior AsciiString.contentEqualsIgnoreCase may return true for non-matching strings of equal length ### Steps to reproduce Create AsciiString with non-zero offset. ### Minimal yet complete reproducer code (or URL to code) Example of failure ` @Test public void testContentEqualsIgnoreCase() { byte[] bytes = { 32, 'a' }; AsciiString asciiString = new AsciiString(bytes, 1, 1, false); assertFalse(asciiString.contentEqualsIgnoreCase("b")); assertFalse(asciiString.contentEqualsIgnoreCase(AsciiString.of("b"))); } ` Diff to fix it: ` diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java index ad382482df..29ef6a3e14 100644 --- a/common/src/main/java/io/netty/util/AsciiString.java +++ b/common/src/main/java/io/netty/util/AsciiString.java @@ -532,7 +532,7 @@ public final class AsciiString implements CharSequence, Comparable<CharSequence> if (string instanceof AsciiString) { AsciiString rhs = (AsciiString) string; - for (int i = arrayOffset(), j = rhs.arrayOffset(); i < length(); ++i, ++j) { + for (int i = arrayOffset(), j = rhs.arrayOffset(), end = i + length(); i < end; ++i, ++j) { if (!equalsIgnoreCase(value[i], rhs.value[j])) { return false; } @@ -540,7 +540,7 @@ public final class AsciiString implements CharSequence, Comparable<CharSequence> return true; } - for (int i = arrayOffset(), j = 0; i < length(); ++i, ++j) { + for (int i = arrayOffset(), j = 0, end = length(); j < end; ++i, ++j) { if (!equalsIgnoreCase(b2c(value[i]), string.charAt(j))) { return false; } ` ### Netty version 4.1.39 and earlier. ### JVM version (e.g. `java -version`) n/a ### OS version (e.g. `uname -a`) n/a
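The root cause is a loop bound that ignores the array offset: with `arrayOffset() == 1` and `length() == 1`, the condition `i < length()` is false on the first iteration, so the loop never compares anything and the method returns `true`. A dependency-free sketch of the bug and the fix (method names hypothetical, not netty's API):

```java
public class OffsetLoopBug {
    // Buggy variant: the bound `i < length` ignores the window offset,
    // so with offset >= length the loop body never runs.
    static boolean equalsIgnoreCaseBuggy(byte[] value, int offset, int length, String s) {
        for (int i = offset, j = 0; i < length; ++i, ++j) { // bug is here
            if (Character.toLowerCase((char) value[i]) != Character.toLowerCase(s.charAt(j))) {
                return false;
            }
        }
        return true;
    }

    // Fixed variant: the bound accounts for the offset, as in the diff above.
    static boolean equalsIgnoreCaseFixed(byte[] value, int offset, int length, String s) {
        for (int i = offset, j = 0, end = offset + length; i < end; ++i, ++j) {
            if (Character.toLowerCase((char) value[i]) != Character.toLowerCase(s.charAt(j))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] bytes = {32, 'a'}; // a one-byte view at offset 1, i.e. "a"
        System.out.println(equalsIgnoreCaseBuggy(bytes, 1, 1, "b")); // true (wrong!)
        System.out.println(equalsIgnoreCaseFixed(bytes, 1, 1, "b")); // false
    }
}
```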
[ "common/src/main/java/io/netty/util/AsciiString.java" ]
[ "common/src/main/java/io/netty/util/AsciiString.java" ]
[ "common/src/test/java/io/netty/util/AsciiStringCharacterTest.java" ]
diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java index ad382482dfc..29ef6a3e145 100644 --- a/common/src/main/java/io/netty/util/AsciiString.java +++ b/common/src/main/java/io/netty/util/AsciiString.java @@ -532,7 +532,7 @@ public boolean contentEqualsIgnoreCase(CharSequence string) { if (string instanceof AsciiString) { AsciiString rhs = (AsciiString) string; - for (int i = arrayOffset(), j = rhs.arrayOffset(); i < length(); ++i, ++j) { + for (int i = arrayOffset(), j = rhs.arrayOffset(), end = i + length(); i < end; ++i, ++j) { if (!equalsIgnoreCase(value[i], rhs.value[j])) { return false; } @@ -540,7 +540,7 @@ public boolean contentEqualsIgnoreCase(CharSequence string) { return true; } - for (int i = arrayOffset(), j = 0; i < length(); ++i, ++j) { + for (int i = arrayOffset(), j = 0, end = length(); j < end; ++i, ++j) { if (!equalsIgnoreCase(b2c(value[i]), string.charAt(j))) { return false; }
diff --git a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java index deff4267d65..7534bb40de3 100644 --- a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java +++ b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java @@ -37,6 +37,15 @@ public class AsciiStringCharacterTest { private static final Random r = new Random(); + @Test + public void testContentEqualsIgnoreCase() { + byte[] bytes = { 32, 'a' }; + AsciiString asciiString = new AsciiString(bytes, 1, 1, false); + // https://github.com/netty/netty/issues/9475 + assertFalse(asciiString.contentEqualsIgnoreCase("b")); + assertFalse(asciiString.contentEqualsIgnoreCase(AsciiString.of("b"))); + } + @Test public void testGetBytesStringBuilder() { final StringBuilder b = new StringBuilder();
train
val
"2019-08-16T15:18:17"
"2019-08-17T02:29:45Z"
atcurtis
val
netty/netty/9481_9484
netty/netty
netty/netty/9481
netty/netty/9484
[ "keyword_pr_to_issue" ]
9e2922b04d2901647b717232ed8fa1745bdbe0df
cb739b26194ebaf89f99d83bc6c573723a225173
[ "@lwlee2608 Imho disable all non IO tasks is not going to work as there are situations when Netty need to run these. That said I would say we should most likely just use `99` if 100 is used to not make it behave like `0`.", "Ah, all right. I'm not actually suggesting disabling non-IO tasks. But looking at the calculations, don't you think ratio of 0 makes more sense than 100 to disable the timeout?", "My logic of thinking is as follows: The larger the ratio, the less amount of time non I/O tasks get. Therefore at the maximum ratio, it does not sound right to get an infinite amount of time. ", "But I guess it is too late for the change now. Perhaps we can update to the documentation to clarify on this?", "yeah it is too late... prs welcome for docs ", "Okay sure :)", "The IORatio logic has just been removed in `EpollEventLoop` and I'd assume the intention is to bring `NioEventLoop` in line with this at some point (@normanmaurer correct me if I'm wrong). So it may end up being moot in any case.", "@njhill yes... at the end I think that would be the plan. ", "oic, I'm still going make the javadoc PR since it's just minimal effort ", "ha~\r\nthe code show some NON IO Task must run ,even io Ratio ==100 \r\n if (ioRatio == 100) {\r\n try {\r\n if (strategy > 0) {\r\n processSelectedKeys();\r\n }\r\n } finally {\r\n // Ensure we always run tasks.\r\n ranTasks = runAllTasks();\r\n }\r\n }" ]
[]
"2019-08-20T06:55:33Z"
[]
Setting IORatio to 100
I was reading into netty-transport code and stumble across the ioRatio https://github.com/netty/netty/blob/873988676a2b1bb9cc6e5c1a80e5b27725b1d75c/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java#L515 By the look of it, the smaller the ioRatio, the longer time any non-I/O tasks can spend in the event-loop. For example, given ioTime = 10us ioRatio of 5 will yield timeout = 190us ioRatio of 10 will yield timeout = 90us ioRatio of 50 will yield timeout = 10us ioRatio of 90 will yield timeout = 1.1111us According to the documents: _"Returns the percentage of the desired amount of time spent for I/O in the event loop."_ So it all makes sense. However, when ioRatio is set to 100, I would expect no non-I/O tasks can be executed. However, the reverse is happening, the code disables the timeout altogether meaning non-I/O tasks can take as long as they like. Also explained by @Scottmitch in https://github.com/netty/netty/issues/6058#issuecomment-262619885 Should the timeout only be removed when the ioRatio is set to 0, instead of 100? This make more sense to me: ioRatio of 0 should disable the timeout ioRatio of 5 will yield timeout = 190us ioRatio of 10 will yield timeout = 90us ioRatio of 50 will yield timeout = 10us ioRatio of 90 will yield timeout = 1.1111us ioRatio of 100 will yield timeout = 0 and therefore should not allow non-IO tasks?
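The numbers quoted above follow from the formula in `NioEventLoop.run()`, `ioTime * (100 - ioRatio) / ioRatio`; a quick sketch to check them (class and method names are illustrative, only the formula is netty's):

```java
public class IoRatioDemo {
    // Mirrors the non-I/O task budget from NioEventLoop:
    //   runAllTasks(ioTime * (100 - ioRatio) / ioRatio)
    static long nonIoBudgetNanos(long ioTimeNanos, int ioRatio) {
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }

    public static void main(String[] args) {
        long ioTime = 10_000; // 10us, expressed in nanoseconds
        System.out.println(nonIoBudgetNanos(ioTime, 5));  // 190000 ns = 190us
        System.out.println(nonIoBudgetNanos(ioTime, 10)); // 90000 ns  = 90us
        System.out.println(nonIoBudgetNanos(ioTime, 50)); // 10000 ns  = 10us
        System.out.println(nonIoBudgetNanos(ioTime, 90)); // 1111 ns  ~= 1.1us
        // At ioRatio == 100 the formula would yield 0, which is why the
        // implementation special-cases it to skip the timeout entirely.
    }
}
```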
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index e4eb8019de4..8dd7609b41a 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -344,8 +344,10 @@ public int getIoRatio() { } /** - * Sets the percentage of the desired amount of time spent for I/O in the event loop. The default value is - * {@code 50}, which means the event loop will try to spend the same amount of time for I/O as for non-I/O tasks. + * Sets the percentage of the desired amount of time spent for I/O in the event loop. Value range from 1-100. + * The default value is {@code 50}, which means the event loop will try to spend the same amount of time for I/O + * as for non-I/O tasks. The lower the number the more time can be spent on non-I/O tasks. If value set to + * {@code 100}, this feature will be disabled and event loop will not attempt to balance I/O and non-I/O tasks. */ public void setIoRatio(int ioRatio) { if (ioRatio <= 0 || ioRatio > 100) {
null
train
val
"2019-08-19T15:08:40"
"2019-08-19T09:49:27Z"
lwlee2608
val
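The ioRatio record above boils down to NioEventLoop's deadline arithmetic, `timeout = ioTime * (100 - ioRatio) / ioRatio`. A small sketch reproduces the table from the report; the helper name and the Python port are our own illustration, not Netty code:

```python
def non_io_budget_micros(io_time_micros, io_ratio):
    """Port of the deadline formula NioEventLoop feeds to runAllTasks().

    The higher ioRatio, the smaller the time budget left for non-I/O
    tasks. Netty special-cases ioRatio == 100 by skipping the deadline
    entirely, which is the surprising behaviour the issue asks about.
    """
    if io_ratio == 100:
        return None  # no limit: non-I/O tasks may run as long as they like
    return io_time_micros * (100 - io_ratio) / io_ratio

# Reproduce the numbers from the report for ioTime = 10us.
for ratio in (5, 10, 50, 90):
    print(ratio, round(non_io_budget_micros(10, ratio), 4))
```

Running it prints 190.0, 90.0, 10.0 and 1.1111 microseconds for ratios 5, 10, 50 and 90, matching the report's table.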
netty/netty/9495_9501
netty/netty
netty/netty/9495
netty/netty/9501
[ "keyword_pr_to_issue" ]
9fa974f6a5fa4d90fc13572c92d94e0681db4570
14e856ac722cab5060a273b007c7c577f9e13e14
[ "@nizarm sounds like a bug yes... I would be happy to review a PR.", "Thanks for confirming @normanmaurer - PR is ready for your review !" ]
[ "I wonder if we should make this more consistent with what we have on the server side:\r\n\r\nhttps://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java#L107\r\n\r\nBasically:\r\n\r\npublic Http2ClientUpgradeCodec(Http2FrameCodec http2Codec, ChannelHandler... handlers) {\r\n ...\r\n}\r\n", "See above .... I think we want to make this more consistent with what we have in the server handler:\r\n\r\nhttps://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java#L152", "In fact I did try this approach initially and realized I will be breaking existing public constructor of Http2ClientUpgradeCodec. For example, contrary to Http2ServerUpgradeCodec, the Http2ClientUpgradeCodec has a public constructor that allow users to specify a separate ChannelHandler for upgradeTo as shown bleow\r\n\r\n```java\r\npublic Http2ClientUpgradeCodec(Http2FrameCodec frameCodec, ChannelHandler upgradeToHandler)\r\n```\r\nI havent used this signature personnally - but not sure if other users of netty out there will be leveraging this to pass separate upgradeToHandler other than the frameCodec.\r\n\r\nAdding the above proposed approach will introduce ambiguity to upradeToHandler unless we strongly type. \r\n\r\nLet me know your thoughts, if we are okay breaking this constructor - I will go ahead and make the change consistent with Http2ServerUpgradeCodec", "ah yeah you are right. I did not realise it... Then its ok as it is. Just please add an extra test case to `Http2ClientUpgradeCodecTest`.", "Missing space after if" ]
"2019-08-22T17:42:24Z"
[]
NullPointerException during Client Side ClearText Upgrade with new Http2MultiplexHandler.java
When cleartext upgrade is implemented on the client side using **HttpClientUpgradeHandler** there is no way to pass the newly created **Http2MultiplexHandler**. It seems like we need a way to correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpClientUpgrade(...), since this special case is not handled in HttpClientUpgradeHandler (but it seems this case is correctly taken care of in Http2ServerUpgradeCodec). This led to the situation that we did not correctly receive the event on the Http2MultiplexHandler and so did not correctly create the Http2StreamChannel for the upgrade stream. Because of this we ended up with an NPE if a frame was dispatched to the upgrade stream later on. Following is the exception I am receiving when I set up the Http2MultiplexHandler after the upgrade has taken place java.lang.NullPointerException at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$1.visit(AbstractHttp2StreamChannel.java:64) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.Http2FrameCodec$2.visit(Http2FrameCodec.java:200) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.forEachActiveStream(DefaultHttp2Connection.java:971) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.DefaultHttp2Connection.forEachActiveStream(DefaultHttp2Connection.java:208) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.Http2FrameCodec.forEachActiveStream(Http2FrameCodec.java:196) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.Http2ChannelDuplexHandler.forEachActiveStream(Http2ChannelDuplexHandler.java:83) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.handler.codec.http2.Http2MultiplexHandler.channelWritabilityChanged(Http2MultiplexHandler.java:199) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelWritabilityChanged(AbstractChannelHandlerContext.java:436) ~[netty-all-4.1.39.Final.jar:4.1.39.Final] My code looks something like this

```java
Http2ClientUpgradeCodec upgradeCodec = new Http2ClientUpgradeCodec(
    "Http2Codec",
    Http2FrameCodecBuilder
        .forClient()
        .initialSettings(createHttp2Settings())
        .build()
);
channel.pipeline().addLast(sourceCodec);
channel.pipeline().addLast(new HttpClientUpgradeHandler(sourceCodec, upgradeCodec, _maxContentLength));
channel.pipeline().addLast(new Http2ProtocolUpgradeHandler(upgradePromise));
```

I add the Http2MultiplexHandler inside my Http2ProtocolUpgradeHandler on success of the cleartext upgrade (as I don't have any other way to add it along with the upgrade codec). Netty version I used: 4.1.39 When I follow the same approach to fix the server-side upgrade issue as mentioned in https://github.com/netty/netty/issues/9314 - it works perfectly for me. @normanmaurer , @trustin - let me know if this is a known bug and we are good with my proposed fix. I can open a pull request!
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java" ]
[ "codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java" ]
[ "codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java" ]
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java index 6028a6fcd15..43c6d5dddc6 100644 --- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java +++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java @@ -47,13 +47,14 @@ public class Http2ClientUpgradeCodec implements HttpClientUpgradeHandler.Upgrade private final String handlerName; private final Http2ConnectionHandler connectionHandler; private final ChannelHandler upgradeToHandler; + private final ChannelHandler http2MultiplexHandler; public Http2ClientUpgradeCodec(Http2FrameCodec frameCodec, ChannelHandler upgradeToHandler) { this(null, frameCodec, upgradeToHandler); } public Http2ClientUpgradeCodec(String handlerName, Http2FrameCodec frameCodec, ChannelHandler upgradeToHandler) { - this(handlerName, (Http2ConnectionHandler) frameCodec, upgradeToHandler); + this(handlerName, (Http2ConnectionHandler) frameCodec, upgradeToHandler, null); } /** @@ -66,6 +67,18 @@ public Http2ClientUpgradeCodec(Http2ConnectionHandler connectionHandler) { this((String) null, connectionHandler); } + /** + * Creates the codec using a default name for the connection handler when adding to the + * pipeline. + * + * @param connectionHandler the HTTP/2 connection handler + * @param http2MultiplexHandler the Http2 Multiplexer handler to work with Http2FrameCodec + */ + public Http2ClientUpgradeCodec(Http2ConnectionHandler connectionHandler, + Http2MultiplexHandler http2MultiplexHandler) { + this((String) null, connectionHandler, http2MultiplexHandler); + } + /** * Creates the codec providing an upgrade to the given handler for HTTP/2. 
* @@ -74,24 +87,38 @@ public Http2ClientUpgradeCodec(Http2ConnectionHandler connectionHandler) { * @param connectionHandler the HTTP/2 connection handler */ public Http2ClientUpgradeCodec(String handlerName, Http2ConnectionHandler connectionHandler) { - this(handlerName, connectionHandler, connectionHandler); + this(handlerName, connectionHandler, connectionHandler, null); + } + + /** + * Creates the codec providing an upgrade to the given handler for HTTP/2. + * + * @param handlerName the name of the HTTP/2 connection handler to be used in the pipeline, + * or {@code null} to auto-generate the name + * @param connectionHandler the HTTP/2 connection handler + */ + public Http2ClientUpgradeCodec(String handlerName, Http2ConnectionHandler connectionHandler, + Http2MultiplexHandler http2MultiplexHandler) { + this(handlerName, connectionHandler, connectionHandler, http2MultiplexHandler); } private Http2ClientUpgradeCodec(String handlerName, Http2ConnectionHandler connectionHandler, ChannelHandler - upgradeToHandler) { + upgradeToHandler, Http2MultiplexHandler http2MultiplexHandler) { this.handlerName = handlerName; this.connectionHandler = checkNotNull(connectionHandler, "connectionHandler"); this.upgradeToHandler = checkNotNull(upgradeToHandler, "upgradeToHandler"); + this.http2MultiplexHandler = http2MultiplexHandler; } @Override + public CharSequence protocol() { return HTTP_UPGRADE_PROTOCOL_NAME; } @Override public Collection<CharSequence> setUpgradeHeaders(ChannelHandlerContext ctx, - HttpRequest upgradeRequest) { + HttpRequest upgradeRequest) { CharSequence settingsValue = getSettingsHeaderValue(ctx); upgradeRequest.headers().set(HTTP_UPGRADE_SETTINGS_HEADER, settingsValue); return UPGRADE_HEADERS; @@ -99,12 +126,24 @@ public Collection<CharSequence> setUpgradeHeaders(ChannelHandlerContext ctx, @Override public void upgradeTo(ChannelHandlerContext ctx, FullHttpResponse upgradeResponse) - throws Exception { - // Add the handler to the pipeline. 
- ctx.pipeline().addAfter(ctx.name(), handlerName, upgradeToHandler); + throws Exception { + try { + // Add the handler to the pipeline. + ctx.pipeline().addAfter(ctx.name(), handlerName, upgradeToHandler); + + // Add the Http2 Multiplex handler as this handler handle events produced by the connectionHandler. + // See https://github.com/netty/netty/issues/9495 + if (http2MultiplexHandler != null) { + final String name = ctx.pipeline().context(connectionHandler).name(); + ctx.pipeline().addAfter(name, null, http2MultiplexHandler); + } - // Reserve local stream 1 for the response. - connectionHandler.onHttpClientUpgrade(); + // Reserve local stream 1 for the response. + connectionHandler.onHttpClientUpgrade(); + } catch (Http2Exception e) { + ctx.fireExceptionCaught(e); + ctx.close(); + } } /**
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java index 2d381e4f88b..f0b4a581c02 100644 --- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java +++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodecTest.java @@ -33,26 +33,42 @@ public class Http2ClientUpgradeCodecTest { @Test public void testUpgradeToHttp2ConnectionHandler() throws Exception { - testUpgrade(new Http2ConnectionHandlerBuilder().server(false).frameListener(new Http2FrameAdapter()).build()); + testUpgrade(new Http2ConnectionHandlerBuilder().server(false).frameListener( + new Http2FrameAdapter()).build(), null); } @Test public void testUpgradeToHttp2FrameCodec() throws Exception { - testUpgrade(Http2FrameCodecBuilder.forClient().build()); + testUpgrade(Http2FrameCodecBuilder.forClient().build(), null); } @Test public void testUpgradeToHttp2MultiplexCodec() throws Exception { testUpgrade(Http2MultiplexCodecBuilder.forClient(new HttpInboundHandler()) - .withUpgradeStreamHandler(new ChannelInboundHandlerAdapter()).build()); + .withUpgradeStreamHandler(new ChannelInboundHandlerAdapter()).build(), null); } - private static void testUpgrade(Http2ConnectionHandler handler) throws Exception { + @Test + public void testUpgradeToHttp2FrameCodecWithMultiplexer() throws Exception { + testUpgrade(Http2FrameCodecBuilder.forClient().build(), + new Http2MultiplexHandler(new HttpInboundHandler(), new HttpInboundHandler())); + } + + private static void testUpgrade(Http2ConnectionHandler handler, Http2MultiplexHandler multiplexer) + throws Exception { FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.OPTIONS, "*"); EmbeddedChannel channel = new EmbeddedChannel(new ChannelInboundHandlerAdapter()); ChannelHandlerContext ctx = channel.pipeline().firstContext(); - 
Http2ClientUpgradeCodec codec = new Http2ClientUpgradeCodec("connectionHandler", handler); + + Http2ClientUpgradeCodec codec; + + if (multiplexer == null) { + codec = new Http2ClientUpgradeCodec("connectionHandler", handler); + } else { + codec = new Http2ClientUpgradeCodec("connectionHandler", handler, multiplexer); + } + codec.setUpgradeHeaders(ctx, request); // Flush the channel to ensure we write out all buffered data channel.flush(); @@ -60,6 +76,10 @@ private static void testUpgrade(Http2ConnectionHandler handler) throws Exception codec.upgradeTo(ctx, null); assertNotNull(channel.pipeline().get("connectionHandler")); + if (multiplexer != null) { + assertNotNull(channel.pipeline().get(Http2MultiplexHandler.class)); + } + assertTrue(channel.finishAndReleaseAll()); }
train
val
"2019-08-22T13:59:08"
"2019-08-22T07:12:24Z"
nizarm
val
netty/netty/9429_9512
netty/netty
netty/netty/9429
netty/netty/9512
[ "keyword_pr_to_issue" ]
1a22c126be73b6898caea5e59018f1c28ed86b11
491b1f428b00bba45a2c51d171cf819704b0fbe6
[ "Netty 4.1.32\r\n\r\nClient (browser):\r\n`ws.send(\"\")`\r\nServer log:\r\n`io.netty.handler.codec.DecoderException: io.netty.handler.codec.CodecException: cannot read uncompressed buffer\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:98)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)\r\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)\r\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)\r\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)\r\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)\r\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)\r\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)\r\n\tat 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)\r\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)\r\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\nCaused by: io.netty.handler.codec.CodecException: cannot read uncompressed buffer\r\n\tat io.netty.handler.codec.http.websocketx.extensions.compression.DeflateDecoder.decode(DeflateDecoder.java:89)\r\n\tat io.netty.handler.codec.http.websocketx.extensions.compression.PerMessageDeflateDecoder.decode(PerMessageDeflateDecoder.java:64)\r\n\tat io.netty.handler.codec.http.websocketx.extensions.compression.PerMessageDeflateDecoder.decode(PerMessageDeflateDecoder.java:30)\r\n\tat io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)\r\n\t... 20 more`", "It seems that something is wrong with your codec handlers, would you please show use your codes about `ChannelIntializer` ?", "Hi, This is behavior can reproduced if send 0 byte via Chrome webSocket connection, received frame is marked as compressed but payload contain only 0 byte. 
I think we need add test case and check spec https://tools.ietf.org/html/rfc7692 point 7.2, @normanmaurer WDYT ?", "@amizurov agree...", "> It seems that something is wrong with your codec handlers, would you please show use your codes about `ChannelIntializer` ?\r\n\r\n```\r\npackage websocketx;\r\n\r\nimport io.netty.channel.ChannelInitializer;\r\nimport io.netty.channel.ChannelPipeline;\r\nimport io.netty.channel.socket.SocketChannel;\r\nimport io.netty.handler.codec.http.HttpObjectAggregator;\r\nimport io.netty.handler.codec.http.HttpServerCodec;\r\nimport io.netty.handler.codec.http.HttpServerExpectContinueHandler;\r\nimport io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;\r\nimport io.netty.handler.codec.http.websocketx.extensions.compression.WebSocketServerCompressionHandler;\r\nimport io.netty.handler.ssl.SslContext;\r\n\r\n/**\r\n */\r\npublic class WebSocketServerInitializer extends ChannelInitializer<SocketChannel> {\r\n\r\n private static final String WEBSOCKET_PATH = \"/websocket\";\r\n\r\n private final SslContext sslCtx;\r\n\r\n public WebSocketServerInitializer(SslContext sslCtx) {\r\n this.sslCtx = sslCtx;\r\n }\r\n\r\n @Override\r\n public void initChannel(SocketChannel ch) {\r\n ChannelPipeline pipeline = ch.pipeline();\r\n if (sslCtx != null) {\r\n pipeline.addLast(sslCtx.newHandler(ch.alloc()));\r\n }\r\n pipeline.addLast(\"httpServerCodec\", new HttpServerCodec());\r\n pipeline.addLast(new HttpObjectAggregator(65536));\r\n pipeline.addLast(new WebSocketServerCompressionHandler());\r\n pipeline.addLast(new WebSocketServerProtocolHandler(WEBSOCKET_PATH, null, true));\r\n pipeline.addLast(new WebSocketFrameHandler());\r\n pipeline.addLast(new HttpServerExpectContinueHandler());\r\n pipeline.addLast(new HttpServerHandler());\r\n }\r\n}\r\n```", "Sorry for long wait, we incorrectly handle the section https://tools.ietf.org/html/rfc7692#section-7.2.3.6 for both situation encode/decode, I'll try to do pull request a soon as 
possible.", "Thanks a lot\n\n> Am 23.08.2019 um 07:47 schrieb Andrey Mizurov <notifications@github.com>:\n> \n> Sorry for long wait, we incorrectly handle the section https://tools.ietf.org/html/rfc7692#section-7.2.3.6 for both situation encode/decode, I'll try to do pull request a soon as possible.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n" ]
[ "final ", "maybe we can inline this into the condition where it is used?\r\n\r\n`if (readable && compositeDecompressedContent.numComponents() <= 0 && !EMPTY_DEFLATE_BLOCK.equals(msg.content())) {`", "Done for both encoder/decoder", "```\r\ndecoder.writeInbound(msg.content().retain());\r\n if (appendFrameTail(msg)) {\r\n decoder.writeInbound(FRAME_TAIL.duplicate());\r\n }\r\n```\r\nafter decompression msg.content() is not readable." ]
"2019-08-26T12:25:40Z"
[]
Sending an empty String like "" causes an error
### Expected behavior ### Actual behavior ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ### Netty version ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateDecoderTest.java", "codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateEncoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java index 746032dbbcb..223e2e65201 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateDecoder.java @@ -44,6 +44,10 @@ abstract class DeflateDecoder extends WebSocketExtensionDecoder { Unpooled.wrappedBuffer(new byte[] {0x00, 0x00, (byte) 0xff, (byte) 0xff})) .asReadOnly(); + static final ByteBuf EMPTY_DEFLATE_BLOCK = Unpooled.unreleasableBuffer( + Unpooled.wrappedBuffer(new byte[] { 0x00 })) + .asReadOnly(); + private final boolean noContext; private final WebSocketExtensionFilter extensionDecoderFilter; @@ -73,6 +77,35 @@ protected WebSocketExtensionFilter extensionDecoderFilter() { @Override protected void decode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object> out) throws Exception { + final ByteBuf decompressedContent = decompressContent(ctx, msg); + + final WebSocketFrame outMsg; + if (msg instanceof TextWebSocketFrame) { + outMsg = new TextWebSocketFrame(msg.isFinalFragment(), newRsv(msg), decompressedContent); + } else if (msg instanceof BinaryWebSocketFrame) { + outMsg = new BinaryWebSocketFrame(msg.isFinalFragment(), newRsv(msg), decompressedContent); + } else if (msg instanceof ContinuationWebSocketFrame) { + outMsg = new ContinuationWebSocketFrame(msg.isFinalFragment(), newRsv(msg), decompressedContent); + } else { + throw new CodecException("unexpected frame type: " + msg.getClass().getName()); + } + + out.add(outMsg); + } + + @Override + public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { + cleanup(); + super.handlerRemoved(ctx); + } + + @Override + public void channelInactive(ChannelHandlerContext ctx) throws Exception { + cleanup(); 
+ super.channelInactive(ctx); + } + + private ByteBuf decompressContent(ChannelHandlerContext ctx, WebSocketFrame msg) { if (decoder == null) { if (!(msg instanceof TextWebSocketFrame) && !(msg instanceof BinaryWebSocketFrame)) { throw new CodecException("unexpected initial frame type: " + msg.getClass().getName()); @@ -81,12 +114,14 @@ protected void decode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object } boolean readable = msg.content().isReadable(); + boolean emptyDeflateBlock = EMPTY_DEFLATE_BLOCK.equals(msg.content()); + decoder.writeInbound(msg.content().retain()); if (appendFrameTail(msg)) { decoder.writeInbound(FRAME_TAIL.duplicate()); } - CompositeByteBuf compositeUncompressedContent = ctx.alloc().compositeBuffer(); + CompositeByteBuf compositeDecompressedContent = ctx.alloc().compositeBuffer(); for (;;) { ByteBuf partUncompressedContent = decoder.readInbound(); if (partUncompressedContent == null) { @@ -96,12 +131,12 @@ protected void decode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object partUncompressedContent.release(); continue; } - compositeUncompressedContent.addComponent(true, partUncompressedContent); + compositeDecompressedContent.addComponent(true, partUncompressedContent); } // Correctly handle empty frames // See https://github.com/netty/netty/issues/4348 - if (readable && compositeUncompressedContent.numComponents() <= 0) { - compositeUncompressedContent.release(); + if (!emptyDeflateBlock && readable && compositeDecompressedContent.numComponents() <= 0) { + compositeDecompressedContent.release(); throw new CodecException("cannot read uncompressed buffer"); } @@ -109,30 +144,7 @@ protected void decode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object cleanup(); } - WebSocketFrame outMsg; - if (msg instanceof TextWebSocketFrame) { - outMsg = new TextWebSocketFrame(msg.isFinalFragment(), newRsv(msg), compositeUncompressedContent); - } else if (msg instanceof BinaryWebSocketFrame) { - outMsg = new 
BinaryWebSocketFrame(msg.isFinalFragment(), newRsv(msg), compositeUncompressedContent); - } else if (msg instanceof ContinuationWebSocketFrame) { - outMsg = new ContinuationWebSocketFrame(msg.isFinalFragment(), newRsv(msg), - compositeUncompressedContent); - } else { - throw new CodecException("unexpected frame type: " + msg.getClass().getName()); - } - out.add(outMsg); - } - - @Override - public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { - cleanup(); - super.handlerRemoved(ctx); - } - - @Override - public void channelInactive(ChannelHandlerContext ctx) throws Exception { - cleanup(); - super.channelInactive(ctx); + return compositeDecompressedContent; } private void cleanup() { diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java index 07d5ca34bc4..1203c498804 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/DeflateEncoder.java @@ -82,8 +82,39 @@ protected WebSocketExtensionFilter extensionEncoderFilter() { protected abstract boolean removeFrameTail(WebSocketFrame msg); @Override - protected void encode(ChannelHandlerContext ctx, WebSocketFrame msg, - List<Object> out) throws Exception { + protected void encode(ChannelHandlerContext ctx, WebSocketFrame msg, List<Object> out) throws Exception { + final ByteBuf compressedContent; + if (msg.content().isReadable()) { + compressedContent = compressContent(ctx, msg); + } else if (msg.isFinalFragment()) { + // Set empty DEFLATE block manually for unknown buffer size + // https://tools.ietf.org/html/rfc7692#section-7.2.3.6 + compressedContent = EMPTY_DEFLATE_BLOCK.duplicate(); + } else { + throw new CodecException("cannot compress content buffer"); + } + + final 
WebSocketFrame outMsg; + if (msg instanceof TextWebSocketFrame) { + outMsg = new TextWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); + } else if (msg instanceof BinaryWebSocketFrame) { + outMsg = new BinaryWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); + } else if (msg instanceof ContinuationWebSocketFrame) { + outMsg = new ContinuationWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); + } else { + throw new CodecException("unexpected frame type: " + msg.getClass().getName()); + } + + out.add(outMsg); + } + + @Override + public void handlerRemoved(ChannelHandlerContext ctx) throws Exception { + cleanup(); + super.handlerRemoved(ctx); + } + + private ByteBuf compressContent(ChannelHandlerContext ctx, WebSocketFrame msg) { if (encoder == null) { encoder = new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder( ZlibWrapper.NONE, compressionLevel, windowSize, 8)); @@ -103,6 +134,7 @@ protected void encode(ChannelHandlerContext ctx, WebSocketFrame msg, } fullCompressedContent.addComponent(true, partCompressedContent); } + if (fullCompressedContent.numComponents() <= 0) { fullCompressedContent.release(); throw new CodecException("cannot read compressed buffer"); @@ -120,23 +152,7 @@ protected void encode(ChannelHandlerContext ctx, WebSocketFrame msg, compressedContent = fullCompressedContent; } - WebSocketFrame outMsg; - if (msg instanceof TextWebSocketFrame) { - outMsg = new TextWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); - } else if (msg instanceof BinaryWebSocketFrame) { - outMsg = new BinaryWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); - } else if (msg instanceof ContinuationWebSocketFrame) { - outMsg = new ContinuationWebSocketFrame(msg.isFinalFragment(), rsv(msg), compressedContent); - } else { - throw new CodecException("unexpected frame type: " + msg.getClass().getName()); - } - out.add(outMsg); - } - - @Override - public void handlerRemoved(ChannelHandlerContext 
ctx) throws Exception { - cleanup(); - super.handlerRemoved(ctx); + return compressedContent; } private void cleanup() {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateDecoderTest.java index fc872b1ecc4..fcd290ab9db 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateDecoderTest.java @@ -33,6 +33,7 @@ import java.util.Random; import static io.netty.handler.codec.http.websocketx.extensions.WebSocketExtensionFilter.*; +import static io.netty.handler.codec.http.websocketx.extensions.compression.DeflateDecoder.*; import static io.netty.util.CharsetUtil.*; import static org.junit.Assert.*; @@ -310,4 +311,21 @@ public boolean mustSkip(WebSocketFrame frame) { } } + @Test + public void testEmptyFrameDecompression() { + EmbeddedChannel decoderChannel = new EmbeddedChannel(new PerMessageDeflateDecoder(false)); + + TextWebSocketFrame emptyDeflateBlockFrame = new TextWebSocketFrame(true, WebSocketExtension.RSV1, + EMPTY_DEFLATE_BLOCK); + + assertTrue(decoderChannel.writeInbound(emptyDeflateBlockFrame)); + TextWebSocketFrame emptyBufferFrame = decoderChannel.readInbound(); + + assertFalse(emptyBufferFrame.content().isReadable()); + + // Composite empty buffer + assertTrue(emptyBufferFrame.release()); + assertFalse(decoderChannel.finish()); + } + } diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateEncoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateEncoderTest.java index 9e986511c57..1f8b47744a7 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateEncoderTest.java +++ 
b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/extensions/compression/PerMessageDeflateEncoderTest.java @@ -34,6 +34,7 @@ import java.util.Random; import static io.netty.handler.codec.http.websocketx.extensions.WebSocketExtensionFilter.*; +import static io.netty.handler.codec.http.websocketx.extensions.compression.DeflateDecoder.*; import static io.netty.util.CharsetUtil.*; import static org.junit.Assert.*; @@ -102,7 +103,7 @@ public void testAlreadyCompressedFrame() { } @Test - public void testFramementedFrame() { + public void testFragmentedFrame() { EmbeddedChannel encoderChannel = new EmbeddedChannel(new PerMessageDeflateEncoder(9, 15, false, NEVER_SKIP)); EmbeddedChannel decoderChannel = new EmbeddedChannel( @@ -272,4 +273,36 @@ public boolean mustSkip(WebSocketFrame frame) { } } + @Test + public void testEmptyFrameCompression() { + EmbeddedChannel encoderChannel = new EmbeddedChannel(new PerMessageDeflateEncoder(9, 15, false)); + + TextWebSocketFrame emptyFrame = new TextWebSocketFrame(""); + + assertTrue(encoderChannel.writeOutbound(emptyFrame)); + TextWebSocketFrame emptyDeflateFrame = encoderChannel.readOutbound(); + + assertEquals(WebSocketExtension.RSV1, emptyDeflateFrame.rsv()); + assertTrue(ByteBufUtil.equals(EMPTY_DEFLATE_BLOCK, emptyDeflateFrame.content())); + // Unreleasable buffer + assertFalse(emptyDeflateFrame.release()); + + assertFalse(encoderChannel.finish()); + } + + @Test(expected = EncoderException.class) + public void testCodecExceptionForNotFinEmptyFrame() { + EmbeddedChannel encoderChannel = new EmbeddedChannel(new PerMessageDeflateEncoder(9, 15, false)); + + TextWebSocketFrame emptyNotFinFrame = new TextWebSocketFrame(false, 0, ""); + + try { + encoderChannel.writeOutbound(emptyNotFinFrame); + } finally { + // EmptyByteBuf buffer + assertFalse(emptyNotFinFrame.release()); + assertFalse(encoderChannel.finish()); + } + } + }
train
val
"2019-08-26T08:54:45"
"2019-08-05T11:14:37Z"
robot518
val
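The websocket record above hinges on RFC 7692 §7.2.3.6: the sender strips the trailing 0x00 0x00 0xff 0xff of a sync-flushed DEFLATE stream, so an empty message travels as the single byte 0x00 (the `EMPTY_DEFLATE_BLOCK` constant in the patch), and the receiver re-appends the tail (Netty's `FRAME_TAIL`) before inflating. A Python/zlib sketch of that round trip — our own illustration of the mechanism, not Netty code:

```python
import zlib

# permessage-deflate uses raw DEFLATE, i.e. wbits = -15.
comp = zlib.compressobj(9, zlib.DEFLATED, -15)
body = comp.compress(b"") + comp.flush(zlib.Z_SYNC_FLUSH)

# The sender strips the 4-byte sync-flush tail before framing...
assert body.endswith(b"\x00\x00\xff\xff")
on_wire = body[:-4]  # a single 0x00 byte: an empty, non-final stored block

# ...and the receiver appends it again before inflating, yielding b"".
decomp = zlib.decompressobj(-15)
payload = decomp.decompress(on_wire + b"\x00\x00\xff\xff")
print(on_wire, payload)
```

Before the fix in the record's patch, Netty's DeflateDecoder treated "readable input but zero decompressed bytes" as an error, which is exactly what the single 0x00 wire byte produces.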
netty/netty/9553_9557
netty/netty
netty/netty/9553
netty/netty/9557
[ "keyword_pr_to_issue" ]
454cc80141172ec7c6fbc98b8e249ca7645fa27e
bcb0d0224895ea40d64d9de12263c6fcc85cde27
[ "@wilkinsona sounds like a bug... are you interested in providing a PR ?", "@normanmaurer Hi, problem that we handle only first `accept-encoding` header value, also we need to consider the q-value for each, or combine to one header-value by spec https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2", "@amizurov @wilkinsona I more then happy to review PRs ... Currently I am a bit busy so I will put this on my backlog if no-one else will fix it in the meantime ", "@normanmaurer No problem if @wilkinsona don't mind I will do a pull request soon.", "@amizurov If you have the time then please go ahead. If not, I should have some cycles later this week once we've released Spring Boot 2.2 M6." ]
[ "nit: maybe we should make this:\r\n\r\n```java\r\nswitch (acceptedEncodingHeaders.size()) {\r\n case 0:\r\n acceptedEncoding = HttpContentDecoder.IDENTITY;\r\n break;\r\n case 1: \r\n acceptedEncoding = acceptEncodingHeaders.get(0);\r\n break;\r\n default:\r\n // Multiple message-header fields https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2\r\n acceptedEncoding = StringUtil.join(\",\", acceptEncodingHeaders.iterator());\r\n break;\r\n\r\n}\r\n```", "assert return value", "assert return value", "maybe just make this `Iterable` ?", "I wonder if we should optimise this a but to only build the StringBuilder etc when there are at least 2 elements in ?", "This could also be simplified to just `new StringBuilder(elements.next())`.", "Better to just return `builder`, no need to copy to a `String` yet", "It's a shame that `HttpHeaders` doesn't have `get`/`getAll` methods which return `CharSequence`s rather than `String`s. These would be quite easy to add I think and would help cut down on copying and garbage.\r\n\r\n(this is a passing comment and not directly related to the PR change!)", "Don't mind, done", "Done", "Done", "Done", "All done :)", "Done" ]
"2019-09-09T14:51:10Z"
[]
HttpContentEncoder does not handle multiple Accept-Encoding headers correctly
### Expected behavior When a request contains multiple `Accept-Encoding` headers, their values will be treated as if they had been sent as the comma-separated values of a single `Accept-Encoding` header. ### Actual behavior Only the first `Accept-Encoding` header is considered and all subsequent `Accept-Encoding` headers are ignored. ### Steps to reproduce Configure an HTTP server with `HttpContentCompressor` in its pipeline and send a request with multiple `Accept-Encoding` headers. Using curl that would look something like this: ``` curl -H "Accept-Encoding: unknown" -H "Accept-Encoding: gzip" localhost:8080 ``` Observe that the response is not gzipped. ### Minimal yet complete reproducer code (or URL to code) ```java package com.example.demo; import static io.netty.handler.codec.http.HttpHeaderNames.CONNECTION; import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_LENGTH; import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_TYPE; import static io.netty.handler.codec.http.HttpHeaderValues.CLOSE; import static io.netty.handler.codec.http.HttpHeaderValues.KEEP_ALIVE; import static io.netty.handler.codec.http.HttpHeaderValues.TEXT_PLAIN; import static io.netty.handler.codec.http.HttpResponseStatus.OK; import io.netty.bootstrap.ServerBootstrap; import io.netty.buffer.Unpooled; import io.netty.channel.Channel; import io.netty.channel.ChannelFuture; import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; import io.netty.channel.ChannelInitializer; import io.netty.channel.ChannelOption; import io.netty.channel.ChannelPipeline; import io.netty.channel.EventLoopGroup; import io.netty.channel.SimpleChannelInboundHandler; import io.netty.channel.nio.NioEventLoopGroup; import io.netty.channel.socket.SocketChannel; import io.netty.channel.socket.nio.NioServerSocketChannel; import io.netty.handler.codec.http.DefaultFullHttpResponse; import io.netty.handler.codec.http.FullHttpResponse; import 
io.netty.handler.codec.http.HttpContentCompressor; import io.netty.handler.codec.http.HttpObject; import io.netty.handler.codec.http.HttpRequest; import io.netty.handler.codec.http.HttpServerCodec; import io.netty.handler.codec.http.HttpServerExpectContinueHandler; import io.netty.handler.codec.http.HttpUtil; public final class HttpHelloWorldServer { public static void main(String[] args) throws Exception { EventLoopGroup bossGroup = new NioEventLoopGroup(1); EventLoopGroup workerGroup = new NioEventLoopGroup(); try { ServerBootstrap b = new ServerBootstrap(); b.option(ChannelOption.SO_BACKLOG, 1024); b.group(bossGroup, workerGroup) .channel(NioServerSocketChannel.class) .childHandler(new HttpHelloWorldServerInitializer()); Channel ch = b.bind(8080).sync().channel(); ch.closeFuture().sync(); } finally { bossGroup.shutdownGracefully(); workerGroup.shutdownGracefully(); } } static final class HttpHelloWorldServerInitializer extends ChannelInitializer<SocketChannel> { @Override public void initChannel(SocketChannel ch) { ChannelPipeline p = ch.pipeline(); p.addLast(new HttpServerCodec()); p.addLast(new HttpContentCompressor()); p.addLast(new HttpServerExpectContinueHandler()); p.addLast(new HttpHelloWorldServerHandler()); } } static final class HttpHelloWorldServerHandler extends SimpleChannelInboundHandler<HttpObject> { private static final byte[] CONTENT = { 'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd' }; @Override public void channelReadComplete(ChannelHandlerContext context) { context.flush(); } @Override public void channelRead0(ChannelHandlerContext context, HttpObject message) { if (message instanceof HttpRequest) { HttpRequest req = (HttpRequest) message; boolean keepAlive = HttpUtil.isKeepAlive(req); FullHttpResponse response = new DefaultFullHttpResponse(req.protocolVersion(), OK, Unpooled.wrappedBuffer(CONTENT)); response.headers().set(CONTENT_TYPE, TEXT_PLAIN).setInt(CONTENT_LENGTH, response.content().readableBytes()); if (keepAlive) { if 
(!req.protocolVersion().isKeepAliveDefault()) { response.headers().set(CONNECTION, KEEP_ALIVE); } } else { // Tell the client we're going to close the connection. response.headers().set(CONNECTION, CLOSE); } ChannelFuture f = context.write(response); if (!keepAlive) { f.addListener(ChannelFutureListener.CLOSE); } } } @Override public void exceptionCaught(ChannelHandlerContext context, Throwable ex) { ex.printStackTrace(); context.close(); } } } ``` ### Netty version 4.1.39 ### JVM version (e.g. `java -version`) ``` openjdk version "1.8.0_202" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_202-b08) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.202-b08, mixed mode) ``` ### OS version (e.g. `uname -a`) Darwin Andys-MacBook-Pro.local 17.7.0 Darwin Kernel Version 17.7.0: Sun Jun 2 20:31:42 PDT 2019; root:xnu-4570.71.46~1/RELEASE_X86_64 x86_64
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java", "common/src/main/java/io/netty/util/internal/StringUtil.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java", "common/src/main/java/io/netty/util/internal/StringUtil.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpContentCompressorTest.java", "common/src/test/java/io/netty/util/internal/StringUtilTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java index 7be3b8b92e9..871caef5be2 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java @@ -22,11 +22,14 @@ import io.netty.handler.codec.DecoderResult; import io.netty.handler.codec.MessageToMessageCodec; import io.netty.util.ReferenceCountUtil; +import io.netty.util.internal.StringUtil; import java.util.ArrayDeque; import java.util.List; import java.util.Queue; +import static io.netty.handler.codec.http.HttpHeaderNames.*; + /** * Encodes the content of the outbound {@link HttpResponse} and {@link HttpContent}. * The original content is replaced with the new content encoded by the @@ -71,21 +74,30 @@ public boolean acceptOutboundMessage(Object msg) throws Exception { } @Override - protected void decode(ChannelHandlerContext ctx, HttpRequest msg, List<Object> out) - throws Exception { - CharSequence acceptedEncoding = msg.headers().get(HttpHeaderNames.ACCEPT_ENCODING); - if (acceptedEncoding == null) { - acceptedEncoding = HttpContentDecoder.IDENTITY; + protected void decode(ChannelHandlerContext ctx, HttpRequest msg, List<Object> out) throws Exception { + CharSequence acceptEncoding; + List<String> acceptEncodingHeaders = msg.headers().getAll(ACCEPT_ENCODING); + switch (acceptEncodingHeaders.size()) { + case 0: + acceptEncoding = HttpContentDecoder.IDENTITY; + break; + case 1: + acceptEncoding = acceptEncodingHeaders.get(0); + break; + default: + // Multiple message-header fields https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2 + acceptEncoding = StringUtil.join(",", acceptEncodingHeaders); + break; } HttpMethod method = msg.method(); if (HttpMethod.HEAD.equals(method)) { - acceptedEncoding = ZERO_LENGTH_HEAD; + acceptEncoding = ZERO_LENGTH_HEAD; } else if 
(HttpMethod.CONNECT.equals(method)) { - acceptedEncoding = ZERO_LENGTH_CONNECT; + acceptEncoding = ZERO_LENGTH_CONNECT; } - acceptEncodingQueue.add(acceptedEncoding); + acceptEncodingQueue.add(acceptEncoding); out.add(ReferenceCountUtil.retain(msg)); } diff --git a/common/src/main/java/io/netty/util/internal/StringUtil.java b/common/src/main/java/io/netty/util/internal/StringUtil.java index e81a024ddc1..2b107d71b21 100644 --- a/common/src/main/java/io/netty/util/internal/StringUtil.java +++ b/common/src/main/java/io/netty/util/internal/StringUtil.java @@ -17,6 +17,7 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Iterator; import java.util.List; import static io.netty.util.internal.ObjectUtil.*; @@ -597,6 +598,36 @@ public static CharSequence trimOws(CharSequence value) { return start == 0 && end == length - 1 ? value : value.subSequence(start, end + 1); } + /** + * Returns a char sequence that contains all {@code elements} joined by a given separator. + * + * @param separator for each element + * @param elements to join together + * + * @return a char sequence joined by a given separator. + */ + public static CharSequence join(CharSequence separator, Iterable<? extends CharSequence> elements) { + ObjectUtil.checkNotNull(separator, "separator"); + ObjectUtil.checkNotNull(elements, "elements"); + + Iterator<? extends CharSequence> iterator = elements.iterator(); + if (!iterator.hasNext()) { + return EMPTY_STRING; + } + + CharSequence firstElement = iterator.next(); + if (!iterator.hasNext()) { + return firstElement; + } + + StringBuilder builder = new StringBuilder(firstElement); + do { + builder.append(separator).append(iterator.next()); + } while (iterator.hasNext()); + + return builder; + } + /** * @return {@code length} if no OWS is found. */ @@ -622,4 +653,5 @@ private static int indexOfLastNonOwsChar(CharSequence value, int start, int leng private static boolean isOws(char c) { return c == SPACE || c == TAB; } + }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentCompressorTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentCompressorTest.java index a31bc5beb89..d0676fd907a 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentCompressorTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentCompressorTest.java @@ -500,6 +500,39 @@ public void testCompressThresholdNotCompress() throws Exception { assertTrue(ch.finishAndReleaseAll()); } + @Test + public void testMultipleAcceptEncodingHeaders() { + FullHttpRequest request = newRequest(); + request.headers().set(HttpHeaderNames.ACCEPT_ENCODING, "unknown; q=1.0") + .add(HttpHeaderNames.ACCEPT_ENCODING, "gzip; q=0.5") + .add(HttpHeaderNames.ACCEPT_ENCODING, "deflate; q=0"); + + EmbeddedChannel ch = new EmbeddedChannel(new HttpContentCompressor()); + + assertTrue(ch.writeInbound(request)); + + FullHttpResponse res = new DefaultFullHttpResponse( + HttpVersion.HTTP_1_1, HttpResponseStatus.OK, + Unpooled.copiedBuffer("Gzip Win", CharsetUtil.US_ASCII)); + assertTrue(ch.writeOutbound(res)); + + assertEncodedResponse(ch); + HttpContent c = ch.readOutbound(); + assertThat(ByteBufUtil.hexDump(c.content()), is("1f8b080000000000000072afca2c5008cfcc03000000ffff")); + c.release(); + + c = ch.readOutbound(); + assertThat(ByteBufUtil.hexDump(c.content()), is("03001f2ebf0f08000000")); + c.release(); + + LastHttpContent last = ch.readOutbound(); + assertThat(last.content().readableBytes(), is(0)); + last.release(); + + assertThat(ch.readOutbound(), is(nullValue())); + assertTrue(ch.finishAndReleaseAll()); + } + private static FullHttpRequest newRequest() { FullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/"); req.headers().set(HttpHeaderNames.ACCEPT_ENCODING, "gzip"); diff --git a/common/src/test/java/io/netty/util/internal/StringUtilTest.java b/common/src/test/java/io/netty/util/internal/StringUtilTest.java 
index 0d27a74fea3..7dd06e7bac1 100644 --- a/common/src/test/java/io/netty/util/internal/StringUtilTest.java +++ b/common/src/test/java/io/netty/util/internal/StringUtilTest.java @@ -534,4 +534,19 @@ public void trimOws() { assertEquals("", StringUtil.trimOws("\t ").toString()); assertEquals("a b", StringUtil.trimOws("\ta b \t").toString()); } + + @Test + public void testJoin() { + assertEquals("", + StringUtil.join(",", Collections.<CharSequence>emptyList()).toString()); + assertEquals("a", + StringUtil.join(",", Collections.singletonList("a")).toString()); + assertEquals("a,b", + StringUtil.join(",", Arrays.asList("a", "b")).toString()); + assertEquals("a,b,c", + StringUtil.join(",", Arrays.asList("a", "b", "c")).toString()); + assertEquals("a,b,c,null,d", + StringUtil.join(",", Arrays.asList("a", "b", "c", null, "d")).toString()); + } + }
train
val
"2019-09-09T13:58:07"
"2019-09-09T08:31:42Z"
wilkinsona
val
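The Accept-Encoding record above collapses repeated `Accept-Encoding` headers into a single comma-separated value, as RFC 2616 §4.2 allows for repeatable message-header fields. A minimal standalone sketch of the join helper that the gold patch adds to `StringUtil` — class and method names here are illustrative, not Netty's actual API beyond the signature shown in the patch:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class JoinExample {

    // Mirrors the StringUtil.join added in the patch: concatenate elements with a
    // separator, short-circuiting the 0- and 1-element cases to avoid allocating
    // a StringBuilder when nothing actually needs joining.
    static CharSequence join(CharSequence separator, Iterable<? extends CharSequence> elements) {
        Iterator<? extends CharSequence> it = elements.iterator();
        if (!it.hasNext()) {
            return "";
        }
        CharSequence first = it.next();
        if (!it.hasNext()) {
            return first;
        }
        StringBuilder sb = new StringBuilder(first);
        do {
            sb.append(separator).append(it.next());
        } while (it.hasNext());
        return sb;
    }

    public static void main(String[] args) {
        // Three Accept-Encoding header values become one comma-separated value,
        // which the q-value parsing in HttpContentEncoder can then evaluate.
        List<String> headers = Arrays.asList("unknown; q=1.0", "gzip; q=0.5", "deflate; q=0");
        System.out.println(join(",", headers));
        // → unknown; q=1.0,gzip; q=0.5,deflate; q=0
    }
}
```

Returning the `StringBuilder` itself as a `CharSequence` (rather than calling `toString()`) matches the review feedback in the record: the caller decides whether a `String` copy is ever needed.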
netty/netty/8278_9616
netty/netty
netty/netty/8278
netty/netty/9616
[ "keyword_issue_to_pr" ]
b39ffed042844adecaf0a4fc4e9a2f53edaa111d
170e4deee6f78a564727dee18af7dd7ce322a35f
[ "I am not aware of anything like this @vkostyukov ", "@vkostyukov did you find out anything ?", "@normanmaurer Nope - we haven't really made any progress there yet. We've only seen this problem in our tests and were keeping an eye on complains coming from out users. Nobody really complained about slower shutdown yet thus we haven't really invested into it.", "k ok :)", "@vkostyukov would be interesting to know if #9616 resolves this (assuming it still happens for you).", "I think this should be fixed by #9616 . Closing. " ]
[ "do we still need this sleep?", "200ms seems a bit low, I see this failing on a loaded CI server.", "@johnou I think so yes since it's how the graceful shutdown / quiet period is currently implemented. That said polling with `sleep` for this does seem unnecessarily cumbersome (as hinted by the TODO comment above)... probably something for a follow-on improvement.", "Maybe it could be bumped a little, but 200ms seems to me like it should be long enough. The timeouts are intentionally low (< 1 sec) since that's what's currently needed to expose the underlying problem. As explained in the description the bug is largely \"masked\" by the 1sec default wait/select timeout." ]
"2019-09-27T08:52:20Z"
[ "discussion" ]
Native epoll channels/sockets take longer to shutdown/close?
This is neither a bug report nor a feature request. I figured I'd use GitHub issues to solicit people's experience and knowledge about Netty's native epoll transports. We've been running Finagle with native epoll (on Linux) for quite a while internally and finally decided to switch our OSS version to it. To our surprise, switching over to native epoll caused our shutdown tests to fail. Turns out, our previous graceful timeout (on the order of a couple of seconds) didn't do it for the native epoll socket/channel (shutting down an HTTP server running with epoll=true). We had to bump our timeouts significantly to ensure the test passes: think of going from 2s to 30s. Before we start digging into why closing epoll channels (server channels, if that matters) takes so much longer, I decided to ask around if somebody has seen this before. Any pointers would be totally appreciated! I will try to update this thread should we learn more.
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[ "common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java" ]
[ "testsuite/src/main/java/io/netty/testsuite/transport/AbstractSingleThreadEventLoopTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/DefaultEventLoopTest.java", "testsuite/src/main/java/io/netty/testsuite/transport/NioEventLoopTest.java", "transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java", "transport-native-kqueue/src/test/java/io/netty/channel/kqueue/KQueueEventLoopTest.java", "transport/src/test/java/io/netty/channel/AbstractEventLoopTest.java" ]
diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java index 4d77da11a73..d2b9707f393 100644 --- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java +++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java @@ -587,7 +587,7 @@ protected void cleanup() { } protected void wakeup(boolean inEventLoop) { - if (!inEventLoop || state == ST_SHUTTING_DOWN) { + if (!inEventLoop) { // Use offer as we actually only need this to unblock the thread and if offer fails we do not care as there // is already something in the queue. taskQueue.offer(WAKEUP_TASK); @@ -726,7 +726,10 @@ public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit uni } if (wakeup) { - wakeup(inEventLoop); + taskQueue.offer(WAKEUP_TASK); + if (!addTaskWakesUp) { + wakeup(inEventLoop); + } } return terminationFuture(); @@ -778,7 +781,10 @@ public void shutdown() { } if (wakeup) { - wakeup(inEventLoop); + taskQueue.offer(WAKEUP_TASK); + if (!addTaskWakesUp) { + wakeup(inEventLoop); + } } } @@ -827,7 +833,7 @@ protected boolean confirmShutdown() { if (gracefulShutdownQuietPeriod == 0) { return true; } - wakeup(true); + taskQueue.offer(WAKEUP_TASK); return false; } @@ -840,7 +846,7 @@ protected boolean confirmShutdown() { if (nanoTime - lastExecutionTime <= gracefulShutdownQuietPeriod) { // Check if any tasks were added to the queue every 100ms. // TODO: Change the behavior of takeTask() so that it returns on timeout. - wakeup(true); + taskQueue.offer(WAKEUP_TASK); try { Thread.sleep(100); } catch (InterruptedException e) {
diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/AbstractSingleThreadEventLoopTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/AbstractSingleThreadEventLoopTest.java index e4306b3dc8e..fd58a5255bc 100644 --- a/testsuite/src/main/java/io/netty/testsuite/transport/AbstractSingleThreadEventLoopTest.java +++ b/testsuite/src/main/java/io/netty/testsuite/transport/AbstractSingleThreadEventLoopTest.java @@ -15,14 +15,29 @@ */ package io.netty.testsuite.transport; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; + +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.TimeUnit; + import org.junit.Test; +import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.Channel; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelInboundHandlerAdapter; +import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; import io.netty.channel.SingleThreadEventLoop; -import io.netty.channel.socket.ServerSocketChannel; - -import static org.junit.Assert.*; +import io.netty.channel.local.LocalAddress; +import io.netty.channel.local.LocalServerChannel; +import io.netty.util.concurrent.EventExecutor; +import io.netty.util.concurrent.Future; public abstract class AbstractSingleThreadEventLoopTest { @@ -35,19 +50,108 @@ public void testChannelsRegistered() { final Channel ch1 = newChannel(); final Channel ch2 = newChannel(); - assertEquals(0, loop.registeredChannels()); + int rc = loop.registeredChannels(); + boolean channelCountSupported = rc != -1; + + if (channelCountSupported) { + assertEquals(0, loop.registeredChannels()); + } assertTrue(loop.register(ch1).syncUninterruptibly().isSuccess()); assertTrue(loop.register(ch2).syncUninterruptibly().isSuccess()); - assertEquals(2, 
loop.registeredChannels()); + if (channelCountSupported) { + assertEquals(2, loop.registeredChannels()); + } assertTrue(ch1.deregister().syncUninterruptibly().isSuccess()); - assertEquals(1, loop.registeredChannels()); + if (channelCountSupported) { + assertEquals(1, loop.registeredChannels()); + } } finally { group.shutdownGracefully(); } } + @Test + @SuppressWarnings("deprecation") + public void shutdownBeforeStart() throws Exception { + EventLoopGroup group = newEventLoopGroup(); + assertFalse(group.awaitTermination(2, TimeUnit.MILLISECONDS)); + group.shutdown(); + assertTrue(group.awaitTermination(200, TimeUnit.MILLISECONDS)); + } + + @Test + public void shutdownGracefullyZeroQuietBeforeStart() throws Exception { + EventLoopGroup group = newEventLoopGroup(); + assertTrue(group.shutdownGracefully(0L, 2L, TimeUnit.SECONDS).await(200L)); + } + + // Copied from AbstractEventLoopTest + @Test(timeout = 5000) + public void testShutdownGracefullyNoQuietPeriod() throws Exception { + EventLoopGroup loop = newEventLoopGroup(); + ServerBootstrap b = new ServerBootstrap(); + b.group(loop) + .channel(serverChannelClass()) + .childHandler(new ChannelInboundHandlerAdapter()); + + // Not close the Channel to ensure the EventLoop is still shutdown in time. + ChannelFuture cf = serverChannelClass() == LocalServerChannel.class + ? 
b.bind(new LocalAddress("local")) : b.bind(0); + cf.sync().channel(); + + Future<?> f = loop.shutdownGracefully(0, 1, TimeUnit.MINUTES); + assertTrue(loop.awaitTermination(600, TimeUnit.MILLISECONDS)); + assertTrue(f.syncUninterruptibly().isSuccess()); + assertTrue(loop.isShutdown()); + assertTrue(loop.isTerminated()); + } + + @Test + public void shutdownGracefullyBeforeStart() throws Exception { + EventLoopGroup group = newEventLoopGroup(); + assertTrue(group.shutdownGracefully(200L, 1000L, TimeUnit.MILLISECONDS).await(500L)); + } + + @Test + public void gracefulShutdownAfterStart() throws Exception { + EventLoop loop = newEventLoopGroup().next(); + final CountDownLatch latch = new CountDownLatch(1); + loop.execute(new Runnable() { + @Override + public void run() { + latch.countDown(); + } + }); + + // Wait for the event loop thread to start. + latch.await(); + + // Request the event loop thread to stop. + loop.shutdownGracefully(200L, 3000L, TimeUnit.MILLISECONDS); + + // Wait until the event loop is terminated. + assertTrue(loop.awaitTermination(500L, TimeUnit.MILLISECONDS)); + + assertRejection(loop); + } + + private static final Runnable NOOP = new Runnable() { + @Override + public void run() { } + }; + + private static void assertRejection(EventExecutor loop) { + try { + loop.execute(NOOP); + fail("A task must be rejected after shutdown() is called."); + } catch (RejectedExecutionException e) { + // Expected + } + } + protected abstract EventLoopGroup newEventLoopGroup(); - protected abstract ServerSocketChannel newChannel(); + protected abstract Channel newChannel(); + protected abstract Class<? 
extends ServerChannel> serverChannelClass(); } diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/DefaultEventLoopTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/DefaultEventLoopTest.java new file mode 100644 index 00000000000..cb13b806241 --- /dev/null +++ b/testsuite/src/main/java/io/netty/testsuite/transport/DefaultEventLoopTest.java @@ -0,0 +1,41 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.testsuite.transport; + +import io.netty.channel.Channel; +import io.netty.channel.DefaultEventLoopGroup; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; +import io.netty.channel.local.LocalChannel; +import io.netty.channel.local.LocalServerChannel; + +public class DefaultEventLoopTest extends AbstractSingleThreadEventLoopTest { + + @Override + protected EventLoopGroup newEventLoopGroup() { + return new DefaultEventLoopGroup(); + } + + @Override + protected Channel newChannel() { + return new LocalChannel(); + } + + @Override + protected Class<? 
extends ServerChannel> serverChannelClass() { + return LocalServerChannel.class; + } +} diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/NioEventLoopTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/NioEventLoopTest.java new file mode 100644 index 00000000000..e4bc928907d --- /dev/null +++ b/testsuite/src/main/java/io/netty/testsuite/transport/NioEventLoopTest.java @@ -0,0 +1,41 @@ +/* + * Copyright 2019 The Netty Project + * + * The Netty Project licenses this file to you under the Apache License, + * version 2.0 (the "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package io.netty.testsuite.transport; + +import io.netty.channel.Channel; +import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; +import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.socket.nio.NioServerSocketChannel; +import io.netty.channel.socket.nio.NioSocketChannel; + +public class NioEventLoopTest extends AbstractSingleThreadEventLoopTest { + + @Override + protected EventLoopGroup newEventLoopGroup() { + return new NioEventLoopGroup(); + } + + @Override + protected Channel newChannel() { + return new NioSocketChannel(); + } + + @Override + protected Class<? 
extends ServerChannel> serverChannelClass() { + return NioServerSocketChannel.class; + } +} diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java index 0e057eb21a1..c6bc431d702 100644 --- a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java +++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollEventLoopTest.java @@ -18,6 +18,7 @@ import io.netty.channel.DefaultSelectStrategyFactory; import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; import io.netty.channel.socket.ServerSocketChannel; import io.netty.channel.unix.FileDescriptor; import io.netty.testsuite.transport.AbstractSingleThreadEventLoopTest; @@ -49,6 +50,11 @@ protected ServerSocketChannel newChannel() { return new EpollServerSocketChannel(); } + @Override + protected Class<? extends ServerChannel> serverChannelClass() { + return EpollServerSocketChannel.class; + } + @Test public void testScheduleBigDelayNotOverflow() { final AtomicReference<Throwable> capture = new AtomicReference<Throwable>(); diff --git a/transport-native-kqueue/src/test/java/io/netty/channel/kqueue/KQueueEventLoopTest.java b/transport-native-kqueue/src/test/java/io/netty/channel/kqueue/KQueueEventLoopTest.java index ceda867deaf..55d2e162a87 100644 --- a/transport-native-kqueue/src/test/java/io/netty/channel/kqueue/KQueueEventLoopTest.java +++ b/transport-native-kqueue/src/test/java/io/netty/channel/kqueue/KQueueEventLoopTest.java @@ -17,6 +17,7 @@ import io.netty.channel.EventLoop; import io.netty.channel.EventLoopGroup; +import io.netty.channel.ServerChannel; import io.netty.channel.socket.ServerSocketChannel; import io.netty.testsuite.transport.AbstractSingleThreadEventLoopTest; import io.netty.util.concurrent.Future; @@ -39,6 +40,11 @@ protected ServerSocketChannel newChannel() { return new 
KQueueServerSocketChannel(); } + @Override + protected Class<? extends ServerChannel> serverChannelClass() { + return KQueueServerSocketChannel.class; + } + @Test public void testScheduleBigDelayNotOverflow() { EventLoopGroup group = new KQueueEventLoopGroup(1); diff --git a/transport/src/test/java/io/netty/channel/AbstractEventLoopTest.java b/transport/src/test/java/io/netty/channel/AbstractEventLoopTest.java index b4d29a67b66..df7fe13f869 100644 --- a/transport/src/test/java/io/netty/channel/AbstractEventLoopTest.java +++ b/transport/src/test/java/io/netty/channel/AbstractEventLoopTest.java @@ -75,7 +75,7 @@ public void testShutdownGracefullyNoQuietPeriod() throws Exception { b.bind(0).sync().channel(); Future<?> f = loop.shutdownGracefully(0, 1, TimeUnit.MINUTES); - assertTrue(loop.awaitTermination(2, TimeUnit.SECONDS)); + assertTrue(loop.awaitTermination(600, TimeUnit.MILLISECONDS)); assertTrue(f.syncUninterruptibly().isSuccess()); assertTrue(loop.isShutdown()); assertTrue(loop.isTerminated());
train
val
"2019-09-27T09:59:25"
"2018-09-10T18:33:30Z"
vkostyukov
val
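The shutdown-latency record above fixes the slow epoll shutdown by replacing transport-specific `wakeup(...)` calls with `taskQueue.offer(WAKEUP_TASK)`, so a thread parked on the task queue is unblocked directly instead of waiting out a long select/poll timeout. A rough standalone sketch of that pattern using plain `java.util.concurrent` — all names here are hypothetical stand-ins for Netty's internals:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WakeupSketch {

    // Hypothetical no-op marker, analogous in spirit to SingleThreadEventExecutor's
    // WAKEUP_TASK: it carries no work and exists only to unblock take().
    static final Runnable WAKEUP_TASK = new Runnable() {
        @Override
        public void run() { }
    };

    // Start a thread blocked on the queue, then wake it by offering the marker.
    // Returns true if the thread observed the marker and exited within the timeout.
    static boolean offerWakesBlockedThread(long timeoutMillis) throws InterruptedException {
        final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<Runnable>();
        Thread loop = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    taskQueue.take(); // parked here until something is offered
                    // a real event loop would re-check its shutdown state now
                } catch (InterruptedException ignored) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        loop.start();
        // Shutdown path: offer the marker so the parked thread re-checks state
        // promptly, rather than sleeping out a multi-second wait timeout.
        taskQueue.offer(WAKEUP_TASK);
        loop.join(timeoutMillis);
        return !loop.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(offerWakesBlockedThread(1000) ? "woke up" : "still blocked");
    }
}
```

Offering into the queue is also naturally idempotent for this purpose: if a task is already queued, the loop is about to wake anyway, which is why the patch can drop the extra `state == ST_SHUTTING_DOWN` special case.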
netty/netty/9634_9647
netty/netty
netty/netty/9634
netty/netty/9647
[ "keyword_pr_to_issue" ]
bd8cea644a07890f5bada18ddff0a849b58cd861
2e5dd288008d4e674f53beaf8d323595813062fb
[ "@dziemba could you also try running it with `-Dio.netty.leakDetection.level=paranoid`?", "@normanmaurer I think the problem might be here: https://github.com/netty/netty/blob/68673b652e422df6db462afb8c8f3ce9413c5c90/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java#L1219\r\n\r\nUnless I'm missing something, it doesn't look like there's any release corresponding to this retain in the later callbacks. Also it would in any case be better to replace this line with just `msg = null;` - no need to retain if releasing immediately after.\r\n\r\nSomething like this, though maybe with some more try/finallys: 82da56c77f90c572f0dbbfcb408c780e506d42ca", "@njhill Here's a complete debug log with paranoid leak detection:\r\n\r\nhttps://gist.github.com/dziemba/b55e1a5bea461d9ac6114cc0ae74b078", "@njhill @dziemba imho the retain() is correct here. I will need to look into this in more detail.", "@normanmaurer apologies I was kind of conflating two different observations. I agree the retain() as currently stands is correct, it could just be replaced by `msg = null` as a small optimization (this is separate to the problem at hand). My other point was that there may not always be a corresponding release() but now agree that's not what's happening in this case.\r\n\r\nI looked a bit closer and think I found the issue(s)... will open a PR soon.", "@njhill ok cool... Will not spend more time on this then and just wait for your PR :)", "@normanmaurer @dziemba please see #9479 \r\n\r\nFYI the main leak in the current code is here - the result of the future needs to be released, since the `finish` method that it's passed to does not take ref-count ownership: https://github.com/netty/netty/blob/4dc1eccf60252f8a690610c20495ed095e274d3a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java#L1302", "Thank you for fixing this! ❤️ ", "@dziemba no problem, thank you for reporting it with so much detail" ]
[ "@njhill can we also verify that this line returns false (add a `assertFalse`):\r\nhttps://github.com/netty/netty/blob/a95d04bb2f6b5f9d000f22b4daa36b5b1f12935b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java#L2901", "@normanmaurer do you mean that it should return true (no more refs left)? I just tried with false and it fails :)", "yes sorry... `true` :)", "nit: release first before log. ", "I think we should preserve the logic and fallback to the original truncated response if we did not receive a TCP response just like we did before:\r\n\r\nhttps://github.com/netty/netty/pull/9647/files#diff-6678a05ef56f2434e7aa20acbe9d97e9L1277\r\n\r\nAlso beside this we should not log the exception if we received the response before. As the exception may be just a \"connection reset\" and so harmless. Also consider calling `ctx.close()` in any case to ensure the channel is closed in all cases. \r\n\r\n", "why not make this a real java doc ?", "lets rename this to `trySuccess` as this is what it does ;) ", "Now fixed. I didn’t add an explicit close since the channel is closed immediately in the listener below.", "nit: why not add these also in the `ChannelInitializer` above ?", "@normanmaurer the latter one can't be done there since it references `tcpCtx` which is only created post-connect. But also none of the handlers are needed anyhow unless/until the channel connects successfully. I actually first tried to remove the `ChannelInitializer` completely, but `Bootstrap` requires at least one handler to be provided - maybe we could reconsider that requirement?\r\n\r\nI did just realize though that the `ChannelInitializer` can be replaced with the single handler that it adds - a small simplification. I just pushed another commit to do that.\r\n\r\nI'm also fine with adding it back along with the first two handlers if you would prefer that, but the last one needs to be here I think if we are to keep the query context verification." ]
"2019-10-08T23:23:49Z"
[ "defect" ]
DNS Resolver leaking direct memory
### Expected behavior DNS resolver does not leak direct memory when buffers are freed correctly. ### Actual behavior Memory is leaked, JVM runs out of direct memory. ``` [error] Exception in thread "main" io.netty.resolver.dns.DnsNameResolverException: [/10.0.2.3:53] failed to send a query via UDP (no stack trace available) [error] Caused by: io.netty.handler.codec.EncoderException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 234881024, max: 239075328) ``` full stacktrace: https://gist.github.com/dziemba/ffeb8f2c3b131a74c042746151ac05c0 ### Steps to reproduce - Use `DnsNameResolver.resolveAll` to query SRV records for a name that contains a lot of answers (e.g. `hugedns.test.dziemba.net`) - Run query repeatedly and `.release()` returned DNS records. - Watch java process memory usage increase until it finally throws with `io.netty.util.internal.OutOfDirectMemoryError` It might be important that the DNS response is large so it gets truncated and then retried via TCP. ### Minimal yet complete reproducer code (or URL to code) https://gist.github.com/dziemba/c904d227d105b6fc7cf00495257fbb40 - run with `-Xmx128m -XX:MaxDirectMemorySize=4m` to make it fail quickly - will fail as described after around 100-200 iterations (with above settings) - sometimes it fails with actual network errors, re-run a few times if that happens ### Netty version `4.1.42.Final` ### JVM version (e.g. `java -version`) ``` openjdk version "1.8.0_222" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_222-b10) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.222-b10, mixed mode) ``` I was not able to reproduce this on Java 11 or 12. ### OS version (e.g. `uname -a`) ``` Darwin wopro3 18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64 ``` I also tried to reproduce this on Linux. 
It shows the same error behavior (fails after 100 runs of the test script) but does not output the `OutOfDirectMemoryError` for some reason...
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java" ]
[ "resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java", "resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java" ]
[ "resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java" ]
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java index 4e84a475f81..c7551d8056c 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java @@ -50,7 +50,6 @@ import io.netty.resolver.InetNameResolver; import io.netty.resolver.ResolvedAddressTypes; import io.netty.util.NetUtil; -import io.netty.util.ReferenceCountUtil; import io.netty.util.concurrent.EventExecutor; import io.netty.util.concurrent.FastThreadLocal; import io.netty.util.concurrent.Future; @@ -1199,129 +1198,107 @@ private final class DnsResponseHandler extends ChannelInboundHandlerAdapter { @Override public void channelRead(ChannelHandlerContext ctx, Object msg) { - try { - final DatagramDnsResponse res = (DatagramDnsResponse) msg; - final int queryId = res.id(); + final DatagramDnsResponse res = (DatagramDnsResponse) msg; + final int queryId = res.id(); - if (logger.isDebugEnabled()) { - logger.debug("{} RECEIVED: UDP [{}: {}], {}", ch, queryId, res.sender(), res); - } + if (logger.isDebugEnabled()) { + logger.debug("{} RECEIVED: UDP [{}: {}], {}", ch, queryId, res.sender(), res); + } - final DnsQueryContext qCtx = queryContextManager.get(res.sender(), queryId); - if (qCtx == null) { - logger.warn("{} Received a DNS response with an unknown ID: {}", ch, queryId); - return; - } + final DnsQueryContext qCtx = queryContextManager.get(res.sender(), queryId); + if (qCtx == null) { + logger.warn("{} Received a DNS response with an unknown ID: {}", ch, queryId); + res.release(); + return; + } + + // Check if the response was truncated and if we can fallback to TCP to retry. + if (!res.isTruncated() || socketChannelFactory == null) { + qCtx.finish(res); + return; + } - // Check if the response was truncated and if we can fallback to TCP to retry. 
- if (res.isTruncated() && socketChannelFactory != null) { - // Let's retain as we may need it later on. - res.retain(); - - Bootstrap bs = new Bootstrap(); - bs.option(ChannelOption.SO_REUSEADDR, true) - .group(executor()) - .channelFactory(socketChannelFactory) - .handler(new ChannelInitializer<Channel>() { - @Override - protected void initChannel(Channel ch) { - ch.pipeline().addLast(TCP_ENCODER); - ch.pipeline().addLast(new TcpDnsResponseDecoder()); - ch.pipeline().addLast(new ChannelInboundHandlerAdapter() { - private boolean finish; - - @Override - public void channelRead(ChannelHandlerContext ctx, Object msg) { - try { - Channel channel = ctx.channel(); - DnsResponse response = (DnsResponse) msg; - int queryId = response.id(); - - if (logger.isDebugEnabled()) { - logger.debug("{} RECEIVED: TCP [{}: {}], {}", channel, queryId, - channel.remoteAddress(), response); - } - - DnsQueryContext tcpCtx = queryContextManager.get(res.sender(), queryId); - if (tcpCtx == null) { - logger.warn("{} Received a DNS response with an unknown ID: {}", - channel, queryId); - qCtx.finish(res); - return; - } - - // Release the original response as we will use the response that we - // received via TCP fallback. 
- res.release(); - - tcpCtx.finish(new AddressedEnvelopeAdapter( - (InetSocketAddress) ctx.channel().remoteAddress(), - (InetSocketAddress) ctx.channel().localAddress(), - response)); - - finish = true; - } finally { - ReferenceCountUtil.release(msg); - } - } - - @Override - public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { - if (!finish) { - if (logger.isDebugEnabled()) { - logger.debug("{} Error during processing response: TCP [{}: {}]", - ctx.channel(), queryId, - ctx.channel().remoteAddress(), cause); - } - // TCP fallback failed, just use the truncated response as - qCtx.finish(res); - } - } - }); - } - }); - bs.connect(res.sender()).addListener(new ChannelFutureListener() { + Bootstrap bs = new Bootstrap(); + bs.option(ChannelOption.SO_REUSEADDR, true) + .group(executor()) + .channelFactory(socketChannelFactory) + .handler(TCP_ENCODER); + bs.connect(res.sender()).addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + if (!future.isSuccess()) { + if (logger.isDebugEnabled()) { + logger.debug("{} Unable to fallback to TCP [{}]", queryId, future.cause()); + } + + // TCP fallback failed, just use the truncated response. 
+ qCtx.finish(res); + return; + } + final Channel channel = future.channel(); + + Promise<AddressedEnvelope<DnsResponse, InetSocketAddress>> promise = + channel.eventLoop().newPromise(); + final TcpDnsQueryContext tcpCtx = new TcpDnsQueryContext(DnsNameResolver.this, channel, + (InetSocketAddress) channel.remoteAddress(), qCtx.question(), + EMPTY_ADDITIONALS, promise); + + channel.pipeline().addLast(new TcpDnsResponseDecoder()); + channel.pipeline().addLast(new ChannelInboundHandlerAdapter() { @Override - public void operationComplete(ChannelFuture future) { - if (future.isSuccess()) { - final Channel channel = future.channel(); - - Promise<AddressedEnvelope<DnsResponse, InetSocketAddress>> promise = - channel.eventLoop().newPromise(); - new TcpDnsQueryContext(DnsNameResolver.this, channel, - (InetSocketAddress) channel.remoteAddress(), qCtx.question(), - EMPTY_ADDITIONALS, promise).query(true, future.channel().newPromise()); - promise.addListener( - new FutureListener<AddressedEnvelope<DnsResponse, InetSocketAddress>>() { - @Override - public void operationComplete( - Future<AddressedEnvelope<DnsResponse, InetSocketAddress>> future) { - channel.close(); - - if (future.isSuccess()) { - qCtx.finish(future.getNow()); - } else { - // TCP fallback failed, just use the truncated response. 
- qCtx.finish(res); - } - } - }); + public void channelRead(ChannelHandlerContext ctx, Object msg) { + Channel channel = ctx.channel(); + DnsResponse response = (DnsResponse) msg; + int queryId = response.id(); + + if (logger.isDebugEnabled()) { + logger.debug("{} RECEIVED: TCP [{}: {}], {}", channel, queryId, + channel.remoteAddress(), response); + } + + DnsQueryContext foundCtx = queryContextManager.get(res.sender(), queryId); + if (foundCtx == tcpCtx) { + tcpCtx.finish(new AddressedEnvelopeAdapter( + (InetSocketAddress) ctx.channel().remoteAddress(), + (InetSocketAddress) ctx.channel().localAddress(), + response)); } else { - if (logger.isDebugEnabled()) { - logger.debug("{} Unable to fallback to TCP [{}]", queryId, future.cause()); - } + response.release(); + tcpCtx.tryFailure("Received TCP response with unexpected ID", null, false); + logger.warn("{} Received a DNS response with an unexpected ID: {}", + channel, queryId); + } + } + + @Override + public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { + if (tcpCtx.tryFailure("TCP fallback error", cause, false) && logger.isDebugEnabled()) { + logger.debug("{} Error during processing response: TCP [{}: {}]", + ctx.channel(), queryId, + ctx.channel().remoteAddress(), cause); + } + } + }); + promise.addListener( + new FutureListener<AddressedEnvelope<DnsResponse, InetSocketAddress>>() { + @Override + public void operationComplete( + Future<AddressedEnvelope<DnsResponse, InetSocketAddress>> future) { + channel.close(); + + if (future.isSuccess()) { + qCtx.finish(future.getNow()); + res.release(); + } else { // TCP fallback failed, just use the truncated response. 
qCtx.finish(res); } } }); - } else { - qCtx.finish(res); + tcpCtx.query(true, future.channel().newPromise()); } - } finally { - ReferenceCountUtil.safeRelease(msg); - } + }); } @Override diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java index d5ba91e5e85..d476c1f2fe8 100644 --- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java +++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java @@ -159,7 +159,7 @@ public void operationComplete(ChannelFuture future) { private void onQueryWriteCompletion(ChannelFuture writeFuture) { if (!writeFuture.isSuccess()) { - setFailure("failed to send a query via " + protocol(), writeFuture.cause()); + tryFailure("failed to send a query via " + protocol(), writeFuture.cause(), false); return; } @@ -174,40 +174,37 @@ public void run() { return; } - setFailure("query via " + protocol() + " timed out after " + - queryTimeoutMillis + " milliseconds", null); + tryFailure("query via " + protocol() + " timed out after " + + queryTimeoutMillis + " milliseconds", null, true); } }, queryTimeoutMillis, TimeUnit.MILLISECONDS); } } + /** + * Takes ownership of passed envelope + */ void finish(AddressedEnvelope<? extends DnsResponse, InetSocketAddress> envelope) { final DnsResponse res = envelope.content(); if (res.count(DnsSection.QUESTION) != 1) { logger.warn("Received a DNS response with invalid number of questions: {}", envelope); - return; - } - - if (!question().equals(res.recordAt(DnsSection.QUESTION))) { + } else if (!question().equals(res.recordAt(DnsSection.QUESTION))) { logger.warn("Received a mismatching DNS response: {}", envelope); - return; + } else if (trySuccess(envelope)) { + return; // Ownership transferred, don't release } - - setSuccess(envelope); + envelope.release(); } - private void setSuccess(AddressedEnvelope<? 
extends DnsResponse, InetSocketAddress> envelope) { - Promise<AddressedEnvelope<DnsResponse, InetSocketAddress>> promise = this.promise; - @SuppressWarnings("unchecked") - AddressedEnvelope<DnsResponse, InetSocketAddress> castResponse = - (AddressedEnvelope<DnsResponse, InetSocketAddress>) envelope.retain(); - if (!promise.trySuccess(castResponse)) { - // We failed to notify the promise as it was failed before, thus we need to release the envelope - envelope.release(); - } + @SuppressWarnings("unchecked") + private boolean trySuccess(AddressedEnvelope<? extends DnsResponse, InetSocketAddress> envelope) { + return promise.trySuccess((AddressedEnvelope<DnsResponse, InetSocketAddress>) envelope); } - private void setFailure(String message, Throwable cause) { + boolean tryFailure(String message, Throwable cause, boolean timeout) { + if (promise.isDone()) { + return false; + } final InetSocketAddress nameServerAddr = nameServerAddr(); final StringBuilder buf = new StringBuilder(message.length() + 64); @@ -218,14 +215,14 @@ private void setFailure(String message, Throwable cause) { .append(" (no stack trace available)"); final DnsNameResolverException e; - if (cause == null) { + if (timeout) { // This was caused by an timeout so use DnsNameResolverTimeoutException to allow the user to // handle it special (like retry the query). e = new DnsNameResolverTimeoutException(nameServerAddr, question(), buf.toString()); } else { e = new DnsNameResolverException(nameServerAddr, question(), buf.toString(), cause); } - promise.tryFailure(e); + return promise.tryFailure(e); } @Override
diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java index 38a27ca4a1d..3e35e288a5b 100644 --- a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java +++ b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java @@ -71,6 +71,7 @@ import org.junit.rules.ExpectedException; import java.io.IOException; +import java.io.InputStream; import java.net.DatagramSocket; import java.net.Inet4Address; import java.net.InetAddress; @@ -2759,6 +2760,25 @@ public void testTruncatedWithTcpFallbackBecauseOfMtu() throws IOException { testTruncated0(true, true); } + private static DnsMessageModifier modifierFrom(DnsMessage message) { + DnsMessageModifier modifier = new DnsMessageModifier(); + modifier.setAcceptNonAuthenticatedData(message.isAcceptNonAuthenticatedData()); + modifier.setAdditionalRecords(message.getAdditionalRecords()); + modifier.setAnswerRecords(message.getAnswerRecords()); + modifier.setAuthoritativeAnswer(message.isAuthoritativeAnswer()); + modifier.setAuthorityRecords(message.getAuthorityRecords()); + modifier.setMessageType(message.getMessageType()); + modifier.setOpCode(message.getOpCode()); + modifier.setQuestionRecords(message.getQuestionRecords()); + modifier.setRecursionAvailable(message.isRecursionAvailable()); + modifier.setRecursionDesired(message.isRecursionDesired()); + modifier.setReserved(message.isReserved()); + modifier.setResponseCode(message.getResponseCode()); + modifier.setTransactionId(message.getTransactionId()); + modifier.setTruncated(message.isTruncated()); + return modifier; + } + private static void testTruncated0(boolean tcpFallback, final boolean truncatedBecauseOfMtu) throws IOException { final String host = "somehost.netty.io"; final String txt = "this is a txt record"; @@ -2784,20 +2804,7 @@ protected DnsMessage filterMessage(DnsMessage message) { if (!truncatedBecauseOfMtu) { // Create a 
copy of the message but set the truncated flag. - DnsMessageModifier modifier = new DnsMessageModifier(); - modifier.setAcceptNonAuthenticatedData(message.isAcceptNonAuthenticatedData()); - modifier.setAdditionalRecords(message.getAdditionalRecords()); - modifier.setAnswerRecords(message.getAnswerRecords()); - modifier.setAuthoritativeAnswer(message.isAuthoritativeAnswer()); - modifier.setAuthorityRecords(message.getAuthorityRecords()); - modifier.setMessageType(message.getMessageType()); - modifier.setOpCode(message.getOpCode()); - modifier.setQuestionRecords(message.getQuestionRecords()); - modifier.setRecursionAvailable(message.isRecursionAvailable()); - modifier.setRecursionDesired(message.isRecursionDesired()); - modifier.setReserved(message.isReserved()); - modifier.setResponseCode(message.getResponseCode()); - modifier.setTransactionId(message.getTransactionId()); + DnsMessageModifier modifier = modifierFrom(message); modifier.setTruncated(true); return modifier.getDnsMessage(); } @@ -2842,8 +2849,15 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) { // If we are configured to use TCP as a fallback lets replay the dns message over TCP Socket socket = serverSocket.accept(); + InputStream in = socket.getInputStream(); + assertTrue((in.read() << 8 | (in.read() & 0xff)) > 2); // skip length field + int txnId = in.read() << 8 | (in.read() & 0xff); + IoBuffer ioBuffer = IoBuffer.allocate(1024); - new DnsMessageEncoder().encode(ioBuffer, messageRef.get()); + // Must replace the transactionId with the one from the TCP request + DnsMessageModifier modifier = modifierFrom(messageRef.get()); + modifier.setTransactionId(txnId); + new DnsMessageEncoder().encode(ioBuffer, modifier.getDnsMessage()); ioBuffer.flip(); ByteBuffer lenBuffer = ByteBuffer.allocate(2); @@ -2884,7 +2898,7 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) { } else { assertTrue(envelope.content().isTruncated()); } - envelope.release(); + 
assertTrue(envelope.release()); } finally { dnsServer2.stop(); if (resolver != null) {
train
val
"2019-10-09T15:12:52"
"2019-10-02T17:02:46Z"
dziemba
val
netty/netty/9668_9670
netty/netty
netty/netty/9668
netty/netty/9670
[ "keyword_pr_to_issue" ]
2e5dd288008d4e674f53beaf8d323595813062fb
e745ef0645cc63e017d0a6599610a59af22842f8
[ "@normanmaurer thanks", "Hi, @switchYello. I think this is expected behavior, see \r\n https://github.com/netty/netty/blob/2e5dd288008d4e674f53beaf8d323595813062fb/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java#L432-L436", "@amizurov \r\n\r\n\tI see that code annotation and iss,but i think, \r\n\tcode 'if (ctx.isRemoved()) {break;}' is to exit the 'while (in.isReadable())' loop,it work good.\r\n\r\n\tmy ISS means,when we remove this decoder, 'webInStream' has byte data and 'out List' has decoded data,we should fire 'out List' before 'webInStream'\r\n\tOtherwise, it will lead to disorder.\r\n\r\n```\r\n while (in.isReadable()) {\r\n int outSize = out.size();\r\n\r\n if (outSize > 0) {\r\n fireChannelRead(ctx, out, outSize);\r\n out.clear();\r\n\r\n // Check if this handler was removed before continuing with decoding.\r\n // If it was removed, it is not safe to continue to operate on the buffer.\r\n //\r\n // See:\r\n // - https://github.com/netty/netty/issues/4635\r\n if (ctx.isRemoved()) {\r\n break;\r\n }\r\n outSize = 0;\r\n }\r\n\r\n```\r\n\r\n", "@switchYello ohh missed, that is output list does not drain after handler removed." ]
[ "nit: add whitespace after each `,`", "You will need to call `release()` on each of the `ByteBuf` instances... Alternative you could change your decoder to just do:\r\n\r\n```\r\nout.add(in.readByte());\r\n```\r\n\r\nAnd then assert the returned `byte`.", "nit: can we also add `assertFalse(buffer5.isReadable());` here to ensure we not produce more then expected ?", "@switchYello let me know once you did address this and I will merge. " ]
"2019-10-15T13:15:10Z"
[]
remove handler cause ByteToMessageDecoder out disorder
### remove handler cause ByteToMessageDecoder out disorder I input byte stream [1,2,3,4,5], when i read `4` then remove ByteToMessageDecoder, the out is stream [1,2,3,5,4] Here are the test cases,the cases will fail ``` @Test public void testDisorder(){ ByteToMessageDecoder decoder = new ByteToMessageDecoder() { int count = 0; //read 4 byte then remove this decoder @Override protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { out.add(in.readBytes(1)); if (++count >= 4) { ctx.pipeline().remove(this); } } }; EmbeddedChannel channel = new EmbeddedChannel(decoder); assertTrue(channel.writeInbound(Unpooled.wrappedBuffer(new byte[]{1, 2, 3, 4, 5}))); assertEquals(1, ((ByteBuf) channel.readInbound()).readByte()); assertEquals(2, ((ByteBuf) channel.readInbound()).readByte()); assertEquals(3, ((ByteBuf) channel.readInbound()).readByte()); assertEquals(4, ((ByteBuf) channel.readInbound()).readByte()); assertEquals(5, ((ByteBuf) channel.readInbound()).readByte()); assertFalse(channel.finish()); } ``` ### netty 4.1 ### java8 ### window10
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java" ]
[ "codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java" ]
[ "codec/src/test/java/io/netty/handler/codec/ByteToMessageDecoderTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java index 96cd7b3b99b..361863cbafa 100644 --- a/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/ByteToMessageDecoder.java @@ -81,7 +81,7 @@ public ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) try { final ByteBuf buffer; if (cumulation.writerIndex() > cumulation.maxCapacity() - in.readableBytes() - || cumulation.refCnt() > 1 || cumulation.isReadOnly()) { + || cumulation.refCnt() > 1 || cumulation.isReadOnly()) { // Expand cumulation (by replace it) when either there is not more room in the buffer // or if the refCnt is greater then 1 which may happen when the user use slice().retain() or // duplicate().retain() or if its read-only. @@ -507,6 +507,8 @@ final void decodeRemovalReentryProtection(ChannelHandlerContext ctx, ByteBuf in, boolean removePending = decodeState == STATE_HANDLER_REMOVED_PENDING; decodeState = STATE_INIT; if (removePending) { + fireChannelRead(ctx, out, out.size()); + out.clear(); handlerRemoved(ctx); } }
diff --git a/codec/src/test/java/io/netty/handler/codec/ByteToMessageDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/ByteToMessageDecoderTest.java index 875de766625..cd9ee370831 100644 --- a/codec/src/test/java/io/netty/handler/codec/ByteToMessageDecoderTest.java +++ b/codec/src/test/java/io/netty/handler/codec/ByteToMessageDecoderTest.java @@ -399,4 +399,31 @@ public void read(ChannelHandlerContext ctx) throws Exception { } assertFalse(channel.finish()); } + + @Test + public void testDisorder() { + ByteToMessageDecoder decoder = new ByteToMessageDecoder() { + int count; + + //read 4 byte then remove this decoder + @Override + protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { + out.add(in.readByte()); + if (++count >= 4) { + ctx.pipeline().remove(this); + } + } + }; + EmbeddedChannel channel = new EmbeddedChannel(decoder); + assertTrue(channel.writeInbound(Unpooled.wrappedBuffer(new byte[]{1, 2, 3, 4, 5}))); + assertEquals((byte) 1, channel.readInbound()); + assertEquals((byte) 2, channel.readInbound()); + assertEquals((byte) 3, channel.readInbound()); + assertEquals((byte) 4, channel.readInbound()); + ByteBuf buffer5 = channel.readInbound(); + assertEquals((byte) 5, buffer5.readByte()); + assertFalse(buffer5.isReadable()); + assertTrue(buffer5.release()); + assertFalse(channel.finish()); + } }
test
val
"2019-10-14T16:10:15"
"2019-10-15T11:57:13Z"
switchYello
val
netty/netty/8554_9688
netty/netty
netty/netty/8554
netty/netty/9688
[ "keyword_pr_to_issue" ]
95230e01da5d9cf2447a9d09d4c42bf42eb7d479
8674ccfcd269382be993c23cebe4a72233e905fb
[ "@nicmunroe yeah this looks like a bug... Can you provide a fix ?", "> @nicmunroe yeah this looks like a bug... Can you provide a fix ?\r\n\r\nPossibly - I'm working through CLA issues to make sure I'm cleared to contribute.", "Thanks a lot !\n\n> Am 15.11.2018 um 00:22 schrieb Nic Munroe <notifications@github.com>:\n> \n> @nicmunroe yeah this looks like a bug... Can you provide a fix ?\n> \n> Possibly - I'm working through CLA issues to make sure I'm cleared to contribute.\n> \n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "@normanmaurer @nicmunroe Hi guys , ```Content-Type = media-type``` follows from RFC:\r\n```media-type = type \"/\" subtype *( OWS \";\" OWS parameter )```\r\n ```type = token```\r\n ``` subtype = token```\r\nI think starting content-type value with ';' is not correct, and we should handle this like \"unsupported media type\".\r\n", "Sounds good to me @amizurov ... @nicmunroe WDYT ?", "The relevant RFC sections (I'm assuming you're referring to RFC 7231 [Section 3.1.1.5](https://tools.ietf.org/html/rfc7231#section-3.1.1.5) and [Section 3.1.1.1](https://tools.ietf.org/html/rfc7231#section-3.1.1.1)) don't specify that you must respond with an error when encountering invalid content-type. 
In fact when talking about Content-Type in section 3.1.1.5 it says:\r\n\r\n> In practice, resource owners do not always properly configure their origin server to provide the correct Content-Type for a given representation, with the result that some clients will examine a payload's content and override the specified type.\r\n\r\nIt goes on to say this is might be a bad idea and `Implementers are encouraged to provide a means of disabling such \"content sniffing\" when it is used`, but the point is that the RFC seems to be indicating that it should be expected for Content-Type to sometimes be bad/broken/incorrect.\r\n\r\nSo I think we could choose to gracefully handle badly-formatted Content-Type here without throwing an error, and it wouldn't go against the RFC (and might even be more in line with the RFC, depending on how you read it).\r\n\r\nFrom a non-RFC perspective it feels weird to me to throw an exception of any sort when calling `HttpPostRequestDecoder.isMultipart(HttpRequest)` (i.e. a malformed content-type header is clearly not multipart - so the method should just return false), and I'm not sure the `splitHeaderContentType(...)` method would be the right place to throw it even if we decided throwing an exception was warranted.\r\n\r\n(Unless throwing an exception is not what you meant by \"handle this like unsupported media type\"? If so I'd need more info on what you meant.)\r\n\r\nUltimately my gut instinct is that this doesn't feel like a fail-fast situation. Thoughts?", "@nicmunroe Thanks for your thoughts. \r\n\r\nFor clarity, i meant that ``content-type`` value starting with **';'** is incorrect. And I think when we expect the concrete ``media-type``: (application/json, multipart/form-data ... etc) we should handle request like a ```400 Bad Request``` and responding to client with reason: \"unsupported media type\" for example. \r\n\r\nThis is make sense to the client side to knowns what happens and fix issue also we prevent problems on server. 
But how you said - it's depending how we read RFC and what we wont to fix only one method or situation in common ?", "That's the thing though - at this point in the code there's no concrete media-type expectation. It's just \"parse content-type into an array of multipart stuff and return empty if you can't parse it\", or \"is this multipart?\". This isn't the portion of code where you say \"I *expect* this to be multipart, and you should fail with a 400 for me if it's not\".\r\n\r\nThere may be a spot in the code where that expectation is either explicit or implicit in the contract of the method, and in that case I'm all on board with failing fast. I just don't think this is that spot.", "@nicmunroe My point of view is how to do this method more informative if it make sense. If no, then you are right - ```isMultipart(\"bad content\") -> false```. ", "That's fair. I have no problem with making this method more informative in principal. I'd just prefer a separate issue if there's a desire to make more sweeping functional changes, as adjusting method signatures and such is going to have a ripple effect and end up touching a bunch of code. Instead, at least for this issue, I'd like to just make a targeted fix for the `StringIndexOutOfBoundsException`.", "No problem, do that", "@amizurov @nicmunroe any update here ?", "Hi, @normanmaurer. Oh what the long story, i'll try to fix it soon if @nicmunroe is not opposed to check PR.", "Sorry it took a long time to get approval to do a PR, by then I was buried in other stuff, and I was never able to unbury my priority list enough to get to this. :( \r\n\r\nYeah, I'm happy to check a PR. And/or I still intend to get to this ... someday - I just can't guarantee when that would be.", "@nicmunroe could you please check fix", "Sorry I was out all last week, but your solution looks good to me! Thank you @amizurov !" ]
[]
"2019-10-18T11:51:43Z"
[]
StringIndexOutOfBoundsException thrown by HttpPostRequestDecoder.splitHeaderContentType() when Content-Type header starts with a semicolon
### Expected behavior I'm not sure what the desired behavior should be for `HttpPostRequestDecoder.splitHeaderContentType()` when it finds a Content-Type header that starts with a semicolon, but I'm assuming `StringIndexOutOfBoundsException` is not intentional. ### Actual behavior `HttpPostRequestDecoder.splitHeaderContentType()` throws a `StringIndexOutOfBoundsException` when it parses a Content-Type header that starts with a semicolon `;`. Specifically this line, because the `aEnd` variable is 0 when the Content-Type header starts with a semicolon: https://github.com/netty/netty/blob/00afb19d7a37de21b35ce4f6cb3fa7f74809f2ab/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java#L278. ### Steps to reproduce 1. Make a request to a Netty HTTP server and pass a Content-Type header that starts with a semicolon `;`. I'm not sure if there are HTTP clients that would sanitize this for you and prevent the problem, but I was able to reproduce this with `RestAssured` and a Netty `Bootstrap` acting as a HTTP Client via `HttpClientCodec`, so there are at least a few clients you can use to reproduce. 2. In the Netty server that receives the request, call `HttpPostRequestDecoder.isMultipart(HttpRequest)` or any other code path that ultimately causes `HttpPostRequestDecoder.splitHeaderContentType(String)` to be called with the request's Content-Type header. 3. You'll see a `StringIndexOutOfBoundsException` get thrown. ### Netty version `4.1.30.Final` (probably others as well)
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java index 0c106264063..b430471248c 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoder.java @@ -140,11 +140,11 @@ protected enum MultiPartStatus { * @return True if the request is a Multipart request */ public static boolean isMultipart(HttpRequest request) { - if (request.headers().contains(HttpHeaderNames.CONTENT_TYPE)) { - return getMultipartDataBoundary(request.headers().get(HttpHeaderNames.CONTENT_TYPE)) != null; - } else { - return false; + String mimeType = request.headers().get(HttpHeaderNames.CONTENT_TYPE); + if (mimeType != null && mimeType.startsWith(HttpHeaderValues.MULTIPART_FORM_DATA.toString())) { + return getMultipartDataBoundary(mimeType) != null; } + return false; } /**
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java index 6e6e3cbf359..2c3bfe7ed80 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java @@ -29,6 +29,7 @@ import io.netty.handler.codec.http.HttpHeaderNames; import io.netty.handler.codec.http.HttpHeaderValues; import io.netty.handler.codec.http.HttpMethod; +import io.netty.handler.codec.http.HttpRequest; import io.netty.handler.codec.http.HttpVersion; import io.netty.handler.codec.http.LastHttpContent; import io.netty.util.CharsetUtil; @@ -734,4 +735,17 @@ public void testNotLeak() { assertTrue(request.release()); } } + + @Test + public void testMultipartFormDataContentType() { + HttpRequest request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST, "/"); + assertFalse(HttpPostRequestDecoder.isMultipart(request)); + + String multipartDataValue = HttpHeaderValues.MULTIPART_FORM_DATA + ";" + "boundary=gc0p4Jq0M2Yt08jU534c0p"; + request.headers().set(HttpHeaderNames.CONTENT_TYPE, ";" + multipartDataValue); + assertFalse(HttpPostRequestDecoder.isMultipart(request)); + + request.headers().set(HttpHeaderNames.CONTENT_TYPE, multipartDataValue); + assertTrue(HttpPostRequestDecoder.isMultipart(request)); + } }
train
val
"2019-10-17T19:12:20"
"2018-11-13T18:38:16Z"
nicmunroe
val
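The record above patches `HttpPostRequestDecoder.isMultipart` to check for the `multipart/form-data` prefix before any substring parsing, so a Content-Type starting with a semicolon no longer triggers a `StringIndexOutOfBoundsException`. Below is a minimal, JDK-only sketch of that defensive pattern — the class and helper names are hypothetical and this is not Netty's actual parsing logic:

```java
// Defensive multipart Content-Type check (sketch, not Netty's implementation).
// The key idea from the patch: verify the "multipart/form-data" prefix first,
// so a malformed header such as ";multipart/form-data;boundary=..." is simply
// rejected instead of reaching index-based parsing that can throw.
public final class MultipartCheck {
    private static final String MULTIPART_FORM_DATA = "multipart/form-data";

    private MultipartCheck() { }

    /** Returns true only for well-formed multipart/form-data Content-Type values. */
    public static boolean isMultipart(String contentType) {
        return contentType != null
                && contentType.startsWith(MULTIPART_FORM_DATA)
                && extractBoundary(contentType) != null;
    }

    /** Returns the boundary parameter value, or null if absent or empty. */
    static String extractBoundary(String contentType) {
        for (String part : contentType.split(";")) {
            String p = part.trim();
            if (p.startsWith("boundary=")) {
                String boundary = p.substring("boundary=".length());
                return boundary.isEmpty() ? null : boundary;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        if (isMultipart(null)) throw new AssertionError();
        if (isMultipart(";multipart/form-data;boundary=gc0p4Jq0M2Yt08jU534c0p")) {
            throw new AssertionError();
        }
        if (!isMultipart("multipart/form-data;boundary=gc0p4Jq0M2Yt08jU534c0p")) {
            throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```

This mirrors the shape of the test added in the gold patch: the same header value is rejected with a leading `;` and accepted without it.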
netty/netty/8855_9701
netty/netty
netty/netty/8855
netty/netty/9701
[ "keyword_pr_to_issue" ]
247a4db470d58c5087fb4f2388a18be42b0ce9d0
7d6d953153697bd66c3b01ca8ec73c4494a81788
[ "@slandelle WDYT ?", "👍 I've been parsing URIs this way (HTML5's form-url-encoded) for many years in both AHC and Gatling and I don't recall anyone ever complaining that semi-colon wasn't interpreted as a parameter separator.", "Note also that when encoding an URL with java URI.tostring(), semicolons are _not_ encoded, so the client has extra replaces to do before sending the request", "@slandelle @marcban so I am a bit confused what should be done here... Anyone interested in a PR ?", "For me the correction would only be to delete the `case ‘;’ :` on line 230 of QueryStringDecoder.java. But maybe there are tests to correct also. Never done a PR but I can try...", "> Note also that when encoding an URL with java URI.tostring(), semicolons are _not_ encoded, so the client has extra replaces to do before sending the request\r\n\r\nI suppose you mean: if my server is Netty and uses `QueryStringDecoder`, I have to encode semicolons manually so `QueryStringDecoder` doesn't mess up with parsing, as URI.toString() won't encode them (which is expected).\r\n\r\n> For me the correction would only be to delete the `case ‘;’ :` on line 230 of QueryStringDecoder.java. But maybe there are tests to correct also. Never done a PR but I can try...\r\n\r\nThis is a low hanging fruit so it would be a perfect ft for a first PR indeed :)", "1/ exactly!\r\n2/ I have some docs to read first... :-)" ]
[]
"2019-10-22T15:37:15Z"
[]
Support semicolon as a normal character in URI (no longer a parameter separator)
### Expected result According to the 2014 W3C Recommendation, semicolon is now illegal as a parameter separator in a http URI. `?foo=bar;baz` means the parameter `foo` will have the value `bar;baz` see https://www.w3.org/TR/2014/REC-html5-20141028/forms.html#url-encoded-form-data (thanks to https://stackoverflow.com/questions/3481664/semicolon-as-url-query-separator) ### Current result The URI decoder now supports the obsolete 1999 W3C recommendation `?foo=bar;baz` is decoded as `?foo=bar & baz=` (see QueryStringDecoder.java, line 230 : `case '&': case ';':` both characters are handled equally) ### Netty version tested (through vert.x) on 1.4.19 ### JVM version (e.g. `java -version`) 1.8.0_60 ### OS version (e.g. `uname -a`) Windows i686-pc Intel _Of course this issue is linked to #3044 and #2896, but as the norm changed, I suppose it's better to open a new issue_
[ "codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java index a20126dbbb8..01b747b3f23 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/QueryStringDecoder.java @@ -68,6 +68,7 @@ public class QueryStringDecoder { private final Charset charset; private final String uri; private final int maxParams; + private final boolean semicolonIsNormalChar; private int pathEndIdx; private String path; private Map<String, List<String>> params; @@ -109,9 +110,19 @@ public QueryStringDecoder(String uri, Charset charset, boolean hasPath) { * specified charset. */ public QueryStringDecoder(String uri, Charset charset, boolean hasPath, int maxParams) { + this(uri, charset, hasPath, maxParams, false); + } + + /** + * Creates a new decoder that decodes the specified URI encoded in the + * specified charset. + */ + public QueryStringDecoder(String uri, Charset charset, boolean hasPath, + int maxParams, boolean semicolonIsNormalChar) { this.uri = checkNotNull(uri, "uri"); this.charset = checkNotNull(charset, "charset"); this.maxParams = checkPositive(maxParams, "maxParams"); + this.semicolonIsNormalChar = semicolonIsNormalChar; // `-1` means that path end index will be initialized lazily pathEndIdx = hasPath ? -1 : 0; @@ -138,6 +149,14 @@ public QueryStringDecoder(URI uri, Charset charset) { * specified charset. */ public QueryStringDecoder(URI uri, Charset charset, int maxParams) { + this(uri, charset, maxParams, false); + } + + /** + * Creates a new decoder that decodes the specified URI encoded in the + * specified charset. 
+ */ + public QueryStringDecoder(URI uri, Charset charset, int maxParams, boolean semicolonIsNormalChar) { String rawPath = uri.getRawPath(); if (rawPath == null) { rawPath = EMPTY_STRING; @@ -147,6 +166,7 @@ public QueryStringDecoder(URI uri, Charset charset, int maxParams) { this.uri = rawQuery == null? rawPath : rawPath + '?' + rawQuery; this.charset = checkNotNull(charset, "charset"); this.maxParams = checkPositive(maxParams, "maxParams"); + this.semicolonIsNormalChar = semicolonIsNormalChar; pathEndIdx = rawPath.length(); } @@ -177,7 +197,7 @@ public String path() { */ public Map<String, List<String>> parameters() { if (params == null) { - params = decodeParams(uri, pathEndIdx(), charset, maxParams); + params = decodeParams(uri, pathEndIdx(), charset, maxParams, semicolonIsNormalChar); } return params; } @@ -204,7 +224,8 @@ private int pathEndIdx() { return pathEndIdx; } - private static Map<String, List<String>> decodeParams(String s, int from, Charset charset, int paramsLimit) { + private static Map<String, List<String>> decodeParams(String s, int from, Charset charset, int paramsLimit, + boolean semicolonIsNormalChar) { int len = s.length(); if (from >= len) { return Collections.emptyMap(); @@ -226,8 +247,12 @@ private static Map<String, List<String>> decodeParams(String s, int from, Charse valueStart = i + 1; } break; - case '&': case ';': + if (semicolonIsNormalChar) { + continue; + } + // fall-through + case '&': if (addParam(s, nameStart, valueStart, i, params, charset)) { paramsLimit--; if (paramsLimit == 0) {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java index a0071c4a70f..5937d66bf24 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/QueryStringDecoderTest.java @@ -129,6 +129,13 @@ public void testExotic() { assertQueryString("/foo?a=1&a=&a=", "/foo?a=1&a&a="); } + @Test + public void testSemicolon() { + assertQueryString("/foo?a=1;2", "/foo?a=1;2", false); + // ";" should be treated as a normal character, see #8855 + assertQueryString("/foo?a=1;2", "/foo?a=1%3B2", true); + } + @Test public void testPathSpecific() { // decode escaped characters @@ -225,8 +232,14 @@ public void testUrlDecoding() throws Exception { } private static void assertQueryString(String expected, String actual) { - QueryStringDecoder ed = new QueryStringDecoder(expected, CharsetUtil.UTF_8); - QueryStringDecoder ad = new QueryStringDecoder(actual, CharsetUtil.UTF_8); + assertQueryString(expected, actual, false); + } + + private static void assertQueryString(String expected, String actual, boolean semicolonIsNormalChar) { + QueryStringDecoder ed = new QueryStringDecoder(expected, CharsetUtil.UTF_8, true, + 1024, semicolonIsNormalChar); + QueryStringDecoder ad = new QueryStringDecoder(actual, CharsetUtil.UTF_8, true, + 1024, semicolonIsNormalChar); Assert.assertEquals(ed.path(), ad.path()); Assert.assertEquals(ed.parameters(), ad.parameters()); }
train
val
"2019-10-22T15:39:43"
"2019-02-07T17:15:10Z"
marcban
val
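The record above adds a `semicolonIsNormalChar` flag to `QueryStringDecoder` so that, per the 2014 HTML5 recommendation, only `&` separates parameters and `?foo=bar;baz` parses as a single value `bar;baz`. A minimal, JDK-only sketch of that parsing behaviour (hypothetical class name, and deliberately simplified — no percent-decoding or charset handling):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Query-string parsing where ';' is an ordinary character, matching the
// behaviour the patch enables behind the semicolonIsNormalChar flag:
// '&' is the only parameter separator, so "a=1;2" yields a -> "1;2".
public final class SimpleQueryParser {
    private SimpleQueryParser() { }

    public static Map<String, List<String>> parse(String query) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {  // '&' is the only separator
            if (pair.isEmpty()) {
                continue;
            }
            int eq = pair.indexOf('=');
            String name = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1); // ';' kept verbatim
            params.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, List<String>> p = parse("a=1;2&b=3");
        if (!"1;2".equals(p.get("a").get(0))) throw new AssertionError();
        if (!"3".equals(p.get("b").get(0))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Under the legacy 1999 behaviour the same input would instead have produced three parameters (`a=1`, `2`, `b=3`), which is exactly the ambiguity the flag resolves.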
netty/netty/9706_9707
netty/netty
netty/netty/9706
netty/netty/9707
[ "keyword_pr_to_issue" ]
844b82b986b42d76f29992fb7a0530784786b203
2f32e0b8adb63decd9031e26fa5dd4154d93ce97
[ "@jasonstack want to provide a PR against 4.1 branch ?", "@normanmaurer sure. I will submit a PR..", "@jasonstack cool... please also ensure you sign the ICLA: https://netty.io/s/icla " ]
[]
"2019-10-24T14:09:08Z"
[]
Netty SSL doesn't respect `-Djdk.tls.client.protocols`
### Expected behavior `-Djdk.tls.client.protocols` should control Netty's SSL protocols ### Actual behavior Netty checks if `TLSv1, TLSv1.1, TLSv1.2` are [supported against JDK's supported protocols](https://github.com/netty/netty/blob/39cc7a673939dec96258ff27f5b1874671838af0/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java#L103) which are hardcoded. ### Steps to reproduce ### Minimal yet complete reproducer code (or URL to code) ``` System.setProperty("jdk.tls.client.protocols", "TLSv1.2"); SSLContext context = SSLContext.getInstance("TLS"); context.init(null, null, null); SSLEngine engine = context.createSSLEngine(); // respect JVM flag: [TLSv1.2] System.err.println("Default " + Arrays.toString(context.getDefaultSSLParameters().getProtocols())); // used by Netty, don't respect JVM flag: [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2] System.err.println("Supported " + Arrays.toString(engine.getSupportedProtocols())); ``` ### Netty version 4.0.56 ### JVM version (e.g. `java -version`) 1.8.0_162 ### OS version (e.g. `uname -a`) `dev machine`: 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64
[ "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java index eb128fe80e7..0e9a1273831 100644 --- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java +++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java @@ -79,7 +79,7 @@ public class JdkSslContext extends SslContext { DEFAULT_PROVIDER = context.getProvider(); SSLEngine engine = context.createSSLEngine(); - DEFAULT_PROTOCOLS = defaultProtocols(engine); + DEFAULT_PROTOCOLS = defaultProtocols(context, engine); SUPPORTED_CIPHERS = Collections.unmodifiableSet(supportedCiphers(engine)); DEFAULT_CIPHERS = Collections.unmodifiableList(defaultCiphers(engine, SUPPORTED_CIPHERS)); @@ -98,9 +98,9 @@ public class JdkSslContext extends SslContext { } } - private static String[] defaultProtocols(SSLEngine engine) { - // Choose the sensible default list of protocols. - final String[] supportedProtocols = engine.getSupportedProtocols(); + private static String[] defaultProtocols(SSLContext context, SSLEngine engine) { + // Choose the sensible default list of protocols that respects JDK flags, eg. jdk.tls.client.protocols + final String[] supportedProtocols = context.getDefaultSSLParameters().getProtocols(); Set<String> supportedProtocolsSet = new HashSet<String>(supportedProtocols.length); Collections.addAll(supportedProtocolsSet, supportedProtocols); List<String> protocols = new ArrayList<String>(); @@ -261,7 +261,7 @@ public JdkSslContext(SSLContext sslContext, SSLEngine engine = sslContext.createSSLEngine(); try { if (protocols == null) { - this.protocols = defaultProtocols(engine); + this.protocols = defaultProtocols(sslContext, engine); } else { this.protocols = protocols; }
null
train
val
"2019-10-24T14:57:00"
"2019-10-24T06:55:54Z"
jasonstack
val
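The record above changes `JdkSslContext` to derive its default protocol list from `SSLContext.getDefaultSSLParameters().getProtocols()` rather than `SSLEngine.getSupportedProtocols()`, because only the former honours `-Djdk.tls.client.protocols`. The issue's own reproducer uses only the JDK, so it can be run standalone; here it is restructured into testable helpers (helper names are mine, not Netty's):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

// JDK-only demonstration of the distinction behind the fix:
// getSupportedProtocols() ignores -Djdk.tls.client.protocols, while
// getDefaultSSLParameters() respects it.
public final class TlsProtocolDefaults {
    /** Every protocol the engine could speak, regardless of JVM flags. */
    public static String[] supportedProtocols() throws Exception {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null);
        SSLEngine engine = context.createSSLEngine();
        return engine.getSupportedProtocols();
    }

    /** Protocols the JDK enables by default; honours jdk.tls.client.protocols. */
    public static String[] defaultProtocols() throws Exception {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null);
        return context.getDefaultSSLParameters().getProtocols();
    }

    public static void main(String[] args) throws Exception {
        Set<String> supported = new HashSet<>(Arrays.asList(supportedProtocols()));
        String[] defaults = defaultProtocols();
        // The default list is a non-empty subset of the supported list.
        if (defaults.length == 0 || !supported.containsAll(Arrays.asList(defaults))) {
            throw new AssertionError();
        }
        System.out.println("Supported: " + supported);
        System.out.println("Default:   " + Arrays.asList(defaults));
    }
}
```

Run with `-Djdk.tls.client.protocols=TLSv1.2` and only `defaultProtocols()` narrows to `[TLSv1.2]`, which is why the patch switches to it as the source of defaults.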
netty/netty/9293_9711
netty/netty
netty/netty/9293
netty/netty/9711
[ "keyword_pr_to_issue" ]
844b82b986b42d76f29992fb7a0530784786b203
aba9b34a59a2ad1182330506c9e3f6a6f681ea75
[ "@anuraaga this sounds like a good idea... I think we may also want to support varargs ... Maybe you want to provide an PR ?", "@anuraaga interested in doing a pr ?", "Yup - sorry for the delay have had some stuff lately but will be able to get to this by next week.", "@anuraaga are you still interested ?" ]
[ "Probably not a big deal given it's unlikely to be called frequently, but would be nice to check for `instanceof Collection` and then use `.toArray(...)` directly instead of copying to new list." ]
"2019-10-25T06:04:41Z"
[]
SSLContextBuilder should accept Iterable parameters too
### Expected behavior It is possible to pass parameters like SSL providers to `SslContextBuilder` without having to convert iterables to arrays, which is tedious. ### Actual behavior Need to convert to arrays, except for the `ciphers` method. Wanted to run this by everyone - if the API change makes sense I'd be happy to send a PR. Accepting `Iterable` would make the class easier to use and also make it more consistent. /cc @trustin
[ "handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java index b0474265b0e..f17169238d9 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java @@ -16,21 +16,24 @@ package io.netty.handler.ssl; -import static io.netty.util.internal.ObjectUtil.checkNotNull; - import io.netty.util.internal.UnstableApi; -import java.security.KeyStore; -import java.security.Provider; import javax.net.ssl.KeyManagerFactory; +import javax.net.ssl.SSLEngine; import javax.net.ssl.SSLException; import javax.net.ssl.TrustManagerFactory; - import java.io.File; import java.io.InputStream; +import java.security.KeyStore; import java.security.PrivateKey; +import java.security.Provider; import java.security.cert.X509Certificate; -import javax.net.ssl.SSLEngine; +import java.util.ArrayList; +import java.util.List; + +import static io.netty.util.internal.EmptyArrays.EMPTY_STRINGS; +import static io.netty.util.internal.EmptyArrays.EMPTY_X509_CERTIFICATES; +import static io.netty.util.internal.ObjectUtil.checkNotNull; /** * Builder for configuring a new SslContext for creation. @@ -77,6 +80,17 @@ public static SslContextBuilder forServer(PrivateKey key, X509Certificate... key return new SslContextBuilder(true).keyManager(key, keyCertChain); } + /** + * Creates a builder for new server-side {@link SslContext}. + * + * @param key a PKCS#8 private key + * @param keyCertChain the X.509 certificate chain + * @see #keyManager(PrivateKey, X509Certificate[]) + */ + public static SslContextBuilder forServer(PrivateKey key, Iterable<? extends X509Certificate> keyCertChain) { + return forServer(key, toArray(keyCertChain, EMPTY_X509_CERTIFICATES)); + } + /** * Creates a builder for new server-side {@link SslContext}. 
* @@ -119,6 +133,20 @@ public static SslContextBuilder forServer( return new SslContextBuilder(true).keyManager(key, keyPassword, keyCertChain); } + /** + * Creates a builder for new server-side {@link SslContext}. + * + * @param key a PKCS#8 private key + * @param keyCertChain the X.509 certificate chain + * @param keyPassword the password of the {@code keyFile}, or {@code null} if it's not + * password-protected + * @see #keyManager(File, File, String) + */ + public static SslContextBuilder forServer( + PrivateKey key, String keyPassword, Iterable<? extends X509Certificate> keyCertChain) { + return forServer(key, keyPassword, toArray(keyCertChain, EMPTY_X509_CERTIFICATES)); + } + /** * Creates a builder for new server-side {@link SslContext}. * @@ -215,6 +243,13 @@ public SslContextBuilder trustManager(X509Certificate... trustCertCollection) { return this; } + /** + * Trusted certificates for verifying the remote endpoint's certificate, {@code null} uses the system default. + */ + public SslContextBuilder trustManager(Iterable<? extends X509Certificate> trustCertCollection) { + return trustManager(toArray(trustCertCollection, EMPTY_X509_CERTIFICATES)); + } + /** * Trusted manager for verifying the remote endpoint's certificate. {@code null} uses the system default. */ @@ -257,6 +292,17 @@ public SslContextBuilder keyManager(PrivateKey key, X509Certificate... keyCertCh return keyManager(key, null, keyCertChain); } + /** + * Identifying certificate for this host. {@code keyCertChain} and {@code key} may + * be {@code null} for client contexts, which disables mutual authentication. + * + * @param key a PKCS#8 private key + * @param keyCertChain an X.509 certificate chain + */ + public SslContextBuilder keyManager(PrivateKey key, Iterable<? extends X509Certificate> keyCertChain) { + return keyManager(key, toArray(keyCertChain, EMPTY_X509_CERTIFICATES)); + } + /** * Identifying certificate for this host. 
{@code keyCertChainFile} and {@code keyFile} may * be {@code null} for client contexts, which disables mutual authentication. @@ -341,6 +387,20 @@ public SslContextBuilder keyManager(PrivateKey key, String keyPassword, X509Cert return this; } + /** + * Identifying certificate for this host. {@code keyCertChain} and {@code key} may + * be {@code null} for client contexts, which disables mutual authentication. + * + * @param key a PKCS#8 private key file + * @param keyPassword the password of the {@code key}, or {@code null} if it's not + * password-protected + * @param keyCertChain an X.509 certificate chain + */ + public SslContextBuilder keyManager(PrivateKey key, String keyPassword, + Iterable<? extends X509Certificate> keyCertChain) { + return keyManager(key, keyPassword, toArray(keyCertChain, EMPTY_X509_CERTIFICATES)); + } + /** * Identifying manager for this host. {@code keyManagerFactory} may be {@code null} for * client contexts, which disables mutual authentication. Using a {@link KeyManagerFactory} @@ -427,6 +487,15 @@ public SslContextBuilder protocols(String... protocols) { return this; } + /** + * The TLS protocol versions to enable. + * @param protocols The protocols to enable, or {@code null} to enable the default protocols. + * @see SSLEngine#setEnabledCipherSuites(String[]) + */ + public SslContextBuilder protocols(Iterable<String> protocols) { + return protocols(toArray(protocols, EMPTY_STRINGS)); + } + /** * {@code true} if the first write request shouldn't be encrypted. */ @@ -464,4 +533,15 @@ public SslContext build() throws SSLException { ciphers, cipherFilter, apn, protocols, sessionCacheSize, sessionTimeout, enableOcsp, keyStoreType); } } + + private static <T> T[] toArray(Iterable<? extends T> iterable, T[] prototype) { + if (iterable == null) { + return null; + } + final List<T> list = new ArrayList<T>(); + for (T element : iterable) { + list.add(element); + } + return list.toArray(prototype); + } }
null
test
val
"2019-10-24T14:57:00"
"2019-06-28T08:39:25Z"
anuraaga
val
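The record above adds `Iterable` overloads to `SslContextBuilder` that simply copy into a list and delegate to the existing array/varargs methods, with `null` mapped to a `null` array so "use the system default" semantics survive. A self-contained sketch of that bridge helper (standalone class name is mine; the method body follows the gold patch's `toArray`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// The Iterable-to-array bridge pattern from the patch: Iterable overloads
// convert once and delegate to the array-based API, preserving null as
// "defaults" rather than turning it into an empty array.
public final class IterableToArray {
    private IterableToArray() { }

    public static <T> T[] toArray(Iterable<? extends T> iterable, T[] prototype) {
        if (iterable == null) {
            return null; // null keeps the "system default" behaviour downstream
        }
        List<T> list = new ArrayList<>();
        for (T element : iterable) {
            list.add(element);
        }
        return list.toArray(prototype);
    }

    public static void main(String[] args) {
        String[] out = toArray(Arrays.asList("TLSv1.2", "TLSv1.3"), new String[0]);
        if (out.length != 2 || !"TLSv1.2".equals(out[0])) throw new AssertionError();
        if (toArray(null, new String[0]) != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note the design choice flagged in the review comment on this record: a `Collection` input could call `.toArray(...)` directly instead of copying, but since builder methods are called rarely, the simple copy was kept.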
netty/netty/9725_9729
netty/netty
netty/netty/9725
netty/netty/9729
[ "keyword_pr_to_issue" ]
8d99aa1235d07376f9df3ae3701692091117725a
82376fd889b3f34af318bf48eb28673ef5789a31
[ "processSelectedKey should be an hot path and very likely to be compiled at C2 level so I don't think it would make any difference perf-wise. For readability is fine, agree :)", "@MrYangxf sounds good... Can you please submit a PR ?", "@normanmaurer Of course :)" ]
[ "this `return` can be removed now. ", "this seems to violate the original logic. The healthy channel may be closed when `eventLoop != this`", "@MrYangxf not sure I understand your concern... With your change `return` is the last statement in the method which makes it redundant.. What I am missing ?", "@normanmaurer ...`return` is contained in the `if(!k.isValid())` statement instead of the last line of the method...", "@MrYangxf doh! stupid me ... I did not notice that there is another scope. You are right, of course. " ]
"2019-10-30T02:29:46Z"
[]
A redundant comparison operation
Hi, when I read the source code, I happened to notice this https://github.com/netty/netty/blob/ff2a7929235257642d8f63edb1c580347307f3ee/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java#L669-L674 The `eventLoop == null` is unnecessary. Although the c2 compiler optimizes it, c1 doesn’t seem to optimize it. Though it’s a small problem,it doesn’t look elegant enough. equivalent ```java if (eventLoop == this) { unsafe.close(unsafe.voidPromise()); } return; ```
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[ "transport/src/main/java/io/netty/channel/nio/NioEventLoop.java" ]
[]
diff --git a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java index ace430afc95..36a0b76ddc0 100644 --- a/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java +++ b/transport/src/main/java/io/netty/channel/nio/NioEventLoop.java @@ -666,11 +666,10 @@ private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) { // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is // still healthy and should not be closed. // See https://github.com/netty/netty/issues/5125 - if (eventLoop != this || eventLoop == null) { - return; + if (eventLoop == this) { + // close the channel if the key is not valid anymore + unsafe.close(unsafe.voidPromise()); } - // close the channel if the key is not valid anymore - unsafe.close(unsafe.voidPromise()); return; }
null
val
val
"2019-10-29T20:48:18"
"2019-10-29T09:18:38Z"
MrYangxf
val
netty/netty/9738_9742
netty/netty
netty/netty/9738
netty/netty/9742
[ "keyword_issue_to_pr", "keyword_pr_to_issue" ]
656371ee733454f9cf7691f60d2d1acd16c41c9a
369e667427dbadca95da935cbe8404b3691c68cb
[ "@bsideup can you have a look ? It works with all other JDK version that we use on the CI :/", "@normanmaurer JDK13+ requires a flag ( `-XX:+AllowRedefinitionToAddDeleteMethod` ) to be set, see https://github.com/reactor/BlockHound/issues/33 for more details.\r\n\r\nWe're still investigating what to do about it, since it was a behaviour change in OpenJDK's instrumentation logic...", "@bsideup I see... I will add these to the java13 profile then for now.", "@bsideup this did not work :(\r\n\r\n```\r\n[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (default-test) on project netty-transport-blockhound-tests: There are test failures.\r\n[ERROR]\r\n[ERROR] Please refer to /Users/norman/Documents/workspace/netty/transport-blockhound-tests/target/surefire-reports for the individual test results.\r\n[ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.\r\n[ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?\r\n[ERROR] Command was /bin/sh -c cd /Users/norman/Documents/workspace/netty/transport-blockhound-tests && /Users/norman/.jabba/jdk/adopt@1.13.0-1/Contents/Home/bin/java -server -dsa -da -ea:io.netty... -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+AllowRedefinitionToAddDeleteMethod -D_ -D_ -D_ -D_ -jar /Users/norman/Documents/workspace/netty/transport-blockhound-tests/target/surefire/surefirebooter16207235744509438163.jar /Users/norman/Documents/workspace/netty/transport-blockhound-tests/target/surefire 2019-10-31T11-59-30_222-jvmRun1 surefire15970040776949416610tmp surefire_08978095701992529987tmp\r\n[ERROR] Error occurred in starting fork, check output in log\r\n[ERROR] Process Exit Code: 1\r\n[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. 
VM crash or System.exit called?\r\n[ERROR] Command was /bin/sh -c cd /Users/norman/Documents/workspace/netty/transport-blockhound-tests && /Users/norman/.jabba/jdk/adopt@1.13.0-1/Contents/Home/bin/java -server -dsa -da -ea:io.netty... -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+AllowRedefinitionToAddDeleteMethod -D_ -D_ -D_ -D_ -jar /Users/norman/Documents/workspace/netty/transport-blockhound-tests/target/surefire/surefirebooter16207235744509438163.jar /Users/norman/Documents/workspace/netty/transport-blockhound-tests/target/surefire 2019-10-31T11-59-30_222-jvmRun1 surefire15970040776949416610tmp surefire_08978095701992529987tmp\r\n[ERROR] Error occurred in starting fork, check output in log\r\n[ERROR] Process Exit Code: 1\r\n[ERROR] \tat org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)\r\n[ERROR] \tat org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)\r\n[ERROR] \tat org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)\r\n[ERROR] \tat org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)\r\n[ERROR] \tat org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)\r\n[ERROR] \tat org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)\r\n[ERROR] \tat org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)\r\n[ERROR] \tat 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)\r\n[ERROR] \tat org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)\r\n[ERROR] \tat org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)\r\n[ERROR] \tat org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)\r\n[ERROR] \tat org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)\r\n[ERROR] \tat org.apache.maven.cli.MavenCli.execute(MavenCli.java:955)\r\n[ERROR] \tat org.apache.maven.cli.MavenCli.doMain(MavenCli.java:290)\r\n[ERROR] \tat org.apache.maven.cli.MavenCli.main(MavenCli.java:194)\r\n[ERROR] \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n[ERROR] \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n[ERROR] \tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n[ERROR] \tat java.base/java.lang.reflect.Method.invoke(Method.java:567)\r\n[ERROR] \tat org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)\r\n[ERROR] \tat org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)\r\n[ERROR] \tat org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)\r\n[ERROR] \tat org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)\r\n[ERROR] \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n[ERROR] \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n[ERROR] \tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n[ERROR] \tat java.base/java.lang.reflect.Method.invoke(Method.java:567)\r\n[ERROR] \tat 
org.apache.maven.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:39)\r\n[ERROR] \tat org.apache.maven.wrapper.WrapperExecutor.execute(WrapperExecutor.java:122)\r\n[ERROR] \tat org.apache.maven.wrapper.MavenWrapperMain.main(MavenWrapperMain.java:60)\r\n[ERROR]\r\n[ERROR] -> [Help 1]\r\n[ERROR]\r\n[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.\r\n[ERROR] Re-run Maven using the -X switch to enable full debug logging.\r\n[ERROR]\r\n[ERROR] For more information about the errors and possible solutions, please read the following articles:\r\n[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException\r\n```", "@normanmaurer on it, trying locally", "@bsideup thanks... PRs welcome :)", "@normanmaurer interesting. It works when I run BlockHound's tests with Java 13 but fails in Netty.\r\nI will continue the investigation and eventually send a PR.\r\n\r\nTrying to find how to check that the flag is actually set in the tests' JVM...", "status update:\r\nI am still unable to fix it in Netty, although it seems to work in isolation:\r\nhttps://gist.github.com/bsideup/02d09c5723d956b27c958fa881de5495\r\n\r\nChecking whether it is a problem with BlockHound or with the whitelisting we do in the integration, so that the call is instrumented but not reported\r\n\r\n**update 2:**\r\nrunning it from IntelliJ with JDK13 + `-XX:+AllowRedefinitionToAddDeleteMethods` works. 
Seems to be something related to Maven...", "Ok, it seems that Maven was using an old SNAPSHOT from pre-BlockHound era, this is why running it with IDEA was giving me a positive result but with Maven I was getting errors.\r\n\r\nI submitted #9742 with a simple profile-based fix, and will submit another one with a test checking that `ServiceLoader` can load the integration", "Ok, I think now we're good :) #9743 is an optional addition to make it easier to debug issues in future, since I spent quite some time fighting with JDK13 while the problem was in my Maven setup and Maven's vision on how modular projects should be built 😅" ]
[]
"2019-10-31T14:56:23Z"
[]
NettyBlockHoundIntegrationTest.testBlockingCallsInNettyThreads fails with JDK 13
``` [INFO] --- xml-maven-plugin:1.0.1:check-format (check-style) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-dependency-plugin:2.10:get (get-jetty-alpn-agent) @ netty-transport-blockhound-tests --- [INFO] Resolving org.mortbay.jetty.alpn:jetty-alpn-agent:jar:2.0.8 with transitive dependencies [INFO] [INFO] --- build-helper-maven-plugin:1.10:parse-version (parse-version) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-antrun-plugin:1.8:run (write-version-properties) @ netty-transport-blockhound-tests --- [INFO] Executing tasks main: [echo] Current commit: 47f82b6 on 2019-10-30 19:36:03 +0100 [mkdir] Created dir: /code/transport-blockhound-tests/target/classes/META-INF [propertyfile] Creating new property file: /code/transport-blockhound-tests/target/classes/META-INF/io.netty.versions.properties [INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @ netty-transport-blockhound-tests --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /code/transport-blockhound-tests/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ netty-transport-blockhound-tests --- [INFO] No sources to compile [INFO] [INFO] --- forbiddenapis:2.2:check (check-forbidden-apis) @ netty-transport-blockhound-tests --- [INFO] Skipping forbidden-apis checks. 
[INFO] [INFO] --- animal-sniffer-maven-plugin:1.16:check (default) @ netty-transport-blockhound-tests --- [INFO] Checking unresolved references to org.codehaus.mojo.signature:java18:1.0 [INFO] [INFO] >>> maven-bundle-plugin:2.5.4:manifest (generate-manifest) > process-classes @ netty-transport-blockhound-tests >>> [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-tools) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-checkstyle-plugin:3.0.0:check (check-style) @ netty-transport-blockhound-tests --- [INFO] Starting audit... Audit done. [INFO] [INFO] --- xml-maven-plugin:1.0.1:check-format (check-style) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-dependency-plugin:2.10:get (get-jetty-alpn-agent) @ netty-transport-blockhound-tests --- [INFO] Resolving org.mortbay.jetty.alpn:jetty-alpn-agent:jar:2.0.8 with transitive dependencies [INFO] [INFO] --- build-helper-maven-plugin:1.10:parse-version (parse-version) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-antrun-plugin:1.8:run (write-version-properties) @ netty-transport-blockhound-tests --- [INFO] Executing tasks main: [echo] Current commit: 47f82b6 on 2019-10-30 19:36:03 +0100 [delete] Deleting: /code/transport-blockhound-tests/target/classes/META-INF/io.netty.versions.properties [propertyfile] Creating new property file: /code/transport-blockhound-tests/target/classes/META-INF/io.netty.versions.properties [INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ netty-transport-blockhound-tests --- [INFO] [INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @ netty-transport-blockhound-tests --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory /code/transport-blockhound-tests/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ netty-transport-blockhound-tests --- [INFO] No sources to compile [INFO] [INFO] --- forbiddenapis:2.2:check (check-forbidden-apis) @ netty-transport-blockhound-tests --- [INFO] Skipping forbidden-apis checks. [INFO] [INFO] --- animal-sniffer-maven-plugin:1.16:check (default) @ netty-transport-blockhound-tests --- [INFO] Checking unresolved references to org.codehaus.mojo.signature:java18:1.0 [INFO] [INFO] <<< maven-bundle-plugin:2.5.4:manifest (generate-manifest) < process-classes @ netty-transport-blockhound-tests <<< [INFO] [INFO] [INFO] --- maven-bundle-plugin:2.5.4:manifest (generate-manifest) @ netty-transport-blockhound-tests --- [WARNING] Manifest io.netty:netty-transport-blockhound-tests:jar:5.0.0.Final-SNAPSHOT : Unused Export-Package instructions: [io.netty.*] [WARNING] Manifest io.netty:netty-transport-blockhound-tests:jar:5.0.0.Final-SNAPSHOT : Unused Import-Package instructions: [sun.misc.*, sun.security.*] [INFO] [INFO] --- maven-resources-plugin:3.0.1:testResources (default-testResources) @ netty-transport-blockhound-tests --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /code/transport-blockhound-tests/src/test/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ netty-transport-blockhound-tests --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 1 source file to /code/transport-blockhound-tests/target/test-classes [INFO] [INFO] --- forbiddenapis:2.2:testCheck (check-forbidden-test-apis) @ netty-transport-blockhound-tests --- [INFO] Skipping forbidden-apis checks. 
[INFO] [INFO] --- maven-surefire-plugin:2.22.1:test (default-test) @ netty-transport-blockhound-tests --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file /code/transport-blockhound-tests/target/surefire-reports/2019-10-31T01-09-13_174-jvmRun1.dumpstream [INFO] Running io.netty.util.internal.NettyBlockHoundIntegrationTest 01:33:52.851 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework 01:33:52.866 [main] DEBUG i.n.u.i.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024 01:33:52.866 [main] DEBUG i.n.u.i.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.521 s <<< FAILURE! - in io.netty.util.internal.NettyBlockHoundIntegrationTest [ERROR] testBlockingCallsInNettyThreads(io.netty.util.internal.NettyBlockHoundIntegrationTest) Time elapsed: 0.32 s <<< FAILURE! 
java.lang.AssertionError: Expected an exception due to a blocking call but none was thrown at org.junit.Assert.fail(Assert.java:88) at io.netty.util.internal.NettyBlockHoundIntegrationTest.testBlockingCallsInNettyThreads(NettyBlockHoundIntegrationTest.java:48) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) [INFO] [INFO] Results: [INFO] [ERROR] Failures: [ERROR] NettyBlockHoundIntegrationTest.testBlockingCallsInNettyThreads:48 Expected an exception due to a blocking call but none was thrown [INFO] [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0 [INFO] ```
[ "transport-blockhound-tests/pom.xml" ]
[ "transport-blockhound-tests/pom.xml" ]
[]
diff --git a/transport-blockhound-tests/pom.xml b/transport-blockhound-tests/pom.xml index 9105858435e..c0a0c8d3ddd 100644 --- a/transport-blockhound-tests/pom.xml +++ b/transport-blockhound-tests/pom.xml @@ -31,6 +31,18 @@ <name>Netty/Transport/BlockHound/Tests</name> + <profiles> + <profile> + <id>java13</id> + <activation> + <jdk>13</jdk> + </activation> + <properties> + <argLine.common>-XX:+AllowRedefinitionToAddDeleteMethods</argLine.common> + </properties> + </profile> + </profiles> + <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target>
null
train
test
"2019-10-31T12:16:24"
"2019-10-31T09:44:33Z"
normanmaurer
val
netty/netty/8279_9804
netty/netty
netty/netty/8279
netty/netty/9804
[ "keyword_pr_to_issue" ]
fb5e2cd3aa5644c294905fce3b45c2278f17e15d
660611c4507def91ffd6bf13f294e21826b584a5
[ "@rhenwood-arm if we can easily cross-compile there we could do it I guess.", "@rhenwood-arm btw if you do so you may also be interested in providing a pr for netty-tcnative.", "Thanks for the prompt response and guidance @normanmaurer !\r\n\r\nI'll spend some time looking into this in detail.", "@rhenwood-arm any update ?", "@rhenwood-arm ping", "Hi @normanmaurer \r\nI am working with a colleague on this activity, so I've reached out to them to see where they are. Still waiting for an update.\r\n\r\nNOTE: I am assuming that this is a 'nice to have' for the project today. Please let me know if you have observed other interest in this work which will help me prioritize it.\r\n", "@rhenwood-arm you are right... nice to have and not urgent at all. Just wanted to see where we are here.", "We are using netty on ARM64v8. and waiting this very important feature.", "Hi @rhenwood-arm do you have start some work around this topic ? I'm also interested to provide cross compilation for arm v7 32 bit architecture. \r\n", "Hi,\r\n\r\nI don't have time to work on this currently, but I will ask to see if I can get some help...", "It looks like the EPEL 6 repository would need to be added to the release requirement to at least provide the GCC cross compilers for AArch64 and Armv7.\r\n\r\n@normanmaurer would you be amenable to that?\r\n", "@rhenwood-arm sounds fair :)", "@rhenwood-arm ping.. ", "Thanks for the reminder @normanmaurer . At a high level, I think the plan to move forward is:\r\n\r\n1. Add EPEL as a release dependency to this project. This will provide the cross compiler.\r\n2. Update the pom here: https://github.com/netty/netty/blob/4.1/transport-native-epoll/pom.xml so that it uses the cross compiler to build for AArch64 and Armv7 as well.\r\n3. 
Update these instructions: https://netty.io/wiki/releasing-new-version.html so the new native libraries are correctly packaged during release.\r\n\r\nCan this plan be improved?\r\n\r\nI do not currently have time to work on this. I see if I can get some help. @aygalinc do you have time to help?", "We have the same requirement about netty on aarch64 as well. Any progress? Anything I can do to help with the ARM release?", "@rhenwood-arm @normanmaurer Can I take over this issue? I'd like to do the work that @rhenwood-arm pointed.", "Sure 🙏\n\n> Am 13.11.2019 um 00:46 schrieb wangxiyuan <notifications@github.com>:\n> \n> \n> @rhenwood-arm @normanmaurer Can I take over this issue? I'd like to do the work that @rhenwood-arm pointed.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "@wangxiyuan : please do! :)", "I tested locally with java_1.8.0_222 and maven 3.6.0 on an ARM machine. The build and tests for 4.1.44 are passed at all. I'll test cross-compile next. I'll create the PR for it if everything works well.\r\n", "@rhenwood-arm @normanmaurer I tested cross compile locally. The official yum package for `gcc-aarch64-linux-gnu` doesn't work. It missed lots of files. I used linaro ones instead. The minimum version is 4.9: https://releases.linaro.org/components/toolchain/binaries/4.9-2016.02/aarch64-linux-gnu/ and it relies on glibc 2.14 which CentOS 6.9 doesn't support by default. So I bumped glibc from `2.13` to `2.14` by hand. Then with the linaro gcc, the cross compile passed. \r\n\r\nAs you can see, since the tools on CentOS 6 is a little old by default, we can't cross compile by default. How do you think about the step? Is it acceptable?", "@wangxiyuan unfortunately bumping up GLIBC is not an option imho. So what we could do is to only support it on centos7 and newer for arm. 
", "@normanmaurer Using CentOS 7 is a better way of cause.\r\n\r\nI'm not sure how `transport-native-epoll` jar is released to Maven Repo. But I find that it relies on CentOS 6.9 if I'm correct. https://github.com/netty/netty/blob/4.1/transport-native-epoll/pom.xml#L96\r\nIs it possible to bump to 7?", "Nope it is not possible to require centos7 in general. I wonder if we can just build on both and release\n\n> Am 20.11.2019 um 10:16 schrieb wangxiyuan <notifications@github.com>:\n> \n> \n> @normanmaurer Using CentOS 7 is a better way of cause.\n> \n> I'm not sure how transport-native-epoll jar is released to Maven Repo. But I find that it relies on CentOS 6.9 if I'm correct. https://github.com/netty/netty/blob/4.1/transport-native-epoll/pom.xml#L96\n> Is it possible to bump to 7?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "@normanmaurer make sense. Let me try 7 now.\r\n\r\nBTW, I'm not sure how to support both 6 and 7 in netty. Will there be two jar, one for 6 and another for 7?", "@wangxiyuan there would be one jar for arm and one for x86 I guess . And the one for arm would be centos7 ?", "The glibc version seems to be the critical factor in providing architecture support. I suggest separate multiple releases of the same code-base are distinguished with that in-mind. One obvious approach is to include the glibc version in the release name. So far, I personally think will be the simplest way to avoid confusion and gives a good path to future-proofing new architectures.\r\n\r\nI can also see labeling with centos6, centos7 etc (or el6, el7, el8) would be an OK alternative for me personally... but I do recognize there are other distributions out there that people might choose to run :)", "@rhenwood-arm @normanmaurer I just added a new profile `linux-aarch64` for `transport-native-epoll` and its dependents `transport-native-unix-common` to build the aarch64 packge. 
I tested it locally already. Here is the step to cross compile:\r\n1. Make sure the OS is CentOS 7.6 (which contains glibc>=2.14 by default )\r\n2. Download aarch64-gcc toolchin https://releases.linaro.org/components/toolchain/binaries/4.9-2016.02/aarch64-linux-gnu/gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu.tar.xz\r\n3. Unpack it and add its `bin` folder into the user's default PATH( `maven-hawtjni-plugin` will use the default PATH to find aarch64-gcc, `export` doesn't work.)\r\n4. go into `transport-native-unix-common`, try `mvn clean install -Plinux-aarch64`.\r\n5. go into `transport-native-epoll`, try `mvn clean install -Plinux-aarch64`.\r\n\r\nIs that OK?\r\n\r\nAnd I don't know how netty releases jar to maven center. Should I add some release change to parent pom.xml?", "@wangxiyuan left a comment on the PR. I think we should add a docker file for this and then we can make it happen as part of the release process.", "sure, Let me add one. Thanks for your quick response.", "Any progress on this? This seems to build fine on an AWS a1 machine running RHEL8. Getting the binaries for aarch64 released would be helpful for people wanting to run Hadoop/Spark etc on Arm machines." ]
[ "This complete plugin config is not needed anymore :) Just remove it", "I don't know how to release the package. Any suggestion?", "Done", "@wangxiyuan can you make all the docker / docker-compose stuff consistent with what we have atm ? If you don't have time I can also do it tho.", "I can but I don't know how to config the release management. So if you can take over this job. I'll be very appreciate. Thanks.", "I will", "Any progress, or anything elase I can do? I can update the dockerfile of cause.", "s/Dockfile/Dockerfile/", "Is there a https mirror available?", "Actually we can drop this completely, Netty has mvnw (Maven wrapper) included in the netty repo.", "./mvnw here instead of mvn", "x86_64 this reads 32bit..", "extract all occurences of 4.9-2016.02 into variable? eg. ENV TOOLCHAIN_VERSION=4.9-2016.02", "why was this dependency included? shouldn't it be handled here? https://github.com/netty/netty/blob/a9731b43e0e86359319233416fa815ed3a79c564/transport-native-epoll/pom.xml#L128", "This is github display problem. See L347. I added the `dependencies` there which will be handled by L275", "Will do.", "@wangxiyuan why are we skipping tests here?", "Since this is cross compile, the test environment is still on X86. So when run test, it still require `libnetty_tcnative_linux_x86_64.so` which will lead the error like\r\n\r\n```\r\njava.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64_fedora, netty_tcnative_linux_x86_64, netty_tcnative_x86_64, netty_tcnative]\r\n```\r\n", "can you add a comment explaining that?", "```suggestion\r\n # since we are cross compiling netty_tcnative as arch64 it cannot be loaded on x86_64 -> skipTests\r\n cross-compile-aarch64:\r\n```", "I added a suggested change.", "Add it at L55. Please take a look", "this should be removed... we should expect that will be present and mount it via our docker file just as we do for other docker-compose things. ", "Done" ]
"2019-11-25T06:41:21Z"
[]
aarch64 native bits.
Hi, It is valuable for me to have native aarch64 bits included in the release artifacts. I see there has been interest in a cross-compile previously: https://github.com/netty/netty/issues/7459 It appears that the release is made on a RHEL 6 x86_64 compatible and I would like to explore making the necessary changes to the project to add in a cross-compile step for aarch64. Assuming this is accurate, is the project open to a pull request to generate aarch64 bits during the release cycle? best regards, Richard
[ "all/pom.xml", "docker/README.md", "docker/docker-compose.yaml", "transport-native-epoll/pom.xml", "transport-native-unix-common/pom.xml" ]
[ "all/pom.xml", "docker/Dockerfile.cross_compile_aarch64", "docker/README.md", "docker/docker-compose.yaml", "transport-native-epoll/pom.xml", "transport-native-unix-common/pom.xml" ]
[]
diff --git a/all/pom.xml b/all/pom.xml index afbdcb2a052..ebf7d21dc42 100644 --- a/all/pom.xml +++ b/all/pom.xml @@ -57,6 +57,14 @@ <scope>compile</scope> <optional>true</optional> </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport-native-epoll</artifactId> + <version>${project.version}</version> + <classifier>linux-aarch_64</classifier> + <scope>compile</scope> + <optional>true</optional> + </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>netty-transport-native-kqueue</artifactId> @@ -89,6 +97,14 @@ <scope>compile</scope> <optional>true</optional> </dependency> + <dependency> + <groupId>${project.groupId}</groupId> + <artifactId>netty-transport-native-epoll</artifactId> + <version>${project.version}</version> + <classifier>linux-aarch_64</classifier> + <scope>compile</scope> + <optional>true</optional> + </dependency> <dependency> <groupId>${project.groupId}</groupId> <artifactId>netty-transport-native-kqueue</artifactId> diff --git a/docker/Dockerfile.cross_compile_aarch64 b/docker/Dockerfile.cross_compile_aarch64 new file mode 100644 index 00000000000..077e46262a3 --- /dev/null +++ b/docker/Dockerfile.cross_compile_aarch64 @@ -0,0 +1,18 @@ +FROM centos:7.6.1810 + +ARG gcc_version=4.9-2016.02 +ENV GCC_VERSION $gcc_version + +# Install requirements +RUN yum install -y wget tar git make redhat-lsb-core autoconf automake libtool glibc-devel libaio-devel openssl-devel apr-devel lksctp-tools + +# Install Java +RUN yum install -y java-1.8.0-openjdk-devel + +# Install aarch64 gcc toolchain +RUN set -x && \ + wget https://releases.linaro.org/components/toolchain/binaries/$GCC_VERSION/aarch64-linux-gnu/gcc-linaro-$GCC_VERSION-x86_64_aarch64-linux-gnu.tar.xz && \ + tar xvf gcc-linaro-$GCC_VERSION-x86_64_aarch64-linux-gnu.tar.xz + +ENV PATH="/gcc-linaro-$GCC_VERSION-x86_64_aarch64-linux-gnu/bin:${PATH}" +ENV JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk/" diff --git a/docker/README.md 
b/docker/README.md index d96242b3212..378c673c844 100644 --- a/docker/README.md +++ b/docker/README.md @@ -16,4 +16,11 @@ docker-compose -f docker/docker-compose.yaml -f docker/docker-compose.centos-6.1 docker-compose -f docker/docker-compose.yaml -f docker/docker-compose.centos-7.111.yaml run test ``` +## aarch64 cross compile for transport-native-epoll on X86_64 + +``` +docker-compose -f docker/docker-compose.yaml run cross-compile-aarch64 +``` +The default version of aarch64 gcc is `4.9-2016.02`. Update the parameter `gcc_version` in `docker-compose.yaml` to use a version you want. + etc, etc diff --git a/docker/docker-compose.yaml b/docker/docker-compose.yaml index 5d9ec200ae5..f56cb53aaab 100644 --- a/docker/docker-compose.yaml +++ b/docker/docker-compose.yaml @@ -40,3 +40,23 @@ services: - ..:/code:delegated - ~/.m2:/root/.m2:delegated entrypoint: /bin/bash + + cross-compile-aarch64-runtime-setup: + image: netty:cross_compile_aarch64 + build: + context: . + dockerfile: Dockerfile.cross_compile_aarch64 + args: + gcc_version : "4.9-2016.02" + + cross-compile-aarch64: + image: netty:cross_compile_aarch64 + depends_on: [cross-compile-aarch64-runtime-setup] + volumes: + - ~/.ssh:/root/.ssh:delegated + - ~/.gnupg:/root/.gnupg:delegated + - ..:/code:delegated + - ~/.m2:/root/.m2:delegated + # Since we are cross compiling netty-transport-native-epoll as aarch64 which cannot be loaded on x86_64, we add `skipTests` here to skip the test. 
+ command: /bin/bash -cl "pushd ./transport-native-unix-common && ../mvnw clean install -Plinux-aarch64 && popd && pushd ./transport-native-epoll && ../mvnw clean install -Plinux-aarch64 -DskipTests && popd" + working_dir: /code diff --git a/transport-native-epoll/pom.xml b/transport-native-epoll/pom.xml index 5ae8bde4d50..5c94a8511cb 100644 --- a/transport-native-epoll/pom.xml +++ b/transport-native-epoll/pom.xml @@ -196,6 +196,154 @@ </plugins> </build> + <dependencies> + <dependency> + <groupId>io.netty</groupId> + <artifactId>netty-transport-native-unix-common</artifactId> + <version>${project.version}</version> + <classifier>${jni.classifier}</classifier> + <!-- + The unix-common with classifier dependency is optional because it is not a runtime dependency, but a build time + dependency to get the static library which is built directly into the shared library generated by this project. + --> + <optional>true</optional> + </dependency> + </dependencies> + </profile> + <profile> + <id>linux-aarch64</id> + <properties> + <jni.classifier>${os.detected.name}-aarch64</jni.classifier> + </properties> + <build> + <pluginManagement> + <plugins> + <plugin> + <artifactId>maven-enforcer-plugin</artifactId> + <version>1.4.1</version> + <dependencies> + <!-- Provides the 'requireFilesContent' enforcer rule. --> + <dependency> + <groupId>com.ceilfors.maven.plugin</groupId> + <artifactId>enforcer-rules</artifactId> + <version>1.2.0</version> + </dependency> + </dependencies> + </plugin> + </plugins> + </pluginManagement> + <plugins> + <plugin> + <artifactId>maven-enforcer-plugin</artifactId> + <executions> + <execution> + <id>enforce-release-environment</id> + <goals> + <goal>enforce</goal> + </goals> + <configuration> + <rules> + <requireProperty> + <regexMessage> + Cross compile and Release process must be performed on linux-x86_64. 
+ </regexMessage> + <property>os.detected.classifier</property> + <regex>^linux-x86_64.*</regex> + </requireProperty> + <requireFilesContent> + <message> + Cross compile and Release process must be performed on RHEL 7.6 or its derivatives. + </message> + <files> + <file>/etc/redhat-release</file> + </files> + <content>release 7.6</content> + </requireFilesContent> + </rules> + </configuration> + </execution> + </executions> + </plugin> + <plugin> + <artifactId>maven-dependency-plugin</artifactId> + <executions> + <!-- unpack the unix-common static library and include files --> + <execution> + <id>unpack</id> + <phase>generate-sources</phase> + <goals> + <goal>unpack-dependencies</goal> + </goals> + <configuration> + <includeGroupIds>${project.groupId}</includeGroupIds> + <includeArtifactIds>netty-transport-native-unix-common</includeArtifactIds> + <classifier>${jni.classifier}</classifier> + <outputDirectory>${unix.common.lib.dir}</outputDirectory> + <includes>META-INF/native/**</includes> + <overWriteReleases>false</overWriteReleases> + <overWriteSnapshots>true</overWriteSnapshots> + </configuration> + </execution> + </executions> + </plugin> + + <plugin> + <groupId>org.fusesource.hawtjni</groupId> + <artifactId>maven-hawtjni-plugin</artifactId> + <executions> + <execution> + <id>build-native-lib</id> + <configuration> + <name>netty_transport_native_epoll_aarch_64</name> + <nativeSourceDirectory>${nativeSourceDirectory}</nativeSourceDirectory> + <libDirectory>${project.build.outputDirectory}</libDirectory> + <!-- We use Maven's artifact classifier instead. + This hack will make the hawtjni plugin to put the native library + under 'META-INF/native' rather than 'META-INF/native/${platform}'. 
--> + <platform>.</platform> + <configureArgs> + <arg>${jni.compiler.args.ldflags}</arg> + <arg>${jni.compiler.args.cflags}</arg> + <configureArg>--libdir=${project.build.directory}/native-build/target/lib</configureArg> + <configureArg>--host=aarch64-linux-gnu</configureArg> + </configureArgs> + </configuration> + <goals> + <goal>generate</goal> + <goal>build</goal> + </goals> + </execution> + </executions> + </plugin> + <plugin> + <artifactId>maven-jar-plugin</artifactId> + <executions> + <!-- Generate the JAR that contains the native library in it. --> + <execution> + <id>native-jar</id> + <goals> + <goal>jar</goal> + </goals> + <configuration> + <archive> + <manifest> + <addDefaultImplementationEntries>true</addDefaultImplementationEntries> + </manifest> + <manifestEntries> + <Bundle-NativeCode>META-INF/native/libnetty_transport_native_epoll_aarch_64.so; osname=Linux; processor=aarch_64,*</Bundle-NativeCode> + <Automatic-Module-Name>${javaModuleName}</Automatic-Module-Name> + </manifestEntries> + <index>true</index> + <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile> + </archive> + <classifier>${jni.classifier}</classifier> + </configuration> + </execution> + </executions> + </plugin> + </plugins> + </build> + <dependencies> <dependency> <groupId>io.netty</groupId> diff --git a/transport-native-unix-common/pom.xml b/transport-native-unix-common/pom.xml index 6e65bffe1b8..e228266bd87 100644 --- a/transport-native-unix-common/pom.xml +++ b/transport-native-unix-common/pom.xml @@ -207,6 +207,71 @@ </plugins> </build> </profile> + <profile> + <id>linux-aarch64</id> + <properties> + <jni.classifier>${os.detected.name}-aarch64</jni.classifier> + <jni.platform>linux</jni.platform> + <exe.compiler>aarch64-linux-gnu-gcc</exe.compiler> + <exe.archiver>aarch64-linux-gnu-ar</exe.archiver> + </properties> + <build> + <plugins> + <plugin> + <artifactId>maven-antrun-plugin</artifactId> + <executions> + <!-- Build the additional JAR that 
contains the native library. --> + <execution> + <id>native-jar</id> + <phase>package</phase> + <goals> + <goal>run</goal> + </goals> + <configuration> + <target> + <copy todir="${nativeJarWorkdir}"> + <zipfileset src="${defaultJarFile}" /> + </copy> + <copy todir="${nativeJarWorkdir}" includeEmptyDirs="false"> + <zipfileset dir="${nativeLibOnlyDir}" /> + <regexpmapper handledirsep="yes" from="^(?:[^/]+/)*([^/]+)$" to="META-INF/native/lib/\1" /> + </copy> + <copy todir="${nativeJarWorkdir}" includeEmptyDirs="false"> + <zipfileset dir="${nativeIncludeDir}" /> + <regexpmapper handledirsep="yes" from="^(?:[^/]+/)*([^/]+).h$" to="META-INF/native/include/\1.h" /> + </copy> + <jar destfile="${nativeJarFile}" manifest="${nativeJarWorkdir}/META-INF/MANIFEST.MF" basedir="${nativeJarWorkdir}" index="true" excludes="META-INF/MANIFEST.MF,META-INF/INDEX.LIST" /> + <attachartifact file="${nativeJarFile}" classifier="${jni.classifier}" type="jar" /> + </target> + </configuration> + </execution> + <!-- invoke the make file to build a static library --> + <execution> + <id>build-native-lib</id> + <phase>generate-sources</phase> + <goals> + <goal>run</goal> + </goals> + <configuration> + <target> + <exec executable="${exe.make}" failonerror="true" resolveexecutable="true"> + <env key="CC" value="${exe.compiler}" /> + <env key="AR" value="${exe.archiver}" /> + <env key="LIB_DIR" value="${nativeLibOnlyDir}" /> + <env key="OBJ_DIR" value="${nativeObjsOnlyDir}" /> + <env key="JNI_PLATFORM" value="${jni.platform}" /> + <env key="CFLAGS" value="-O3 -Werror -Wno-attributes -fPIC -fno-omit-frame-pointer -Wunused-variable -fvisibility=hidden" /> + <env key="LDFLAGS" value="-Wl,--no-as-needed -lrt" /> + <env key="LIB_NAME" value="${nativeLibName}" /> + </exec> + </target> + </configuration> + </execution> + </executions> + </plugin> + </plugins> + </build> + </profile> <profile> <id>freebsd</id> <activation>
null
train
test
"2020-04-15T10:21:24"
"2018-09-10T20:13:38Z"
rhenwood-arm
val
netty/netty/7210_9856
netty/netty
netty/netty/7210
netty/netty/9856
[ "keyword_pr_to_issue" ]
15b6ed92a03ad8cd5e95bc0971f09afa5094f29c
4138cba86138b390f934e615ad4f96bad621124d
[ "The fix can't just be to use a different Set implementation, the current equals/hashCode of io.netty.handler.codec.http.cookie.DefaultCookie will prevent any Set from ever holding multiple Cookies with the same name, because there is no path or domain to differentiate on. Ideally ServerCookieDecoder::decode would return a List<Cookie>, but that would be a breaking interface change.\r\n\r\nI would recommend adding `value` to the equals/hashCode/compareTo of DefaultCookie. If this is the right approach, I'm happen to provide a PR.\r\n\r\nAdditionally the javadoc for ServerCookieDecoder::decode say Set-Cookie, but I'm pretty sure the class is meant to decode Cookie headers, not Set-Cookie. I would also recommend a LinkedHashSet instead of TreeSet, since there is some value in maintaining the order that the client sent the cookie pairs in.", "> Ideally ServerCookieDecoder::decode would return a List, but that would be a breaking interface change.\r\n\r\nMaybe provide a different set of decode methods and deprecate existing ones. Thoughts?\r\n\r\n> I would recommend adding value to the equals/hashCode/compareTo of DefaultCookie. If this is the right approach, I'm happen to provide a PR.\r\n\r\nWe could also pass a Comparator to the TreeSet constructor that would account for the value.", "I don't even think the decoder should return Cookie classes if only name and value are ever set. At least I got confused and thought I could access domain/path.\r\n\r\nTwo suggestions:\r\n\r\n- decode to Map<String, List<String>> just like QueryStringDecoder#parameters()\r\n\r\n- decode to an Iterable<Map.Entry<String, String>> just like HttpHeaders", "Just hit this today on 4.1.30" ]
[]
"2019-12-07T00:48:30Z"
[]
ServerCookieDecoder does not support duplicate cookies
### Expected behavior According to [RFC-6265 (section 4.2.2)](https://tools.ietf.org/html/rfc6265#section-4.2.2), multiple cookie pairs with the same name are not explicitly excluded from happening. It is even hinted that it should be expected, In particular, if the Cookie header contains two cookies with the same name (e.g., that were set with different Path or Domain attributes), servers SHOULD NOT rely upon the order in which these cookies appear in the header. I would expect that ServerCookieDecoder (and the deprecated CookieDecoder) would return multiple cookies of the same name. This is particularly important because a client might have multiple cookies for different paths or domains, and the client WILL send them all. It's unspecified if clients will send the duplicate cookies in a single Cookie header or multiple Cookie headers, but it is commonly seen in a single Cookie headers, which is why this an issue. ### Actual behavior ServerCookieDecoder will return a single cookie when provided with multiple cookies with identical names. [This happens because a TreeSet is used to collect the Cookies](https://github.com/quidryan/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java#L71) while the DefaultCookie class has a hashCode/equals/compareTo that doesn't take into account the value of the cookie. 
### Steps to reproduce ``` @Test public void testDecodingDuplicateCookies() { String c1 = "myCookie=myValue;"; String c2 = "myCookie=myValue2;"; String c3 = "myCookie=myValue3;"; Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(c1 + c2 + c3); assertEquals(3, cookies.size()); Iterator<Cookie> it = cookies.iterator(); Cookie cookie = it.next(); assertNotNull(cookie); assertEquals("myValue", cookie.value()); cookie = it.next(); assertNotNull(cookie); assertEquals("myValue2", cookie.value()); cookie = it.next(); assertNotNull(cookie); assertEquals("myValue3", cookie.value()); } ``` The `assertEquals(3, cookies.size());` will fail, since only one cookie will be returned. ### Minimal yet complete reproducer code (or URL to code) See above. ### Netty version 4.1.11.FINAL
[ "codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java index cf5349b0863..f56fb919a80 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/cookie/ServerCookieDecoder.java @@ -17,7 +17,10 @@ import static io.netty.util.internal.ObjectUtil.checkNotNull; +import java.util.ArrayList; +import java.util.Collection; import java.util.Collections; +import java.util.List; import java.util.Set; import java.util.TreeSet; @@ -56,20 +59,41 @@ private ServerCookieDecoder(boolean strict) { super(strict); } + /** + * Decodes the specified Set-Cookie HTTP header value into a {@link Cookie}. Unlike {@link #decode(String)}, this + * includes all cookie values present, even if they have the same name. + * + * @return the decoded {@link Cookie} + */ + public List<Cookie> decodeAll(String header) { + List<Cookie> cookies = new ArrayList<Cookie>(); + decode(cookies, header); + return Collections.unmodifiableList(cookies); + } + /** * Decodes the specified Set-Cookie HTTP header value into a {@link Cookie}. * * @return the decoded {@link Cookie} */ public Set<Cookie> decode(String header) { + Set<Cookie> cookies = new TreeSet<Cookie>(); + decode(cookies, header); + return cookies; + } + + /** + * Decodes the specified Set-Cookie HTTP header value into a {@link Cookie}. + * + * @return the decoded {@link Cookie} + */ + private void decode(Collection<? super Cookie> cookies, String header) { final int headerLen = checkNotNull(header, "header").length(); if (headerLen == 0) { - return Collections.emptySet(); + return; } - Set<Cookie> cookies = new TreeSet<Cookie>(); - int i = 0; boolean rfc2965Style = false; @@ -149,7 +173,5 @@ public Set<Cookie> decode(String header) { cookies.add(cookie); } } - - return cookies; } }
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java index b157bc3c73b..b6b33655757 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/cookie/ServerCookieDecoderTest.java @@ -15,6 +15,7 @@ */ package io.netty.handler.codec.http.cookie; +import java.util.List; import org.junit.Test; import java.util.Iterator; @@ -53,6 +54,26 @@ public void testDecodingMultipleCookies() { assertEquals("myValue3", cookie.value()); } + @Test + public void testDecodingAllMultipleCookies() { + String c1 = "myCookie=myValue;"; + String c2 = "myCookie=myValue2;"; + String c3 = "myCookie=myValue3;"; + + List<Cookie> cookies = ServerCookieDecoder.STRICT.decodeAll(c1 + c2 + c3); + assertEquals(3, cookies.size()); + Iterator<Cookie> it = cookies.iterator(); + Cookie cookie = it.next(); + assertNotNull(cookie); + assertEquals("myValue", cookie.value()); + cookie = it.next(); + assertNotNull(cookie); + assertEquals("myValue2", cookie.value()); + cookie = it.next(); + assertNotNull(cookie); + assertEquals("myValue3", cookie.value()); + } + @Test public void testDecodingGoogleAnalyticsCookie() { String source =
val
test
"2019-12-07T01:27:03"
"2017-09-14T05:38:40Z"
quidryan
val
netty/netty/9861_9865
netty/netty
netty/netty/9861
netty/netty/9865
[ "keyword_issue_to_pr" ]
9a0ccf24f32ff8d5a9b3485cc379ae04ea1c0cd3
8494b046ec7e4f28dbd44bc699cc4c4c92251729
[ "@ZeddYu I fixed 1) https://github.com/netty/netty/pull/9865 but I am not sure about 2) as netty is just a library so I think the user should verify the transfer-encoding as it also depends on what handlers are in the pipeline.", "@normanmaurer But I test it with following code:\r\n```java\r\n HttpHeaders headers = request.headers();\r\n if (!headers.isEmpty()) {\r\n for (Map.Entry<String, String> h: headers) {\r\n CharSequence key = h.getKey();\r\n CharSequence value = h.getValue();\r\n buf.append(\"HEADER: \").append(key).append(\" = \").append(value).append(\"\\r\\n\");\r\n }\r\n buf.append(\"\\r\\n\");\r\n }\r\n```\r\nUse this request headerto test:\r\n```\r\nTransfer-Encoding: chunked\r\nTransfer-Encoding: some\r\n```\r\nOnly get one TE header is 'some'.\r\nAnd for the 1), why I use two CL header to test is still accepted?", "@ZeddYu sorry I don't understand your question... Can you please try to rephrase ? As shown in the test-case of #9865 multiple `Content-Length` headers are not allowed anymore .", "@normanmaurer All right. I paste an img.\r\n![](https://zeddyuimg.oss-cn-shanghai.aliyuncs.com/20191210195522.png)\r\nUsing two CL header is still accepted. Netty version is 4.1.43 Final.", "@ZeddYu sure as this is not merged yet and will be part of netty 4.1.44.Final. ", "@normanmaurer All right. But it seems it has got another question.\r\n![](https://zeddyuimg.oss-cn-shanghai.aliyuncs.com/20191210195815.png)\r\nYou see. It allows CRLF after the colon. 
But RFC7230 says only allow CRLF + 1*(SP/HTAB).\r\nAnd for the 2) and this , I am sure it will cause problems.", "Here it is, the obs-fold.\r\n```\r\nheader-field = field-name \":\" OWS field-value OWS\r\n\r\n field-name = token\r\n field-value = *( field-content / obs-fold )\r\n field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]\r\n field-vchar = VCHAR / obs-text\r\n\r\n obs-fold = CRLF 1*( SP / HTAB )\r\n ; obsolete line folding\r\n ; see Section 3.2.4\r\n```\r\nAccording to https://tools.ietf.org/html/rfc7230#section-3.3.2.", "@ZeddYu If you think there is an other issue related to multi-line headers please open another issue. IHMO this is not related to this one. ", "@normanmaurer What about the 2)?", "@ZeddYu again I am not sure why we should do anything about 2) as what is supported depends on what the user is doing with the pipeline etc. Netty is just a library and not a server implementation itself. For example http_parser is not doing anything about 2) as well. ", "@normanmaurer But just like \r\n\r\n> @normanmaurer But I test it with following code:\r\n> \r\n> ```java\r\n> HttpHeaders headers = request.headers();\r\n> if (!headers.isEmpty()) {\r\n> for (Map.Entry<String, String> h: headers) {\r\n> CharSequence key = h.getKey();\r\n> CharSequence value = h.getValue();\r\n> buf.append(\"HEADER: \").append(key).append(\" = \").append(value).append(\"\\r\\n\");\r\n> }\r\n> buf.append(\"\\r\\n\");\r\n> }\r\n> ```\r\n> \r\n> Use this request headerto test:\r\n> \r\n> ```\r\n> Transfer-Encoding: chunked\r\n> Transfer-Encoding: some\r\n> ```\r\n> \r\n> Only get one TE header is 'some'.\r\n\r\n@normanmaurer But just like what I said just now. I can't seem to get the first TE header. I use the code which is from https://netty.io/4.1/xref/io/netty/example/http/snoop/HttpSnoopServer.html.\r\n\r\n", "@ZeddYu please can you provide a few reproducer I can run... This back and forth just eats up a lot of time on both of our ends. 
", "Sorry for that.\r\nApplication.class:\r\n```java\r\npublic class Application {\r\n\r\n public static void main(String[] args) throws Exception{\r\n HttpServer server = new HttpServer(8888);// 8081为启动端口\r\n server.start();\r\n }\r\n}\r\n```\r\nHttpServer.class:\r\n```java\r\nimport io.netty.bootstrap.ServerBootstrap;\r\nimport io.netty.channel.ChannelFuture;\r\nimport io.netty.channel.EventLoopGroup;\r\nimport io.netty.channel.nio.NioEventLoopGroup;\r\nimport io.netty.channel.socket.nio.NioServerSocketChannel;\r\nimport io.netty.handler.logging.LogLevel;\r\nimport io.netty.handler.logging.LoggingHandler;\r\n\r\nimport java.net.InetSocketAddress;\r\n\r\n/**\r\n * netty server\r\n * 2018/11/1.\r\n */\r\npublic class HttpServer {\r\n\r\n int port = 8888;\r\n\r\n public HttpServer(int port){\r\n this.port = port;\r\n }\r\n\r\n public void start() throws Exception{\r\n ServerBootstrap bootstrap = new ServerBootstrap();\r\n EventLoopGroup boss = new NioEventLoopGroup();\r\n EventLoopGroup work = new NioEventLoopGroup();\r\n bootstrap.group(boss,work)\r\n .handler(new LoggingHandler(LogLevel.DEBUG))\r\n .channel(NioServerSocketChannel.class)\r\n .childHandler(new HttpServerInitializer());\r\n\r\n ChannelFuture f = bootstrap.bind(new InetSocketAddress(port)).sync();\r\n System.out.println(\" server start up on port : \" + port);\r\n f.channel().closeFuture().sync();\r\n\r\n }\r\n\r\n}\r\n```\r\nHttpServerInitialize.class\r\n```java\r\nimport io.netty.channel.ChannelInitializer;\r\nimport io.netty.channel.ChannelPipeline;\r\nimport io.netty.channel.socket.SocketChannel;\r\nimport io.netty.handler.codec.http.HttpObjectAggregator;\r\nimport io.netty.handler.codec.http.HttpServerCodec;\r\n\r\npublic class HttpServerInitializer extends ChannelInitializer<SocketChannel> {\r\n\r\n @Override\r\n protected void initChannel(SocketChannel channel) throws Exception {\r\n ChannelPipeline pipeline = channel.pipeline();\r\n pipeline.addLast(new HttpServerCodec());\r\n 
pipeline.addLast(\"httpAggregator\",new HttpObjectAggregator(512*1024)); \r\n pipeline.addLast(new HttpRequestHandler());\r\n\r\n }\r\n}\r\n```\r\nHttpRequestHandler.class\r\n```java\r\nimport io.netty.buffer.ByteBuf;\r\nimport io.netty.buffer.Unpooled;\r\nimport io.netty.channel.ChannelFutureListener;\r\nimport io.netty.channel.ChannelHandlerContext;\r\nimport io.netty.channel.SimpleChannelInboundHandler;\r\nimport io.netty.handler.codec.DecoderResult;\r\nimport io.netty.handler.codec.http.DefaultFullHttpResponse;\r\nimport io.netty.handler.codec.http.FullHttpResponse;\r\nimport io.netty.handler.codec.http.HttpContent;\r\nimport io.netty.handler.codec.http.HttpHeaderNames;\r\nimport io.netty.handler.codec.http.HttpUtil;\r\nimport io.netty.handler.codec.http.HttpHeaderValues;\r\nimport io.netty.handler.codec.http.HttpHeaders;\r\nimport io.netty.handler.codec.http.HttpObject;\r\nimport io.netty.handler.codec.http.HttpRequest;\r\nimport io.netty.handler.codec.http.LastHttpContent;\r\nimport io.netty.handler.codec.http.QueryStringDecoder;\r\nimport io.netty.handler.codec.http.cookie.Cookie;\r\nimport io.netty.handler.codec.http.cookie.ServerCookieDecoder;\r\nimport io.netty.handler.codec.http.cookie.ServerCookieEncoder;\r\nimport io.netty.util.CharsetUtil;\r\n\r\nimport java.util.List;\r\nimport java.util.Map;\r\nimport java.util.Map.Entry;\r\nimport java.util.Set;\r\n\r\nimport static io.netty.handler.codec.http.HttpResponseStatus.*;\r\nimport static io.netty.handler.codec.http.HttpVersion.*;\r\n\r\npublic class HttpRequestHandler extends SimpleChannelInboundHandler<Object> {\r\n\r\n private HttpRequest request;\r\n /** Buffer that stores the response content */\r\n private final StringBuilder buf = new StringBuilder();\r\n\r\n @Override\r\n public void channelReadComplete(ChannelHandlerContext ctx) {\r\n ctx.flush();\r\n }\r\n\r\n @Override\r\n public void channelRead(ChannelHandlerContext ctx, Object msg) {\r\n if (msg instanceof HttpRequest) {\r\n HttpRequest 
request = this.request = (HttpRequest) msg;\r\n\r\n if (HttpUtil.is100ContinueExpected(request)) {\r\n sendContinue(ctx);\r\n }\r\n\r\n buf.setLength(0);\r\n buf.append(\"WELCOME TO THE WILD WILD WEB SERVER\\r\\n\");\r\n buf.append(\"===================================\\r\\n\");\r\n\r\n buf.append(\"VERSION: \").append(request.protocolVersion()).append(\"\\r\\n\");\r\n buf.append(\"HOSTNAME: \").append(request.headers().get(HttpHeaderNames.HOST, \"unknown\")).append(\"\\r\\n\");\r\n buf.append(\"REQUEST_URI: \").append(request.uri()).append(\"\\r\\n\\r\\n\");\r\n\r\n HttpHeaders headers = request.headers();\r\n if (!headers.isEmpty()) {\r\n for (Map.Entry<String, String> h: headers) {\r\n CharSequence key = h.getKey();\r\n CharSequence value = h.getValue();\r\n buf.append(\"HEADER: \").append(key).append(\" = \").append(value).append(\"\\r\\n\");\r\n }\r\n buf.append(\"\\r\\n\");\r\n }\r\n\r\n QueryStringDecoder queryStringDecoder = new QueryStringDecoder(request.uri());\r\n Map<String, List<String>> params = queryStringDecoder.parameters();\r\n if (!params.isEmpty()) {\r\n for (Entry<String, List<String>> p: params.entrySet()) {\r\n String key = p.getKey();\r\n List<String> vals = p.getValue();\r\n for (String val : vals) {\r\n buf.append(\"PARAM: \").append(key).append(\" = \").append(val).append(\"\\r\\n\");\r\n }\r\n }\r\n buf.append(\"\\r\\n\");\r\n }\r\n\r\n appendDecoderResult(buf, request);\r\n }\r\n\r\n if (msg instanceof HttpContent) {\r\n HttpContent httpContent = (HttpContent) msg;\r\n\r\n ByteBuf content = httpContent.content();\r\n if (content.isReadable()) {\r\n buf.append(\"CONTENT: \");\r\n buf.append(content.toString(CharsetUtil.UTF_8));\r\n buf.append(\"\\r\\n\");\r\n appendDecoderResult(buf, request);\r\n }\r\n\r\n if (msg instanceof LastHttpContent) {\r\n buf.append(\"END OF CONTENT\\r\\n\");\r\n\r\n LastHttpContent trailer = (LastHttpContent) msg;\r\n if (!trailer.trailingHeaders().isEmpty()) {\r\n buf.append(\"\\r\\n\");\r\n for (CharSequence 
name: trailer.trailingHeaders().names()) {\r\n for (CharSequence value: trailer.trailingHeaders().getAll(name)) {\r\n buf.append(\"TRAILING HEADER: \");\r\n buf.append(name).append(\" = \").append(value).append(\"\\r\\n\");\r\n }\r\n }\r\n buf.append(\"\\r\\n\");\r\n }\r\n\r\n if (!writeResponse(trailer, ctx)) {\r\n // If keep-alive is off, close the connection once the content is fully written.\r\n ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);\r\n }\r\n }\r\n }\r\n }\r\n\r\n @Override\r\n protected void channelRead0(ChannelHandlerContext channelHandlerContext, Object o) throws Exception {\r\n\r\n }\r\n\r\n private static void appendDecoderResult(StringBuilder buf, HttpObject o) {\r\n DecoderResult result = o.decoderResult();\r\n if (result.isSuccess()) {\r\n return;\r\n }\r\n\r\n buf.append(\".. WITH DECODER FAILURE: \");\r\n buf.append(result.cause());\r\n buf.append(\"\\r\\n\");\r\n }\r\n\r\n private boolean writeResponse(HttpObject currentObj, ChannelHandlerContext ctx) {\r\n // Decide whether to close the connection or not.\r\n boolean keepAlive = HttpUtil.isKeepAlive(request);\r\n // Build the response object.\r\n FullHttpResponse response = new DefaultFullHttpResponse(\r\n HTTP_1_1, currentObj.decoderResult().isSuccess()? 
OK : BAD_REQUEST,\r\n Unpooled.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));\r\n\r\n response.headers().set(HttpHeaderNames.CONTENT_TYPE, \"text/plain; charset=UTF-\");\r\n\r\n if (keepAlive) {\r\n // Add 'Content-Length' header only for a keep-alive connection.\r\n response.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());\r\n // Add keep alive header as per:\r\n // - http://www.w.org/Protocols/HTTP/./draft-ietf-http-v-spec-.html#Connection\r\n response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);\r\n }\r\n\r\n // Encode the cookie.\r\n String cookieString = request.headers().get(HttpHeaderNames.COOKIE);\r\n if (cookieString != null) {\r\n Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(cookieString);\r\n if (!cookies.isEmpty()) {\r\n // Reset the cookies if necessary.\r\n for (Cookie cookie: cookies) {\r\n response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode(cookie));\r\n }\r\n }\r\n } else {\r\n // Browser sent no cookie. 
Add some.\r\n response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode(\"key1\", \"value1\"));\r\n response.headers().add(HttpHeaderNames.SET_COOKIE, ServerCookieEncoder.STRICT.encode(\"key2\", \"value2\"));\r\n }\r\n\r\n // Write the response.\r\n ctx.write(response);\r\n\r\n return keepAlive;\r\n }\r\n\r\n private static void sendContinue(ChannelHandlerContext ctx) {\r\n FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, CONTINUE, Unpooled.EMPTY_BUFFER);\r\n ctx.write(response);\r\n }\r\n\r\n @Override\r\n public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {\r\n cause.printStackTrace();\r\n ctx.close();\r\n }\r\n}\r\n```", "@normanmaurer And can I request a CVE for 1)?", "Fix my description of 2).\r\nI use this http request.\r\n```http\r\nPOST / HTTP/1.1\r\nHost:localhost\r\nConnection: close\r\nContent-Length: 1\r\nTransfer-Encoding: chunked, something\r\n\r\n0\r\n```\r\nAnd netty use CL header because of 'something' in TE header. And I get the '0' data successfully.", "@ZeddYu yes please request a CVE for incorrectly handling Content-Length and Transfer-Encoding: chunked headers. ", "It appears this was assigned a CVE ID\r\nhttps://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7238", "@joshbressers If I understand correctly, this was assigned CVE-2019-20445 (according to the CVE description)\r\n\r\nhttps://nvd.nist.gov/vuln/detail/CVE-2019-20445\r\n\r\n@normanmaurer Do you know if your pull request #9865 addresses both CVE-2019-20445 and CVE-2020-7238?", "@artem-smotrakov yes", "@normanmaurer Thanks for confirming that!\r\n\r\nI just wrote a couple of tests based on https://github.com/jdordonezn/CVE-2020-72381/issues/1 which is mentioned in CVE-2020-7238. 
The test passed although I am not 100% sure if they are sufficient\r\n\r\n```\r\ndiff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java\r\nindex 1e780b7959..2548af0e2a 100644\r\n--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java\r\n+++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java\r\n@@ -325,7 +325,30 @@ public class HttpRequestDecoderTest {\r\n public void testWhitespace() {\r\n String requestStr = \"GET /some/path HTTP/1.1\\r\\n\" +\r\n \"Transfer-Encoding : chunked\\r\\n\" +\r\n- \"Host: netty.io\\n\\r\\n\";\r\n+ \"Host: netty.io\\r\\n\\r\\n\";\r\n+ testInvalidHeaders0(requestStr);\r\n+ }\r\n+\r\n+ @Test\r\n+ public void testWhitespaceBeforeTransferEncoding01() {\r\n+ String requestStr = \"GET /some/path HTTP/1.1\\r\\n\" +\r\n+ \" Transfer-Encoding : chunked\\r\\n\" +\r\n+ \"Content-Length: 1\\r\\n\" +\r\n+ \"Host: netty.io\\r\\n\\r\\n\" +\r\n+ \"a\";\r\n+ testInvalidHeaders0(requestStr);\r\n+ }\r\n+\r\n+ @Test\r\n+ public void testWhitespaceBeforeTransferEncoding02() {\r\n+ String requestStr = \"POST / HTTP/1.1\" +\r\n+ \" Transfer-Encoding : chunked\\r\\n\" +\r\n+ \"Host: target.com\" +\r\n+ \"Content-Length: 65\\r\\n\\r\\n\" +\r\n+ \"0\\r\\n\\r\\n\" +\r\n+ \"GET /maliciousRequest HTTP/1.1\\r\\n\" +\r\n+ \"Host: evilServer.com\\r\\n\" +\r\n+ \"Foo: x\";\r\n testInvalidHeaders0(requestStr);\r\n }\r\n\r\n```", "@artem-smotrakov thanks... would you be willing to submit these tests as a PR so we can include them and ensure we not regress ?", "@normanmaurer No problem, I'll open a pull request.", "Is there any plan to backport this to 3.x? There are certain libraries still out there coupled to the 3.x package naming structure so if this fix isn't backported, they will be forever vulnerable.", "@JLLeitschuh sorry but no backport will be done ... 
netty 3.x is EOL for many years and if projects did not update yet they may have a reason to do now." ]
[ "Should we check the RFC test case?\r\n```\r\nContent-Length: 42, 42\r\n```", "done", "Maybe catch `NumberFormatException` (\"Content-Length: 42, 42\") and re throw with a meaningful message ?", "I would prefer to just keep it as it is as it did throw the same type of exception before already ", "OK, but it can be improved in Netty 5 ;)", "I netty 5 I think we want to re-evulate the http stuff anyway /cc @Scottmitch ", "nit: should this be done after the `if` check below to avoid parsing the `content-length` header if there are multiple such headers?:\r\n\r\n```java\r\nif (message.protocolVersion() == HttpVersion.HTTP_1_1) {\r\n .....\r\n}\r\ncontentLength = Long.parseLong(values.get(0));\r\n```", "nit: Rename `size` to `contentLengthValuesCount` to disambiguate `size`.", "```suggestion\r\n \"'Both Content-Length: \" + contentLength + \"' and 'Transfer-Encoding: chunked' found\");\r\n```", "Looking at the code, this is not an issue but you may want to test non-contiguous `content-length` headers:\r\n\r\n```java\r\nString requestStr = \"GET /some/path HTTP/1.1\\r\\n\" +\r\n \"Content-Length: 1\\r\\n\" +\r\n \"Connection: close\\r\\n\" +\r\n \"Content-Length: 0\\r\\n\\r\\n\" +\r\n \"b\";\r\n" ]
"2019-12-10T10:17:16Z"
[]
Non-proper handling of Content-Length and Transfer-Encoding: chunked headers
### Expected behavior 1.Only accept one Content-Length.RFC 7230 says `duplicate Content-Length header fields have been generated or combined by an upstream message processor, then the recipient MUST either reject the message as invalid or replace the duplicated field-values with a single valid Content-Length`. 2.Only accept identity and chunked Transport-Encoding In this implementation, the order does not matter (it probably should). The Go implementation only uses the first value of the header.Seems to be in sync with the behaviour of AWS ALB. All other valid (gzip, compress, etc.) and invalid TE will return a 501, since we don't have readers for them I figured this was the right move, but feel free to correct me ### Actual behavior 1. But netty accept all. 2.Netty accpet random TE. ### Steps to reproduce Use two CL to reproduce the first. Use a chunked TE header and a random TE header. Smiliar with [9571](https://github.com/netty/netty/issues/9571). It also cause http smuggling. Or see the other issue [benoitc/gunicorn#2176](https://github.com/benoitc/gunicorn/issues/2176) and the PR [benoitc/gunicorn#2181](https://github.com/benoitc/gunicorn/pull/2181) ### Minimal yet complete reproducer code (or URL to code) ### Netty version all ### JVM version (e.g. `java -version`) ### OS version (e.g. `uname -a`)
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java index 28f048252fe..768bd3b26f5 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java @@ -600,23 +600,61 @@ private State readHeaders(ByteBuf buffer) { if (name != null) { headers.add(name, value); } + // reset name and value fields name = null; value = null; - State nextState; + List<String> values = headers.getAll(HttpHeaderNames.CONTENT_LENGTH); + int contentLengthValuesCount = values.size(); + + if (contentLengthValuesCount > 0) { + // Guard against multiple Content-Length headers as stated in + // https://tools.ietf.org/html/rfc7230#section-3.3.2: + // + // If a message is received that has multiple Content-Length header + // fields with field-values consisting of the same decimal value, or a + // single Content-Length header field with a field value containing a + // list of identical decimal values (e.g., "Content-Length: 42, 42"), + // indicating that duplicate Content-Length header fields have been + // generated or combined by an upstream message processor, then the + // recipient MUST either reject the message as invalid or replace the + // duplicated field-values with a single valid Content-Length field + // containing that decimal value prior to determining the message body + // length or forwarding the message. 
+ if (contentLengthValuesCount > 1 && message.protocolVersion() == HttpVersion.HTTP_1_1) { + throw new IllegalArgumentException("Multiple Content-Length headers found"); + } + contentLength = Long.parseLong(values.get(0)); + } if (isContentAlwaysEmpty(message)) { HttpUtil.setTransferEncodingChunked(message, false); - nextState = State.SKIP_CONTROL_CHARS; + return State.SKIP_CONTROL_CHARS; } else if (HttpUtil.isTransferEncodingChunked(message)) { - nextState = State.READ_CHUNK_SIZE; + // See https://tools.ietf.org/html/rfc7230#section-3.3.3 + // + // If a message is received with both a Transfer-Encoding and a + // Content-Length header field, the Transfer-Encoding overrides the + // Content-Length. Such a message might indicate an attempt to + // perform request smuggling (Section 9.5) or response splitting + // (Section 9.4) and ought to be handled as an error. A sender MUST + // remove the received Content-Length field prior to forwarding such + // a message downstream. + // + // This is also what http_parser does: + // https://github.com/nodejs/http-parser/blob/v2.9.2/http_parser.c#L1769 + if (contentLengthValuesCount > 0 && message.protocolVersion() == HttpVersion.HTTP_1_1) { + throw new IllegalArgumentException( + "Both 'Content-Length: " + contentLength + "' and 'Transfer-Encoding: chunked' found"); + } + + return State.READ_CHUNK_SIZE; } else if (contentLength() >= 0) { - nextState = State.READ_FIXED_LENGTH_CONTENT; + return State.READ_FIXED_LENGTH_CONTENT; } else { - nextState = State.READ_VARIABLE_LENGTH_CONTENT; + return State.READ_VARIABLE_LENGTH_CONTENT; } - return nextState; } private long contentLength() {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java index 8a2345837fe..1e780b7959f 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java @@ -323,29 +323,75 @@ public void testTooLargeHeaders() { @Test public void testWhitespace() { - EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder()); String requestStr = "GET /some/path HTTP/1.1\r\n" + "Transfer-Encoding : chunked\r\n" + "Host: netty.io\n\r\n"; - - assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII))); - HttpRequest request = channel.readInbound(); - assertTrue(request.decoderResult().isFailure()); - assertTrue(request.decoderResult().cause() instanceof IllegalArgumentException); - assertFalse(channel.finish()); + testInvalidHeaders0(requestStr); } @Test public void testHeaderWithNoValueAndMissingColon() { - EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder()); String requestStr = "GET /some/path HTTP/1.1\r\n" + "Content-Length: 0\r\n" + "Host:\r\n" + "netty.io\r\n\r\n"; + testInvalidHeaders0(requestStr); + } + + @Test + public void testMultipleContentLengthHeaders() { + String requestStr = "GET /some/path HTTP/1.1\r\n" + + "Content-Length: 1\r\n" + + "Content-Length: 0\r\n\r\n" + + "b"; + testInvalidHeaders0(requestStr); + } + + @Test + public void testMultipleContentLengthHeaders2() { + String requestStr = "GET /some/path HTTP/1.1\r\n" + + "Content-Length: 1\r\n" + + "Connection: close\r\n" + + "Content-Length: 0\r\n\r\n" + + "b"; + testInvalidHeaders0(requestStr); + } + + @Test + public void testContentLengthHeaderWithCommaValue() { + String requestStr = "GET /some/path HTTP/1.1\r\n" + + "Content-Length: 1,1\r\n\r\n" + + "b"; + testInvalidHeaders0(requestStr); + } + @Test + public void 
testMultipleContentLengthHeadersWithFolding() { + String requestStr = "POST / HTTP/1.1\r\n" + + "Host: example.com\r\n" + + "Connection: close\r\n" + + "Content-Length: 5\r\n" + + "Content-Length:\r\n" + + "\t6\r\n\r\n" + + "123456"; + testInvalidHeaders0(requestStr); + } + + @Test + public void testContentLengthHeaderAndChunked() { + String requestStr = "POST / HTTP/1.1\r\n" + + "Host: example.com\r\n" + + "Connection: close\r\n" + + "Content-Length: 5\r\n" + + "Transfer-Encoding: chunked\r\n\r\n" + + "0\r\n\r\n"; + testInvalidHeaders0(requestStr); + } + + private static void testInvalidHeaders0(String requestStr) { + EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder()); assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII))); HttpRequest request = channel.readInbound(); - System.err.println(request.headers().names().toString()); assertTrue(request.decoderResult().isFailure()); assertTrue(request.decoderResult().cause() instanceof IllegalArgumentException); assertFalse(channel.finish());
train
test
"2019-12-12T16:18:27"
"2019-12-09T18:54:41Z"
ZeddYu
val
netty/netty/9867_9874
netty/netty
netty/netty/9867
netty/netty/9874
[ "keyword_pr_to_issue" ]
a7c18d44b46e02dadfe3da225a06e5091f5f328e
0992718f87bf0f0c23660489b1f7e697a24f2a4b
[ "Very good idea. Recently I had the same thoughts. Also, it would be cool to have size limited compression. So for small messages like < 20 bytes you don't need compression. ", "@dvlato @doom369 if I'm not mistaken this should be already possible by just subclassing `HttpContentCompressor` and overriding `acceptOutboundMessage(Object)`.", "@njhill you are right. Also, `HttpContentCompressor` already has `contentSizeThreshold`. However, there is no such option for WebSockets. And it is not that trivial there :).", "@dvlato @doom369 guys this is already exist see #8910 we use this functionality on production.", "You can implement something like this:\r\n```\r\npublic class SelectiveCompressionFilterProvider implements WebSocketExtensionFilterProvider {\r\n\r\n private final int contentSizeThreshold;\r\n\r\n public SelectiveCompressionFilterProvider(int contentSizeThreshold) {\r\n this.contentSizeThreshold = contentSizeThreshold;\r\n }\r\n\r\n @Override\r\n public WebSocketExtensionFilter encoderFilter() {\r\n return webSocketFrame -> {\r\n if (contentSizeThreshold > 0) {\r\n return (webSocketFrame instanceof BinaryWebSocketFrame || webSocketFrame instanceof TextWebSocketFrame)\r\n && webSocketFrame.isFinalFragment()\r\n && webSocketFrame.content().readableBytes() < contentSizeThreshold;\r\n }\r\n\r\n return false;\r\n };\r\n }\r\n\r\n @Override\r\n public WebSocketExtensionFilter decoderFilter() {\r\n return WebSocketExtensionFilter.NEVER_SKIP;\r\n }\r\n\r\n}\r\n```\r\nand pass it to `PerMessageDeflateServerExtensionHandshaker` for example.", "@amizurov cool, thanks for hint!", "@dvlato @njhill here the code that allow to skip certain content types:\r\n\r\n```\r\npublic final class BlynkHttpCompressor extends HttpContentCompressor {\r\n\r\n @Override\r\n protected Result beginEncode(HttpResponse response, String acceptEncoding) throws Exception {\r\n if (response.headers().containsValue(HttpHeaderNames.CONTENT_TYPE, \"font/woff2\", false)) {\r\n return null;\r\n 
}\r\n return super.beginEncode(response, acceptEncoding);\r\n }\r\n\r\n}\r\n```", "@dvlato as @doom369 said you can easily implement this. So closing this issue", "Hi, \r\nI'd already tried the beginEncode() solution and now I've added some support in 'acceptOutboundMessage' but I still see high spikes in the latency of our application (in the 99% percentile) when the compression filter has been added even though the client does not send any 'accept-encoding' header! Has anyone also experienced this issue?\r\n\r\n", "@dvlato it could be anything. You need to profile in order to make correct conclusions that the problem in `HttpContentCompressor`. According to the code in `HttpContentCompressor.beginEncode()` it does nothing if no encoding header was provided." ]
[]
"2019-12-12T11:53:59Z"
[ "not a bug" ]
Make content compression configurable per content type
### Expected behavior Content compression can be enabled for specific content types. This means we would like to configure ContentCompressor so we can define what content types should be compressed; in this case we wouldn't compress JPEG or GIF files as those formats are already compressed. This PR tried to fix the issue: https://github.com/netty/netty/pull/3847 but it seems it was abandoned. ### Actual behavior Compression affects all the content types, even if there's little to no gain from compressing that type (e.g., JPEG files). ### Netty version netty-4.1.43.Final
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java", "codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java" ]
[ "codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java" ]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java index e80469f26e4..b40fae66bc5 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentCompressor.java @@ -132,15 +132,15 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception { } @Override - protected Result beginEncode(HttpResponse headers, String acceptEncoding) throws Exception { + protected Result beginEncode(HttpResponse httpResponse, String acceptEncoding) throws Exception { if (this.contentSizeThreshold > 0) { - if (headers instanceof HttpContent && - ((HttpContent) headers).content().readableBytes() < contentSizeThreshold) { + if (httpResponse instanceof HttpContent && + ((HttpContent) httpResponse).content().readableBytes() < contentSizeThreshold) { return null; } } - String contentEncoding = headers.headers().get(HttpHeaderNames.CONTENT_ENCODING); + String contentEncoding = httpResponse.headers().get(HttpHeaderNames.CONTENT_ENCODING); if (contentEncoding != null) { // Content-Encoding was set, either as something specific or as the IDENTITY encoding // Therefore, we should NOT encode here diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java index 813965089c4..6f1070c21fd 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java @@ -288,8 +288,8 @@ private boolean encodeContent(HttpContent c, List<Object> out) { /** * Prepare to encode the HTTP message content. 
* - * @param headers - * the headers + * @param httpResponse + * the http response * @param acceptEncoding * the value of the {@code "Accept-Encoding"} header * @@ -299,7 +299,7 @@ private boolean encodeContent(HttpContent c, List<Object> out) { * {@code null} if {@code acceptEncoding} is unsupported or rejected * and thus the content should be handled as-is (i.e. no encoding). */ - protected abstract Result beginEncode(HttpResponse headers, String acceptEncoding) throws Exception; + protected abstract Result beginEncode(HttpResponse httpResponse, String acceptEncoding) throws Exception; @Override public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java index 6301ee8c0b5..9bb88385a8c 100644 --- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java +++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentEncoderTest.java @@ -41,7 +41,7 @@ public class HttpContentEncoderTest { private static final class TestEncoder extends HttpContentEncoder { @Override - protected Result beginEncode(HttpResponse headers, String acceptEncoding) { + protected Result beginEncode(HttpResponse httpResponse, String acceptEncoding) { return new Result("test", new EmbeddedChannel(new MessageToByteEncoder<ByteBuf>() { @Override protected void encode(ChannelHandlerContext ctx, ByteBuf in, ByteBuf out) throws Exception { @@ -395,7 +395,7 @@ public void testHttp1_0() throws Exception { public void testCleanupThrows() { HttpContentEncoder encoder = new HttpContentEncoder() { @Override - protected Result beginEncode(HttpResponse headers, String acceptEncoding) throws Exception { + protected Result beginEncode(HttpResponse httpResponse, String acceptEncoding) throws Exception { return new Result("myencoding", new EmbeddedChannel( new ChannelInboundHandlerAdapter() { @Override
train
test
"2019-12-11T15:49:07"
"2019-12-10T14:43:43Z"
dvlato
val
netty/netty/9911_9912
netty/netty
netty/netty/9911
netty/netty/9912
[ "keyword_issue_to_pr", "keyword_pr_to_issue" ]
06a5173e8d700f5f431557f21f647f2199bb3d52
607bc05a2c1453323b2f8c67cbd4aff0d5b17531
[ "Is there any other information I can give that would be helpful?", "@cilki thanks for reporting, is this something that started in 4.1.44.Final?", "@njhill Yes. I just tested 4.1.40.Final through 4.1.43.Final and they all work as expected.", "Thanks @cilki, sorry this is my bad, I should have time to submit a fix later today but you may need to stay on 4.1.43.Final until 4.1.45 is released.\r\n\r\nThe exception swallowing is likely unrelated but we should probably investigate that separately.", "Thanks. I can confirm that reverting b0feb5a81fd03f35b9e26108efc71070d0a52288 fixes the issue.", "I've opened #9912 to fix the main bug, but haven't looked at the exception swallowing yet." ]
[ "@normanmaurer I added this because I noticed that it fails when running the tests with `-Dio.netty.noUnsafe=true` (I think it's actually N/A in that case). Should we include test runs with Unsafe disabled in the CI?", "@njhill yes will add it.", "@njhill we need to call `src.release()` and `dst.release()` as well.", "@normanmaurer done", "@njhill nit: remove final ?", "sure, did it for the adjacent test too", "@njhill this may produce leaks as you loose the \"wrapper\".\r\n\r\nYou need to do something like this:\r\n\r\n```java\r\nByteBuf src =PooledByteBufAllocator.DEFAULT.directBuffer(512);\r\nByteBuf dst = PooledByteBufAllocator.DEFAULT.directBuffer(512);\r\n\r\n// This causes the internal reused ByteBuffer duplicate limit to be set to 128\r\ndst.writeBytes(ByteBuffer.allocate(128));\r\n \r\n// Ensure internal ByteBuffer duplicate limit is properly reset (used in memoryCopy non-Unsafe case)\r\nPooledByteBuf<ByteBuffer> pooledSrc = unwrapIfNeeded(src);\r\nPooledByteBuf<ByteBuffer> pooledDst = unwrapIfNeeded(dst);\r\n\r\npooledDst.chunk.arena.memoryCopy(pooledSrc.memory, 0, pooledDst, 512);\r\n\r\nsrc.release();\r\ndst.release();\r\n```\r\n\r\n", "@normanmaurer sorry I realized this after pushing but was already falling asleep :) now fixed" ]
"2019-12-30T06:08:29Z"
[ "defect" ]
BufferOverflowException with specific buffer sizes
When Netty reallocates pooled buffers that are approximately 256 to 1024 bytes, a `BufferOverflowException` can be thrown. **Furthermore, the exception is swallowed** and is not printed by `LoggingHandler` (I was only able to find it by capturing it from a debugger). I'm on JDK 13 and therefore `Unsafe` isn't available. ### Expected behavior Buffers that are between 256 and 1024 bytes should be reallocated successfully. In my application. it always works when the buffer is outside of that range and always throws when the buffer is inside of the range. ### Actual behavior ``` io.netty.handler.codec.EncoderException: java.nio.BufferOverflowException at io.netty.codec@4.1.44.Final/io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:125) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:715) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:707) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700) at io.netty.codec@4.1.44.Final/io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:112) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:715) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:707) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700) at 
io.netty.handler@4.1.44.Final/io.netty.handler.logging.LoggingHandler.write(LoggingHandler.java:235) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:715) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:762) at io.netty.transport@4.1.44.Final/io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1089) at io.netty.common@4.1.44.Final/io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.common@4.1.44.Final/io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.transport@4.1.44.Final/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.common@4.1.44.Final/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.common@4.1.44.Final/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.common@4.1.44.Final/io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:830) Caused by: java.nio.BufferOverflowException at java.base/java.nio.DirectByteBuffer.put(DirectByteBuffer.java:411) at io.netty.buffer@4.1.44.Final/io.netty.buffer.PoolArena$DirectArena.memoryCopy(PoolArena.java:795) at io.netty.buffer@4.1.44.Final/io.netty.buffer.PoolArena$DirectArena.memoryCopy(PoolArena.java:704) at io.netty.buffer@4.1.44.Final/io.netty.buffer.PoolArena.reallocate(PoolArena.java:405) at io.netty.buffer@4.1.44.Final/io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:118) at io.netty.buffer@4.1.44.Final/io.netty.buffer.AbstractByteBuf.ensureWritable0(AbstractByteBuf.java:306) at io.netty.buffer@4.1.44.Final/io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:282) at 
io.netty.codec@4.1.44.Final/io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender.encode(ProtobufVarint32LengthFieldPrepender.java:48) at io.netty.codec@4.1.44.Final/io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender.encode(ProtobufVarint32LengthFieldPrepender.java:40) at io.netty.codec@4.1.44.Final/io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107) ... 20 more ``` This leads to messages of certain sizes getting dropped out of the pipeline. ### Minimal yet complete reproducer code (or URL to code) I haven't been able to reproduce outside of my application (and I tried pretty hard). I suspect that the allocation sequence of pooled buffers in my application must be important. I'll try to come up with a way to test this easily in my application. ### Netty version ``` 4.1.44.Final ``` ### JVM version (e.g. `java -version`) ``` openjdk 13.0.1 2019-10-15 OpenJDK Runtime Environment (build 13.0.1+9) OpenJDK 64-Bit Server VM (build 13.0.1+9, mixed mode) ``` ### OS version (e.g. `uname -a`) ``` Linux OCTOLOGY 5.4.6-arch1-1 #1 SMP PREEMPT Sat, 21 Dec 2019 16:34:41 +0000 x86_64 GNU/Linux ```
[ "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java", "buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java" ]
[ "buffer/src/main/java/io/netty/buffer/PooledByteBuf.java", "buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java" ]
[ "buffer/src/test/java/io/netty/buffer/PoolArenaTest.java" ]
diff --git a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java index 9e33c0c5942..3ef1e26cba0 100644 --- a/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/PooledByteBuf.java @@ -154,6 +154,8 @@ protected final ByteBuffer internalNioBuffer() { ByteBuffer tmpNioBuf = this.tmpNioBuf; if (tmpNioBuf == null) { this.tmpNioBuf = tmpNioBuf = newInternalNioBuffer(memory); + } else { + tmpNioBuf.clear(); } return tmpNioBuf; } diff --git a/buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java b/buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java index dee7fe6774e..3d77ecf1953 100644 --- a/buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java +++ b/buffer/src/main/java/io/netty/buffer/PooledDirectByteBuf.java @@ -260,7 +260,7 @@ public ByteBuf setBytes(int index, ByteBuffer src) { } index = idx(index); - tmpBuf.clear().position(index).limit(index + length); + tmpBuf.limit(index + length).position(index); tmpBuf.put(src); return this; } @@ -274,7 +274,7 @@ public int setBytes(int index, InputStream in, int length) throws IOException { return readBytes; } ByteBuffer tmpBuf = internalNioBuffer(); - tmpBuf.clear().position(idx(index)); + tmpBuf.position(idx(index)); tmpBuf.put(tmp, 0, readBytes); return readBytes; }
diff --git a/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java b/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java index 2c1bd12db32..3ad331ddb30 100644 --- a/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java +++ b/buffer/src/test/java/io/netty/buffer/PoolArenaTest.java @@ -20,6 +20,8 @@ import org.junit.Assert; import org.junit.Test; +import static org.junit.Assume.assumeTrue; + import java.nio.ByteBuffer; public class PoolArenaTest { @@ -46,6 +48,7 @@ public void testNormalizeAlignedCapacity() throws Exception { @Test public void testDirectArenaOffsetCacheLine() throws Exception { + assumeTrue(PlatformDependent.hasUnsafe()); int capacity = 5; int alignment = 128; @@ -64,7 +67,7 @@ public void testDirectArenaOffsetCacheLine() throws Exception { } @Test - public final void testAllocationCounter() { + public void testAllocationCounter() { final PooledByteBufAllocator allocator = new PooledByteBufAllocator( true, // preferDirect 0, // nHeapArena @@ -107,4 +110,26 @@ public final void testAllocationCounter() { Assert.assertEquals(1, metric.numNormalDeallocations()); Assert.assertEquals(1, metric.numNormalAllocations()); } + + @Test + public void testDirectArenaMemoryCopy() { + ByteBuf src = PooledByteBufAllocator.DEFAULT.directBuffer(512); + ByteBuf dst = PooledByteBufAllocator.DEFAULT.directBuffer(512); + + PooledByteBuf<ByteBuffer> pooledSrc = unwrapIfNeeded(src); + PooledByteBuf<ByteBuffer> pooledDst = unwrapIfNeeded(dst); + + // This causes the internal reused ByteBuffer duplicate limit to be set to 128 + pooledDst.writeBytes(ByteBuffer.allocate(128)); + // Ensure internal ByteBuffer duplicate limit is properly reset (used in memoryCopy non-Unsafe case) + pooledDst.chunk.arena.memoryCopy(pooledSrc.memory, 0, pooledDst, 512); + + src.release(); + dst.release(); + } + + @SuppressWarnings("unchecked") + private PooledByteBuf<ByteBuffer> unwrapIfNeeded(ByteBuf buf) { + return (PooledByteBuf<ByteBuffer>) (buf instanceof PooledByteBuf ? 
buf : buf.unwrap()); + } }
test
test
"2019-12-27T09:14:50"
"2019-12-29T19:30:23Z"
cilki
val
netty/netty/6168_9924
netty/netty
netty/netty/6168
netty/netty/9924
[ "keyword_pr_to_issue" ]
06a5173e8d700f5f431557f21f647f2199bb3d52
1543218d3e7afcb33a90b728b14370395a3deca0
[ "@normanmaurer @jasobrown - FYI.", "This should probably be reopened. My PR only fixes this for subclasses of ZlibDecoder; it does not fix this for any other compression algorithms." ]
[ "Can you add to the doc describing what `0` does?", "same comment", "can this be private/final? ", "optional: check preferredSize is non-negative?", "+1", "nit: `else` block not needed", "I think either the second `ensureWritable` param should be `false` or the return should be compared to `3` rather than `1`?", "For consistency shouldn't `maxAllocationReached` be called here in the `preferredSize < maxAllocation` case?", "Not sure that the semantics of the method are clear... if it's overridden to not throw then what state will the provided buffer be in, what is the impl expected to do with it and what is it expected to return? Maybe it would make most sense to pass false for the `force` param of `ensureWritable` above so that the buffer isn't changed, and also pass `preferredSize` and `ctx` to this method as additional parameters?\r\n\r\nnit: and maybe clearer method name would be something like `decompressionBufferExhausted`?", "preferredSize was not checked by the existing code, and adding a check will actually duplicate the check because ByteBuf or ByteBufAllocator (whichever is ultimately called) will check that it's non-negative.\r\n\r\nAbout the `preferredSize < maxAllocation` case, I'm not following you. preferredSize will often be less than maxAllocation since preferredSize is a guesstimate of how much more space is needed to finish decompressing the input. So e.g. maxAllocation might be set to 1 MiB but the decoder estimates it only needs 10 KiB to finish decompressing.", "I'm pretty sure this is correct as-is. The desired behavior is that if capacity < maxAllocation, we want to expand the buffer, even if we can't expand it by the preferredSize. Any expansion at all makes it possible that the decoder will successfully complete the decompression, since preferredSize is just a guesstimate. Hence we want to expand even if the expansion is less than what we'd prefer. 
If it turns out that's not enough space, then this method will be called again, and the buffer will already have capacity = maxAllocation and remaining = 0, and we'll fail at that point.", "You're right that this probably shouldn't return anything - I'll change that. Can't remember exactly what I was thinking when I did that. My use case for wanting to override this is so that I can potentially log the data that has been decompressed so far before throwing the exception.", "I'll make it final but leave it as protected in case a subclass wants to read it for any reason.", "@rdicroce apologies, I meant `preferredSize > maxAllocation`", "Since preferredSize is a guesstimate of how much space is needed, it's possible that the compression ratio is poor and we don't need that much space. So e.g. the compressed size could be 100 KiB, and so JdkZlibDecoder and ZlibDecoder estimate they need 200 KiB to decompress. So they pass preferredSize = 200 KiB but then it turns out the decompressed data is only 150 KiB in size. If maxAllocation is 175 KiB, then decompression ought to succeed since 150 < 175, even though 200 > 175.", "OK I follow now, I missed that subtlety of `ensureWritable`, thinking that `1` would only apply to the `force == false` case. This is the same confusion as the comment above and I see that your logic there is consistent. Sorry!\r\n\r\nIt probably wouldn't harm to add a comment noting that one \"final\" attempt will always still be made with the buffer at `maxAllocation` even if that is `< preferredSize`.", "nit: final ", "call `chDecoder.finish()` and assert the return value", "nit: protected (as this is an abstract class)", "nit: protected (as this is an abstract class)", "Good catch - finish() was throwing an exception because the input buffer wasn't being consumed and the decoders weren't marking themselves as finished, so another attempt was being made to read the buffer. 
I've updated JZlibDecoder and JdkZlibDecoder to consume the buffer and mark themselves as finished at the same time they release the decompression buffer." ]
"2020-01-07T19:28:51Z"
[]
Compression/Decompression Codecs should enforce memory allocation size limits
### Expected behavior To protect against OOME, the compression and decompression codecs should explicitly limit the amount of data they compress and decompress. We may be vulnerable to OOME from large or malicious input. ### Actual behavior In light of https://github.com/netty/netty/pull/5997, most of the compression/decompression codecs don't enforce limits on buffer allocation sizes. ### Steps to reproduce N/A ### Minimal yet complete reproducer code (or URL to code) N/A ### Netty version 4.1.7-SNAPSHOT ### JVM version (e.g. `java -version`) N/A ### OS version (e.g. `uname -a`) N/A
[ "codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java", "codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java", "codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java" ]
[ "codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java", "codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java", "codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java" ]
[ "codec/src/test/java/io/netty/handler/codec/compression/JZlibTest.java", "codec/src/test/java/io/netty/handler/codec/compression/JdkZlibTest.java", "codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest1.java", "codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest2.java", "codec/src/test/java/io/netty/handler/codec/compression/ZlibTest.java" ]
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java index f87344ab599..6c65cd5fe6a 100644 --- a/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java @@ -18,6 +18,7 @@ import com.jcraft.jzlib.Inflater; import com.jcraft.jzlib.JZlib; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; import io.netty.channel.ChannelHandlerContext; import io.netty.util.internal.ObjectUtil; @@ -35,7 +36,21 @@ public class JZlibDecoder extends ZlibDecoder { * @throws DecompressionException if failed to initialize zlib */ public JZlibDecoder() { - this(ZlibWrapper.ZLIB); + this(ZlibWrapper.ZLIB, 0); + } + + /** + * Creates a new instance with the default wrapper ({@link ZlibWrapper#ZLIB}) + * and specified maximum buffer allocation. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. + * + * @throws DecompressionException if failed to initialize zlib + */ + public JZlibDecoder(int maxAllocation) { + this(ZlibWrapper.ZLIB, maxAllocation); } /** @@ -44,6 +59,21 @@ public JZlibDecoder() { * @throws DecompressionException if failed to initialize zlib */ public JZlibDecoder(ZlibWrapper wrapper) { + this(wrapper, 0); + } + + /** + * Creates a new instance with the specified wrapper and maximum buffer allocation. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. 
+ * + * @throws DecompressionException if failed to initialize zlib + */ + public JZlibDecoder(ZlibWrapper wrapper, int maxAllocation) { + super(maxAllocation); + ObjectUtil.checkNotNull(wrapper, "wrapper"); int resultCode = z.init(ZlibUtil.convertWrapperType(wrapper)); @@ -60,6 +90,22 @@ public JZlibDecoder(ZlibWrapper wrapper) { * @throws DecompressionException if failed to initialize zlib */ public JZlibDecoder(byte[] dictionary) { + this(dictionary, 0); + } + + /** + * Creates a new instance with the specified preset dictionary and maximum buffer allocation. + * The wrapper is always {@link ZlibWrapper#ZLIB} because it is the only format that + * supports the preset dictionary. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. + * + * @throws DecompressionException if failed to initialize zlib + */ + public JZlibDecoder(byte[] dictionary, int maxAllocation) { + super(maxAllocation); this.dictionary = ObjectUtil.checkNotNull(dictionary, "dictionary"); int resultCode; resultCode = z.inflateInit(JZlib.W_ZLIB); @@ -105,11 +151,11 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t final int oldNextInIndex = z.next_in_index; // Configure output. 
- ByteBuf decompressed = ctx.alloc().heapBuffer(inputLength << 1); + ByteBuf decompressed = prepareDecompressBuffer(ctx, null, inputLength << 1); try { loop: for (;;) { - decompressed.ensureWritable(z.avail_in << 1); + decompressed = prepareDecompressBuffer(ctx, decompressed, z.avail_in << 1); z.avail_out = decompressed.writableBytes(); z.next_out = decompressed.array(); z.next_out_index = decompressed.arrayOffset() + decompressed.writerIndex(); @@ -165,4 +211,9 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t z.next_out = null; } } + + @Override + protected void decompressionBufferExhausted(ByteBuf buffer) { + finished = true; + } } diff --git a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java index 776d1e7816b..7e694222aaf 100644 --- a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java @@ -16,6 +16,7 @@ package io.netty.handler.codec.compression; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; import io.netty.channel.ChannelHandlerContext; import io.netty.util.internal.ObjectUtil; @@ -65,7 +66,19 @@ private enum GzipState { * Creates a new instance with the default wrapper ({@link ZlibWrapper#ZLIB}). */ public JdkZlibDecoder() { - this(ZlibWrapper.ZLIB, null, false); + this(ZlibWrapper.ZLIB, null, false, 0); + } + + /** + * Creates a new instance with the default wrapper ({@link ZlibWrapper#ZLIB}) + * and the specified maximum buffer allocation. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. + */ + public JdkZlibDecoder(int maxAllocation) { + this(ZlibWrapper.ZLIB, null, false, maxAllocation); } /** @@ -74,7 +87,20 @@ public JdkZlibDecoder() { * supports the preset dictionary. 
*/ public JdkZlibDecoder(byte[] dictionary) { - this(ZlibWrapper.ZLIB, dictionary, false); + this(ZlibWrapper.ZLIB, dictionary, false, 0); + } + + /** + * Creates a new instance with the specified preset dictionary and maximum buffer allocation. + * The wrapper is always {@link ZlibWrapper#ZLIB} because it is the only format that + * supports the preset dictionary. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. + */ + public JdkZlibDecoder(byte[] dictionary, int maxAllocation) { + this(ZlibWrapper.ZLIB, dictionary, false, maxAllocation); } /** @@ -83,18 +109,41 @@ public JdkZlibDecoder(byte[] dictionary) { * supported atm. */ public JdkZlibDecoder(ZlibWrapper wrapper) { - this(wrapper, null, false); + this(wrapper, null, false, 0); + } + + /** + * Creates a new instance with the specified wrapper and maximum buffer allocation. + * Be aware that only {@link ZlibWrapper#GZIP}, {@link ZlibWrapper#ZLIB} and {@link ZlibWrapper#NONE} are + * supported atm. + * + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. 
+ */ + public JdkZlibDecoder(ZlibWrapper wrapper, int maxAllocation) { + this(wrapper, null, false, maxAllocation); } public JdkZlibDecoder(ZlibWrapper wrapper, boolean decompressConcatenated) { - this(wrapper, null, decompressConcatenated); + this(wrapper, null, decompressConcatenated, 0); + } + + public JdkZlibDecoder(ZlibWrapper wrapper, boolean decompressConcatenated, int maxAllocation) { + this(wrapper, null, decompressConcatenated, maxAllocation); } public JdkZlibDecoder(boolean decompressConcatenated) { - this(ZlibWrapper.GZIP, null, decompressConcatenated); + this(ZlibWrapper.GZIP, null, decompressConcatenated, 0); } - private JdkZlibDecoder(ZlibWrapper wrapper, byte[] dictionary, boolean decompressConcatenated) { + public JdkZlibDecoder(boolean decompressConcatenated, int maxAllocation) { + this(ZlibWrapper.GZIP, null, decompressConcatenated, maxAllocation); + } + + private JdkZlibDecoder(ZlibWrapper wrapper, byte[] dictionary, boolean decompressConcatenated, int maxAllocation) { + super(maxAllocation); + ObjectUtil.checkNotNull(wrapper, "wrapper"); this.decompressConcatenated = decompressConcatenated; @@ -177,7 +226,7 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t inflater.setInput(array); } - ByteBuf decompressed = ctx.alloc().heapBuffer(inflater.getRemaining() << 1); + ByteBuf decompressed = prepareDecompressBuffer(ctx, null, inflater.getRemaining() << 1); try { boolean readFooter = false; while (!inflater.needsInput()) { @@ -208,7 +257,7 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t } break; } else { - decompressed.ensureWritable(inflater.getRemaining() << 1); + decompressed = prepareDecompressBuffer(ctx, decompressed, inflater.getRemaining() << 1); } } @@ -238,6 +287,11 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t } } + @Override + protected void decompressionBufferExhausted(ByteBuf buffer) { + finished = true; + } + @Override protected 
void handlerRemoved0(ChannelHandlerContext ctx) throws Exception { super.handlerRemoved0(ctx); diff --git a/codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java index d01bc6b4de7..26fd3e74a76 100644 --- a/codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java +++ b/codec/src/main/java/io/netty/handler/codec/compression/ZlibDecoder.java @@ -16,6 +16,8 @@ package io.netty.handler.codec.compression; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; +import io.netty.channel.ChannelHandlerContext; import io.netty.handler.codec.ByteToMessageDecoder; /** @@ -23,9 +25,72 @@ */ public abstract class ZlibDecoder extends ByteToMessageDecoder { + /** + * Maximum allowed size of the decompression buffer. + */ + protected final int maxAllocation; + + /** + * Same as {@link #ZlibDecoder(int)} with maxAllocation = 0. + */ + public ZlibDecoder() { + this(0); + } + + /** + * Construct a new ZlibDecoder. + * @param maxAllocation + * Maximum size of the decompression buffer. Must be &gt;= 0. + * If zero, maximum size is decided by the {@link ByteBufAllocator}. + */ + public ZlibDecoder(int maxAllocation) { + if (maxAllocation < 0) { + throw new IllegalArgumentException("maxAllocation must be >= 0"); + } + this.maxAllocation = maxAllocation; + } + /** * Returns {@code true} if and only if the end of the compressed stream * has been reached. */ public abstract boolean isClosed(); + + /** + * Allocate or expand the decompression buffer, without exceeding the maximum allocation. + * Calls {@link #decompressionBufferExhausted(ByteBuf)} if the buffer is full and cannot be expanded further. 
+ */ + protected ByteBuf prepareDecompressBuffer(ChannelHandlerContext ctx, ByteBuf buffer, int preferredSize) { + if (buffer == null) { + if (maxAllocation == 0) { + return ctx.alloc().heapBuffer(preferredSize); + } + + return ctx.alloc().heapBuffer(Math.min(preferredSize, maxAllocation), maxAllocation); + } + + // this always expands the buffer if possible, even if the expansion is less than preferredSize + // we throw the exception only if the buffer could not be expanded at all + // this means that one final attempt to deserialize will always be made with the buffer at maxAllocation + if (buffer.ensureWritable(preferredSize, true) == 1) { + // buffer must be consumed so subclasses don't add it to output + // we therefore duplicate it when calling decompressionBufferExhausted() to guarantee non-interference + // but wait until after to consume it so the subclass can tell how much output is really in the buffer + decompressionBufferExhausted(buffer.duplicate()); + buffer.skipBytes(buffer.readableBytes()); + throw new DecompressionException("Decompression buffer has reached maximum size: " + buffer.maxCapacity()); + } + + return buffer; + } + + /** + * Called when the decompression buffer cannot be expanded further. + * Default implementation is a no-op, but subclasses can override in case they want to + * do something before the {@link DecompressionException} is thrown, such as log the + * data that was decompressed so far. + */ + protected void decompressionBufferExhausted(ByteBuf buffer) { + } + }
diff --git a/codec/src/test/java/io/netty/handler/codec/compression/JZlibTest.java b/codec/src/test/java/io/netty/handler/codec/compression/JZlibTest.java index 28f3919c604..015559ed3b7 100644 --- a/codec/src/test/java/io/netty/handler/codec/compression/JZlibTest.java +++ b/codec/src/test/java/io/netty/handler/codec/compression/JZlibTest.java @@ -23,7 +23,7 @@ protected ZlibEncoder createEncoder(ZlibWrapper wrapper) { } @Override - protected ZlibDecoder createDecoder(ZlibWrapper wrapper) { - return new JZlibDecoder(wrapper); + protected ZlibDecoder createDecoder(ZlibWrapper wrapper, int maxAllocation) { + return new JZlibDecoder(wrapper, maxAllocation); } } diff --git a/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibTest.java b/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibTest.java index 54a48a9caeb..5ff19f11540 100644 --- a/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibTest.java +++ b/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibTest.java @@ -38,8 +38,8 @@ protected ZlibEncoder createEncoder(ZlibWrapper wrapper) { } @Override - protected ZlibDecoder createDecoder(ZlibWrapper wrapper) { - return new JdkZlibDecoder(wrapper); + protected ZlibDecoder createDecoder(ZlibWrapper wrapper, int maxAllocation) { + return new JdkZlibDecoder(wrapper, maxAllocation); } @Test(expected = DecompressionException.class) diff --git a/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest1.java b/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest1.java index 9e16e1a3dcd..3c312742c85 100644 --- a/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest1.java +++ b/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest1.java @@ -23,7 +23,7 @@ protected ZlibEncoder createEncoder(ZlibWrapper wrapper) { } @Override - protected ZlibDecoder createDecoder(ZlibWrapper wrapper) { - return new JZlibDecoder(wrapper); + protected ZlibDecoder createDecoder(ZlibWrapper wrapper, 
int maxAllocation) { + return new JZlibDecoder(wrapper, maxAllocation); } } diff --git a/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest2.java b/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest2.java index 8717019ed95..00c6e18c424 100644 --- a/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest2.java +++ b/codec/src/test/java/io/netty/handler/codec/compression/ZlibCrossTest2.java @@ -25,8 +25,8 @@ protected ZlibEncoder createEncoder(ZlibWrapper wrapper) { } @Override - protected ZlibDecoder createDecoder(ZlibWrapper wrapper) { - return new JdkZlibDecoder(wrapper); + protected ZlibDecoder createDecoder(ZlibWrapper wrapper, int maxAllocation) { + return new JdkZlibDecoder(wrapper, maxAllocation); } @Test(expected = DecompressionException.class) diff --git a/codec/src/test/java/io/netty/handler/codec/compression/ZlibTest.java b/codec/src/test/java/io/netty/handler/codec/compression/ZlibTest.java index 24f76b2a810..5e9d1288fe8 100644 --- a/codec/src/test/java/io/netty/handler/codec/compression/ZlibTest.java +++ b/codec/src/test/java/io/netty/handler/codec/compression/ZlibTest.java @@ -15,7 +15,9 @@ */ package io.netty.handler.codec.compression; +import io.netty.buffer.AbstractByteBufAllocator; import io.netty.buffer.ByteBuf; +import io.netty.buffer.ByteBufAllocator; import io.netty.buffer.ByteBufInputStream; import io.netty.buffer.Unpooled; import io.netty.channel.embedded.EmbeddedChannel; @@ -88,8 +90,12 @@ public abstract class ZlibTest { rand.nextBytes(BYTES_LARGE); } + protected ZlibDecoder createDecoder(ZlibWrapper wrapper) { + return createDecoder(wrapper, 0); + } + protected abstract ZlibEncoder createEncoder(ZlibWrapper wrapper); - protected abstract ZlibDecoder createDecoder(ZlibWrapper wrapper); + protected abstract ZlibDecoder createDecoder(ZlibWrapper wrapper, int maxAllocation); @Test public void testGZIP2() throws Exception { @@ -345,6 +351,25 @@ public void testZLIB_OR_NONE3() throws Exception { 
testCompressLarge(ZlibWrapper.GZIP, ZlibWrapper.ZLIB_OR_NONE); } + @Test + public void testMaxAllocation() throws Exception { + int maxAllocation = 1024; + ZlibDecoder decoder = createDecoder(ZlibWrapper.ZLIB, maxAllocation); + EmbeddedChannel chDecoder = new EmbeddedChannel(decoder); + TestByteBufAllocator alloc = new TestByteBufAllocator(chDecoder.alloc()); + chDecoder.config().setAllocator(alloc); + + try { + chDecoder.writeInbound(Unpooled.wrappedBuffer(deflate(BYTES_LARGE))); + fail("decompressed size > maxAllocation, so should have thrown exception"); + } catch (DecompressionException e) { + assertTrue(e.getMessage().startsWith("Decompression buffer has reached maximum size")); + assertEquals(maxAllocation, alloc.getMaxAllocation()); + assertTrue(decoder.isClosed()); + assertFalse(chDecoder.finish()); + } + } + private static byte[] gzip(byte[] bytes) throws IOException { ByteArrayOutputStream out = new ByteArrayOutputStream(); GZIPOutputStream stream = new GZIPOutputStream(out); @@ -360,4 +385,34 @@ private static byte[] deflate(byte[] bytes) throws IOException { stream.close(); return out.toByteArray(); } + + private static final class TestByteBufAllocator extends AbstractByteBufAllocator { + private ByteBufAllocator wrapped; + private int maxAllocation; + + TestByteBufAllocator(ByteBufAllocator wrapped) { + this.wrapped = wrapped; + } + + public int getMaxAllocation() { + return maxAllocation; + } + + @Override + public boolean isDirectBufferPooled() { + return wrapped.isDirectBufferPooled(); + } + + @Override + protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) { + maxAllocation = Math.max(maxAllocation, maxCapacity); + return wrapped.heapBuffer(initialCapacity, maxCapacity); + } + + @Override + protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) { + maxAllocation = Math.max(maxAllocation, maxCapacity); + return wrapped.directBuffer(initialCapacity, maxCapacity); + } + } }
val
test
"2019-12-27T09:14:50"
"2016-12-30T18:14:58Z"
Scottmitch
val
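The row above adds a `maxAllocation` cap to Netty's zlib decoders so a hostile stream cannot force unbounded decompression-buffer growth. Below is a minimal, Netty-free sketch of the same bounded-growth idea using only `java.util.zip`; the class and method names are illustrative, not part of Netty or the patch.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BoundedInflate {
    // Compress helper for the demo below.
    static byte[] deflate(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 64];
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    // Inflate `compressed`, growing the output buffer on demand but never past
    // maxAllocation bytes -- the same guard prepareDecompressBuffer() enforces.
    static byte[] inflateBounded(byte[] compressed, int maxAllocation) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[Math.min(64, maxAllocation)];
        int written = 0;
        try {
            while (!inflater.finished()) {
                if (written == out.length) {
                    if (out.length >= maxAllocation) {
                        throw new IllegalStateException(
                                "Decompression buffer has reached maximum size: " + maxAllocation);
                    }
                    // One final attempt is always made with the buffer at maxAllocation.
                    out = Arrays.copyOf(out, Math.min(out.length * 2, maxAllocation));
                }
                int n = inflater.inflate(out, written, out.length - written);
                if (n == 0 && inflater.needsInput()) {
                    throw new IllegalStateException("truncated deflate stream");
                }
                written += n;
            }
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        } finally {
            inflater.end();
        }
        return Arrays.copyOf(out, written);
    }

    public static void main(String[] args) {
        byte[] comp = deflate(new byte[10_000]); // 10 KB of zeros compresses to a few bytes
        System.out.println(inflateBounded(comp, 1 << 20).length); // prints 10000
    }
}
```

As in the patch, the cap is checked only when the buffer is already full, so one last decode attempt always runs with the buffer at exactly `maxAllocation` before the exception is thrown.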
netty/netty/9919_9933
netty/netty
netty/netty/9919
netty/netty/9933
[ "keyword_pr_to_issue" ]
76fb4c894af15cd1e30495a91074b2d95940e451
41c47b41bf0caa2f0faffec901a374516fbd19f7
[ "Same problem repeated once again on the same server. \r\n```\r\n\"epollEventLoopGroup-7-3\" #22 prio=10 os_prio=0 cpu=67508345.04ms elapsed=617139.89s tid=0x00007f5b68006000 nid=0x3de5 runnable [0x00007f5b59674000]\r\n java.lang.Thread.State: RUNNABLE\r\n at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1432)\r\n at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1419)\r\n at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1414)\r\n at io.netty.buffer.AbstractByteBuf.getByte(AbstractByteBuf.java:356)\r\n at io.netty.buffer.AbstractByteBuf.getUnsignedByte(AbstractByteBuf.java:369)\r\n at io.netty.handler.ssl.AbstractSniHandler.decode(AbstractSniHandler.java:62)\r\n at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:493)\r\n at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)\r\n at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:271)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)\r\n at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)\r\n at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)\r\n at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\r\n at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)\r\n 
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)\r\n at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\r\n at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)\r\n at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)\r\n at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)\r\n at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\r\n at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\r\n at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\r\n at java.lang.Thread.run(java.base@12.0.2/Thread.java:835)\r\n```", "Thanks @igor-lukyanov, I think I found the problem (introduced in 4.1.44.Final) - see #9933. I have renamed this issue to better reflect the bug, hope that's ok." ]
[ "add `fail();` above", "@normanmaurer done, I actually just copied the existing `testFallbackToDefaultContext()` as a starting point, which had this `fail()` commented out for some reason. So I've uncommented it there too since an exception does also get thrown in that case.", "@njhill thanks... Lets see if this works for all JDK versions that we use on the CI :) If so we can close the issue as well. ", "@njhill after some more digging this actually depends on if `jdkCompatibilityMode` is true of not. Let me do a follow up to fix it and then investigate later on." ]
"2020-01-09T23:01:58Z"
[ "defect" ]
Event loop threads hang in SniHandler for SSL records with majorVersion != 3
Under some unknown and rare circumstances 2 worker threads started to consume 100% CPU time of a single core each, permanently and indefinitely. It didn't prevent the app from its normal functioning, nevertheless excessive consumption of cpu is not a normal behaviour. Noticed this behaviour once by now. ### Actual behavior Epoll transport enabled. Two threads (out of four) burn 100% CPU time, each has the following stacktrace: "epollEventLoopGroup-7-1" #20 prio=10 os_prio=0 cpu=120413943.05ms elapsed=889961.34s tid=0x00007fa110003000 nid=0x4381 runnable [0x00007fa124c2d000] java.lang.Thread.State: RUNNABLE "epollEventLoopGroup-7-3" #22 prio=10 os_prio=0 cpu=55102833.55ms elapsed=889961.34s tid=0x00007fa110008800 nid=0x4383 runnable [0x00007fa124a2b000] ``` "epollEventLoopGroup-7-3" #22 prio=10 os_prio=0 cpu=54941845.34ms elapsed=889798.92s tid=0x00007fa110008800 nid=0x4383 runnable [0x00007fa124a2b000] java.lang.Thread.State: RUNNABLE at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1432) at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1419) at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1414) at io.netty.buffer.AbstractByteBuf.getByte(AbstractByteBuf.java:356) at io.netty.buffer.AbstractByteBuf.getUnsignedByte(AbstractByteBuf.java:369) at io.netty.handler.ssl.AbstractSniHandler.decode(AbstractSniHandler.java:62) at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:493) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:271) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792) at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475) at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(java.base@12.0.2/Thread.java:835) ``` ### Steps to reproduce Haven't yet found the scenario. ### Netty version 4.1.44 ### JVM version (e.g. `java -version`) openjdk 12.0.2 2019-07-16 OpenJDK Runtime Environment (build 12.0.2+9-Debian-1) OpenJDK 64-Bit Server VM (build 12.0.2+9-Debian-1, mixed mode, sharing) ### OS version (e.g. `uname -a`) Linux hostname 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3 (2019-09-02) x86_64 GNU/Linux
[ "handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java" ]
[ "handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java" ]
diff --git a/handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java b/handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java index 8cd9bf0f861..fa7e9059b9d 100644 --- a/handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/AbstractSniHandler.java @@ -153,8 +153,9 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t select(ctx, extractSniHostname(handshakeBuffer, 0, handshakeLength)); return; } + break; } - break; + // fall-through default: // not tls, ssl or application data, do not try sni select(ctx, null);
diff --git a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java index 8cd3d766cda..3f468db0dd8 100644 --- a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java +++ b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java @@ -320,7 +320,7 @@ public void testFallbackToDefaultContext() throws Exception { ch.writeInbound(Unpooled.wrappedBuffer(message)); // TODO(scott): This should fail because the engine should reject zero length records during handshake. // See https://github.com/netty/netty/issues/6348. - // fail(); + fail(); } catch (Exception e) { // expected } @@ -344,6 +344,43 @@ public void testFallbackToDefaultContext() throws Exception { } } + @Test(timeout = 10000) + public void testMajorVersionNot3() throws Exception { + SslContext nettyContext = makeSslContext(provider, false); + + try { + DomainNameMapping<SslContext> mapping = new DomainNameMappingBuilder<SslContext>(nettyContext).build(); + + SniHandler handler = new SniHandler(mapping); + EmbeddedChannel ch = new EmbeddedChannel(handler); + + // invalid + byte[] message = {22, 2, 0, 0, 0}; + try { + // Push the handshake message. + ch.writeInbound(Unpooled.wrappedBuffer(message)); + fail(); + } catch (Exception e) { + // expected + } + + ch.close(); + + // When the channel is closed the SslHandler will write an empty buffer to the channel. + ByteBuf buf = ch.readOutbound(); + if (buf != null) { + assertFalse(buf.isReadable()); + buf.release(); + } + + assertThat(ch.finish(), is(false)); + assertThat(handler.hostname(), nullValue()); + assertThat(handler.sslContext(), is(nettyContext)); + } finally { + releaseAll(nettyContext); + } + } + @Test public void testSniWithApnHandler() throws Exception { SslContext nettyContext = makeSslContext(provider, true);
train
test
"2020-01-09T15:08:49"
"2020-01-03T09:58:45Z"
igor-lukyanov
val
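The one-line gold patch above is a pure control-flow fix: for a handshake record whose major version is not 3, the old `break` left the input unconsumed, so `ByteToMessageDecoder` re-ran `decode` on the same bytes forever; the fix falls through to the `default` branch, which gives up on SNI. A stripped-down sketch of that switch (class and method names are illustrative, not Netty API):

```java
public class SniFallThrough {
    static final int HANDSHAKE = 22;

    // Returns what the decoder should do with a record header.
    static String classify(int contentType, int majorVersion) {
        switch (contentType) {
            case HANDSHAKE:
                if (majorVersion == 3) {
                    return "extract-sni"; // SSLv3/TLS handshake: try to read the hostname
                }
                // fall-through: majorVersion != 3, stop retrying instead of spinning
            default:
                return "no-sni"; // hand off with a null hostname
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(HANDSHAKE, 2)); // prints no-sni
    }
}
```

The buggy shape was `break;` after the `if`, which fell out of the switch without either consuming bytes or deciding anything, which is exactly the state the 100%-CPU threads in the stack trace were stuck in.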
netty/netty/9944_9967
netty/netty
netty/netty/9944
netty/netty/9967
[ "keyword_pr_to_issue" ]
066a180a43186ee5a54afdcca9077ab07328ff6a
fb3ced28cf61f2dd8638a11fd0ede62b46cc2db7
[ "Sounds Good \n\n> Am 11.01.2020 um 12:32 schrieb Dmitriy Dumanskiy <notifications@github.com>:\n> \n> \n> WebSocketCloseFrameHandler could be easily merged into WebSocketProtocolHandler and thus we don't need to add/hold an additional handler in the websockets pipeline. I think it would nice improvement for websockets pipeline.\n> \n> @normanmaurer @ursaj @amizurov WDYT?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n" ]
[ "@doom369 as this is package-private we can just remove the class imho. ", "Removed" ]
"2020-01-25T23:06:37Z"
[]
Merge WebSocketCloseFrameHandler into WebSocketProtocolHandler in order to minimize pipeline
`WebSocketCloseFrameHandler` could be easily merged into `WebSocketProtocolHandler` and thus we don't need to add/hold an additional handler in the websockets pipeline. I think it would be a nice improvement for the websockets pipeline. @normanmaurer @ursaj @amizurov WDYT?
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketCloseFrameHandler.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java" ]
[ "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java", "codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java" ]
[]
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java index 1b8928aae49..8fda6eb008b 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandler.java @@ -78,7 +78,8 @@ public enum ClientHandshakeStateEvent { * Client protocol configuration. */ public WebSocketClientProtocolHandler(WebSocketClientProtocolConfig clientConfig) { - super(checkNotNull(clientConfig, "clientConfig").dropPongFrames()); + super(checkNotNull(clientConfig, "clientConfig").dropPongFrames(), + clientConfig.sendCloseFrame(), clientConfig.forceCloseTimeoutMillis()); this.handshaker = WebSocketClientHandshakerFactory.newHandshaker( clientConfig.webSocketUri(), clientConfig.version(), @@ -378,9 +379,5 @@ public void handlerAdded(ChannelHandlerContext ctx) { ctx.pipeline().addBefore(ctx.name(), Utf8FrameValidator.class.getName(), new Utf8FrameValidator()); } - if (clientConfig.sendCloseFrame() != null) { - cp.addBefore(ctx.name(), WebSocketCloseFrameHandler.class.getName(), - new WebSocketCloseFrameHandler(clientConfig.sendCloseFrame(), clientConfig.forceCloseTimeoutMillis())); - } } } diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketCloseFrameHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketCloseFrameHandler.java deleted file mode 100644 index 3d4284a93fa..00000000000 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketCloseFrameHandler.java +++ /dev/null @@ -1,97 +0,0 @@ -/* - * Copyright 2019 The Netty Project - * - * The Netty Project licenses this file to you under the Apache License, - * version 2.0 (the "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - * License for the specific language governing permissions and limitations - * under the License. - */ -package io.netty.handler.codec.http.websocketx; - -import io.netty.channel.ChannelFuture; -import io.netty.channel.ChannelFutureListener; -import io.netty.channel.ChannelHandlerContext; -import io.netty.channel.ChannelOutboundHandlerAdapter; -import io.netty.channel.ChannelPromise; -import io.netty.util.ReferenceCountUtil; -import io.netty.util.concurrent.ScheduledFuture; -import io.netty.util.internal.ObjectUtil; - -import java.nio.channels.ClosedChannelException; -import java.util.concurrent.TimeUnit; - -/** - * Send {@link CloseWebSocketFrame} message on channel close, if close frame was not sent before. 
- */ -final class WebSocketCloseFrameHandler extends ChannelOutboundHandlerAdapter { - private final WebSocketCloseStatus closeStatus; - private final long forceCloseTimeoutMillis; - private ChannelPromise closeSent; - - WebSocketCloseFrameHandler(WebSocketCloseStatus closeStatus, long forceCloseTimeoutMillis) { - this.closeStatus = ObjectUtil.checkNotNull(closeStatus, "closeStatus"); - this.forceCloseTimeoutMillis = forceCloseTimeoutMillis; - } - - @Override - public void close(final ChannelHandlerContext ctx, final ChannelPromise promise) throws Exception { - if (!ctx.channel().isActive()) { - ctx.close(promise); - return; - } - if (closeSent == null) { - write(ctx, new CloseWebSocketFrame(closeStatus), ctx.newPromise()); - } - flush(ctx); - applyCloseSentTimeout(ctx); - closeSent.addListener(new ChannelFutureListener() { - @Override - public void operationComplete(ChannelFuture future) { - ctx.close(promise); - } - }); - } - - @Override - public void write(final ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception { - if (closeSent != null) { - ReferenceCountUtil.release(msg); - promise.setFailure(new ClosedChannelException()); - return; - } - if (msg instanceof CloseWebSocketFrame) { - promise = promise.unvoid(); - closeSent = promise; - } - super.write(ctx, msg, promise); - } - - private void applyCloseSentTimeout(ChannelHandlerContext ctx) { - if (closeSent.isDone() || forceCloseTimeoutMillis < 0) { - return; - } - - final ScheduledFuture<?> timeoutTask = ctx.executor().schedule(new Runnable() { - @Override - public void run() { - if (!closeSent.isDone()) { - closeSent.tryFailure(new WebSocketHandshakeException("send close frame timed out")); - } - } - }, forceCloseTimeoutMillis, TimeUnit.MILLISECONDS); - - closeSent.addListener(new ChannelFutureListener() { - @Override - public void operationComplete(ChannelFuture future) { - timeoutTask.cancel(false); - } - }); - } -} diff --git 
a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java index 84f96ea344d..ccbccac0d06 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketProtocolHandler.java @@ -16,14 +16,27 @@ package io.netty.handler.codec.http.websocketx; +import io.netty.channel.ChannelFuture; +import io.netty.channel.ChannelFutureListener; import io.netty.channel.ChannelHandlerContext; +import io.netty.channel.ChannelOutboundHandler; +import io.netty.channel.ChannelPromise; import io.netty.handler.codec.MessageToMessageDecoder; +import io.netty.util.ReferenceCountUtil; +import io.netty.util.concurrent.ScheduledFuture; +import java.net.SocketAddress; +import java.nio.channels.ClosedChannelException; import java.util.List; +import java.util.concurrent.TimeUnit; -abstract class WebSocketProtocolHandler extends MessageToMessageDecoder<WebSocketFrame> { +abstract class WebSocketProtocolHandler extends MessageToMessageDecoder<WebSocketFrame> + implements ChannelOutboundHandler { private final boolean dropPongFrames; + private final WebSocketCloseStatus closeStatus; + private final long forceCloseTimeoutMillis; + private ChannelPromise closeSent; /** * Creates a new {@link WebSocketProtocolHandler} that will <i>drop</i> {@link PongWebSocketFrame}s. 
@@ -40,7 +53,15 @@ abstract class WebSocketProtocolHandler extends MessageToMessageDecoder<WebSocke * {@code true} if {@link PongWebSocketFrame}s should be dropped */ WebSocketProtocolHandler(boolean dropPongFrames) { + this(dropPongFrames, null, 0L); + } + + WebSocketProtocolHandler(boolean dropPongFrames, + WebSocketCloseStatus closeStatus, + long forceCloseTimeoutMillis) { this.dropPongFrames = dropPongFrames; + this.closeStatus = closeStatus; + this.forceCloseTimeoutMillis = forceCloseTimeoutMillis; } @Override @@ -65,6 +86,94 @@ private static void readIfNeeded(ChannelHandlerContext ctx) { } } + @Override + public void close(final ChannelHandlerContext ctx, final ChannelPromise promise) throws Exception { + if (closeStatus == null || !ctx.channel().isActive()) { + ctx.close(promise); + } else { + if (closeSent == null) { + write(ctx, new CloseWebSocketFrame(closeStatus), ctx.newPromise()); + } + flush(ctx); + applyCloseSentTimeout(ctx); + closeSent.addListener(new ChannelFutureListener() { + @Override + public void operationComplete(ChannelFuture future) { + ctx.close(promise); + } + }); + } + } + + @Override + public void write(final ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception { + if (closeSent != null) { + ReferenceCountUtil.release(msg); + promise.setFailure(new ClosedChannelException()); + return; + } + if (msg instanceof CloseWebSocketFrame) { + promise = promise.unvoid(); + closeSent = promise; + } + ctx.write(msg, promise); + } + + private void applyCloseSentTimeout(ChannelHandlerContext ctx) { + if (closeSent.isDone() || forceCloseTimeoutMillis < 0) { + return; + } + + final ScheduledFuture<?> timeoutTask = ctx.executor().schedule(new Runnable() { + @Override + public void run() { + if (!closeSent.isDone()) { + closeSent.tryFailure(new WebSocketHandshakeException("send close frame timed out")); + } + } + }, forceCloseTimeoutMillis, TimeUnit.MILLISECONDS); + + closeSent.addListener(new ChannelFutureListener() { + 
@Override + public void operationComplete(ChannelFuture future) { + timeoutTask.cancel(false); + } + }); + } + + @Override + public void bind(ChannelHandlerContext ctx, SocketAddress localAddress, + ChannelPromise promise) throws Exception { + ctx.bind(localAddress, promise); + } + + @Override + public void connect(ChannelHandlerContext ctx, SocketAddress remoteAddress, + SocketAddress localAddress, ChannelPromise promise) throws Exception { + ctx.connect(remoteAddress, localAddress, promise); + } + + @Override + public void disconnect(ChannelHandlerContext ctx, ChannelPromise promise) + throws Exception { + ctx.disconnect(promise); + } + + @Override + public void deregister(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception { + ctx.deregister(promise); + } + + @Override + public void read(ChannelHandlerContext ctx) throws Exception { + ctx.read(); + } + + @Override + public void flush(ChannelHandlerContext ctx) throws Exception { + ctx.flush(); + } + @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { ctx.fireExceptionCaught(cause); diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java index 12cf291c1d8..c3a3b2eadc1 100644 --- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java +++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerProtocolHandler.java @@ -113,7 +113,10 @@ public String selectedSubprotocol() { * Server protocol configuration. 
*/ public WebSocketServerProtocolHandler(WebSocketServerProtocolConfig serverConfig) { - super(checkNotNull(serverConfig, "serverConfig").dropPongFrames()); + super(checkNotNull(serverConfig, "serverConfig").dropPongFrames(), + serverConfig.sendCloseFrame(), + serverConfig.forceCloseTimeoutMillis() + ); this.serverConfig = serverConfig; } @@ -229,10 +232,6 @@ public void handlerAdded(ChannelHandlerContext ctx) { cp.addBefore(ctx.name(), Utf8FrameValidator.class.getName(), new Utf8FrameValidator()); } - if (serverConfig.sendCloseFrame() != null) { - cp.addBefore(ctx.name(), WebSocketCloseFrameHandler.class.getName(), - new WebSocketCloseFrameHandler(serverConfig.sendCloseFrame(), serverConfig.forceCloseTimeoutMillis())); - } } @Override
null
train
test
"2020-01-24T15:40:39"
"2020-01-11T11:32:16Z"
doom369
val
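The merged handler above keeps the same close semantics the removed `WebSocketCloseFrameHandler` had: write a close frame, arm a force-close timer, and close the channel when either the frame is flushed or the timer fires. A Netty-free sketch of that pattern, with `CompletableFuture` standing in for the `closeSent` promise (all names illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CloseWithTimeout {
    // Wait for `closeSent` to complete, but fail it after forceCloseTimeoutMillis
    // so a peer that never acks the close frame cannot stall the close forever.
    static String awaitClose(CompletableFuture<String> closeSent,
                             long forceCloseTimeoutMillis,
                             ScheduledExecutorService timer) {
        timer.schedule(
                () -> closeSent.completeExceptionally(
                        new TimeoutException("send close frame timed out")),
                forceCloseTimeoutMillis, TimeUnit.MILLISECONDS);
        try {
            return "clean-close: " + closeSent.get();
        } catch (ExecutionException e) {
            return "forced-close: " + e.getCause().getMessage();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        CompletableFuture<String> neverAcked = new CompletableFuture<>();
        System.out.println(awaitClose(neverAcked, 50, timer)); // prints forced-close: send close frame timed out
        timer.shutdown();
    }
}
```

In the real patch the timer task is cancelled when `closeSent` completes first; `completeExceptionally` on an already-done future is a harmless no-op here, which keeps the sketch short.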
netty/netty/9971_9974
netty/netty
netty/netty/9971
netty/netty/9974
[ "keyword_pr_to_issue" ]
2023a4f60759772261280159bccb4869e63c14e7
663fbaa50661eb5576d88aa0d053f2ccc2fae317
[ "https://github.com/netty/netty/pull/9972/commits/69b3089cd6fd173c0ea9348033ee3919af14ab15" ]
[ "How about `out.clear()` here too?", "this should not be needed. " ]
"2020-01-28T13:40:43Z"
[]
SslHandler.flush throws IllegalArgumentException on upstream write exception
### Expected behavior Did not expect flush to throw IllegalArgumentException ### Actual behavior ``` promise already done: DefaultChannelPromise@3d07c0a4( io.netty.channel.AbstractChannelHandlerContext.isNotValidPromise - Line 910 (AbstractChannelHandlerContext.java) io.netty.channel.AbstractChannelHandlerContext.write - Line 714 (AbstractChannelHandlerContext.java) io.netty.handler.ssl.SslHandler.finishWrap - Line 850 (SslHandler.java) io.netty.handler.ssl.SslHandler.wrap - Line 836 (SslHandler.java) io.netty.handler.ssl.SslHandler.wrapAndFlush - Line 753 (SslHandler.java) io.netty.handler.ssl.SslHandler.flush - Line 734 (SslHandler.java) io.netty.channel.AbstractChannelHandlerContext.invokeFlush0 - Line 776 (AbstractChannelHandlerContext.java) io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush - Line 802 (AbstractChannelHandlerContext.java) io.netty.channel.AbstractChannelHandlerContext.write - Line 814 (AbstractChannelHandlerContext.java) io.netty.channel.AbstractChannelHandlerContext.writeAndFlush - Line 794 (AbstractChannelHandlerContext.java) ``` ### Steps to reproduce Occurs when an upstream handler throws an exception in write() ### Minimal yet complete reproducer code (or URL to code) ### Netty version 4.1.22-current ### JVM version (e.g. `java -version`) 1.8 ### OS version (e.g. `uname -a`) linux
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[ "handler/src/main/java/io/netty/handler/ssl/SslHandler.java" ]
[]
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java index bb76f9a246b..b180b0424e6 100644 --- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java +++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java @@ -858,11 +858,25 @@ private void wrap(ChannelHandlerContext ctx, boolean inUnwrap) throws SSLExcepti case NOT_HANDSHAKING: setHandshakeSuccessIfStillHandshaking(); // deliberate fall-through - case NEED_WRAP: - finishWrap(ctx, out, promise, inUnwrap, false); + case NEED_WRAP: { + ChannelPromise p = promise; + + // Null out the promise so it is not reused in the finally block in the cause of + // finishWrap(...) throwing. promise = null; - out = null; + final ByteBuf b; + + if (out.isReadable()) { + // There is something in the out buffer. Ensure we null it out so it is not re-used. + b = out; + out = null; + } else { + // If out is not readable we can re-use it and so save an extra allocation + b = null; + } + finishWrap(ctx, b, p, inUnwrap, false); break; + } case NEED_UNWRAP: needUnwrap = true; return;
null
train
test
"2020-01-28T06:04:20"
"2020-01-27T21:19:35Z"
atcurtis
val
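The SslHandler patch above fixes a double-completion: an error path could re-enter `finishWrap(...)` with a promise that was already done, tripping the "promise already done" check. The fix hands the promise off through a local and nulls the field before the call that may throw. A minimal sketch of that hand-off discipline, with `CompletableFuture` in place of a Netty promise and illustrative names:

```java
import java.util.concurrent.CompletableFuture;

public class PromiseHandoff {
    private CompletableFuture<Void> promise = new CompletableFuture<>();

    // Move the promise into a local and null the field *first*, so a later
    // error/finally path sees null instead of an already-done promise.
    boolean finishWrap() {
        CompletableFuture<Void> p = promise;
        if (p == null) {
            return false; // already handed off; nothing to complete twice
        }
        promise = null;
        return p.complete(null);
    }

    public static void main(String[] args) {
        PromiseHandoff h = new PromiseHandoff();
        System.out.println(h.finishWrap()); // prints true
        System.out.println(h.finishWrap()); // prints false
    }
}
```

The same idea appears twice in the diff: both `promise` and `out` are nulled out at the hand-off point so the `finally` block cannot reuse either one.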