How to prevent concurrency lock on flushBuffer in Spring Boot Tomcat Jackson?
I am having concurrency issues when writing JSON from my Spring Boot WAR app deployed to Tomcat 8. In the screenshot from AppDynamics there appears to be a considerable wait while the Jackson library is performing _flushBuffer.
This issue arises under load testing even with a small number of users (< 10).
I have configured the messageConverters in my configuration class.
@Override
public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
    converters.add(new MappingJackson2HttpMessageConverter(
            new Jackson2ObjectMapperBuilder()
                    .dateFormat(new SimpleDateFormat("yyyy-MM-dd'T'HH:mmZ"))
                    .mixIn(LiquidAssignment.class, InventoryProviderAssignmentMixin.class)
                    .deserializerByType(ActionData.class, new ActionDataDeserializer())
                    .build()));
    converters.add(new MappingJackson2XmlHttpMessageConverter());
}
I am using:
Spring Boot 1.5.4
Java 1.8
Jackson 2.9.7
Tomcat 8.5.33
java spring spring-boot tomcat jackson
asked Nov 12 at 20:34 by randomprogrammer1
1 Answer
Looking at the source code of UTF8JsonGenerator._flushBuffer(), there is no indication of LockSupport.parkNanos(). So it has probably been inlined by the JIT compiler from OutputStream.write().
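For context, here is a minimal sketch of what such a buffer flush boils down to (a hypothetical simplification, not Jackson's actual source). The write to the underlying OutputStream, which for a web app is Tomcat's socket-backed response stream, is where the thread can block on a slow client:

import java.io.IOException;
import java.io.OutputStream;

// Simplified sketch of a buffered generator flush (hypothetical, not Jackson's
// actual implementation). The generator accumulates bytes in memory and
// periodically writes them to the underlying OutputStream.
class FlushSketch {
    private final byte[] outputBuffer = new byte[8000];
    private int outputTail; // number of buffered bytes not yet written

    void flushBuffer(OutputStream out) throws IOException {
        if (outputTail > 0) {
            int len = outputTail;
            outputTail = 0;
            // This write can block until Tomcat has pushed the bytes toward the
            // client (or at least into the connection's send buffer), which is
            // why a slow client shows up as time spent in _flushBuffer.
            out.write(outputBuffer, 0, len);
        }
    }
}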
My guess is that this is the place where, for your application, Tomcat typically waits until the client has accepted all of the output (except for the last piece that fits into the typical connection buffer size) before it can close the connection.
We have had bad experiences with slow clients in the past. Until they have retrieved all of the output, they block a thread in Tomcat, and blocking a few dozen threads in Tomcat seriously reduces the throughput of a busy web app.
Increasing the number of threads isn't the best option, as the blocked threads also occupy a considerable amount of memory. What you want instead is for Tomcat to handle each request as quickly as possible and then move on to the next one.
We solved the problem by configuring our reverse proxy, which we always had in front of Tomcat, to immediately consume all output from Tomcat and deliver it to the client at the client's speed. The reverse proxy is very efficient at handling slow clients.
In our case we used nginx. We also looked at Apache httpd, but at the time it wasn't capable of doing this.
Additional Note
Clients that unexpectedly disconnect also look like slow clients to the server, as it takes some time until it is fully established that the connection is broken.
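As a minimal illustration of that point (a hypothetical servlet-style handler, not the code from the question), the broken connection typically only surfaces as an IOException some time after the client has gone away, and until then the writing thread stays blocked:

import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: writing a large response to a client that has silently
// disconnected keeps blocking until the TCP stack gives up; only then does the
// write fail with an IOException (which Tomcat surfaces as a ClientAbortException).
class ResponseWriterSketch {
    void writeLargeResponse(OutputStream responseStream, byte[] payload) {
        try {
            responseStream.write(payload);
            responseStream.flush();
        } catch (IOException e) {
            // Reached only once the broken connection has been detected, which
            // can take a noticeable amount of time; until then the Tomcat worker
            // thread is tied up, just like with a slow client.
        }
    }
}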
answered Nov 12 at 20:59 by Codo (edited Nov 12 at 21:07)