Communication Problem - Pay attention to any unsaved data etc. Invalid JSON from server: 1/x

Since upgrading from 6.x to v7, now on v7.1.7, my staging and production environments show the attached image at the top of the screen instead of the normal session-timeout black bar.

I am using the following configuration in our corporate AWS environment.

Two c4.large RHEL servers, each with 3.75GB of memory and 2 vCPUs.

Each server is running the web and core blocks.

They are fronted by an AWS load balancer configured for SSL offloading, so HTTPS terminates at the load balancer, which forwards to HTTP on port 8080 on the CUBA servers.
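
For reference, the offloading setup is conceptually along these lines in AWS CLI terms (ARNs and IDs are placeholders and our real setup has more pieces, so treat this as a sketch rather than the exact configuration):

    # Target group that speaks plain HTTP to the CUBA servers on 8080
    aws elbv2 create-target-group \
        --name cuba-web \
        --protocol HTTP --port 8080 \
        --vpc-id <vpc-id>

    # HTTPS listener terminating TLS at the load balancer and forwarding to that target group
    aws elbv2 create-listener \
        --load-balancer-arn <alb-arn> \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=<acm-certificate-arn> \
        --default-actions Type=forward,TargetGroupArn=<target-group-arn>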

I build using UberJAR and the .jar file runs on each server with the embedded Jetty web server.
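
For completeness, each instance is started with a plain java -jar command, roughly as below (the -port argument is the UberJar option as I remember it, so double-check it against the CUBA documentation):

    # Start the UberJar with its embedded Jetty listening on 8080
    java -jar fins-gfal-web-fins2s.jar -port 8080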

Everything was working fine for more than 12 months until I upgraded to 7.x and switched to SSL offloading. I had to switch to SSL offloading because I could no longer configure Jetty to handle HTTP-to-HTTPS rewriting, which to me is a mandatory requirement and should be an out-of-the-box option. I posted a solution on this site that worked with the Jetty version in 6, but it broke in 7.
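
For anyone wondering what I mean by HTTP-to-HTTPS rewriting at the Jetty level: with the Jetty 9 API the usual trick is to wrap the existing handler in a SecuredRedirectHandler via a custom Jetty XML config, roughly as sketched below. I'm showing it only to illustrate the approach; getting something like this injected into the v7 UberJar is exactly the part I could no longer make work.

    <?xml version="1.0"?>
    <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">
    <!-- Sketch only: redirect plain-HTTP requests to the secure port.
         Assumes the connector's HttpConfiguration has secureScheme/securePort set. -->
    <Configure id="Server" class="org.eclipse.jetty.server.Server">
      <Get id="oldHandler" name="handler"/>
      <Set name="handler">
        <New class="org.eclipse.jetty.server.handler.SecuredRedirectHandler">
          <Set name="handler"><Ref refid="oldHandler"/></Set>
        </New>
      </Set>
    </Configure>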

I read a post on here where the user was running Kubernetes and it was suggested that more RAM was needed. I find it hard to believe that 3.75GB isn’t enough, but I haven’t tried increasing it yet, as we are in the middle of end-user testing and training.

https://issues.jenkins-ci.org/browse/JENKINS-50132

I do see the error mentioned in those posts in my log files too:

java.nio.channels.ClosedChannelException

Any help is greatly appreciated.

Mike.

Here’s the other post I found mentioning the “Invalid JSON from server: 1/x” error.

I should also add the complete error from the app.log.

2020-08-13 05:40:07.452 WARN  [Atmosphere-Scheduler-0] org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension - 
java.nio.channels.ClosedChannelException: null
	at org.eclipse.jetty.websocket.common.io.FrameFlusher.enqueue(FrameFlusher.java:109) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.outgoingFrame(AbstractWebSocketConnection.java:515) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.AbstractExtension.nextOutgoingFrame(AbstractExtension.java:155) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.PerMessageDeflateExtension.nextOutgoingFrame(PerMessageDeflateExtension.java:123) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension.access$1000(CompressExtension.java:42) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.compress(CompressExtension.java:554) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.deflate(CompressExtension.java:451) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.process(CompressExtension.java:431) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension.outgoingFrame(CompressExtension.java:218) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.ExtensionStack$Flusher.process(ExtensionStack.java:400) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.outgoingFrame(ExtensionStack.java:277) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.uncheckedSendFrame(WebSocketRemoteEndpoint.java:307) ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.jsr356.JsrAsyncRemote.sendText(JsrAsyncRemote.java:189) ~[fins-gfal-web-fins2s.jar:na]
	at org.atmosphere.container.version.JSR356WebSocket.write(JSR356WebSocket.java:73) ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write(WebSocket.java:255) ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write(WebSocket.java:220) ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write(WebSocket.java:46) ~[shared/:na]
	at org.atmosphere.cpr.AtmosphereResponseImpl$Stream.write(AtmosphereResponseImpl.java:957) ~[shared/:na]
	at org.atmosphere.cpr.AtmosphereResponseImpl.write(AtmosphereResponseImpl.java:803) ~[shared/:na]
	at org.atmosphere.interceptor.HeartbeatInterceptor$4.call(HeartbeatInterceptor.java:356) ~[shared/:na]
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
	at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

In fact, I’m experiencing a similar situation.

Setup:

AWS Fargate (no kubernetes)
ALB without sticky sessions, but with TLS
UberJar in Docker
CUBA 7.2.x

Initially I was thinking about sticky sessions, but there is only one instance behind the LB…

Really interested in what the outcome of this will be.

I forgot to mention that taking the ELB out of the equation does stop the error, so I don’t believe it’s the code/build, just this configuration. The ELB is configured to rewrite any HTTP requests to HTTPS, and AWS eventually forwards 443 to 8080 and 80 to 8080. It’s a little more complex than that, as we’re using Target Groups and doing some rewriting in AWS to avoid users having to add /app onto the end of the URL.
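
In ALB terms, the HTTP-to-HTTPS part is just a port-80 listener with a redirect action, something like the sketch below (placeholders again, and the Target Group and /app path handling sit on top of this, so it's not the full picture):

    # Port 80 listener whose only job is to send clients to HTTPS
    aws elbv2 create-listener \
        --load-balancer-arn <alb-arn> \
        --protocol HTTP --port 80 \
        --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'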

So I have no doubt that it’s our AWS ELB configuration somewhere; I’m just looking for anyone else who may have encountered this error with this type of configuration.

As the error is being reported by the Jetty Atmosphere-Scheduler, there may be a setting in the Jetty configs I’m missing too.
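
One thing I may try, purely as an assumption on my part: telling CUBA/Vaadin to use long polling for server push instead of WebSocket, which should take Atmosphere's WebSocket path (and the ELB's handling of the upgrade) out of the picture. If I remember the property name correctly it goes into the web module's application properties, but please verify it against the CUBA docs before relying on it:

    # web-app.properties (property name from memory; verify before using)
    cuba.web.pushLongPolling = true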

I am using sticky sessions on the ELB.

Hi…
I’m still facing the problem, and it shows up very often. I’ve added 4GB of RAM and am waiting for a solution. I also doubt it is an error from the load balancer.

My setup is:
Rancher with Kubernetes (v1.17.4-rancher1-2)
Network Provider: Canal (network isolation available)
Load Balancer: Ingress
CUBA 7.1.x
UberJar in Docker

After many attempts at sticking with UberJAR and Jetty, I decided to switch to Tomcat.

Since switching to Tomcat, the Communication Problem has not appeared once in our Stage environment and we’ll be switching Production to Tomcat too.

Hi…
Is it possible to solve this error while using UberJar? Is there any update on this? We are stuck, as we are unable to use the CI/CD environment for deployment…

It would be very helpful if we could solve this error…

We are also experiencing this bug. Same error log as already posted.

Hi!

It’s an old topic, but I encountered the same problem with the UberJar running in a Docker container behind an nginx proxy in a second container. I noticed that the error occurs on WebSocket reconnection. Such reconnections happen regularly and most of the time everything works fine, but occasionally the reconnect fails with the mentioned error message.

Although I could not pinpoint the actual problem, in my case it helped to enforce HTTP/1.0 communication between the proxy and CUBA. By default the nginx-proxy image uses HTTP/1.1. The error is gone now, without any noticeable disadvantages. For those who are interested: I put the configuration directive “proxy_http_version 1.0;” into the server-domain-specific configuration located in the “vhost.d” folder, as sketched below.
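
Concretely, the override is a one-line file that nginx-proxy includes into the generated server block (the file name is your proxied domain; everything else stays as generated):

    # vhost.d/<your.domain> — extra directives merged into the server block by nginx-proxy
    # Forcing HTTP/1.0 towards the backend made the error go away for me
    proxy_http_version 1.0;

My guess, and it is only a guess, is that this works because the WebSocket upgrade requires HTTP/1.1, so the push transport silently falls back to something else; I have not verified that.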

It’s clearly a workaround, but it helps.
