Communication Problem - Pay attention to any unsaved data etc. Invalid JSON from server: 1/x

Since upgrading from 6.x to v7, now on v7.1.7, my staging and production environment show the attached image at the top of the screen instead of the normal session timeout black bar.

I am using the following configuration in our corporate AWS environment.

Two c4.large RHEL servers. 3.75GB memory with 2 vcores.

Each server is running the web and core blocks.

They are fronted by an AWS load balancer configured for SSL offloading: HTTPS terminates at the load balancer, which forwards plain HTTP to port 8080 on the CUBA servers.

I build using UberJAR and the .jar file runs on each server with the embedded Jetty web server.

Everything was working fine for more than 12 months until I upgraded to 7.x and switched to SSL offloading. I had to switch to SSL offloading because I could no longer configure Jetty to handle HTTP-to-HTTPS redirection, which to me is a mandatory requirement and should be an out-of-the-box option. I posted a solution on this site that worked with the Jetty version in 6, but it broke in 7.

I read a post on here where the user was running Kubernetes and it was suggested that more RAM was needed. I find it hard to believe that 3.75GB isn’t enough, but I haven’t tried increasing it yet as we are in the middle of end-user testing and training.

I also see the error mentioned in those posts in my log files.


Any help is greatly appreciated.


Here’s the other post I found mentioning the “Invalid JSON from server: 1/x” error.

I should also add the complete error from the app.log.

2020-08-13 05:40:07.452 WARN  [Atmosphere-Scheduler-0] org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension - 
java.nio.channels.ClosedChannelException: null
	at ~[fins-gfal-web-fins2s.jar:na]
	at ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.AbstractExtension.nextOutgoingFrame( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.PerMessageDeflateExtension.nextOutgoingFrame( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension.access$1000( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.compress( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.deflate( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension$Flusher.process( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.processing( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.iterate( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.compress.CompressExtension.outgoingFrame( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.ExtensionStack$Flusher.process( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.processing( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.util.IteratingCallback.iterate( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.outgoingFrame( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.uncheckedSendFrame( ~[fins-gfal-web-fins2s.jar:na]
	at org.eclipse.jetty.websocket.jsr356.JsrAsyncRemote.sendText( ~[fins-gfal-web-fins2s.jar:na]
	at org.atmosphere.container.version.JSR356WebSocket.write( ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write( ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write( ~[shared/:na]
	at org.atmosphere.websocket.WebSocket.write( ~[shared/:na]
	at org.atmosphere.cpr.AtmosphereResponseImpl$Stream.write( ~[shared/:na]
	at org.atmosphere.cpr.AtmosphereResponseImpl.write( ~[shared/:na]
	at org.atmosphere.interceptor.HeartbeatInterceptor$ ~[shared/:na]
	at java.base/ ~[na:na]
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker( ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$ ~[na:na]
	at java.base/ ~[na:na]

In fact I’m experiencing a similar situation.


AWS Fargate (no kubernetes)
ALB without sticky sessions, but TLS
UberJar in Docker
CUBA 7.2.x

Initially I was thinking about sticky sessions, but there is only one instance behind the LB…

Really interested in what the outcome will be.

I forgot to mention that taking the ELB out of the equation does stop the error, so I don’t believe it’s the code/build, just this configuration. The ELB is configured to rewrite any HTTP requests to HTTPS, and AWS eventually forwards 443 to 8080 and 80 to 8080. It’s a little more complex than that, as we’re using Target Groups and doing some rewriting in AWS to avoid users having to add /app onto the end of the URL.
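One thing worth checking in a setup like this (not confirmed as the cause in this thread, just a common culprit) is the load balancer's idle timeout. AWS load balancers default to 60 seconds, and an idle websocket or long-polling connection that outlives that window is closed by the LB, which can surface as exactly this kind of ClosedChannelException on the server. Assuming an ALB managed via the AWS CLI, raising it would look something like this (the ARN is a placeholder for your own load balancer):

```
# Hypothetical example: raise the ALB idle timeout so long-lived
# push connections are not dropped between Atmosphere heartbeats.
# Replace the ARN below with your actual load balancer ARN.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-lb/abc123 \
  --attributes Key=idle_timeout.timeout_seconds,Value=300
```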

So I have no doubt that it’s our AWS ELB configuration somewhere; I’m just looking for anyone else who may have encountered this error with this type of configuration.

As the error is being reported by the Jetty Atmosphere-Scheduler, there may also be a setting in the Jetty configs I’m missing.
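Since the error originates in Atmosphere's websocket transport, another avenue (an assumption on my part, not something verified in this thread) is to sidestep the websocket upgrade entirely and let CUBA's server push fall back to HTTP long polling, which proxies and load balancers generally handle more gracefully. If I recall the CUBA 7.x property name correctly, that would be a one-line change in the Web Client block:

```
# web-app.properties (Web Client block)
# Assumption: property name as in the CUBA 7.x docs for push settings.
# Uses HTTP long polling instead of websockets for server push,
# avoiding the websocket upgrade path through the load balancer.
cuba.web.pushLongPolling = true
```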


I am using sticky sessions on the ELB.

I’m still facing the problem, and it appears very often. I’ve added 4GB of RAM and am still waiting for a solution. I also doubt it is an error from the load balancer.

My setup is:
Rancher with Kubernetes (v1.17.4-rancher1-2)
Network Provider: Canal (Network Isolation Available)
Load Balancer: Ingress
CUBA 7.1.x
UberJar in Docker
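For the ingress-nginx case, two things commonly bite Vaadin/Atmosphere apps: missing session affinity across replicas and short proxy timeouts cutting off idle push connections. A sketch of the relevant annotations, assuming the stock ingress-nginx controller (the host, service name, and port are placeholders for your own):

```
# Hypothetical Ingress (v1beta1, matching Kubernetes 1.17) showing
# ingress-nginx annotations that commonly matter for Vaadin/Atmosphere:
# cookie-based session affinity plus longer proxy timeouts so idle
# websocket/long-polling connections are not dropped.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cuba-app
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cuba-web
          servicePort: 8080
```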

After many attempts at sticking with UberJAR and Jetty I decided to switch to using Tomcat.

Since switching to Tomcat, the Communication Problem has not appeared once in our Stage environment and we’ll be switching Production to Tomcat too.

Is it possible to solve this error while using UberJar? Is there any update on this? We are stuck, as we are unable to use the CI/CD environment for deployment.

It would be very helpful if we could solve this error.


We are also experiencing this bug. Same error log as already posted.



It’s an old topic, but I encountered the same problem with the UberJar running in a Docker container behind an nginx proxy in a second container. I noticed that the error occurs on websocket reconnection. Such reconnections happen regularly and most of the time everything works fine, but occasionally the reconnect fails with the mentioned error message.

Although I could not pinpoint the actual problem, in my case it helped to enforce HTTP/1.0 communication between the proxy and CUBA. By default the nginx-proxy image uses HTTP/1.1. The error is gone now without any noticeable disadvantages. For those who are interested: I put the configuration directive “proxy_http_version 1.0;” into the server-domain-specific configuration located in the “vhost.d” folder.
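Concretely, with the nginx-proxy image that per-domain file would look like this (the filename/domain is a placeholder for your own). A likely explanation for why it helps: websocket upgrades require HTTP/1.1, so forcing HTTP/1.0 makes Atmosphere fall back to long polling, avoiding the fragile websocket reconnect path entirely:

```
# vhost.d/app.example.com
# (filename matches the proxied domain; app.example.com is a placeholder)
# Forces HTTP/1.0 between nginx-proxy and the CUBA container.
# Side effect: websocket upgrades need HTTP/1.1, so Atmosphere
# falls back to long polling - which is likely why the error disappears.
proxy_http_version 1.0;
```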

It’s clearly a workaround, but it helps.