CUBA File Storage - Amazon S3 upload error - region setting


I have a question regarding the AWS S3 storage option for CUBA's file storage. I followed the instructions from here. In local mode (running via Studio) everything worked fine.
In a deployment scenario with app-core.war and app.war running in two separate Tomcats, an error occurs while uploading a file to S3 (see below):

17:10:37.047 ERROR c.h.c.c.c.FileUploadController - Unable to upload file I/O error: Could not save file 0ea0d0ea-8616-5097-f609-6feb822866e4.png. <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>********</AWSAccessKeyId><StringToSign>********</CanonicalRequest><CanonicalRequestBytes>********</CanonicalRequestBytes><RequestId>FD5A06F41AAB8B17</RequestId><HostId>********</HostId></Error>
	at ~[cuba-core-6.2.3.jar:6.2.3]

The configuration is identical in both setups, so I would guess that is not the problem. The desired region is “eu-west-1”. Since I have no clue why this works in the Studio environment but not with two WARs in two Tomcats within two Docker containers, I just searched through the code.

I came across one thing; I have no idea whether it helps, but perhaps you can give me an answer to this:

In the method getAmazonUrl of the class AmazonS3FileStorage, a comment says that it uses the S3 path style for the URL: “the region-specific endpoint to the target object expressed in path style”. But the AWS docs say that when using path style, a region-specific endpoint has to be used (like… ) as described here:

Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is

Contrary to what the comment says, the URL creation looks more like virtual-hosted style, if I understand the implementation correctly:

return new URL(String.format("", amazonS3Config.getBucket(), resolveFileName(fileDescr)));

As I said, I'm not even sure whether this has anything to do with the actual error; it was just the first obvious straw to clutch at :)
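To illustrate the difference the AWS docs describe, here is a small sketch of the two addressing styles (the bucket name, object key, and endpoint format are just examples based on the quoted documentation, not the actual platform code):

```java
// Sketch of the two S3 addressing styles for a bucket in eu-west-1.
// "mybucket" and "puppy.jpg" are hypothetical, taken from the AWS docs example.
public class S3UrlStyles {
    public static void main(String[] args) {
        String bucket = "mybucket";
        String key = "puppy.jpg";
        String region = "eu-west-1";

        // Path-style: region-specific endpoint, bucket appears in the path
        String pathStyle = String.format(
                "https://s3-%s.amazonaws.com/%s/%s", region, bucket, key);

        // Virtual-hosted-style: bucket is part of the host name
        String virtualHosted = String.format(
                "https://%s.s3.amazonaws.com/%s", bucket, key);

        System.out.println(pathStyle);      // https://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg
        System.out.println(virtualHosted);  // https://mybucket.s3.amazonaws.com/puppy.jpg
    }
}
```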

Perhaps you can give me a little hint about the implementation, or even about the error that occurs only in “production” mode but not in Studio mode.



Hi Mario,
What type of Amazon Machine Image (AMI) do you use for the Tomcats?
I will try to reproduce the error.

Hi Andrey,

thanks for your response. I'm using ECS for deployment, so the AMI is the “Amazon ECS-Optimized Amazon Linux AMI”, i.e. Amazon Linux. The Docker container is based on the default tomcat:8-jre8 image from Docker Hub. It mainly looks like this:

### Dockerfile
FROM tomcat:8-jre8
RUN rm -rf /usr/local/tomcat/webapps/*
ADD container-files/app-core.war /usr/local/tomcat/webapps/ROOT.war

ADD container-files/context.xml /usr/local/tomcat/conf/
ADD container-files/ /usr/local/tomcat/conf/app-core/
ADD container-files/jgroups.xml /opt/cuba_home/app-core/conf/
ADD container-files/logback.xml /opt/cuba_home/

ENV CATALINA_OPTS="${CATALINA_OPTS} -Dlogback.configurationFile=/opt/cuba_home/logback.xml"
ENV CATALINA_OPTS="${CATALINA_OPTS} -Dapp.home=/opt/cuba_home"
ADD /usr/local/tomcat/lib/postgresql.jar


In order to reproduce the error you can do the following. I did the same with my example application, cuba-ordermanagement, as with the real one, and the error still occurs. Here are the steps to see it:

  1. Install docker & docker compose
  2. git clone
  3. git checkout cuba-on-aws-ecs (the branch is for my newly created blog post, but it will show the problem)
  4. change the spring.xml so that the “cuba_FileStorage” bean is using the S3 bean
  5. deployment/ cuba-ordermanagement app-core latest
  6. deployment/ cuba-ordermanagement app latest
  7. cd deployment
  8. docker-compose up -d
  9. the web app is up and running on port 8080 (localhost, or the docker-machine IP if you are on Mac / Windows)
  10. change the s3 settings in “application properties” UI: region: eu-west-1, bucket: yourBucketName, etc.
  11. try to upload a file and get the message: “unable to save file” on the UI
  12. docker logs deployment_app-core_1

It has nothing to do with AWS as a hosting provider, nor with ECS. I see only two possible problem areas. The first is the fact that the apps run in Docker containers. The second is that they are deployed as two WAR files in two Tomcats, and therefore without local service invocation.

I could try it without Docker, but I just didn't have the time to set up a manual Tomcat installation. If nothing else works, I'll give that a try, so that my problems don't become yours just because I use Docker.


Hi Mario,
Thanks for the example. The problem is reproduced in local mode (running via Studio) if you set the property cuba.useLocalServiceInvocation to false. We will fix it in the next platform release.

Hi Andrey,

this sounds great. Not that it's a bug, but that it is not my personal programming / Docker-related mistake… Can you tell me what the problem is? Since I dug into it so much, I'm just interested in the cause. Is it related to the actual binary content that has to go over the wire in a non-local service invocation environment? I thought about something like UTF-8 problems, but I wasn't really sure.

Will this be part of the next bugfix release, or one of the later releases? Do you have a clue when it will happen? I'm just curious due to a timing issue on my side. Otherwise I might switch the implementation to the new EFS offering from AWS, or think about another solution.


Hi Mario,
The problem is an incorrect size of the messages sent to Amazon S3. We calculate the message size based on the file size and a fixed chunk size (the file is uploaded in chunks). However, the actual chunk size is not the same for each chunk, because a single read from an InputStream does not guarantee that it will read as many bytes as requested. So the actual message size differs from the declared one, and Amazon S3 rejects the request.
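In code, the pitfall looks roughly like this (a simplified sketch, not the actual platform code; the TrickleStream class is an artificial stream that demonstrates legal short reads):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkRead {

    // A stream that deliberately returns at most 3 bytes per read() call.
    // This is legal behaviour for any InputStream implementation.
    static class TrickleStream extends InputStream {
        private final InputStream delegate;
        TrickleStream(byte[] data) { this.delegate = new ByteArrayInputStream(data); }
        @Override public int read() throws IOException { return delegate.read(); }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            return delegate.read(b, off, Math.min(len, 3));
        }
    }

    // Correct pattern: loop until the chunk buffer is full or the stream ends
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break; // end of stream
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10];

        // Buggy pattern: a single read() call, assumed to fill the whole chunk
        int single = new TrickleStream(data).read(new byte[8]);
        System.out.println(single);   // 3 — short read, chunk smaller than declared

        // Fixed pattern: keep reading until the chunk buffer is full
        int full = readFully(new TrickleStream(data), new byte[8]);
        System.out.println(full);     // 8 — full chunk, size matches the declared one
    }
}
```

If the declared chunk size (used to compute the signed message size) is 8 but only 3 bytes actually arrive, the signature no longer matches, which is consistent with the SignatureDoesNotMatch error above.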

We will fix it in the next bugfix release on 12 August.

Hi Mario,
the problem is fixed in platform version 6.2.5.

Hi Andrey,

thanks for the hint and the prompt fix!

