Production Requirements questions

Hi all,

I’m looking at production rollout options for a SaaS system I’m working on. It handles very sensitive data, so I’m thinking of each client having a separate database and each client running on their own instance of CUBA.

A client will have anywhere from a single user up to 100 users.

So what I’m wondering is: what would be the minimum-spec server, preferably in AWS, that could run CUBA for, say, a single user who is pretty active? I know there are a lot of variables, but I’m after opinions and considerations.

Here is a list of the EC2 instance types.

https://aws.amazon.com/ec2/instance-types/

I’m secretly hoping that perhaps a t2.nano will suffice for a single user.

Unfortunately, no one will give you guidance on minimal resources for your application; it depends heavily on your business logic.

Usually we use the following rule of thumb for our ECM systems: 15–20 MB of RAM per user session on the Web Client, plus a minimum 300 MB heap for the system. Yes, you can start from a 512 MB instance, but make sure the domain and specifics of your app fit within these limits. These are not hard numbers; do not rely on them, and test your solution first.
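To make that rule of thumb concrete, here is a quick sketch of the arithmetic (the 15–20 MB per session and 300 MB base heap are the figures from the post above, not official CUBA guarantees):

```python
# Rough heap-size estimate from the rule of thumb above:
# 300 MB base heap + ~20 MB per concurrent Web Client session.
# These figures are assumptions taken from the post, not guarantees.

def estimate_heap_mb(sessions, per_session_mb=20, base_mb=300):
    """Return an estimated Tomcat heap size in MB for a number of sessions."""
    return base_mb + sessions * per_session_mb

# A single active user stays close to the 512 MB starting point;
# 100 users pushes well past 2 GB.
print(estimate_heap_mb(1))    # 320
print(estimate_heap_mb(10))   # 500
print(estimate_heap_mb(100))  # 2300
```

So a single-user client fits comfortably under the suggested 512 MB heap, while the 100-user end of the range clearly needs a much larger instance.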

Thanks @artamonov, that is a good starting point, and at least I know it isn’t outside the realm of possibility to run on a small instance.

I’m afraid you won’t fit into a nano instance, even for a single-user client. But please share your experience.

Hi @knstvk

Thanks for your reply. Could you just briefly explain why it won’t fit in a nano instance?

Thanks

Hi John,

It’s from my personal experience with a little single-user app deployed as a WAR on Tomcat. It was too slow on AWS, so I ended up deploying it on a Linode 1 GB instance, where Tomcat with a 512 MB heap works just great. So if you don’t use AWS-specific services, consider alternative providers.

Thanks @knstvk

I have two problems: I need to keep it cheap to make it work, and here in Australia we don’t have many options, unlike the rest of the world.

Thanks
John

Hi,

If you don’t want to be directly constrained by the instance sizes, you can abstract them away by choosing an infrastructure technology that allows it. Take an AWS ECS cluster, for example: you define multiple EC2 hosts, but the amount of RAM/CPU is defined completely flexibly by the containers.

If you want to go even one step further and not be constrained by your own choice of EC2 hosts, you can use something like AWS Fargate (with Kubernetes or without), where you don’t even have to define the hosts (Azure has something similar called Azure Container Instances). Instead, your abstraction is the container.
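As a sketch of what container-level sizing looks like in practice, here is the shape of an ECS/Fargate task definition where CPU/RAM are declared per container rather than per host. All names, images, and values below are hypothetical illustrations, not recommendations:

```python
# Hypothetical ECS task definition: CPU/RAM are declared on the container,
# independent of any EC2 host size (with Fargate there are no hosts at all).
# Family name, image URL, and sizes are illustrative assumptions.

task_definition = {
    "family": "cuba-client-acme",            # hypothetical per-client task family
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                             # 0.25 vCPU, the smallest Fargate size
    "memory": "512",                          # MB, in line with the heap estimate above
    "containerDefinitions": [
        {
            "name": "cuba-app",
            "image": "registry.example.com/cuba-app:latest",  # hypothetical image
            "essential": True,
            "portMappings": [{"containerPort": 8080}],
        }
    ],
}

# With boto3 this dict could be passed to
# ecs_client.register_task_definition(**task_definition);
# here we only show the shape without calling AWS.
print(task_definition["memory"])  # 512
```

The point is that per-client sizing becomes a two-line change in the task definition instead of a choice between fixed EC2 instance types.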

Besides that, it is a little contradictory to say on the one hand that you want to increase security by giving every customer their own infrastructure, and on the other hand to care whether it is a t2.nano or a t2.micro (and about the “small” amount of money that saves). If you want to optimize at that scale, you’ll probably be better off putting different customers on the same infrastructure (a single DB, for example) and investing more in other parts of the security topic.

Additionally, the DB will be the most expensive part anyway (if you really decide to run an explicit DB server for every customer), because losing a DB is probably more likely than a data breach from one tenant to another. So, taking a look at RDS: if you really care about HA and data security, you need multi-AZ support for HA, regular backups, automatic security patches, etc.

Therefore I think there are better ways of increasing security (probably at lower cost) than completely isolating the infrastructure for every customer.

Bye
Mario

Hi Mario. @mario

Thanks for the detailed reply. I am certainly open to getting the best solution, whatever that may be. I am, however, constrained by costs to a degree. There are other products on top of CUBA and infrastructure I am making use of that need to be added into the total.

Firstly I don’t have much experience with cloud infrastructure. We don’t have a lot of choice here in Australia so I was opting for AWS.

I was intending to use AWS Aurora. I’ve had a go and it works fine and is wire-compatible with PostgreSQL. So I’m happy to have a database per customer; I’d even have a limited number shared between customers.

I like the idea of containers and the management of them appears easier. I’ve read your articles about AWS and they were very interesting. AWS Fargate seems right up my street, the less I have to deal with the better at the moment.

Due to the sensitivity of the data clients can get quite picky about security and data sovereignty.

I feel like I’m a bit stuck between a rock and a hard place at the moment. I need to keep costs down per customer yet deliver enough performance. I don’t know yet whether the multi-tenancy component separates the data well enough, and I don’t seem to be able to run multiple databases from the same web/middleware servers.

Any ideas, tips would be more than greatly appreciated. :grin:

John

What do you mean by that? An RDS instance per customer? Or a DB schema in a single (or a few) RDS Aurora clusters? The fixed costs for an RDS cluster are quite high, depending on how much HA you want built into it; much, much higher than the difference between a t2.nano and a t2.micro.

Fargate and EKS are not in the Sydney region, unfortunately, so you would need to either do the heavy lifting on your own or use ECS. But with auto scaling of EC2 instances it doesn’t really make a huge difference…

I would really consider a single-DB, tenant-based approach, because you can make that one app and DB cluster much more HA and secure, since the fixed costs will be shared by multiple customers. Or at least group the customers by “plan” and, for each plan, vary the strategy for how the data is isolated based on what they are paying… :slight_smile:
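To illustrate what single-DB tenant isolation looks like at the query level, here is a minimal sketch. The table and column names are made up, and SQLite stands in for Postgres/Aurora; the idea is just that every table carries a tenant key and every query is scoped to it:

```python
# Sketch of tenant isolation in a shared database: every table carries a
# tenant_id column, and every read/write is scoped to the caller's tenant.
# Table and column names are hypothetical; SQLite is a stand-in for Postgres.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (tenant_id TEXT, name TEXT)")
conn.execute("INSERT INTO patient VALUES ('clinic_a', 'Alice')")
conn.execute("INSERT INTO patient VALUES ('clinic_b', 'Bob')")

def patients_for(tenant_id):
    # The tenant filter is applied on every access path, derived from the
    # authenticated session rather than from user input.
    rows = conn.execute(
        "SELECT name FROM patient WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [name for (name,) in rows]

print(patients_for("clinic_a"))  # ['Alice']
print(patients_for("clinic_b"))  # ['Bob']
```

In Postgres/Aurora the same idea can be enforced in the database itself with row-level security policies, so a missed filter in application code cannot leak another tenant’s rows.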

Bye
Mario

I mean that one AWS Aurora instance can host multiple databases. I realize the DB setup will be a bigger hit and that economies of scale are needed to bring the per-user cost down.
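For the database-per-customer-on-one-cluster variant, the per-tenant plumbing is small: one `CREATE DATABASE` per customer and one connection URL per CUBA instance. A sketch, with a hypothetical cluster endpoint and naming scheme:

```python
# Sketch: one Aurora (PostgreSQL-compatible) cluster hosting one database
# per customer. The endpoint and the app_<customer> naming convention are
# hypothetical assumptions for illustration.

CLUSTER_ENDPOINT = "example-cluster.cluster-abc.ap-southeast-2.rds.amazonaws.com"

def provision_sql(customer):
    """SQL statements to create an isolated database (with its own owner
    role) for one customer on the shared cluster."""
    db = f"app_{customer}"
    return [
        f"CREATE ROLE {db}_owner LOGIN;",
        f"CREATE DATABASE {db} OWNER {db}_owner;",
    ]

def jdbc_url(customer):
    """Per-customer JDBC URL that a dedicated CUBA instance would be
    configured with."""
    return f"jdbc:postgresql://{CLUSTER_ENDPOINT}:5432/app_{customer}"

print(provision_sql("acme")[1])
print(jdbc_url("acme"))
```

Giving each database its own owner role means a compromised app instance for one customer cannot even connect to another customer’s database, while the cluster’s fixed HA cost is still shared.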

I might have to go back and look more at the multi-tenancy add-on that is being developed.

Unfortunately, here in Australia we are kind of last on the list for things, and the options here suck. Though I’m targeting multiple countries, so I’m more open elsewhere. TBH even Singapore might be a goer from here, which may open up more options.

I’m hoping to target the larger companies who should have a little more of the “shut up and take my money” attitude. :slight_smile:

You might also consider Aurora Serverless, which, just like Fargate, eliminates thinking about DB servers at all. You will also be charged on a per-use basis (just as with Lambda), so for single-user / few-user DBs you might push the DB costs down quite dramatically, depending on the usage patterns. For those single-user DBs that might actually work; for low-latency requirements AWS suggests sticking to regular Aurora.
But it is in preview mode only right now.
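The per-use billing point is easy to see with a back-of-the-envelope comparison. The hourly rates below are illustrative placeholders, not current AWS prices:

```python
# Back-of-the-envelope: fixed DB instance vs. per-use (serverless) billing.
# Both hourly rates are hypothetical placeholders, not real AWS prices.

HOURS_PER_MONTH = 730

def fixed_instance_cost(hourly_rate):
    # A provisioned instance bills every hour, used or not.
    return hourly_rate * HOURS_PER_MONTH

def serverless_cost(hourly_rate, active_hours):
    # Serverless bills only for the hours the database is actually active.
    return hourly_rate * active_hours

# A single user active ~2 h/day: per-use wins even at a higher hourly rate.
print(round(fixed_instance_cost(0.10), 2))      # 73.0
print(round(serverless_cost(0.15, 2 * 30), 2))  # 9.0
```

For a mostly idle single-user tenant the gap is large; for a busy 100-user tenant the active hours approach the whole month and the advantage disappears, which is why usage patterns matter.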

Yeah that looks interesting too. I was trying to get a preview a while back but no luck. That is one service that may make it to Australia pretty quickly as Aurora did.

Hi,

This is something you might be interested in reading:

Cloud architecture: The end of multi-tenancy?
https://www.linkedin.com/pulse/architecture-constraints-end-multi-tenancy-gregor-hohpe

Bye
Mario

Thanks @mario

I agree that having separate systems for each customer is nice, but it still boils down to cost, and in Australia hosting is more expensive than elsewhere in the world.