CI/CD for multiple environments

Hi
I’m building a GitLab CI/CD pipeline for a CUBA application with Docker. Depending on which branch is being built, I need to change the DB connection settings and generate new scripts for the particular database.

My assumption was to:

  1. Change DB connection parameters (I currently have them in jetty-env.xml)

  2. Run ./gradlew assembleDbScripts

  3. Run ./gradlew buildUberJar

  4. Move the generated zip to the Docker directory

  5. Unpack the zip and copy the jar into the Docker image

  6. Build the Docker image based on a simple Dockerfile:

    COPY … /opt/TitanForms
    WORKDIR /opt/TitanForms
    # chmod must be a RUN step: only the last CMD in a Dockerfile takes effect,
    # so "CMD ls -al" and "CMD chmod …" would never actually run
    RUN chmod 777 TitanForms.jar
    CMD java -Dapp.home=/opt/TitanForms-home -jar TitanForms.jar

  7. Publish the jar for the particular environment (e.g. dev, test, etc.); a rough pipeline sketch follows below.
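
Roughly, as a .gitlab-ci.yml this is what I have in mind (just a sketch; the build image, registry URL, and artifact paths are placeholders, not our real setup):

    stages:
      - build
      - docker

    build-uberjar:
      stage: build
      image: openjdk:8-jdk              # placeholder build image
      script:
        - ./gradlew assembleDbScripts
        - ./gradlew buildUberJar
        # move the generated artifacts into the Docker build context
        - cp build/distributions/*.zip docker/
      artifacts:
        paths:
          - docker/

    build-image:
      stage: docker
      image: docker:latest
      script:
        # tag the image with the branch it was built from (dev, test, stage, prod);
        # registry login omitted for brevity
        - docker build -t registry.example.com/titanforms:$CI_COMMIT_REF_NAME docker/
        - docker push registry.example.com/titanforms:$CI_COMMIT_REF_NAME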

I don’t know if this works correctly, but I assume that the JNDI connection for ./gradlew assembleDbScripts will not work properly this way… How should I do that? Does anyone have any options?

Hi @mariusz.koprowski,

Shouldn’t this task be done at “development time”? If your application (or app component) supports more than one database type, you should generate scripts for all of them using Studio and add the generated script files to SCM (Git). Later, when your GitLab CI pipeline runs, all the needed scripts for all supported databases will be available.

I see two problems here. The first is that your image should not differ depending on the environment you’ll run it in. The second is that DB settings (and other settings) should not be baked into your image either (though there is no problem having default values).

Consider using a Java system property OR an OS environment variable (see Application Properties) to define the correct settings for the environment you’ll be running your application in. Take a look at Spring Profiles as well.
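
For example, with your UberJar image you can keep a single image and inject the DB-related settings when the container starts (just a sketch; the -D property names below are placeholders for whatever your configuration actually reads):

    # one image for every environment; settings are passed at run time,
    # overriding the defaults baked into the image
    docker run registry.example.com/titanforms:latest \
      java -Dapp.home=/opt/TitanForms-home \
           -Ddb.host=db-test.internal \
           -Ddb.name=titanforms_test \
           -jar TitanForms.jar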

Regards,
Peterson.

Hi @peterson.br,
The idea behind this is as follows.

I have 4 environments set up for the same application:

  1. Development
  2. Test
  3. Stage - Replica of production for deployment purposes
  4. Production

All of them run in Docker and K8s, with the DB on SQL Server.
What I want to achieve is to generate an image for each environment with the proper changes in the application, so the images will always be different, as the application in each environment will be at a different stage of development. Because of that, the DB will be at a different stage of development too. That’s why I wanted to generate the DB change scripts for each database in the pipelines and apply them using the UberJar on start of the image… Now the question is how to pass the DB connection settings during the build in order to generate the scripts… Is that possible, or should a developer do it each time and commit scripts for each environment?

And one more thing: I have all data sources set up as JNDI.
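
For context, the relevant part of my jetty-env.xml looks roughly like this (the host, database name, and credentials below are placeholders):

    <New id="CubaDS" class="org.eclipse.jetty.plus.jndi.Resource">
      <Arg></Arg>
      <Arg>jdbc/CubaDS</Arg>
      <Arg>
        <New class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
          <Set name="URL">jdbc:sqlserver://db-host:1433;databaseName=titanforms</Set>
          <Set name="user">app_user</Set>
          <Set name="password">changeit</Set>
        </New>
      </Arg>
    </New>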

Hi,

The “Generate database scripts” action is not the same as “gradlew assembleDbScripts”.
“Generate database scripts” is implemented in the UI of CUBA Studio and can’t be called from the command line in a CI/CD environment.
“gradlew assembleDbScripts” just assembles a zip archive from the script files already existing in the project.

So it’s not possible to achieve your goal of automatically generating DB update scripts in CI/CD.

The workflow suggested by CUBA Studio implies that database update scripts are driven entirely by changes in the data model, and the update scripts are the same in every environment:
Created a new entity -> an update script with “create table” needs to be run everywhere.
Added an attribute to an existing entity -> an update script with “add column” is created.
And so on.
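
For example, the scripts Studio generates are plain files under the project tree and get committed like any other source, so they are available to every pipeline run (the file names below are hypothetical):

    modules/core/db/update/mssql/
        200915-1-createTableCustomer.sql   -- "create table" after adding an entity
        201002-1-addColumnStatus.sql       -- "add column" after adding an attribute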

Hi @AlexBudarov,
So just to confirm: before each automatic build on each environment, a developer has to manually generate database scripts and check them into Git. Correct?

Generating database migration scripts can be done only manually, from Studio.
Regarding the question of how to store these scripts for later usage - I don’t know; maybe it’s a better idea to build the UberJar with these scripts right from the IDE.

Honestly, I don’t understand why you need such an approach - why do you need to re-generate database migration scripts every time for many databases? Even for products with multiple deployments, people usually update their databases with one set of scripts.
If you want to clean up junk scripts after the deployment phase, you can download a dump of the DB schema to your local environment and re-generate the update scripts just once by comparing the release version of the product against the reference DB schema.

Note that there was another topic describing a similar approach: Is there a way to make CUBA reset its database scripts entirely from the model? - CUBA.Platform