Hi
I’ll share my personal advice on this matter, based on my past experience with other platforms/frameworks, with the B2B integration issues inherent to them, and with progressive migration from legacy systems to new ones.
First of all, this is a dangerous path… Unfortunately, I have yet to see a project succeed when taking this approach. I mean a big project, one where the only viable approach is progressive migration from the legacy system to a new platform. I’ll elaborate on this a bit more at the end of this post.
But the most important point is: never, ever share the same database tables by giving read/write access to multiple applications at the same time.
You should always have a single point of truth for your business logic, where the business rules/constraints are enforced for every consumer involved. This can take any form: REST services, SOAP/XML services, a CUBA middleware, or (please don’t do this) a bunch of stored procedures in your DB.
Additionally, you should try to avoid going against what your target platform offers you.
So, let CUBA generate the DDL for you, and use other means to achieve interoperability with existing data and consumers. Every mature RDBMS out there gives you plenty of options in this regard, and MS SQL Server is among them.
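For example, here is a minimal sketch of this idea (all object names are hypothetical, invented for illustration): a synonym lets a legacy consumer keep using its old table name while the data lives in the CUBA-generated table.

```sql
-- Hypothetical names: CUBA generated MYAPP_CUSTOMER following its own
-- convention, while a legacy tool still queries dbo.TBL_CUST.
-- The synonym makes the old name resolve to the new table.
CREATE SYNONYM dbo.TBL_CUST FOR dbo.MYAPP_CUSTOMER;
```

Note that a synonym only aliases the object name; if a consumer also expects the old column names, a view is the right tool (see the sketch a few paragraphs below).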
This is definitely an interesting option to add to Studio, but for the time being you could manually edit both the table name and the column names when creating them. I know this is less convenient, but it works, and it must be done only once, at entity creation. In the end, you won’t spend much time on entity creation compared to the other programming tasks.
Again, I suggest you stick with the default naming convention and create the new schema from scratch.
What really matters here is consistency, not the actual convention used. And you have plenty of options in your RDBMS if you need alternate names (views, stored procedures, synonyms, …).
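As a sketch of the view option (again, every table and column name here is made up): a view can present a CUBA-generated table under the legacy naming convention, column names included.

```sql
-- Hypothetical: legacy reports expect TBL_ORD with columns ORD_NO and ORD_DATE,
-- while the CUBA entity maps to MYAPP_ORDER with NUMBER_ and ORDER_DATE.
CREATE VIEW dbo.TBL_ORD AS
SELECT NUMBER_    AS ORD_NO,
       ORDER_DATE AS ORD_DATE
FROM dbo.MYAPP_ORDER;
```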
Now, some more words on the progressive approach I was referring to before.
By progressive I mean splitting the legacy monolith into smaller units, developing them one at a time (or several at a time, if you have a big enough team), and putting them live alongside the legacy application, ensuring proper interoperability.
Here is an example process that outlines the approach (please note it is only a high-level view; the actual steps can be more involved):
- Take the legacy system, and spend some (possibly a lot of) time analysing it with the goal of better modularisation. The outcome should be at least a diagram that decomposes the system into smaller cooperating modules that can be developed and tested as discrete units.
- First take a “module” that has as few dependants as possible, that is, a module in the outer area of the above diagram (ideally it should have zero dependants, i.e. no “dependency arrows” pointing to it in the diagram)
- MSSQL specific: create a NEW schema in your target database for the new application (this is important), OR create a completely new DB for it (the choice depends on many factors, outside the scope of this post); see the sketch after this list
- Create your application module in CUBA, but carefully choose the root package and namespace, because changing them afterwards is really, really hard…
- Model your entities from scratch, without much regard for the old system. Pretend you are starting from scratch: spend time analysing and documenting your requirements first, then model the entities accordingly.
- Pay particular attention to the dependencies of this new module. If it depends on other modules not yet developed, you must decide how to consume the legacy data in your new code. Using read-only views mapped to the legacy data is a viable option (see the sketch below).
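To make the schema and read-only-view points above more concrete, here is a minimal T-SQL sketch; all names (the app schema, TBL_SUPPLIER, cuba_app_user) are assumptions for illustration, not CUBA conventions. On the CUBA side you would then map a read-only entity to the view.

```sql
-- Hypothetical names throughout.
-- 1. A dedicated schema keeps the new module's tables apart from the legacy dbo objects.
CREATE SCHEMA app AUTHORIZATION dbo;
GO

-- 2. Expose the legacy data the new module depends on as a view,
--    to be mapped in CUBA as a read-only entity.
CREATE VIEW app.LEGACY_SUPPLIER_V AS
SELECT SUP_ID   AS ID,
       SUP_NAME AS NAME
FROM dbo.TBL_SUPPLIER;
GO

-- 3. Make the read-only intent explicit for the application's DB user.
DENY INSERT, UPDATE, DELETE ON app.LEGACY_SUPPLIER_V TO cuba_app_user;
```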
Now for the hardest part - the integration with existing consumers (the following process can run in parallel with the previous one):
- Document all the consumers (clients) of the legacy application: the application itself, external tools, reporting services, external parties (if you have any B2B online or batch data migrations)
- For each of the above consumers, spend some time documenting how they interact with the legacy database(s). The most important property to document is whether they are readers, writers or both (r/w access)
- For the readers (for example reporting services) the solution is very straightforward: develop appropriate views or stored procedures that mimic the old schema, using the new tables as the underlying source. A harder but more powerful option in complex cases is to develop triggers and/or batch jobs that copy data from the new schema to the legacy one (this can be done selectively, only where a view or SP would be too complex for a particular object); both are sketched after this list
- For the writers… well, you should exclude them, because maintaining a two-way sync at the data level is really hard. Not impossible, but it requires effort. If the legacy consumers are only GUI applications, it’s better to invest some time making the data that is now produced by the new component read-only in them.
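As an illustration of the reader-side techniques (every name is hypothetical, and you would pick ONE option per legacy object): a compatibility view that mimics the old table over the new one, and a trigger for the cases where a view is not enough.

```sql
-- Hypothetical names: legacy consumers read dbo.TBL_INV;
-- the new module owns app.MYAPP_INVOICE.

-- Option 1: replace the old table with a compatibility view over the new one.
CREATE VIEW dbo.TBL_INV AS
SELECT ID      AS INV_ID,
       NUMBER_ AS INV_NO,
       TOTAL   AS INV_TOTAL
FROM app.MYAPP_INVOICE;
GO

-- Option 2: keep the physical legacy table and feed it from a trigger,
-- for consumers where a single view would be too complex or too slow.
CREATE TRIGGER app.TRG_INVOICE_TO_LEGACY
ON app.MYAPP_INVOICE
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.TBL_INV (INV_ID, INV_NO, INV_TOTAL)
    SELECT i.ID, i.NUMBER_, i.TOTAL
    FROM inserted AS i;
END;
GO
```

(A real setup would also need UPDATE/DELETE triggers and some initial backfill; the sketch only shows the idea.)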
Obviously this approach (keeping both the new and the legacy app in production) has several drawbacks for the users, first of all the need to use two different applications in their day-to-day work.
Hope this helps you in your future choices
Bye
Paolo