In reviewing the framework code, it appears the LocalServiceProxy still serializes the parameters and the result. Why? If the call executes in the same JVM, serialization should be avoidable - presumably with the caveat that the parameters must not be mutated.
The current scenario seems to adversely affect performance - since almost all ‘client tier’ operations end up hitting a DataSource instance.
The documentation shows various configurations, but the one that is missing, and which I would assume performs best, is:
==> host1, running tomcat with client tier, and middle tier
==> host2, running tomcat with client tier, and middle tier
===> database server
In terms of single points of failure this is no different from the current environment: if a web tier server goes down, all of its users need to log into a different instance, since user sessions are not replicated - and the same applies to the configuration shown above.
With a proper LocalServiceProxy - essentially just a method reference plus setup of the security context - performance should improve dramatically.
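To make the idea concrete, here is a minimal sketch (not CUBA's actual implementation - the class, `DataService` interface, and security-context handling are all hypothetical stand-ins) of a local proxy that sets up a caller context and then dispatches directly to the target bean, passing arguments by reference with no serialization:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical sketch of a serialization-free local service proxy.
// The proxy installs a security token for the call, then invokes the
// target directly: plain method dispatch, arguments passed by reference.
public class DirectLocalProxy {

    // Stand-in for a middle-tier service interface.
    public interface DataService {
        String load(String query);
    }

    // Stand-in for the real security context mechanism.
    public static final ThreadLocal<String> SECURITY_CONTEXT = new ThreadLocal<>();

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> iface, T target, String securityToken) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (Object p, Method m, Object[] args) -> {
                    // Set up the security context, call through, clean up.
                    SECURITY_CONTEXT.set(securityToken);
                    try {
                        return m.invoke(target, args);
                    } finally {
                        SECURITY_CONTEXT.remove();
                    }
                });
    }

    public static void main(String[] args) {
        DataService impl = query -> "result for " + query;
        DataService proxy = create(DataService.class, impl, "user-token");
        System.out.println(proxy.load("select e from app$Entity e"));
        // prints: result for select e from app$Entity e
    }
}
```

The entire per-call overhead here is one reflective invocation and a ThreadLocal write, versus serializing the argument and result graphs.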
Do you have any profiling results showing that significant CPU time is spent in LocalServiceProxy code, or that it adds significant latency to DataService calls?
I assume we are working with a database that is probably located on another machine, so even the fastest SQL query means a network call. Any network call is many times slower than serializing a list of entities in memory.
In my experience, most of the CPU time on the application server (in a CUBA app) is spent in ORM (EclipseLink) code.
That is probably correct in most cases, but there are plenty of services in the middle tier that can be memory-only (cache lookups) and/or very high speed (messaging subsystems). Serialization (Java or any other) is always expensive relative to simple argument passing.
But the real point is, the framework has special handling for the 'web and middle tier in the same JVM' case - the LocalServiceProxy. Why spend development effort on this unless increased efficiency was the goal? And if that is the goal, the unnecessary serialization hinders it.
I am sure I can write a synthetic benchmark that shows serialization consuming excessive CPU - just write a simple service that is passed a very large array, does a simple transformation, and returns it. The serialization will be the bulk of the CPU time, but that is somewhat beside the point.
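A sketch of that benchmark, under the assumption that the re-serializing proxy is roughly equivalent to a Java-serialization round trip on both the argument and the result (the class and method names here are illustrative, not framework code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Synthetic benchmark: a "service" that transforms a large array,
// called once directly and once with a serialization round trip on
// the argument and result. Timings are machine-dependent.
public class SerializationOverhead {

    // The trivial "service" transformation.
    public static long[] transform(long[] input) {
        long[] out = new long[input.length];
        for (int i = 0; i < input.length; i++)
            out[i] = input[i] * 2 + 1;
        return out;
    }

    // Java serialization round trip, i.e. a deep copy through bytes.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T roundTrip(T obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (T) ois.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];

        long t0 = System.nanoTime();
        long[] direct = transform(data);
        long t1 = System.nanoTime();

        // Simulated re-serializing proxy: copy argument, call, copy result.
        long[] viaSerialization = roundTrip(transform(roundTrip(data)));
        long t2 = System.nanoTime();

        System.out.printf("direct: %d us, with serialization: %d us%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000);
    }
}
```

For a payload like this, the two serialization round trips dominate the trivial transformation, which is the point being made above.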
If the LocalServiceProxy exists solely to bypass the network layer, the OS already does that when the source and destination endpoints are on the same machine (loopback).
I am sure my background in the low-latency space skews my opinion here, but at the end of the day, why be less efficient when you don't have to? If it mucked up the clarity of the code, that would be a valid reason for not doing it, but the code now is more complex than it needs to be (with the additional marker annotation BypassSerialization, etc.).
I am aware of at least two reasons for re-serialization:
1. It makes a deep clone of the service arguments, to avoid side effects on the UI if a middleware service changes the argument entities. Without cloning, changing argument entities on the middleware would trigger the datasource listeners attached to those entities, which would mark the datasource as modified, fire UI events, etc. That is unexpected behavior for most users.
2. Re-serialization re-creates the entities as classes bound to the other classloader. The “app-core” and “app” webapps have separate classloaders, and each has its own copy of app-global.jar (and other jars), so the app server holds two loaded classes for every entity.
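Reason 1 can be illustrated with a small self-contained example (not CUBA code - `Entity`, `addTag`, and `deepClone` are hypothetical): a service that mutates its argument changes the caller's object when called by reference, while a serialization round trip acts as a deep clone and isolates the caller.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Why cloning arguments matters: without it, middleware mutations
// are visible to the UI-side caller.
public class CloneOnCall {

    public static class Entity implements Serializable {
        public List<String> tags = new ArrayList<>();
    }

    // A middleware "service" that modifies the entity it receives.
    public static void addTag(Entity e) {
        e.tags.add("processed");
    }

    // Deep clone via a Java serialization round trip.
    public static Entity deepClone(Entity e) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(e);
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (Entity) ois.readObject();
            }
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        Entity byReference = new Entity();
        addTag(byReference);         // by reference: caller's entity mutated
        System.out.println(byReference.tags);   // prints [processed]

        Entity onUi = new Entity();
        addTag(deepClone(onUi));     // only the clone is mutated
        System.out.println(onUi.tags);          // prints []
    }
}
```

A serialization-free proxy would need some other convention here - e.g. treating arguments as effectively immutable, as suggested in the original post.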
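Reason 2 can also be demonstrated in isolation. A class defined by two different classloaders yields two distinct `Class` objects, so an instance created through one loader is not an instance of the "same" class in the other. This sketch (with a hypothetical `Entity` standing in for a class from app-global.jar) uses a child-first loader that re-defines the class from its bytecode instead of delegating to the parent:

```java
import java.io.InputStream;

// Two classloaders, two copies of the "same" class: instances from one
// cannot be cast in the other, which is why re-serialization is used to
// re-create entities against the receiving side's classloader.
public class TwoLoaders {

    public static class Entity {}  // stands in for an entity in app-global.jar

    // Child-first loader that defines its own copy of Entity from the
    // parent's .class resource instead of delegating.
    public static class IsolatingLoader extends ClassLoader {
        public IsolatingLoader() { super(TwoLoaders.class.getClassLoader()); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (name.equals(Entity.class.getName())) {
                try (InputStream in = getParent().getResourceAsStream(
                        name.replace('.', '/') + ".class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> a = new IsolatingLoader().loadClass(Entity.class.getName());
        Class<?> b = new IsolatingLoader().loadClass(Entity.class.getName());
        System.out.println(a == b);             // prints false
        System.out.println(a == Entity.class);  // prints false
        // An instance from loader 'a' is not an Entity in our loader's view:
        System.out.println(Entity.class.isInstance(
                a.getDeclaredConstructor().newInstance()));  // prints false
    }
}
```

A serialization-free local call would have to avoid this by sharing a single copy of the entity classes (e.g. from a common parent classloader) between the "app" and "app-core" webapps.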