Thursday, January 3, 2008

Java RTS 2.0 Update 1 Released

Java RTS 2.0 Update 1 (90-day evaluation copy) is available for download. (Link: Download Java RTS)

The version available on Sun's site is for Solaris only (both x86 and SPARC). IBM offers another version that runs on Linux with open-source real-time extensions. There is no version available for Windows, even with third-party real-time extensions. This update release provides support for Solaris 10 u4 and J2SE 5.0 u13 libraries, as well as various security enhancements.

Java RTS features are delivered via a separate real-time enabled version of the Java VM. This VM can process both traditional Java SE components and Java RTS components. With Java RTS, real-time components and non-real-time components can coexist and share data on a single system.

This release is compliant with the Real-Time Specification for Java (RTSJ). In fact, it is the first commercial implementation of the RTSJ.

Here is more information on Java RTS (taken verbatim from Sun's documentation):

Here are the important features and benefits offered by the RTSJ and Sun's Java RTS implementation:

  • New Real-Time Threads, Scheduling, and Synchronization

    The RTSJ introduces two new thread types: real-time threads and no-heap real-time threads (threads that cannot be interrupted by garbage collection). These threads offer more precise scheduling than standard Java threads. They have 28 levels of priority, and unlike standard Java, their priority is strictly enforced. (A minimal sketch of a periodic real-time thread follows after this list.)

    Real-time threads are synchronized and are not subject to so-called priority inversion, where a lower-priority thread holds a lock on a resource needed by a higher-priority thread and thus prevents the higher-priority thread from running. Rigorous testing with partners has shown that Java RTS completely avoids priority inversion, which is crucial for mission-critical applications.

  • New Memory Management Schemes

    The RTSJ defines two new types of memory areas that allow real-time applications to avoid unpredictable delays commonly caused by traditional garbage collectors:

    1. Immortal memory holds objects for the lifetime of the program; they are never destroyed until the program ends. This means that objects created in immortal memory must be carefully allocated and managed.
    2. Scoped memory is used only while a thread works within a particular section, or scope, of the program, such as a method. Objects are automatically destroyed when the thread leaves the scope. This is a useful feature akin to garbage collection, in that explicit creation and deletion is not required as it is in the immortal memory case - but the thread must be sure to exit the scope for the memory to be reclaimed. (The sketch after this list also shows scoped memory in use.)

    Neither immortal nor scoped memory is garbage collected, so using them avoids problems of GC interference.

  • Asynchronous Events Handling & Asynchronous Transfer of Control

    The RTSJ provides two mechanisms for asynchronous communication: asynchronous event handling, and asynchronous transfer of control.

    Asynchronous event handlers deal with external events (known as "happenings") which can occur outside the JVM. The RTSJ is unique in that it allows developers to schedule the response to asynchronous events in order to avoid disrupting the temporal integrity of the rest of the real-time application.

    Asynchronous Transfer of Control (ATC) provides a carefully controlled way for one thread to interrupt another thread in a safe manner.

  • Time & Timers

    The RTSJ provides several ways to specify high-resolution (nanosecond-accuracy) time, including absolute time and relative time.

  • Direct Access to Physical Memory

    While still maintaining security protections, the RTSJ allows direct access to physical memory. This means that device drivers can be created and written entirely in Java. Previously, Java applications had to link to native code to communicate directly with the hardware.
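
To make the threading and memory concepts above concrete, here is a minimal sketch using the javax.realtime API defined by the RTSJ. The class and method names come from the RTSJ itself; the periodic "sampling" task, its 10 ms period, and the scope size are hypothetical choices for illustration only.

import javax.realtime.LTMemory;
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class PeriodicSampler {

    public static void main(String[] args) {
        // Run at the highest real-time priority offered by the scheduler.
        PriorityParameters priority =
            new PriorityParameters(PriorityScheduler.instance().getMaxPriority());

        // Release the thread every 10 milliseconds (hypothetical period).
        PeriodicParameters release = new PeriodicParameters(
            null,                        // start immediately
            new RelativeTime(10, 0),     // period: 10 ms
            null, null,                  // cost, deadline: defaults
            null, null);                 // overrun / deadline-miss handlers: none

        RealtimeThread sampler = new RealtimeThread(priority, release) {
            public void run() {
                // Scoped memory: objects allocated inside enter() are reclaimed
                // when the scope is exited, without involving the garbage collector.
                LTMemory scope = new LTMemory(16 * 1024, 16 * 1024);
                Runnable work = new Runnable() {
                    public void run() {
                        // hypothetical work: read a sensor and compute a value
                    }
                };
                while (true) {
                    scope.enter(work);
                    waitForNextPeriod();  // block until the next 10 ms release
                }
            }
        };
        sampler.start();
    }
}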

Java RTS Minimum Recommended System Requirements

  • Dual-core or dual-CPU system with 512 MB of RAM
  • Solaris 10 u3 SPARC or x86

What is a Real Time System (RTS)?

A real-time system does not mean a "fast" system. It means the system must respond to a real-world event reliably and predictably. For example:

  1. When an elevator reaches a floor, an event is generated. In response to this event, the system should stop the elevator immediately. If the system responds to this event late, the elevator would stop between floors. The timing is critical here: a system that is very fast but does not conform to its timing contracts does not serve the purpose.
  2. Another example is the anti-lock braking system (ABS) in a car. When a wheel locks, the system must respond by releasing it in a timely manner. Any delay may have a dangerous outcome.

A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness but also upon the time at which it is performed. One definition (found on the Internet) is:

"A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred."

Most real-time systems are low-level applications that control physical hardware.

Hard and Soft Real Time System

In a real-time system, the time constraint is one of the most important factors. In some applications, the time constraint is not flexible: any minor deviation from it is considered a failure. This type of system is called a hard real-time system. For example, a car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage. Other examples of hard real-time embedded systems include medical devices such as heart pacemakers and industrial process controllers.

In other scenarios, some latency is acceptable; these are called soft real-time systems. An example is the software that maintains and updates the flight plans for commercial airliners, which can operate with a latency of seconds. Live audio-video systems are also usually soft real-time; violating the constraints results in degraded quality, but the system can continue to operate.

Components to implement a Real Time System

  1. Operating System: The operating system should support real-time processing. It provides all the facilities needed to build a real-time application; it is a multi-tasking operating system that guarantees deadlines are met.

    VxWorks and QNX are examples of real-time operating systems. Windows XP is not a real-time operating system, though there are a few third-party extensions that can be added to Windows XP to provide real-time facilities.

    Developing and deploying an application on a real-time operating system does not by itself make the application real-time. The application must be designed properly to provide real-time behavior. It is like driving a car: having the car is not enough; you also need proper training to drive it.
  2. Application Framework: To create a new real-time system, the application development framework or environment should provide real-time facilities too. For example, standard Java is not a real-time development environment. It has a garbage collector, which can start at any time to reclaim free memory. When garbage collection runs, it consumes significant resources and other threads slow down, delaying the processing of events by those threads. This is one example of where standard Java fails as a real-time application development framework. The same is true for .NET.

    Usually, C & C++ languages are usually used to develop real-time applications.
  3. Application Design: The application must be designed to provide real-time functionality. Even if the resources are available, an incorrect design cannot yield a real-time solution.

Tuesday, January 1, 2008

Which persistence framework to use (JPA, JDO, Hibernate, Toplink)?

With multiple persistence technologies in the Java world, it has always been difficult to evaluate all the existing frameworks and choose the best (and most mature) one. Sun came up with unsuccessful technologies like entity beans and JDO. BEA created its own JDO-based framework, Kodo. Oracle's Toplink is another mature framework, which has been in use for several years. Open-source communities created great persistence frameworks (Hibernate, iBATIS), which are used heavily.

Java is an object-oriented programming language, and there was a unanimous desire for a good object-relational mapping (OR mapping) based persistence framework. In the last few years, it has become established that Hibernate and Toplink are the winners in this race. They are now considered the de facto standards for persistence frameworks. For the last four years, I've seen Hibernate used on all enterprise projects (except one, which was on iBATIS).

To compete with Hibernate and Toplink, Sun came up with the OR-mapping based framework JDO. However, it failed to make an impression on the Java community. BEA built its product Kodo on the JDO specification. Gavin King has highlighted a few shortcomings of JDO in his article (Link: Gavin's Article). Currently, JDO provides more facilities than JPA (see comparison). However, Sun has indicated that JPA is the standard specification for OR-mapping based persistence. Here is one FAQ entry from Sun (Link: JPA FAQ):

Q: What will happen to other data persistence APIs now that the Java Persistence API is available?

A: The Java Persistence API is now the standard API for persistence and object/relational mapping for the Java EE platform. Earlier APIs of course will not go away, but we expect that they will become less interesting once this new standard API is available.

Finally, experts from all the popular frameworks (Hibernate, Toplink, etc.) sat together to come up with a specification defining a common API that would be implemented or supported by the existing popular frameworks. Thus JPA 1.0 was born. Hibernate, Toplink, and other products were enhanced to support the JPA specification.

With JPA, it is possible to use one common API across many persistence frameworks. If an application uses JPA with Hibernate, no code change is required to replace Hibernate with Toplink or any other JPA-based framework. Apache is working on another JPA-based framework named OpenJPA. (A small sketch of the common JPA API follows below.)
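
As an illustration, here is a minimal sketch that uses only the standard javax.persistence API from JPA 1.0. The Invoice entity, the InvoiceDao class, and the persistence-unit name "shop" are hypothetical; the actual provider (Hibernate, Toplink, OpenJPA) is selected in persistence.xml, so swapping providers requires no change to this code.

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
public class Invoice {

    @Id @GeneratedValue
    private Long id;

    private double amount;

    public Long getId() { return id; }
    public double getAmount() { return amount; }
    public void setAmount(double amount) { this.amount = amount; }
}

// Hypothetical DAO showing the provider-neutral JPA calls.
class InvoiceDao {

    // "shop" is a hypothetical persistence-unit name defined in META-INF/persistence.xml,
    // which is also where the JPA provider (Hibernate, Toplink, OpenJPA) is named.
    private final EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("shop");

    public void save(Invoice invoice) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(invoice);             // insert the entity
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}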

BEA and IBM are shipping OpenJPA as the JPA implementation in their application servers, WebLogic and WebSphere respectively. Oracle has Toplink in its application server.

Conclusion

JPA is now the standard specification for OR-mapping based persistence frameworks. It has support from all the major vendors, including IBM, BEA, Oracle, and Sun. Hibernate and Toplink have been enhanced to support JPA. These two frameworks have been in use for several years and are mature now, and it is easy to find developers with experience in them. IBM and BEA are shipping the new OpenJPA framework, which is not yet proven as a mature product. So, in my opinion, new projects should use Hibernate or Toplink with JPA even if the application will be deployed on WebLogic or WebSphere Application Server. And while building the application, care should be taken not to use any proprietary API from Hibernate or Toplink, so that portability is preserved for the future.

Future of SCA after Java EE 6?

JSR 316 has been approved to create Java EE (version 6) specification, which is targeted to ship in 2008. (Link: JSR 316)

The scope of Java EE 6 includes consideration of the SCA-defined facilities. As per the JSR, the scope of Java EE 6 related to SOA is:

  • SOA Support
    The Java EE platform is already widely used for SOA applications. As more and more enterprises realize the benefits of a SOA architecture, requirements on the platform have increased, both in terms of functionality and interoperability. 

    The Service Component Architecture is defining facilities that could be used by composite applications in a SOA environment. This expert group will consider whether any of the SCA-defined facilities are appropriate for inclusion in this release of the Java EE platform.

IBM and BEA are working together to position SCA as an alternative to, or replacement for, existing Java EE technology. SCA is being promoted as a new framework for creating services and components. It provides an IoC container and a framework for creating composite components. However, the current specification does not provide all the facilities provided by Java EE (transactions, JNDI, connection pooling, object pooling, JCA, a web container, etc.). So the current version of SCA cannot be a replacement for Java EE; it can only be used to create new services.

Java EE is a mature technology that has been used by almost all Java-based systems for several years, and customers are comfortable with it. Introducing a new technology has its own challenges, risks, and costs. Unless there is a clear benefit in moving to the new technology, which would require new software licenses, hardware, money, time, and resources, a CEO or CTO of an enterprise will not take a decision in its favor.

The new Java EE 6 specification is expected to cover the new facilities offered by SCA for creating services, and IBM and BEA will presumably build these facilities into their application servers. With this, SCA may not provide any new facility, and there may not be a business case for an enterprise to build new services or solutions based on SCA. Let's wait and see what IBM and BEA do with SCA.

SCA is an IoC Container, Component Model & Assembly Specification

IBM and BEA are pushing a new technology, SCA, to build services. It provides a framework for creating loosely coupled composite applications, where the dependencies between components are defined declaratively. The components can be written in different languages and on different platforms. Let's take a closer look at it with respect to existing technologies, starting with an overview.

Overview of SCA

SCA defines a component framework for creating services. The basic building block of SCA is a component implementation. One component can be composed of other components. The dependencies between components are defined in the assembly descriptor, which provides loose coupling among components. The dependent components can be local or remote and can be implemented in different technologies.

A component implementation is described by the following attributes:

  1. Services: This describes the functions this type of component provides.
  2. References: This describes the dependencies this type of component has in order to function.
  3. Properties: This defines configuration parameters that control how the program logic behaves, for example which currency to use. (A small sketch showing a property and a reference declared with annotations follows after this list.)
  4. Intent policies: This describes assumptions about how the component will behave. There are two types of policies:
    • Implementation policy: impacts the behavior of an implementation, for example transactions, monitoring, and logging.
    • Interaction policy: defines how components behave with one another, for example security.
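
As a small illustration of properties and references, here is a hedged sketch based on the SCA Java annotations used in the Tuscany examples (org.osoa.sca.annotations). The CurrencyConverter component, its RateService dependency, and the "currency" property are hypothetical names for illustration only.

import org.osoa.sca.annotations.Property;
import org.osoa.sca.annotations.Reference;

// Hypothetical service and dependency interfaces.
interface CurrencyConverter { double convert(double amount); }
interface RateService { double getRate(String currency); }

// Hypothetical component: converts amounts into a configurable currency.
public class CurrencyConverterImpl implements CurrencyConverter {

    // Property: a configuration value (e.g. which currency to use),
    // set declaratively in the assembly descriptor.
    @Property
    protected String currency = "USD";

    // Reference: a dependency on another component, wired in the descriptor.
    @Reference
    protected RateService rateService;

    public double convert(double amount) {
        return amount * rateService.getRate(currency);
    }
}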

The key benefits of SCA are:

  1. Loose Coupling: Components integrate with other components without needing to know how other components are implemented. Components can be easily invoked either synchronously or asynchronously.
  2. Flexibility: Components can easily be replaced by other components. It also provides the flexibility to create solutions using existing services (bottom up) or model the business and develop the solution (top down).
  3. Productivity: It enables you to focus on your business logic rather than being slowed down by concerns about infrastructure. SCA simplifies the development experience for all developers (including J2EE and integration developers), integrators, and application deployers.

Now, let's look at sample code to understand how an SCA-based service is created. (All code is taken from the Apache Tuscany SCA user guide: http://incubator.apache.org/tuscany/sca-java-user-guide.html)

The service is defined as an interface:

public interface CalculatorService {
    double add(double n1, double n2);
    double subtract(double n1, double n2);
    double multiply(double n1, double n2);
    double divide(double n1, double n2);
}

The service is implemented as a Java class:

public class CalculatorServiceImpl implements CalculatorService {

    private AddService addService;
    private SubtractService subtractService;
    private MultiplyService multiplyService;
    private DivideService divideService;

    @Reference
    public void setAddService(AddService addService) {
        this.addService = addService;
    }

    // ...set methods for the other attributes would go here

    public double add(double n1, double n2) {
        return addService.add(n1, n2);
    }

    // ...implementations of the other methods would go here
}

In the code above, note that the 'setAddService' method carries the annotation @Reference. 'AddService' is an interface, and the implementation to inject will be specified in the assembly descriptor (an XML file), following the dependency-injection pattern. This is an example of SCA acting as an IoC (Inversion of Control) engine, like the Spring framework.

The referenced component 'AddService' is defined as an interface:

public interface AddService {

    double add(double n1, double n2);
}

The component is implemented as a Java class:

public class AddServiceImpl implements AddService {

    public double add(double n1, double n2) {
        return n1 + n2;
    }
}

So far, a service and one component have been defined and implemented. The service implementation has a dependency on the component (AddService). This dependency is defined in the assembly descriptor:

1 <composite xmlns="http://www.osoa.org/xmlns/sca/1.0" 2 name="Calculator"> 3 4 <service name="CalculatorService" promote="CalculatorServiceComponent/CalculatorService"> 5 <interface.java interface="calculator.CalculatorService"/> 6 <binding.jsonrpc/> 7 </service> 8 9 <component name="CalculatorServiceComponent"> 10 <implementation.java class="calculator.CalculatorServiceImpl"/> 11 <reference name="addService" target="AddServiceComponent" /> 12 <!-- references to SubtractComponent, MultiplyComponent and DivideComponent --> 13 </component> 14 15 <component name="AddServiceComponent"> 16 <implementation.java class="calculator.AddServiceImpl"/> 17 </component> 18 19 <!-- definitions of SubtractComponent, MultiplyComponent and DivideComponent --> 20 21 </composite>

In the code above, the 'reference' tag specifies which component implements 'AddService'. This provides loose coupling between components: if the implementation of the AddService component is changed, only a change in the assembly descriptor is required. The assembly descriptor defines the binding and policy information too.

Let's review different aspects of SCA.

SCA as an IoC Container (Similar to Spring)

An SCA component can be compared to a Spring bean. SCA components have properties and references; the references are other components on which the component depends. The dependent components (references) and the properties are populated through setters or constructors, following the dependency-injection pattern. This is similar to a bean definition in the Spring framework, and the auto-wiring feature of SCA 1.0 is similar to Spring's auto-wire feature. (A sketch of the equivalent Spring wiring follows below.)

Looking at the SCA specification, SCA does not seem to provide any IoC feature that is not already available in Spring. However, since SCA's IoC is provided by the SCA runtime, it is available to every application by default, so there is no need to include a separate IoC container in the project.
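
For comparison, here is a rough sketch of how the same calculator wiring might look as a Spring bean definition. This is my own illustrative mapping, not taken from the Tuscany guide; the bean ids are hypothetical.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <!-- Equivalent of the CalculatorServiceComponent: the addService
         reference becomes a setter-injected property. -->
    <bean id="calculatorService" class="calculator.CalculatorServiceImpl">
        <property name="addService" ref="addService"/>
        <!-- properties for subtractService, multiplyService, divideService -->
    </bean>

    <!-- Equivalent of the AddServiceComponent -->
    <bean id="addService" class="calculator.AddServiceImpl"/>

</beans>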

SCA as Java EE Container

SCA provides a framework for creating Java components similar to EJBs. The new components are implemented as POJOs, and their attributes (security, binding, etc.) are defined using annotations or the assembly descriptor. IBM and BEA suggest implementing the entire business layer using the SCA framework. However, the Java EE specification provides standard guidelines for other areas too:

  • Transaction
  • Thread Management
  • Security
  • Web Service Style (Wrapped, Unwrapped, JAXB Serialization, etc.)
  • Object Pooling
  • JCA
  • Connection Pooling

Currently, the SCA 1.0 specification does not address the features above. For at least another couple of years, projects will certainly continue to use the Java EE stack. Until SCA matures and provides all the functionality available in Java EE, it is unlikely that customers will decide to move to this new technology. Let's wait and see what course the industry takes.

Assembly Descriptor

SCA defines an assembly descriptor that declares the services, references, and properties. SCA supports using components created in different languages and on different platforms as references. This helps in creating a new service from existing components written in other languages. But an existing component cannot be used as an SCA component as-is; an SCA wrapper must be created for it, following the SCA specification. In the real world, there are very few cases where a client uses different technologies within one domain. A client may use multiple technologies across different domains, but in that case the integration should be done using orchestration (BPEL). SCA itself, too, only supports composition of components within a single domain.

Conclusion

SCA is an interesting technology, and big product companies (IBM, BEA, etc.) are behind it. However, this new technology is being offered as a replacement for the well-established and mature Java EE technology. Very likely, the next release of Java EE (6) will include the new features provided by SCA. SCA does not provide a roadmap for whether it can be used as the business layer of a web application. Almost all enterprises have web applications, and they want a minimal set of technologies. Current Java EE technology provides a good framework for creating new services. Bringing in a new technology to create new services, or undertaking a migration effort to transform current services (the business layer) to the new technology, has its own challenges and risks.

Since big companies are behind SCA, it cannot be written off easily. They can try to push their existing user base (as Microsoft does) toward the new technology, but it will not be an easy task for them. For the next couple of years, it is unlikely that new projects will use SCA-based systems until SCA establishes itself as a mature technology comparable to Java EE. We'll watch closely how it evolves.