Customizing the Cloud Foundry Java Buildpack

In this post I’ll describe the process of customizing the Cloud Foundry Java build pack for the fortressDemo2 sample application.

In general, the need to customize a build pack arises any time you have to change an application’s runtime stack. In terms of the specific security issues we care about around here, you will need to customize the build pack any time you have a requirement to configure the set of CA certificates held in the JSSE trust store of the JDK.  Many enterprises choose to operate their own internal CA, and most have policies about which third-party certificate authorities will be trusted, so this is a very common requirement. Similarly, you’ll need to customize the Java build pack if you want to implement JEE container-based security via a custom Tomcat Realm. Fortunately, the Cloud Foundry build pack engineers thought about these issues, and so the procedure is fairly straightforward.  We’ll show the specific steps that you’ll need to perform, and we’ll touch on some of the security considerations of maintaining your build packs.

The basic procedure that we’ll need to perform goes as follows:

  • Fork the existing Java build pack from the GitHub repository.
  • Update the /resources/open_jdk_jre/lib/security directory with your custom cacerts JSSE trust store file.
  • Add the jar file that implements your custom JEE realm provider to the /resources/tomcat/lib directory.
  • Do a git commit, and then push these changes back up to your repository.
    • (and/or re-bundle an offline version of the build pack archive)
  • Push the application up to Cloud Foundry, and specify your customized build pack as an option on the push command.

In the following paragraphs, we’ll go through each of these steps in a bit more detail.

Fork the Repo

This part is easy.  Just fork the existing Java build pack repository at GitHub and then, using your favorite Git client, clone your copy of the repository onto your local machine. Keeping your customizations in a public repository enables you to share your good work with others who need those changes, and makes it easy to merge any upstream changes in the future. Also, depending upon how much work you need to do, consider creating a Git branch for your changes. You’ll probably want to isolate the changes you make to the build pack, just as you would do with any other development effort.

Log into GitHub, visit https://github.com/cloudfoundry/java-buildpack and press the “Fork” button. Then clone your fork.  In my case, that looked like this:

 $ git clone https://github.com/johnpfield/java-buildpack.git

Then, depending upon your working style, create a Git branch, and we can start making some changes.

Update the JDK Trust Store

The Cloud Foundry build pack engineers designed a simple way for us to modify the JDK security configuration details. This enables you to adjust policy details such as the trust store, the java.policy file, or the java.security file.

There is a /resources subdirectory just below the Java build pack project root.  Below that, there are subdirectories for the Oracle JDK, the OpenJDK, and Tomcat. We’re going to use the OpenJDK for our deployment, so we need to copy our trust store file into the /resources/open_jdk_jre/lib/security subdirectory. This file is traditionally called cacerts, or more recently, jssecacerts. Assuming you are moving this over from a locally tested JDK installation, this would look something like:

$ cp $JAVA_HOME/jre/lib/security/cacerts  ~/projects/java-buildpack/resources/open_jdk_jre/lib/security/mycacerts

Of course, before doing this you should probably use the JDK keytool command along with SHA checksums to confirm that this trust store file actually contains only the certificates you’ll want to trust. Once that’s been done, just copy the trust store over to the designated place. Similarly, you can also customize the contents of java.policy or java.security as needed, and copy those over.
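For example, before copying the file over, you might review its contents and record a checksum with something like the following (the default trust store password is typically “changeit”; on RHEL, use sha256sum instead of shasum):

$ keytool -list -keystore mycacerts -storepass changeit
$ shasum -a 256 mycacerts

The keytool -list output shows each trusted certificate’s alias and fingerprint, which you can compare against the fingerprints published by your CA.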

Adding the Custom Realm Provider to Tomcat

Adding our custom JEE realm provider means putting the appropriate implementation jar onto the Tomcat container’s class path. Our preferred provider is Fortress Sentry. Assuming this is being migrated from a standalone Tomcat installation using a recent release of Fortress Sentry, this would look something like:

$ cp $CATALINA_HOME/lib/fortressProxyTomcat7-1.0-RC39.jar ~/projects/java-buildpack/resources/tomcat/lib/fortressProxyTomcat7-1.0-RC39.jar

As described in the Tomcat docs, a custom realm can be enlisted at the level of an individual application, for a virtual host, or for all of the applications hosted on that container. In my recent PoC I was doing this for a specific application, so no other configuration was needed as part of the java-buildpack. Because the custom realm was scoped to the application, we only needed to add that configuration to the META-INF/context.xml file, within the application war file.
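For reference, a hypothetical application-scoped entry in META-INF/context.xml would look something like the following. The className value here is an illustrative placeholder; use the realm class that ships in your provider jar, per the Fortress Sentry documentation:

<Context>
    <!-- className is illustrative; substitute your provider's realm class -->
    <Realm className="com.example.realm.CustomRealm" />
</Context>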

If this custom realm needed to be activated for the whole container, or for a virtual host, then we would need to edit the Tomcat server.xml, and move that updated file over to /resources/tomcat/conf/server.xml.
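Assuming again that you are migrating the configuration from a locally tested standalone Tomcat, that copy would look something like:

$ cp $CATALINA_HOME/conf/server.xml ~/projects/java-buildpack/resources/tomcat/conf/server.xml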

Easy, Expert, or Offline

Cloud Foundry build packs support three operational modes, called “Easy,” “Expert,” and “Offline.” The Easy mode is the default.  In this mode, the staging process will pull down the current release of the build pack from the repository maintained by Pivotal. This ensures that the application runs with the “latest-and-greatest” runtime stack, and that you always have the latest security patches. This mode “just works,” and is recommended for everyone starting out.

Expert mode is similar, except that you maintain your own copy of the repository, which can be hosted inside the enterprise. This will be initialized by creating a local replica of the official Pivotal repository. Of course, this has all the benefits and responsibilities of local control, i.e. you maintain it. The main motivation for Expert mode is that since the repository is inside the enterprise, the staging process does not need to download stuff from the public internet every time an application is pushed.

The “Offline” mode is pretty much what you would think. Rather than referencing an external repository during staging and deployment, you work offline, i.e. without making any connections to a live repository. In this mode, you create a ZIP file that contains your complete build pack, and upload that to your Cloud Foundry deployment. When you subsequently push your application(s), you’ll specify that build pack archive by name. Of course, this approach ensures consistency and repeatability. None of the runtime stack bits will ever vary, until and unless you decide to upload a new ZIP file. But you also run the risk of falling behind in terms of having the very latest JDK or Tomcat security fixes. Another potential downside of these ZIP files is bloated storage requirements. If every application team maintains their own ZIP files — all containing the same Tomcat release — there is likely to be a lot of redundancy, and wasted disk space.

At the end of the day, each of these methods has its pros and cons, and you’ll need to decide what makes sense for your situation. For the purposes of this post, Easy and Expert are equivalent, as they are both online options, and it’s just a matter of which particular URL is referenced. Offline mode requires the additional step of creating and uploading the archive.

Custom Build Pack, Online Option

Assuming you want to work in the “online” style, you should commit and push your build pack changes to your fork of the repository, i.e.:

$ cd ~/projects/java-buildpack 
$ # Modify as needed. Then...
$ git add .
$ git commit -m "Add custom JSSE trust store and JEE realm provider."
$ git push

Then you can do the cf push of the application to the Cloud Controller:

$ cd ~/projects/fortressdemo2
$ mvn clean package
$ cf push fortressdemo2 -t 60 -p target/fortressdemo2.war \
-b https://github.com/johnpfield/java-buildpack.git

Your application will be staged and run using the online version of the custom build pack.

Custom Build Pack, Offline Option

To use an offline version of the custom build pack, you will first bundle the ZIP file locally, and then upload this blob to the Cloud Foundry deployment. Finally, you can do the cf push operation, specifying the named build pack as your runtime stack.

To do this you’ll need to have Ruby installed. I used Ruby version 2.1.2, via RVM.

$ cd ~/projects/java-buildpack
$ bundle install
$ bundle exec rake package OFFLINE=true

After the build pack is ready, you can upload it to Cloud Foundry:

$ cd build
$ cf create-buildpack fortressdemo2-java-buildpack \
        ./java-buildpack-offline-bb567da.zip 1

And finally, you can specify that Cloud Foundry should apply that build pack when you push the application:

$ cd ~/projects/fortressdemo2
$ cf push fortressdemo2 -t 90 -p target/fortressdemo2.war \
    -b fortressdemo2-java-buildpack

That’s it! You can confirm that the application is running using your custom JEE realm and JSSE trust store by examining your configuration files, and logs:

$ cf files fortressdemo2 app/.java-buildpack/tomcat/lib

The response should include the Fortress jar, and look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...
OK

annotations-api.jar                15.6K
catalina-ant.jar                   52.2K
catalina-ha.jar                    129.8K
catalina-tribes.jar                250.8K
catalina.jar                       1.5M
...
<SNIP>
...
fortressProxyTomcat7-1.0-RC39.jar  10.6K
...

And you can also confirm that your custom certificate trust store and policy files are actually being used:

$ cf files fortressdemo2 app/.java-buildpack/open_jdk_jre/lib/security

The response will look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...
OK
US_export_policy.jar            620B
mycacerts                       1.2K
java.policy                     2.5K
java.security                   17.4K
local_policy.jar                1.0K

Finally, it is important to note that any Java build pack is intended to support a class of applications, not just a single application. So having a build pack specialized for Fortress Sentry deployments is in fact a very plausible use case. The above URL referencing my GitHub repository is real, so if you want to quickly deploy the fortressDemo2 application in your own Cloud Foundry instance, feel free to use that repository, and issue pull requests for any changes.

How to Monitor a Remote JVM Running on RHEL

Even when I’m not working on security, I’m still working on security.

I’ve recently been working on a large customer application deployment that has required some performance analysis and tuning. The usual first step in this process is to use a tool like JConsole, a very useful management and monitoring utility that is included in the JDK. In short, JConsole is an application developer’s tool that complies with the Java Management Extensions (JMX) specification. It has a nice GUI, and it allows you to monitor a locally or remotely executing Java process by attaching to the running JVM via RMI. Of course, before you can attach to the running JVM you need to be appropriately authenticated and authorized. In this post I’ll provide a brief overview of the steps that were required to connect from JConsole, running locally on my MacBook Pro, to the remote JVM process running on a RHEL v6 virtual machine in the lab.

Required JMX Configurations

Before you can run JConsole, there are a number of changes that need to be made to the JMX configuration on the target machine. In this specific case, I’m using OpenJDK on RHEL, and the relevant files are located in /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/lib/management.

The first step is to do a ‘cd’ into that directory, and edit the following files appropriately:

# cd /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/lib/management
# vi jmxremote.password
# vi jmxremote.access
# vi management.properties

There is a template file in the OpenJDK distribution called “jmxremote.password.template” that you can copy to “jmxremote.password” in order to get started. Note also that the permissions on that password file must be set correctly, or you’ll see a complaint when you start the JVM. While you are setting this up, be sure to do a ‘chmod’ to make this file read/write by the owner only:

# chmod 600 jmxremote.password

In general, these configuration files contain good comments, and all that is really required is to uncomment the lines corresponding to the settings you plan to use. Just to get started, the easiest approach is to disable SSL, and use password authentication for a read-only user. You can edit jmxremote.password to contain your favorite username/password combination, and subsequently edit jmxremote.access to give that username appropriate access. In my case, this was just read-only access. Some sample lines from these two files follow:

#
#jmxremote.password:
#
architectedSecUser h4rd2Guesss!
#
#
#jmxremote.access:
#
architectedSecUser readonly

If you are on an untrusted network and you are planning to monitor a program that handles sensitive data, you’ll want to enable SSL for your RMI connection. Doing that means that you will need to go through the standard drill of configuring the JDK keystore and certificate truststore. I won’t go over those individual steps here since it would be beyond the scope of this post.  Hmmm…come to think of it, perhaps I should revisit the general topic of JDK keystore/truststore configuration in a subsequent post. The world can never have too many certificate tutorials 🙂

Listening on the Right Interface

The first real glitch I hit in this task was that the target RHEL machine had no resolvable hostname. This is actually pretty common with developer machines running in a virtualized setting. Machines are cloned and the IP address is changed, but frequently there is no unique hostname assigned, and DNS is never updated. Doing a ‘hostname’ command on the machine will yield something like “localhost.localdomain.” The problem with this situation is that when we run the target application JVM with JMX access enabled, it will be listening only on the local loopback address 127.0.0.1, and won’t be accepting connections on the LAN interface. When we issue our RMI request from JConsole targeting the remote IP address on the local LAN (say, something like 10.0.0.22), we’ll see an error like “connection refused.”

To diagnose this situation, get a shell prompt on the target machine and issue the command:

# hostname -i

If it returns “127.0.0.1”, or “127.0.1.1”, or “localhost”, you don’t have a proper hostname configured. You will have to update the hostname interactively and/or edit the /etc/hosts file. Alternatively, you can ignore the hostname issue and just specify the IP address explicitly as a Java system property when you start the JVM. Here are the values I used when starting my JVM:

java \
-D<any other system properties needed> \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.port=18745 \
-Djava.rmi.server.hostname=10.0.0.22 \
-cp myApplication.jar myMainClass

You would now run JConsole on your local machine, and connect to the remote host, at the chosen IP address and port. If you are not using username/password authentication you can connect as follows:

# jconsole 10.0.0.22:18745

If you are using username/password authentication, you will have to enter your credentials in the New Connection… dialog box in the GUI. In most cases, that’s all there is to it.  But, of course, the connection still did not work for me.  Grrrr…those darned security guys! 😉

Configuring the Linux iptables Firewall Rules

Even after the target application JVM was up and running on the remote machine, and listening on the correct address, I still could not get JConsole to connect to it from my local laptop. Since I was able to get an SSH session on the remote Linux box, I immediately concluded that there had to be an issue with the specific port(s) JConsole was trying to reach… Hmmm… I just chose port number 18745 randomly (OK, it was really pseudo-randomly)… Maybe I’m hitting up against some firewall rule(s) for that port? Perhaps port 22 (SSH) is allowed, but port 18745 is not? In fact, who knows what other dynamic ports JMX may be trying to open? So, in an attempt to determine what ports were being used, I next ran JConsole with some logging turned on.

To turn on the logging for JConsole, you can create a file named “logging.properties” in the directory from which you will be running JConsole, and set the configuration argument on the JConsole command line. Do the following to create the logging.properties file:

# cd /myproject
# touch logging.properties 
# vi logging.properties

Cut and paste the following into logging.properties:

handlers= java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.FileHandler.pattern = %h/java%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
java.util.logging.ConsoleHandler.level = FINEST
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
javax.management.level=FINEST
javax.management.remote.level=FINEST

Then, go ahead and start jconsole with:

jconsole -J-Djava.util.logging.config.file=logging.properties

Now, when you try to connect, you will see log output on your terminal window which reveals the dynamic port number that JConsole is trying to use for RMI lookup. An example is shown below:

Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] connecting...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] finding stub...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] connecting stub...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] getting connection...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] failed to connect: java.rmi.ConnectException: Connection refused to host: 10.0.0.22; nested exception is:
 java.net.ConnectException: Connection refused
Jan 28, 2014 1:57:56 PM RMIConnector close
FINER: [javax.management.remote.rmi.RMIConnector: rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[10.0.0.22:45219](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] closing.

In this example, JConsole was attempting to create a connection to port 45219.  Now that we know that crucial tidbit of information, we can go ahead and update the Linux firewall policy in /etc/sysconfig/iptables to allow that specific port number.  Do the following:

# su root
# vi /etc/sysconfig/iptables
# add 2 lines similar to these to iptables policy.
-A INPUT -m state --state NEW -m tcp -p tcp --dport 18745 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 45219 -j ACCEPT

# service iptables restart

As shown, I needed to restart the firewall process after making the necessary policy changes.  After that, I was able to just reconnect from JConsole, and this time the connection to the remote machine succeeded, and I could proceed to monitor my application’s resource utilization.

Like I said, even when I’m not doing security, I’m still doing security.

Happy Remote Monitoring!

JavaOne 2013: Using ANSI RBAC to Secure Java Enterprise Applications

I recently had the opportunity to do a presentation at the JavaOne 2013 conference in San Francisco.  My co-presenter was Shawn McKinney of Joshua Tree Software, the creator of Fortress, the open source RBAC engine.  Our talk provided an introduction to ANSI RBAC, and then went on to describe a POC implementation that I did using Fortress.  The session was well attended, but for those of you who couldn’t be there at JavaOne this year, I’ll use this post to provide a brief recap of some of my key points from that talk.

The first part of the talk provided an introduction to ANSI RBAC.  The slides themselves constitute an excellent summary, and so I don’t need to repeat those points here. The second part of the talk focused on the POC.  The security requirement was to add RBAC enforcements to an existing enterprise Web application that was written in Java.   Had the application been written using a mainstream application development framework such as Spring, adding the RBAC enablement would have been relatively straightforward.  (As a general rule, though, I only get called on to deal with the really challenging security architecture problems, so I had a feeling that this could not possibly be as easy as it first seemed.   There had to be a catch somewhere…).

Well, it turned out that the application did not use the Spring Framework or Spring Security.  In fact, the application did not use any familiar framework, but rather was written using a proprietary application framework.   And it quickly became clear that modifying the source code for this framework was not going to be an option.  I had no access to source code for the target applications or framework, and the original application programmer was nowhere to be found…  The situation was shaping up to be one of those worst-case scenarios:   The business needed to close a known security and compliance gap, the source code was not available, and no framework modifications were going to be possible.

AOP to the Rescue

As described in the presentation deck, the solution to this conundrum was to use Aspect Oriented Programming (AOP).  As the target system was written in Java, it was possible to use AspectJ to define intercepts for any type or interface in the system.  Following the sage advice of David Wheeler, I decided that the best approach would be to add one more level of indirection, and implement AspectJ advice on both the application framework, and the container’s security provider interface.

Specifically, I decided to write a set of AspectJ pointcuts to intercept any calls made to the sensitive resources being managed by the proprietary application framework.  This could be done with knowledge of those APIs, gleaned from the public Javadoc or even from an application stack trace.  A “before advice” would be injected at those primary pointcuts, and this new code would make a call to the container’s security provider.  That would both protect those application framework resources with a new enforcement point, and also keep all the security checks centralized through the existing container provider.
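To make this concrete, here is a minimal sketch of a primary pointcut with its before advice, written in AspectJ’s code style. The framework class and method, and the ContainerSecurity.checkAccess() delegate, are hypothetical stand-ins, since the real framework APIs were proprietary:

import java.security.AccessControlException;

public aspect FrameworkResourceGuard {

    // Primary pointcut: intercept calls to a sensitive framework API
    // (the type and method shown are hypothetical).
    pointcut sensitiveAccess(String resourceName) :
        call(* com.example.framework.ResourceManager.fetch(String)) && args(resourceName);

    // Before advice: route the request through the container's security
    // provider before the framework code is allowed to run.
    before(String resourceName) : sensitiveAccess(resourceName) {
        if (!ContainerSecurity.checkAccess(resourceName)) {   // hypothetical delegate
            throw new AccessControlException("Access denied: " + resourceName);
        }
    }
}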

Next, I wrote a pointcut on the container security provider itself, on the Policy.implies() method.  This secondary pointcut would be associated with an “around advice.”  The around advice would first call proceed(), so that the existing provider could still do its job for any resources declared in the application’s deployment descriptor.  (It’s important to remember that the container’s security provider is still going to be called through its normal invocation path, and not just from the advice we put on the application framework.)  After the container’s security provider returned from the Policy.implies() method, we would have the opportunity to check whether the specific resource in question was one of the types that needed to be subjected to RBAC enforcement rules.  If so, the advice code would adjudicate an access control decision by delegating the request to an appropriate ANSI RBAC compliant Policy Decision Point (PDP), e.g. Fortress.  Whatever result that PDP returned (whether “allow” or “deny”), that value would become the container provider’s access control decision.  If permitted, the original API request is allowed to continue.  Otherwise, I throw an AccessControlException.  Since the Java AccessControlException is a runtime (unchecked) exception, all of the existing framework and application code could remain unmodified.
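Again as a sketch only (the provider class name and the RbacPolicy delegate are illustrative; the real code accompanies the presentation deck), the secondary pointcut and around advice looked roughly like this:

import java.security.Permission;
import java.security.ProtectionDomain;

public aspect RbacPolicyDecorator {

    // Secondary pointcut: the implies() method on the container's security
    // provider (a java.security.Policy subclass; the class name is illustrative).
    pointcut policyCheck(ProtectionDomain domain, Permission permission) :
        execution(boolean com.example.container.ContainerPolicyProvider.implies(ProtectionDomain, Permission))
        && args(domain, permission);

    boolean around(ProtectionDomain domain, Permission permission) :
            policyCheck(domain, permission) {
        // Let the existing provider decide for everything declared in the
        // application's deployment descriptor.
        boolean allowed = proceed(domain, permission);

        // For resources under RBAC control, the external PDP (e.g. Fortress)
        // gets the final word; both calls below are hypothetical delegates.
        if (RbacPolicy.isManaged(permission)) {
            allowed = RbacPolicy.checkAccess(permission);
        }
        return allowed;
    }
}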

The JavaOne 2013 presentation deck has a nice figure that illustrates this overall solution architecture.   And if you’d like to see the end result of all these AOP injections in action, check out this video, which shows a quick demo of the secured application.

Conclusion

The moral of this story is that AOP can be a really valuable tool in the security architect’s tool kit.  Using AOP provides excellent support for the principle of Separation of Concerns, and can help to minimize the overall development cost.  This technology can be used to add targeted enforcement, even when the source code for the services you are securing is not available to you.  And, finally, it is worth noting that this solution is not limited to a specific JEE container.  I tested this with Geronimo 3.0.1, but the solution should work equally well with any JSR-115 compliant container.

Update (December 2013)

A recording of this talk is now available.


Using EJB 3 Interceptors for Instance-based Policy Enforcement

A core principle of a good application security architecture is to maintain an appropriate Separation of Concerns.  We need to avoid tangling the code for the application’s business logic with the code that implements the security enforcements.   In general, the best way to accomplish this is to leverage framework-based enforcement mechanisms whenever possible.  My previous post provided a working example of how to do this in practice, using EJB 3 container-based security with Apache Geronimo.  We showed how to enforce Role-based Access Control for both Web resources and EJB methods, using the standard XML deployment descriptors.  We implemented all of the required security enforcements without making any changes to the actual application code.

Unfortunately, the container-based security enforcement provided by a standard JEE deployment descriptor is limited to protecting object types.  That is, we can easily enforce a policy that says “only tellers may view customer accounts.”  But what if we have a requirement to enforce permissions on specific object instances?  In that case, the declarative approach alone would be insufficient (for one thing, we don’t want to redeploy the application every time a new object instance is created!).  In order to do any type of instance-based policy enforcement, we’ll need to use another mechanism.

Fortunately, EJB 3 also includes the capability to use Aspect Oriented Programming techniques in your application.  A full discussion of AOP techniques is beyond the scope of this post, but the basic idea is that one can inject (at runtime) additional code that implements logic for cross-cutting concerns.  For example, one could add additional logging into an existing set of classes.  Of course, security is my favorite cross-cutting concern, and so in this post I’ll discuss how to use an EJB 3 Interceptor to provide this instance-based policy enforcement.  Because the interceptor code is completely decoupled from the application code, it is again possible to maintain an appropriate Separation of Concerns.  We can add the needed security enforcement logic without impacting the existing business logic.  All we’ll need to do is implement the stand-alone interceptor class, and then wire it into the target application(s) via the deployment descriptor, and redeploy.

How To Do It

There are basically four steps to making this all work:

  1. Implement an interceptor class that will perform the needed security enforcements, and bundle it as a jar file.
  2. Update the application’s EJB deployment descriptor to declare the interceptor class, and bind it to a target EJB class and/or method.
  3. Repackage the enterprise application EAR file to include both the new jar files, and the updated deployment descriptor.
  4. Redeploy the enterprise application.

The following paragraphs provide some additional details on each of these steps.  Note that the complete source code for this example is available over on my GitHub repository.   You’ll want to clone both the psi-jee project (the Pivotal Security Interceptor for JEE), and also the mytime project (the target application).

Implementing the Interceptor Class

I implemented an interceptor that provides an around advice that will examine the values of the arguments passed to the target EJB when it is invoked.  If the values of the calling arguments are consistent with the application security policy, then the call can be allowed to proceed.  If the values of the arguments would violate a security policy, then the user’s request would be denied.  As an illustrative example, consider a hypothetical EJB interface that performs a funds transfer function.  The declarative security enforcements captured in the deployment descriptor could ensure that only users with the right role can access the funds transfer interface.  But only the interceptor will be able to examine the actual values of the parameters passed into the funds transfer method at runtime — these arguments might be String types such as the “FromAccountNumber,” the “ToAccountNumber,” and the “TransferAmount.”  (Of course this is an over-simplified example, but you get the idea).

The interceptor class has an interface contract with just one calling argument, the InvocationContext.  This InvocationContext parameter can be used to obtain the actual arguments that were passed to the EJB that we are in the process of intercepting.  This can be seen in lines 45-47 of the example code.    Of course, the specific EJB arguments returned will vary from one application to the next, but with appropriate knowledge of the application bean(s) you are planning on intercepting, you should know what arguments you need to look for in the params array, and what test(s) to apply.

OK, so now we know what this EJB invocation is trying to do, but we still need to know who is trying to do it.

Since we are running in an EJB container, we can always get a handle to the current EJBContext.  From there we can obtain the currently authenticated principal (the user id).  Looking at lines 73-77 of the code, you can see that I defined a private variable to hold the EJB SessionContext, and then used the getCallerPrincipal() method to find out the userid of the current caller.

Thus, with just a few lines of code we can get access to who is calling the bean method, and what they are trying to do.  Given that runtime context we can then implement whatever security policy checks are appropriate for the business.  If all is well, we call the invocationContext.proceed() method.  Otherwise, we can throw an exception.
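Putting the who and the what together, a simplified sketch in the spirit of the psi-jee interceptor might look like the following. The argument positions and the policy check are hypothetical, patterned on the funds-transfer example above; see the GitHub repository for the real code:

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class TransferPolicyInterceptor {

    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object enforcePolicy(InvocationContext ctx) throws Exception {
        // Who: the authenticated caller, courtesy of the EJB container.
        String caller = sessionContext.getCallerPrincipal().getName();

        // What: the actual argument values passed to the intercepted method.
        Object[] params = ctx.getParameters();
        String fromAccount = (String) params[0];   // hypothetical positions, per the
        String amount = (String) params[2];        // funds-transfer example above

        // The instance-based check: may this caller move this amount out of
        // this particular account? (The policy lookup is application-specific.)
        if (!transferPermitted(caller, fromAccount, amount)) {
            throw new SecurityException("Transfer denied for " + caller);
        }
        return ctx.proceed();
    }

    private boolean transferPermitted(String caller, String from, String amount) {
        // Placeholder for the real policy decision (PDP call, database lookup, etc.)
        return true;
    }
}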

Updating the Deployment Descriptors

The new interceptor class must be included in the target application’s deployment descriptor.  This makes perfect sense, since this is how the EJB container knows that you have defined an interceptor class, and also which EJBs are being targeted by the interceptor.  I won’t repeat the essential snippet of XML here, as it is already captured in the README file of the mytime example that I’ve posted on GitHub.  Of course, the complete deployment descriptor can also be found in the file META-INF/ejb-jar.xml, in the mytime repository.

Repackaging the Target Application

Before you re-package the target enterprise application, you’ll need to make at least one change to the maven POM of your ejb-jar project.

If your target application’s ejb-jar project does not already use the maven-jar-plugin, then you’ll want to add this plugin to the <build>/<plugins> element of the target EJB jar project’s maven POM.  Using the maven-jar-plugin with a configuration along the lines sketched below will ensure that the MANIFEST.MF file included within the EJB jar contains the required Class-Path: directive.  Basically, you need to have a line in the MANIFEST.MF that tells the container to include the new jar (i.e. the interceptor jar) on the class path of the EJB at runtime.  Take a look inside the EJB jar file after the re-build completes and you should see that the interceptor jar has now been added to the line that says Class-Path: … psi-jee-1.0.0.jar.
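If you need a starting point, a minimal maven-jar-plugin configuration along these lines (plugin version omitted) tells Maven to generate that Class-Path: entry from the project’s runtime dependencies:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
      </manifest>
    </archive>
  </configuration>
</plugin>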

If you’re already using the maven-jar-plugin in this way, then you’ll just need to add the dependency for the new interceptor jar, and the Class-Path: value should be updated accordingly.

As noted, I’ve posted all the code for both the interceptor and the target application over at GitHub, so feel free to do a git clone and get started on using EJB 3 interceptors for instance-based enforcements.  Pull requests are always welcome.

Conclusion

While this post has provided specific implementation guidance for an EJB 3 deployment, the real point here is not that we are advocating the use of EJB.   Rather, the real take-away is that a good security architect always follows the principle of Separation of Concerns.  Using an interceptor is a sound security technique, and fortunately EJB now has support for that.  Security enforcement for object types is delegated to the container.  Security enforcement for object instances is delegated to an interceptor.  The application developer stays focused on the business logic.  The result will be a system that is both more secure, and more maintainable.

If you’re not working in an EJB environment, then this same architectural pattern can still be applied to other containers.  The approach would be similar for, say, instance-based policy enforcement for a RESTful service accessed over HTTP(S), using a servlet filter deployed to Tomcat or Jetty, as sketched below.
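As a sketch of that variation (all names here are hypothetical, and a production filter would also need to handle unauthenticated requests, where getUserPrincipal() returns null):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountPolicyFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;

        // Who: the authenticated caller; what: the specific resource instance
        // identified by the request path (e.g. /accounts/12345).
        String caller = request.getUserPrincipal().getName();
        String accountId = request.getPathInfo();      // hypothetical URL scheme

        if (!accessPermitted(caller, accountId)) {     // hypothetical policy check
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }

    private boolean accessPermitted(String caller, String accountId) {
        // Placeholder for the real instance-based policy decision.
        return true;
    }
}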

Finally, if you need to do the enforcements in a container-independent way (perhaps because you need to deploy your application to a number of different containers at runtime), then using an application developer framework such as the Spring Framework with a Spring Security filter chain is clearly the way to go.