Customizing the Cloud Foundry Java Buildpack

In this post I’ll describe the process of customizing the Cloud Foundry Java build pack for the fortressDemo2 sample application.

In general, the need to customize a build pack arises any time you must change an application’s runtime stack. In terms of the specific security issues that we care about around here, you will need to customize the build pack whenever you have a requirement to configure the set of CA certificates held in the JDK’s JSSE trust store. Many enterprises choose to operate their own internal CA, and most have policies about which third-party certificate authorities will be trusted, so this is a very common requirement. Similarly, you’ll need to customize the Java build pack if you want to implement JEE container-based security via a custom Tomcat Realm. Fortunately, the Cloud Foundry build pack engineers thought about these issues, and the procedure is fairly straightforward. We’ll show the specific steps that you’ll need to perform, and we’ll touch on some of the security considerations of maintaining your build packs.

The basic procedure that we’ll need to perform goes as follows:

  • Fork the existing Java build pack from the GitHub repository.
  • Update the  /resources/open_jdk_jre/lib/security directory with your custom cacerts JSSE trust store file.
  • Add the jar file that implements your custom JEE realm provider to the /resources/tomcat/lib directory.
  • Do a git commit, and then push these changes back up to your repository.
    • (and/or re-bundle an offline version of the build pack archive)
  • Push the application up to Cloud Foundry, and specify your customized build pack as an option on the push command.

In the following paragraphs, we’ll go through each of these steps in a bit more detail.

Fork the Repo

This part is easy.  Just fork the existing Java build pack repository at GitHub and then, using your favorite Git client, clone your copy of the repository onto your local machine. Keeping your customizations in a public repository enables you to share your good work with others who need those changes, and makes it easy to merge any upstream changes in the future. Also, depending upon how much work you need to do, consider creating a Git branch for your changes. You’ll probably want to isolate the changes you make to the build pack, just as you would do with any other development effort.

Log into GitHub, visit https://github.com/cloudfoundry/java-buildpack and press the “Fork” button. After that, use your favorite Git client to clone your copy of the repository.  In my case, that looked like this:

 $ git clone https://github.com/johnpfield/java-buildpack.git

Then, depending upon your working style, create a Git branch, and we can start making some changes.

Update the JDK Trust Store

The Cloud Foundry build pack engineers designed a simple way for us to modify the JDK security configuration details. This enables you to adjust policy details such as the trust store, the java.policy file, or the java.security file.

There is a /resources subdirectory just below the Java build pack project root.  Below that, there are subdirectories for the Oracle JDK, the OpenJDK, and Tomcat. We’re going to use the OpenJDK for our deployment, so we need to copy our trust store file into the /resources/open_jdk_jre/lib/security subdirectory. This file is traditionally called cacerts, or more recently, jssecacerts. Assuming you are moving this over from a locally tested JDK installation, this would look something like:

$ cp $JAVA_HOME/jre/lib/security/cacerts  ~/projects/java-buildpack/resources/open_jdk_jre/lib/security/mycacerts

Of course, before doing this you should probably use the JDK keytool command along with SHA checksums to confirm that this trust store file actually contains only the certificates you’ll want to trust. Once that’s been done, just copy the trust store over to the designated place. Similarly, you can also customize the contents of java.policy or java.security as needed, and copy those over.
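As a sketch, the verification step might look like the following. The paths and the default `changeit` store password are assumptions about a stock OpenJDK layout, not something dictated by the build pack:

```shell
# Hedged sketch: inspect the trust store before copying it into the
# build pack. The path and the default password are assumptions.
TRUST_STORE="${JAVA_HOME:-/usr/lib/jvm/default}/jre/lib/security/cacerts"

if command -v keytool >/dev/null 2>&1 && [ -f "$TRUST_STORE" ]; then
    # List every alias and certificate fingerprint in the store;
    # 'changeit' is the stock OpenJDK default store password.
    keytool -list -keystore "$TRUST_STORE" -storepass changeit
fi

# Record a checksum so the copy inside the build pack can be
# verified against the original later.
echo "verify with: shasum -a 256 $TRUST_STORE"
```

Reviewing that alias list against your enterprise CA policy, and keeping the checksum with your change records, gives you a simple audit trail for the trust store you are about to bake into the build pack.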

Adding the custom Realm Provider to Tomcat

Adding our custom JEE realm provider means putting the appropriate implementation jar onto the Tomcat container’s class path. Our preferred provider is Fortress Sentry. Assuming this is being migrated from a standalone Tomcat installation using a recent release of Fortress Sentry, this would look something like:

$ cp $CATALINA_HOME/lib/fortressProxyTomcat7-1.0-RC39.jar ~/projects/java-buildpack/resources/tomcat/lib/fortressProxyTomcat7-1.0-RC39.jar

As described in the Tomcat docs, actually enlisting the custom realm can be done at the level of an individual application, a virtual host, or all of the applications hosted on that container. In my recent PoC I enabled it for a specific application, which meant there was no other configuration needed as part of the java-buildpack. With application-specific scope, the realm configuration only needed to be added to the META-INF/context.xml file, within the application war file.
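For illustration, an application-scoped realm declaration in META-INF/context.xml looks roughly like the fragment below. The className shown is a placeholder, not the actual Fortress Sentry realm class; consult the Fortress Sentry documentation for the real class name and its supported attributes.

```xml
<!-- META-INF/context.xml (sketch; className is a placeholder) -->
<Context>
    <Realm className="com.example.sentry.TcAccessMgrProxy"
           debug="0"/>
</Context>
```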

If this custom realm needed to be activated for the whole container, or a virtual host, then we would need to edit the configuration of the Tomcat server.xml, and move that updated server.xml file over to  /resources/tomcat/conf/server.xml.

Easy, Expert, or Offline

Cloud Foundry build packs support three operational modes, called “Easy,” “Expert,” and “Offline.” The Easy mode is the default.  In this mode, the staging process will pull down the current release of the build pack from the repository maintained by Pivotal. This will ensure that the application is run with the “latest-and-greatest” runtime stack, and you’ll always have the latest security patches. This mode “just works,” and is what is recommended for everyone starting out.

Expert mode is similar, except that you maintain your own copy of the repository, which can be hosted inside the enterprise. This will be initialized by creating a local replica of the official Pivotal repository. Of course, this has all the benefits and responsibilities of local control, i.e. you maintain it. The main motivation for Expert mode is that since the repository is inside the enterprise, the staging process does not need to download stuff from the public internet every time an application is pushed.

The “Offline” mode is pretty much what you would think. Rather than referencing an external repository during staging and deployment, you can work offline, i.e. without making any connections to a live repository. In this mode, you create a ZIP file that contains your complete build pack, and upload that to your Cloud Foundry deployment. When you subsequently push your application(s), you’ll specify that build pack archive by name. Of course, this approach ensures consistency and repeatability. None of the runtime stack bits will ever vary, until and unless you decide to upload a new ZIP file. But you also run the risk of falling behind in terms of having the very latest JDK or Tomcat security fixes. Another potential downside of these ZIP files is bloated storage requirements. If every application team maintains their own ZIP files, all containing the same Tomcat release, there is likely to be a lot of redundancy and wasted disk.

At the end of the day, each of these methods has its pros and cons, and you’ll need to decide what makes sense for your situation. For the purposes of this post, Easy and Expert are equivalent, as both are online options; the only difference is the particular URL that is referenced. Offline mode requires the additional step of creating and uploading the archive.

Custom Build Pack, Online Option

Assuming you want to work in the “online” style, commit and push your build pack changes to your fork of the repository:

$ cd ~/projects/java-buildpack 
$ # Modify as needed. Then...
$ git add .
$ git commit -m "Add custom JSSE trust store and JEE realm provider."
$ git push

Then you can do the cf push of the application to the Cloud Controller:

$ cd ~/projects/fortressdemo2
$ mvn clean package
$ cf push fortressdemo2 -t 60 -p target/fortressdemo2.war \ 
-b https://github.com/johnpfield/java-buildpack.git

Your application will be staged and run using the online version of the custom build pack.

Custom Build Pack, Offline Option

To use an offline version of the custom build pack, you will first bundle the ZIP file locally, and then upload this blob to the Cloud Foundry deployment. Finally, you can do the cf push operation, specifying the named build pack as your runtime stack.

To do this you’ll need to have Ruby installed. I used Ruby version 2.1.2, via RVM.

$ cd ~/projects/java-buildpack
$ bundle install
$ bundle exec rake package OFFLINE=true

After the build pack is ready, you can upload it to Cloud Foundry; the trailing “1” is the position argument, which controls the order in which admin build packs are considered:

$ cd build
$ cf create-buildpack fortressdemo2-java-buildpack \
        ./java-buildpack-offline-bb567da.zip 1

And finally, you can specify that Cloud Foundry should apply that build pack when you push the application:

$ cd ~/projects/fortressdemo2
$ cf push fortressdemo2 -t 90 -p target/fortressdemo2.war \
    -b fortressdemo2-java-buildpack

That’s it! You can confirm that the application is running using your custom JEE realm and JSSE trust store by examining your configuration files, and logs:

$ cf files fortressdemo2 app/.java-buildpack/tomcat/lib

The response should include the Fortress jar, and look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...
OK

annotations-api.jar                15.6K
catalina-ant.jar                   52.2K
catalina-ha.jar                    129.8K
catalina-tribes.jar                250.8K
catalina.jar                       1.5M
...
<SNIP>
...
fortressProxyTomcat7-1.0-RC39.jar  10.6K
...

And you can also confirm that your custom certificate trust store and policy files are actually being used:

$ cf files fortressdemo2 app/.java-buildpack/open_jdk_jre/lib/security

The response will look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...
OK
US_export_policy.jar            620B
mycacerts                       1.2K
java.policy                     2.5K
java.security                   17.4K
local_policy.jar                1.0K

Finally, it is important to note that any Java build pack is intended to support a class of applications, not just a single application. So having a build pack specialized for Fortress Sentry deployments is in fact a very plausible use case. The above URL referencing my GitHub repository is real, so if you want to quickly deploy the fortressDemo2 application in your own Cloud Foundry instance, feel free to use that repository, and issue pull requests for any changes.

Some Basic Security Considerations for Cloud Foundry Services

In this post, I’m going to discuss some of the basic security considerations for using Cloud Foundry services.

In a nutshell, Cloud Foundry currently supports two different kinds of services.  First, there are Managed Services.  These are the “native” services that are part of Cloud Foundry, and implement the Cloud Foundry service broker API.  The canonical example of a managed service is the MySQL database service.  Upon request, Cloud Foundry is able to provision capacity for a MySQL database instance, provision credentials, and bind this newly created instance to an application.

In addition, Cloud Foundry also supports User-Provided Services.  These are enterprise services that do not implement the Cloud Foundry service broker API, and (probably) exist outside the perimeter of the Cloud Foundry deployment.  An example would be, say, an existing Oracle database that runs on another platform, but is accessed by the application(s) deployed into Cloud Foundry.

Finally, it is important to note that there are no restrictions on the number or types of services that an application may bind.  Cloud Foundry applications are able to use any combination of these two service types, if and as needed.

Now, having introduced the basic idea of the two different service types, I’d like to use the remainder of this post to discuss some of their security implications.  What are the basic security considerations that an architect should think about when using these Cloud Foundry service types? What are their pros and cons? What are the implications for your existing security mechanisms? Your technical control procedures?

Some Security Considerations When Using Managed Services

One advantage of using a managed service is that, in general, the credentials that are needed to access that service will not need to be pre-configured into your applications. Rather, these can be dynamically provisioned and injected into the application at runtime.

This was the case with my most recent PoC, in which I leveraged a MySQL database. Because MySQL is available as a managed service in Cloud Foundry, I was able to use the cf create-service command line interface to create the needed service instance in the appropriate organization and space, and then I just needed to do a cf bind-service. My application would be supplied with the necessary JDBC connection string and credentials at runtime. In the specific case of MySQL, the credentials created by the service broker were both unique to the instance, and strongly chosen (i.e., my application received a user id of “5GhaoxJwtCymalOI,” with an equally obtuse password, for use when accessing a schema named “cf_f8e2b861_2070_47bf_bfd0_b8c863f4d5d2.”  Nice.)

If your current deployment process for application database login details involves manually provisioning and distributing the credentials (e.g. via an email to the deployer), then moving to a managed service will likely improve your overall risk profile. You won’t need to have a valid userid and password for your production database sitting around in an email or a properties file somewhere.  The credentials are made available just in time, via the VCAP_SERVICES environment variable, and should not need to be persisted anywhere else in the application configuration.
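To make this concrete, here is a sketch of what the injected environment looks like and how a script might pull the credentials out at runtime. The JSON payload below is a hypothetical example (the service name and values are made up, not output from a real broker), and a real application should use a proper JSON parser, or Spring Cloud, rather than sed:

```shell
# Hypothetical VCAP_SERVICES payload, shaped like what a MySQL
# service broker injects (all values here are invented).
export VCAP_SERVICES='{"cleardb":[{"credentials":{
  "username":"5GhaoxJwtCymalOI","password":"s3cretXyz",
  "jdbcUrl":"jdbc:mysql://10.0.0.5:3306/cf_f8e2b861"}}]}'

# Crude extraction for illustration only; real code should parse JSON.
DB_USER=$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"username":"\([^"]*\)".*/\1/p')
DB_URL=$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"jdbcUrl":"\([^"]*\)".*/\1/p')

echo "$DB_USER"
echo "$DB_URL"
```

The key point is that nothing here is persisted in the application artifact; the environment variable exists only for the lifetime of the running instance.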

Since the credentials can be strongly chosen, and do not need to be persisted, it is unlikely that an attacker could obtain them through routine leakage, or penetrate your production service by brute-force guessing. They should contain enough entropy that anyone trying to guess the service credentials directly will wind up triggering an alarm, or at least getting noticed by your log monitoring solution.

Of course, keep in mind that old saying about “your mileage may vary.”  The specific algorithm used by your service broker may differ, and those supplied credentials may have more or less entropy than you actually expected.  Whether you use one of the existing service brokers, or build your own, be sure to check whether the entropy is sufficient for your required level of assurance.  The service broker may provision credentials using a low entropy approach, or may even use an existing account database, so don’t assume these will always be unguessable.
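As a rough illustration of the kind of check you might do: assuming a 16-character credential drawn uniformly from the 62 alphanumeric symbols (a guess about the broker's alphabet, not its documented behavior), the entropy arithmetic is length times log2 of the alphabet size:

```shell
# Back-of-envelope entropy estimate for a generated credential:
# bits = length * log2(alphabet size).
# 16 characters over 62 symbols works out to about 95 bits,
# far beyond any practical online guessing attack.
LENGTH=16
ALPHABET=62
BITS=$(awk -v n="$LENGTH" -v a="$ALPHABET" \
    'BEGIN { printf "%d", n * log(a) / log(2) }')
echo "$BITS bits"
```

If your broker produces shorter strings, or draws from a smaller alphabet, redo this arithmetic before deciding the credentials are unguessable.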

Finally, it is still important to maintain a meaningful audit trail.  For regulatory compliance reasons, you may have to periodically produce (and review) reports that list all the production database accounts that are being used, along with their access rights. In the case of Cloud Foundry managed services, all these account mappings are maintained by the Cloud Controller.  To continue to satisfy these existing reporting requirements you may find that you have to build the necessary integration(s) and collect these details using the cf command line interface or the PCF console.  Alternatively, perhaps you can work with your auditor to amend your reporting requirements, so that the control procedures you’ll follow with the cloud based applications are updated to reflect the new reality.  For example, instead of reporting on specific applications and corresponding database accounts, one could provide evidence that the application in question always uses service bindings, and therefore no hardcoded database credentials are being persisted to disk in properties files.

It is also worth noting that security operations teams looking for suspicious activity will typically seek to correlate log events across related user ids, services, IP addresses, geographies, and so on. For example, a user id that is dynamically bound via the VCAP_SERVICES environment variable should only ever be used from the corresponding DEA node, and should never be seen accessing the database from an IP address that is not part of the DEA pool.

Similarly, you may need to correlate the backend user id that accessed a service with the front end user id that initiated the original request.  Establishing that linkage may require traversing (and/or correlating) yet another level of indirection.  In addition, that linkage may also vary over time, as the service is bound, unbound and rebound.   In summary, consider your requirements for log correlation, and take advantage of any new opportunities that the cloud deployment makes possible, but be aware that some of your existing correlations may not work as well when service credentials are being dynamically assigned.

Some Security Considerations for User-Provided Services

It is possible for a Cloud Foundry application to access an external service without creating a corresponding user-provided service definition. The application can continue to use whatever configuration technique it used before, likely by “hard-coding” the necessary connection details into the application configuration. For example, the application may still have a JSON or XML properties file, or a Spring Java configuration class that captures the details of where to find an Oracle database. However, because these credentials are statically maintained in the application configuration, and will likely be associated with manual workflow processes, they may be susceptible to data leakage. It would work, but you’re not really getting all the advantages of being in the cloud.

Creating a corresponding user-provided service definition is simply a way to tell Cloud Foundry about the service access. Once this is done, the applications deployed into the cloud can look to leverage the presence of the VCAP_SERVICES environment variable. Just as in the case of a managed service, the application can use the credentials found there, in order to access the service endpoint. Thus, defining a user-provided service simply enables Cloud Foundry to inject the necessary credentials, and this means that the credentials no longer have to be carried in a properties file within the application war file. The actual service itself can remain unchanged.

Of course, the application would likely need to be upgraded to take advantage of the VCAP_SERVICES environment variable, but this can easily be done in any programming language, and in Java it is made even simpler by a connector component like Spring Cloud.

It’s also important to point out that the actual credential provisioning process is still entirely up to you. Once the service credentials are known, they are stored in the Cloud Controller database via the cf create-user-provided-service command. If you have established account provisioning control procedures that are mature and well integrated, then it might make perfect sense to continue to leverage those. The responsibility for keeping custody of the credentials shifts from the application configuration to the Cloud Controller database. That would seem to be a good thing: whenever something security-related can be factored out of your developers’ day, it probably should be.
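As a sketch, registering an existing database as a user-provided service might look like the following. The service name, credential values, and application name are all placeholders, and the cf calls are guarded so the block only attempts them where the CLI is installed:

```shell
# Hypothetical credentials for an existing, externally managed database.
CREDS='{"jdbcUrl":"jdbc:oracle:thin:@db.example.com:1521/ORCL","username":"app_user","password":"placeholder"}'

if command -v cf >/dev/null 2>&1; then
    # Store the credentials in the Cloud Controller as a user-provided
    # service, then bind it so they are injected via VCAP_SERVICES.
    cf create-user-provided-service oracle-db -p "$CREDS"
    cf bind-service fortressdemo2 oracle-db
    cf restage fortressdemo2
fi
```

After the restage, the application reads the credentials from VCAP_SERVICES exactly as it would for a managed service; the database itself is untouched.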

Since Cloud Foundry introduces the native access control concepts of the Organization and Space, a decision needs to be made about how those existing services will be administered as user-provided services. Administrators, developers, and applications can only see and bind services that they have permission for within their space, and so you’ll need to think about how to administer that user-provided service definition in the Cloud Foundry environment. How does the maintenance of that proxy or placeholder record in the Cloud Controller correlate with the administration of the real resource(s)?  Does the administrator who is responsible for the real resource also administer the user-provided service definition in the cloud?  What new audit reports and record keeping correlations will be needed?

Will the user-provided service require the application to use SSL/TLS for access?  If so, then that client application deployed to the Cloud Foundry environment may need to be pushed (deployed) with a customized build pack.  Just as we prefer to factor out the login credentials from the application configuration, we’d also prefer to factor out the certificate management functions.  (This is not hard to do, but would be out of scope for the current post, so I’ll cover that in my next post).

Conclusion

Moving your applications to Cloud Foundry will clearly help your organization optimize both infrastructure utilization, and developer productivity.  But, in addition, I would assert that the proper use of managed- and user-provided services within Cloud Foundry has the potential to make your application deployments even more secure than they would have been, if you weren’t running in the cloud.  Using both managed services and user-provided services has many potential advantages, including:

  • reducing an application’s attack surface
  • improving the confidentiality of production service credentials
  • improving the consistency of operational procedures
  • enabling improvements in security event monitoring
  • improving visibility, thereby enabling more efficient audit processes

The only major caveat here is to consider how your existing operational controls will need to evolve in view of an ever-accelerating application lifecycle.   Most organizations find that some existing control procedures will be obviated, and others will need to be amended. But the real gains come from the new and more efficient controls that are made possible by the improved visibility that comes from having consistent service bindings.

Configuring SSL/TLS for Cloud Foundry

Whenever we deploy an enterprise Java Web application we should consider turning on SSL/TLS.  In the case of Tomcat, that means going through the certificate issuance process with the CA of your choice, and then editing your /conf/server.xml file, and adding an SSL/TLS-enabled connector.  The required configuration will look something like this:

<Connector port="8443" maxThreads="42" scheme="https" secure="true"
           SSLEnabled="true" keystoreFile="/path/to/my/keystore"
           keystorePass="Don'tPanic!" clientAuth="false" sslProtocol="TLS"/>

However, as mentioned in my previous post, things are different in Cloud Foundry.

In the case of deploying your application to Pivotal Cloud Foundry, you don’t need to configure the Tomcat server.xml file directly. We will explain in detail why this is, but first it must be said that the whole point of deploying to a PaaS is that the developer does not need to deal with these things.

Instead, the approach is that the SSL/TLS connection from the user’s browser will terminate at the entry point to Cloud Foundry.  The application itself runs inside the perimeter of the Cloud Foundry deployment.  In my case, this is behind the HA Proxy server instance that serves as the front door for all the applications that are running in the cloud.   To understand this, let’s take a look at the way a typical enterprise application URL is remapped when it is deployed to the cloud.

Typical Enterprise Application URL Format

When the user accesses an application that has been deployed to a standalone Tomcat instance (or a cluster) the URL typically has the form:

https://hostname.com:8443/myapplication/page.html

Thus, the X.509 certificate that is presented to the user’s browser authenticates the hostname of the Tomcat server that is hosting the application. The application itself is identified by the context portion of the URL, the “/myapplication” part. If you host another application on the same Tomcat server, it will likely be located at, say:

https://hostname.com:8443/my-other-application/page.html

…And so on.  There are likely a number of such applications on that enterprise Tomcat instance.  (We should mention that it’s also possible to use different ports and virtual hosts, though that is less common with the enterprise applications deployed inside a typical data center).   The key point here is that the SSL/TLS server certificate actually identifies the Tomcat server instance, and the SSL/TLS session is terminated at a port on the server that hosts the application.

Application URL Format in Pivotal Cloud Foundry

When that same enterprise application has been deployed to Pivotal Cloud Foundry, the URL will have the format:

https://myapplication.my-cf.com/page.html

And my other application would have the URL:

https://my-other-application.my-cf.com/page.html

Notice how the Tomcat application context now becomes a subdomain. The user’s HTTP request for that application is routed via DNS not to the Web application server, but to the entry point of Cloud Foundry. In my deployment, that is the IP address of the Cloud Foundry HA Proxy server. You can also choose to use a hardware load balancer, or another software proxy. All the applications exist on a subnet behind that designated proxy server. So the SSL/TLS session from the user’s browser is terminated by the proxy server, and the traffic to the application server itself (through the Cloud Foundry Router, and on through to the DEA) stays inside the perimeter of the cloud. The subject name in the SSL/TLS certificate is the cloud subdomain.

Now, don’t panic.  Before we jump up and demand that the user’s SSL/TLS connection be set up end-to-end — from the browser all the way to the Tomcat server instance — we should consider that there is a precedent for this.  It is actually quite common for enterprise SSL/TLS connections to be terminated at a proxy server. This may be done to leverage a dedicated host (that contains SSL/TLS accelerator hardware), or perhaps for security monitoring purposes.  In fact, many enterprise security operations teams will also terminate any outbound SSL/TLS connections at an enterprise perimeter firewall such as a Blue Coat box.

The threat model here is that the entry point to the cloud is a high-availability, secured proxy server. Once the traffic traverses that perimeter, it is on a trusted subnet. In fact, the actual IP address and port number where the Tomcat application server is running are not visible from outside the cloud. The only way to get an HTTP request to that port is to go via the secure proxy. This pattern is a well-established best practice amongst security architecture practitioners.

In addition, we need to keep in mind that there are additional security capabilities — such as a Warden container — that are present in the Cloud Foundry deployment, that typically don’t exist on the average enterprise Tomcat instance.  (Discussing Warden containers in more detail would be out of scope for this post, but I’ll be sure to revisit this topic — along with Docker — in a future post).

So, now, let’s finally take a look at how the administrator will configure that SSL/TLS certificate in Pivotal Cloud Foundry.

Configuring SSL/TLS Certificate in Pivotal Cloud Foundry

Configuring the SSL/TLS certificate for Pivotal Cloud Foundry may be done using the Pivotal Operations Manager UI. From the Installation Dashboard, click on the Pivotal Elastic Runtime tile, then the Settings tab, and then “HA Proxy.” This page makes it easy for the administrator to supply a certificate PEM file, or to let PCF generate the certificate for you. Below is a screen shot of the page.

[Screenshot: HA Proxy certificate configuration page (pcf-opsmgr-haproxy-crt)]

If you happen to be deploying your Cloud Foundry instance using the open source code, e.g. using BOSH, then the PEM certificate file for the HA Proxy may be configured manually. Look in the /templates subdirectory of the haproxy job to find the configuration details. When the instance is deployed, the certificate and private key will ultimately find their way into the BOSH manifest file. The PEM-encoded certificate and private key are found in the manifest under “jobs”, i.e.

jobs:
- name: ha_proxy
  template: haproxy
  release: cf
  lifecycle: service
  instances: 1
  resource_pool: ha_proxy
  persistent_disk: 0
  networks:
  - name: default
    static_ips:
    - 10.x.y.z
  properties:
    networks:
      apps: default
    ha_proxy:
      timeout: 300
      ssl_pem: |
        -----BEGIN CERTIFICATE-----
        ....foo bar....
        -----END CERTIFICATE-----
        -----BEGIN RSA PRIVATE KEY-----
        ...foo bat...
        -----END RSA PRIVATE KEY-----
    router:
      servers:

In either case, whether you are using PCF or the open source code base, it’s important to recognize that this is a task that needs to be done once, by the security administrator for the cloud, and not by every developer who deploys an application or another instance of Tomcat.

Conclusion

Moving your enterprise application deployments into Cloud Foundry will mean that your developers will have the benefit of always having an SSL/TLS connection from their users’ browsers, to the “front door” of the cloud infrastructure, for free.  One of the key benefits of a PaaS is that your busy developers do not have to waste valuable time dealing with configuration details such as provisioning server certificates, and editing their Tomcat server configurations. The developers can spend their time writing code that drives revenue for the business, and then just do “cf push <my-application>”.  Dealing with the operational security of the application server infrastructure is factored out of their workday.

Finally, it’s important to note that while using Cloud Foundry makes things easy by eliminating routine configuration tasks like enabling SSL/TLS, it is still possible to customize your Tomcat server configuration, if and as needed. This would be done by creating and deploying a customized Cloud Foundry Buildpack, which contains your enterprise-specific changes.  In my next post, I’ll describe a use case scenario in which a customized build pack is necessary, and we’ll describe how this is done.

Migrating a Secure Java Web Application into Cloud Foundry

It’s been quite a while since I’ve had the time to write a new post here, but that’s not because there is nothing interesting to discuss.  On the contrary, the absence of any new posts was actually a reflection of how busy I have been over the summer.  For at least the last 4 or 5 months, I’ve been pretty much “heads down,” working on some interesting security challenges for our Pivotal Cloud Foundry customers.  The good news for all my loyal readers is that, in that time, I’ve built up quite a backlog of interesting stuff to write about.  And now that fall is in the air, I’ve finally gotten to the point where I’ll have some time to share what I’ve been up to.  I promise that it will have been worth the wait.

My Most Recent PoC Effort

I’ve recently completed an interesting project in which I’ve successfully migrated a secure enterprise Java Web application into Cloud Foundry.  This was a Proof-of-Concept effort, but it was by no means a “Hello, World!” exercise.   The application was non-trivial, and had a number of interesting security requirements.  The overall goal was to demonstrate to a customer how an existing Java Web application (one that already followed all the current best practices for securing enterprise deployments) could be successfully migrated into the Cloud Foundry environment.  And now that it’s happily running in Cloud Foundry, the application benefits from all the additional capabilities of the PaaS, while still maintaining all the same enterprise security features as before.

In order to respect the privacy of the customer, I won’t be able to discuss the actual customer application, which is proprietary.  Instead, I’ve decided to describe this effort in terms of another, equivalent, secure Java Web application — one that has the benefit of being available in open source, and has the same basic security properties as the real customer application.  The sample application we’ll be using is the FortressDemo2 application, written by my colleague Shawn McKinney.  As noted, this code is available in open source (with good license terms 🙂 ) and it is an excellent example of how to properly implement a secure Java Web application.

The Target Application Requirements

The FortressDemo2 application uses an RDBMS for storing customer data, and depends on OpenLDAP as its policy and identity store. The application uses JEE container-based security (e.g. a custom Tomcat Realm) for authentication and coarse-grained authorization. Internally, the application leverages Spring Security to implement its page-level policy enforcement points. All interprocess communications are secured via SSL/TLS. Thus, the user’s browser connection, the connections from the application to the database, and the connections from the application to the OpenLDAP server are all secured using SSL/TLS. Finally, the certificates used for these SSL/TLS connections have all been issued by the enterprise security operations team, and so have been signed by a Certificate Authority that is maintained internally to the enterprise. As noted, this application is a perfect proxy for the real thing, so if we can show how to run this application in Cloud Foundry, then we can successfully run a real customer application.

So, to summarize, the requirements we face are as follows.  Deploy a secure Java Web application to Cloud Foundry, including:

  1. Use of SSL/TLS for all inter-process communication, with locally issued certificates.
  2. Secure provisioning of SQL database credentials for access to production database.
  3. Configuration of a customized JSSE trust store.
  4. Configuration of a custom Tomcat Realm provider for JEE container-based security.

In the next few posts, I’ll be describing what you need to know to achieve these security requirements, and successfully migrate your own secure enterprise Java application into Cloud Foundry.