Get Me Out of Here!

…Or, How To Fix Your Outbound SSL/TLS Connection Problems

So, the other day I was working on a problem for a customer, and I hit up against a very common glitch with outbound SSL/TLS. I’ve stumbled onto this more than once now, so I thought it would be helpful to share the fix. If nothing else, this post will serve as a record of the procedure, so that I can quickly reproduce the fix next time.

The Context

Enterprises need to protect their perimeter, and will (almost) always use SSL termination at their “edge” routers.  This means that any SSL/TLS connections you try to make going outbound are going to be intercepted and eavesdropped upon, in order to inspect the contents of the traffic. The actual infrastructure will vary of course, but the relevant network gear will typically be a box from, say, Cisco/IronPort, an F5 BIG-IP, a Blue Coat appliance, etc.

When you try to create an outbound SSL connection to, say, https://www.anysite.com/downloads, you can expect that the network box handling your request will prematurely terminate that SSL/TLS connection, and examine the contents of the request in real time. If the request looks OK, then the router will repeat your original request to the target URL.  In effect, your SSL/TLS connection is actually being subjected to a classic man-in-the-middle (MITM) attack.  Except that in this case it’s not really an attack — it’s being done to defend the enterprise, and to comply with the applicable government regulations.

As the old saying goes, you can’t manage what you can’t see.  And so the CISO can’t credibly claim “due care” compliance with data protection and privacy regulations when any user can create an opaque SSL/TLS connection going outbound from the enterprise. Think of security scenarios such as a disgruntled employee, or an undetected strain of malware, exfiltrating private customer data over an encrypted channel.

So there are good reasons for doing this, and in most cases the users will never even see it.  However, if your SSL/TLS request was initiated from a developer VM in the lab, using an Ubuntu command line tool — rather than from an enterprise managed browser — then your request will likely fail.

Where the Rubber Meets the Road

When you are working from your favorite browser, and you try to do an HTTPS GET from your favorite download site, the browser will check its local trust store to see if it recognizes the X.509 certificate that the target site has presented.  If the certificate for the site has been signed by a CA that your browser trusts, then all is well, and you’ll smoothly connect.

If not, then you may see a security warning message, asking you if you want to accept the new certificate.  Different browsers will behave differently, but in general you’ll see that type of security exception for any new, unrecognized certificate.  If you look carefully at the certificate contents you might notice that the certificate you’re being asked to accept has a subjectName which is consistent with the site you’re trying to reach.  However, the certificate issuerName is going to be the local enterprise security operations team, rather than a widely recognized CA.  So, if you’re on site working for a client named Acme Widgets, you’ll see something that effectively says: “Security warning:  Do you want to accept the certificate for this site, issued by Acme Widgets, Inc.?”  “Hmmmm…”, you think, “I didn’t know Amazon got their PKI certificates from Acme?!”

They didn’t.  What’s happening is that your outbound SSL/TLS request has been intercepted, and a certificate was issued (often, on-the-fly) stating that the enterprise vouches for the target site.  Once you accept that certificate, your SSL session to the edge router is established.  And a new SSL session is then created, as a next hop, to the target site.  The trust for that second hop is established using the actual site certificate for the target site.  The enterprise’s edge router is acting as a man-in-the-middle.

Usually, the browser’s trust store has already been updated as part of installing the security infrastructure, and so the local enterprise CA is already considered a trusted authority.  The typical user won’t even see the warning message, and everything just works.  The user gets to do their uploads or downloads, and the enterprise gets to inspect exactly what content is traversing their infrastructure.

Beyond the Browser

So what’s the essential security architecture problem here?  Too many trust stores!

The annoying glitch occurs when you are initiating that SSL/TLS request from a program that does not use the browser’s trust store.  The first symptom is likely to be a connection timeout, excessive connection retries, or a stack trace complaining about network connectivity.  Of course when you try to debug this, and you connect to the target site using the browser, it just works.  Grrrr!  After investigating further you’ll find that the only time you can connect to the required site is from a browser, and never from a command line utility.  Again, the reason it just works from the browser is that the MITM CA certificate is already there, in the browser trust store.  But if you use a command line utility like curl or wget it will fail, because the MITM CA certificate is not in the corresponding trust store.
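Incidentally, you can see this strict-verification behavior directly from Python, whose ssl module links against OpenSSL and therefore shares the same trust store problem. This is just an illustrative sketch; the CA file path in the comment is an example location, not something your system necessarily has:

```python
import ssl

# A default client-side context behaves like curl or wget: it requires a
# valid chain up to a CA in its trust store, and fails hard otherwise.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # strict verification is on
print(ctx.check_hostname)                    # and so is hostname checking

# The fix is the moral equivalent of curl's --cacert flag: point the
# context at the enterprise MITM CA bundle so the intercepted chain
# verifies. (Example path; use wherever you saved the CA certificate.)
# ctx.load_verify_locations(
#     cafile="/usr/local/share/ca-certificates/mitm-ca-cert.pem")
```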

OpenSSL to the Rescue

You can use openssl to diagnose and fix this issue.  Do the following from a shell prompt:

$ openssl s_client -showcerts -connect www.anysite.com:443

The result should be a fairly verbose (but fascinating) view of the SSL handshake in progress.  You’ll see the client’s connect request sent and, consistent with the protocol spec, the server presenting its certificate chain.  Adding -debug will show even more.  Looking carefully at that certificate chain will reveal that it is probably not the one you expect, and the last line of the output will appear as follows:

Verify return code: 20 (unable to get local issuer certificate)

Translated, this means that the CA certificate for the issuer (i.e. the enterprise MITM Certificate Authority) is not present in the default trust store.  Assuming the software you are running leverages the OpenSSL certificate store, then by updating that common store you can fix any number of utilities or applications that rely upon it.  If an application or utility uses its own trust store, then you’ll have to update that specific trust store as well. (For example, both Ruby and Java use their own trust stores.)
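If you’re curious where that common OpenSSL store actually lives on your machine, Python (which links against OpenSSL) can report the compiled-in defaults, along with the environment variables that override them per process. A quick sketch:

```python
import ssl

# Report where the linked OpenSSL library looks for trusted CAs by default.
paths = ssl.get_default_verify_paths()
print("CA file:", paths.openssl_cafile)    # e.g. /usr/lib/ssl/cert.pem
print("CA dir: ", paths.openssl_capath)    # e.g. /usr/lib/ssl/certs
# These environment variables override the defaults for a single process:
print("overrides:", paths.openssl_cafile_env, paths.openssl_capath_env)
```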

You can use your browser to get a copy of the MITM certificate(s), or just cut and paste from the verbose output obtained above.  The text that appears between the markers

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

is the PEM (Privacy Enhanced Mail) encoding of the certificate (Base64).

If you used a browser, and the certificate was not already known, you’ll want to accept the certificate and be sure to check that box that says “permanently store this exception.” This will place a copy of the certificate into your browser’s trust store. Either way, you can then export it from the browser trust store, do an scp to copy it to the target machine, and import it into the required trust store. Depending upon the browser you are using, you may be able to save the certificate directly in the PEM format, or you might have to first save it as a binary file (DER format, with a file extension of .der, .cer, or .crt). Once you have a copy of the certificate in one format, you can always use OpenSSL to convert the certificate to any other format, for example:

$ openssl x509 -inform der -in mitm-ca-cert.der -outform pem -out mitm-ca-cert.pem
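There’s nothing magical about that conversion: PEM is simply the DER bytes, Base64-encoded between the BEGIN/END markers, 64 characters per line. A minimal sketch of the round trip, using placeholder bytes rather than a real certificate:

```python
import base64
import textwrap

def der_to_pem(der_bytes: bytes) -> str:
    """Wrap raw DER bytes in the Base64 PEM envelope, 64 chars per line."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

def pem_to_der(pem_text: str) -> bytes:
    """Strip the PEM markers and Base64-decode back to DER."""
    lines = [ln for ln in pem_text.splitlines() if not ln.startswith("-----")]
    return base64.b64decode("".join(lines))

# Round-trip demo with placeholder bytes (a real run would read a .der file).
blob = b"not a real certificate, just demo bytes"
assert pem_to_der(der_to_pem(blob)) == blob
```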

Where to Find the Trust Store

It’s a security best practice to store the CA certificates trusted only by the local site separately from the CA certificates that are distributed with the OS.  On any recent Ubuntu/Debian installation, the certificate store for the CA certs that come with the distro will be located in /etc/ssl/certs. The CA certificates to trust that are specific to the local site belong in /usr/local/share/ca-certificates.

As an aside, it’s also worth noting that on Ubuntu, the Firefox browser will store globally trusted certificates in /usr/share/ca-certificates/mozilla, while the user-specific CA certificates (that is, the user’s personal preferences) are stored in the user’s profile, under ~/.mozilla/firefox.  To export certificates from Firefox, you can use the menu choice Edit / Preferences, then choose Advanced, and then the Certificates tab. In the case of Mac OS X, the certificates are located in /Users/<username>/Library/Application Support/Firefox/Profiles. Of course, if you’re on a Mac, you can just use the Keychain Access application to view and manage certificates, including an export in PEM format.

As regular readers of this blog already well know, the Java trust store is usually located in $JAVA_HOME/jre/lib/security/cacerts. You can use the Java keytool to import the MITM certificates you need from the command line.  Using OpenJDK 7, that looks something like the following:

$JAVA_HOME/bin/keytool -importcert -alias mitm-ca-cert -file mitm-ca-cert.cer -keystore cacerts

Note that the Java keytool has options for different keystore types (typically “jks”, or “pkcs12”) and you can import either the binary (.cer) or base64 (.pem) encoded certificates, as needed. For this post I’d like to stay focused on the native platform keystore, so I won’t say any more about Java here.  Instead we’ll discuss Ruby and Java in more detail in a subsequent post.

Updating the Trust Store

The /usr/local/share/ca-certificates directory should contain one file for each additional certificate that you trust, in PEM format. Just drop the needed mitm-ca-cert.pem file(s) into this directory, and make sure that they are owned by root, with a file permission mask of 644. Then run the c_rehash command to create the required hash entries.

$ sudo mv ~/mitm-ca-cert.pem /usr/local/share/ca-certificates/mitm-ca-cert.pem
$ sudo chown root:root /usr/local/share/ca-certificates/mitm-ca-cert.pem
$ sudo chmod 644 /usr/local/share/ca-certificates/mitm-ca-cert.pem
$ cd /usr/local/share/ca-certificates
$ sudo c_rehash .

That last command deserves some explanation.  The c_rehash command comes from OpenSSL.  For each PEM file found in the directory, it computes a short hash of the certificate subjectName (a truncated SHA-1 in recent OpenSSL releases), and creates a new filesystem entry with that hash as the filename, as a soft link to the PEM file containing that subjectName (within the same directory).  Basically, this is done for performance reasons: there can be many certificates in this directory, and the hash lets OpenSSL locate the one it needs quickly.  These hashed filenames follow the format HHHHHHHH.D, where “H” is a hexadecimal digit and “D” is an incrementing integer (the “D” is used just in case there are hash collisions).
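The collision-handling suffix is easy to sketch. The function below just picks the next free HHHHHHHH.D name for a given subject hash (the hash value here is an example; in real life it comes from `openssl x509 -hash -noout`):

```python
def hashed_link_name(subject_hash: str, existing: set) -> str:
    """Pick the next free 'HHHHHHHH.D' name for a subject-name hash.

    The integer suffix only climbs past 0 when two different subjects
    happen to hash to the same eight hex digits.
    """
    d = 0
    while f"{subject_hash}.{d}" in existing:
        d += 1
    return f"{subject_hash}.{d}"

links = set()
links.add(hashed_link_name("ad21de5b", links))  # first cert -> ad21de5b.0
links.add(hashed_link_name("ad21de5b", links))  # collision  -> ad21de5b.1
print(sorted(links))
```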

Finally, not all applications and utilities are aware of both /etc/ssl/certs and /usr/local/share/ca-certificates. That is, some programs will only look in /etc/ssl/certs. To make sure these programs can find the newly added MITM CA certificates, we can create some additional soft links on the file system:

$ cd /etc/ssl/certs
$ sudo ln -s /usr/local/share/ca-certificates/mitm-ca-cert.pem
$ sudo c_rehash .

This ensures that there are soft link entries in the /etc/ssl/certs directory that point to each of the new certificates dropped into /usr/local/share/ca-certificates, for those that look only in /etc/ssl/certs.

You can easily show the relationships between these files using a simple ls -la command.

$ ls -la /etc/ssl/certs

lrwxrwxrwx   1   root  root      16    Mar 13 16:20   ad21de5b.0 -> mitm-ca-cert.pem
lrwxrwxrwx   1   root  root      49    Mar 13 16:20   mitm-ca-cert.pem -> /usr/local/share/ca-certificates/mitm-ca-cert.pem
Create as many PEM files and soft links as you need in order to establish the full chain of trust for any egress points that you will be using. That is, there may be an enterprise root CA, followed by a number of geographic or divisional CAs, who in turn issue end-entity certificates to their SSL termination router(s). The SSL/TLS protocol handshake will attempt to establish the chain of trust from the certificate that is presented by the counterparty, to a trusted certificate found in the appropriate trust store.  This attestation may require one, two, or possibly more hops. YMMV.  You can always test whether you have the necessary CA certificate(s) in the right places, by explicitly naming a CA PEM file (or directory) on the OpenSSL command line:

$ # A test to check if we have the CA cert needed to connect outbound
$ openssl s_client -connect www.anysite.com:443 -CAfile mitm-ca-cert.pem

$ # Now try it with the whole directory, instead of a specific file
$ openssl s_client -connect www.anysite.com:443 -CApath /usr/local/share/ca-certificates

$ # The response should end with the following:

Verify return code: 0 (ok)

If tests like these succeed then you have successfully established a complete chain of trust, and all of your future outbound SSL/TLS connections should complete without any issues.


Of course, it’s not our purpose here to defeat the SSL/TLS Termination, or to work around it.  Rather, we’re working hard to comply with it, so that we can succeed in making that outbound SSL/TLS connection.  We just want our Linux command line operations to succeed, so that we can go back to getting our real work done!

Customizing the Cloud Foundry Java Buildpack

In this post I’ll describe the process of customizing the Cloud Foundry Java build pack for the fortressDemo2 sample application.

In general, the requirement to do build pack customization arises any time you need to make changes to an application’s runtime stack. In terms of some of the specific security issues that we care about around here, you will need to customize the build pack any time you have a requirement to configure the set of CA certificates that are held in the JSSE trust store of the JDK.  Many enterprises choose to operate their own internal CA, and most have policies about which third-party certificate authorities will be trusted, so this is a very common requirement. Similarly, you’ll need to customize the Java build pack if you want to implement JEE container-based security via a custom Tomcat Realm. Fortunately, the Cloud Foundry build pack engineers thought about these issues, and so the procedure is fairly straightforward.  We’ll show the specific steps that you’ll need to do, and we’ll touch on some of the security considerations of maintaining your build packs.

The basic procedure that we’ll need to perform goes as follows:

  • Fork the existing Java build pack from the GitHub repository.
  • Update the  /resources/open_jdk_jre/lib/security directory with your custom cacerts JSSE trust store file.
  • Add the jar file that implements your custom JEE realm provider to the /resources/tomcat/lib directory.
  • Do a git commit, and then push these changes back up to your repository.
    • (and/or re-bundle an offline version of the build pack archive)
  • Push the application up to Cloud Foundry, and specify your customized build pack as an option on the push command.

In the following paragraphs, we’ll go through each of these steps in a bit more detail.

Fork the Repo

This part is easy.  Just fork the existing Java build pack repository at GitHub and then, using your favorite Git client, clone your copy of the repository onto your local machine. Keeping your customizations in a public repository enables you to share your good work with others who need those changes, and makes it easy to merge any upstream changes in the future. Also, depending upon how much work you need to do, consider creating a Git branch for your changes. You’ll probably want to isolate the changes you make to the build pack, just as you would do with any other development effort.

Log into GitHub, visit https://github.com/cloudfoundry/java-buildpack, and press the “Fork” button. After that, use your favorite Git client to clone your copy of the repository.  In my case, that looked like this:

 $ git clone https://github.com/<your-username>/java-buildpack.git

Then, (depending upon your working style), create a Git branch, and we can start making some changes.

Update the JDK Trust Store

The Cloud Foundry build pack engineers designed a simple way for us to modify the JDK security configuration details. This enables you to adjust policy details such as the trust store, the java.policy file, or the java.security file.

There is a /resources subdirectory just below the Java build pack project root.  Below that, there are subdirectories for the Oracle JDK, the OpenJDK, and Tomcat. We’re going to use the OpenJDK for our deployment, so we need to copy our trust store file into the /resources/open_jdk_jre/lib/security subdirectory. This file is traditionally called cacerts, or more recently, jssecacerts. Assuming you are moving this over from a locally tested JDK installation, this would look something like:

$ cp $JAVA_HOME/jre/lib/security/cacerts  ~/projects/java-buildpack/resources/open_jdk_jre/lib/security/mycacerts

Of course, before doing this you should probably use the JDK keytool command along with SHA checksums to confirm that this trust store file actually contains only the certificates you want to trust. Once that’s been done, just copy the trust store over to the designated place. Similarly, you can also customize the contents of java.policy or java.security as needed, and copy those over.
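As an illustrative alternative to keytool, here is one way to compute a SHA-256 fingerprint for every certificate in a PEM bundle using only the Python standard library, so you can compare against known-good checksums. The demo input uses placeholder bytes, not a real certificate:

```python
import base64
import hashlib
import re

CERT_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----", re.S)

def pem_fingerprints(bundle_text: str) -> list:
    """SHA-256 fingerprint (colon-separated hex pairs) of each certificate
    in a PEM bundle, comparable to `openssl x509 -fingerprint -sha256`."""
    prints = []
    for body in CERT_RE.findall(bundle_text):
        der = base64.b64decode("".join(body.split()))
        digest = hashlib.sha256(der).hexdigest().upper()
        prints.append(":".join(digest[i:i + 2] for i in range(0, 64, 2)))
    return prints

# Demo bundle with placeholder bytes standing in for real DER content.
fake = ("-----BEGIN CERTIFICATE-----\n"
        + base64.b64encode(b"demo-cert-bytes").decode()
        + "\n-----END CERTIFICATE-----\n")
print(pem_fingerprints(fake))
```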

Adding the custom Realm Provider to Tomcat

Adding our custom JEE realm provider means putting the appropriate implementation jar onto the Tomcat container’s class path. Our preferred provider is Fortress Sentry. Assuming this is being migrated from a standalone Tomcat installation using a recent release of Fortress Sentry, this would look something like:

$ cp $CATALINA_HOME/lib/fortressProxyTomcat7-1.0-RC39.jar ~/projects/java-buildpack/resources/tomcat/lib/fortressProxyTomcat7-1.0-RC39.jar

As described in the Tomcat docs, actually enlisting the custom realm can be done at the level of the individual applications, or for a virtual host, or for all of the applications hosted on that container. In my recent PoC I was doing this for the specific application, which means there was no other configuration needed as part of the java-buildpack. The application-specific scope of the custom realm means we only needed to add that configuration to the META-INF/context.xml file, within the application war file.
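For reference, an application-scoped realm is enlisted by adding a Realm element to the META-INF/context.xml inside the war. The className below is a deliberate placeholder; substitute the class that your realm provider (Fortress Sentry, in our case) documents:

```xml
<!-- META-INF/context.xml: application-scoped custom realm.
     The className is a placeholder, not the actual Fortress class name. -->
<Context>
    <Realm className="com.example.security.MyCustomRealm" />
</Context>
```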

If this custom realm needed to be activated for the whole container, or a virtual host, then we would need to edit the configuration of the Tomcat server.xml, and move that updated server.xml file over to  /resources/tomcat/conf/server.xml.

Easy, Expert, or Offline

Cloud Foundry build packs support three operational modes, called “Easy,” “Expert,” and “Offline.” The Easy mode is the default.  In this mode, the staging process will pull down the current release of the build pack from the repository maintained by Pivotal. This will ensure that the application is run with the “latest-and-greatest” runtime stack, and you’ll always have the latest security patches. This mode “just works,” and is what is recommended for everyone starting out.

Expert mode is similar, except that you maintain your own copy of the repository, which can be hosted inside the enterprise. This will be initialized by creating a local replica of the official Pivotal repository. Of course, this has all the benefits and responsibilities of local control, i.e. you maintain it. The main motivation for Expert mode is that since the repository is inside the enterprise, the staging process does not need to download stuff from the public internet every time an application is pushed.

The “Offline” mode is pretty much what you would think. Rather than referencing an external repository during staging and deployment, you can work offline, i.e. without making any connections to a live repository. In this mode, you create a ZIP file that contains your complete build pack, and upload that to your Cloud Foundry deployment. When you subsequently push your application(s), you’ll specify that build pack archive by name. Of course, this approach ensures consistency and repeatability. None of the runtime stack bits will ever vary, until and unless you decide to upload a new ZIP file. But you also run the risk of falling behind in terms of having the very latest JDK or Tomcat security fixes. Another potential downside of these ZIP files is bloated storage requirements. If every application team maintains their own ZIP files — all containing the same Tomcat release — there is likely to be a lot of process redundancy, and wasted disk.

At the end of the day, each of the methods has its pros and cons, and you’ll need to decide what makes sense for your situation. For the purposes of this post, Easy and Expert are equivalent, as they are both online options, and it’s just a matter of the particular URL that is referenced. Offline mode requires the additional step of creating and uploading the archive.

Custom Build Pack, Online Option

Assuming you want to work in the “online” style, you should commit and push your build pack changes to your fork of the repository, e.g.:

$ cd ~/projects/java-buildpack 
$ # Modify as needed. Then...
$ git add .
$ git commit -m "Add custom JSSE trust store and JEE realm provider."
$ git push

Then you can do the cf push of the application to the Cloud Controller:

$ cd ~/projects/fortressdemo2
$ mvn clean package
$ cf push fortressdemo2 -t 60 -p target/fortressdemo2.war \
    -b https://github.com/<your-username>/java-buildpack.git
Your application will be staged and run using the online version of the custom build pack.

Custom Build Pack, Offline Option

To use an offline version of the custom build pack, you will first bundle the ZIP file locally, and then upload this blob to the Cloud Foundry deployment. Finally, you can do the cf push operation, specifying the named build pack as your runtime stack.

To do this you’ll need to have Ruby installed. I used Ruby version 2.1.2, via RVM.

$ cd ~/projects/java-buildpack
$ bundle install
$ bundle exec rake package OFFLINE=true

After the build pack is ready, you can upload it to Cloud Foundry:

$ cd build
$ cf create-buildpack fortressdemo2-java-buildpack \
        ./java-buildpack-offline-<version>.zip 1

And finally, you can specify that Cloud Foundry should apply that build pack when you push the application:

$ cd ~/projects/fortressdemo2
$ cf push fortressdemo2 -t 90 -p target/fortressdemo2.war \
    -b fortressdemo2-java-buildpack

That’s it! You can confirm that the application is running using your custom JEE realm and JSSE trust store by examining your configuration files, and logs:

$ cf files fortressdemo2 app/.java-buildpack/tomcat/lib

The response should include the Fortress jar, and look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...

annotations-api.jar                15.6K
catalina-ant.jar                   52.2K
catalina-ha.jar                    129.8K
catalina-tribes.jar                250.8K
catalina.jar                       1.5M
fortressProxyTomcat7-1.0-RC38.jar  10.6K

And you can also confirm that your custom certificate trust store and policy files are actually being used:

$ cf files fortressdemo2 app/.java-buildpack/open_jdk_jre/lib/security

The response will look something like this:

Getting files for app fortressdemo2 in org ps-emc / space dev as admin...
US_export_policy.jar            620B
mycacerts                       1.2K
java.policy                     2.5K
java.security                   17.4K
local_policy.jar                1.0K

Finally, it is important to note that the intent for any Java build pack is that it be designed to support a class of applications, and not just a single application. So having a build pack specialized for Fortress Sentry deployments is in fact a very plausible use case scenario. The above URL referencing my GitHub repository is real, so if you want to quickly deploy the fortressDemo2 application in your own Cloud Foundry instance, feel free to use that repository, and issue pull requests for any changes.

Some Basic Security Considerations for Cloud Foundry Services

In this post, I’m going to discuss some of the basic security considerations for using Cloud Foundry services.

In a nutshell, Cloud Foundry currently supports two different kinds of services.  First, there are Managed Services.  These are the “native” services that are part of Cloud Foundry, and implement the Cloud Foundry service broker API.  The canonical example of a managed service is the MySQL database service.  Upon request, Cloud Foundry is able to provision capacity for a MySQL database instance, provision credentials, and bind this newly created instance to an application.

In addition, Cloud Foundry also supports User-Provided Services.  These are enterprise services that do not implement the Cloud Foundry service broker API, and (probably) exist outside the perimeter of the Cloud Foundry deployment.  An example would be, say, an existing Oracle database that runs on another platform, but is accessed by the application(s) deployed into Cloud Foundry.

Finally, it is important to note that there are no restrictions on the number or types of services to which an application may bind.  Cloud Foundry applications are able to use any combination of these two service types, if and as needed.

Now, having introduced the basic idea of the two different service types, I’d like to use the remainder of this post to discuss some of their security implications.  What are the basic security considerations that an architect should think about when using these Cloud Foundry service types? What are their pros and cons? What are the implications for your existing security mechanisms? Your technical control procedures?

Some Security Considerations When Using Managed Services

One advantage of using a managed service is that, in general, the credentials that are needed to access that service will not need to be pre-configured into your applications. Rather, these can be dynamically provisioned and injected into the application at runtime.

This was the case with my most recent PoC, in which I leveraged a MySQL database. Because MySQL is available as a managed service in Cloud Foundry, I was able to use the cf create-service command line interface to create the needed service instance in the appropriate organization and space, and then I just needed to do a cf bind-service. My application would be supplied with the necessary JDBC connection string and credentials at runtime. In the specific case of MySQL, the credentials created by the service broker were both unique to the instance, and strongly chosen (i.e., my application received a user id of “5GhaoxJwtCymalOI,” with an equally obtuse password, for use when accessing a schema named “cf_f8e2b861_2070_47bf_bfd0_b8c863f4d5d2.”  Nice.)
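To make that concrete, here is a sketch of consuming VCAP_SERVICES. The overall JSON shape (a service label mapping to a list of bound instances, each with a credentials block) follows the Cloud Foundry pattern, but the exact service and field names below are illustrative and depend on your broker:

```python
import json
import os

# Simulate what Cloud Foundry injects into the application environment.
# Field names are illustrative; a real broker defines its own schema.
os.environ["VCAP_SERVICES"] = json.dumps({
    "p-mysql": [{
        "name": "fortressdemo2-db",
        "credentials": {
            "username": "5GhaoxJwtCymalOI",
            "password": "example-password",
            "uri": "mysql://example-host:3306/cf_f8e2b861",
        },
    }]
})

services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["p-mysql"][0]["credentials"]
print(creds["username"])  # broker-generated account, never persisted to disk
```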

If your current deployment process for application database login details involves manually provisioning and distributing the credentials (e.g. via an email to the deployer), then moving to a managed service will likely improve your overall risk profile. You won’t need to have a valid userid and password for your production database sitting around in an email or a properties file somewhere.  The credentials are made available just in time, via the VCAP_SERVICES environment variable, and should not need to be persisted anywhere else in the application configuration.

Since the credentials can be strongly chosen, and do not need to be persisted, it is unlikely that an attacker could obtain them through routine leakage, or directly penetrate your production service by brute-force guessing.  They contain enough entropy to ensure that anyone trying to guess the service credentials directly will likely wind up triggering an alarm, or at least getting noticed in your log monitoring solution.

Of course, keep in mind that old saying about “your mileage may vary.”  The specific algorithm used by your service broker may differ, and those supplied credentials may have more or less entropy than you actually expected.  Whether you use one of the existing service brokers, or build your own, be sure to check whether the entropy is sufficient for your required level of assurance.  The service broker may provision credentials using a low entropy approach, or may even use an existing account database, so don’t assume these will always be unguessable.
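A quick back-of-the-envelope check makes the point: a uniformly random 16-character identifier over the 62-symbol alphanumeric alphabet (like the sample MySQL user id above) carries roughly 95 bits of entropy, far beyond online guessing. Your broker may do better or worse, which is exactly why it is worth checking:

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Upper bound on the entropy of a uniformly random string."""
    return length * math.log2(alphabet_size)

# 16 characters drawn from [A-Za-z0-9]:
print(round(entropy_bits(16, 62), 1))  # about 95.3 bits
# Compare: an 8-character lowercase password is a much softer target.
print(round(entropy_bits(8, 26), 1))   # about 37.6 bits
```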

Finally, it is still important to maintain a meaningful audit trail.  For regulatory compliance reasons, you may have to periodically produce (and review) reports that list all the production database accounts that are being used, along with their access rights. In the case of Cloud Foundry managed services, all these account mappings are maintained by the Cloud Controller.  To continue to satisfy these existing reporting requirements you may find that you have to build the necessary integration(s) and collect these details using the cf command line interface or the PCF console.  Alternatively, perhaps you can work with your auditor to amend your reporting requirements, so that the control procedures you’ll follow with the cloud based applications are updated to reflect the new reality.  For example, instead of reporting on specific applications and corresponding database accounts, one could provide evidence that the application in question always uses service bindings, and therefore no hardcoded database credentials are being persisted to disk in properties files.

It is also worth noting that security operations teams looking for suspicious activity will typically seek to correlate log events across related user ids, services, IP addresses, geographies, and so on.  One can be sure that a user id that is dynamically bound via the VCAP_SERVICES environment variable should only ever be used from the corresponding DEA node, and should never be seen accessing the database from an IP address that is not part of the DEA pool.

Similarly, you may need to correlate the backend user id that accessed a service with the front end user id that initiated the original request.  Establishing that linkage may require traversing (and/or correlating) yet another level of indirection.  In addition, that linkage may also vary over time, as the service is bound, unbound and rebound.   In summary, consider your requirements for log correlation, and take advantage of any new opportunities that the cloud deployment makes possible, but be aware that some of your existing correlations may not work as well when service credentials are being dynamically assigned.

Some Security Considerations for User-Provided Services

It is possible for a Cloud Foundry application to access an external service without creating a corresponding user-provided service definition. The application can continue to use whatever configuration technique it used before, likely by “hard-coding” the necessary connection details into the application configuration. For example, the application may still have a JSON or XML properties file, or a Spring Java configuration class that captures the details of where to find an Oracle database. However, because these credentials are statically maintained in the application configuration, and will likely be associated with manual workflow processes, they may be susceptible to data leakage. It would work, but you’re not really getting all the advantages of being in the cloud.

Creating a corresponding user-provided service definition is simply a way to tell Cloud Foundry about the service access. Once this is done, the applications deployed into the cloud can look to leverage the presence of the VCAP_SERVICES environment variable. Just as in the case of a managed service, the application can use the credentials found there, in order to access the service endpoint. Thus, defining a user-provided service simply enables Cloud Foundry to inject the necessary credentials, and this means that the credentials no longer have to be carried in a properties file within the application war file. The actual service itself can remain unchanged.

Of course, the application would likely need to be updated to read the VCAP_SERVICES environment variable, but this is easily done in any programming language, and in Java it can be made even simpler by using a connector component like Spring Cloud.

It’s also important to point out that the actual credential provisioning process is still entirely up to you. Once the service credentials are known, they are stored in the Cloud Controller database via the command cf create-user-provided-service. If you have established credential provisioning procedures that are mature and well integrated, then it makes perfect sense to continue to leverage those. The responsibility for keeping custody of the credentials shifts from the application configuration to the Cloud Controller database, which would seem to be a good thing. It is safe to say that whenever something security-related can be factored out of your developers’ day, we should probably do it.
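To make this concrete, here is a small sketch. The service name (“my-oracle-db”) and the credentials below are entirely illustrative, not from any real deployment. An operator would register the service with something like cf create-user-provided-service my-oracle-db -p '{"uri":"...","username":"...","password":"..."}' and bind it to the application with cf bind-service; the sketch simulates the VCAP_SERVICES variable that Cloud Foundry would then inject, and parses it the way a bound application could:

```shell
# Simulate the VCAP_SERVICES JSON that Cloud Foundry injects after binding a
# user-provided service. The service name and credentials are illustrative.
export VCAP_SERVICES='{"user-provided":[{"name":"my-oracle-db","credentials":{"uri":"oracle://db.internal:1521/XE","username":"appuser","password":"s3cret"}}]}'

# Extract one credential the way a bound application could. Any language with
# a JSON parser works; python3 is used here only for brevity.
DB_URI="$(python3 -c '
import json, os
svc = json.loads(os.environ["VCAP_SERVICES"])["user-provided"][0]
print(svc["credentials"]["uri"])
')"
echo "$DB_URI"
```

Note that nothing in the application refers to a hard-coded endpoint any more; rebinding the application to a different service instance changes the injected JSON, not the code.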

Since Cloud Foundry introduces the native access control concepts of the Organization and Space, a decision needs to be made about how those existing services will be administered as user-provided services. Administrators, developers, and applications can only see and bind services that they have permission for within their space, and so you’ll need to think about how to administer that user-provided service definition in the Cloud Foundry environment. How does the maintenance of that proxy or placeholder record in the Cloud Controller correlate with the administration of the real resource(s)?  Does the administrator who is responsible for the real resource also administer the user-provided service definition in the cloud?  What new audit reports and record keeping correlations will be needed?

Will the user-provided service require the application to use SSL/TLS for access?  If so, then that client application deployed to the Cloud Foundry environment may need to be pushed (deployed) with a customized build pack.  Just as we prefer to factor out the login credentials from the application configuration, we’d also prefer to factor out the certificate management functions.  (This is not hard to do, but would be out of scope for the current post, so I’ll cover that in my next post).


Moving your applications to Cloud Foundry will clearly help your organization optimize both infrastructure utilization and developer productivity.  But, in addition, I would assert that the proper use of managed and user-provided services within Cloud Foundry has the potential to make your application deployments even more secure than they would have been if you weren’t running in the cloud.  Using both managed services and user-provided services has many potential advantages, including:

  • reducing an application’s attack surface
  • improving the confidentiality of production service credentials
  • improving the consistency of operational procedures
  • enabling improvements in security event monitoring
  • improving visibility, thereby enabling more efficient audit processes

The only major caveat here is to consider how your existing operational controls will need to evolve in view of an ever-accelerating application lifecycle.   Most organizations find that some existing control procedures will be obviated, and others will need to be amended. But the real gains come from the new and more efficient controls that are made possible by the improved visibility that comes from having consistent service bindings.

Configuring SSL/TLS for Cloud Foundry

Whenever we deploy an enterprise Java Web application we should consider turning on SSL/TLS.  In the case of Tomcat, that means going through the certificate issuance process with the CA of your choice, and then editing your /conf/server.xml file, and adding an SSL/TLS-enabled connector.  The required configuration will look something like this:

<Connector port="8443" maxThreads="42" scheme="https" secure="true"
    SSLEnabled="true" keystoreFile="/path/to/my/keystore"
    keystorePass="Don'tPanic!" clientAuth="false" sslProtocol="TLS"/>

However, as mentioned in my previous post, things are different in Cloud Foundry.

When deploying your application to Pivotal Cloud Foundry, you don’t need to configure the Tomcat server.xml file directly. We’ll explain why in detail, but first it’s worth saying that the whole point of deploying to a PaaS is that the developer does not need to deal with these things.

Instead, the approach is that the SSL/TLS connection from the user’s browser will terminate at the entry point to Cloud Foundry.  The application itself runs inside the perimeter of the Cloud Foundry deployment.  In my case, this is behind the HA Proxy server instance that serves as the front door for all the applications that are running in the cloud.   To understand this, let’s take a look at the way a typical enterprise application URL is remapped when it is deployed to the cloud.

Typical Enterprise Application URL Format

When the user accesses an application that has been deployed to a standalone Tomcat instance (or a cluster) the URL typically has the form:

https://tomcat01.mycompany.com/myapplication

Thus, the X.509 certificate that is presented to the user’s browser authenticates the hostname of the Tomcat server that is hosting the application.  The application itself is identified by the context portion of the URL, the “/myapplication” part.  If you host another application on the same Tomcat server, it will likely be located at, say:

https://tomcat01.mycompany.com/myotherapplication

…And so on.  There are likely a number of such applications on that enterprise Tomcat instance.  (We should mention that it’s also possible to use different ports and virtual hosts, though that is less common with the enterprise applications deployed inside a typical data center).   The key point here is that the SSL/TLS server certificate actually identifies the Tomcat server instance, and the SSL/TLS session is terminated at a port on the server that hosts the application.

Application URL Format in Pivotal Cloud Foundry

When that same enterprise application has been deployed to Pivotal Cloud Foundry, the URL will have the format:

https://myapplication.mycloud.mycompany.com

And my other application would have the URL:

https://myotherapplication.mycloud.mycompany.com

Notice how the Tomcat application context now becomes a subdomain.  The user’s HTTP request for that application is routed (via the client’s DNS) — not to the Web application server — but to the entry point of Cloud Foundry.  In my deployment, that is the IP address of the Cloud Foundry HA Proxy server.  You can also choose to use a hardware load balancer, or another software proxy.  All the applications exist on a subnet behind that designated proxy server.  So the SSL/TLS session from the user’s browser is terminated by the proxy server, and the traffic to the application server itself (through the Cloud Foundry Router, and on through to the DEA) happens inside the perimeter of the cloud.  The subject name in the SSL/TLS certificate is the cloud subdomain.

Now, don’t panic.  Before we jump up and demand that the user’s SSL/TLS connection be set up end-to-end — from the browser all the way to the Tomcat server instance — we should consider that there is a precedent for this.  It is actually quite common for enterprise SSL/TLS connections to be terminated at a proxy server. This may be done to leverage a dedicated host (that contains SSL/TLS accelerator hardware), or perhaps for security monitoring purposes.  In fact, many enterprise security operations teams will also terminate any outbound SSL/TLS connections at an enterprise perimeter firewall such as a Blue Coat box.

The threat model here is that the entry point to the cloud is a high-availability, secured proxy server.  Once the traffic traverses that perimeter, it is on a trusted subnet.  In fact, the actual IP address and port number where the Tomcat application server is running are not visible from outside of the cloud. The only way to get an HTTP request to that port is to go via the secure proxy. This pattern is a well-established best practice amongst security architecture practitioners.

In addition, we need to keep in mind that there are additional security capabilities — such as a Warden container — that are present in the Cloud Foundry deployment, that typically don’t exist on the average enterprise Tomcat instance.  (Discussing Warden containers in more detail would be out of scope for this post, but I’ll be sure to revisit this topic — along with Docker — in a future post).

So, now, let’s finally take a look at how the administrator will configure that SSL/TLS certificate in Pivotal Cloud Foundry.

Configuring SSL/TLS Certificate in Pivotal Cloud Foundry

Configuring the SSL/TLS certificate for Pivotal Cloud Foundry may be done using the Pivotal Operations Manager UI.  From the Installation Dashboard, click on the Pivotal Elastic Runtime tile, then the Settings tab, and then “HA Proxy.” This makes it easy for the administrator to supply a certificate PEM file, or to let PCF generate the certificate.  Below is a screen shot that shows the page.


If you happen to be deploying your Cloud Foundry instance using the open source code base, e.g. using BOSH, then the PEM certificate file for the HA Proxy may be configured manually.  Look in the /templates subdirectory of the haproxy job to find the configuration details.  When the instance is deployed, the certificate and private key will ultimately find their way into the BOSH manifest file.  The PEM-encoded certificate and private key are found in the manifest under “jobs”, i.e.

- name: ha_proxy
  template: haproxy
  release: cf
  lifecycle: service
  instances: 1
  resource_pool: ha_proxy
  persistent_disk: 0
  networks:
  - name: default
    static_ips:
    - 10.x.y.z
  properties:
    networks:
      apps: default
    ha_proxy:
      timeout: 300
      ssl_pem: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----

In either case, whether you are using PCF or the open source code base, it’s important to recognize that this is a task that needs to be done only once, by the security administrator for the cloud, and not by every developer who deploys an application or another instance of Tomcat.


Moving your enterprise application deployments into Cloud Foundry means that your developers get an SSL/TLS connection from their users’ browsers to the “front door” of the cloud infrastructure, for free.  One of the key benefits of a PaaS is that your busy developers do not have to waste valuable time dealing with configuration details such as provisioning server certificates and editing their Tomcat server configurations. The developers can spend their time writing code that drives revenue for the business, and then just do “cf push <my-application>”.  Dealing with the operational security of the application server infrastructure is factored out of their workday.

Finally, it’s important to note that while using Cloud Foundry makes things easy by eliminating routine configuration tasks like enabling SSL/TLS, it is still possible to customize your Tomcat server configuration, if and as needed. This would be done by creating and deploying a customized Cloud Foundry Buildpack, which contains your enterprise-specific changes.  In my next post, I’ll describe a use case scenario in which a customized build pack is necessary, and we’ll describe how this is done.

Migrating a Secure Java Web Application into Cloud Foundry

It’s been quite a while since I’ve had the time to write a new post here, but that’s not because there is nothing interesting to discuss.  On the contrary, the absence of any new posts was actually a reflection of how busy I have been over the summer.  For at least the last 4 or 5 months, I’ve been pretty much “heads down,” working on some interesting security challenges for our Pivotal Cloud Foundry customers.  The good news for all my loyal readers is that, in that time, I’ve built up quite a backlog of interesting stuff to write about.  And now that fall is in the air, I’ve finally gotten to the point where I’ll have some time to share what I’ve been up to.  I promise that it will have been worth the wait.

My Most Recent PoC Effort

I’ve recently completed an interesting project in which I’ve successfully migrated a secure enterprise Java Web application into Cloud Foundry.  This was a Proof-of-Concept effort, but it was by no means a “Hello, World!” exercise.   The application was non-trivial, and had a number of interesting security requirements.  The overall goal was to demonstrate to a customer how an existing Java Web application (one that already followed all the current best practices for securing enterprise deployments) could be successfully migrated into the Cloud Foundry environment.  And now that it’s happily running in Cloud Foundry, the application benefits from all the additional capabilities of the PaaS, while still maintaining all the same enterprise security features as before.

In order to respect the privacy of the customer, I won’t be able to discuss the actual customer application, which is proprietary.  Instead, I’ve decided to describe this effort in terms of another, equivalent, secure Java Web application — one that has the benefit of being available in open source, and has the same basic security properties as the real customer application.  The sample application we’ll be using is the FortressDemo2 application, written by my colleague Shawn McKinney.  As noted, this code is available in open source (with good license terms 🙂 ) and it is an excellent example of how to properly implement a secure Java Web application.

The Target Application Requirements

The FortressDemo2 application uses an RDBMS for storing customer data, and depends on OpenLDAP as its policy and identity store.  The application uses JEE container-based security (e.g. a custom Tomcat Realm) for authentication and coarse-grained authorization. Internally, the application leverages Spring Security to implement its page-level policy enforcement points.  All interprocess communications are secured via SSL/TLS.  Thus, the user’s browser connection, the connections made from the application to the database, and those from the application to the OpenLDAP server are all secured using SSL/TLS. Finally, the certificates used for these SSL/TLS connections have all been issued by the enterprise security operations team, and so have been signed by a Certificate Authority that is maintained internally to the enterprise.  As noted, this application is a perfect proxy for the real thing, and so if we can show how to run this application in Cloud Foundry, then we can successfully run a real customer application.

So, to summarize, the requirements we face are as follows.  Deploy a secure Java Web application to Cloud Foundry, including:

  1. Use of SSL/TLS for all inter-process communication, with locally issued certificates.
  2. Secure provisioning of SQL database credentials for access to production database.
  3. Configuration of a customized JSSE trust store.
  4. Configuration of a custom Tomcat Realm provider for JEE container-based security.

In the next few posts, I’ll be describing what you need to know to achieve these security requirements, and successfully migrate your own secure enterprise Java application into Cloud Foundry.


How To Maintain RHEL Firewall Policies Using iptables

In my last post I described how to use JConsole to monitor an application running in a JVM on a remote host.  The main challenge I had encountered in this task was dealing with the network connectivity issues that always exist between the developer’s laptop (the client) and the application service, running on a server in the lab.  Specifically, we accounted for incorrect hostname-to-IP-address resolution, and configured the appropriate access policies for the Linux NetFilter firewall on the target RHEL machine. Long story short, the solution steps I described included instructions to edit the iptables file directly. That is, being a hacker at heart, I found it way simpler to open “/etc/sysconfig/iptables” in vi and edit the access policies in raw form, rather than using the proper interactive administration commands.

Of course, we claim to be security professionals around here, and so we want all our security policies to be managed appropriately: using standard, repeatable processes, and with a suitable audit trail.  So, as of today, I pledge no more editing iptables configuration files by hand. In this post, I’ll attempt to redeem myself, by describing the correct way to modify these firewall policies, using the iptables command line.

The iptables Concept of Operations

NetFilter is the Linux kernel firewall. It is maintained using the command line tool “iptables”.  As we’ve already seen, the name of the relevant, underlying configuration file is “/etc/sysconfig/iptables”.

The basic concept of operations is just like other firewalls — the system uses rules to decide what to do with each network packet it sees.  These rules are maintained in a set of data tables (hence the name “iptables”).  There are three tables of interest, called the “Filter” table, the “NAT” table, and the “mangle” table.  Each of these tables serves to administratively organize a set of rules needed for different purposes:

  • Filter Table – The rules in this table are used for filtering network packets.
  • NAT Table – The rules in this table are used for performing Network Address Translation.
  • Mangle Table – The rules in this table are used for any other custom packet modifications that may be needed.

In order to enable the remote debugging we would like to do, we’ll be working with the Filter table.

The next thing you need to know is that iptables operates using an event-based model. There are a number of predefined lifecycle events that occur between the time that a packet is received on a host network interface, and when it is passed through to a user process.  Similarly, there are predefined lifecycle events that occur after a packet has been sent outbound from a user process, but before it actually exits the host through the network interface.

The full set of lifecycle events is specified as follows:

  • PREROUTING
  • INPUT
  • FORWARD
  • OUTPUT
  • POSTROUTING

It is important to note that not all combinations of lifecycle events and tables are supported.  More on this in a moment.

The PREROUTING lifecycle event is defined to occur just after the packet has been received on the local network interface, but before any other handling occurs. This is where we have the first opportunity to affect the handling of an inbound packet.  This is the place to do things like alter the destination IP address to implement Destination NAT, or DNAT.  The packet is received and may be destined for a particular port and address on the local network.  You can write a rule that alters where the packet is delivered, i.e. sends it to a different destination IP address and/or port on the network.

The next lifecycle event is called INPUT. This event is defined to occur after PREROUTING, but before the packet is passed through to a user process.  This lifecycle event is where we can choose to enforce rules like dropping packets that have been received from a known-bad address.

Conversely, the OUTPUT event occurs just after a network packet is sent outbound from a user process. An example application of the OUTPUT lifecycle event would be outbound filtering, or accounting for network usage by an application doing an upload. This event provides us with an opportunity to affect the packet handling immediately after the user process has done the send, but before the packet has been transferred to the (outbound) network interface.

The POSTROUTING event occurs just before the network packet goes out the door, i.e. just before it actually leaves on the outbound network interface. This is our last chance to apply any policy to the packet. This is the right place to implement rules for Source NAT, or SNAT. For a system that serves the role of an internet proxy or gateway, we can use the POSTROUTING event as an opportunity to apply a rule that sets the source IP address of the outbound packet. One common use case is to use both PREROUTING and POSTROUTING events to prevent external hosts from seeing any internal IP address. Implementing both source and destination NAT enables a gateway host to expose only its own public IP, and keep the addresses of the internal hosts hidden from the outside world.
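As a sketch of the DNAT/SNAT pairing just described (the addresses, port, and interface name are illustrative, not from any real deployment), the two rules might look like the following. The rules are built as strings and only printed here; as root, each one would be applied by passing it to the iptables command:

```shell
# DNAT at PREROUTING: steer inbound web traffic to a hidden internal host.
DNAT_RULE="-t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination"

# SNAT at POSTROUTING: rewrite the source address to the gateway's public IP.
SNAT_RULE="-t nat -A POSTROUTING -o eth0 -j SNAT --to-source"

# Dry run: print the commands that would be executed (run them as root to apply).
for rule in "$DNAT_RULE" "$SNAT_RULE"; do
  echo "iptables $rule"
done
```

Note the -t nat flag: both of these chains live in the NAT table, not the default Filter table.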

Finally, the FORWARD lifecycle event applies to packets that are received on one network interface, and will be sent right back out on another interface. Again, this is a common function for a proxy or gateway host. The FORWARD lifecycle event is relevant to those packets that are handled entirely within the network stack, and are not delivered to a user process.

As a convenience, we refer to the set, or “chain” of policies that are associated with a specific lifecycle event by using the name of that event.  So, when talking about this we might say something like “we need to modify the INPUT rule chain of the Filter table.”

Again, it is important to note that not all of the combinations of tables and lifecycle events are supported. Only the combinations that are required (i.e. that are meaningful) are actually supported.

So, for the case of the NAT table, there are three rule chains that are meaningful:

  • PREROUTING
  • OUTPUT
  • POSTROUTING

For the Filter table, the following three rule chains are supported:

  • INPUT
  • FORWARD
  • OUTPUT

For the mangle table, all of the lifecycle events are supported.  This is the most general case, and thus the mangle table can be used in situations where the NAT table and the filter table are for some reason not sufficient.  I’ve never needed to deal with the mangle table in my production work, so I won’t cover that in detail here. In general, it is used for specialized processing requirements, such as being able to adjust status bits in an IP header (i.e. changing IP protocol header option flags).

Tell Me Where it Hurts

Recall that after the initial failure of JConsole to connect to our remote JVM, we speculated that we had a firewall issue. My educated guess was that the port I was trying to reach was being blocked by the iptables policies. So, the first step is to review the existing iptables policies to see what we have in place.

The command we need to show the existing firewall policies is as follows:

# iptables -L INPUT -n -v --line-numbers

Sample output would look something like:

Chain INPUT (policy ACCEPT)
num target  prot  opt  source    destination 
1   ACCEPT  all    --  anywhere  anywhere state RELATED,ESTABLISHED 
2   ACCEPT  icmp   --  anywhere  anywhere 
3   ACCEPT  all    --  anywhere  anywhere 
4   ACCEPT  tcp    --  anywhere  anywhere state NEW tcp dpt:ssh 
5   ACCEPT  tcp    --  anywhere  anywhere state NEW tcp dpt:rmiregistry 
6   ACCEPT  udp    --  anywhere  anywhere state NEW udp dpt:rmiregistry 
7   ACCEPT  tcp    --  anywhere  anywhere state NEW tcp dpts:irdmi:8079 
8   ACCEPT  tcp    --  anywhere  anywhere state NEW tcp dpt:webcache 
9   ACCEPT  udp    --  anywhere  anywhere state NEW udp dpt:webcache 
10  REJECT  all    --  anywhere  anywhere reject-with icmp-host-prohibited 
[root@localhost ~]#

The -L flag says to list the policies of the given chain, -v means, as usual, be verbose, -n displays addresses and ports in numeric form, and --line-numbers asks for ordinal line numbers in the output. The argument INPUT identifies the rule chain associated with the INPUT lifecycle event. By default iptables operates on the Filter table.

From this output one can verify whether the specific IP address, port number, and protocol that we need for JConsole will be ACCEPTed or REJECTed.  At this point, the port we need is not listed, and so our connection attempt will not be ACCEPTed.

In order to add the new rule, we would do the following command:

# iptables -I INPUT 10 -m state --state NEW -m tcp -p tcp --dport 18745 -j ACCEPT

In this command we are saying that we want to add an additional rule to the INPUT chain of the filter table (again, the filter table is the default if no table is specified). That is, when a NEW packet arrives over the protocol “tcp” for destination port (“dport”) 18745, we want the policy to be to ACCEPT that packet. The -j actually means to “jump” to the indicated target; we choose to jump to the built-in target ACCEPT. It’s important to note that the new rule should be made the tenth one, i.e. inserted after the existing rule found in position 9. That is why we used the --line-numbers option in the list command. The rules are processed in order, so it is important to insert new rules in the proper place. For example, if we placed the new ACCEPT rule after a more general REJECT rule, then the ACCEPT rule would never be reached.

The -m flag invokes the state module, which is followed by the option indicating that we are interested in connection requests that are in the “NEW” state (as opposed to, say, connections in the “ESTABLISHED” state).

The output of a -L list operation would now appear as follows:

Chain INPUT (policy ACCEPT)
num  target  prot  opt  source    destination 
1    ACCEPT  all   --   anywhere  anywhere state RELATED,ESTABLISHED 
2    ACCEPT  icmp  --   anywhere  anywhere 
3    ACCEPT  all   --   anywhere  anywhere 
4    ACCEPT  tcp   --   anywhere  anywhere state NEW tcp dpt:ssh 
5    ACCEPT  tcp   --   anywhere  anywhere state NEW tcp dpt:rmiregistry 
6    ACCEPT  udp   --   anywhere  anywhere state NEW udp dpt:rmiregistry 
7    ACCEPT  tcp   --   anywhere  anywhere state NEW tcp dpts:irdmi:8079 
8    ACCEPT  tcp   --   anywhere  anywhere state NEW tcp dpt:webcache 
9    ACCEPT  udp   --   anywhere  anywhere state NEW udp dpt:webcache 
10   ACCEPT  tcp   --   anywhere  anywhere state NEW tcp dpt:18745 
11   REJECT  all   --   anywhere  anywhere reject-with icmp-host-prohibited

Of course, using the above command, one could add as many additional rules as needed, specifying the source and destination addresses, protocol, and port(s) for each.

In order to delete a rule, we can again specify the line number of the rule to the delete command. Here we delete the 10th rule from the INPUT chain of the filter table.

# iptables -D INPUT 10

After doing an add or a delete, it’s a good idea to list the rules again, in order to make sure you have what you think you need, and the rules have been inserted or deleted as expected, and in the right order.

So, now that we’re all comfortable administering iptables firewall policies via the proper command line (or even better, via a version-controlled script), there’s no longer any excuse to edit the iptables files directly using vi.  While that may be quick and easy, it is not a reliable, repeatable process.  And in this business, accuracy counts: if we have to do something more than once, it always makes sense to automate it.
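As a starting point for such a version-controlled script, here is a minimal sketch. The default port number and insert position mirror the example above; both are assumptions you would parameterize for your environment. The function only echoes the command so it can be reviewed first; dropping the leading echo would apply the rule for real, as root:

```shell
# allow_port: build the iptables command that opens one TCP port at a given
# position in the INPUT chain of the filter table.
allow_port() {
  port="${1:?usage: allow_port PORT [POSITION]}"
  pos="${2:-10}"
  # Dry run: echo the command so it can be reviewed (and version controlled).
  # Remove the leading "echo" to actually apply the rule as root.
  echo iptables -I INPUT "$pos" -m state --state NEW -m tcp -p tcp \
       --dport "$port" -j ACCEPT
}

allow_port 18745 10
```

Checking the result with iptables -L INPUT -n --line-numbers after each run keeps the process honest.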

How to Monitor a Remote JVM running on RHEL

Even when I’m not working on security, I’m still working on security.

I’ve recently been working on a large customer application deployment that has required some performance analysis and tuning. The usual first step in this process is to use a tool like JConsole, a very useful management and monitoring utility that is included in the JDK. In short, JConsole is an application developer’s tool that complies with the Java Management Extensions (JMX) specification. It has a nice GUI, and it allows you to monitor either a locally or remotely executing Java process, by attaching to the running JVM process via RMI. Of course, before you can attach to the running JVM you need to be appropriately authenticated and authorized. In this post I’ll provide a brief overview of the steps that were required to connect from JConsole running locally on my MacBook Pro, to the remote JVM process, which I was running on a RHEL v6 virtual machine in the lab.

Required JMX Configurations

Before you can run JConsole, there are a number of changes that need to be made to the JMX configuration on the target machine. In this specific case, I’m using OpenJDK on RHEL, and the relevant files are located in /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/lib/management.

The first step is to do a ‘cd’ into that directory, and edit the following files appropriately:

# cd /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/lib/management
# vi jmxremote.password
# vi jmxremote.access

There is a template file in the OpenJDK distribution called “jmxremote.password.template” that you can copy to “jmxremote.password” in order to get started. Note also that the permissions on that password file must be set correctly, or you’ll see a complaint when you start the JVM. While you are setting this up, be sure to do a ‘chmod’ to make this file read/write by the owner, only:

# chmod 600 jmxremote.password

In general, these configuration files contain good comments, and all that is really required is to uncomment the lines corresponding to the settings you plan to use. Just to get started, the easiest approach is to disable SSL, and use password authentication for a read-only user. You can edit jmxremote.password to contain your favorite username/password combination, and subsequently edit jmxremote.access to give that username appropriate access. In my case, this was just read-only access. Some sample lines from these two files follow:

architectedSecUser h4rd2Guesss!
architectedSecUser readonly

If you are on an untrusted network and you are planning to monitor a program that handles sensitive data, you’ll want to enable SSL for your RMI connection. Doing that means that you will need to go through the standard drill of configuring the JDK keystore and certificate truststore. I won’t go over those individual steps here since it would be beyond the scope of this post.  Hmmm…come to think of it, perhaps I should revisit the general topic of JDK keystore/truststore configuration in a subsequent post. The world can never have too many certificate tutorials 🙂

Listening on the Right Interface

The first real glitch I hit in this task was that the target RHEL machine had no resolvable hostname. This is actually pretty common with developer machines running in a virtualized setting. Machines are cloned and the IP address is changed, but frequently there is no unique hostname assigned, and DNS is never updated. Doing a ‘hostname’ command on the machine will yield something like “localhost.localdomain.” The problem with this situation is that when we run the target application JVM with JMX access enabled, it will be listening only on the local loopback address, and won’t be accepting connections on the LAN interface. When we issue our RMI request from JConsole targeting the remote IP address on the local LAN (say, something like, we’ll see an error like “connection refused.”

To diagnose this situation, get a shell prompt on the target machine and issue the command:

# hostname -i

If it returns “”, or “localhost”, you don’t have a proper hostname configured. You will either have to update the hostname interactively and/or edit the /etc/hosts file. Alternatively, you can ignore the hostname issue and just specify the IP address explicitly as a Java system property when you start the JVM. Here are the values I used when starting my JVM:

java \
-D<any other system properties needed> \
-Dcom.sun.management.jmxremote.port=18745 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=true \
-Djava.rmi.server.hostname= \
-cp myApplication.jar myMainClass

You would now run JConsole on your local machine, and connect to the remote host, at the chosen IP address and port. If you are not using username/password authentication you can connect as follows:

# jconsole

If you are using username/password authentication, you will have to enter your credentials in the New Connection… dialog box in the GUI. In most cases, that’s all there is to it.  But, of course, the connection still did not work for me.  Grrrr…those darned security guys! 😉

Configuring the Linux iptables Firewall Rules

Even after the target application JVM was up and running on the remote machine, and listening on the correct address, I still could not get JConsole to connect to it from my local laptop. Since I was able to get an SSH session on the remote Linux box, I immediately concluded that there had to be an issue with the specific port(s) JConsole was trying to reach…Hmmm…I just chose port number 18745 randomly (OK, it was really pseudo-randomly)… Maybe I’m hitting up against some firewall rule(s) for that port? Perhaps port 22 (SSH) is allowed, but port 18745 is not? In fact, who knows what other dynamic ports JMX may be trying to open? So, in an attempt to determine what ports were being used, I next ran JConsole with some logging turned on.

To turn on logging for JConsole, create a file named “” in the directory from which you will be running JConsole, and point the java.util.logging.config.file system property at it on the JConsole command line. Do the following to create the file:

# cd /myproject
# touch
# vi

Cut/paste the following into

handlers = java.util.logging.ConsoleHandler
java.util.logging.FileHandler.pattern = %h/java%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
java.util.logging.ConsoleHandler.level = FINEST
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
javax.management.level = FINEST
javax.management.remote.level = FINEST

Then, go ahead and start JConsole with:

# jconsole -J-Djava.util.logging.config.file=
Now, when you try to connect, you will see log output on your terminal window which reveals the dynamic port number that JConsole is trying to use for RMI lookup. An example is shown below:

Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] connecting...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] finding stub...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] connecting stub...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] getting connection...
Jan 28, 2014 1:57:56 PM RMIConnector connect
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] failed to connect: java.rmi.ConnectException: Connection refused to host:; nested exception is: Connection refused
Jan 28, 2014 1:57:56 PM RMIConnector close
FINER: [ rmiServer=RMIServerImpl_Stub[UnicastRef [liveRef: [endpoint:[](remote),objID:[5380841b:143da2d37a6:-7fff, 403789961180333858]]]]] closing.

In this case, JConsole was attempting to create a connection to port 45219.  Now that we know that crucial tidbit of information, we can go ahead and update the Linux firewall policy in /etc/sysconfig/iptables to allow that specific port number.  Do the following:

# su root
# vi /etc/sysconfig/iptables
# add 2 lines similar to these to iptables policy.
-A INPUT -m state --state NEW -m tcp -p tcp --dport 18745 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 45219 -j ACCEPT

# service iptables restart

As shown, I needed to restart the firewall service after making the necessary policy changes.  After that, I simply reconnected from JConsole, and this time the connection to the remote machine succeeded, and I could proceed to monitor my application’s resource utilization.
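One footnote worth knowing: on JDK 7u4 and later you can pin the otherwise-dynamic RMI port with the property, so a single firewall rule covers everything and the whack-a-mole above is unnecessary. A sketch, reusing the port and the placeholder jar/class names from the earlier example:

```shell
# Pin both the RMI registry port and the RMI server port to the same
# fixed value (JDK 7u4+), eliminating the dynamically chosen second port.
JMX_PORT=18745
JMX_OPTS="-Dcom.sun.management.jmxremote.port=$JMX_PORT \
-Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false"

echo "$JMX_OPTS"
# Then start the application as before:
#   java $JMX_OPTS -cp myApplication.jar myMainClass
```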

Like I said, even when I’m not doing security, I’m still doing security.

Happy Remote Monitoring!