Some Basic Security Considerations for Cloud Foundry Services

In this post, I’m going to discuss some of the basic security considerations for using Cloud Foundry services.

In a nutshell, Cloud Foundry currently supports two different kinds of services.  First, there are Managed Services.  These are the “native” services that are part of Cloud Foundry and implement the Cloud Foundry service broker API.  The canonical example of a managed service is the MySQL database service.  Upon request, Cloud Foundry can provision capacity for a MySQL database instance, generate credentials, and bind the newly created instance to an application.

In addition, Cloud Foundry also supports User-Provided Services.  These are enterprise services that do not implement the Cloud Foundry service broker API, and (probably) exist outside the perimeter of the Cloud Foundry deployment.  An example would be, say, an existing Oracle database that runs on another platform, but is accessed by the application(s) deployed into Cloud Foundry.

Finally, it is important to note that there are no restrictions on the number or types of services that an application may bind.  Cloud Foundry applications are able to use any combination of these two service types, as needed.

Now, having introduced the basic idea of the two different service types, I’d like to use the remainder of this post to discuss some of their security implications.  What are the basic security considerations that an architect should think about when using these Cloud Foundry service types? What are their pros and cons? What are the implications for your existing security mechanisms? Your technical control procedures?

Some Security Considerations When Using Managed Services

One advantage of using a managed service is that, in general, the credentials that are needed to access that service will not need to be pre-configured into your applications. Rather, these can be dynamically provisioned and injected into the application at runtime.

This was the case with my most recent PoC, in which I leveraged a MySQL database. Because MySQL is available as a managed service in Cloud Foundry, I was able to use the cf create-service command to create the needed service instance in the appropriate organization and space, and then I just needed to do a cf bind-service. My application would be supplied with the necessary JDBC connection string and credentials at runtime. In the specific case of MySQL, the credentials created by the service broker were both unique to the instance and strongly chosen (e.g., my application received a user id of “5GhaoxJwtCymalOI,” with an equally opaque password, for use when accessing a schema named “cf_f8e2b861_2070_47bf_bfd0_b8c863f4d5d2.”  Nice.)
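
To give a concrete feel for what the application sees, here is a minimal sketch of how a Java application might read those injected credentials at startup.  It assumes the Jackson library is on the classpath and that the broker publishes its credentials under a “p-mysql” label with jdbcUrl/username/password fields; the exact label and field names depend on the service broker in use.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class VcapCredentialsExample {

        public static void main(String[] args) throws Exception {
            // VCAP_SERVICES is injected by Cloud Foundry at run time; nothing
            // needs to be persisted in a properties file inside the artifact.
            String vcap = System.getenv("VCAP_SERVICES");
            if (vcap == null) {
                throw new IllegalStateException("VCAP_SERVICES not set - not running in Cloud Foundry?");
            }

            JsonNode root = new ObjectMapper().readTree(vcap);

            // "p-mysql" is the label used by one common MySQL broker; yours may differ.
            JsonNode credentials = root.path("p-mysql").path(0).path("credentials");

            String jdbcUrl  = credentials.path("jdbcUrl").asText();   // field names vary by broker
            String username = credentials.path("username").asText();
            String password = credentials.path("password").asText();

            // Hand these to your DataSource / connection pool; avoid logging the password.
            System.out.println("Connecting to " + jdbcUrl + " as " + username);
        }
    }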

If your current deployment process for application database login details involves manually provisioning and distributing the credentials (e.g., via an email to the deployer), then moving to a managed service will likely improve your overall risk profile. You won’t need to have a valid user id and password for your production database sitting around in an email or a properties file somewhere.  The credentials are made available just in time, via the VCAP_SERVICES environment variable, and should not need to be persisted anywhere else in the application configuration.

Since the credentials can be strongly chosen, and do not need to be persisted, it is unlikely that an attacker could obtain them through routine leakage, or penetrate your production service by brute-force guessing.  They should contain enough entropy that anyone trying to guess the service credentials directly will wind up triggering an alarm, or at least getting noticed in your log monitoring solution, long before succeeding.

Of course, keep in mind that old saying about “your mileage may vary.”  The specific algorithm used by your service broker may differ, and the supplied credentials may have more or less entropy than you expected.  Whether you use one of the existing service brokers or build your own, be sure to check whether the entropy is sufficient for your required level of assurance.  The service broker may provision credentials using a low-entropy approach, or may even reuse an existing account database, so don’t assume these will always be unguessable.
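
As a rough back-of-the-envelope check, you can estimate the entropy of a generated credential from its length and alphabet size.  The 16-character, 62-symbol figure below is simply an assumption that mirrors the user id shown earlier, not a statement about any particular broker.

    public class CredentialEntropyEstimate {

        // Rough estimate: bits of entropy = length * log2(alphabetSize),
        // assuming each character is chosen independently and uniformly at random.
        static double estimateBits(int length, int alphabetSize) {
            return length * (Math.log(alphabetSize) / Math.log(2));
        }

        public static void main(String[] args) {
            // e.g. a 16-character identifier drawn from [A-Za-z0-9] (62 symbols): ~95 bits
            System.out.printf("16 chars over 62 symbols: %.0f bits%n", estimateBits(16, 62));
            // versus an 8-character, lowercase-only value (26 symbols): ~38 bits
            System.out.printf("8 chars over 26 symbols:  %.0f bits%n", estimateBits(8, 26));
        }
    }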

Finally, it is still important to maintain a meaningful audit trail.  For regulatory compliance reasons, you may have to periodically produce (and review) reports that list all the production database accounts that are being used, along with their access rights. In the case of Cloud Foundry managed services, all these account mappings are maintained by the Cloud Controller.  To continue to satisfy these existing reporting requirements, you may find that you have to build the necessary integration(s) and collect these details using the cf command line interface or the PCF console.  Alternatively, perhaps you can work with your auditor to amend your reporting requirements, so that the control procedures you’ll follow with the cloud-based applications are updated to reflect the new reality.  For example, instead of reporting on specific applications and corresponding database accounts, one could provide evidence that the application in question always uses service bindings, and that therefore no hardcoded database credentials are persisted to disk in properties files.
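
As a sketch of what such an integration might look like, the snippet below simply captures the output of the cf services command (which lists the service instances in the targeted space, along with the applications bound to them) so it can be archived as audit evidence.  It assumes the cf CLI is installed and already logged in to the appropriate org and space.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class ServiceBindingReport {

        public static void main(String[] args) throws Exception {
            // Run "cf services" against the currently targeted org and space.
            Process cf = new ProcessBuilder("cf", "services")
                    .redirectErrorStream(true)
                    .start();

            StringBuilder report = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(cf.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    report.append(line).append(System.lineSeparator());
                }
            }
            cf.waitFor();

            // In a real control procedure you would timestamp and archive this output
            // (or parse it) as part of your periodic access review.
            System.out.println(report);
        }
    }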

It is also worth noting that security operations teams looking for suspicious activity will typically seek to correlate log events across related user ids, services, IP addresses, geographies, and so on.  One can be sure that a user id that is dynamically bound via the VCAP_SERVICES environment variable should only ever be used from the corresponding DEA node, and should never be seen accessing the database from an IP address that is not part of the DEA pool.
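
A simple correlation rule in a monitoring pipeline might therefore flag any login by a broker-provisioned user id that originates outside the known DEA address range.  The sketch below is purely illustrative: the address list, the set of bound user ids, and how events reach the check are all assumptions you would replace with your own monitoring plumbing.

    import java.util.Set;

    public class BindingSourceCheck {

        // Placeholder addresses standing in for your DEA (or Diego cell) pool,
        // as maintained by your platform team.
        private static final Set<String> DEA_POOL =
                Set.of("10.0.16.11", "10.0.16.12", "10.0.16.13");

        /**
         * A database login looks suspicious when a dynamically bound user id
         * connects from an address outside the DEA pool.
         */
        static boolean suspicious(String dbUserId, String sourceIp, Set<String> boundUserIds) {
            return boundUserIds.contains(dbUserId) && !DEA_POOL.contains(sourceIp);
        }

        public static void main(String[] args) {
            Set<String> boundUserIds = Set.of("5GhaoxJwtCymalOI"); // ids issued by the broker
            System.out.println(suspicious("5GhaoxJwtCymalOI", "203.0.113.50", boundUserIds)); // true  -> alert
            System.out.println(suspicious("5GhaoxJwtCymalOI", "10.0.16.12", boundUserIds));   // false -> expected
        }
    }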

Similarly, you may need to correlate the back-end user id that accessed a service with the front-end user id that initiated the original request.  Establishing that linkage may require traversing (and/or correlating) yet another level of indirection.  In addition, that linkage may also vary over time, as the service is bound, unbound, and rebound.  In summary, consider your requirements for log correlation, and take advantage of any new opportunities that the cloud deployment makes possible, but be aware that some of your existing correlations may not work as well when service credentials are being dynamically assigned.

Some Security Considerations for User-Provided Services

It is possible for a Cloud Foundry application to access an external service without creating a corresponding user-provided service definition. The application can continue to use whatever configuration technique it used before, likely by “hard-coding” the necessary connection details into the application configuration. For example, the application may still have a JSON or XML properties file, or a Spring Java configuration class that captures the details of where to find an Oracle database. However, because these credentials are statically maintained in the application configuration, and will likely be associated with manual workflow processes, they may be susceptible to data leakage. It would work, but you’re not really getting all the advantages of being in the cloud.
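
As a concrete illustration of that “before” picture, a Spring Java configuration class along the following lines would still run when pushed to Cloud Foundry, but the connection details and credentials travel inside the war file.  The host, service name, and credentials shown are placeholders, not values from any real system.

    import javax.sql.DataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    @Configuration
    public class HardCodedOracleConfig {

        // Anti-pattern: connection details and credentials baked into the artifact.
        @Bean
        public DataSource dataSource() {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName("oracle.jdbc.OracleDriver");
            ds.setUrl("jdbc:oracle:thin:@db.example.internal:1521/ORCLPDB1"); // placeholder host/service
            ds.setUsername("app_user");        // placeholder
            ds.setPassword("s3cr3t-password"); // placeholder - ends up on disk in the war file
            return ds;
        }
    }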

Creating a corresponding user-provided service definition is simply a way to tell Cloud Foundry about the service access. Once this is done, the applications deployed into the cloud can leverage the VCAP_SERVICES environment variable. Just as in the case of a managed service, the application can use the credentials found there to access the service endpoint. Thus, defining a user-provided service simply enables Cloud Foundry to inject the necessary credentials, which means they no longer have to be carried in a properties file within the application war file. The actual service itself can remain unchanged.

Of course, the application would likely need to be upgraded to take advantage of the VCAP_SERVICES environment variable, but this can easily be done in any programming language, and in Java it can be made even simpler by using a connector component like Spring Cloud.
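
For example, with the Spring Cloud connectors the lookup collapses to something like the following sketch.  It assumes the spring-cloud-spring-service-connector and spring-cloud-cloudfoundry-connector dependencies are on the classpath, and “my-oracle-db” is simply an illustrative service instance name.

    import javax.sql.DataSource;
    import org.springframework.cloud.config.java.AbstractCloudConfig;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class CloudServiceConfig extends AbstractCloudConfig {

        // Resolves the bound service (managed or user-provided) from VCAP_SERVICES
        // and exposes it as a DataSource - no credentials in the artifact.
        @Bean
        public DataSource dataSource() {
            return connectionFactory().dataSource("my-oracle-db");
        }
    }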

It’s also important to point out that the actual credential provisioning process is still entirely up to you. Once the service credentials are known, they are stored in the Cloud Controller database via the cf create-user-provided-service command. If your established account provisioning control procedures are mature and well integrated, then it might make perfect sense to just continue to leverage those. The responsibility for keeping custody of the credentials shifts from the application configuration to the Cloud Controller database. That would seem to be a good thing. It is safe to say that whenever something security-related can be factored out of your developers’ day, it probably should be.

Since Cloud Foundry introduces the native access control concepts of the Organization and Space, a decision needs to be made about how those existing services will be administered as user-provided services. Administrators, developers, and applications can only see and bind services that they have permission for within their space, and so you’ll need to think about how to administer that user-provided service definition in the Cloud Foundry environment. How does the maintenance of that proxy or placeholder record in the Cloud Controller correlate with the administration of the real resource(s)?  Does the administrator who is responsible for the real resource also administer the user-provided service definition in the cloud?  What new audit reports and record keeping correlations will be needed?

Will the user-provided service require the application to use SSL/TLS for access?  If so, then the client application deployed to the Cloud Foundry environment may need to be pushed (deployed) with a customized buildpack.  Just as we prefer to factor the login credentials out of the application configuration, we’d also prefer to factor out the certificate management functions.  (This is not hard to do, but it is out of scope for the current post, so I’ll cover it in my next post.)

Conclusion

Moving your applications to Cloud Foundry will clearly help your organization optimize both infrastructure utilization and developer productivity.  But, in addition, I would assert that the proper use of managed and user-provided services within Cloud Foundry has the potential to make your application deployments even more secure than they would have been if you weren’t running in the cloud.  Using both managed services and user-provided services has many potential advantages, including:

  • reducing an application’s attack surface
  • improving the confidentiality of production service credentials
  • improving the consistency of operational procedures
  • enabling improvements in security event monitoring
  • improving visibility, thereby enabling more efficient audit processes

The only major caveat here is to consider how your existing operational controls will need to evolve in view of an ever-accelerating application lifecycle.  Most organizations find that some existing control procedures will be obviated, and others will need to be amended.  But the real gains come from the new and more efficient controls that are made possible by the improved visibility that comes from having consistent service bindings.