Some have said "the cloud changes everything." The same was said about virtualization, so it's hardly surprising that cloud computing and virtualization together have a major impact on application development and application lifecycle management (ALM). For some, the focus has been on the testing and deployment issues that arise when cloud platforms are involved, but more companies are recognizing that their development and ALM planning and processes need to be modernized for a world of virtualized, pooled resources.
Both virtualization and cloud computing break the normal bond between application and resources through the use of a resource pool. The first step in modernizing development and ALM processes, including application integration, is to review enterprise architecture, business process management and ALM practices to identify any areas where fixed-resource assumptions exist.
Security is a likely example: many business security plans rely on presumptions of physical security for data center resources, and even on security protection for specific servers and databases. Such assumptions are much more difficult to sustain when using virtualization or cloud computing. Many management practices are also focused on devices, and where device-to-application assignments are assumed to be fixed, the practices can break down when the relationships are made virtual.
Modern application development and management must design for virtual resources. That, in general, means treating applications as a collection of abstract components that are deployed on a pool of shared resources. This structure must be presumed as the goal in application design and all testing (with the possible exception of component testing) must be conducted based on this application/resource relationship.
The obvious but still critical first step is making all development and deployment tools (the integrated development environment; deployment and integration scripting, or DevOps; and fault, capacity, accounting and performance management tools) themselves compatible with the virtual resource model. This doesn't mean that all these tools have to run on or in a virtual or cloud resource pool, but they must operate with one. Often this is a matter of organizing how virtual resources can be addressed through directories or management interfaces.
The primary difference between cloud virtualization deployments and physical-server deployments is the need to support discovery and integration of the component elements of the application. In most cases this involves the use of directory tools (the domain name system, the Lightweight Directory Access Protocol, UDDI) to record the location of deployed elements and allow that location to be accessed by others. For this to work, applications must locate components and resources via directories rather than hard-coded addresses. The mechanism for doing this will vary depending on the software platform, whether service-oriented architecture is used, and so on.
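As a minimal illustration of directory-based discovery, a component can resolve a peer's location at run time through DNS rather than hard-coding an address. This sketch uses Python's standard library; the service name, port and fallback address are hypothetical examples:

```python
import socket

def resolve_component(service_name, default_port, fallback=None):
    """Look up a component's network location via DNS instead of a
    hard-coded address, so redeployments within a resource pool are
    picked up automatically. Falls back to a known address if the
    directory lookup fails."""
    try:
        infos = socket.getaddrinfo(service_name, default_port,
                                   proto=socket.IPPROTO_TCP)
        # Each entry is (family, type, proto, canonname, sockaddr);
        # the first two fields of sockaddr are (address, port).
        addr, port = infos[0][4][:2]
        return addr, port
    except socket.gaierror:
        if fallback is not None:
            return fallback
        raise

# Example: resolve a component by name rather than a fixed IP.
host, port = resolve_component("localhost", 8080)
```

The same pattern applies to LDAP or UDDI lookups; the point is that the address lives in the directory, not in the application.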
Since ALM guides applications through a development-to-production-and-revision lifecycle, one major issue in molding ALM practices to virtualization and the cloud is determining how the lifecycle progression can be broken into properly isolated "sandboxes." Some users have worked to separate their lifecycle sandboxes by using independent directories to locate application components, but this may not prevent users or other software from accidentally crossing between production and test versions.
Other users have maintained physical servers and distinct test resources for nonproduction sandboxes, but this means that testing of applications or changes isn't being done in the same environment in which the application will ultimately run. Overall, it's probably safest to set up different virtual networks with different application components in each, representing each of the lifecycle stages that involve testing, certification or production. The tools that support testing and deployment, including integration and management tools, will have to not only contend with multiple virtual environments but also keep them separated to prevent operating processes from accidentally crossing boundaries and contaminating production systems.
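One way to keep operating processes from crossing boundaries is to make every tool invocation carry an explicit lifecycle-stage tag and refuse mixed-stage operations. A minimal sketch of that guard, where the stage names, virtual network IDs and directory URLs are all hypothetical:

```python
# Hypothetical mapping of lifecycle stages to isolated virtual
# networks, each with its own per-stage component directory.
SANDBOXES = {
    "test":       {"network": "vnet-test", "directory": "ldap://dir.test.example"},
    "certify":    {"network": "vnet-cert", "directory": "ldap://dir.cert.example"},
    "production": {"network": "vnet-prod", "directory": "ldap://dir.prod.example"},
}

class SandboxViolation(Exception):
    """Raised when an operation would cross lifecycle boundaries."""

def directory_for(tool_stage, target_stage):
    """Return the component directory for a stage, but only if the
    requesting tool operates in that same stage; deployment and
    integration tools must never mix test and production endpoints."""
    if target_stage not in SANDBOXES:
        raise SandboxViolation(f"unknown stage: {target_stage!r}")
    if tool_stage != target_stage:
        raise SandboxViolation(
            f"tool in {tool_stage!r} may not touch {target_stage!r}")
    return SANDBOXES[target_stage]["directory"]
```

A test-stage deployment script, for example, can then resolve only test-network directories, and any attempt to reach production fails loudly instead of silently contaminating it.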
SaaS services can create particularly insidious issues in ALM if users aren't careful. While virtualization, IaaS and PaaS all provide users with direct control over the number of application instances available, and thus whether test and production systems are strictly partitioned, the facilities available for segregation in SaaS will vary. Often creating test "sandboxes" will mean multiple accounts, each essentially a new SaaS contract with new costs. It may be difficult to tell by inspecting software parameters whether a given account represents the production system or a test system, and special care should be taken, first, to learn just what ALM-friendly partitioning of instances is available from the vendor and, second, to manage how these instances are used.
Users offer two specific hints for ALM in cases where SaaS and internal software components are combined to create a single application. First, create a single "test" instance of the SaaS application and use it for all the nonproduction phases of ALM. That reduces the SaaS cost but also the risk of confusion when using integration tools to bind all the components into a single application. Second, when a new version of the SaaS application is introduced, hold the internal components in the last validated state and run all of the ALM test phases to validate the new SaaS version. Then replace the old SaaS version with the new and proceed in the normal way to integrate any changes to internal components.
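The second hint above can be sketched as a simple orchestration routine: internal components stay frozen at their last validated state while every nonproduction ALM phase runs against the new SaaS version. The phase names and the `run_phase` callable are hypothetical stand-ins for real ALM tooling:

```python
# Hypothetical nonproduction phases of the ALM lifecycle.
NONPRODUCTION_PHASES = ["unit", "integration", "certification"]

def validate_saas_upgrade(run_phase, internal_version, new_saas_version):
    """Hold internal components at their last validated state and run
    every nonproduction ALM phase against the new SaaS version; only
    if all phases pass is the upgrade cleared for production.
    `run_phase(phase, internal, saas)` is supplied by the test
    harness and returns True on success."""
    for phase in NONPRODUCTION_PHASES:
        if not run_phase(phase, internal_version, new_saas_version):
            return False  # keep the old SaaS version in production
    return True  # safe to swap in the new SaaS version

# Usage with a stub harness that passes every phase.
ok = validate_saas_upgrade(lambda p, i, s: True, "v1.4", "saas-new")
```

Only after this gate passes would the old SaaS version be replaced and changes to internal components resume through the normal lifecycle.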
ALM for virtual and cloud environments is complex, but more so in processes than in tools. That makes it all the more critical to tune the ALM procedures to ensure that ALM meets the goals of application stability that justify it in the first place.
This was first published in October 2012