Occasionally I encounter Enterprises who say they want to move their legacy applications to the cloud. When I ask why, they usually say it's because they're running out of power and space in their own data centers, or because they don't have the human capital to continue managing the infrastructure supporting these applications. The challenge with legacy applications is that very few of them were designed to scale horizontally. Instead, they scale vertically: you add compute and I/O capacity to a VM instance rather than adding new instances. This presents a problem for many IaaS offerings because VM instances come in only a few sizes, and reconfiguring an instance often requires downtime. Moreover, many legacy applications are brittle: they have hard dependencies on the underlying infrastructure or on other components of the application, such that when a failure occurs, users experience downtime. Finally, legacy applications may have traffic patterns, such as heavy read operations, that adversely affect their total cost of ownership, because many IaaS and PaaS offerings charge for network egress.
The cloud is ideally suited to loosely coupled applications that are designed to scale horizontally. When building new applications, Enterprises should design them for resiliency; that is, they should assume failures will periodically occur and design the application to recover from them gracefully. If possible, the application should continue running with a reduced set of capabilities rather than experiencing an outage. A good way to test the resiliency of an application is to inject faults. Netflix has done a lot of work in this area and has since open-sourced its Chaos Monkey fault-injection system, which tests how a service reacts to different types of failures. For more information, see http://techblog.netflix.com/2011/07/netflix-simian-army.html.
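The "reduced set of capabilities" idea can be sketched in a few lines of Python. The names here (get_recommendations, POPULAR_TITLES) are hypothetical, and the fail_rate parameter is only a crude stand-in for Chaos Monkey-style fault injection, but the pattern is the same: catch the failure and serve a degraded response instead of an outage.

```python
import random

# Static fallback content served when the dependency is down.
POPULAR_TITLES = ["Title A", "Title B", "Title C"]

def get_recommendations(user_id, fail_rate=0.0):
    """Pretend remote call. fail_rate lets us inject faults on demand
    to verify that the fallback path actually works."""
    if random.random() < fail_rate:
        raise ConnectionError("recommendation service unavailable")
    return [f"Personalized pick {i} for user {user_id}" for i in range(3)]

def recommendations_with_fallback(user_id, fail_rate=0.0):
    try:
        return get_recommendations(user_id, fail_rate)
    except ConnectionError:
        # Degraded mode: reduced capability, but no outage.
        return POPULAR_TITLES

# With faults injected 100% of the time, users still get a response:
print(recommendations_with_fallback("u42", fail_rate=1.0))
# → ['Title A', 'Title B', 'Title C']
```

Running this with fail_rate=1.0 simulates a total dependency failure, which is exactly the kind of scenario a fault-injection tool forces you to handle.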
So what types of applications are good for the cloud? Good candidates are applications with predictable or unpredictable bursting, where there are intense spikes in activity. Ordinarily, if you were to run these sorts of applications in your own data center, you'd have to account and plan for those periodic spikes. With cloud computing, you pay for the added capacity only when you need it. Applications that are growing fast, or those with on-off usage patterns, like a performance review application, are also suitable candidates.
Unfortunately, the majority of legacy applications were not designed for the on-demand computing environment we have today, so when you move them to the cloud you are essentially paying for them to run all the time. That said, you may still be able to derive savings by moving some of these workloads to the cloud, at least from a power standpoint. Let me explain: say you have a server that consumes 453 watts of power on average. If your data center has a PUE of 1.8, which is the industry average, you will pay roughly $491 a year to keep that server powered on at US industrial power rates. Compare that to a medium-size VM in Azure, which is only $115 per year, or an extra-large instance, which at $460 per year is still less than it would cost to run the server in your own data center, and that's for 8x1.6GHz CPUs and 14GB of RAM.
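Here is a quick sanity check of those power figures. The ~$0.0687/kWh industrial rate is an assumption inferred from the quoted $491 per year; actual rates vary by region.

```python
# Back-of-the-envelope annual power cost for one on-premises server.
SERVER_WATTS = 453        # average draw of the server
PUE = 1.8                 # industry-average Power Usage Effectiveness
RATE_PER_KWH = 0.0687     # assumed US industrial rate, $/kWh
HOURS_PER_YEAR = 24 * 365

# PUE scales the server's draw up to total facility draw (cooling, etc.).
facility_kwh = SERVER_WATTS * PUE / 1000 * HOURS_PER_YEAR
annual_cost = facility_kwh * RATE_PER_KWH
print(f"~${annual_cost:.0f} per year")  # → ~$491 per year
```

Note that PUE matters: at the same rate, a more efficient facility with a PUE of 1.2 would bring the same server down to roughly $327 a year.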
A Surround Strategy
When talking about legacy applications, I like to use the following analogy. LOB and legacy applications generate data, which is equivalent to gold in the ground: it has value to the Enterprise, but it may be restricted to a segment of the workforce, and it can only be mined by the LOB applications themselves. Rather than moving those applications to the cloud, you can surround them with lightweight, modern applications delivered from the cloud that expose the data, turning the gold into jewelry and ultimately creating greater value for the Enterprise. This is essentially what Microsoft has done with its ERP system, where it created a web services layer to expose information that can be consumed by these lightweight, purpose-built apps.
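As a rough sketch of the surround pattern, here is a minimal read-only JSON layer over some hypothetical legacy records. The data, paths, and port are all illustrative; in a real system the handler would query the LOB application's database or its existing interfaces rather than an in-memory dict.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the legacy LOB data store.
LEGACY_RECORDS = {
    "1001": {"customer": "Contoso", "balance": 2500.00},
    "1002": {"customer": "Fabrikam", "balance": 1875.50},
}

class ReadOnlyApiHandler(BaseHTTPRequestHandler):
    """Thin, read-only JSON layer surrounding the legacy data."""

    def do_GET(self):
        # Expect paths of the form /accounts/<id>.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "accounts" and parts[1] in LEGACY_RECORDS:
            body = json.dumps(LEGACY_RECORDS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_in_background(port=8008):
    server = HTTPServer(("127.0.0.1", port), ReadOnlyApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_in_background()
    with urllib.request.urlopen("http://127.0.0.1:8008/accounts/1001") as resp:
        print(resp.read().decode())  # → {"customer": "Contoso", "balance": 2500.0}
    server.shutdown()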