Cloud Computing Forecast: Still Hazy

Moving to the cloud holds promise and risks for engineering companies.

By Brian Albright

Cloud computing has caught fire with both chief information officers (CIOs) and the press, and nearly every industry software vendor has announced some sort of cloud-based offering or a cloud “vision,” regardless of whether that vision has anything to do with an actual cloud computing service.

According to Gartner’s 2011 CIO Agenda Survey, technology executives expect to expand their use of cloud and software-as-a-service (SaaS) technologies significantly. Three percent of executives currently have the majority of their IT systems running in the cloud; in the next four years, that number could leap to 43%.

There are three flavors of cloud computing that could potentially be of use to designers and engineers:

1. The hosted software model. Instead of having a licensed copy of an application on the designer’s desktop, you access the solution via the Internet. In some cases, that may mean that the application exists on the software vendor’s servers; in others, disparate locations within a company may share one instance of an application that exists on either a private or public cloud.

2. Cloud-based storage. This provides access to shared, external storage capacity.

3. Infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) cloud offerings that offer server capacity on demand.
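
To make that third model concrete, here is a minimal sketch using boto3, Amazon’s Python SDK (one IaaS provider among many; the AMI ID and instance type are placeholder assumptions, not recommendations): request a compute node on demand, run a job, then release it.

    import boto3

    # Connect to the EC2 API in one region. Credentials come from the
    # standard AWS configuration (environment variables or ~/.aws).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request one compute-optimized node on demand. The AMI ID below is a
    # placeholder; substitute an image from your own account.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="c5.4xlarge",        # sized for a simulation batch job
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # ... copy input data up, run the job, pull results back down ...

    # Release the capacity so billing stops (see the cost notes later on).
    ec2.terminate_instances(InstanceIds=[instance_id])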

Most CAD solution providers have announced some sort of cloud or hosted software offering, but engineers have been reluctant to embrace them. For energy- and resource-hungry simulation and rendering tasks, though, the IaaS model could have particular promise for the engineering community.

“I think that these kinds of engineering use cases are some of the most immediately compelling, because they are big expenditures,” says Gartner analyst Lydia Leong. “If you can move some of the workload into the cloud, you get agility and lower costs. As a supplement or alternative approach, it’s quite attractive.”

New Offerings Come with Risk
So far, only a small percentage of design firms are using cloud-based solutions for simulations and rendering. Slightly more have taken advantage of cloud services for file storage or collaboration activities.

A few technology vendors have even developed entire solutions around this concept. UK-based Dezineforce, for example, offers a hosted high-performance computing (HPC) design simulation platform that is available as an on-site appliance or cloud-based service. Consulting firm Intelligent Fluid Solutions used it recently to simulate the optimal position for turbines on a wind farm, saving the firm the cost of investing in an HPC cluster.

NEi Software, meanwhile, has developed a cloud-based solution called Stratus that allows engineers to run basic analysis projects and view outcomes on iPhones or iPads, as well as tackle some pre-finite element analysis (FEA) tasks.

Still, moving any operation to a set of outside computing resources has inherent risks in terms of security, reliability and cost. “The classic trade-off is security and control of having your own infrastructure, vs. having the versatility of being on a shared infrastructure,” says Avi Freedman, chief technology officer of ServerCentral, a managed data center solutions provider in Chicago.

If you are thinking about tapping extra computing power in the cloud, you should conduct a thorough risk analysis to determine the potential impact of a failure or breach, and the probability of those risks. Once that’s complete, work with the service provider to establish a mitigation plan for the most critical problems.

“People have to understand what the actual risks are, and try to separate those fears from the reality,” Leong says. “Some risks can be technically mitigated, like ensuring that there is security in place. Some are part of the overall business risk that comes from having a lower-cost solution.”
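
One way to make such a risk analysis concrete is a simple expected-cost ranking. The sketch below is illustrative only; the failure modes and the probability and impact figures are assumptions to be replaced with your own estimates.

    # Each entry: (failure mode, estimated probability per year, impact in $).
    # All figures are illustrative assumptions, not benchmarks.
    risks = [
        ("Provider outage longer than 4 hours", 0.30, 50_000),
        ("Security breach exposing design data", 0.02, 500_000),
        ("Data loss with no usable backup", 0.01, 750_000),
        ("Unplanned bandwidth overage charges", 0.50, 10_000),
    ]

    # Rank by expected annual cost (probability x impact) to find the
    # problems most worth a mitigation plan with the provider.
    for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{name:40s} expected annual cost: ${prob * impact:>9,.0f}")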

A Question of Bandwidth
For companies considering moving simulations or other computing tasks to the cloud, one primary obstacle is bandwidth. Uploading massive meshes and then downloading the post-processing data files can cause a bottleneck at the end-user level, depending on the type of internal network and WAN involved. Moving a terabyte of data over the Internet to the cloud can have tremendous ramifications for performance on both ends of the transaction.

“If you are trying to do simulation and modeling, and you have to upload 3GB of raw data, and you have a DSL line in your office, that’s a bottleneck on your end that will be very difficult to manage,” Freedman says. “Users don’t necessarily need that high-speed connectivity, but the internal data center needs to be where it can get at high-speed access to the cloud.”
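
The arithmetic behind that bottleneck is easy to sketch: divide data size by link speed. The link speeds below are nominal assumptions, and real-world throughput is lower once protocol overhead is counted.

    # Transfer time = data size / link speed.
    sizes_gb = {"3 GB of raw simulation input": 3, "1 TB of results": 1_000}
    links_mbps = {"DSL uplink (1.5 Mbps)": 1.5, "100 Mbps fiber": 100}

    for size_name, gb in sizes_gb.items():
        for link_name, mbps in links_mbps.items():
            hours = gb * 8_000 / mbps / 3_600  # GB -> megabits -> hours
            print(f"{size_name} over {link_name}: {hours:,.1f} hours")

Even the 3GB upload Freedman describes takes more than four hours over a 1.5 Mbps DSL uplink; a terabyte over the same line is a matter of weeks.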

That means companies have to evaluate exactly how much data traffic they expect to generate, and how often. Bandwidth is expensive, particularly if you are moving large amounts of data to and from some set of third-party servers miles away.

“You have to evaluate whether this particular type of high-performance computing is going to work well on this type of infrastructure,” Leong says. “Look at the performance implications of not using a high-performance network interconnection. If it’s slower, will it matter if it’s cheap?”

There are some ways around this, including different approaches to compression, or accessing results remotely without actually downloading the data. Some cloud service providers also offer the option of accepting physical media, like tape drives (Amazon.com does this, for instance).
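
As a rough illustration of the compression option, the sketch below (the file path is a placeholder) gzips a result file before transfer and reports the size reduction. How much this helps depends entirely on how compressible your meshes and result sets are, so measure with your own data.

    import gzip
    import os
    import shutil

    src = "results/run_042.dat"  # placeholder path to a result file
    dst = src + ".gz"

    # Stream-compress the file before uploading it to the provider.
    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

    ratio = os.path.getsize(dst) / os.path.getsize(src)
    print(f"Compressed copy is {ratio:.0%} of the original size")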

You also have to ensure that the provider can offer the computing capacity you need across its entire infrastructure, as well as evaluate the way the provider handles resource allocation.

“How are virtual machines sharing physical resources?” asks Kevin Mills, senior research scientist in the Information Technology Laboratory at the National Institute of Standards and Technology (NIST). “That’s something the user has no control over, and therefore you have a right to be concerned about it. Providers will tell you to test your system on their services to get a sense of how it will work. But if there’s a resource allocation decision change between the time you test it and the time you want to run it in the future, the answers may all be different.”
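
The kind of re-testing Mills describes can be as simple as timing a representative workload repeatedly and watching the spread between runs, as in this illustrative sketch. On shared infrastructure, a wide run-to-run variance can signal contention with other tenants on the same physical hardware.

    import statistics
    import time

    def workload():
        # Stand-in compute kernel; replace with a representative slice
        # of your actual simulation job.
        return sum(i * i for i in range(2_000_000))

    # Run the same job several times and compare the timings.
    timings = []
    for _ in range(10):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)

    mean = statistics.mean(timings)
    spread = statistics.stdev(timings)
    print(f"mean {mean:.3f}s, stdev {spread:.3f}s ({spread / mean:.1%} of mean)")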

The Security Debate
One of the biggest concerns companies have about outsourcing their IT is security. Many people are uneasy about working on machines that are not under the physical control of the engineer or the company.

Security breaches do happen—even to large, experienced companies like Google and Amazon. Logic dictates that cloud service providers will have the best security money can buy, but illogical things happen. To protect your business, demand transparency from the service provider as to what types of security, encryption and authentication it is employing. If there is some type of breach, there should be language in your contract indicating how the provider will respond, and how it will compensate you for your loss of service (if there is a loss) through credits. However, if the breach involves sensitive customer data of some sort, the owner of the data ultimately bears responsibility to those customers.

You should also determine what controls the provider has in place to prevent internal security breaches (limiting administrative access for employees, for example).
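
Encryption is one control you can apply yourself rather than take on faith. Here is a minimal sketch of client-side encryption using the open source Python cryptography package, so that data is opaque to the provider before it ever leaves your network. The file name is a placeholder, and key management, which is the hard part, is deliberately omitted.

    from cryptography.fernet import Fernet

    # Generate a key once and store it securely, never alongside the data.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt locally before upload; the provider only ever sees ciphertext.
    with open("model.step", "rb") as f:  # placeholder file name
        plaintext = f.read()
    ciphertext = cipher.encrypt(plaintext)

    # After download, decrypt locally with the same key.
    assert cipher.decrypt(ciphertext) == plaintext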

Evaluate the Vendor
Even if a potential service provider meets all your criteria, you still have to decide whether it will be a reliable partner, and whether it will remain in business for however long you need its services. That can be difficult to determine. There aren’t really any cloud-specific industry standards or certifications to look for, although most providers adhere to typical data center certifications, or may hold specialty certifications such as PCI (for payment card data) or HIPAA (for healthcare).

Find out whether the provider is using solutions that it has developed on its own, or if it’s using open source or other underlying technology. “With cloud computing, if you are buying from a vendor that developed internally, that has risks—and you have to be comfortable with that,” Freedman says.

Also keep a close eye on the provider’s financial solvency. Many cloud companies have entered the market with cut-rate pricing to attract new customers, but some of them are not cash flow-positive or profitable.

Have an honest discussion about what will happen to your data in the event the provider goes belly up. It may not be possible to continue using the technology after the provider has dissolved, particularly if your application or data is locked into a proprietary solution. In some instances, vendors provide what’s known as “solution escrow,” which guarantees that customers can still access the underlying code or network system architecture if the provider goes out of business, so they won’t be left high and dry.

Although IaaS or other cloud-based services could potentially help companies save money and increase performance, they have yet to establish much of a foothold in the engineering community. For companies that are considering making the move into the cloud, careful investigation and testing should be the first steps in that migration.

 

Scrutinize Contracts, Service Capabilities

Because you are handing your data over to a cloud provider, contracts have to include language about service and uptime guarantees, how security breaches or temporary loss of service will be handled, how data will be backed up and managed, and what will happen in the event you decide to switch providers or end the contract.

Gartner has identified a number of cloud service contract deficiencies: contracts that lack the typical legal, regulatory and commercial requirements of an enterprise contract; terms that are highly standardized and generally favor the vendor; contracts that are opaque and lack detail; and contracts without clear service-level commitments.

1. Determine how (or if) the provider will back up the data you send, and how many live copies of the data will exist. Companies will need to take on some of this responsibility by creating their own back-ups, or even using a different cloud provider to handle back-ups.

“In a dedicated cloud environment, customers almost always design in a back-up to their data center, even if it’s asynchronous,” says Avi Freedman, chief technology officer of ServerCentral. “It really varies widely. Some companies keep a repository, but they don’t have the server capacity to turn everything on. Most companies don’t do that because the whole reason they are outsourcing is that they don’t want to run that kind of power in their own data center.”

2. Examine the provider’s ability to keep an eye on your resource utilization and possibly provide alerts. “What kind of monitoring do they do?” Freedman asks. “Will they monitor the application or performance to let you know you are capacity-constrained?”

If there is some kind of failure, how quickly can the vendor recover your data? How does it measure performance and uptime? Application performance may vary based on the geographic location of the servers and the system architecture.

Every vendor seems to measure this differently, but sites like CloudSleuth (CloudSleuth.net) and CloudHarmony (CloudHarmony.com) have attempted to evaluate the performance of cloud providers under various use cases so that users can create realistic comparisons. CloudHarmony, in particular, has undertaken a number of benchmarking studies.

3. Pay particular attention to how the company awards credits in the case of an outage, and how long the continuous outage threshold is before credits are issued. These outage thresholds can range from 30 minutes to several hours, depending on the vendor.

4. Price is also important, but can vary wildly among vendors. There are generally charges based on computing use, storage capacity, and bandwidth usage, but rates can be applied in a variety of ways.

5. Monitor usage carefully, particularly when engineers in disparate locations are accessing services. “If your side of a computing job takes up 100 servers, and you forget to turn them off when you’re done, that will be a big bill at the end of the month,” says Gartner analyst Lydia Leong. “People severely underestimate how much user management will be required. You can’t let all the engineers loose to buy whatever they want without supervision.”
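
The arithmetic behind Leong’s warning is sobering. Using an assumed hourly rate (check your provider’s actual pricing):

    servers = 100          # nodes left running after the job finished
    hourly_rate = 0.68     # assumed $/server-hour; check actual pricing
    hours = 24 * 30        # forgotten for a month

    bill = servers * hourly_rate * hours
    print(f"{servers} servers x {hours} h x ${hourly_rate}/h = ${bill:,.0f}")

At that illustrative rate, a hundred forgotten servers cost nearly $49,000 in a month.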

Other factors that can affect cost are a lack of scalability and software license issues. Licensing is a challenge that the cloud community has yet to completely resolve: If you use a flexible cloud infrastructure to run an application, what ramifications will that have for the number of software licenses you have purchased? Depending on what you’re using a cloud provider for, investigate the licensing issues with all of the vendors involved.

Last but not least, the service level agreement (SLA) is critical, because it will spell out what you can expect from the vendor—and what it will do if it doesn’t measure up to those expectations. If the SLA is vague, ask for specifics in writing.


Brian Albright is a freelance journalist based in Columbus, OH. He is the former managing editor of Frontline Solutions magazine, and has been writing about technology topics for 14 years. Contact him via [email protected].
