
INSIGHTS: IT Modernization

Feature articles & full report available for download at: www.FCW.com/InsightsITModern

Inside

Debunking the myth

Changing threats require a new approach to cybersecurity

Virtualization and the next-generation data center

Tech refresh: Finding the right balance

To cloud or not to cloud?

Online report sponsored by:

Debunking the myth: Unified communications

As organizations begin to rely more heavily on unified communications — the management of voice, video and messaging through one unified system — many have developed concerns about the security of this IP-based communications infrastructure.

These concerns stem from several factors — not only the fact that UC is IP-based but that there are so many potential modes of communication, from video, instant messaging and Web collaboration to presence, e-mail and voice mail. Complicating the situation is the proliferation of mobile devices being used more frequently in business environments, devices that often aren’t as secure as those housed in the business environment.

One of the biggest security concerns in UC is eavesdropping, the idea that external parties can infiltrate the IP connection to eavesdrop on a Web conference, instant message exchange or other communication medium. The biggest concern is when organizations extend their UC capabilities outside their boundaries to external partners. Although there are no ironclad solutions for preventing eavesdropping, experts recommend employing the highest level of authentication and encryption techniques.
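To make that advice concrete, here is a minimal Python sketch of the kind of transport protection experts have in mind: the signaling connection is wrapped in TLS with certificate verification, so the far end is authenticated and the exchange is encrypted in transit. The gateway host name is a hypothetical placeholder, and a real deployment would rely on the UC platform's own hardening options rather than hand-rolled sockets.

```python
# Minimal sketch: protecting a UC signaling connection with TLS and
# certificate verification. Host name is a hypothetical placeholder.
import socket
import ssl

SIGNALING_HOST = "uc-gateway.example.org"  # hypothetical UC gateway
SIGNALING_PORT = 5061                      # conventional port for SIP over TLS

# Require certificate validation and a modern protocol version.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True

with socket.create_connection((SIGNALING_HOST, SIGNALING_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=SIGNALING_HOST) as tls_sock:
        # Signaling sent from here on is encrypted, and the gateway's identity
        # has been validated against its certificate chain.
        print("Negotiated", tls_sock.version())
```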

SIP trunking — a service that allows organizations to use voice over IP through an Internet connection — also has created a lot of concern. Simply put, when organizations move from a digital connection to an IP-based connection to receive and make phone calls, concern about hacking grows.

“The thinking is that before, you had an island where there was no way for anybody to hack across the connection from your phone system to your service provider because it was a digital connection, but now, with an Internet-based connection, it’s easier for someone to hack across or monitor where your calls are going or who you are talking to,” said Irwin Lazar, vice president and service director at Nemertes Research, based in Mokena, Ill.

The best way to mitigate this concern, Lazar said, is to make sure that your system includes SIP-aware firewalls or session border controllers as protective mechanisms. In addition, there are many products in the SIP security market that will help mitigate risks.
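As a rough illustration of what "SIP-aware" means in practice, the toy Python filter below parses the request line of a SIP message and admits only expected methods from known peers. The peer addresses and method list are invented for the example; real SIP-aware firewalls and session border controllers also track call state, manage media ports and handle NAT traversal.

```python
# Illustrative sketch only: a toy "SIP-aware" admission check.
# Peer addresses and policy values are invented for the example.
TRUSTED_PEERS = {"203.0.113.10", "203.0.113.11"}   # example addresses (TEST-NET-3)
ALLOWED_METHODS = {"INVITE", "ACK", "BYE", "CANCEL", "REGISTER", "OPTIONS"}

def admit(source_ip: str, raw_message: bytes) -> bool:
    """Return True if the SIP request should be passed to the call server."""
    if source_ip not in TRUSTED_PEERS:
        return False
    try:
        request_line = raw_message.split(b"\r\n", 1)[0].decode("ascii")
        method, _uri, version = request_line.split(" ", 2)
    except (UnicodeDecodeError, ValueError):
        return False                      # malformed traffic is dropped
    return method in ALLOWED_METHODS and version.startswith("SIP/2.0")

print(admit("203.0.113.10", b"INVITE sip:alice@example.org SIP/2.0\r\n..."))  # True
print(admit("198.51.100.9", b"INVITE sip:alice@example.org SIP/2.0\r\n..."))  # False
```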

One concern that is as old as UC itself is toll fraud, in which hackers make their way into a VOIP network and use it to make long-distance calls at an organization’s expense. Another concern is vishing, or voice phishing, in which hackers use voice e-mail, VOIP, a land line or cell phone to gather sensitive information.

Another growing concern is denial of service, an attack method most often identified with the Internet but one that now threatens UC as well. In the world of UC, it means flooding a system so that the UC infrastructure comes to a standstill.

The proliferation of mobile devices in the workforce has made them the newest entrant into the UC infrastructure. They are a valuable addition, allowing workers to participate in meetings and collaborative activities from wherever they are, but they also present challenges. An organization that allows employees to use their own cell phones, for example, has to worry about password protection, how to wipe the data from the phone if it’s lost, and how to make sure call data records aren’t compromised.

A research brief from Aberdeen Group on secure UC discusses the threat of mobile technology in the UC infrastructure. Its analysts share concerns about the physical security risk, where sensitive enterprise data can be compromised or exposed to uncontrolled or noncompliant software applications; the access control challenge of exposing the UC infrastructure that is used to communicate with mobile endpoints; and the communication itself, which travels on the Internet and noncontrolled public access points, such as Wi-Fi hot spots and Internet cafés.

Aberdeen recommends taking four steps to secure mobile devices in a UC environment:

• Securely authenticate all mobile users of organizational assets.


• Implement remote security management.
• Implement end-to-end message and data encryption.
• Install remote device lock and remote device kill in case of theft or loss.

For the device, best practices include shutting down unused services and ports and changing default passwords. For the network, best practices include deploying firewalls, router access control lists, virtual local-area networks, port-level switch security and authenticated network access. Other proactive moves include implementing host- and network-based intrusion detection and prevention systems or proxy servers to protect SIP trunking.
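A hedged sketch of how those device-level checks might be automated follows. The inventory format, port list and policy values are invented; a real program would pull this data from a configuration-management or network-access-control system rather than hard-coded dictionaries.

```python
# Hedged sketch: a toy hardening audit against the best practices above.
# The device record and policy values are invented for illustration.
REQUIRED_CLOSED_PORTS = {23, 69, 161}          # e.g., telnet, tftp, legacy snmp
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password")}

def audit_device(device: dict) -> list[str]:
    """Return a list of findings for one device record."""
    findings = []
    open_ports = set(device.get("open_ports", []))
    for port in open_ports & REQUIRED_CLOSED_PORTS:
        findings.append(f"unneeded service listening on port {port}")
    if (device.get("username"), device.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("factory-default credentials still in place")
    if not device.get("acl_applied", False):
        findings.append("no access control list applied to device interface")
    return findings

print(audit_device({"open_ports": [22, 23], "username": "admin",
                    "password": "admin", "acl_applied": False}))
```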

But it’s not all bad news. Security for UC has come a long way in the past few years, and it’s only getting better. Not only are SIP security capabilities much improved, but there is a lot of interest in security certificate authentication mechanisms. With this in place, users placing a call over an IP network would be able to validate the identity of the person on the other end.


Changing threats require a new approach to cybersecurity

During the past several years, the type and frequency of cyberattacks have changed dramatically. Today, organizations are subject to increasingly sophisticated intrusion tactics that are more destructive and malicious than before. Newer technologies such as cloud computing, social networking and the proliferation of mobile devices also have provided new opportunities for hackers to find and exploit vulnerabilities.

According to the Ponemon Institute, malicious attacks — from inside and outside organizations — were the root cause of 31 percent of data breaches in 2010, up from 24 percent in 2009 and 12 percent in 2008. What’s more, the institute found, cyberattacks are costing organizations $214 per compromised record and an average of $7.2 million per data breach event.

The problem in the federal government is no less significant. Recent research from Input noted that during the past year, federal agencies have experienced a 78 percent growth in cyber incidents.

The government is on the case. Last year, the military created the Cyber Command, complete with a four-star general to lead it and a mission of protecting computer networks from cyberattacks. And in May, the Obama administration proposed a plan in which the Homeland Security Department would coordinate with the private sector to increase cybersecurity.

However, organizations can’t rely on those advances. The changing threat landscape requires organizations to take a new approach and perhaps implement new technology to keep pace.

The most important thing, said Balaji Srimoolanathan, research manager and senior security consultant at Frost & Sullivan, is to take an end-to-end approach to cybersecurity. For too long, he said, organizations have employed different solutions for physical and information security that can leave gaping holes in a security strategy.

“Rather than gathering a bunch of disparate solutions, get an end-to-end solution from one vendor that includes everything from a complete audit of the data network and IT infrastructure to physical security,” he recommended.

At the very least, that includes a vulnerability assessment and an IT/networking audit, followed by identity management, deep packet inspection and behavioral filtering.

Identity management, which ensures that users can access only the data and applications they have clearance to access, is a baseline technology every organization should employ. It generally works by assigning roles to users on the system; each role has a different level of authorized access to content and areas of the network. It also makes extensive use of biometrics, digital certificates and user name/password combinations.
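A minimal sketch of that role-based model, with invented role and permission names, might look like the following; production identity management systems layer directories, biometrics and certificates on top of this core idea.

```python
# Minimal role-based access sketch: roles map to permission sets, and an
# access check compares a user's roles against what the resource demands.
# Role and permission names are invented for illustration.
ROLE_PERMISSIONS = {
    "analyst":    {"read:case-files"},
    "supervisor": {"read:case-files", "write:case-files"},
    "admin":      {"read:case-files", "write:case-files", "manage:accounts"},
}

def is_authorized(user_roles: set[str], required_permission: str) -> bool:
    """True if any of the user's roles grants the required permission."""
    return any(required_permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_authorized({"analyst"}, "write:case-files"))     # False
print(is_authorized({"supervisor"}, "write:case-files"))  # True
```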

Deep packet inspection technology is another tool in the arsenal. Many think deep packet inspection, which examines the individual digital packets that make up data or e-mail messages sent via the Internet, is the best way to provide security for IP traffic. The technology generally is used to help organizations monitor and manage network traffic while identifying and blocking security threats.

That’s especially true for the government market. According to a report released earlier this year by Market Research Media, government-related IP traffic will quintuple between 2010 and 2015. That growth, along with exponential growth in data processing power and new cyber threats, has increased the deployment of deep packet inspection technologies in U.S. government agencies. Its use is growing so fast that the study called deep packet inspection “a major line of cyber defense for years to come” in government organizations.
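The toy Python function below illustrates the basic mechanic of deep packet inspection: looking past the headers into the payload and blocking traffic that matches known bad signatures. The signatures and the packet representation are invented; real inspection engines work on reassembled streams at line rate and draw on large, curated signature sets.

```python
# Illustrative sketch, not a production inspection engine. Signatures and
# the decoded-packet format are invented for the example.
BAD_PAYLOAD_SIGNATURES = [b"cmd.exe /c", b"<script>alert(", b"' OR '1'='1"]

def inspect_packet(packet: dict) -> str:
    """Return 'allow', 'block', or 'flag' for one decoded packet."""
    payload: bytes = packet.get("payload", b"")
    for signature in BAD_PAYLOAD_SIGNATURES:
        if signature in payload:
            return "block"                      # known-bad content in the payload
    if len(payload) > 65_000:
        return "flag"                           # oversized payload: send to analyst
    return "allow"

sample = {"src": "198.51.100.7", "dst": "192.0.2.20",
          "payload": b"GET /index.html?q=' OR '1'='1 HTTP/1.1"}
print(inspect_packet(sample))                   # block
```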

Behavior-based filtering is another burgeoning tool to help combat cyberattacks. Basically, it’s a way to track the behavior of people who are accessing content.

“If a particular user tends to access a particular file three or four times in a given period of time but on one day tries to access it more than 20 times, the system would trigger a notification,” Srimoolanathan said. “It’s common in the defense world today, but it will be coming to the commercial market in three to four years.”
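A hedged sketch of the rule Srimoolanathan describes appears below: compare today's access count for a file against a user's historical baseline and raise an alert when it spikes well beyond that norm. The threshold multiplier, the event format and the sample data are all invented for illustration.

```python
# Hedged sketch of baseline-versus-today anomaly detection for file access.
# Thresholds and the event source are illustrative only.
from collections import Counter
from statistics import mean

def detect_anomalies(history: dict[str, list[int]], today: Counter,
                     multiplier: float = 5.0) -> list[str]:
    """Return user:file keys whose access count today far exceeds the baseline."""
    alerts = []
    for key, daily_counts in history.items():
        baseline = mean(daily_counts) if daily_counts else 0.0
        if baseline and today.get(key, 0) > multiplier * baseline:
            alerts.append(key)
    return alerts

history = {"jdoe:design.docx": [3, 4, 3, 4, 3]}      # typical days
today = Counter({"jdoe:design.docx": 22})            # today's tally
print(detect_anomalies(history, today))              # ['jdoe:design.docx']
```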

Another up-and-coming factor that should improve cybersecurity is the Security Content Automation Protocol (SCAP), developed by the National Institute of Standards and Technology. The goal of SCAP, which is endorsed by major security vendors, is to create a standardized way of maintaining the security of enterprise systems by providing a common format for identifying, expressing and measuring security data. Many think that after SCAP is fully ratified and becomes integrated into off-the-shelf security solutions, it will greatly improve cybersecurity.
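The toy example below is only meant to convey the underlying idea of machine-readable, uniformly evaluated security checks; it uses none of the real SCAP formats (XCCDF, OVAL and related specifications), and the rule IDs and system snapshot are invented.

```python
# Toy illustration of the idea behind SCAP-style automation: expectations
# expressed as machine-readable rules that any tool evaluates the same way.
# NOT real SCAP content; rule IDs and the snapshot are invented.
RULES = [
    {"id": "rule-001", "title": "Password minimum length >= 12",
     "check": lambda state: state["password_min_length"] >= 12},
    {"id": "rule-002", "title": "Telnet service disabled",
     "check": lambda state: "telnet" not in state["enabled_services"]},
]

def evaluate(system_state: dict) -> list[dict]:
    """Evaluate every rule and return pass/fail results in a uniform shape."""
    return [{"id": r["id"], "title": r["title"],
             "result": "pass" if r["check"](system_state) else "fail"}
            for r in RULES]

snapshot = {"password_min_length": 8, "enabled_services": {"ssh", "ntp"}}
for result in evaluate(snapshot):
    print(result["id"], result["result"])    # rule-001 fail, rule-002 pass
```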


Virtualization and the next-generation data center: natural companions

There are few data centers around that haven’t implemented some level of virtualization. Organizations are turning to virtualization to help reduce costs, free floor space, and save on power and cooling.

For most organizations, the first parts of the data center to be virtualized are servers and storage. According to Tier1 Research, about 70 percent of organizations will have virtualized servers in their data centers by 2013 thanks to technology maturation and market acceptance. Storage virtualization is also growing exponentially. Virtualizing storage combines all drives into one centrally managed resource and allows for more consistent management.

But that’s only the tip of the iceberg. Now that server and storage virtualization have become more or less commonplace in the data center, organizations are hoping to gain similar benefits through other types of virtualization — virtualization of network components and other data center components.

“Virtualization is the catalyst that will allow the data center to be seen as a system instead of individual parts,” said Dave Ryan, chief technology officer and vice president of General Dynamics Information Technology’s Navy and Air Force division. “In five years, you won’t look at a server and wonder how much capacity it has or wonder how much storage is in a single storage-area network. Instead, you’ll look at the entire data center as a comprehensive system. It’s a technical bridge we’re still crossing, but virtualization is what will enable it.”

The idea of the data center as a system is one that has many vendors excited. It’s not easy because it requires not only significant virtualization but also coordination of everything in the data center — applications, servers, networks and storage — in a fully automated way.

Jason Schafer, a senior analyst at Tier1 Research, said the industry is getting there.

“Within three to six years, the virtualized/hypervisor layer of the data center will be integrated with the physical/power and cooling layers,” he said. “When you bring them together, you get a level of intelligence and functionality that you wouldn’t otherwise have.”

Part of what will enable this coordination is something called data center infrastructure management (DCIM) — a system of software, hardware and sensors that, when implemented, creates the ability to monitor and manage the data center environment in real time through aggregated metrics, analytics, controls and automation.

Take the simple example of a belt failing in an air conditioner. The air conditioner senses the failure and communicates the loss of cooling in that sector of the data center. The system could then migrate the load of the virtual machines in that sector to another sector until the issue is resolved.

“It will make the data center as a whole a machine, rather than a bunch of individual parts,” Schafer said. “And you can’t have that level of sophistication without virtualization.”
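A hedged Python sketch of that cooling-failure scenario follows. The zone inventory and headroom figures are invented stand-ins for what a real DCIM/hypervisor integration would expose; the point is only the control loop of sensing a fault and shifting virtual machine load away from the affected sector.

```python
# Hedged sketch: react to a cooling fault by moving VMs to the zone with the
# most spare cooling headroom. The data model is a toy stand-in for a real
# DCIM/hypervisor integration.
ZONES = {
    "zone-a": {"cooling_ok": False, "thermal_headroom": 0,  "vms": ["vm1", "vm2"]},
    "zone-b": {"cooling_ok": True,  "thermal_headroom": 40, "vms": ["vm3"]},
    "zone-c": {"cooling_ok": True,  "thermal_headroom": 25, "vms": []},
}

def handle_cooling_fault(failed_zone: str) -> None:
    """Move VMs out of a zone that just lost cooling."""
    healthy = {name: z for name, z in ZONES.items()
               if name != failed_zone and z["cooling_ok"]}
    if not healthy:
        print(f"ALARM: no healthy zone; load stays in {failed_zone}")
        return
    for vm in list(ZONES[failed_zone]["vms"]):
        target = max(healthy, key=lambda n: healthy[n]["thermal_headroom"])
        ZONES[failed_zone]["vms"].remove(vm)
        ZONES[target]["vms"].append(vm)
        healthy[target]["thermal_headroom"] -= 10     # rough cost per migrated VM
        print(f"Migrated {vm} from {failed_zone} to {target}")

handle_cooling_fault("zone-a")   # vm1 -> zone-b, vm2 -> zone-b (still most headroom)
```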

Beyond the work that remains at the technology level, there is one big problem to be solved: the physical data center itself. Traditionally, the solution to rising demand for resources was simply to expand data center capacity. However, it can take two years or more to build such a facility, and time is money.


One way to reduce time and cost is to include prefabricated modular components, such as the power and cooling infrastructure. Schafer said modularization at this level can reduce data center build time from two years down to four to six months.

The next step is installing the analytics needed to give data center managers automated, intelligent operations and controls. Along with that comes trend analysis, which lets managers see peaks and growth rates over time and, in turn, plan more accurately for future capacity.

“You can easily see, for example, that over the past three months you have increased capacity by X amount. If you maintain that same growth rate, you can easily determine that you will be over capacity in nine months,” Schafer said. “They can use that intelligence to help stay ahead of the game.”
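Expressed as a quick calculation, that kind of projection looks like the sketch below; the utilization figures and the assumption of linear growth are illustrative only.

```python
# Hedged sketch of capacity trend projection: extend the observed monthly
# growth rate forward and report when capacity runs out. Figures are invented.
def months_until_full(current_pct: float, monthly_growth_pct: float,
                      ceiling_pct: float = 100.0) -> float:
    """Months remaining before utilization reaches the ceiling (linear growth)."""
    if monthly_growth_pct <= 0:
        return float("inf")
    return (ceiling_pct - current_pct) / monthly_growth_pct

# Example: 64% utilized today, growing 4 percentage points per month.
print(round(months_until_full(64.0, 4.0), 1))   # 9.0 months of headroom left
```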

In the end, the goal is to create more agile, flexible and scalable data centers that function as on-demand environments. With virtualization and next-generation data center automation in place, and with bottlenecks such as the physical build-out and the lack of analytics removed, that goal might be only a few years away.


Tech refresh: Finding the right balance

It’s a recurring question for every organization: When it’s time to buy new servers, network switches or any other type of technology, how can you balance features, functions and cost?

When a tech refresh beckons — whether it’s hardware, software, networking or telephony — it’s tempting to go with the most feature-rich option. Sometimes that makes sense, but it could be overkill. Many factors come into play: budget, time, available options, and the technology’s ability to fulfill the organization’s core mission.

“The last thing you want is buyer’s remorse — investing in that server or laptop refresh and discovering a few months later that if you had just held out a little longer, you could have gotten more features or paid less, or that what you chose doesn’t give you the power or ROI you need,” said Charles King, president and principal analyst at Pund-IT. “So take a close and careful strategic look at what you have, where you are, and where you want to be.”

The type of technology you’re considering refreshing is a big factor. The most flexible is hardware, such as servers, desktop PCs and laptops. For hardware, the question is how much of a premium price you’re willing to pay for premium performance.

King described the case of a customer at a midsize organization in the midst of a server consolidation and virtualization process. Instead of waiting a short time for the newest servers to hit the market, the IT manager chose to purchase the previous generation of servers at a significant discount. Although the IT manager recognized the newer servers would have delivered a 20 percent bonus in overall performance, the price the organization was able to negotiate on the previous models more than made up for the slightly less robust performance those systems would deliver.

“It’s like buying last year’s model car after the new models come in,” King said. “Sometimes it can make sense.”
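A back-of-the-envelope version of that comparison is sketched below. The prices are invented and only the 20 percent performance difference comes from the anecdote; the point is simply to weigh cost against delivered performance rather than buy on features alone.

```python
# Hedged sketch of the price/performance trade-off King describes.
# All dollar figures are invented; only the 20% delta is from the anecdote.
new_gen_price, new_gen_perf = 12_000, 1.20     # newest servers, 20% faster
prev_gen_price, prev_gen_perf = 7_500, 1.00    # discounted previous generation

new_cost_per_perf = new_gen_price / new_gen_perf       # 10,000 per perf unit
prev_cost_per_perf = prev_gen_price / prev_gen_perf    #  7,500 per perf unit

better = "previous generation" if prev_cost_per_perf < new_cost_per_perf else "newest"
print(f"Better price/performance: {better} "
      f"({prev_cost_per_perf:,.0f} vs {new_cost_per_perf:,.0f} per perf unit)")
```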

Networking is another area in which it can pay to consider something less than the most expensive technology. Often, the improvements in network switching technology, for example, support a larger number of users or a higher level of bandwidth than previous versions. If you don’t have those needs, you might be satisfied with reliable but not top-of-the-line switches.

But that approach doesn’t work with all technology. Organizations should always purchase the latest version of software. Even when earlier versions are available at a discount, vendors routinely upgrade their software with newer features that are often very useful. And many times, vendors pull older versions out of circulation as soon as they release a new version, so there might be no choice but to buy the latest one.

Budgets, of course, dictate how far up the food chain you go. If it doesn’t make sense to settle for second best, there are other options, including delaying the upgrade cycle, changing the way you pay for services such as maintenance, or considering cloud-based options, which follow a pay-as-you-go model and require nothing up front.

“You have to consider these things, especially cloud, when it’s time for a tech refresh,” said R. Ray Wang, CEO and principal analyst at Constellation Research.

There are also times when nothing but the latest and greatest — no matter the cost — will do. When a technology is part of a mission-critical operation, for example, there is really no choice. If the technology is in a call center that supports a disaster response in a major metropolitan area, for instance, paying the premium is worth it because the potential price of failure is extremely high.

Security is another reason for state-of-the-art technology. As threat levels increase and information thieves become more aggressive and sophisticated, organizations must use the best tools possible to protect data and citizens.

But before considering any of those issues, do your due diligence, Wang said. That means surveying customers to make sure that what you get fits current and future needs. “Determine a five-year road map and base your decisions on that,” he said.

In addition, pick the brains of systems integrators and vendors. “It’s worth talking to them one at a time and asking them how they would proceed,” King said. “The more information you get, the better the decision you will end up making.”


To cloud or not to cloud?

There is a lot of pressure to move software, computing platforms and technology infrastructure to the cloud. In the federal government, the pressure is growing each year. Last year, Federal CIO Vivek Kundra announced the cloud-first plan, which requires federal agencies to consider cloud computing for initiatives before other options.

“Cloud computing services help to deliver on this administration’s commitment to provide better value for the American taxpayer by making government more efficient,” Kundra said in October 2010.

But does it make sense to move everything to the cloud? Although the promises of cost savings, scalability and flexibility of cloud-based solutions are alluring, there are times when it might make sense to keep software, platforms or infrastructure in-house — at least for now.

There are three basic types of cloud scenarios: software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS). Here is a brief definition of each.

SaaS: A model in which an organization basically rents an application, paying a flat monthly fee based on the number of transactions, users or employees.

IaaS: The idea of off-loading the guts of an organization’s data center, such as servers and networking, to a cloud provider. It’s attractive to organizations that don’t want to manage their infrastructure, undertake an infrastructure upgrade or deal with scalability issues and would prefer to off-load that responsibility to a third party.

PaaS: A cloud service that consists of an entire platform — user interfaces, workflow engines, database services and security/authentication — complete with tools to walk you through the application-building process.

One school of thought takes these categories as whole entities, preferring to move, for example, software to the cloud, while holding out on infrastructure and platforms.

But it doesn’t make a lot of sense to evaluate the cloud based on major categories such as SaaS, IaaS and PaaS, said Bob Tarzey, an analyst at UK-based Quocirca.

“There is no calculation that says that PaaS is more economically viable than SaaS, for example,” Tarzey said. “It’s more about factors like in-house IT expertise, what’s core to the business, security and yes, eventually, cost. But you have to evaluate everything on a case-by-case basis.”

The amount and type of IT expertise you have on staff is an important factor in the evaluation. If the organization requires a new piece of software that the IT staff does not support, accessing that application in the cloud via the SaaS model makes a lot of sense. The same can be true of other types of cloud offerings.

“If your organization has an ample development team in-house and has a project to develop a new application, they have to choose a platform and a language. Sometimes, it can make more sense to try different approaches with multiple platforms and languages using PaaS,” said Mark Bowker, a senior analyst at Enterprise Strategy Group. “It doesn’t mean you don’t have the skill set, but it can really help with time to market because the platform is available and flexible, and you’re not locked into a specific platform, operating system or environment.”

When it comes to software, some organizations prefer to keep those applications that are core to the mission in-house and send other routine applications, such as accounting and e-mail, to the cloud.

Security is another important factor. Organizations that must adhere to specific regulations, such as the Health Insurance Portability and Accountability Act, are more apt to keep those applications in-house to avoid potential security issues. In addition, public-facing websites are more likely to be managed in the cloud than sites with sensitive information, added Greg Sanchez, chief technology officer of the Civilian and Homeland Security Division at General Dynamics IT. Many agencies, including the Internal Revenue Service, White House, General Services Administration, Homeland Security Department and Health and Human Services Department, have done this.

Instead of making blanket decisions about what types of applications, platforms and infrastructure are good fits for the cloud, Bowker said he recommends evaluating everything on a case-by-case basis.

“It’s when you have a specific issue like an application upgrade or a need to access new resources, or an entirely new project, that can be used as a trigger point to consider the cloud,” he said.

If all things are equal, looking at the economics certainly makes sense, Tarzey added. When you do, look at the total stack: physical, software and people. “It’s pretty easy to do the math,” he said.
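For a sense of what “doing the math” involves, here is a hedged sketch that compares an annualized in-house stack (amortized hardware, licenses and staff) with a pay-as-you-go subscription plus one-time migration effort. Every figure is invented; a real evaluation would plug in the organization’s own numbers and add factors such as compliance, data migration and refresh cycles.

```python
# Hedged sketch of a total-stack cost comparison. All figures are invented.
def in_house_annual(hardware_capex: float, years_of_life: float,
                    software_licenses: float, staff_cost: float) -> float:
    """Annualized in-house cost: amortized hardware plus recurring items."""
    return hardware_capex / years_of_life + software_licenses + staff_cost

def cloud_annual(monthly_subscription: float, migration_onetime: float,
                 years_of_life: float) -> float:
    """Annualized cloud cost: subscription plus amortized migration effort."""
    return monthly_subscription * 12 + migration_onetime / years_of_life

on_prem = in_house_annual(hardware_capex=250_000, years_of_life=5,
                          software_licenses=60_000, staff_cost=120_000)
cloud = cloud_annual(monthly_subscription=14_000, migration_onetime=90_000,
                     years_of_life=5)
print(f"In-house: ${on_prem:,.0f}/yr  Cloud: ${cloud:,.0f}/yr")
```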
