
BEAR Bytes July 2014


BEAR DATA SOLUTIONS SUMMER 2014

VISIT US AT BEARDATASOLUTIONS.COM

The Big Store: Storage for the Internet of Things

BYOD Reality? Per-App VPN Is Taking Off

“Software Is Eating the World”: Is Enterprise Application Development Keeping Up?

Mix It Up: Challenges in the Hybrid Cloud Model

THE TALENT SEARCH: CoreLogic’s Eric Ring on finding the right tech talent as his company grows.

BEAR BYTES


Some look at an empty field and see an empty field.

Others look at that same field and see homes with families

or thriving businesses.

Bring Your Ideas to Life

800-718-BEAR | beardatasolutions.com/emptyfields

BEAR Data Application Development


We’re not just making servers. We’re making server history. While innovation comes rapidly in the IT industry, basic server architectures haven’t changed for decades. That’s why Cisco introduced the Cisco Unified Computing System, which integrates compute, networking, storage access, and virtualization. IT departments dramatically reduce data center complexity while:

• Lowering operating costs by up to 30%.

• Reducing deployment times from weeks to minutes.

• Harnessing the power of over 30 world-record performance benchmarks.

The Cisco Unified Computing System signals the next evolution of the data center - where everything, and everyone, works together like never before.

Find out more at www.cisco.com/go/servers

©2011 Cisco Systems, Inc. All rights reserved.

BUILT FOR THE HUMAN NETWORK


in this issue

FOUNDER AND CEO: Don James Jr.
VP OF MARKETING: Josh K
CREATIVE: H2andCo, Lauren Ladoceour, Bharath Natarajan

Questions? Please contact:
BEAR Data Solutions, 128 Spear St, 1st Floor, San Francisco, CA 94105, (800) 718-BEAR

6 SIMPLETEXT
8 15 MINUTES
14 THE CLOUD
16 NETWORKING
18 VIRTUALIZATION
20 STORAGE
24 PRODUCT HIGHLIGHTS
31 BACKUP

Everyone knows how difficult it has become to find, recruit, and retain top technology talent. Many companies turn to recruiters or staffing firms to assist with their hiring, often with little to show for the time invested and without finding the right technology candidate. Meanwhile, critical IT positions remain open.

Five years ago, some of our best clients began to ask for a service around people that went beyond traditional technology. They wanted assistance from our experts in filling a role in their IT organization.

Responding to this, we acquired a staffing company and integrated it into the BEAR Data Solutions services portfolio. Today our technical recruiting and placement service provides a unique value proposition. The difference between BEAR Data and a recruiter or competing staffing company is that we offer our technology expertise to find, vet, and place candidates in your organization rapidly.

This issue of BEAR Bytes is focused on helping you find the best-suited IT talent for your company. We also profile our customer Eric Ring from CoreLogic.

We hope that this issue of BEAR Bytes provides you with new ideas, technologies, and solutions that can help your organization prosper. Thank you for your continued support. And as always, I welcome your comments and feedback about any and all aspects of BEAR Data.

Sincerely,

Don James Jr., Founder and CEO
[email protected]

a bit from the CEO

“WE OFFER OUR TECHNOLOGY EXPERTISE TO FIND, VET, AND PLACE CANDIDATES IN YOUR ORGANIZATION RAPIDLY”


Sponsored Cover


This summer, BEAR Data Solutions clients joined leaders from Cisco Systems at the San Jose Executive Briefing Center to learn about the latest in Cisco’s architectural approach to Collaboration, Data Center, Security, Enterprise Networks, and best practices.

Topics included:

• Cisco Collaboration Architecture and Strategy

• Cisco Enterprise Network: Transforming IT

• Cisco Security Overview

• Cisco Data Center Overview

Clients viewed special demonstrations, listened to keynotes and participated in hands-on technology labs. The day was capped by an executive dinner at the famous Birk’s restaurant.

800.718.BEAR

Cisco Executive Briefing Center Event with BEAR Data Solutions

Are you interested in attending the next Cisco Executive Briefing Center Event this fall? Contact BEAR Data Solutions.


MPLS Increases Speed and Efficiency

When looking at information transfer speeds and data sharing, Multi-Protocol Label Switching (MPLS) is a technique that can make any network more efficient and powerful. For companies and individuals who constantly transfer large amounts of information—as many do in the modern era of global, Internet-based industries—this can make a huge difference in how much gets done. To understand how MPLS increases speed and efficiency over traditional strategies, you need to know how the technique stands apart.

Traditional Data Transfers
The first thing to know is that traditional methods for data transfer involve packets of data that are sent independently, one after another. Each packet gets to the router, and then the router decides where it needs to be sent. This happens in a fraction of a second, but the router still has to take that moment to “think” about what it should do next before it can pass the data on to the end of the line or to the next router, where the process is repeated.

Packet Labels
Instead of putting this work on the router, the MPLS system simply gives each of those packets a label beforehand. These labels correspond with the destinations to which the packets are supposed to be sent. When the packet arrives at the router, the router does not have to analyze it and make a decision about where to send it; it just has to look at the label. It then quickly sends that packet to the right destination and moves on to the next. All packets that have the same labels are sent to the same place. Since the router is now just using the labels instead of making decisions, it can get all of the packets through far more quickly.

It is worth noting that most systems use a string of routers, and they can all be set up to recognize the various labels. The packet then simply has to enter the pipeline, and each router will know where to send it, utilizing the same paths as the other packets before it. The labels can be based on various criteria; as discussed, the destination is the main one, but they could also look at the Virtual Private Network (VPN) membership or the parameters for the type of service that the packet falls within. No matter how the paths are selected, though, all routers will recognize the labels.
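To make the difference concrete, here is a minimal, hypothetical sketch in Python (the interface names, prefixes, and label values are illustrative only, not taken from any real router OS) contrasting a per-packet destination lookup with a label-based forwarding lookup:

```python
# Hypothetical sketch: label switching vs. per-hop destination lookup.
# Router interfaces, prefixes, and label values are illustrative only.

# Traditional forwarding: every router inspects the destination address and
# performs a (relatively expensive) routing-table lookup for each packet.
ROUTING_TABLE = {
    "10.1.0.0/16": "interface-A",
    "10.2.0.0/16": "interface-B",
}

def longest_prefix_match(dst_ip: str) -> str:
    """Stand-in for the per-packet route decision a traditional router makes."""
    # Real routers do a true longest-prefix match; this toy version just
    # matches the first two octets against the table above.
    prefix = ".".join(dst_ip.split(".")[:2]) + ".0.0/16"
    return ROUTING_TABLE.get(prefix, "default-interface")

# MPLS-style forwarding: the ingress router assigns a label once, and every
# downstream router forwards on the label with a simple exact-match lookup.
LABEL_FORWARDING_TABLE = {
    100: ("interface-A", 200),   # (outgoing interface, outgoing label)
    101: ("interface-B", 201),
}

def forward_by_label(label: int) -> tuple[str, int]:
    """Exact-match lookup: no per-packet route computation needed."""
    return LABEL_FORWARDING_TABLE[label]

if __name__ == "__main__":
    print(longest_prefix_match("10.1.4.7"))   # interface-A
    print(forward_by_label(100))              # ('interface-A', 200)
```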

Low-Latency Paths
Another trick that helps to increase data transfer speeds is simply routing certain types of data across network paths that offer the lowest latency levels, which can be identified with the routers. The labels given to these types of data can then send them along that path, while other types of data are confined to high-latency routes - relatively speaking, of course - so that all of the data can be transferred in a manner that fits the specific needs.

For example, real-time video feeds need to be transferred as fast as possible so that the video for the end user does not skip or lag. Low latency paths are crucial. At the same time, the video feed is going to require a lot more information to be transferred than something related to text. Therefore, the real-time video packets are all going to be given labels that assign them to those low-latency paths so that they can get through efficiently, increasing the functionality of the network.

Transforming Your Network
If you have been using traditional methods and you feel that your data transfer speeds within the network are not what they should be, MPLS is a technique you should certainly consider. In the business world, this can mean not only higher speeds but increased productivity across the board. When one aspect of the system is more efficient, it allows everything else to speed up as well, so your workers can get more done and overall performance can improve. Do not allow outdated techniques to hold you back from the potential that already exists within your network.

How’s That?

simpletext


To integrate a newly acquired company into the CoreLogic infrastructure, vice president of Enterprise Technology Services Eric Ring initially went to a large global outsourcing firm. As time passed, it became clear they hadn’t found a good fit, and Ring reached out to BEAR Data’s Technical Services team to come to the rescue. He explains why he turned to BEAR Data Solutions.

photography by Tim Mantoani

Sending In the Reserves


In 2011, CoreLogic, a company that collects and analyzes real estate and financial data, acquired a business that hosts the loan origination system for one of the largest banks in the country. It was a good strategic move for California-based CoreLogic. The challenge, though, came when vice president Eric Ring, who’s responsible for all of CoreLogic’s infrastructure, application support, information security, and compliance, realized the new company’s infrastructure was in urgent need of improvement. To complete the upgrade, he needed to find, on a quick turnaround, a top-tier short-term architect with leadership skills. Here he explains how BEAR Data’s Technical Services team helped him find the right engineer for the job.


BEAR Bytes: What are some of the challenges CoreLogic has recently faced on the tech side?

Eric Ring: We have more than five thousand servers and are deep into a data center migration, combining our two main data centers and five smaller colocation facilities into two new data centers. So while the vast majority of our staff is focused on data center migration, we acquired a company that hosts the loan origination system for one of the largest banks in the country. But it needed some significant infrastructure upgrades to the primary and disaster recovery sites.

BB: What was the original plan of action?

ER: We had partnered with another technical services firm to perform these upgrades, in conjunction with our local staff. On the surface it looked like a pretty straightforward job. But what seemed like a relatively easy upgrade proved to be very complicated.

BB: What were the complications during the upgrade?

ER: First, we needed to comply with the bank’s IT service management framework, in addition to our own. All changes were scrutinized by the bank, and we were subject to scheduling in their maintenance windows. Second, there was an inordinate amount of unrelated change, in multiple software layers, that we were competing with. Third, the existing infrastructure had been managed by different people without adequate documentation. Last, we were migrating from a shared infrastructure to a dedicated infrastructure.

BB: That sounds heinous. How did the process go?

ER: Several months into the project, the partner we were using was doing more harm than good, causing a couple of production outages, making very little progress, and suggesting they needed to more than double the original work order. That’s when I jumped in and did a full review of the architecture, transition plan, and risks.


Sense and sensibility (opposite, previous page) Eric Ring; (above) The CoreLogic offices in San Diego

(BEAR) was able to deliver top talent for the full-time requirements and specialized talent we needed within three weeks.


BB: Is that when you turned to BEAR Data Solutions?

ER: We needed at least three people over six months, including a technical architect to lead this project—someone we would be comfortable putting in front of our customer’s executives and technical teams. I’d worked with BEAR Data’s EVP of Technical Services, Brian Brown, when he was with another firm, so I reached out to him. Brian looks at the human factor, not just technical ability, in his approach to staff augmentation. Brian was very responsive and had resumes in front of me in a matter of days. I also liked the flexible staffing options BEAR presented. By using a combination of bench strength and contingent labor BEAR had a long-standing relationship with, they were able to deliver top talent for the full-time requirements and specialized talent we needed within three weeks.

BB: How did the new team from BEAR Data help you reach your goals?

ER: We regained our customer’s confidence and moved into execution phase. Through a series of well-planned changes, we completed the upgrades in about six months. We finished it off with a “sustained resiliency” test where we ran the loan origination system out of the backup data center for a week and then swung back to the primary: a task that had never been successfully completed during the life of the system.

BB: What’s next for your role in the company? Are more acquisitions in your future?

ER: CoreLogic is always looking for new opportunities. Earlier this year we acquired several companies to secure strong footing in the insurance space and build our risk management portfolio. If we make more acquisitions, I’m sure we’ll work with BEAR again. The people are the differentiating factor, from the two primary contractors who came onto the project to Erin Lau and Brian at BEAR. I’m a big believer in teamwork, and that’s what made BEAR the right partner.


I’m a big believer in teamwork, and that’s what made BEAR the right partner.


Data driven (opposite) the BEAR Data Solutions San Diego offices; (above, from left) the BEAR Data Solutions Technical Services Team: Wilcy Sharer, Brian Brown, Adam Bundy, Erin Lau

BEAR Bytes: What’s your role at BEAR Data Solutions?

Erin Lau: I have a team of recruiters I work with to source, screen, and identify IT talent. I further qualify candidates and take them through a series of technical questions our engineers have provided. Then I present options to hiring managers for a project like CoreLogic’s upgrade.

BB: What makes your staffing service stand out among the competition?

EL: Our clients are already buying IT products from us. Why not have us provide the IT talent to support the products they’re buying? So we hired two delivery managers plus a team of 25 recruiters. Other competitors don’t have our technical ability to identify the right person. On paper, someone can look really good, but can they actually do what’s on their resume? So our engineers put them to the test.

BB: How big is your stable of contractors?

EL: We have a database of 60,000 candidates, and every month we add 4,000 more. Our referral base is really big. We’ve placed people in New York, Florida, North Carolina, Oklahoma, and Las Vegas, but mostly California.

BB: What do you look for in a contractor before presenting them to a company like CoreLogic?

EL: Communication is really big—being able to articulate their experience. There have been a lot of studies showing that people hire because of personality and how well someone can get along with a team. We ask them what kind of role they’ve been in before. Maybe they’ve touched a Cisco router, but have they done it in a way the client needs?

BB: What are the keys to a successful match?

EL: It’s all about finding the right person who wants to do that type of work. That involves making a lot of phone calls and screening a lot of people. We offer an hourly rate with benefits; they get to work with good technologies, doing what they like to do. It’s about matching a client and company with a person who wakes up each morning looking forward to tackling that day’s challenges.

Meet Erin Lau, director of recruitment and delivery for the team behind BEAR Data Solutions’ Technical Services.


Challenges in the Hybrid Cloud Model

cloud computing

Hybrid cloud is a mix of both public and private clouds. Enterprises have the option of hosting their valuable applications and data within their internal network while migrating non-critical functions and data to the cloud. To succeed with the hybrid cloud approach, a few challenges must be addressed, such as application complexity and security.

Confidentiality and Integrity
The major concern for companies is data security and integrity. There is only one best practice for securing data in the cloud with systems that span multiple private and public locations: encrypt the data in a way that allows all systems to continue working transparently, and maintain ownership of the data through ownership of the encryption keys.
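As a concrete illustration of that practice, the sketch below encrypts data before it ever leaves the private network, so the enterprise keeps ownership of the key. It is a hypothetical example using the widely available Python `cryptography` package; the record contents and workflow are ours, not tied to any specific cloud provider.

```python
# Hypothetical sketch: encrypt data client-side before sending it to a
# public cloud, so the enterprise retains ownership of the encryption key.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would live in an on-premises key management system;
# here we simply generate one for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"loan-application-12345: confidential payload"

ciphertext = cipher.encrypt(record)      # this is what gets stored in the cloud
# ... upload ciphertext to the public cloud via the provider's API ...

restored = cipher.decrypt(ciphertext)    # only possible with the enterprise-held key
assert restored == record
```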

Reconfiguration Planning
Planning which components and applications to migrate to the cloud is a complex problem. Factors like enterprise security policies, cost savings from migration, increased transaction delays, and wide area communication costs need to be considered during planning.

Addressing
There are difficulties when trying to link the different application components in and out of the cloud. When internal IP addresses change, the cloud providers have to alter their networking and edge devices. This challenge can become a critical limitation for dynamic deployment and agility.

Firewall
To safeguard the components moved to the cloud, it is the responsibility of the enterprise to create a firewall within the cloud and at the gateway of its own network. Firewalls need to be carefully designed to reflect the complex application interdependencies, so that only the application components that need to talk to each other are permitted to do so.

Application Security
Standards-based API calls provide significant flexibility and ease of automation. However, this also opens the door to security risks that should be addressed. It is the responsibility of the cloud provider to implement application security, and at the same time enterprises have to make sure that their API calls directed toward the cloud are secure.

“ENCRYPT THE DATA IN A WAY THAT ALLOWS ALL SYSTEMS TO CONTINUE WORKING TRANSPARENTLY AND TO MAINTAIN OWNERSHIP OF THE DATA”



Identity Management in the Cloud
As companies add more cloud services to their IT environments, the process of managing identities is getting more complex. The need for security and compliance is driving some companies to find better ways to bridge enterprise Identity and Access Management (IAM) and cloud provider applications. This is often performed by provisioning user identity through cloud-capable, federated single sign-on (SSO). Many companies are achieving this bridge through AD and LDAP connections, setting policies that can be enforced through users’ group memberships. An employee using a federated single sign-on system is given one set of credentials to access multiple cloud accounts. This user is only authorized to use those cloud accounts permitted by the group he or she belongs to. This approach aids the rapid rollout of new cloud services to large groups of users. Using AD to aggregate identities in cloud environments also speeds up the de-provisioning of cloud applications when employees leave the company or change roles.
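A minimal sketch of the group-based entitlement idea follows, assuming hypothetical group and application names. In a real deployment the group memberships would come from AD/LDAP and enforcement would happen in the SSO/federation layer rather than in application code.

```python
# Hypothetical sketch: map directory groups to the cloud applications
# their members may reach through federated single sign-on.
GROUP_ENTITLEMENTS = {
    "sales":       {"crm", "email"},
    "engineering": {"email", "source-control", "ci"},
}

def allowed_cloud_apps(user_groups: list[str]) -> set[str]:
    """Union of the cloud apps entitled by every group the user belongs to."""
    apps: set[str] = set()
    for group in user_groups:
        apps |= GROUP_ENTITLEMENTS.get(group, set())
    return apps

def accounts_to_revoke(user_groups: list[str]) -> set[str]:
    """When a user leaves or changes roles, the same mapping tells us
    which cloud accounts to de-provision."""
    return allowed_cloud_apps(user_groups)

print(allowed_cloud_apps(["sales"]))              # {'crm', 'email'}
print(accounts_to_revoke(["sales", "engineering"]))  # everything to revoke on departure
```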

NEWS: What is OpenStack?
OpenStack is a set of open source software tools for building and managing cloud computing platforms, providing infrastructure as a service (IaaS) for public and private clouds of all sizes. The OpenStack architecture is modular, focuses on providing the compute, network, and storage component resources for customer deployments, and monitors all services through a dashboard. OpenStack is backed by some of the biggest companies in software development and hosting, as well as thousands of individual community members, and many think it is the future of cloud computing. OpenStack is managed by the OpenStack Foundation, a non-profit which oversees both development and community-building around the project. The OpenStack community collaborates around a six-month, time-based release cycle with frequent development milestones.


One of the critical enterprise features of iOS 7 is support for per-app VPN connections. This technology offers security, user privacy, and performance benefits to organizations and their employees.

In the BYOD world where the lines blur between work and personal devices, mobile workers often need to make repeated or constant connections to resources on a corporate network during the day or night. They’re often multitasking and juggling both work and personal tasks.

The traditional VPN model is poor for this type of remote access because, when a traditional VPN connection is active, all network traffic is routed through it. This creates security risks: unauthorized apps or data could get onto the network, and private information could be visible, routed, or logged while the corporate network is accessible on the device.

A per-app VPN only sends data from managed apps—those installed and managed by IT—through an on-demand VPN connection, a process that is almost invisible to the user. There is zero reliance on the device-level VPN, and all of the VPN settings are wrapped into the app—users just enter a user name and password for authentication.

Per-app VPN allows much more granularity in access to back-end systems, and unmanaged or unapproved apps can never gain access to sensitive data within the enterprise. The Managed Open-In feature greatly improves the user experience and walls off personal data so that non-business traffic never touches the corporate network.

The concept has many applications, as it can be used with virtually any app—public from the App Store, enterprise/internal, or business-to-business. Since this can quickly create barriers between ERP and Facebook, per-app VPN makes BYOD a much more realistic goal for the enterprise.

Since it is a new technology, there is still a lot to learn about possible use cases, pitfalls, and interoperability issues, but this is probably a feature that many enterprises will adopt within the next year.
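The routing decision itself is conceptually simple. The sketch below is a hypothetical illustration of the rule a per-app VPN enforces, not Apple’s actual implementation or API; the app identifiers are made up. Traffic from IT-managed apps enters the corporate tunnel, and everything else stays on the normal interface.

```python
# Hypothetical sketch of the per-app VPN rule: traffic from IT-managed apps
# goes through the corporate tunnel; everything else uses the normal path.
# App bundle identifiers are illustrative only.
MANAGED_APPS = {"com.example.corp-email", "com.example.corp-erp"}

def route_for(app_bundle_id: str) -> str:
    if app_bundle_id in MANAGED_APPS:
        return "corporate-vpn-tunnel"   # on-demand VPN, nearly invisible to the user
    return "direct-internet"            # personal traffic never touches the corp network

print(route_for("com.example.corp-erp"))   # corporate-vpn-tunnel
print(route_for("com.facebook.Facebook"))  # direct-internet
```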

networking

The Benefits of Tunnel Vision

Per-App VPN is taking off


“SINCE THIS CAN QUICKLY CREATE BARRIERS BETWEEN ERP AND FACEBOOK, PER-APP VPN MAKES BYOD A MUCH MORE REALISTIC GOAL FOR ENTERPRISE.”


Chameleon Virus Is Spreading Rapidly over WiFi Networks
A team of researchers at the University of Liverpool developed a virus dubbed Chameleon that travels over WiFi networks and spreads really efficiently.

Unlike most viruses, Chameleon doesn’t go after computers or internet resources, but focuses on access points or where you connect to the internet. For the average home user, this is usually a wireless router.

The research team says the virus spreads fast, avoiding detection and identifying the points at which WiFi access is least protected by encryption and passwords. If the virus hits a roadblock when trying to propagate, it simply looks for other access points that aren’t strongly protected, including open-access WiFi points common in locations such as coffee shops and airports.

At present this threat is only a proof of concept: it hasn’t actually been discovered publicly and was instead created by researchers in a controlled environment. What is clear is that it’s only a matter of time until a virus like Chameleon becomes a reality. Luckily, the Chameleon virus can easily be defended against. All users have to do is secure their network routers with strong, unique passwords.

NEWS: Android Malware Is Getting Increasingly Complex
In 2013 we saw exponential growth in Android malware, not only in terms of the number of unique families and samples, but also the number of devices affected globally. While the new security features in the Android platform will make a positive change in infection rates over time, their adoption will be slow, leaving most users exposed to simple social engineering attacks.

Cybercriminals will continue to explore new avenues for Android malware monetization. Mobile devices are an attractive launching pad for attacks aimed at social networks and cloud platforms. You can mitigate this risk by enforcing a BYOD (bring your own device) policy that prevents side-loading of mobile apps from unknown sources and mandates anti-malware protection, among other approaches.


The storage hypervisor software virtualizes the individual storage resources it controls and creates one or more flexible pools of storage capacity. It is a supervisory program that manages multiple pools of storage as virtual resources and treats all the storage hardware it manages as generic, even though that hardware includes dissimilar and incompatible platforms. To do this, a storage hypervisor must understand the performance, capacity, and other service characteristics of the underlying storage. A storage hypervisor can also accept new devices or the replacement of part or all of an existing pool of storage resources without causing business disruption. It also provisions storage, provides services such as snapshots and replication, and manages policy-driven service levels.

The storage hypervisor saves the organization from having to buy more performance or capacity than it needs. When a different performance option or more capacity is needed, they simply add another storage system in pod-like fashion, then leverage the hypervisor to move virtual machines into that pod; eventually the hypervisor will manage those moves automatically. The storage hypervisor approach may change the economics of storage in virtualized environments by opening up an exciting future where lower-tier storage can deliver high-performance features.

Hypervisors are not without their weaknesses, and today one of those is in providing advanced storage service features like snapshots, thin provisioning, cloning, and replication. There can be a significant decrease in performance when snapshots of virtual machines are active, and snapshots are the key foundation of other storage features like cloning and replication. One possible solution is to fill in the weak areas with third-party software solutions.

The combination of improved hypervisor capabilities for managing independent storage systems, plus the addition of software that extends the hypervisor’s capabilities in delivering services, gives the administrator a powerful storage option, one in which they can select the most cost and performance appropriate storage system to meet their needs.
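A toy sketch of the pooling and policy-driven placement idea appears below. The class, array names, and tiers are hypothetical; a real storage hypervisor does this across dissimilar vendor arrays, under live workloads, and with far richer service-level definitions.

```python
# Hypothetical sketch: a storage "hypervisor" treating dissimilar arrays as one
# generic pool and placing volumes according to the service level they require.
from dataclasses import dataclass

@dataclass
class BackendArray:
    name: str
    tier: str        # e.g. "flash" or "capacity"
    free_gb: int

class StoragePool:
    def __init__(self, arrays):
        self.arrays = list(arrays)

    def add_array(self, array: BackendArray) -> None:
        """New hardware joins the pool without disrupting existing volumes."""
        self.arrays.append(array)

    def provision(self, size_gb: int, service_level: str) -> str:
        """Policy-driven placement: pick the first array meeting the policy."""
        tier = "flash" if service_level == "high-performance" else "capacity"
        for array in self.arrays:
            if array.tier == tier and array.free_gb >= size_gb:
                array.free_gb -= size_gb
                return f"volume placed on {array.name}"
        raise RuntimeError("no array satisfies the requested service level")

pool = StoragePool([BackendArray("vendor-A-nas", "capacity", 10_000)])
pool.add_array(BackendArray("vendor-B-flash", "flash", 2_000))
print(pool.provision(500, "high-performance"))  # volume placed on vendor-B-flash
```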

“THE STORAGE HYPERVISOR SAVES THE ORGANIZATION FROM HAVING TO BUY MORE PERFORMANCE OR CAPACITY THAN THEY NEED.”

virtualization


Storage Hypervisor

What are the Advantages?


Sponsored Cover


Are you still on Windows Server 2003?
As Microsoft plans to end support for Windows Server 2003 in 2015, now is the time to consider performance, security, and server management issues on this platform. Windows Server 2003 won’t suddenly stop working as soon as support expires, and the applications that run on it will keep on running. But to run a secure IT infrastructure that meets the legal and regulatory requirements of many organizations, you will have to pour resources into monitoring and isolating any servers that run Windows Server 2003. You will need to come up with a transition plan sooner rather than later.

For many, the transition mechanism will be virtualization. While application virtualization can be a wonderful solution and might simplify the transition to a new operating system, it can’t be considered a panacea. It might make sense to move to a different operating system entirely. This might mean rewriting your application and maybe moving it to the cloud as well.

NEWS: Desktop virtualization - where should the anti-virus run?
The answer will very much depend on the type of VM, the load the anti-virus software places on it, and the licensing costs. There are two types of client VM used in VDI infrastructures: persistent and non-persistent. Persistent VMs are created and used for a prolonged period and are not recreated frequently. You need to treat them like typical desktops; therefore, they need standard protection, including anti-virus software.

With non-persistent VMs, the client OS is typically created as the user needs it for a session and then deleted when the user logs off. Today, most viruses are designed to cause damage on installation, and that is why you should have anti-virus protection on any OS instance, even if it will only exist for a short time. For these situations, there are VDI-specific anti-virus solutions that run a very small piece of in-memory code in the client VM to reduce the footprint, and a larger scan on the actual parent partition.


The Internet of Things

What are the Storage Implications?

storage


The Internet of Things (IoT) refers to a network of physical objects containing embedded technology to communicate with each other or externally. The enormous number of devices, coupled with the sheer volume, velocity, and structure of IoT data, creates challenges, particularly in the areas of security, data, storage management, servers, and the data center network.

One of the critical tasks before designing storage architectures for IoT is to define the retention policy for this data. Some companies will delete the data that they’ve gathered after a week, and others will have a systematic process, such as only keeping the data they collect every other week from a certain year. The utility of data degrades fairly quickly in most scenarios, so immediate access becomes less important as it ages. Data retention policies allow the business to determine how long to retain information in a certain access layer. Often this is based on a specified amount of time, but it can also be based on a specific number of sensor readings or other factors.
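A minimal sketch of an age-based retention check follows; the field names and the seven-day window are illustrative assumptions, not a recommendation for any particular workload.

```python
# Hypothetical sketch: decide whether an IoT reading stays in the fast-access
# layer, based on a simple age-based retention policy.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)   # illustrative retention window only

def keep_in_hot_tier(reading_timestamp: datetime) -> bool:
    """True while the reading is younger than the retention window."""
    age = datetime.now(timezone.utc) - reading_timestamp
    return age <= RETENTION

sample = datetime.now(timezone.utc) - timedelta(days=3)
print(keep_in_hot_tier(sample))   # True: still within the seven-day window
```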

There is more data to store; that’s the obvious part. The less obvious part is that IoT data comes in two distinct types, creating two entirely different challenges. First, there is large-file data, such as images and videos captured from smartphones and other devices. This data type is typically accessed sequentially.

The second data type is very small, for example, log-file data captured from sensors. These sensors can create billions of files that must be accessed randomly.

Datacenters must deal with both data types, and the two usually require different storage systems—one designed for large-file sequential I/O and the other for small-file random I/O. Historically, image-based data has typically been placed on large-capacity NAS systems, but there is a shift to object-based storage. Sensor data, usually stored on high-performance NAS systems, is moving to all-flash arrays, primarily to allow faster analytics.

“THE UTILITY OF DATA DEGRADES FAIRLY QUICKLY IN MOST SCENARIOS, SO IMMEDIATE ACCESS BECOMES LESS IMPORTANT AS IT AGES.”


What is an In-Memory Database System?
In-memory databases are in the news these days, so let’s take a look at what they are. An in-memory database system stores data entirely in main memory. This contrasts with traditional on-disk database systems, which are designed for data storage on persistent media. Because working with data in memory is much faster than writing to and reading from a file system, in-memory systems can perform data management functions a lot faster. Many in-memory database systems employ some form of data compression, which helps hold more data in memory. Foundational database algorithms are being revised to support in-memory needs. One approach, known as column-based storage, is particularly powerful and works especially well in analytic workloads, which are dominated by numeric values and dimensions.
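A small sketch of why column-based storage helps analytic queries is shown below. It is a pure-Python illustration with made-up data; real in-memory engines add compression and vectorized execution on top of the same layout idea.

```python
# Hypothetical sketch: row layout vs. column layout for an analytic query.
# Summing one numeric column only has to touch that column's values when the
# data is stored column-wise.
rows = [
    {"region": "west", "amount": 120.0},
    {"region": "east", "amount": 75.5},
    {"region": "west", "amount": 310.0},
]

# Column-oriented layout of the same data: one contiguous list per column.
columns = {
    "region": ["west", "east", "west"],
    "amount": [120.0, 75.5, 310.0],
}

# Row store: every row is visited even though the query needs a single field.
total_row_store = sum(r["amount"] for r in rows)

# Column store: the query scans exactly one dense array of numbers, which is
# also what makes compression and SIMD-style execution effective.
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store
print(total_column_store)   # 505.5
```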

News

MLC vs SLC - Which is right for you?
It’s a difference you need to know about when thinking of buying solid-state disks based on flash memory technology. SLC (Single Level Cell) products store only one data bit per NAND flash cell, which leads to faster transfer speeds, higher cell endurance, and lower power consumption. The only downside of SLC chips is the cost. SLCs are intended for the high-end consumer and server market, and they have approximately 10 times the endurance of MLC (Multi Level Cell). MLC stores two or more bits per NAND flash cell. Storing more bits per cell achieves a higher capacity and lower cost per megabyte.

In the enterprise, MLC vs. SLC depends on the application requirements and the number of failures that can be tolerated. MLC is not going to be deployed in I/O-intensive databases, but it may make sense for distributed applications where you can tolerate more failures. If your applications are mostly reads, MLC is a very effective solution. However, if you have write-intensive applications, then SLC might be a better fit. And it isn’t necessarily an either/or decision, as there is an emerging trend to combine SLC and MLC SSDs in a tiered configuration.


@BEARData

facebook.com/beardatasolutions

linkedin.com/company/bear-data-solutions

beardatasolutions.com/blog



• Seminars • Trade shows • Movie Premieres • Technology forums

• Executive Briefing Center Technology Visits • Sporting Events • Client Appreciation

Visit beardatasolutions.com/company/events/ to see the latest events or to register for an event.

Read the latest Technology Blog, authored by our esteemed team of solutions engineers. Follow our events and catch the latest news

on Facebook, LinkedIn and Twitter.

beardatasolutions.com

Stay Connected


What if you could accelerate the performance of your mission-critical workloads? Transactions could be processed instantaneously. Users could analyze data and make better decisions faster. Batch processing jobs would go faster. And other important processes would deliver better performance. And what if you could easily reshape your infrastructure as demands on it change? Now you can. Cisco’s leadership in high-performance, scalable, solid-state systems is dramatically improving application performance while simplifying data center operations.

Meet the Cisco UCS Invicta™ Scaling System: the first truly enterprise-class, scalable, solid-state architecture. As a next-generation modular architecture, the Cisco UCS Invicta Series delivers the highest sustained write throughput in the industry. It supports all standard networking and file protocols. And it can serve up application workloads from a multitenant architecture in which applications coexist without performance degradation.

Compared to similar technologies, the Cisco UCS Invicta OS outperforms. It was designed to use NAND flash memory for sustained high throughput, a high rate of I/O operations per second (IOPS), ultra-low latency, and fast write performance.


product highlight
Data at the Speed of Business: Cisco UCS Invicta Series Solid-State Systems

Cisco’s application-centric approach, combined with a modular, scalable, high-performance architecture, let you improve performance of many types of workloads:

• Analytics and intelligence: Extract, integrate, and analyze data up to 10 times faster.

• Batch processing: Run batches without interrupting other workflow.

• E-mail: Reduce time delays by a factor of up to 50.

• Online transaction processing: Remove performance bottlenecks between servers and memory.

• Video: Complete more transcoding tasks in significantly less time.

• Virtual desktops: Improve overall user experience with desktops that launch faster and respond quickly while virus scanning.

• Database loads: Dramatically reduce query response times.

• High-performance computing (HPC): Leverage low-latency I/O requests to speed time-sensitive applications.

Cisco UCS Invicta Series Solid State Systems


product highlight

The Cisco Nexus 9000 Series provides the foundation of the Cisco Application Centric Infrastructure (ACI). The switches deliver high scalability and performance, and exceptional energy efficiency, in a compact form factor. These switches are ideal for data center aggregation- and access-layer deployments in enterprise, service provider, and cloud networks.

Organizations everywhere recognize that changing application environments are creating new demands for the IT infrastructure that supports them. Application workloads are deployed across a mix of virtualized and nonvirtualized server and storage infrastructure, requiring a network infrastructure that provides consistent connectivity, security, and visibility across a range of bare-metal, virtualized, and cloud computing environments:

Build an Application Centric Infrastructure
Cisco Nexus 9000 Series Switches

• Application instances are created dynamically. As a result, the provisioning, modification, and removal of application network connectivity needs to be dynamic as well.

• Business units demand accelerated application deployments. IT departments have to provide shared IT infrastructure to address time-to-market needs and to increase their return on investment (ROI).

• With organizations deploying a mix of custom, open source, and off-the-shelf commercial applications, IT departments must manage both security and quality of service (QoS) for environments that support multitenancy.

• Applications have been transitioning over time to a less monolithic, scale-out, multinode model. IT infrastructure that supports this model must scale with the speed of business and support both 10 and 40 Gigabit Ethernet connectivity.

The Cisco Nexus® 9000 Series Switches include both modular and fixed-port switches that are designed to overcome these challenges with a flexible, agile, low-cost, application-centric infrastructure (ACI).


product highlight
FAS8000: Scale-Out Storage for the Enterprise
The next generation of NetApp storage.

FAS8060 (6U) Controller: maximum capacity 4800TB

With the FAS8000, we’ve refined and enhanced every aspect of the FAS platform, and in the process created a hybrid storage system that is uniquely well suited for the needs of today’s enterprise—without leaving the past behind. The FAS8000 is designed to help you run your business operations faster while reducing management overhead, simplifying IT operations, and improving return on investment.

We’ve focused on flexibility, so that the FAS8000 adapts to your changing needs without ever requiring planned downtime or disruptive hardware changes. With FlexArray storage virtualization software (the subject of a separate article in this issue of Tech OnTap), the FAS8000 can also virtualize and manage existing storage arrays, extending the capabilities of the Data ONTAP® operating system to more of your storage infrastructure.

Introducing the FAS8000.

For many IT organizations, adding storage capacity can be disruptive. Most storage systems have pretty narrow limits and capabilities, so you end up with multiple storage systems, silos of storage, orphaned capacity, and greater management complexity. The FAS8000 solves this problem with unified scale-out storage that lets you scale your storage environment in the way that makes the most sense for your business needs. Flexible hybrid storage options let you provide an optimal level of acceleration for each workload.

Because the FAS8000 combines innovative hardware design with the proven capabilities of clustered Data ONTAP, industry-leading management capabilities, and unmatched support for well-known hypervisors, applications, and management and orchestration tools, it delivers the benefits of scale-out without sacrificing any of the capabilities that your operation depends on.

The FAS8000 is the first FAS architecture designed specifically for clustered Data ONTAP. All FAS8000 models scale out to a maximum of 24 nodes.

The FAS8000 also offers significant scale-up capabilities. You can scale up FAS controllers as needed to meet your exact storage requirements by adding more capacity, by adding different types of media, or by installing Flash Cache™ intelligent caching or additional interface cards. You can upgrade from one controller model to another without disruption; NetApp customers have long appreciated these data-in-place “head” upgrades as a way to gain performance and capacity without disruptive data migration.

With FlexArray, a FAS8000 system can also incorporate existing EMC, HDS, and NetApp E-Series storage arrays as part of your scale-out cluster without necessitating the purchase of additional hardware.

Mix generations and eliminate disruptive tech refresh. If you’ve got an existing FAS cluster, you can combine existing nodes with FAS8000 models. This means that you can continue to grow your cluster or make the transition to the latest controller technology, all without taking any downtime or disrupting important business operations.

Complete business operations faster. Leveraging a new high-performance, multi-core architecture and self-managing flash acceleration, FAS8000 unified scale-out systems boost throughput and decrease latency to deliver consistent application performance across a broad range of SAN and NAS workloads.

Streamline IT operations. Simplified management and proven integration with cloud providers let you deploy the FAS8000 in your data center and in a hybrid cloud with confidence. Nondisruptive operations simplify long-term scaling and improve uptime by facilitating hardware repair, tech refreshes, and other updates without planned downtime.

Deliver superior TCO. Proven storage efficiency and a 2x increase in price/performance over the previous generation reduce capacity utilization and improve long term ROI. FlexArray storage virtualization software lets you integrate existing arrays with the FAS8000, increasing consolidation and providing even greater value to your business.


BEAR Data Technical Resource Services

Your Key to IT Recruiting

One of the key differentiators between BEAR Data and traditional recruiting firms is that BEAR Data Solutions is an international systems integrator specializing in infrastructure solutions (cloud, data center, security, networks & systems, storage) and professional services around these offerings.

With these grass-roots services in place, a natural extension for BEAR Data was staffing services, with a focus around our key expertise: IT Operations. This process was specifically built to enable a superior vetting process targeted toward Network Engineers, Network Architects, Network Administrators, Network Project Managers, NOC Technicians, and IT Operations Management in all areas of infrastructure: routing & switching, security, storage, virtualization, application optimization, and voice.

BEAR Data’s vetting process has proven its effectiveness for our clients (from early- to late-stage startups up to large enterprises), where we have saved them countless hours of interviewing and delivered highly talented people.

800.718.BEAR | www.beardatasolutions.com

Technology. Innovation. Delivered. [email protected]


product highlight

Hardware and Software Engineered to Work Together

Oracle Exadata Database Machine is delivered completely integrated and balanced for optimal performance. There are no unique configuration requirements and no special Oracle Exadata certification.

Update your data warehouse in near real time. Run reports that once took three hours in just 20 minutes. Consolidate multiple databases onto a single platform. Oracle Exadata Database Machine loads data faster, returns queries sooner, and sets new IT performance standards. It’s secure, it’s scalable, and all of it— hardware and software—is supported by Oracle.

Future in a Box

Faster, more flexible, and highly available, Oracle Exadata is shaping the future of IT by delivering the complete technology stack—hardware, software, and everything in between—in a reliable, redundant database machine that’s easy to manage, fast to deploy, and fully supported by a single vendor.

Preconfigured, scalable, and secure, Oracle Exadata Database Machine addresses the needs of today’s businesses with extreme performance for enterprise data warehousing, online transaction processing (OLTP), and mixed workloads.

Extreme Ease of Use Keeps IT Simple

Easy to deploy and manage, Exadata runs—with no changes—all Oracle Applications. Even the expertise of your DBAs and system administrators is directly transferable.

Exadata is easy to upgrade in the field with no interruption to your existing system. And, because all Exadata components are from Oracle, you significantly reduce implementation risks, downtime risks, and support risks. With only one vendor to call, there’s no runaround— just faster resolution.

Better information, more flexibility, and lower IT costs with extreme performance. Experience the extreme benefits of Oracle Exadata today.

New Possibilities for Your Business
The World’s Fastest Database Machine: Oracle Exadata


Everyone deserves a native PC experience with VDI.

Increasing numbers of users need to use graphics-intensive applications and are looking for the same experience they have on their desktop, anywhere, on any device.

Now, with NVIDIA GRID™ available from BEAR Data, everyone can get the full experience of a local PC while running on a virtual desktop served from the data center. NVIDIA GRID offloads graphics processing from the CPU to the GPU in virtualized environments—unleashing the full graphics potential of enterprise desktop virtualization.

For more information about NVIDIA GRID, contact your BEAR Data Specialist at (800) 718-BEAR, or visit www.nvidia.com/vdi

How many times have you heard, “I can’t run this app in my Virtual Desktop environment, it’s just too slow”?



backup

1. Integration becomes essential
In the past, enterprise applications were built with all of the features and capabilities their users might need. That’s changing. Application development is becoming less about “what features does this include?” and more about “how does this work with other applications and services?”

How important will integration become to enterprise application development? In a word: Essential. According to Gartner, “if application integration does not become a true area of expertise, companies will find themselves at a serious competitive disadvantage within the next few years.”
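In practice, "working with other applications and services" often means calling another system's API rather than building the feature in. The sketch below is a hypothetical illustration in Python; the endpoint, payload fields, and validation service are invented for the example and do not refer to any real product.

```python
# Hypothetical sketch: rather than building address validation into the
# application, call an external service's HTTP API. The URL and payload
# fields are made up for illustration.
# Requires: pip install requests
import requests

def validate_address(street: str, city: str) -> bool:
    response = requests.post(
        "https://api.example.com/v1/address/validate",   # hypothetical endpoint
        json={"street": street, "city": city},
        timeout=5,
    )
    response.raise_for_status()
    return response.json().get("valid", False)

if __name__ == "__main__":
    print(validate_address("128 Spear St", "San Francisco"))
```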

2. Enterprise applications are on the move
In the past, IT managed application delivery platforms. They deployed applications to the user’s desktop, or they made an application available through a browser on the user’s desktop or maybe a virtual desktop, but that was about it. With the rise of mobile devices, and soon maybe even wearable devices, the concept of a “managed” application platform is vanishing.

This is causing a change in enterprise application development. As the possible platforms grow, it quickly becomes financially unrealistic to develop applications specifically for every possible alternative. Instead, enterprise applications need to be built not to care about what platform they are running on, relying on that platform to provide the necessary integration with back-end systems. Yes, that means the platform has to be smarter, but that just means the platform has to be built to integrate with the environment in which it’s operating. See trend 1.

3. Application development and delivery shifts to the cloud
In a trend that’s already taking off, we’ll see more and more businesses move their application delivery, and even development, to the cloud. Rather than provision, install, and monitor their own hardware, more companies are opting for platforms installed on cloud hosts for their applications, including the applications they use to develop new applications.

The number of companies willing to put up large capital investments in enterprise applications and even application development platforms is shrinking. Instead, companies are choosing a consumption model that allows them to use just the resources they need and pay for them as they use them. It requires a more agile approach to both operations and project management, but the companies that can master it will reap significant benefits.

IN SHORT, SOFTWARE IS EATING THE WORLD

For the past few years, this quote has been shorthand in the tech industry for the fact that an ever-increasing number of the things we use every day are run by software. From the toaster in the break room to the time card machine on the factory floor, software controls the rhythm of the modern business. Companies that can adopt that rhythm and adapt will have an advantage. Companies that don’t will fall behind.

So, what does that mean for enterprise application development? Here are 3 trends worth watching.

— Marc Andreessen

Author: Dennis Vickers joined BEAR Data in 2011. With over 30 years of experience in technical services, Dennis has been deeply involved with both the corporate and technical operations of organizations. As an independent software developer, he worked both individually and as part of a small team to develop custom applications that supported the unique requirements of the clients.


We’re not retrofitting 20-year-old operating systems with virtualization. Choose the world’s most proven enterprise virtualization technology and reach new levels of efficiency, control and agility for the new cloud era. VMware vCloud® Suite delivers the Software-Defined Data Center, now.

Visit vmware.com/sddc

The Software-Defined Data Center from VMware.

The platform of the past is no match for the data center of the future.
