
Home reading Ernesto


Contents

When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed
Apple and iOS 8 update
iOS 8 isn't without its share of bugs
Is the desktop computer going the way of the dodo bird?
Desktops on the Way Out
Desktops Are Here to Stay
Other Threats to Desktop Computers
Inside Movie Animation: Simulating 128 Billion Elements
Introducing the World-Changing Ideas Summit 2014
Here are the limits of Apple's iOS 8 privacy features
What does Apple Watch do?
Robots Use RFID to Find and Navigate to Household Objects
Article About Education
Security and Privacy in Cloud Computing
Identifying New Threats and Vulnerabilities
Protecting Virtual Infrastructures
Protecting Outsourced Computation and Services
Protecting User Data
Securing Big Data Storage and Access Control
Call for Contributions
A Review of ZeroAccess peer-to-peer Botnet
INTRODUCTION
EVOLUTION OF ZEROACCESS BOTNET
LIFE CYCLE OF ZEROACCESS BOTNET
CONCLUSIONS
Escape From the Data Center: The Promise of Peer-to-Peer Cloud Computing
Not long ago
Some cloud-computing providers
The computers
Gossip-based protocols
Placing the physical infrastructure
The idea of creating
The success of the many
Developments are admittedly
Oculus Brings the Virtual Closer to Reality
IBM: Commercial Nanotube Transistors Are Coming Soon
The Future of IT
A New Order of IT


When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed

Artificial intelligence reveals previously unrecognised influences between great artists

The task of classifying pieces of fine art is hugely complex. When examining a painting, an art expert can usually determine its style, its genre, the artist and the period to which it belongs. Art historians often go further by looking for the influences and connections between artists, a task that is even trickier.

Left: Portrait of Pope Innocent X (1650) by Diego Velázquez. Right: Study After Velázquez's Portrait of Pope Innocent X (1953) by Francis Bacon.

So the possibility that a computer might be able to classify paintings and find connections between them at first glance seems laughable. And yet, that is exactly what Babak Saleh and pals have done at Rutgers University in New Jersey.

These guys have used some of the latest image processing and classifying techniques to automate the process of discovering how great artists have influenced each other. They have even been able to uncover influences between artists that art historians have never recognised until now.


The way art experts approach this problem is by comparing artworks according to a number of high-level concepts such as the artist’s use of space, texture, form, shape, colour and so on. Experts may also consider the way the artist uses movement in the picture, harmony, variety, balance, contrast, proportion and pattern. Other important elements can include the subject matter, brushstrokes, meaning, historical context and so on. Clearly, this is a complex business.

So it is easy to imagine that the limited ability computers have for analysing two-dimensional images would make this process more or less impossible to automate. But Saleh and co show how it can be done.

At the heart of their method is a new technique developed at Dartmouth College in New Hampshire and Microsoft Research in Cambridge, UK, for classifying pictures according to the visual concepts that they contain. These concepts are called classemes and include everything from simple object descriptions such as duck, frisbee, man and wheelbarrow, to shades of colour, to higher-level descriptions such as dead body, body of water, walking and so on.

Comparing images is then a process of comparing the words that describe them, for which there are a number of well-established techniques.

Saleh and co apply this approach to over 1,700 paintings by 66 artists working in 13 different styles. Together, these artists cover the time period from the early 15th century to the late 20th century. To create a ground truth against which to measure their results, they also collate expert opinions on which of these artists have influenced the others.

For each painting, they limit the number of concepts and points of interest generated by their method to 3000 in the interests of efficient computation. This process generates a list of describing words that can be thought of as a kind of vector. The task is then to look for similar vectors using natural language techniques and a machine learning algorithm.
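To make that step concrete, here is a minimal sketch (not the authors' code) of how two classeme-style descriptor vectors could be compared with cosine similarity; the random vectors simply stand in for real paintings, and the 3,000-entry length only mirrors the cap mentioned above.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical classeme vectors: one weight per visual concept
# ("duck", "body of water", "walking", ...), capped at 3,000 entries.
rng = np.random.default_rng(0)
painting_a = rng.random(3000)   # stand-in for a real painting's descriptor
painting_b = rng.random(3000)

print(f"similarity: {cosine_similarity(painting_a, painting_b):.3f}")
```

The higher the score, the more visual concepts the two descriptor vectors share.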

Determining influence is harder though since influence is itself a difficult concept to define. Should one artist be deemed to influence another if one painting has a strong similarity to another? Or should there be a number of similar paintings and if so how many?

So Saleh and co experiment with a number of different metrics. They end up creating two-dimensional graphs with metrics of different kinds on each axis and then plotting the position of all of the artists in this space to see how they are clustered.
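As a rough illustration of that plotting-and-clustering step, the sketch below drops a few artists at invented (metric 1, metric 2) coordinates and groups them with k-means; the names, the numbers and the choice of k-means are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented (metric_1, metric_2) coordinates for a handful of artists; in the
# paper each axis would be a different painting-similarity metric.
artists = ["Klimt", "Picasso", "Braque", "Delacroix", "Bazille"]
coords = np.array([[0.82, 0.75], [0.80, 0.70], [0.78, 0.72],
                   [0.20, 0.30], [0.25, 0.28]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
for name, label in zip(artists, labels):
    print(f"{name}: cluster {label}")
```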


Georges Braque's Man with a Violin (left) and Pablo Picasso's Spanish Still Life: Sun and Shadow, both painted in 1912.

The results are interesting. In many cases, their algorithm clearly identifies influences that art experts have already found. For example, the graphs show that the Austrian painter Klimt is close to Picasso and Braque and indeed experts are well acquainted with the idea that Klimt was influenced by both these artists. The algorithm also identifies the influence of the French romantic Delacroix on the French impressionist Bazille, the Norwegian painter Munch's influence on the German painter Beckmann and Degas' influence on Caillebotte.

The algorithm is also able to identify individual paintings that have influenced others. It picked out Georges Braque's Man with a Violin and Pablo Picasso's Spanish Still Life: Sun and Shadow, both painted in 1912 and with a well-known connection, as pictures that helped found the Cubist movement.


It also linked (above left) Vincent van Gogh’s Old Vineyard with Peasant Woman (1890) and Joan Miro’s The Farm (1922), which contain similar objects and scenery but have very different moods and style.

Most impressive of all is the link the algorithm makes between (below left) Frederic Bazille’s Studio 9 Rue de la Condamine (1870) and Norman Rockwell’s Shuffleton’s Barber Shop (1950). “After browsing through many publications and websites, we concluded, to the best of our knowledge, that this comparison has not been made by an art historian before,” say Saleh and co.

And yet a visual inspection shows a clear link. The yellow circles in the images below show similar objects, the red lines show composition and the blue square shows a similar structural element, say Saleh and co.

That is interesting stuff. Of course, Saleh and co do not claim that this kind of algorithm can take the place of an art historian. After all, the discovery of a link between paintings in this way is just the starting point for further research about an artist’s life and work.

But it is a fascinating insight into the way that machine learning techniques can throw new light on a subject as grand and well studied as the history of art.

Sent by : Paulius D.

Collate - to bring together different pieces of written information so that the similarities and differences can be seen

Deem - to consider or judge something in a particular way

To throw light on something - to reveal something about something; to clarify something, to help people understand a situation


Apple and iOS 8 update

Apple has become a different company this year. It’s undergone major personnel changes, promised to be a more open and friendly company, and released two new iPhones that are clear responses to consumer demand.

A year ago, Apple released the most dramatic visual revision to iOS since the launch of the original iPhone in 2007. iOS 7 was a complete visual overhaul, ditching the skeuomorphic and obvious designs of years past and replacing them with a flatter, more colorful, and more modern interface. It was a striking visual change: it took some getting used to for longtime iOS users, and some of the design decisions were (and remain) questionable. But iOS 7 was a completely new-looking house built on an existing foundation. All of the paint inside was new, but the blueprints remained the same. The core functions of the OS were the same as always.

With iOS 8, Apple is tearing out the old foundation of iOS and replacing it with a new, friendlier platform. Apple has thrown open the doors of iOS 8 to developers in ways it never has before. Where the iPhone used to be an appliance, iOS 8 turns it into a platform, a way to connect your apps and devices in new ways. But Apple still hasn’t dramatically altered how it works for millions of people. iOS 8 is much more powerful than any of its predecessors, but it’s still as approachable and easy-to-use as it ever has been.

Changing the core functionality of an operating system without disrupting the day-to-day experience for millions of users is a difficult undertaking, and as a result, iOS 8 is far from a perfect operating system. I’ve been using it on devices old and new for the past couple of weeks, and it’s clear that Apple still has a bit of work to do.

The most important thing Apple did with iOS 8 is open parts of the platform that have always been off-limits to third-party developers. Third-party apps can now integrate into Apple’s native sharing system, put themselves right into Apple’s own Photos app, and replace the iPhone’s keyboard. They can put widgets in the Today tab and use your Touch ID fingerprint scanner to authenticate you.

Many of these features and capabilities would have been unheard of in iOS just a year ago. It's the biggest change Apple has made to the core of how iOS works since the App Store launched in 2008. It makes iOS a more friendly, more extensible, and more useful platform than ever before.

That isn’t to say iOS 8 is without fault — in fact, it feels like one of the buggiest, most unpolished versions of iOS in years. In my testing, there weren’t any noticeable performance or battery life degradations compared to iOS 7. But there are inexplicable bugs everywhere: a keyboard that refuses to appear when you need it, or an interface element that remains stuck in landscape when you rotate the phone back to portrait. No iOS launch has been without bugs, and Apple’s iterations in the weeks and months that follow tend to smooth a lot of things out. (iOS 7 didn’t get really great until 7.1 was released, a full six months after 7.0's launch.)


iOS 8 isn't without its share of bugs

A number of developers are already taking advantage of iOS 8’s new widget functionality, with mixed results. Evernote’s widget is super useful for quickly creating new notes and NYT Now inserts the top headlines right into your Today screen. Dropbox’s widget is less than useful (it just shows recently changed files in your account, with no ability to do anything with them), and Yahoo’s image-heavy widgets for Weather and News Digest just look clunky and out of place. It’s clear that developers are still wrangling with the best ways to use widgets in iOS, and it’s likely we’ll see some really useful additions in just a short time. Fortunately, all widgets are disabled by default and you can pick and choose which ones show up in your Today tab.

Likewise, though third-party keyboard support is welcome and long overdue, the options you can use right now range from "sometimes buggy" to "barely usable." Oftentimes, the keyboard just wouldn’t show when it’s supposed to, making replying to a message from a notification using a third-party keyboard more miss than hit. Third-party keyboards also aren’t allowed access to iOS’ built-in dictation service. Developers will likely fix a lot of these problems as time goes on, but it might take more work from Apple to make third-party keyboards as good as the native one.

Sharing data between apps, such as saving an article from the web to Pocket or Instapaper, has always been a chore in iOS, requiring clunky bookmarklets or a lot of copy and pasting. That’s finally been rectified since third-party apps can now make themselves available in iOS’ native sharing system. Web links can instantly be saved to Pocket or Evernote with just two taps, and apps like Drafts won’t need to rely on hacky URL schemes to pass text to other third-party apps. It’s the closest thing iOS has ever had to Android’s sharing functionality — for all intents and purposes, Apple’s essentially mimicked it. But there are some small bugs to work out with the system: regardless of how many times I told the system I wanted Pocket to be first in the list of apps to share to, it would revert back to Apple’s default ordering.

Editing photos in third-party apps in iOS was never possible from the native Photos app, but iOS 8 changes that entirely. Editing photos with third-party tools right in the Photos app is a huge improvement to my workflow, and I can now enhance and tweak photos with my preferred apps like Litely (and soon VSCO Cam) instead of having to use Apple’s own tools. It still takes far too many taps to get to the third-party photo editors, and the process isn’t obvious at all, but I’m very excited that it’s here.

Sent by : Mindaugas V.

Wrangle - an argument, especially one that continues for a long time

Overdue - not done or happening when expected or when needed; late

Hacky - (of a piece of computer code) providing a clumsy or inelegant solution to a particular problem

Mimic - imitate (someone or their actions or words), especially in order to entertain or ridicule


Predecessors - a thing that has been followed or replaced by another

Approachable - friendly and easy to talk to

Undertaking - a formal pledge or promise to do something; the action of undertaking to do something

Overhaul - take apart (a piece of machinery or equipment) in order to examine it and repair it if necessary

Blueprints - an early plan or design that explains how something might be achieved


Is the desktop computer going the way of the dodo bird?

The personal computer is perhaps the most significant technological advancement hatched from the human mind over the past 30 years. It has spawned a world driven by technology. But innovation can be fickle, and in recent years, the desktop computer seems to be losing some of its steam. Advances in technology have made it possible to create smaller and lighter computers. No longer underpowered and heavy, notebooks are now commonplace, and tablets, netbooks and even smartphones are able to do tasks that used to require larger machines.

Like just about everything, there are two sides to the story. Some data suggest the desktop PC is still necessary and isn't easily replaced. Other data support the notion that desktops are becoming increasingly obsolete. So what's the deal? This article intends to give an adequate look at both sides of the argument.

To understand this argument, a quick lesson from economics 101 about opportunity costs may be handy. In essence, everything has a cost. Even reading this article costs you something. You could have been reading an article on another site or even finding a new way to make more money. If you think about the differences between desktops and notebooks along those lines, you'll develop a better understanding of each side and ultimately make the decision that suits you best. So let's get right down to it and see where both sides stand. We'll start with the naysayers' arguments in the next section.

Desktops on the Way Out

As necessary as computers are to productivity in the workplace and even at home, it's becoming harder to get ahead in the world without using one. You could even say we've become dependent on them. For a long time, desktop machines were the preferred choice. But now that Wi-Fi is easy to find and often free, mobile connectivity is easier and more common than ever -- especially with the recent social networking boom.

Notebook computers found their niche with traveling business professionals in the late '80s and early '90s but were much too expensive for most people to afford. They had other faults, too; notebooks traditionally had limited storage and computing power. But computer manufacturers have been able to close the processing power gap between desktops and laptops. In fact, the cost of manufacturing LCD screens, still one of the most expensive parts of notebook computers, has been falling, too.

Desktops still hold an edge when it comes to computing power, and they can be hooked up to large monitors, too. But docking stations give notebook users the ability to use larger, crisper displays when at home or in the office, and the costs of external storage devices have come down, making the idea of carrying a portable computer more attractive.

Recently, even notebook sales have given way to something totally new. Netbooks are small notebook computers with lower-power processors and a scaled-down feature set. Despite their apparent shortcomings, netbooks seem to be popping up everywhere. In 2009, 13.5 million netbooks were sold throughout the world. It may have something to do with affordability -- for less than $300, you can get your hands on one of these mini notebooks.


In general, netbooks don't have an optical drive, so you won't be able to use CD-ROMs or DVD-ROMs. But with seemingly ubiquitous broadband access, you can accomplish many of the same tasks online. Netbooks are so tied in with Internet access that telecommunication companies such as AT&T and Verizon have taken to selling them packaged with wireless Internet service.

That's one side of the story. Now let's look at why desktops aren't so easy to replace in the next section.

Desktops Are Here to Stay

For a long time, the relatively high costs of notebook computers made owning one more of a luxury than a reality. Over time though, the technology has become less expensive and notebooks are more affordable than ever.

But even though the price of notebook computers has come down due to technological advances over the past 10 years, they still aren't as cheap as desktops, especially when you consider the opportunity costs we talked about at the beginning of the article.

We decided to configure a desktop and notebook with similar specifications to see which came out cheaper. The specifications of the two systems weren't exact, because the manufacturer didn't offer two systems that could be compared directly. Upgrading the desktop computer's configuration to include a hard drive the same size as the one in the notebook and another gigabyte of memory would also have boosted the processor speed to 2.93GHz (for an additional $59). The two systems each have their strengths and weaknesses, but the desktop still comes out costing less.

Desktops offer more configuration options and are almost always easier to upgrade. For instance, a video editor or graphic designer may need a lot of storage space to save large video and image files. He or she could use a tower-style desktop computer with multiple internal hard disks to store the files. A notebook has one internal hard drive, at most. Cloud storage makes it possible to access a theoretically infinite amount of storage space, though this method poses security risks.

So you've read both sides of the argument. What about threats to both the desktop and even notebooks? The final section highlights a couple of upstart technologies looking for their share of the computing market.


Other Threats to Desktop Computers

Though it may seem strange to compare them to desktop computers, or even laptops, smartphones are gaining popularity as mobile computing devices. At first, they were viewed as the ultimate gadgets. Recently, though, higher-powered processors and a robust network of developers creating applications for the devices have made smartphones explode in popularity.

Keyboard and screen sizes are considerably smaller than you'd see on laptop or netbook computers. Storage space is also less for smartphones than for computers. But smartphones are becoming more like computers when it comes to working online. A recent survey of small businesses, conducted by virtual file server company Egnyte, revealed some interesting trends in smartphone usage. It found that 25 percent of respondents prefer conducting business on their smartphones rather than their PCs. Even more telling, close to three-quarters of those surveyed felt accessing data through a file server would increase productivity. This trend clearly reveals the popularity of the smartphone and shows just how far the technology has come.

Cloud computing is something that also has the potential to shake up the PC market. The theory behind cloud computing is that the Internet would, in essence, replace your computer's hard drive. In other words, all you would need is a small computer with just the basics. A display and keyboard would be necessary but you wouldn't need much else. This would cut down on the cost of portable computers even more. A desktop cloud computer would also be an option. Again, it could be made inexpensively.

Apple's MacBook Air seems to have been made for cloud computing. The Air has no optical drive or traditional serial ports you typically see on notebooks. It's slim, sleek and meant to be carried around as if it were part of your everyday ensemble. It's expensive, though ($1,499), which goes against the trend of cheaper cloud computing devices.

Perhaps the best way to look at this quandary is that each type of computer serves its purpose. Each has its upside. Each has its limitations. It depends ultimately on the user. For example, for a graphic artist who works in a studio, a computer with a fast processor and tons of memory connected to a large, bright display is preferable. When you go computer shopping and compare machines that meet those needs, a desktop will come out less expensive every time. However, if you're looking for a computer to handle basic productivity tasks and you're on the road quite often, maybe a netbook is in your future. It all comes down to choosing the best machine to satisfy your wants and needs.

Sent by : Tomas Š.

Spawn- to cause something new, or many new things, to grow or start suddenly

fickle - likely to change your opinion or your feelings suddenly and without a good reason (nepastovus, permainingas)

commonplace - happening often or often seen or experienced and so not considered to be special:


to have the edge (on, over) - to have an advantage (compared with someone or something)

hook up - to set something up and get it working. (The object is to be connected to a power supply, electronic network, telephone lines, etc.)

robust - (of a person or animal) strong and healthy, or (of an object or system) strong and unlikely to break or fail


Inside Movie Animation: Simulating 128 Billion Elements

Ever wonder how animated films such as The Incredibles get hair, clothing, water, plants, and other details to look so realistic? Or how, like the lion in The Chronicles of Narnia, animated characters are worked into live-action films? If not, the animators would be pleased, since they don't want special effects to distract from the story. Behind the scenes, though, is a sophisticated combination of artistry, computation, and physics.

Traditionally, animation was hand drawn by artists who needed "some of the same magical eye that the Renaissance painters had, to give the impression that it's realistically illuminated," says Paul Debevec, a computer graphics researcher at the University of Southern California. Over the past decade or so, hand-painted animation has faded as physically-based simulations have increasingly been used to achieve more realistic lighting and motion. Despite this movement toward reality in animated films, the physics of the real world remains a slave to expediency and art: Simplifications and shortcuts make the simulations faster and cheaper, and what the director wants trumps physical accuracy.

In one dramatic scene in the movie 300, which came out early in 2007, several ships collide violently -- their hulls splinter, masts break, sails tear, and the ships sink. Stephan Trojansky, who worked on 300 as visual effects supervisor for the German-based company ScanlineVFX, said just creating the ocean in that scene involved simulating 128 billion elements. “We probably created the highest fluid simulation detail ever used in visual effects,” he said.

"For the fracturing and splintering of the ships," he added, "we developed splintering technology. Wood doesn't break like a stone tower. It bends. To get realistic behavior, you have to take into account how the ship is nailed together. The physics involved is mainly equations that define where the material will break."

Animations of both fluids and solids—and of facial expressions and clothing, among other things—use various computational methods and a host of equations. But there is a tradeoff in the push for more realistic animations – moving closer to reality requires more and more computer power, and becomes increasingly expensive. There are three commonly used methods of computer animation -- break the object being simulated into discrete elements, use sample points from the object, or create fixed cells in space.

Mark Sagar, of WETA Digital, a visual effects company in Wellington, New Zealand, specializes in simulating faces. One technique is motion capture, in which markers are placed on an actor's face, their positions are noted for different expressions, and the positions are then mapped onto an animated character. "For King Kong we mapped the actor's expressions onto a gorilla," said Sagar.

Simulating the face involves interpreting movement in terms of muscle, Sagar said. "We approximate the detailed mechanical properties of live tissue and its layers and layers. You have motion data and start working out what the driving forces are. Modeling realistic stretching of the skin requires a lot of finite elements—each a small patch of tissue," he said. "You compute and solve for forces at each point and then sum until you get a balanced equation. It's not sophisticated from an engineering standpoint but produces high-quality results."

Realistic motion is often too complicated for animators to do by hand, said Michael Kass, a researcher at Pixar Animation Studios. "The results can be awful and very expensive." In the original 1995 Toy Story, he said, "if you see a wrinkle in clothing, it's because an animator decided to put in a wrinkle at that point in time. After that we [at Pixar] decided to do a short film to try out a physically based clothing simulation."

The movement of clothing is computed as a solution to partial differential equations, he said. "You start with individual threads. What are their basic properties? Then you consider the bulk properties when [they're] woven. The main physical effects are stretching, shearing, and bending. To a certain degree, you can take real cloth and get actual measurements."
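As a toy illustration of the "start with individual threads" idea, the sketch below simulates a single chain of particles joined by stretch springs; the stiffness, mass and time step are invented values, and a production cloth solver would add the shear and bend terms and integrate the full equations implicitly rather than with this simple stepper.

```python
import numpy as np

# Toy 1-D "thread": a chain of particles joined by stretch springs, pinned at
# one end and pulled down by gravity.
n = 20                                   # particles along the thread
rest = 0.05                              # rest length between neighbours (m)
k = 500.0                                # spring stiffness (illustrative)
mass = 0.01                              # per-particle mass (kg)
dt = 1e-3                                # time step (s)
gravity = np.array([0.0, -9.81])

pos = np.stack([np.arange(n) * rest, np.zeros(n)], axis=1)
vel = np.zeros_like(pos)

for _ in range(2000):                    # semi-implicit (symplectic) Euler
    force = np.tile(gravity * mass, (n, 1))
    for i in range(n - 1):               # Hooke's law on each stretch spring
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length
        force[i] += f
        force[i + 1] -= f
    vel = 0.999 * (vel + dt * force / mass)   # mild damping for stability
    vel[0] = 0.0                         # pin the first particle in place
    pos = pos + dt * vel

print("free end of the thread:", pos[-1])
```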

While animating clothing still presents problems, he said, “it's now part of a standard bag of tricks. Our simulations have become accurate enough that we can design garments with commercially available pattern-making software and then have them move largely as a tailor would expect in our virtual simulations."

Animating hair "is in many ways easier than clothing because it's like individual threads,” Kass said. “The difference is that clothing doesn't move like clothing unless the threads interact. In a real head of hair, the threads do interact, but you can get convincing motion without taking that into account."

Illumination is another area in which physics plays a key role in animation. For a long time, says Cornell University's Steve Marschner, "rendering skin was hard. It would look waxy or too smooth." The fix, he says, was to take into account that skin is translucent, which he and colleagues "figured out from looking at a different problem—rendering marble."

As with simulations of fluids, cloth, rigid bodies, and so on, incorporating translucency to model skin involves old physics. "In some cases we have to create the models from the ground up. But sometimes we find somebody in another branch of physics who has solved a similar problem and we can leverage what they've done." For skin translucency, "we were able to adapt a solution from medical physics, from a calculation of radiation distributions inside the skin that was used for laser therapy in skin diseases."

"One of the coolest things you see in a movie is when there is some sort of otherworldly beast or digital character that is sitting in the scene, roaming around, and it looks like it was really there," says Debevec. "The only way you can do that is by understanding the physics of light transport, respecting how light works in the real world, and then using computers to try to make up the difference from what was really shot."

For example, he says, in Narnia "they filmed a lot of it with the children dressed up in their knight costumes and left an empty space for the lion." Then, to get the digital lion just right, "Rhythm and Hues Studios used radiometrically calibrated cameras to measure the color and intensity of illumination from every direction in the scene." The measurements, he adds, "are fed into algorithms that were originally developed in the physics community and have been adapted by the computer graphics community as a realistic way to simulate the way light bounces around in the scene.”

Similar methods are used for creating digital doubles—virtual stunt characters that fill in for live actors. For that, Debevec said, "film studios sometimes bring actors here to our institute, where we've built devices to measure how a person or object, or whatever you stick in [the device], reflects light coming from every possible direction.” The resulting data set, he says, can be used to simulate a virtual version of the person. "There are about 40 shots of a digital Alfred Molina playing Dr. Otto Octavius in Spider-Man 2. It looks like him, but it's an animated character. The reflection from the skin looks realistic, with its texture, translucency, and shine, since it's all based on measurements of the real actor."

"We rarely simulate more than two indirect bounces of illumination, whereas in reality light just keeps bouncing around," Debevec continued. "With no bounces, things look way too spartan and the shadows are too sharp. One bounce fills in perhaps three-quarters of the missing light, and with two bounces you're usually past 95%. That's good enough." Another shortcut, he adds, is to focus just on the light rays that will end up at the eye. "We try to figure out the cheats you can make that give you images that look right."

"There is a long tradition of cheating as much as possible," said Marschner, "because setting up an exact simulation is either not possible or too expensive." “We use physics to get realism,” Trojansky said. "But I am a physics cheater. I use it as a base, but I am interested in the visual effect."

Sent by : Kasparas Ž.


Sophisticated - (of a machine, system, or technique) developed to a high degree of complexity

Computation - the action of mathematical calculation.

Fracturing - break or cause to break.

Discrete -individually separate and distinct.

Finite - having limits or bounds.

Bulk - be or seem to be of great size or importance.

Shearing - break off or cause to break off, owing to a structural strain.

Bending - shape or force (something straight) into a curve or angle.

Garments - an item of clothing.

Translucence- if an object or a substance is translucent, it is almost transparent, allowing some light through it in an attractive way


Introducing the World-Changing Ideas Summit 2014

The World-Changing Ideas Summit will be held in New York on 21 October

We are thrilled to announce the launch of our first major live event aiming to explore what’s possible in the future – the World-Changing Ideas Summit in New York on 21 October. Here’s what we have in store, who will be there, and how you can take part.

If you’ve seen our World-Changing Ideas series, you’ll know we’re not afraid to push the boundaries of current knowledge – whether it is devising smart organs that respond to the way we live, or conceiving a day where everyone owns a flying car.

Now, we are taking this concept one exciting step further by hosting the first World-Changing Ideas Summit in New York on 21 October. For one day, we have assembled a diverse group of the world’s best thinkers and innovators to explore ambitious ideas and major challenges that could have the biggest impact on the next generation – including:

Larry Burns, former head of R&D at General Motors
Missy Cummings, pilotless planes pioneer from Duke University
Kate Darling, who studies the ethics of robots at MIT
Julius Genachowski, former Chairman of the Federal Communications Commission
Jeffrey Hoffman, retired astronaut at MIT
Alexis Ohanian, a partner at Y Combinator and co-founder of Reddit
Alfred Spector, Head of Research at Google.

You can find the full list of confirmed speakers here.

Speakers will present their big ideas, debate major issues we face, and discuss how to overcome our biggest challenges in the worlds of science, technology, and health. Where else will you hear ideas ranging from how we can feasibly colonise other planets to how your phone could help save your life? Or discover how the internet could evolve, and why the future of autonomous vehicles isn't already here?


If you are unable to attend, don’t feel as though you’ll be missing out. We will be posting videos and other highlights from the event on our pages. Plus you can follow and comment on events as they unfold on our Twitter and Facebook channels, through the hashtag #WCIS2014, or follow our live social feed of the event on this site.

Our tagline for this event is “Think Big. Transform Tomorrow”. We want this summit to inspire people to expand their horizons of possibilities, spur thoughtful debate on the challenges that lie ahead, and cultivate the big ideas needed to spark change and develop solutions for the future. We hope you can join us.

Here are the limits of Apple's iOS 8 privacy features

The privacy improvements in the latest version of Apple's mobile operating system provide necessary, but limited, protection to customers, experts say.

With the release of iOS 8 this week, iPhones and iPads configured with a passcode will encrypt most personal data, making it indecipherable without the four-digit passcode.

By tying the encryption key to the passcode and making sure the key never leaves the device, Apple placed the burden on law enforcement to obtain a search warrant and go directly to the customer to get data from their device during an investigation.
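Apple's actual design entangles the passcode with a device-unique hardware key that never leaves the Secure Enclave, so the details below are not Apple's implementation; they are only a generic sketch of the underlying idea of deriving an encryption key from a passcode plus a per-device secret, using PBKDF2 from Python's standard library.

```python
import hashlib
import os

def derive_key(passcode: str, device_secret: bytes) -> bytes:
    """Illustrative passcode-to-key derivation (not Apple's scheme).

    A slow key-derivation function plus a per-device secret means the key
    can only be rebuilt on the device itself, and only with the passcode.
    """
    # Arguments: hash name, password, salt, iteration count, key length.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_secret,
                               200_000, 32)

device_secret = os.urandom(32)      # stands in for a hardware-bound secret
key = derive_key("1234", device_secret)
print(key.hex())
```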

"Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data," Chief Executive Tim Cook said on the company's new privacy site. "So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8."

Rival Google reacted quickly to Cook's comments, and announced that it would turn on data encryption by default in the next version of Android. The OS has had encryption as an option for more than three years, with the keys stored on the smartphone or tablet.

On Friday, privacy experts said they supported Apple's latest move, which they viewed as putting more control over personal data in the hands of customers.

"The fact that they (law enforcement) now have to go directly to you, and can't do it without your knowledge, is a huge win for Apple's customers in terms of their privacy and security," Jeremy Gillula, staff technologist at the Electronic Frontier Foundation, said.

However, experts also said the protection had its limits, since customers often store on iCloud a lot of the data encrypted on the device, such as photos, messages, email, contacts and iTunes content.

In addition, information related to voice communications, such as call logs, is stored with the wireless carrier, as well as on the smartphone.


Once in iCloud, law enforcement or government officials investigating national security cases could legally force Apple to hand over the data.

Apple's new privacy mechanism also has a weakness. Plugging the iPhone or iPad into a Mac or Windows PC that has been paired with the device would circumvent the passcode-based encryption.

Unless the devices had been turned off, the password would not be needed to access data from the computers.

"This means that if you're arrested, the police will seize both your iPhone and all desktop/laptop machines you own, and use files on the desktop to dump and access all of the above data on your iPhone," Jonathan Zdziarski, an iOS forensics expert, said in his blog. "This can also be done at an airport, if you are detained."

Without naming Google, Cook made a point to emphasize that Apple's profits depended on selling hardware, not collecting customers' personal information and then selling it to advertisers.

"A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product," Cook said.

The privacy changes came after Apple suffered a black eye this month when cyber-thieves accessed celebrities' iCloud accounts and, in some cases, posted naked photos online. Apple found that the attackers did not compromise iCloud security, but obtained the credentials to the accounts some other way.

Apple beefed up iCloud security recently by introducing two-factor authentication, which was already available to people with an Apple account tied to iTunes and other services.

"Two-step verification is good, and long over-due," Rebecca Herold, a privacy adviser to law firms and businesses, said.

This story, "Here are the limits of Apple's iOS 8 privacy features" was originally published by CSO .

What does Apple Watch do?

Apple's first wearable gadget beams messages, Facebook updates, simplified apps and Siri to our wrists, eliminating the all-too-common need to take out our devices to constantly check notifications.

It's going to become especially convenient to pocket the 4.7-inch iPhone 6 and even bigger 5.5-inch iPhone 6 Plus in your jeans, or to always stow the next 9.7-inch iPad Air 2 in a bag.

Other apps seen in the Apple Watch video include iMessages, Health, Calendar, Weather, Mail, Photos, the Camera's shutter button, Passbook (which now includes Apple Pay) and even Apple Maps for navigation.


The smartwatch also takes cues from the Nike FuelBand SE and other fitness trackers with health sensors and apps, a must for any serious wearable gadget these days.

Sure there are fitness apps on your smartphone, but you're not always carrying your iPhone while tracking your steps and activity. The Apple Watch is better suited for your everyday workout.

Robots Use RFID to Find and Navigate to Household Objects

Vision is, in theory, a great way for robots to identify objects. It works for us humans, so all of the stuff that we have to deal with regularly tends to have distinguishing visual characteristics like pictures or labels. Robot vision can certainly work as a way to identify objects, but it's not easy, and often requires a ridiculous amount of computing power, whether it's on the robot or off in the cloud somewhere. And even then, if the object you want to find is facing the wrong way or behind something else, you're out of luck. So when you think about it, there are two essential pieces to finding things: identification and localization. Vision is often bad at the latter.

Another, much easier way of identifying objects is with RFID tags, because you can use a dirt cheap sensor that's super reliable and doesn't give a hoot what orientation an object is in or how bad the lighting is or anything else. The other nice thing about RFID tags (besides the fact that they're dirt cheap and printable and will never give you false positives) is that you can detect them from far away, also using them for localization at the same time. If you know what you're doing.

Some researchers at Georgia Tech (including Travis Deyle, who writes his own robotics blog) totally know what they're doing, and have published a paper detailing an efficient, reliable way to perform long-distance localization that's basically (and I'm quoting the press release here) "the classic childhood game of 'Hotter/Colder.'"

The idea of using UHF RFID tags for the localization of objects is not a new one, but a lot of energy has been devoted to trying to do it in very complicated ways, involving "explicitly estimating the tag's pose relative to the robot or on a map using Bayesian localization with a data driven sensor model," and assuming "a relatively-uncluttered environment with substantial free space" where "the tag’s orientation and nearby material properties have relatively little variation from place to place."

In other words, these methods do not reliably work in real life.

The Georgia Tech roboticists did away with all of this witchcraft, and instead just outfitted a PR2 with a pair of shoulder-mounted pan/tilt long-range UHF RFID antennas and gave it some simple behaviors to follow. First, the robot wanders around an assigned search area, making notes of wherever it picks up signals from RFID tags. Then it goes to the spot where it got the hottest signal from the tag it was looking for, zeroing in on it based on the signal strength that its shoulder antennas are picking up: if the right antenna is getting a stronger signal, the robot yaws right, and vice versa. A video accompanying the article shows the robot doing this inside a mostly real home.
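In code, the steering rule described above amounts to something like the sketch below; the function name, the deadband value and the example readings are invented for illustration and are not taken from the actual PR2 controller.

```python
def steer_toward_tag(rssi_left: float, rssi_right: float,
                     deadband: float = 1.0) -> str:
    """Yaw toward whichever antenna reports the stronger tag signal.

    The rssi_* arguments are received signal strengths in dBm (higher means
    stronger); the deadband keeps the robot from twitching when the two
    antennas report nearly equal readings.
    """
    if rssi_right - rssi_left > deadband:
        return "yaw_right"
    if rssi_left - rssi_right > deadband:
        return "yaw_left"
    return "drive_forward"

# Example readings (invented): the right antenna hears the tag more strongly.
print(steer_toward_tag(rssi_left=-62.0, rssi_right=-55.0))  # -> yaw_right
```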

And that's really all there is to it. Using this method, the PR2 can find UHF RFID-tagged objects on top of things, under things, behind things, or inside things, reliably and without performing a comprehensive visual search. The hotter/colder search performs just as well as vastly more complicated systems that rely on having all kinds of modeling data, but it's much simpler and easier to implement and actually works outside of a lab (more or less).

Part of what's exciting about robotics is seeing how complex problems can be solved using solutions that leverage complex technology, but in many ways, it's more exciting to see that a simple, straightforward, logical approach works too. RFID-tagging the world to help robots get into our homes may be slightly less streamlined than doing everything through vision, but it has the advantage of being implementable now, and for some people who may need a little extra help, it looks like it could really work.

"Finding and Navigating to Household Objects with UHF RFID Tags by Optimizing RF Signal Strength," by Travis Deyle, Matthew S. Reynolds, and Charles C. Kemp from Georgia Tech, was presented last week at IROS 2014 in Chicago.


Sent by : Martynas R.

Highlights - the best or most exciting, entertaining, or interesting part of something: Highlights of the match will be shown after the news.

Competitors - a person, team, or company that is competing against others: Their prices are better than any of their competitors. How many competitors took part in the race?

Circumvent - to avoid something, especially cleverly or illegally: Ships were registered abroad to circumvent employment and safety regulations.

Obtain - to get something, especially by asking for it, buying it, working for it, or producing it from something else: to obtain permission. First editions of these books are now almost impossible to obtain. In the second experiment they obtained a very clear result. Sugar is obtained by crushing and processing sugarcane.

Credentials - a piece of information that is sent from one computer to another to check that a user is who they claim to be or to allow someone to see information

Distinguishing - a distinguishing mark or feature is one that makes someone or something different from similar people or things: The main distinguishing feature of the new car is its fast acceleration.

Leverage - the relationship between the amount of money that a company owes to banks and the value of the company.

Straightforward - easy to understand or simple: Just follow the signs to Bradford - it's very straightforward.

Slightly - a little

Notifications - the act of telling someone officially about something, or a document that does this: You must give the bank (a) written notification if you wish to close your account.


Article About Education

There's no doubt that technology is the new "panacea du jour" for public education in America today. Hundreds of millions of dollars (and much more on the way) are being spent on getting iPads and other tablets into the hands of teachers and students all over the country in classes as early as kindergarten. This nationwide effort was described in detail in a recent New York Times Magazine article. Many parents are clamoring for it, the U.S. Department of Education is supporting it, and, of course, many of the so-called education technology companies are profiting handsomely from it.

As I read the article, two questions came to mind. First, is there really a public education crisis in America? The answer to this question seems to be an emphatic "YES!" given the popular interpretation of the results of two international achievement tests (PISA and TIMSS). American students, after being at the top for years, have been in a tailspin and now finish in the middle of the pack in tests of math and science when compared to students in other countries.

But when these data are placed under real scrutiny, their conclusions don't stand up to the definition of "crisis," at least not in the way it is usually presented by the many Chicken Littles in public education these days. When the test scores of American students are separated by income level, the true differences and the real problems with public education in the U.S. become clear.

When the scores of American students are broken down by the percentage of students who receive free or reduced lunches, a generally accepted measure of poverty, a very different picture emerges. In schools with less than 10 percent of students dependent on subsidized lunch programs, American students placed in the top five in both math and science. By contrast, and not surprisingly, in schools with a student body in which 50 percent of the students receive free or reduced lunches, their scores are far down in the international rankings.

Anyone who has ever spent time in affluent public schools knows that there is no public education crisis in their schools because they are, in fact, semi-private schools with district foundations that raise upwards of several million dollars a year that go to enrichment programs. A visit to a school that serves disadvantaged students is an entirely different story.

Here are some inconvenient truths. The poor results on the international achievement tests are due to several factors that some people don't like to admit. For example, America has some of the highest poverty rates, far more income inequity, and poorer health care than most of the other developed countries that participate in the testing.

The U.S. also has far more diversity than other countries, with fully 25 percent of public school students being English as a Second Language speakers. Additionally, many other countries engage in cherry-picking, in which the best students are selected early and channeled into competitive educational programs whose students take the international tests, while those who don't perform well are placed in trade schools.


So, America doesn't have a public education crisis. Rather, it has a poverty crisis that manifests itself in the self-perpetuating educational vicious cycle in which most poor children, that is to say, black and Hispanic children, are caught and can't escape.

Which brings me to the second question that I think of when the issue of technology as the new, big thing in public education comes up. What makes people think that technology is the solution to our public education woes? Despite all the talk about how technology can transform education by better engaging students and enabling them to go at their own pace, there is no clear scientific evidence that technology produces better educational outcomes such as improved grades, higher graduation rates, or better preparedness for higher education.

What the data do show in the U.S. is that well-trained teachers are the single greatest school-related contributor to academic success. This finding is affirmed in top-ranked countries such as Finland and South Korea, where the best college graduates become teachers and the profession is well-respected and well-compensated. Yet, we are pouring millions upon millions of dollars into an unproven remedy rather than into a solution that has been verified empirically many times over.

By the way, I hope you saw my distinction above of the "single greatest school-related contributor," because the single most influential factor in the success of students in school is their experiences before they arrive at school. As the University of Chicago economist and Nobel laureate James Heckman has described, children who are raised in disadvantaged homes arrive at elementary school already behind their more affluent peers in academic, cognitive, emotional, and social skill sets, and most are unable to catch up. Yet the amount of money devoted to early childhood education is a pittance compared to what is being thrown at elementary and secondary education by the U.S. Department of Education's Race to the Top program.

Yet, the "faith-based" technology approach to public education reform is moving full steam ahead, particularly for disadvantaged students. There is this fantastical notion that giving poor kids iPads just the rich kids will somehow magically transform them into great students. So, while their schools are crumbling around them, they will have shiny new tablets that will reverse years of physical, psychological, cognitive, and economic neglect.

Speaking of having technology in the lives of children, did you know that young people 8 to 18 years old spend, on average, more than 7.5 hours of their non-school day in front of screens? Moreover, that number is substantially higher for black and Hispanic kids (about 13 hours a day) who, because there isn't affordable child care in America, are often placed in front of a TV or video game console to act as babysitter. Has anyone considered what another, say, five hours of the school day spent in front of a screen is going to do to the development of children's interpersonal, creative, and cognitive skills? Gosh, perhaps the solution to our public education crisis is to remove technology rather than increase its use in children's lives at home and in school.

So who benefits from this rush to get on the technology school bus? Well, the education-technological complex, of course. It is, after all, a $17 billion industry and will only get bigger. Who else? The politicians who push for these "photo-op" solutions because they give the appearance of caring for children and doing something to solve the problem, while not actually doing anything to solve the real problem, which is about poverty and income inequity, not public education.

And who loses from this knee-jerk, "technology is the answer to all of public education's problems" reaction? As usual, it's those who deserve it the least, the children who are already behind and are just waiting for someone to come up with a solution to America's education problems that actually works.

Sent by: Deividas G.

Emphatic – forceful; expressed or performed with emphasis.
Subsidized – furnished or aided with a subsidy.
Affluent – wealthy; having an abundance of money or property.
Enrichment – an act of enriching.
Inequity – lack of equity; unfairness.
Cognitive – of or pertaining to the act or process of knowing, perceiving, or remembering.
Pittance – a small amount or share.
Neglect – to pay no attention or too little attention to.
Affordable – able to be afforded; believed to be within one's financial means.


Security and Privacy in Cloud Computing

Significant research and development efforts in both industry and academia aim to improve the cloud’s security and privacy. The author discusses related challenges, opportunities, and solutions.

The cloud has fundamentally changed the landscape of computing, storage, and communication infrastructures and services. With strong interest and investment from industry and government, the cloud is being increasingly patronized by both organizations and individuals. From the cloud provider's perspective, cloud computing's main benefits include resource consolidation, uniform management, and cost-effective operation; for the cloud user, benefits include on-demand capacity, low cost of ownership, and flexible pricing. However, the features that bring such benefits, such as sharing and consolidation, also introduce potential security and privacy problems. Security and privacy issues resulting from the illegal and unethical use of information, and causing disclosure of confidential information, can significantly hinder user acceptance of cloud-based services. Recent surveys support this observation, indicating that security and privacy concerns prevent many customers from adopting cloud computing services and platforms.

In response to such concerns, significant research and development efforts in both industry and academia have sought to improve the cloud's security and privacy. Here I give a quick (and incomplete) overview of new challenges, opportunities, and solutions in this area, with the purpose of stimulating more in-depth and extensive discussion on related problems in upcoming issues of this magazine.

Identifying New Threats and Vulnerabilities

An essential task in cloud security and privacy research is to identify new threats and vulnerabilities that are specific to cloud platforms and services. Several recent reports have explored such vulnerabilities. For example, in 2009, researchers from the University of California, San Diego, and the Massachusetts Institute of Technology demonstrated leakage attacks against Amazon's Elastic Compute Cloud (EC2) virtual machines (VMs).1 More specifically, the researchers showed that it's possible to probe and infer the overall placement of VMs in the EC2 infrastructure. Furthermore, an attacker can launch a malicious EC2 instance and then determine whether that instance is physically colocated with a targeted (victim) instance. When the attacker's instance is successfully colocated with the victim, it can launch a side-channel attack by monitoring the status of shared physical resources such as level-1 and level-2 caches, and thus infer the victim's computation and I/O activities.

A follow-up study showed that it's possible to extract private keys via the cross-VM side channel in a lab environment.2 In another study, researchers from the College of William and Mary reported that side-channel attacks aren't just a potential risk, but a realistic threat.3 They created a covert channel via another shared resource (the memory bus) that had a level of reliability and throughput of more than 100 bps in both lab and EC2 environments.

These risks represent a small subset of known cloud-specific vulnerabilities and threats. However, they motivate us to think further about new adversary models, trust relations, and risk factors relative to cloud computing stakeholders. In the examples, the cloud provider isn't trusted because of its resource sharing and VM consolidation practices. Hence, the cloud provider doesn't provide a desirable level of isolation and protection between tenants in the cloud, allowing them to attack each other.

Protecting Virtual Infrastructures

Virtual infrastructures are infrastructure-level (virtual) entities, such as VMs and virtual networks, created in the cloud on behalf of users. Side-channel attacks target these virtual infrastructures. Researchers have proposed several solutions to defend against cross-VM side-channel attacks. Düppel, for example, aims to disrupt cache-based side channels. In this self-defensive approach, the target VM's guest operating system injects cache access noise (that is, flushes) so the collocated attack VM can't infer cache access patterns.4 This solution doesn't require modifying the underlying hypervisor or cloud platform. To defend against memory bus-based side channels, a simple and practical approach is to prevent a VM from locking the memory bus and let the hypervisor emulate the execution of atomic instructions that would otherwise require memory bus locking.5

Other attacks against virtual infrastructures include malware attacks against tenant VMs. The cloud presents a new opportunity to defend against these attacks. More specifically, the cloud provides a uniform and tamper-resistant platform to deploy system monitoring and antimalware functions. The uniformity is reflected by the cloud provider's consistent installation, configuration, and update of antimalware services for all hosted tenants. It's tamper resistant because monitoring and detection of malware attacks can be performed from outside the hosted VMs, either by the underlying hypervisor or by the more privileged management domain (for example, Domain 0 of Xen). In CloudAV, a production-quality system that reflects the antivirus-as-a-service idea, a group of in-cloud antivirus engines analyzes suspicious files submitted by agents running in client machines (including VMs) and collectively detects malware in them.6 VMwatcher, a virtualization-based malware-monitoring and detection system, moves commodity, off-the-shelf antimalware software from the inside to the outside of each tenant VM.7 This way, the antimalware software is out of the malware's reach, preventing the malware from detecting, disabling, or tampering with it. Malware targeting a tenant VM—at either the user or kernel level—can be detected and prevented using such an "out-of-the-box" antimalware service.

A networked virtual infrastructure can consist of multiple VMs connected by a virtual network. With the rapid advances in software-defined networking (SDN), the cloud increasingly supports such networked virtual infrastructures. SDN decouples the control and data-forwarding functions of a physical networked infrastructure, such as a datacenter network. The SDN control plane performs control functions such as routing, naming, and firewall policy enforcement, and the SDN data plane follows the control plane's decisions to forward packets belonging to different flows. Such decoupling makes it easy to optimize the control and data planes without them affecting each other. However, the SDN paradigm raises security issues. Researchers have reported that it's possible to launch attacks against the SDN architecture, incurring excessive workload and resource consumption to both the control and the data plane.8 Although researchers are developing defenses against such attacks, we need more generic, scalable solutions that make the SDN architecture secure, robust, and scalable, which would support virtual infrastructure hosting in the cloud.
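To make the control/data-plane decoupling concrete, here is a minimal, hypothetical sketch in Python (not tied to OpenFlow or any real SDN controller): a controller object computes and installs forwarding rules, while a switch object only matches packets against its flow table. All class, rule, and port names are invented for illustration.

# Minimal illustration of SDN-style decoupling: the controller decides,
# the data plane only matches and forwards. All names are hypothetical.

class Controller:
    """Control plane: computes and installs forwarding rules."""
    def __init__(self):
        self.policy = {}          # dst prefix -> output port

    def install_rules(self, switch):
        # In a real SDN this would be routing / firewall logic pushed over a
        # southbound protocol; here we simply copy a static policy downward.
        for prefix, port in self.policy.items():
            switch.flow_table[prefix] = port

class Switch:
    """Data plane: forwards packets by flow-table lookup only."""
    def __init__(self):
        self.flow_table = {}      # dst prefix -> output port

    def forward(self, dst_ip):
        prefix = ".".join(dst_ip.split(".")[:3])    # toy /24 match
        return self.flow_table.get(prefix, "drop")  # unknown flows are dropped

ctrl = Controller()
ctrl.policy = {"10.0.1": "port-1", "10.0.2": "port-2"}
sw = Switch()
ctrl.install_rules(sw)

print(sw.forward("10.0.1.7"))     # -> port-1
print(sw.forward("192.168.0.5"))  # -> drop (no rule installed)

Because the switch never makes decisions of its own, the two halves can be optimized or replaced independently, which is exactly the property the decoupling is meant to provide.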

Protecting Outsourced Computation and Services

Many organizations have been increasingly outsourcing services and computation jobs to the cloud. A client that outsources a computation job must verify the correctness of the result returned from the cloud, without incurring significant overhead at its local infrastructure—the extreme being to execute the job locally, which would nullify the benefit of outsourced job execution. Such verifiability is important to achieving cloud service trustworthiness and hence has become a topic of active research. Encouragingly, researchers have in recent years developed techniques and real systems to bring the vision of a "verifiable cloud service" closer to reality. For example, the Pantry system composes and outsources proof-based verifiable computation with untrusted storage.9 It achieves theoretically sound verifiability of computation for realistic cloud applications, such as MapReduce jobs and simple MySQL queries.

In addition to computation outsourcing, the cloud can support network service/function outsourcing. Example network functions include traffic filtering, transcoding, firewall policy enforcement, and network-level intrusion detection. Seyed Kaveh Fayazbakhsh and his colleagues noted that, similar to computation outsourcing, a major challenge is to verify (at end points of network connections) that the "middle boxes" in the cloud correctly execute outsourced network functions with satisfactory performance.10 They also proposed a framework for verifiable network function outsourcing (vNFO) that aims to achieve verifiability, efficiency, and accountability of outsourced network functions. Such a framework will pave the way for deploying trusted network middle boxes, in addition to end points (that is, VMs), in the cloud, enriching the cloud ecosystem.

Protecting User Data

User data is another important cloud "citizen." To protect user data in the cloud, a key challenge is to guarantee the confidentiality of privacy-sensitive data while it's stored and processed in the cloud. This problem assumes a somewhat different trust model, in which the cloud is not fully trusted because of operator errors or software vulnerabilities. As a result, the cloud provider shouldn't be able to see unencrypted or decrypted sensitive data during the data's residence in the cloud. (In other words, sensitive data should remain encrypted while in the cloud.) However, such a requirement can limit the usability of (encrypted) data when a cloud application processes it. Fortunately, researchers at the University of California, Santa Barbara, observed that many cloud applications can process encrypted data without affecting the correctness of the data execution. These researchers proposed Silverline, which identifies data that the application can properly process in encrypted form.11 Such data will remain encrypted and hence maintain its confidentiality to the cloud provider. The cloud user will perform data decryption locally once the encrypted data is returned from the cloud as application output.
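The pattern described here, keeping sensitive data encrypted while it sits in the cloud and decrypting it only on the client, can be sketched roughly as follows. This is a minimal illustration using the third-party cryptography package's Fernet recipe, not the Silverline system itself; the "cloud store" is just a dictionary standing in for a storage service.

# Sketch of client-side encryption: the cloud only ever sees ciphertext.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

cloud_store = {}                      # stand-in for a cloud storage service

# Client side: the key never leaves the client.
key = Fernet.generate_key()
f = Fernet(key)

record = b"patient=J.Doe; diagnosis=..."
cloud_store["record-42"] = f.encrypt(record)      # upload ciphertext only

# The provider can store or replicate the blob, but cannot read it.
assert b"J.Doe" not in cloud_store["record-42"]

# Client side again: download and decrypt locally, as application output.
plaintext = f.decrypt(cloud_store["record-42"])
print(plaintext.decode())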

In-cloud data confidentiality poses even greater challenges. For example, even if the application data is encrypted, the access patterns exhibited by the corresponding applications can reveal sensitive information about the nature of the original data, weakening the data's confidentiality. Hence a challenge is to achieve confidentiality of data access patterns in the cloud—a problem called oblivious RAM (ORAM). Recently, researchers reported a breakthrough in achieving both practical and theoretically sound ORAM.12 The solution, called Path ORAM, is elegant by design and efficient in practice.12 In fact, Path ORAM has been implemented as part of a processor prototype called Phantom,13 which achieves realistic performance for real-world applications. This is a significant step toward ultimate deployment of ORAM-enabled machines for sensitive data processing in the cloud.

Securing Big Data Storage and Access Control

In the recent past, more research has focused on cloud-based big data applications. Many consider the cloud to be the most promising platform for hosting, collaborating on, and sharing big data. The challenge is to secure the storage and access to this data to preserve its integrity, confidentiality, authenticity, and nonrepudiation while facilitating availability.

Interesting solutions to increase the accountability of data sharing have been proposed for cloud-based distributed systems. Smitha Sundareswaran and his colleagues, for example, proposed a decentralized accountability framework with logging capabilities using the programmable capabilities of Java Archive files.14 The advent of many types of big data, such as electronic health records and sensor data, has spurred research on secure access and sharing with greater accountability. Recently, researchers have proposed solutions for increasing accountability and secure access to cloud-based health data,15 as well as robust cryptographic access control methods to increase the storage security of privacy-sensitive big data. Guojun Wang and his colleagues proposed hierarchical attribute-based cryptography to facilitate secure access to users in large-scale cloud storage systems.16 More recently, researchers have designed more advanced solutions (for example, homomorphic cryptography17) for secure cloud-based storage systems to facilitate secure distributed access.

Given emerging trends in big data, we need more research on efficient, scalable, and accountable privacy-preserving mechanisms that can address application-specific requirements.

Call for Contributions

The magazine welcomes articles that discuss new challenges, opportunities, and solutions in the area of cloud security and privacy—in particular, articles that relate to data, storage, computation, and communication. Enabling techniques include cryptography, virtualization, data management and analytics, software-defined networking, fault tolerance and recovery, and forensics. I'd like to hear from practitioners about their lessons and experience in developing, deploying, and using cloud security and privacy solutions and services. I also welcome reports from academia on cutting-edge research and development, new vulnerabilities and challenges, and new or even controversial ideas and visions.

Sent by: Rytis B.

outsourcing (užsakomųjų) – a practice used by companies to reduce costs by transferring portions of work to outside suppliers rather than completing it internally.
disclosure (atskleidimas)
virtual machine (VM) – in computing, a virtual machine is an emulation of a particular computer system.
throughput (pralaidumas)
adversary (priešininkas, konkurentas)
stakeholders (suinteresuotosios)
hypervisor – a hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware, or hardware that creates and runs virtual machines.
decoupling (atsiejimas)
incur (patirti)
transcoding (perkodavimas) – the direct analog-to-analog or digital-to-digital conversion of one encoding to another.
facilitate (palengvinti) – to make easier.
forensics (teismo ekspertizės)


A Review of ZeroAccess peer-to-peer Botnet

Ms. Cheenu, M.Tech, Graphic Era Hill University, Dehradun, India

Abstract— Today, ZeroAccess is one of the most widespread threats on the internet. The total number of infected systems is in the tens of millions, a large share of them active infections. ZeroAccess is a peer-to-peer botnet that affects Microsoft Windows operating systems. It works as a platform and is used to download other malware onto an infected machine from the botnet. ZeroAccess is mostly implicated in bitcoin mining and click fraud, while remaining hidden on a system using rootkit techniques. In this survey, we explain the evolution of the ZeroAccess botnet and its life cycle, and conclude with the challenges it poses.

Keywords— Botnet, ZeroAccess botnet, Command and Control (C&C).

INTRODUCTION

A bot is a malicious software instance that runs automatically on a compromised computer system without the user's permission. Bot programs are written professionally by groups of criminals. A group of bots is known as a botnet and is under the control of an attacker. A botnet is a collection of compromised computer systems (zombies) receiving and responding to commands from a server that serves as a rendezvous mechanism for commands from a botmaster.

To evade detection, the botmaster can optionally employ a number of proxy machines, called stepping stones, between the command and control infrastructure and itself [1].

Figure 1: Botnet Taxonomy [1]

In a centralised C&C architecture, bots contact the C&C server in order to receive information from the botmaster. Generally, little time is spent transmitting a message from the botmaster to all the bots, and this is one of the major advantages of this architecture. Its disadvantage is that the C&C server constitutes a single point of failure: if the server shuts down, the complete network is shut down. Examples of centralised botnets include Eggdrop [2], GT-Bot and Agobot [3], and Bobax [4].

In a distributed architecture, all the bots in the botnet act simultaneously as servers and clients. This avoids the existence of a single point of failure, so this kind of botnet is more resistant than a centralised one. However, the time required for a message to reach all the nodes is much greater than in the centralised case. Examples of distributed botnets include Spybot [5], Storm [6], Nugache [7], and Phatbot [8].

Finally, hybrid botnets combine the advantages of the two previous architectures. In this case, there exist one or more distributed networks, each with one or more centralised servers. The disconnection of one of these servers implies, in the worst case, the fall of one of the distributed networks, allowing the rest of the botnet to continue its normal operation. Some examples of hybrid botnets are Torpig [9], Waledac [10], and a new botnet design proposed by Wang et al. in [11]. More recent and sophisticated hybrid botnets include Alureon/TDL4 [12] and Zeus-P2P.

Table 1: Comparison of P2P botnets

Botnet             Year       C&C architecture
Slapper [13]       Sep 2002   P2P protocol
Spybot [5]         Apr 2003   Decentralized
Sinit [14]         Sep 2003   Decentralized, P2P protocol
Nugache [7]        Apr 2006   Decentralized, custom protocol
Phatbot [8]        Sep 2004   WASTE P2P
Minor botnet [15]  Dec 2010   Centralized and distributed, P2P protocol
Kelihos [16]       Dec 2010   Decentralized, P2P protocol
ZeroAccess [17]    Sep 2011   Decentralized, P2P protocol
TDL-4 [12]         2011       Kad network
THOR               Mar 2012   Decentralized

The key feature of a botnet is its C&C communication, which falls into the three categories described above: centralised, distributed (P2P), and hybrid.

ZeroAccess (ZA), also known as max++ or Sirefef [17], is a peer-to-peer botnet and Trojan malware that affects Windows operating systems. A ZeroAccess-infected machine connects to the P2P network and downloads plugins and other malware from the botmaster. The botnet is mostly implicated in bitcoin mining and click fraud, while remaining hidden on a system using rootkit techniques. Its size is approximately tens of millions of infected computers, a large share of them active infections [18]. It utilizes a peer-to-peer (P2P) mechanism for communication [19] and is used to make money for the botmaster through click fraud (pay-per-click (PPC) advertising) and bitcoin mining.

EVOLUTION OF ZEROACCESS BOTNET

There are two different versions of ZeroAccess: an old version (V1) and a new version (V2). Each version of the botnet has a 32-bit and a 64-bit variant, so there are four botnets in total. The old version (V1) of ZA was discovered in May 2011 [17]. The new version (V2), which saw a major redesign of the Trojan's internals, emerged in the summer of 2012. As of September 2013, ZeroAccess V2 was the most recently modified and most widespread version of the Trojan [20].

Both versions of ZeroAccess communicate with a P2P network on a set of designated ports used to distinguish 32-bit and 64-bit infections. The designated ports for V1 are 21810, 22292, 34354, and 34355 for 32-bit infections, and 21810 and 25700 for 64-bit infections. The designated ports for V2 are 16464 and 16471 for 32-bit infections, and 16470 and 16465 for 64-bit infections. Each botnet communicates with its infected peers independently of the others.

The following four ZeroAccess variants have been observed:
• ZeroAccess V1 – Variants I, II, and III
• ZeroAccess V2 – Variant IV

These variants have different characteristics, which are described in detail in the following sections. In May 2011, the first variant of ZeroAccess was released. It included a rootkit component that was installed as a kernel driver. The kernel driver used a hidden volume, formatted as an NTFS file system, that contained the core components of ZeroAccess. Variant I also included a tripwire driver, which was used to detect characteristic behavior of antivirus scanners. In particular, it would check processes for unusually high access counts to the registry: if a process looked at more than fifty service registry key entries in a short period, the process was suspended [21].

The second variant of ZeroAccess was released in July 2012. It also used the rootkit component but changed the way it hides the core components, and it still included the tripwire driver. The kernel driver used hidden files to provide access. These hidden files are now stored in %Windir%\$KBUnstall[FIVE DIGIT RANDOM NUMBER]$\.

Later in 2012, another variant of ZeroAccess (Variant III) was released. Like Variant II, it used the rootkit component and hid the core components in the same way, but it removed the tripwire driver. The kernel driver again used hidden files to provide access, stored in %Windir%\$KBUnstall[FIVE DIGIT RANDOM NUMBER]$\ as in Variant II.

The fourth variant of ZeroAccess was released in July 2012 and underwent a major overhaul: the attackers shifted code from kernel mode to user mode [22]. With this evolution came a new communication protocol. In this variant, the C&C protocol also moved away from TCP and instead favoured UDP, a more efficient alternative for this kind of communication. Because of this major design change, this release of ZeroAccess became known in the security community as version two (V2) and is referred to as Variant IV in this paper. It is still the most widespread version of ZeroAccess.


In V1, P2P communication was through TCP [23]. Since the release of V2, communication has moved to UDP [19]. In V2’s release, the command set was also reduced. Coupled with UDP, this further enhanced the efficiency and resiliency of the communication protocol [24].

Table 2: Differences between V1 and V2

Feature               ZeroAccess Version 1 (V1)        ZeroAccess Version 2 (V2)
Year                  September 2011                   April 2012
Command and control   P2P                              P2P
Encryption            RC4                              XOR
Carrier protocol      TCP                              UDP
Size                  ~30,000                          ~10 million
Application           Click fraud, spamming            Click fraud, bitcoin mining
Detection             Antivirus, security vendors      Antivirus, security vendors
Memory residence      Rootkit, payload                 Recycle Bin
Port                  Hardcoded                        Hardcoded
Infection vector      Exploit kit, drive-by download,  Exploit kit, drive-by download,
                      and social engineering           and social engineering

The modified P2P functionality of ZeroAccess V2 makes its P2P network more flexible and robust against outside manipulation. ZeroAccess V2 (2012) uses the newL command to share super-node IP addresses directly amongst its peers. When a peer receives a newL command, it adds the IP address embedded in the command to its internal peer list. The peer also forwards the newL command to other peers it knows about, magnifying the message's effect. By crafting a newL command and sending it to a ZeroAccess peer, it might therefore be possible to introduce a rogue IP address into an infected peer's internal list and have that rogue newL command distributed to other ZeroAccess peers. Thanks to the newL command and the small, fixed length of the internal peer list, parts of the botnet have been sinkholed [25]. However, the modified P2P protocol of ZeroAccess V2 (2013) removed the newL command and allows the botnet to filter out rogue IP addresses [26].
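To see why such poisoning was possible, here is a toy simulation of my own (not ZeroAccess code): each peer keeps a small, fixed-length internal list, and a crafted newL message inserts an attacker-controlled address that displaces a legitimate entry and is forwarded one hop onwards. The list size, addresses, and single-hop forwarding are all simplifying assumptions.

# Toy model of newL-style peer-list poisoning in a fixed-length list.
# Purely illustrative; the real ZeroAccess list format and timestamps differ.
import random

LIST_SIZE = 16                       # small, fixed-length internal peer list

class Peer:
    def __init__(self, name, peers):
        self.name = name
        self.peer_list = list(peers)[:LIST_SIZE]

    def receive_newL(self, new_ip, network, forwarded=False):
        if new_ip not in self.peer_list:
            if len(self.peer_list) >= LIST_SIZE:
                self.peer_list.pop()            # displace an existing entry
            self.peer_list.insert(0, new_ip)    # rogue entry now looks fresh
        if not forwarded:                       # forward one hop, magnifying the effect
            for name in self.peer_list:
                if name in network and name != new_ip:
                    network[name].receive_newL(new_ip, network, forwarded=True)

# Build a small honest network, then inject one rogue newL message.
names = [f"10.0.0.{i}" for i in range(1, 21)]
network = {n: Peer(n, random.sample(names, LIST_SIZE)) for n in names}

network["10.0.0.1"].receive_newL("203.0.113.66", network)   # attacker/sinkhole IP

poisoned = sum("203.0.113.66" in p.peer_list for p in network.values())
print(f"{poisoned}/{len(network)} peers now list the rogue address")

Because the list is short and newest entries go to the front, a single injected message spreads quickly, which is the property sinkholing efforts exploited.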

LIFE CYCLE OF ZEROACCESS BOTNET

The following stages of the ZeroAccess botnet life cycle indicate how the botnet spreads its infection and propagates.

A. Stage one: Conception

Conception is the first stage of the life cycle of the ZeroAccess botnet, or of any other botnet. It is important to understand the causes underlying botnet creation, and the common architectures and designs employed.

The first stage of the botnet life cycle can be divided into three phases: motivation, design, and implementation.

1) Motivation: Initially, the attacker needs a good reason to create a botnet. The motivation behind the ZeroAccess botnet is to make money for the attacker through click fraud (pay-per-click advertising) and bitcoin mining. The Symantec Global Internet Security Threat Report [27] shows a detailed catalogue of prices for botnet services. According to Symantec, the ZeroAccess botnet consists of more than 1.9 million infected computers and is used mainly to perform click fraud and bitcoin mining in order to generate revenues estimated at tens of millions of dollars per year. Machines involved in bitcoin mining generate bitcoins for their controller, worth an estimated 2.7 million US dollars per year as of September 2012 [19]. The machines used for click fraud simulate clicks on website advertisements paid for on a pay-per-click basis. The estimated profit for this activity may be as high as 100,000 US dollars per day [28], costing advertisers some 900,000 US dollars a day in fraudulent clicks [29].

2) Design: The ZeroAccess botnet uses a distributed architecture in which all the bots act simultaneously as servers and clients. This avoids the existence of a single point of failure, so this kind of botnet is more resistant than a centralised one, although the time required for a message to reach all the nodes is much greater. According to Symantec, no C&C server exists for ZeroAccess, which poses a major challenge for anybody attempting to sinkhole the botnet [20]. Many botnets present a distributed structure, including Spybot, Storm, Nugache, and Phatbot.

3) Implementation: Once the botnet has been conceptually conceived and designed, the last process involved in this stage concerns the implementation of the architecture. This task does not present special characteristics, and can be performed following a traditional software development process.


B. Stage two: Recruitment

Recruitment is the second stage of the ZeroAccess botnet's life cycle. The implemented botnet software must be deployed for operation in a real environment. For this purpose, bots must be recruited; indeed, the botmaster's aim is to recruit as many as possible. Note that this question is not unique to botnets, but is found in many cyber attack techniques. Recruitment is also known as infection or propagation.

In the ZeroAccess botnet, the bot itself is spread through the ZeroAccess rootkit via a variety of attack vectors. The first infection vector is a form of social engineering, where a user is duped into executing malicious code, either by disguising it as a legitimate file or by hiding it as an additional payload in an executable that announces itself as, for example, a tool for bypassing copyright protection (a keygen). A second infection vector utilizes an advertising network in order to have the user click on an advertisement that redirects them to a site hosting the malicious software itself. A third infection vector is an affiliate scheme in which third parties are paid for installing the rootkit on a system [23].

The attacker uses an exploit machine to send a fake email notification, containing an attachment or a malicious URL, to the victim machine. When the victim opens the attachment or follows the URL, the machine is redirected first to a compromised site and then to a malicious site hosting the Blackhole exploit kit [30]. The Blackhole exploit kit determines a software vulnerability and drops the malware. The malware then opens a back door and connects to a command and control (C&C) server, which allows the remote attacker access to the victim machine.

Figure 2: Total number of infected machines [18]

4) Installation: ZeroAccess randomly selects one of the computer's start-up drivers and replaces it with a completely different malware driver, setting its length to that of the original driver. The operating system loads the malware driver instead of the original during system startup. The malware driver then loads the original driver from its hidden store and reloads the I/O database to exchange places with the original driver, so that the original driver works normally. Once active, the driver intercepts disk operations at the lowest layers of the storage stack to present a view of the replaced driver that appears normal and cannot be detected by scanning the file. The driver also registers a shutdown handler, which repairs the malware components on disk: even if the malware's images and registry entries are removed, they are restored during shutdown.

Variant III infections also come with a malicious shell image [23]. The registry holds an entry similar to:
• HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ "Shell" = "C:\Documents and Settings\Administrator\Local Settings\Application Data\(any 8-character string)\X"
The image at that path is executed when a user logs in and is capable of re-infecting the system even if the original threat is removed.

Variant IV infections do not contain any kernel components. They are launched from hijacked COM registration entries [19]. For example, HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{F3130CDB-AA52-4C3A-AB32-85FFC23AF9C1} is changed from C:\WINDOWS\system32\wbem\wbemess.dll to \\.\globalroot\systemroot\Installer\{[RANDOM GUID]}\.

Under HKEY_CURRENT_USER\Software\Classes\clsid\{42aedc87-2188-41fd-b9a3-0c966feabec1} a new key is added:
• %User Profile%\Local Settings\Application Data\{[RANDOM GUID]}\

Under HKEY_CLASSES_ROOT\clsid\{42aedc87-2188-41fd-b9a3-0c966feabec1} a new key is added:
• %User Profile%\Local Settings\Application Data\{[RANDOM GUID]}\n

On Windows 7 and Vista, the malware also infects the system service image:
• %System%\services.exe

This file is critical to system operation and cannot be removed without rendering the system unbootable.

C. Stage three: Interaction

This stage refers to all the interactions performed during the botnet operation, including the orders sent by the attacker, the messages interchanged between nodes, external communications from the attacker to monitor P2P network information, and the communications of nodes with external servers.

A computer infected with ZeroAccess works as both a client and a server in the peer-to-peer network. The infected computer connects to the peer-to-peer (P2P) network to spread monetization payloads and circulate active peer IP addresses. These monetization payloads carry out the payload functionality and typically perform tasks such as click fraud and bitcoin mining. The payload of the whole botnet can be altered by seeding new plug-in files into the P2P network.

On the other hand, numerous company and home networks make use of Network Address Translation (NAT) when connecting to the Internet. This results in the local IP address of the computer being different from the IP address that the computer appears to have on the Internet. Without explicit port forwarding configured at the firewall or router, inbound connections will not reach a ZeroAccess-infected computer that uses NAT to connect to the Internet, as shown in Figure 3.

Figure 3: Peer-to-peer protocol overview

Super nodes connect to each other and accept connections from normal nodes, but normal nodes only connect to super nodes and cannot be contacted directly. The botmaster dictates what the botnet does by seeding new files onto the super nodes they manage. Super nodes perform the same operations as normal nodes, but they are also responsible for distributing files and IP addresses throughout the botnet. Without super nodes, the peer-to-peer principle would fail [20].


5) C&C communication: When a system is infected by ZeroAccess, it has two main objectives: first, to spread IP addresses, and second, to spread files across the network quickly. Initially, each node maintains a list of 256 IP addresses taken from a dropped file named "@" that is stored in the ZeroAccess folder. ZeroAccess attempts to reach out to each of these IP addresses in succession in order to establish a connection with a peer, retrieve the latest peer list, and join the network. This is done using three commands: getL, retL, and newL.

Table 3: Version 1 and Version 2 supported commands and their descriptions

P2P protocol version V1:
  getL – Request for the peer list.
  retL – Reply to a getL command. Transmits the list of 256 pairs that the peer initially holds, plus a list of files and timestamps for each file that it has downloaded.
  getF – Request for a file. Downloads files and stores them under the hidden folder.

P2P protocol version V2:
  getL – Request for the peer list.
  retL – Reply to a getL command. Includes the updated peer list and file metadata information.
  newL – Add a new peer to the peer list.

Figure 4: C&C Communication

1. The protocol starts with a getL command issued to the remote peer.
2. The new node then responds with a retL packet, which contains 16 peer IP addresses from the new node's peer list, along with a list of all the new node's files that the remote node can download. The remote node will add any IP addresses from the new node that are newer, based on their last-contacted timestamps, and may also decide to download files from the new node.

3. The new node then transmits a getL+ command to the remote (super) node on the normal UDP listening port for the current network (16464, 16465, 16470, or 16471). A getL+ command is the same as a getL command; the '+' simply marks the exchange between the remote (super) node and the new (normal) node.

4. The remote node responds with a retL+ command to the new node. A retL+ command is the same as a retL command, again marking the exchange between the super node and the new node. The retL+ command includes 16 peer IP addresses from the remote node, as well as the files that the remote node has. The new node may replace peers in its list with those received from the remote node if the received entries have a more recent last-contacted timestamp, and it may also decide to download files from the remote node.


5. If the new node receives the retL+ response from the remote node, then, since the new node originally sent the getL+ command to the normal port for this network, the new node knows that the remote node is not behind NAT and is likely reachable by other peers. The new node checks whether the remote node's IP address is in its peer list and adds it if not. The new node then selects 16 peers from its list at random.

6. Using the 16 peers selected randomly, new node will send a newL command to those 16 peers with the flag value of the newL command set to eight.

7. Upon receiving the newL command, Random Peer 1 (and any other peer that receives the newL command from the new node) checks whether the remote node's IP address is in its list; if it is not, it is added. Random Peer 1 then selects 16 peers from its own list at random.

8. Using the 16 peers selected randomly, Random Peer 1 will send a newL command to those 16 peers with the flag value of the newL command decremented by one from what it received from new node. This newL propagation will continue until the flag’s value reaches zero.
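To make the peer-list exchange above more concrete, the following much-simplified sketch mimics a getL/retL bootstrap: a node walks its initial list, asks each peer for its peer list, and keeps the entries with the freshest last-contacted timestamps. The reply size, list size, and the in-memory "network" are assumptions for illustration; the real protocol runs over UDP with its own message format and the newL/flag propagation described above.

# Simplified getL/retL bootstrap: ask known peers for their peer lists and
# keep the most recently contacted entries. Illustrative only.
import time

REPLY_SIZE = 16          # a retL-style reply carries 16 (ip, timestamp) pairs
LIST_SIZE = 256          # size of the node's own peer list (the "@" file)

def retL(peer_db, peer_ip):
    """Stand-in for sending getL to peer_ip and receiving its retL reply."""
    entries = peer_db.get(peer_ip, [])
    return sorted(entries, key=lambda e: e[1], reverse=True)[:REPLY_SIZE]

def bootstrap(initial_list, peer_db):
    known = dict(initial_list)                    # ip -> last-contacted timestamp
    for ip in list(known):
        for other_ip, ts in retL(peer_db, ip):    # getL -> retL exchange
            if ts > known.get(other_ip, 0):       # keep the newer timestamp
                known[other_ip] = ts
    best = sorted(known.items(), key=lambda e: e[1], reverse=True)[:LIST_SIZE]
    return dict(best)

# Tiny fake network: two reachable peers, each knowing a few others.
now = time.time()
peer_db = {
    "198.51.100.1": [("198.51.100.9", now - 60), ("198.51.100.3", now - 10)],
    "198.51.100.2": [("198.51.100.3", now - 5), ("198.51.100.7", now - 300)],
}
initial = [("198.51.100.1", now - 1000), ("198.51.100.2", now - 900)]

peers = bootstrap(initial, peer_db)
print(len(peers), "peers known after bootstrap")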

D. Stage four: Attack execution

Once an infected computer has connected to a live peer in the network, it will download monetization or plugin files that carry out the payload functionality of ZeroAccess. These plugin files can be updated at any time, and new files can be seeded into the botnet by the attackers. Each file is verified with a 1,024-bit RSA key to prevent a third party from seeding the botnet with its own files, and each botnet has its own set of plug-ins [19] (an illustrative sketch of such a signature check appears after the list below). The following monetizations, or attacks, are downloaded by the ZeroAccess botnet:

6) Pay per install: Besides the conventional attack vectors, ZeroAccess has also been sold as a service on various underground hacker forums. Originally, ZeroAccess was being sold for US$60,000 for the basic package and up to US$120,000 a year for a more featured version [20]. Customers would be allowed to install their own payload modules on the infected systems.

7) Click fraud: This consists of deceiving users into clicking on online ads or visiting certain websites, and thus either increasing third-party website revenues or draining advertisers' budgets [31]. The use of botnets makes it possible to simulate the behaviour of millions of legitimate users, and is thus ideal for this kind of attack [32]. Microsoft said that the botnet had been costing advertisers on Bing, Google, and Yahoo an estimated $2.7 million monthly [33].

8) Bitcoin mining: Bitcoin mining is based on performing mathematical operations on computing hardware. This activity has a direct value to the attacker and a cost to the innocent victims.
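As noted under Stage four, each plugin file is verified against an RSA key so that third parties cannot seed the botnet with their own files. The sketch below shows the general shape of such a check using the cryptography package; the padding scheme, hash, and file contents are my assumptions for illustration, not ZeroAccess's actual signing format.

# Illustration of verifying a downloaded plugin against a hard-coded public
# key, so only files signed by the key holder are accepted. The parameters
# (PKCS#1 v1.5 padding, SHA-256) are assumptions, not ZeroAccess's format.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Botmaster side: sign the plugin with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
plugin = b"click-fraud module v2 ..."
signature = private_key.sign(plugin, padding.PKCS1v15(), hashes.SHA256())

# Bot side: only the public key is embedded in the binary.
public_key = private_key.public_key()

def accept(data, sig):
    try:
        public_key.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(accept(plugin, signature))            # True: genuine plugin
print(accept(b"rogue payload", signature))  # False: rejected, cannot be seeded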

IV. RESEARCH CHALLENGES

E. Detection and mitigation

ZeroAccess randomly selects one of the computer's start-up drivers and replaces it with a completely different malware driver, setting its length to that of the original driver. The operating system loads the malware driver instead of the original during system startup. The malware driver then loads the original driver from its hidden store and reloads the I/O database to exchange places with the original driver, so that the original driver works normally. Once active, the driver intercepts disk operations at the lowest layers of the storage stack to present a view of the replaced driver that appears normal and cannot be detected by scanning the file. The driver registers a shutdown handler, which repairs the malware components on disk; even if the malware's images and registry entries are removed, they are restored again during shutdown. This makes detection and cleanup difficult.

F. Server takedown

ZeroAccess uses a P2P C&C communication architecture that gives the botnet a high degree of availability and redundancy. It does not use any central C&C server, which poses a major challenge for anybody attempting to sinkhole the botnet.

G. Disruption

The modified P2P functionality of ZeroAccess V2 makes its P2P network more flexible and robust against outside manipulation. ZeroAccess V2 (2012) uses the newL command to share super-node IP addresses directly amongst its peers. When a peer receives a newL command, it adds the IP address embedded in the command to its internal peer list, and it also forwards the newL command to other peers it knows about, magnifying the message's effect. By crafting a newL command and sending it to a ZeroAccess peer, it might be possible to introduce a rogue IP address into an infected peer's internal list and have that rogue newL command distributed to other ZeroAccess peers. However, the modified P2P protocol of ZeroAccess V2 (2013) removed the newL command and allows the botnet to filter out rogue IP addresses. Therefore, it is difficult to disrupt the ZeroAccess botnet.

CONCLUSIONS

ZeroAccess is one of the most widespread threats on the internet. The total number of infected systems is in the tens of millions, a large share of them active infections. ZeroAccess is a peer-to-peer botnet that affects Microsoft Windows operating systems. It works as a platform and is used to download other malware onto an infected machine from the botnet. ZeroAccess is mostly involved in bitcoin mining and click fraud, while remaining hidden on a system using rootkit techniques. It uses a UDP carrier protocol and ports hardcoded into the bot binaries. Currently its C&C communication channel uses only two commands, getL and newL. We have also outlined the research challenges posed by the ZeroAccess botnet in this survey.

Sent by : Ričardas S.


Escape From the Data Center: The Promise of Peer-to-Peer Cloud Computing

Today, cloud computing takes place in giant server farms owned by the likes of Amazon, Google, or Microsoft—but it doesn't have to.

Illustration: Rob Wilson

Not long ago, any start-up hoping to create the next big thing on the Internet had to invest sizable amounts of money in computing hardware, network connectivity, real estate to house the equipment, and technical personnel to keep everything working 24/7. The inevitable delays in getting all this funded, designed, and set up could easily erase any competitive edge the company might have had at the outset.

Today, the same start-up could have its product up and running in the cloud in a matter of days, if not hours, with zero up-front investment in servers and similar gear. And the company wouldn’t have to pay for any more computing oomph than it needs at any given time, because most cloud-service providers allot computing resources dynamically according to actual demand.

With the computing infrastructure out of sight and out of mind, a start-up can concentrate its attention on launching and improving its product. This lowers the barriers to entry, letting anyone with an Internet connection and a credit card tap the same world-class computing resources as a major corporation. Many of the most popular and successful Internet services today, including Netflix, Instagram, Vine, Foursquare, and Dropbox, make use of commercial clouds.


These clouds might seem a bit ethereal to their end users, but they in fact require some very down-to-earth facilities. Their stadium-size data centers are immensely costly to construct, and, not surprisingly, most are run by giant corporations like Amazon, Google, and Microsoft. Each offers a variety of service models, depending on exactly how the customer interacts with its cloud-computing environment.

The lowest-level model is known as infrastructure as a service, or IaaS, which outfits each customer with one or more virtual machines running on the cloud provider’s physical equipment. One actual computer might, for example, simulate five different virtual computers, each leased to a different customer. In addition to leasing such virtual machines, an IaaS provider may include a choice of operating systems to run on them. Notable examples of such IaaS clouds include Google’s Compute Engine and Amazon’s Elastic Compute Cloud.

At a higher level of abstraction are platform-as-a-service, or PaaS, clouds. These include an environment for developing the online applications that are to run on the provider’s equipment. Customers don’t have to manage virtual machines. They just create their applications using various software libraries, application-programming interfaces, and other software tools such as databases and middleware, and then one or more virtual computers are spun up automatically as needed to run all of this. Examples of PaaS clouds are Amazon’s Elastic Beanstalk, Google’s AppEngine, Microsoft’s Azure, and SalesForce’s Force.com.

Scattering the Clouds

With the right software, geographically distributed hardware can provide a unified cloud-computing resource

Some cloud-computing providers put all their hardware eggs in one data-center basket [blue]. Others employ multiple data centers networked together [orange]. The logical extension is a peer-to-peer cloud made of individual computers [yellow].


The computers in some P2P networks resemble gossiping office workers. Gossip-based protocols allow information to flow reliably, even if some computers leave the system and break previously established links [orange lines].

All Illustrations: Rob Wilson

Gossip-based protocols are used to maintain an unstructured peer-to-peer network of individual computers, some of which do the work of one customer [orange] while other combinations serve different customers [blue and yellow].

At a still-higher level are software-as-a-service, or SaaS, clouds. Their customers know nothing of the underlying infrastructure or computing platform: They simply use some Web-based application or suite of applications to handle the task at hand. This is probably the model of cloud computing that most people are familiar with. It includes services like Apple iWork, Gmail, Google Docs, and Microsoft Office 365.

But is this the only way cloud computing can work? At the University of Bologna, in Italy, we’ve been investigating a very different strategy to do cloud computing without those giant centralized facilities at all—using peer-to-peer technologies of the kind sometimes associated with shady file-sharing operations. Their use here, though, could help democratize cloud computing. Our prototype software is still at a very early stage, but its development and similar successes by researchers elsewhere show real promise.

Placing the physical infrastructure for a cloud-computing operation where it’s usually found, in a single massive data center, has definite advantages. Construction, equipment procurement, installation, and maintenance are all simplified, and economies of scale reduce costs. On the other hand, a single large data center consumes an enormous amount of electrical power, often comparable to what you’d need to run a small town, and dissipating the waste heat it generates is usually a big headache.

Perhaps the most serious shortcoming, though, is that a centralized cloud-computing facility can end up being a single point of failure, no matter how cleverly it is designed. Redundant power supplies, backup power generators, and replicated network connections help, but they can’t protect absolutely against catastrophic events such as fires, hurricanes, earthquakes, and floods.

Another drawback of centralized clouds is that their geographic location, which may be best for the owners, may not be best for the customers. This is the case, for example, when governments place restrictions on sensitive data crossing national borders. A data center located in one country may then be off-limits to customers in some other countries.

Cloud-service providers have increasingly addressed these concerns by using not just one but several far-flung data centers connected through fast private networks. Doing so not only protects against local catastrophes, it also provides customers with more options for locating their data.

What would happen if you took this trend in geographically distributing cloud infrastructure to its logical conclusion? You’d end up with clouds made up of millions of individual computers distributed across the globe and connected through the Internet. We would call this a peer-to-peer (P2P) cloud because it shares many of the characteristics of various P2P systems developed for file sharing, content distribution, and the payment networks of virtual cryptocurrency schemes such as Bitcoin.

In principle, a P2P cloud could be built using the ordinary computing, storage, and communication equipment found now in people's homes, with essentially zero initial investment. Broadband modems, routers, set-top boxes, game consoles, and laptop and desktop PCs could all contribute. The challenge is to turn this motley collection into a coherent and usable cloud infrastructure and offer its services to customers. You also have to ensure that the salient features of clouds—on-demand resource provisioning and the metering of service—are maintained.

This would surely be tough to do, but think of the advantages. First, there would be no single entity that owns or controls it. As with most other P2P applications, a P2P cloud could be created and operated as a grassroots effort, without requiring the permission or consent of any authority. People would choose to participate in a P2P cloud by installing the appropriate client software on their local machines, and the value of the resulting cloud infrastructure would be commensurate with the number of individuals who are contributing to it.

A second advantage comes from the fact that a P2P cloud's components are small, individually consume little power, and are well distributed. This drastically reduces concerns about local catastrophes. It also removes the usual worries about heat dissipation. Although such P2P clouds couldn't provide the quality-of-service guarantees of a Google or an Amazon, for many applications that wouldn't much matter.

The idea of creating a huge computing resource from a large number of loosely coupled machines is not new. This has long been done, for example, with volunteer computing, where people execute applications on their personal computers on behalf of others. Volunteer-computing systems usually require you to install certain software, which then runs when your computer has no higher-priority tasks to do. That application then uses your spare computing cycles to fetch and process data from some central server and upload the results back to the same server when it’s done.

This strategy works well for many scientific problems, for which a central controller can farm out pieces of the desired computation to workers that operate independently and in parallel. If one fails to return a result within some reasonable period, no problem: The same task is simply handed out to some other volunteer worker.
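A minimal sketch of that farm-out-and-reissue pattern, with invented names and timings: the coordinator hands out work units, and any unit whose result misses its deadline goes back into the queue for another volunteer.

# Toy volunteer-computing coordinator: hand out work units and re-issue any
# unit whose result misses its deadline. Names and timings are illustrative.
import time
from collections import deque

TIMEOUT = 0.5                    # seconds a volunteer gets before re-issue

pending = deque(range(5))        # work units still to be assigned
in_flight = {}                   # unit -> deadline
results = {}                     # unit -> result

def assign(unit):
    in_flight[unit] = time.time() + TIMEOUT

def report(unit, value):
    if unit in in_flight:
        del in_flight[unit]
        results[unit] = value

def reissue_expired():
    now = time.time()
    for unit, deadline in list(in_flight.items()):
        if now > deadline:
            del in_flight[unit]
            pending.append(unit)          # hand it to some other volunteer

# Simulate: assign everything, two volunteers go silent, the rest report back.
while pending:
    assign(pending.popleft())
for unit in (0, 2, 4):
    report(unit, unit * unit)

time.sleep(TIMEOUT + 0.1)
reissue_expired()
print("completed:", results, "| re-queued:", list(pending))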

The Berkeley Open Infrastructure for Network Computing (BOINC) is a popular volunteer-computing system that can load and run different client programs. Examples of projects found on the BOINC platform include SETI@home (to analyze radio signals from space in the search for extraterrestrial transmissions), Rosetta@home (to calculate how proteins fold), and Einstein@home (to detect gravitational waves).

Another type of volunteer computing is known as a desktop grid. In conventional grid-computing projects, multiple high-performance computers in different locations are harnessed to work together on a single problem. Desktop grids allow people to contribute the processing power of their personal computers to such efforts. BOINC supports desktop grids, as does EDGeS (Enabling Desktop Grids for e-Science), a project of several European institutions that is based on BOINC, and also XtremWeb, a project of the French National Center for Scientific Research, the French Institute for Research in Computer Science and Automation (INRIA), and the University of Paris XI.

The success of the many volunteer-computing projects demonstrates the extreme scale that a P2P cloud could in principle attain, both in terms of the number of different computers involved and their geographic distribution. Using such a collection would, of course, mean that equipment failures will be common. And besides, the people who contribute their computers to these clouds could turn them on and off at any time, something the people who run P2P networks refer to as "churn."

So the first task for any P2P cloud is to keep track of all functioning and online devices enrolled in the system and to dynamically partition these resources among customers. And you have to do that in a completely decentralized manner and despite churn.

To deal with such challenges, many P2P systems make use of what are called gossip-based protocols. Gossiping, in this context, is when the computers linked together in a large, unstructured network exchange information with only a small number of neighbors. Gossip-based protocols have been extensively studied and have been used to model, for example, the spread of malware in a computer network, the diffusion of information in a social network, even the synchronization of light pulses in a swarm of fireflies. Gossip-based protocols are appealing for P2P clouds because they are simple to implement and yet enable complex global computations to take place efficiently even in the face of churn.
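As a concrete, if highly simplified, illustration of gossiping, the sketch below spreads a piece of information over a random peer network by having every informed node tell one randomly chosen neighbor per round; coverage grows quickly even when some nodes have churned away. This is a generic textbook-style example, not the actual P2PCS protocols, and the network size, degree, and churn rate are arbitrary.

# Push-gossip over a random peer network: each informed node tells one random
# neighbor per round. Illustrative of the mechanism, not the P2PCS code.
import random

random.seed(1)
N = 200
neighbors = {i: random.sample([j for j in range(N) if j != i], 8) for i in range(N)}

informed = {0}                 # node 0 learns a new piece of state
alive = set(range(N)) - set(random.sample(range(1, N), 20))   # 10% churn

for rnd in range(1, 11):
    for node in list(informed):
        if node not in alive:
            continue
        target = random.choice(neighbors[node])
        if target in alive:
            informed.add(target)
    print(f"round {rnd}: {len(informed & alive)}/{len(alive)} nodes informed")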

So when we built our prototype system at the University of Bologna, which we call the Peer-to-Peer Cloud System (P2PCS), we included several decentralized gossip-based protocols. They are used for figuring out what equipment is up and connected, monitoring the overall state of the cloud, partitioning the resources available into multiple subclouds, dynamically allocating resources, and supporting complex queries over the set of connected computers (for example, to identify the most reliable ones). Creating those capabilities was an important first step. But there are still many other requirements for a practical system, only some of which we have attempted to tackle.

If all the equipment is owned by a single organization, building a P2P cloud with it should be straightforward, even if the bits and pieces are located in different people's homes, as might be the case with broadband modems or routers operated by an Internet service provider or set-top boxes operated by a cable-television company. The computing devices will all be pretty similar, if not identical, making it easier to configure them into a single computing environment. And because the equipment's one owner presumably installs the P2P-cloud software, you can be reasonably confident that the data and computations will be handled properly and according to the organization's security policies.

This is not true, however, if the P2P cloud is made up of a diverse collection of different people’s computers or game consoles or whatever. The people using such a cloud must trust that none of the many strangers operating it will do something malicious. And the providers of equipment must trust that the users won’t hog computer time.

These are formidable problems, which so far do not have general solutions. If you just want to store data in a P2P cloud, though, things get easier: The system merely has to break up the data, encrypt it, and store it in many places.
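
A storage layer along those lines might look roughly like the following sketch. The chunking and placement logic is generic; the hash-based XOR keystream merely stands in for a real cipher, and the peer identifiers are hypothetical.

    import hashlib, os, random

    def keystream(key, nonce, length):
        # Derive a pseudorandom byte stream from the key and a per-chunk nonce.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt_chunk(key, chunk):
        nonce = os.urandom(16)
        ks = keystream(key, nonce, len(chunk))
        return nonce + bytes(a ^ b for a, b in zip(chunk, ks))

    def store(data, peers, key, chunk_size=64 * 1024, copies=3):
        placement = {}
        for i in range(0, len(data), chunk_size):
            blob = encrypt_chunk(key, data[i:i + chunk_size])
            placement[i // chunk_size] = (random.sample(peers, copies), blob)
        return placement    # in a real system each blob would be sent to those peers

    peers = ["peer-%d" % n for n in range(20)]                      # hypothetical peer ids
    print(len(store(os.urandom(300_000), peers, os.urandom(32))))   # 5 encrypted chunks placed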

Unfortunately, there is as yet no efficient way to make every computation running on untrusted hardware tamper-proof. For some specific problems (such as mining bitcoins), verifying the results is significantly faster than computing them, which allows the client to check and discard faked results. For those problems that do not have an efficient verification procedure, the best way to detect tampering is to compare results for the same calculation coming from independent machines.
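
Both ideas fit in a few lines. In the sketch below (our own illustration, not part of any existing system), the proof-of-work-style task is far cheaper to verify than to solve, while simple majority voting covers tasks that lack a fast verifier.

    import hashlib
    from collections import Counter

    def verify_nonce(data, nonce, difficulty=4):   # cheap: one hash and a prefix check
        h = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        return h.startswith("0" * difficulty)

    def find_nonce(data, difficulty=4):            # expensive: brute-force search
        nonce = 0
        while not verify_nonce(data, nonce, difficulty):
            nonce += 1
        return nonce

    def majority_result(replies):                  # for tasks with no fast verifier
        value, votes = Counter(replies).most_common(1)[0]
        return value if votes > len(replies) // 2 else None

    nonce = find_nonce(b"job-42")
    print(verify_nonce(b"job-42", nonce))          # True, checked with a single hash
    print(majority_result([13, 13, 99]))           # 13; the lone disagreeing peer is ignored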

Another issue, common to all P2P systems, is that there must be appropriate incentives to get enough people to cooperate and to discourage free riding. Otherwise, the system is bound to degenerate completely. Coming up with incentives would be easy enough for a company that uses its own devices to create a cloud. That company might have a monetary incentive for creating such a cloud, and the people housing the equipment might have an incentive to keep connected to it because they get better service that way.

Volunteer-computing systems don’t enjoy the luxury of having such incentives in place. But they typically have such laudable objectives that getting people to contribute their free CPU cycles is not a problem. Who would not want to help make history when SETI@home, which has been around since 1999, detects the first extraterrestrial radio transmission? For volunteer P2P systems of other kinds, though, the incentives have to be carefully worked out.

Developments are admittedly at an early stage, but several research projects and a few commercial systems that have hit the market suggest that P2P clouds can indeed be built and used productively, at least for certain purposes.

Our work on the P2PCS, for example, demonstrated that it is possible to use gossip-based protocols to handle the dynamic allocation of resources and the basic monitoring of the system. Other researchers—at the University of Messina, in Italy (Cloud@Home), at INRIA (Clouds@Home), and associated with the European Union’s Nanodatacenters project—have been exploring similar concepts.

The Nanodatacenters project is particularly interesting. The researchers involved worked out how to form a managed P2P network from a far-flung constellation of special home gateways controlled by Internet service providers. Because these “nanocenters” are near end users, the network can deliver data much faster than a few large data centers could.

Some commercial distributed-storage solutions are also based on P2P computing principles. An early version of Wuala’s cloud backup, for example, allowed users to trade space on their hard disks. Sher.ly offers a similar service but is oriented toward the business sector: It allows companies to use their own machines and infrastructure to create a secure, always-on private cloud to share files internally. There are also a number of open-source P2P systems for distributed file storage (such as OceanStore, developed at the University of California, Berkeley) or computations (such as OurGrid, developed mostly at the Federal University of Campina Grande, in Brazil).

These pioneering experiments are still few and far between compared with traditional cloud environments. But if they succeed, and if researchers can find ways to deal with the hurdles we’ve described here, you could easily find yourself making use of a P2P cloud in your daily routine. You might not even know you’re doing it.

Sent by: Domas S.

Ethereal - unearthly, delicate and light

Procurement - the buying of goods or services

Shortcoming - a fault or deficiency

Redundant - unnecessary, superfluous

Drawback - a disadvantage

Cryptocurrency - a digital currency secured by cryptography

Broadband - a high-capacity, high-speed Internet connection

Volunteer computing - donating one's spare computing resources to others

Partition - a dividing structure; to divide into parts

Oculus Brings the Virtual Closer to Reality

LOS ANGELES — Virtual reality is virtually here — although its first incarnation will come with short battery life, images that do not quite track eye movements and a tendency to induce motion sickness.

In the next few months, Samsung intends to release the Gear VR, a headset that combines software from the virtual-reality pioneer Oculus VR and Samsung’s coming Galaxy Note 4 smartphone to create a portable virtual reality experience.

And within the next year or so, personal computer users will probably be able to buy a more powerful headset from Oculus itself that will allow them to plunge more deeply into three-dimensional virtual worlds, from outer space to the Egyptian pyramids.

Oculus showed off the latest versions of both devices over the weekend to developers in Los Angeles. Two things were clear: Serious technical challenges remain, but Oculus is closer than any other company to creating a product consumers can use to explore computer-generated environments that seem so real that you almost forget they are fake.

Trying out Crescent Bay, the new prototype of the Oculus PC headset announced on Saturday, I was struck by how menacing it felt to be charged by a full-size Tyrannosaurus rex and how my stomach flipped as I peered over the edge of a city skyscraper. Even the Samsung headset, which shows images in lower fidelity because of cellphones’ limitations, transported me briefly to faraway places, like Ngorongoro Crater in Tanzania.

“When it’s good enough, suddenly the back of your brain believes you’re there,” Brendan Iribe, chief executive of Oculus, said in his keynote speech to the hundreds of gathered developers.

Virtual reality has certainly become real enough to attract the attention of the big boys. Facebook bought Oculus in a $2 billion deal that closed in July. Samsung is putting its reputation and technical prowess behind the Gear VR, which will cost more than $1,000 for the phone and headset before carrier subsidies.

Sony, maker of the PlayStation game consoles, is working on a VR system, Project Morpheus, which it demonstrated at the Tokyo Game Show last week. Nvidia, a leading maker of graphics chips for personal computers, announced last week that its newest products would include technology to improve the speed and quality of virtual reality.

But virtual reality is not much of a business yet. It is still wide open to hackers, hobbyists and tiny companies that raise money through Kickstarter projects, not pitches to venture capitalists.

Their inspiration is Palmer Luckey, Oculus’s founder, who taped and glued together the first prototype in 2011 in his garage, raised the first money for his Rift headset on Kickstarter, sold Oculus to Facebook and turned 22 on Friday.

“For a lot of people, this is their lifelong dream,” Mr. Luckey said in an interview. “They’ve been watching and reading science fiction novels their whole lives, and this is the technology to make some of those fantasies real. This isn’t the most profitable thing they could be working on, but the most exciting thing they could be working on.”

Karl Krantz, founder and organizer of Silicon Valley Virtual Reality, which hosts monthly gatherings of enthusiasts, compared the industry to the early days of personal computers, when tinkerers like the Apple co-founders Steven P. Jobs and Steve Wozniak met with like-minded souls in the Homebrew Computer Club. “Computers had existed a long time, but they were at big institutions and were not affordable to people who were just passionate about it,” he said. “V.R. has hit that inflection point where enthusiasts can hack it.”

That enthusiasm was palpable at the Oculus conference. Developers set up laptops and headsets at the Loews Hollywood Hotel to show off buggy demos of their games and software. They traded tips on solving common problems like jittery images and making sure that what was on screen tracked a person’s eyes.

Mike McArdle, a technology tutor and trainer in North Carolina, was one of the amateur developers soaking it all in. He has been working on a volcano simulation that he hopes to use to teach elementary school students about science and math. “It’s never been easier to build V.R. games than it is now,” he said.

Despite the excitement, there is acknowledgment of the many problems yet to be solved.

John Carmack, a famed game developer who is now Oculus’s chief technology officer, went on stage and rattled off problems with the Samsung Gear VR, his pet project for the last year. It doesn’t track eye movements, which can cause the image to lag and nauseate the user. The display flickers. It’s built on Google’s Android smartphone platform, which slows graphics.

And oh, yeah, an intense virtual reality video game can draw so much power that the phone will overheat in 10 minutes.

Oculus is also trying to address several other problems. “Inevitably, you want to see your hands, and they’re not there now,” Mr. Carmack said. Tracking hand movements and bringing them into the virtual picture could help people feel less disoriented and would provide a more intuitive way to control the interface than the multibutton game pads that developers use now.

Atman Binstock, chief architect of Oculus, quite succinctly summed up the biggest challenges facing Oculus and virtual reality in general: “actually delivering compelling experiences and not making people sick.”

Sent by: Mindaugas B.

Induce - to cause something to happen

Rattle something off - say, perform, or produce something quickly and effortlessly: he rattled off some instructions

Palpable - Able to be touched or felt

Nauseate - to cause someone to feel as if they are going to vomit

Lag - to move or make progress so slowly that you are behind other people or things

Jittery – shaking and slightly uncontrolled

IBM: Commercial Nanotube Transistors Are Coming Soon

Chips made with nanotube transistors, which could be five times faster, should be ready around 2020, says IBM.

Chip test: Each chip on this wafer has 10,000 nanotube transistors on it. IBM hopes to be able to put billions of the devices on a single chip soon after 2020.

For more than a decade, engineers have been fretting that they are running out of tricks for continuing to shrink silicon transistors. Intel’s latest chips have transistors with features as small as 14 nanometers, but it is unclear how the industry can keep scaling down silicon transistors much further or what might replace them.

A project at IBM is now aiming to have transistors built using carbon nanotubes ready to take over from silicon transistors soon after 2020. According to the semiconductor industry’s roadmap, transistors at that point must have features as small as five nanometers to keep up with the continuous miniaturization of computer chips. “That’s where silicon scaling runs out of steam, and there really is nothing else,” says Wilfried Haensch, who leads the company’s nanotube project at the company’s T.J. Watson research center in Yorktown Heights, New York. Nanotubes are the only technology that looks capable of keeping the advance of computer power from slowing down, by offering a practical way to make both smaller and faster transistors, he says.

In 1998, researchers at IBM made one of the first working carbon nanotube transistors. And now, after more than a decade of research, IBM is the first major company to commit to getting the technology ready for commercialization.

“We previously worked on it as a sandbox type of thing,” says James Hannon, head of IBM’s molecular assemblies and devices group. Hannon led IBM’s nanotube work before Haensch, who took over in 2011 after a career working on manufacturing conventional chips. “Wilfried joined with a silicon technology background [and] our focus really shifted.”

Haensch’s team chose the target for commercialization based on the timetable of technical improvements the chip industry has mapped out to keep alive Moore’s Law, a prediction originating in 1965 that the number of transistors that could be crammed into a circuit would double every two years. Generations of chip-making technology are known by the size of the smallest structure they can write into a chip. The current best is 14 nanometers, and by 2020, in order to keep up with Moore’s Law, the industry will need to be down to five nanometers. This is the point at which IBM hopes nanotubes can step in. The most recent report from the microchip industry group the ITRS says the so-called five-nanometer “node” is due in 2019.

IBM has recently made chips with 10,000 nanotube transistors (see “How to Build a Nanotube Computer”). Now it is working on a transistor design that could be built on the silicon wafers used in the industry today with minimal changes to existing design and manufacturing methods. The design was chosen in part based on simulations that evaluated the performance of a chip with billions of transistors. Those simulations suggest that the design chosen should allow a microprocessor to be five times as fast as a silicon one using the same amount of power.

IBM’s chosen design uses six nanotubes lined up in parallel to make a single transistor. Each nanotube is 1.4 nanometers wide, about 30 nanometers long, and spaced roughly eight nanometers apart from its neighbors. Both ends of the six tubes are embedded into electrodes that supply current, leaving around 10 nanometers of their lengths exposed in the middle. A third electrode runs perpendicularly underneath this portion of the tubes and switches the transistor on and off to represent digital 1s and 0s.

The IBM team has tested nanotube transistors with that design, but so far it hasn’t found a way to position the nanotubes closely enough together, because existing chip technology can’t work at that scale. The favored solution is to chemically label the substrate and nanotubes with compounds that would cause them to self-assemble into position. Those compounds could then be stripped away, leaving the nanotubes arranged correctly and ready to have electrodes and other circuitry added to finish a chip.

Haensch’s team buys nanotubes in bulk from industrial suppliers and filters out the tubes with the right properties for transistors using a modified version of a machine used to filter molecules such as proteins in the pharmaceutical industry. It uses electric charge to separate semiconducting nanotubes useful for transistors from those that conduct electricity like metals and can’t be used for transistors.

Last year researchers at Stanford created the first simple computer built using only nanotube transistors (see “The First Nanotube Computer”). But those components were bulky and slow compared to silicon transistors, says Subhasish Mitra, a professor who worked on the project. “We now know that you can build something useful with carbon nanotubes,” he says. “The question is, how do you get the performance that you need?”

Although IBM hasn’t worked out how to make nanotube transistors small enough for mass production, Mitra says it has taken concrete steps and has devised processes that should be amenable to the semiconductor industry.

However, for now IBM’s nanotube effort remains within its research labs, not its semiconductor business unit. And the researchers are open about the fact that success is not guaranteed. In particular, if the nanotube transistors are not ready soon after 2020 when the industry needs them, the window of opportunity might be closed, says IBM’s Hannon.

If nanotubes don’t make it, there’s little else that shows much potential to take over from silicon transistors in that time frame. Devices that manipulate the spin of individual electrons are the closest possible candidate (see “Silicon-Based Spintronics”), but they’re less mature, and unlike carbon nanotubes, they don’t behave similarly to silicon transistors, says Hannon.

Sent by: Vakaris J.

Fret - to be nervous or worried

Cram - to force a lot of things into a small space

Amenable – willing to accept or be influenced by a suggestion

The Future of IT

By Samuel Greengard

The history of IT is littered with innovative and disruptive technologies. In the 1960s, mainframe computers revolutionized the way businesses managed information. By the 1980s, word processors and PCs automated office tasks, and spreadsheets yielded new insights into better business practices. Starting in the 1990s, the Internet and mobility unleashed a torrent of change that put the IT department at the center of the enterprise. Powerful and efficient enterprise systems were the order of the day.

But these developments pale in comparison to the tsunami of change washing over IT organizations today. The introduction of iPads and iPhones, social media, big data, and cloud computing has unleashed profound changes that far exceed the impact of each of these devices or systems alone. The combined impact of these technologies is redefining the way organizations and people interact. It's also revolutionizing how businesses harness data, information and knowledge and put them into play. "The IT department is undergoing a remarkable transformation," says David Nichols, Americas CIO Services Leader for consulting firm Ernst & Young.

Make no mistake, IT organizations must adapt and evolve like never before. Over the next few years, the role of IT will change further as the consumerization of IT marches forward and cloud computing provides more powerful ways to manage everything from infrastructure to enterprise applications. "The interrelationship between technologies is creating unprecedented waves of disruption," observes Bill Briggs, global lead at Deloitte Consulting. "It's forcing organizations to rethink everything and embrace a post-digital world filled with new risks and opportunities."

What will the IT organization of the future look like? What can CIOs and other senior IT executives do to prepare for fundamentally different roles? And what is required to reach the promised land of a more strategic IT department? There are no easy answers, but one thing is perfectly clear: Feel-good slogans and talk about innovation, agility and flexibility won't get the job done. CIOs must have a deeper and more intrinsic understanding of how to navigate this brave new world. It's a place where information technology touches everything and everybody all the time.

Reinventing Roles

Over the last half-century, computers have become infinitely more powerful, software has advanced and mobile devices have put information in front of customers and employees and provided them with powerful tools. The enormous popularity of iPhones, iPads and other mobile hardware and software has yanked control of IT away from enterprises and put consumers firmly in charge. The consumerization of IT and the Bring Your Own Device (BYOD) movements have wrested control of IT decision-making away from CIOs and enterprises.

The ramifications are profound. "Today, non-IT people, including business executives and consumers, are either making decisions or involved in the decision-making process," says Didier Bonnet, senior vice president for Capgemini Consulting. What's more, "many of the IT systems of the past are too expensive and too cumbersome for today's business environment. IT is being forced to adopt entirely different models and play a different role as the fundamental model for IT changes."

What this means for CIOs is that strategies, approaches and technologies that worked well in the past must often be tossed into the digital dustbin. Today's enterprise requires different leadership skills, openness to new ideas and innovation, and an ability to connect IT dots in fundamentally different ways. It also requires new governance methods. The task is no longer to simply align IT with the business, it's to drive integration and data sharing throughout the enterprise.

At Bank of America, Global Technologies & Operations executive Cathy Bessant is working to build the IT department of the future. Customers and employees, she says, are demanding very different tools and functionality than only a few years ago. Mobility is at the center of this trend. "People expect to accomplish tasks on their mobile devices and they respond to value-added differentiation," she explains. Moreover, "the price for not delivering a good user experience is extraordinarily high. The rise of social media has tilted the power structure distinctly toward the consumer."

Within this environment, agility is the sun around which all planets revolve. IT organizations must eliminate barriers to scale and find ways to build an infrastructure that can adapt and evolve rapidly, Bessant says. In addition, IT must find ways to connect legacy systems--including mainframes, storage arrays and databases--into an infrastructure stack that can provide the required elasticity for tablet and smartphone apps, social media analytics, location-based services, and an array of other post-PC tools and features. "IT must lay a foundation that allows brilliant and creative people to introduce innovative ideas and solutions," says Bessant.

Consequently, a growing number of organizations are looking to migrate to new IT skill sets. Some outsource infrastructure and enterprise applications to a cloud or hosted services provider that specializes in IT as well as associated security functions. It's then up to IT staff to work with business executives in order to handle strategic mapping and streamline systems and channels. At the same time, many organizations are hiring developers that specialize in app development as well as the fundamentals of social media, crowdsourcing and other emerging tools.

But the demands on IT don't stop there. Bessant says IT must be literate in advanced data-mining techniques and rapid cycle testing. "It's critical to find ways to make things work without introducing the negatives when they don't work," she explains. "It's not good enough to say that innovation is important, it's essential to take a focused approach that actually introduces innovation." At some companies, this means introducing small intrapreneurial teams that can assemble and disassemble in days or weeks. At others, it means crowdsourcing solutions and using reverse mentoring and internal social media tools to share expertise and ideas. All organizations must use metrics and analytics to "relentlessly measure and document results so they can be applied in a meaningful way," she says.

Ernst & Young's Nichols says the role of the CIO is also shifting, particularly as business executives pull the trigger on decisions about technology and purchase their own cloud services and infrastructure. By 2014, CIOs will have relinquished control of 25 percent of their organizations' IT spending, according to a 2011 Gartner report. Says Nichols, "We're beginning to see new reporting structures for the CIO. In some cases, they're unplugging from the CFO and CEO and interacting with a newly defined and elevated CTO role that provides a technology vision for the organization. CIOs must be careful that their role isn't reduced to maintenance and operations."

A New Order of IT

Forrester Research predicts the IT organization of 2020 will only vaguely resemble what's in place today. Powerful and easy-to-acquire tools and technologies will usher in an era of business self-sufficiency. Tech-savvy managers will increasingly control and provision their own services and solutions. IT departments, as a result, will typically be smaller, leaner and more strategic. Deloitte's Briggs notes that as organizations turn to the cloud and adopt infrastructure-as-a-service and software-as-a-service models, project management and portfolio management will become necessary skills for senior IT executives. Ultimately, he says, "CIOs must lead the charge and become the visionary for exploiting all the disruption."

It's a concept that resonates at Cars.com, an online automobile shopping site that attracts upwards of 20 million unique visitors per month. Cars.com enterprise architect James Houska sees a new future of IT unfolding before his eyes. "End users are demanding new features and services faster," he says. This has resulted in Cars.com ramping up from approximately 30 releases and updates per year to more than 300. "There is an extreme acceleration in the business and how we leverage information technology," says Houska. "There is a greater need for automation. We're being forced to become more agile and innovative in whatever we do."

This environment, Houska says, necessitates the use of "game-changing technology" such as the cloud and outside IT services. App development is now measured in days rather than in months. Application performance management and other tools that provide metrics have emerged as a necessity. For example, Cars.com has turned to service providers such as Compuware APM and Splunk to cull vast amounts of data and ensure that Web and mobile systems are performing at optimum levels. Houska believes the future of IT is rooted in the CIO and IT department serving as thought leaders for technology. The goal should be to connect data and services from different departments and systems into a unified and seamless enterprise IT strategy.

To be sure, those that cling to the command and control model of the past are destined to face severe turbulence. Today, success hinges on a lean, agile, flexible and intrapreneurial IT model that's inextricably linked to business needs. It also requires lots of experimentation, says Bonnet of Capgemini Consulting. In this upside-down post-PC world, risk must be viewed as a friend and change as a potential competitive advantage. "We have entered a new phase of enterprise IT that is less dependent on technical skills than on strategic vision," says Nichols. "The IT department of the future must be equipped to function in a real-time, fast-changing environment that drives the business like never before."

Sent by: Rinaldas A.

Innovative - introducing new ideas or methods

Yielded - produced or provided (a result)

Intrapreneurial - behaving like an entrepreneur while working inside a large organization

Cumbersome - large or complicated and therefore awkward to use

Yanked - pulled away suddenly and forcefully

Harness - to control something and put it to use

Consumerization - the spread of consumer devices and habits into the workplace

Crowdsourcing - obtaining ideas, content or services from a large group of people, typically over the Internet

Mapping - planning how the parts of a strategy or system relate to one another

Consequently - as a result