Top 10 datacentre stories of 2018
In this e-guide:
There is nothing like a datacentre outage to highlight just how
reliant the digital economy is on these facilities, with the 2018
news cycle dominated by tales of server rooms going awry and
causing mass disruption to end users across the globe.
Regardless, user appetites for cloud-hosted services show no
signs of waning, prompting the hyperscale cloud and internet
giants to double-down on their datacentre investments to
ensure they have enough capacity to cater to the needs and
requirements of their customers.
And the challenges all this growth poses to the datacentre
industry has come into sharp focus over the course of 2018, as
the hyperscalers grapple with location constraints, planning
permission issues and hardware security problems.
With all this in mind, here’s a look back over Computer
Weekly’s top 10 datacentre stories of 2018.
Caroline Donnelly, datacentre editor
Contents
Meltdown and Spectre: AWS, Google and Microsoft rush to patch cloud
chip flaws
Colocation and the hyperscalers: What the cloud giants want in a
datacentre partner
BA to sue CBRE over May Bank Holiday datacentre outage
Apple pulls plug on €850m Irish datacentre project after three-year planning
delay
Visa reveals 'rare' datacentre switch fault as root cause of June 2018
outage
Microsoft deploys underwater datacentre off the coast of Orkney
Dividing lines: EU bid to curb server energy use has the European
datacentre community split
Infinity SDC sells Here East Olympic Park datacentre to fund development
of Romford facility
NHS Wales IT outage: What went wrong with its datacentres?
The bitcoin boom: How colocation datacentres are cashing in on
cryptocurrency mining
Meltdown and Spectre: AWS, Google and Microsoft rush to patch cloud chip flaws
Caroline Donnelly, datacentre editor
As the cloud provider community mobilises to protect users from two long-standing processor-based security flaws, researchers suggest a rip and replace of their underlying CPU hardware may be required to eradicate the risk of exploitation.
According to an advisory notice issued by the Carnegie Mellon University
Software Engineering Institute, the flaws – dubbed Meltdown and Spectre –
need to be addressed by applying updates and replacing the affected CPU
hardware.
“The underlying vulnerability is primarily caused by CPU architecture design
choices. Fully removing the vulnerability requires replacing the vulnerable CPU
hardware,” the institute advised.
Both flaws could pave the way for hackers to steal data being processed on
devices and servers featuring the affected hardware through the use of
malicious programs, it is claimed.
Meltdown is thought to potentially affect every Intel processor made since
1995 that implements out-of-order execution, with the exception of Itanium and
Atom. At the time of writing, it is not thought to affect competing processors from
AMD and ARM.
The Spectre vulnerability, however, has been verified by researchers as
affecting chips made by Intel, AMD and ARM.
“While programs are not typically permitted to read data from other programs, a
malicious program can exploit Meltdown and Spectre to get hold of secrets
stored in the memory of other running programs,” claimed the researchers, who
uncovered the flaws, in a blog post.
These “secrets” could include login details saved in password managers or
browsers, personal photos, emails or instant messages, and business-critical
documents, the researchers added.
“Meltdown and Spectre work on personal computers, mobile devices and in the
cloud. Depending on the cloud provider’s infrastructure, it might be possible to
steal data from other customers,” they wrote.
The blog post went on to state that cloud providers which make use of Intel
CPUs and Xen-based para-virtualisation techniques are at risk, unless they
patch their systems.
“Furthermore, cloud providers without real hardware virtualisation, relying on
containers that share one kernel, such as Docker, LXC or OpenVZ, are
affected,” the researchers added.
In light of the cloud threat, Amazon Web Services (AWS), Google and Microsoft
have all moved to assure users of their respective cloud platforms that action is
being taken to mitigate the risks posed by Meltdown and Spectre.
As previously reported by Computer Weekly, details of the security flaws first
came to light in late 2017, on the back of work carried out independently by
several research teams and individuals, including Jann Horn from Google’s
Project Zero initiative.
Since receiving word from Project Zero about the vulnerabilities, Google claims
its engineers have been working closely to protect users of its G Suite of
productivity services and the Google Cloud Platform (GCP) from both threats.
“G Suite customers and users do not need to take any action to be protected
from the vulnerability,” the company said in a blog post. “GCP has already been
updated to prevent all known vulnerabilities. Google Cloud is architected in a
manner that enables us to update the environment while providing operational
continuity for our customers.”
AWS, meanwhile, released a statement saying all “but a small single-digit
percentage” of Amazon EC2 instances were protected, at present, from
exploitation.
“The remaining ones will be completed in the next several hours,” it said. “We will keep customers apprised of additional information with updates to our security bulletin.”
Similarly, Microsoft confirmed in a statement that it was actively developing and
testing a series of “mitigations” to the threats, and was in the process of
deploying fixes for its cloud customers.
“We have not received any information to indicate that these vulnerabilities have
been used to attack our customers,” Microsoft added.
Colocation and the hyperscalers: What the cloud giants want in a datacentre partner
Caroline Donnelly, datacentre editor
The colocation market is riding on the crest of a wave, fuelled by the growing
demand for fast, ready access to datacentre capacity from the hyperscale cloud
and internet giants.
In the rush to meet growing user appetites for locally hosted, high-performing
and low-latency cloud services, the likes of Amazon, Google, IBM and
Microsoft are opting out of building their own datacentres in favour of using
colocation facilities instead.
It is a trend that has been gaining momentum over successive quarters, according to European colocation market tracking data shared by real estate consultancy CBRE, with its most recent report recording another record half-year period for datacentre capacity take-up within the major colocation hubs of Frankfurt, London, Amsterdam and Paris (FLAP).
“To date, the cloud providers have zoned in on particular markets in Europe.
They have been very active in the core FLAP markets of Frankfurt, London,
Amsterdam and Paris, and more recently in hubs such as Geneva, Zurich and
Milan, and are now setting their sights on Madrid,” Mitul Patel, head of Europe, Middle East and Africa Datacentre Research at CBRE, tells Computer Weekly.
“They are targeting key European cities that have significant business activity
and/or those with a high population of connected people – these are the cities
where cloud businesses are most successful.”
The colocation community has moved swiftly to respond to this trend, and ensure there is enough space to go around, with CBRE’s full-year report for 2017 revealing that more spare capacity came online last year than in any previous 12-month period.
But just because a colocation provider has capacity to spare does not
automatically guarantee a hyperscaler will consider it a good fit for their
requirements.
Location, location, location
Sometimes it simply comes down to location. “Within these [geographical]
markets the hyperscale companies have their own preferences for how they
shape their colocation footprint and ‘availability zones’,” says Patel. “They may
choose a [provider that offers] a linear distance between two sites or a triangular
formation, for example.”
Shared history plays its part too, with hyperscalers often favouring one
colocation provider over another because they have done business with them
before in another city or country. “Hyperscalers, like other companies, value
relationships and there is an element of de-risking the process by working with
companies in new territories that have performed well for them in other
markets,” says Patel.
What this serves to highlight is just how lucrative a first-time engagement with a
hyperscale cloud firm can be for a colocation provider, as the potential for
follow-on investment is sizeable, he says.
For both parties, a second or third engagement often takes less time to sign off too, adds Stuart Levinsky, vice-president of sales, cloud and global
accounts at global colocation provider CyrusOne.
“Getting your first engagement with one of these hyperscale companies typically
takes three times as long as subsequent engagements because of things like
contracts. These organisations are looking for proven track records, and proof
you can deliver when you say you’re going to deliver,” he tells Computer
Weekly.
His company operates 48 datacentres across the United States, Europe and the
Far East, and – according to Levinsky – its services are currently being used
by nine of the 10 largest hyperscale companies in the world.
“Securing that first win and first engagement for CyrusOne with these hyperscale companies gives us a chance to prove ourselves. And once that is
done, and once you have got that trust, it opens the door for all subsequent
future business,” he says.
First mover advantage
Securing an anchor tenant for a new facility has always been a top priority for
both retail and wholesale colocation operators, says Steve Wallage, managing
director of datacentre-focused analyst house Broadgroup Consulting.
But, given how lucrative landing a hyperscale tenant can be, competition for
such deals is exceedingly high.
“We’ve had the likes of Amazon, Google and Microsoft investing in the UK, and
if you’re a colocation provider who gets one of those deals – whoosh – you’re
away, because they tend to land and expand, and that all generates its own
momentum,” he says.
Especially because securing a hyperscale cloud tenant can often lead to
winning the custom of their ecosystem partners too, says Wallage.
And it is not always the case that hyperscale firms will simply go for the best-known or most high-profile colocation provider in a given market. “They have
shown they would be willing to go to newer players who don’t have a huge
operational record,” he says.
Such an engagement would be a huge boon for a smaller player, but there are drawbacks, particularly when it comes to securing ongoing investor support.
“It’s a bit of a catch-22. The hyperscalers are willing to go to an unproven
player, but they [the colocation provider] don’t want to be seen as if their whole
business model is dependent on them,” says Wallage.
Supply and demand become skewed
Given the high (and growing) demand for colocation capacity across the major European markets, one might assume datacentre operators would have the luxury of charging what they like, but the zero-sum nature of these engagements means the opposite is true.
While the potential size and scale of their colocation engagements is huge,
there are relatively few hyperscale prospects out there, says Wallage,
compared to the number of colocation providers vying for these deals.
“There are only four or five [big cloud] guys, effectively, offering these deals,” he
says, with Amazon, Microsoft and Google chief among them.
“People sometimes talk about getting Salesforce in or Alibaba in, but really it’s
those top four or five.”
The balance of power between the colocation operators and the hyperscale
cloud firms is such that the amount of capacity being acquired by the
hyperscalers gives them an advantageous negotiating position, when it comes
to agreeing contract terms, says CBRE’s Patel.
“The hyperscalers are responsible for so much take-up [in the colocation hubs] that winning these requirements can be the difference between feast and famine. The hyperscalers are not short of providers willing to meet their requirements in any major market,” he says.
And they use this to their advantage, by demanding lower prices and break
clauses in their contracts that leave the door open for them to take their
business elsewhere at much shorter notice than the colocation industry is
accustomed to.
Financial protection
In Levinsky’s experience, it is also not uncommon for hyperscalers to request “flex up and flex down” clauses in their contracts, to give them a level of financial protection when it comes to rolling out new services within certain geographies.
“If the hyperscalers want to launch a cloud service in a particular region, they
won’t have revenue coming from it from day one, so they might ask [for some
leeway] in terms of how quickly they have to move in or if we can scale our
billing based on their rack counts.”
“And the flipside is, if that cloud service doesn’t take off, they will be looking for
ways to flex down and reduce their commitments further down the road,” says
Levinsky.
The ability to respond to such requests is something smaller providers, with
fewer facilities or a smaller geographical spread, might struggle with.
“There are certainly very sophisticated negotiators at the hyperscale companies,
and the demands are becoming greater on organisations like us to offer greater
flexibility and creativity in our contract terms,” he says.
“There are advantages to being a large-scale provider in our business,
[because] I can aggregate that risk across a lot of geographies and datasets.
“The [request] we’re seeing more of is around portability clauses, whereby they
will commit to using a facility in the Frankfurt market, but would like the flexibility
to move a certain percentage of that commitment to another European location
without penalty should they need to,” he continues.
The benefits that come from securing the business of a hyperscale cloud firm mean colocation operators are usually happy to accommodate such demands, provided they are able to.
“The colocation players are keen to get the cloud business so they will bid
aggressively and there is a view, because they are such a magnet for others to
follow suit, that it is worth discounting for them. On the whole, they will never pay more than £100 per kW unless there is a very compelling reason to do so,” says Wallage.
“As well as being very aggressive on price, there is also high demand from the
cloud guys for flexibility. Whereas a lot of large deals in the past would have
been for 10 or more years, a lot of the cloud guys are looking for break clauses from three to five years.”
Shortening break clauses
The demand for ever-shortening break clauses has emerged as a matter of
concern for some colocation providers, who fear – should the demand for cloud
services start to plateau at any point – the hyperscalers may start to ramp up
their efforts to build their own facilities again.
It is a discussion point touched upon repeatedly during various sessions at
Broadgroup’s annual Datacloud Europe conference in June 2018, but Wallage
says it is too early to say whether this is an objective the majority of
hyperscalers will be working towards.
“Sometimes they put in [short break clauses] because they can. To be fair, a lot
of it is to do with their negotiating power. If you have everyone queuing up to
offer you a deal, clearly you’re going to push it as aggressively as you can,” he
says.
“The view of a lot of the colocation guys is they are willing to accept it, through
gritted teeth, because they expect the guys to expand and take on more
capacity [as time goes on] – not contract, but it is still too early to say what is
going to happen.”
From Levinsky’s point of view, it all really comes down to how colocation
providers think enterprise appetites for their services are likely to change over
the coming years.
“We’ve got one school of thought that says enterprises will continue to move
load into the cloud eventually to the point where they cease to be colocation
customers, and they cease to run their own IT organisations and primarily all of
the world’s IT requirement will be fulfilled by a relatively small handful of these
hyperscale cloud companies,” he says.
“The competing viewpoint suggests we’re not getting there nearly as fast as people think we’re going to, and – while enterprises might put 40-60% of their loads in the cloud – they’re still going to continue to maintain large IT kits and
require colocation for some time to come.”
Meeting enterprise demand
If the former vision Levinsky lays out does become a reality, the hyperscale
cloud giants are going to need a ready supply of datacentre capacity to meet
enterprise demand for their services, and the colocation market should remain
in rude health for a while to come on the back of that.
“If we do end up with 50 companies globally supplying all the world’s IT, then I
do believe you are going to see consolidation [within the colocation space], and
those companies that have built those relationships with the hyperscalers and
are trusted technology advisors to them, will grow and become consumers of
the smaller colocation companies,” he says.
“I don’t personally foresee anything in the near term that suggests the rate of growth of the hyperscalers is likely to change, and certainly for the next three to five years, I think we’re on an incredible growth curve as these companies are basically insatiable in terms of their requirements.
“Nothing goes on forever, but – to my earlier point – if these hyperscalers end
up supplying the world’s computer dial tone down the road, they are going to
need space where they can continue to grow,” concludes Levinsky.
BA to sue CBRE over May Bank Holiday datacentre outage
Caroline Donnelly, datacentre editor
British Airways is understood to be taking legal action against the managed
services arm of US real estate consultancy CBRE over the datacentre outage
that blighted the firm over the 2017 May Bank Holiday.
The outage, which is now known to have been caused by a power failure at one
of the airline’s two West London datacentres, resulted in BA flights being
grounded at both Gatwick and Heathrow airports for two days, causing
disruption to thousands of the firm’s customers.
According to a report in the Mail on Sunday, the airline has appointed global law
firm Linklaters to oversee the action against CBRE, which is known to have
been responsible for managing the facilities at the time of the outage, and is
intent on taking its case to the London High Court.
Reports in the wake of the outage suggested the problems were down to a
defective uninterruptible power supply (UPS) system within the affected facility,
which failed to respond as expected when power to the site was lost for a short
time.
In a statement to Computer Weekly at the time, a BA spokesperson said an “uncontrolled” return of electricity to the site, brought about by “human error”, resulted in a power surge that caused the IT systems underpinning its check-in, baggage, ticketing and contact systems to fail.
The company also confirmed an “exhaustive investigation” into the outage
would be undertaken, but it is unclear at this time what the end result of that
was.
In July 2017, BA’s parent company, International Airlines Group (IAG),
confirmed the incident cost the organisation around £58m in compensation fees
and lost business, and blighted the travel plans of around 750,000 of its
customers.
Computer Weekly contacted BA for a comment on this story, but was told it
would be unable to respond until the “legal particulars” of the case are filed, and
Linklaters said it is unable to discuss the case at this time. CBRE, meanwhile,
declined to comment.
Apple pulls plug on €850m Irish datacentre project after three-year planning delay
Caroline Donnelly, datacentre editor
Apple has called time on its plans to build an €850m datacentre in Athenry, on
the west coast of Ireland, after more than three years of planning delays and
legal challenges.
In a statement to the Irish Times, the consumer electronics giant said it remains
committed to expanding its operations in Ireland, despite lengthy delays in the
local planning system putting paid to the Athenry project, which was first
announced in February 2015.
“Several years ago, we applied to build a datacentre at Athenry,” the Apple
statement read. “Despite our best efforts, delays in the approval process have
forced us to make other plans and we will not be able to move forward with the
datacentre.
“While disappointing, this setback will not dampen our enthusiasm for future
projects in Ireland as our business continues to grow.”
Apple’s decision to abandon the project comes on the same day that a Supreme
Court appeal hearing, brought about by objectors, was due to take place.
If the project had gone ahead, it would have seen Apple construct a 24,500m2
datacentre – and accompanying 220kV power station – in Derrydonnell Forest,
Athenry.
The prospect of Apple siting its datacentre in the area has proved a hugely
divisive issue in the local community, with supporters – mobilising under the
name Athenry for Apple – hailing the economic benefits of the project, while
others have expressed concerns about the environmental impact it could have.
It was the latter issue on which the two main objectors in the case, Allan Daly
and Sinead Fitzpatrick, have based many of their objections, as they have
pursued various legal routes over the past two years in an attempt to halt
Apple’s plans.
At the time of publication, neither of the objectors had commented on Apple’s
decision, and Apple had not responded to Computer Weekly’s requests for
further comment on the case.
In a statement posted to the Athenry for Apple Facebook group, Ciaran Cannon,
minister of state at Ireland’s Department of Foreign Affairs and Trade, said
Apple’s decision to drop the project was “deeply disappointing” for all those who
had campaigned for the datacentre to be built.
But their efforts will not have been in vain, said Cannon, because the case has
already prompted the Irish government to start pushing through reforms to the
way planning applications are handled in the country.
“I very much regret that Apple will not be pursuing its plans to construct this
datacentre, especially as the project would have been a source of significant
investment and job creation for Galway and the west of Ireland,” he said.
“It is deeply disappointing for all those who have worked so hard to secure this
potential investment in the first instance, not least the Athenry for Apple group.
The kind of reforms we need are already under way, particularly in relation to
our legal system.”
Visa reveals 'rare' datacentre switch fault as root cause of June 2018 outage
Caroline Donnelly, datacentre editor
Visa has revealed that a “rare defect” in a datacentre switch stopped millions of credit card transactions from being carried out during its UK-wide outage on Friday 1 June, in a letter to the Treasury Select Committee.
The Committee is understood to have contacted the credit card payments firm,
seeking both clarification over the cause of the outage and assurances about
what action Visa is taking to prevent a repeat of it occurring at a later date.
Over the course of the 11-page missive, Visa expands on its previous
explanation of a “hardware failure” being the cause of the 10-hour outage by
laying the blame on a defective switch in its primary UK datacentre, which – in
turn – delayed its secondary datacentre from taking over the load.
The primary and secondary datacentres are set up so that either one has sufficient redundant capacity to process all the Visa transactions that take place
across Europe should a fault occur, and the systems are tightly synchronised to
ensure this can happen at a moment’s notice.
“Each datacentre includes two core switches – a primary switch and a
secondary switch. If the primary switch fails, in normal operation the backup
switch would take over,” the letter reads.
“In this instance, a component within a switch in our primary data centre
suffered a very rare partial failure which prevented the backup switch from
activating.”
This, in turn, meant it took longer than intended to isolate the primary datacentre
and activate the backup systems that should allow its secondary site to assume
responsibility for handling all of the credit card transactions taking place at that
time.
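The failure mode Visa describes, a partial fault that defeats automatic failover, can be illustrated with a deliberately simplified sketch. Everything below is hypothetical (the `Switch` class, the binary health check); it is not Visa’s architecture, only a toy model of why failover logic keyed to a total failure can miss a partial one.

```python
# Hypothetical sketch (not Visa's actual system): a backup that only
# activates when the primary reports itself unhealthy never engages if
# the primary suffers a partial fault but still passes its health check.

class Switch:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # simplistic binary health signal

def active_switch(primary, backup):
    """Naive failover: hand over only when the primary reports unhealthy."""
    return backup if not primary.healthy else primary

# A clean, total failure is detected and triggers failover...
assert active_switch(Switch("primary", healthy=False), Switch("backup")).name == "backup"

# ...but a partial failure that still reports "healthy" keeps the
# degraded primary in service, so the backup never takes over.
degraded = Switch("primary", healthy=True)  # faulty, yet passing checks
assert active_switch(degraded, Switch("backup")).name == "primary"
```

Real payment networks use far richer health signalling (error rates, latency, heartbeats) precisely because a single pass/fail probe can leave a degraded component in service.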
The firm’s UK datacentre operations team were alerted to the faulty switch at
2.35pm on Friday 1 June, after noting a “partial degradation” in the performance
of the company’s processing system, before initiating its “critical incident”
response protocols, the letter continues.
“It took until approximately 19:10 to fully deactivate the system causing the
transaction failures at the primary datacentre,” the letter continues.
“By that time, the secondary data centre had begun processing almost all
transactions normally. The impact was largely resolved by 20:15, and we were
processing at normal service levels in both datacentres by Saturday morning at
00:45, and have been since that time.”
Visa is also quick to point out that at no point during the incident did a “full
system outage” occur, but admits the percentage of transactions that were
processed successfully did fluctuate, with peak periods of disruption occurring
between 3.05pm and 3.15pm, and again between 5.40pm and 6.30pm.
During these times, around 35% of attempted card transactions failed, but outside these periods the failure rate dropped to 7%.
“Over the course of the entire incident, 91% of transactions of UK cardholders
processed normally; approximately 9% of those transactions failed to process
on the cardholders’ first attempt,” the letter continues.
Failed transactions
In total, 51.2m Visa transactions were initiated during the outage, and 5.2m
failed to go through.
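As a quick sanity check, those headline totals can be turned into a failure rate directly. The figures below come straight from the letter; the small gap against Visa’s quoted 9% reflects that its figure covers only UK cardholders’ first attempts.

```python
# Back-of-the-envelope check on the totals quoted in Visa's letter.
initiated = 51_200_000  # Visa transactions initiated during the outage
failed = 5_200_000      # transactions that failed to go through

print(f"{failed / initiated:.1%}")  # prints 10.2%
```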
Since the outage was resolved, Visa said it has focused its efforts on preventing a repeat of the events of 1 June, but admits it is still not clear why the offending switch failed when it did.
“We removed components of the switch that malfunctioned and replaced them
with new components provided to us by the manufacturer,” the company said.
It is also working with its hardware manufacturer to conduct a “forensics
analysis” of the faulty switch, Visa added, and undertaking a “rigorous” internal
review of its processes.
“We are working internally to develop and install other new capabilities that
would allow us to isolate and remove a failing component from the processing
environment in a more automated and timely manner,” it said.
The company is also “bringing in an independent third party to ensure we fully understand and embrace lessons to be learned from this incident”.
Microsoft deploys underwater datacentre off the coast of Orkney
Caroline Donnelly, datacentre editor
Microsoft has deployed a 40ft long underwater datacentre off the coast of the
Orkney Islands near Scotland, as part of its ongoing research into the potential
use cases for subsea server farms.
The unmanned facility contains more than 860 servers and is expected to stay in place for a year, with Microsoft engaging French submarine engineering company Naval Group to design the vessel.
During that time, the performance of the facility will be closely monitored,
Microsoft wrote in a blog post outlining the project. Its energy consumption, as
well as the amount of sound and heat it gives off, will also be tracked.
The deployment falls under the remit of the software giant’s Project Natick initiative,
which Computer Weekly first reported on in February 2016, and is focused on
determining how feasible it would be to build underwater datacentres powered
by offshore renewable energy sources.
According to Microsoft, the Orkney project marks the start of the second phase
of Project Natick, with the first phase serving to prove the underwater
datacentre concept had legs. It is now time to see if it is “logistically,
environmentally and economically practical”, the company said.
The Orkney datacentre, known as Northern Isles, requires just under a quarter
of a megawatt of power when running at full capacity, which it draws from the
island’s power grid.
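Using only the round numbers in the article (just under a quarter of a megawatt, more than 860 servers), it is possible to get a rough feel for the per-server power budget. The calculation below is our own back-of-the-envelope arithmetic, not a Microsoft figure, and the real numbers will differ.

```python
# Rough per-server power draw for Northern Isles, from the article's round figures.
total_watts = 250_000  # "just under a quarter of a megawatt", taken as 0.25 MW
servers = 860          # "more than 860 servers", taken at face value

print(round(total_watts / servers))  # prints 291, i.e. roughly 290 W a server
```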
The island itself runs exclusively on renewable power generated by its own wind
turbines and residential solar panels, and is also a sizeable testbed for tidal
energy generation.
“We know if we can put something in here and it survives, we are good for just
about any place we want to go,” said Microsoft’s special projects researcher,
Ben Cutler, in the blog post.
According to Microsoft, there are a number of economic and technical
advantages to be had from using underwater datacentres. For example, the
seawater surrounding the vessels would negate the need to rely on mechanical
cooling methods to keep the equipment inside running at an optimal
temperature.
Furthermore, the facilities could also potentially provide homes and businesses
in coastal areas, particularly those in remote places with patchy internet
connections, with easier access to low-latency cloud services, Microsoft
claimed.
Datacentre industry watchers have largely been supportive of Microsoft’s
subsea server farm research in the past, although Greenpeace previously aired
concerns over the potential risk of thermal pollution occurring as a result of
planting heat-discharging datacentres on the sea floor.
Dividing lines: EU bid to curb server energy use has the European datacentre community split
Caroline Donnelly, datacentre editor
Finding ways to improve the energy efficiency of their sites is an undisputed top
priority for datacentre operators, given just how big a line item power costs are
for so many of them.
For this reason, one might think an EU-backed legislative push that could
potentially lower the collective power consumption of datacentres across
Europe would be warmly welcomed by the industry and its assorted
stakeholders, but – in reality – the initiative is proving to be surprisingly divisive.
“ICT has delivered amazing efficiency improvements over the last few decades
without the help of regulation but Moore’s Law cannot go on for ever, and the
datacentre sector is a significant energy user,” Emma Fryer, associate director
of climate change programmes at technology trade body, TechUK, tells
Computer Weekly.
“However you look at it, as the [datacentre] sector grows, we do have to accept
increasing regulatory scrutiny.”
That scrutiny is arriving through legislation such as the proposed EU EcoDesign Directive, which is mooted as a means of improving the energy efficiency of a wide range of products, spanning household appliances to enterprise servers and storage devices, by setting mandatory limits on how much power they use.
Under the proposals, which are in the final stages of being approved by EU
lawmakers, products that exceed these energy limits will be phased out of use
and sale within the EU, starting from March 2020.
The hope is that this will help improve the quality of goods sold across EU member states, while limiting the amount of energy and resources used to create and run them.
Proposed guidelines
The enterprise servers and storage portion of the directive is covered in Lot 9, with the EU proposing to set guidelines on how much energy products within this category definition consume when operating in an idle state.
According to EU estimates, the implementation of Lot 9 could collectively result
in annual energy savings of approximately 9TWh by 2030, with 2.4TWh of these
savings attributable to curbing the amount of power used by idle servers.
To put that 9TWh figure into perspective, the draft of the regulation claims this is
on a par with how much energy Estonia uses over the course of a year, based
on 2014 figures.
Differences of opinion
The prevailing view among stakeholders is that any effort to curb the continent’s energy use on such a large scale is welcome, but it is the EU’s proposed use of the idle energy metric to achieve these savings that has proven so contentious.
A four-week consultation on the proposals back in July 2018 saw IBM, Dell-
EMC and HPE all query the rationale for using the metric, claiming that idle
energy measurements are an ineffective means of determining how energy
efficient a server truly is.
In fact, Kurt Van der Herten, EU environmental policy program manager at IBM,
says – in a statement to Computer Weekly – that the directive’s proposed
methodology could end up driving up the energy use of datacentres, rather than
reducing it.
“There are elements that may have the consequence of decreasing the energy
efficiency of and reducing the power consumption savings from datacentres
contrary to the intent of the Eco-Design directive,” he says.
“The proposal to set a limit of idle power consumption of servers will result in the
deployment of a larger number of less efficient servers, higher energy use, and
poorer datacentre energy performance.”
Server use neglected
It is further claimed, in a separate consultation response by HPE, that the Directive neglects to take into account how servers are used within datacentre environments, given its focus on measuring the idle energy use of each individual appliance.
“The current focus on idle addresses the individual product one, and fails to
recognise how servers are used to manage multiple workloads and utilisation
levels and how it can be done in the most efficient way,” says Pieter Paul Laenen, compliance manager for Europe, Middle East and Africa (EMEA) at HPE, in its written response to the four-week consultation.
“In essence, [by] settling idle limits for individual servers which are too tight for
new high performance servers, [it is our view] that this will result in EU
datacentre operators being forced to use more low performance servers at a
higher total energy use.”
Actively opposed
This sentiment is shared by all three suppliers, each of which has separately made a case in its consultation submission for the idle energy metric to be dumped in favour of an alternative active efficiency measure because, in their view, it provides a better overall picture of how well these appliances perform.
“Active efficiency remains the optimal tool to remove the least efficient servers,
driving energy efficiency not only in enterprise datacentres but in small closet
installations as well,” says HPE, in its submission.
This is because it not only takes into account the amount of energy used when
servers are running idle, but also how much power they consume when in active
use too, says HPE.
The suppliers’ claims have won the support of TechUK, which further asserts that some of the most efficient servers on the market consume relatively high amounts of energy when idle, but argues this does not mean they should be precluded from sale within the EU.
Susanne Baker, head of programme, environment and compliance, at TechUK
says: “Servers have become better performing and are more efficient when
operational, the trade-off is a slight increase in idle energy.
“Overall though it results in energy reductions. Measuring server efficiency by
only using idle power metrics will see the most efficient and best performing
servers banned from the EU market,” she says.
This is based on the theory that, once the Directive comes into force, datacentre
operators will opt for servers based on how much energy they consume at rest
rather than how much they use when performing a given task – and, in turn, this
could result in servers being deployed in datacentres that consume more
energy overall, and are, for that reason, considered to be less efficient.
An industry divided
This assertion, put forward by those opposing the use of the idle energy metric,
is roundly contested by a number of datacentre industry stakeholders, including
academic researchers and analysts, as well as some other members of the
server supplier community too.
Some have privately expressed misgivings to Computer Weekly over the motivations behind Dell-EMC, HPE and IBM’s decision to publicly condemn the EU’s use of the idle energy metric, suggesting their dissatisfaction is born of their own commercial interests.
After all, in its consultation submission, Dell-EMC claims the EU’s mooted
energy saving projection figures would come at the expense of product
availability, as around 76% of the servers currently on sale would have to be
phased out for failing to meet the Directive’s “aggressive idle power limits”.
All three of the manufacturers in question retain a sizeable hold on the EMEA
server market at present, but have also seen their dominance challenged in
recent years by the hyperscale cloud community’s growing appetite for
datacentre kit made by white-label, original design manufacturers (ODM).
It is also worth noting there are manufacturers out there making high-
performing, energy efficient servers that are widely used in datacentres across
Europe, who have not seen fit to respond to the consultation at all, because
their kit falls well within the idle energy limits.
“There is a huge rise in a new generation of servers that are based on open
standards, such as the Open Compute Project ones, that already well exceed
anything the legislation is requiring,” says Rabih Bashroush, a reader in
distributed systems and software engineering at the University of East London’s
(UEL) School of Architecture, Computing and Engineering, on this point.
As part of his academic work, Bashroush recently completed the 36-month, EU-
backed EURECA research project, which focused on helping public sector
datacentre operators pinpoint areas where cost and operational efficiencies can
be made within their facilities.
He is very much in favour of what the Directive is trying to achieve, as well as the EU’s decision to clamp down on the amount of energy servers consume when running in an idle state.
“Low server utilisation is perhaps one of the key problems we have when it
comes to energy waste [in datacentres]. When servers are running idle (i.e.
doing no useful work) they still consume anything between 30%-to-70% of their
energy, which also requires the underlying datacentre power and cooling
infrastructure to be operational (and consuming energy),” he says.
“According to the research findings from our EURECA work, the average server
utilisation in Europe ranges between 15%-to-25%, with the occasional high
performer averaging 30% or so.
“To help reduce energy waste, we ought to do something about idle state
energy consumption, and that is what the EcoDesign legislation is trying to do,”
he adds.
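As a rough illustration of the scale of waste Bashroush describes, his figures can be dropped into a back-of-the-envelope calculation. The idle-power and utilisation fractions below sit within the ranges he quotes; the fleet size and per-server wattage are hypothetical, chosen only to make the arithmetic concrete:

```python
# Rough estimate of energy burned by idle servers over a year.
# An idle draw of 60% of peak and 20% utilisation fall within the
# 30%-70% and 15%-25% ranges quoted above; the 1,000-server fleet
# and 500 W peak draw are made-up illustrative numbers.

HOURS_PER_YEAR = 8760

def idle_energy_kwh(servers, peak_watts, idle_fraction, utilisation):
    """kWh per year consumed while servers sit idle, doing no useful work."""
    idle_watts = peak_watts * idle_fraction          # draw when doing nothing
    idle_hours = HOURS_PER_YEAR * (1 - utilisation)  # time spent idle
    return servers * idle_watts * idle_hours / 1000

print(round(idle_energy_kwh(1000, 500, 0.60, 0.20)))  # ~2.1 million kWh/year
```

Note that this figure covers the servers alone; as Bashroush points out, the supporting power and cooling infrastructure consumes further energy on top.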
And while the Directive is focused on curbing the datacentre industry’s overall
energy consumption using idle energy caps, it is unlikely to be the EU’s sole
aim, says John Goodacre, a professor of computer architectures at the
University of Manchester, and founder of UK-based converged infrastructure
startup, Kaleao.
“I suspect the whole point of this legislation was to drive innovation and
persuade these guys to do things differently. There is not a market pressure that
motivates them [at present],” he says.
“It is a historical design choice in a sense that if you have to be low power, you
put in those features, and if you don’t, you haven’t, and the Directive could
promote innovation and change over time.”
Encouraging innovation
This is a view shared by John Laban, a European representative for the Open
Compute Project (OCP), a Facebook-backed industry initiative whereby
participants share datacentre and server design concepts with each other and
the wider IT community to encourage innovation.
Instead of focusing on how onerous the idle energy limits are, he argues, manufacturers should be using the Directive to rip up the server design rulebook and revamp their product roadmaps.
“Whether we like it or not, it is very clear that Europe needs to do something to
lower the energy usage of its fast-growing datacentre industry. The products
and technologies to do that are already available as open source hardware
designs,” he says.
“So I actually think that the EU is giving us a great opportunity here to start
using innovative datacentre hardware that will make it possible for a typical
datacentre to reduce the energy consumption of idle servers by at least 50%.”
The final countdown
At the time of writing, the Directive was in the final stages of being formally adopted by the European Commission, after it was cleared to proceed by a majority vote of the Regulatory Committee on 17 September 2018.
Its contents will now be subjected to three months of scrutiny by members of the
European Parliament and Council, and – while they cannot amend its wording –
the draft can still be opposed.
For supporters of the EU’s preference for using the idle energy metric, the
outcome of the vote is being treated as a significant win, particularly as past
revisions to the Directive have led to accusations that its content has been
significantly “watered down” over the course of its successive drafts.
“The legislation excludes all HPC servers, servers with integrated APAs and high resilience servers, plus many others (based on the number of cores running the same operating system and number of ports, for example),” says UEL’s Bashroush.
“If anything, the legislation has been watered down so much already by the
Commission due to pressure from certain industry players, diluting it any further
by removing the idle power limits will defeat the purpose of the legislation and
will mean a major opportunity is missed to reduce the energy waste in
datacentres.”
Late opposition a mistake
Furthermore, Laban’s colleague and fellow OCP European representative,
Robbert Hoeffnagel, says any subsequent moves to oppose the Directive at this
late-stage would be a mistake.
Particularly as, in his opinion, it could have as big an impact on the European
datacentre industry as the introduction of the Power Usage Effectiveness (PUE)
metric did more than a decade ago.
In that time, the metric has gone from being a concept pioneered by The Green
Grid in 2007 to becoming a metric that is widely used across the industry by
operators to benchmark the energy efficiency of their facilities, and pinpoint
areas for continued improvement.
“Look at what happened with PUE. It was a controversial metric and in the beginning hardly anybody really knew how to do the calculations, but it grew into something quite powerful when datacentres started to recognise they could use PUE for marketing purposes,” he says.
“That started to drive investments – little by little – in reducing overall energy
usage by getting rid of the low-hanging fruit. Maybe in time we will see the same
trend here.”
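For context on the metric Hoeffnagel refers to, PUE is simply the total energy a facility draws divided by the energy that actually reaches the IT equipment, so 1.0 is the (unreachable) ideal and lower is better. A minimal sketch, using hypothetical meter readings rather than figures from the article:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    The gap between the two readings is overhead: cooling, power
    distribution losses, lighting and so on.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1.8 GWh drawn in total, 1.2 GWh to IT load
print(pue(1_800_000, 1_200_000))  # 1.5
```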
Conciliatory role
And, with the Directive getting closer to coming into force unopposed with each passing day, TechUK’s Fryer says she sees the trade body’s role in the debate changing, becoming more conciliatory in nature, as it works to ease the tensions between those for and against the proposals.
“Over the last few weeks, we have been sounding out the wider sector on this
issue to try and understand the gap between industry and the Commission,
[as] some academics take the view that we can have our cake and eat it on the
idle power front. This view is not shared by the major server manufacturers,”
she says.
“Much seems to depend on how people anticipate how the sector and its
business models will evolve: the prevailing view has been that we will see
greater consolidation, continuing the existing trend towards larger, more
powerful machines.”
“Others envisage a scenario where the trend is the opposite – towards smaller,
lower power devices within a distributed or Edge infrastructure.
“I suspect the data centre landscape of the future will accommodate both
models – in which case it is even more important that regulation is appropriately
targeted. Time will tell,” she concludes.
Infinity SDC sells Here East Olympic Park datacentre to fund development of Romford facility
Caroline Donnelly, datacentre editor
Colocation provider Infinity SDC has sold off its much-hyped Here East
datacentre in the former London Olympic Park in Stratford, East London, for an
undisclosed sum to the V&A Museum.
The company was one of a number of tech firms to have set up shop in the
former Olympic Press and Broadcast Centre, having agreed to lease 260,000
square feet of the site around 2013, with a view to building out a datacentre with
more than 130,000 square feet of technical capacity.
At the time, Infinity SDC claimed the development would eventually be home to one of the “largest and most efficient datacentres in Europe”, and Computer Weekly understands the company secured one customer for the site, which moved in during January 2016, and – at one point – attracted interest from at least one of the hyperscale cloud providers.
Computer Weekly has contacted Infinity SDC for clarification about how many
customers the Here East site housed at the time of the sale, and if they have
been relocated to the firm’s other datacentre campus in Romford, Essex, but
was still awaiting confirmation at the time of publication.
The firm has steadily wound down the number of datacentres it operates in and
around London over the course of the past three years, having sold its shared
services facility in Slough to Virtus in December 2015, followed in March 2017
by the divestiture of its Stockley Park site in West London to Zenium.
These site sell-offs come at a time when the London colocation market is in the throes of a boom period, as hyperscalers – such as Amazon Web Services (AWS), Google and Microsoft – increasingly lease datacentre space from third-party operators, instead of building their own, so they can keep up with growing user demand for locally hosted cloud services.
Where Infinity SDC and the Here East site are concerned, there are a couple of site-specific challenges that might have proven off-putting to potential colocation customers, said Steve Wallage, managing director of datacentre-focused analyst house Broadgroup Consulting.
“Particularly where the hyperscale guys are concerned, they want somewhere
with lots of expansion capabilities, they want the ability to customise their
requirements and they want a bit of privacy – and you get a lot of footfall in that
area,” he told Computer Weekly.
“One of the supposed advantages of the site was that you had these media
guys in there, such as BT Sport and universities, so you had those as potential
customers, but for the cloud guys, it’s not exactly great to have all those people
hanging about.”
As previously reported by Computer Weekly, Infinity SDC said the money raised by the sale of its Slough site to Virtus would be used to accelerate the development of its remaining London datacentres, including the now-sold site in Stratford.
According to Infinity SDC’s most recent set of Companies House accounts,
which cover the year to March 2017, the Stockley Park sell-off is credited with
helping the firm to “reduce debt” and “strengthen its balance sheet”.
The firm also reported a 34% year-on-year drop in operating costs to £4.2m, as well as a 2.3% increase in revenue from its continuing operations, which rose from £17.2m to £17.6m over the same period.
Infinity SDC building out its presence
In a statement, Infinity SDC CEO Stuart Sutton said the sale will enable the firm
to concentrate on building out its presence in Romford, where it has two
datacentres.
“Moving forward, our focus is firmly on continuing the development of our
Romford datacentre campus, which has already proven extremely popular with
customers looking for a well-connected, purpose-built, state-of-the-art facility
close to the heart of London.”
NHS Wales IT outage: What went wrong with its datacentres?
Caroline Donnelly, datacentre editor
A networking outage caused two NHS datacentres to fall offline on Wednesday
24 January, preventing healthcare workers across Wales from accessing patient
data and core IT systems.
According to the BBC, healthcare professionals working for NHS Wales were
unable to access multiple IT systems for several hours, including those used to
book patient appointments, retrieve test results, and log notes taken during
consultations.
Email and internet usage is also thought to have been affected, along with the
systems used by NHS Wales to access pharmaceutical information and
administer drugs.
The NHS Wales Informatics Service (NWIS), which oversees the delivery of IT
systems for health and social care organisations across the country, attributed
the problems to network issues at two of its datacentres, in a brief statement on
its website.
“Both NHS Wales national datacentres are now back online, following an earlier
networking outage. All clinical systems are now available,” the statement said.
“NWIS will continue to monitor the situation and work with our equipment
suppliers to investigate the root cause. We appreciate that this will have caused
disruption to our service users and we apologise for any inconvenience
caused.”
Computer Weekly contacted NWIS for further guidance on the steps the
organisation is taking to prevent a repeat of the reported problems, but had not
received a response at the time of publication.
The facilities are about 30 miles apart, with one located in Blaenavon,
Pontypool, and the other in Cardiff Bay. Collectively, they are home to the
infrastructure used to deliver IT services to NHS Wales.
Guillaume Ayme, IT operations evangelist at big data analytics software supplier Splunk, raised concerns about the datacentres’ setup, given that running dual sites usually means that, in the event of an outage, one will fail over to the other.
“For the issue to be impacting two datacentres suggests it is severe, as one
would normally be the backup for the other,” he said. “This may suggest there
has been a problem in the failover procedure.
“Once the service is restored, it will be essential to find the root cause to avoid a
potential repeat. This can be complex for organisations that do not have full
visibility into the data generated by their IT environment.”
NHS Wales is known to have undergone a rationalisation and upgrade of its
datacentre estate for efficiency and resiliency purposes in recent years,
resulting in the closure of a number of smaller facilities and server rooms, with
its Blaenavon and Cardiff Bay sites taking up the slack.
The organisation has also moved to develop and roll out applications that run on
a common, underlying infrastructure, known internally at NWIS as the National
Architecture, to enable greater interoperability and data-sharing between
various clinical IT systems.
Built using service-orientated architecture (SOA) principles, the National
Architecture “enables information originally gathered in one user application to
be reused in another”, states the 2017 NWIS Annual review document.
“It aims to provide each user with high-quality applications that support their
daily tasks in the delivery of health and care services, while also ensuring that
any relevant information created about the citizen is available safely and
securely, wherever they present for care,” the document says.
The document credits the setup with breaking down boundaries between the
various departments and organisations. In turn, this is giving clinicians working
within NHS Wales “a national view” of the health of the country and its citizens.
It also acknowledges the underlying complexity of the setup, which Dave
Anderson, digital performance expert at application performance management
software provider Dynatrace, suggested could be why the incident took as long
as it did to resolve.
“While systems are now back up and running, the chaos it created shows why
we need to move from hours to minutes to resolve problems like this,” said
Anderson.
“Ultimately, it comes down to our reliance on software and the need for it to
work perfectly – and that’s difficult in IT environments that are getting more
complex by the day.
“The challenge is that trying to find the root cause of the problem is like finding a
needle in a haystack, and then understanding the impact and how to roll back
from it is even more difficult.”
The bitcoin boom: How colocation datacentres are cashing in on cryptocurrency mining
Caroline Donnelly, datacentre editor
The money-making potential of cryptocurrency mining is an opportunity that has
caught the attention of huge numbers of users, ranging from the hobbyist to the
enterprise.
Participants are, essentially, responsible for processing cryptocurrency
transactions using specially designed hardware rigs that ensure each of these
transactions is recorded in a linear, time-stamped fashion within a public ledger
known as a blockchain.
It is a compute and energy-intensive process, and participants are rewarded for
their efforts with whatever cryptocurrency they have chosen to mine. The more
time they spend mining, the more money they stand to make, and downtime
must be avoided at all costs.
“Users might be running mining units in a warehouse or garage at the moment,”
Greg McCulloch, CEO of Godalming-based colocation provider Aegis Data, tells
Computer Weekly, “but if the lights go out and they lose power, they’re not
making money.”
To achieve maximum profitability, miners leave their rigs running all day and
night, meaning round-the-clock access to reliable power sources and resilient,
high-speed network connections are a must.
These requirements make cryptocurrency mining sound like a dream use case
for colocation datacentres, but this is a realisation that some participants have
been relatively slow to reach.
The great British bitcoin rush
While bitcoin, the most well-known and high-profile example of a
cryptocurrency, has been around since 2009, user interest in using colocation
facilities for mining purposes – particularly UK-based ones – began picking up a
year or so ago, operators claim.
“It probably started for us about July/August 2017, which is when the price
[bitcoin values] crossed over to about $5,000, prompting a fairly regular run of
enquiries about colocating miners to start filtering through,” says David Barker,
founder and technical director of West Byfleet-based colocation provider 4D
Data Centres.
Anecdotally, some of this interest comes from hobbyist miners looking to scale up and improve the resiliency of their operations – and, perhaps, secure a secondary income – while other enquiries come from what could be classified as investor-backed micro-businesses.
“They range from a couple of IT guys, with maybe a backer funding the
acquisition of miners, through to hobbyists with a couple of mining rigs they can
no longer run in their house for noise or power consumption reasons,” adds
Barker.
As miners have progressed from using home PCs on to specially designed,
power-hungry mining rigs to process the complex calculations needed to create
their favoured cryptocurrency, it stands to reason that they’ve started to look
beyond their own four walls for locations that would be a better fit to run them in.
“These guys are running these units to the absolute limit, and – while they might
be paying a bit more to use a datacentre – they get security, and can sleep at
night knowing they have power, cooling and resiliency,” says McCulloch.
Flexible colocation contract terms
All of the colocation providers Computer Weekly has spoken to are at great
pains to point out their cryptocurrency mining customers benefit from the same
uptime, availability and levels of support as their more traditional enterprise
clients.
One notable difference, though, is the relatively short length of leases the
colocation providers are willing to offer miners, which is partly down to operator
concerns about the volatility of the cryptocurrency market.
As such, Computer Weekly understands there are clauses in some colocation
contracts that allow cryptocurrency miners to have their rigs switched off
whenever cryptocurrency values drop below a pre-defined point.
“I would never sign a bitcoin miner on a three-year term because it would not be
a good business model for either of us,” says McCulloch. “We don’t want to be
tied into a traditional two- to three-year datacentre contract if the bottom falls out
of the cryptocurrency market.”
So instead of signing them up to multi-year contracts, providers are opting for
rolling, monthly renewals instead, it seems.
“That gives flexibility to the datacentres to move users out if something isn’t
going right, and for the users to pull out their equipment at short notice, should
they need to,” says McCulloch.
From boom to bust?
Based on Barker’s own calculations, running a bitcoin mining operation out of a
datacentre remains profitable until the exchange rate reaches the $4,000 mark.
“That profitability curve drops as you approach that figure,” says Barker. “At
current values, you will get about 18 months’ worth of profitability out of it.”
The performance of a mining rig is measured using the hash rate metric, which
tracks the number of computations that take place per second to help
participants contribute to the supply of new cryptocurrencies. The higher the
hash rate, the better.
Therefore, performance improvements to the hardware used to mine
cryptocurrencies are a big influence on how much money miners can make, but
power costs are the biggest determinant of profitability.
“In terms of efficiency gains, three years ago you could buy a mining rig with a [hash rate] of 15 terahashes per second and it consumed around 8kW of power,” says Barker. “Now you can buy one that runs at 15 terahashes a second and consumes around 1.5kW.”
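Barker’s two data points translate directly into the watts-per-terahash efficiency measure commonly used to compare rig generations; the sketch below uses only the figures he quotes:

```python
def watts_per_terahash(power_kw, hash_rate_ths):
    """Rig efficiency: watts drawn per TH/s of hashing. Lower is better."""
    return power_kw * 1000 / hash_rate_ths

old_rig = watts_per_terahash(8.0, 15)   # three-year-old rig from the article
new_rig = watts_per_terahash(1.5, 15)   # current-generation rig

# Same hash rate, so the power bill per unit of mining work falls sharply
print(f"{old_rig:.0f} W/TH -> {new_rig:.0f} W/TH")  # 533 W/TH -> 100 W/TH
```

On those numbers, the newer rig does the same mining work for roughly a fifth of the power, which is what keeps miners on the hardware-upgrade treadmill described below.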
To remain competitive, efficient and – ultimately – profitable, there may come a
point for miners where they need to decide if it is worth upgrading their
hardware or exiting the market altogether. A lot of this comes down to how
quickly miners believe they can recoup the cost of such a hardware investment.
“Only people who can do [mining] at scale will have the funds to continue on
that treadmill [of hardware investment], so I suspect we will continue to see
mining consolidate into a few major players,” says Barker.
Beijing-based bitcoin mining hardware manufacturer Bitmain is one to watch, in
this regard. Not only does it make the kit powering many of the third-party
mining operations that exist today, but the company also runs cryptocurrency
server farms of its own.
According to the estimates of US analyst house Bernstein, Bitmain is thought to
have made between $3bn and $4bn in operating profit in 2017, putting it on a
financial performance par with chipmaking giant Nvidia.
“[Bitmain] is effectively using the money that comes in from people buying the
miners to fund its own mining operations and operating in a part of the world
where electricity is extremely cheap,” adds Barker.
Governments across the world are still grappling with how to regulate
cryptocurrencies, meaning countries where it is advantageous to operate today
may become less accommodating hosts in the years to come.
This is a worry, says Steve Wallage, managing director of datacentre-focused analyst house Broadgroup Consulting, as miners are a lot more mobile than traditional colocation clients, and will vote with their feet if they have to.
“These guys have their maps of the world and if the place they are at the
moment becomes unfriendly, from a regulation or power cost perspective, there
are lots of other places they can go and lots of places queuing up to host them,”
says Wallage.
“It could be argued they would like to be somewhere where there is low
government involvement and assessment of their affairs, but also cheap energy
and taxes, with Iceland, Scandinavia and Canada all targeting the market.”
Swedish colocation provider Hydro66 is an example of an overseas operator
that is supplementing the revenue generated by its more traditional enterprise
clients by throwing over some of the space in its renewably powered facility to
cryptocurrency miners.
In terms of customer base, Hydro66 claims its client mix is broadly the same as
what the other colocation providers are seeing, but the firm is also picking up
business from some of the Bitmain-like players which are starting to dominate
the cryptocurrency landscape.
“We are seeing this type of activity coming from Japan and China, mainly due to
geo-diversification needs for risk balancing and also for [the Nordics] access to
low-cost green power,” says Paul Morrison, the firm’s business development
manager.
“UK power costs and the stability and capacity of the grid will be significant
headwinds for cryptomining in the UK, while other regions, such as the Nordics,
offer low-cost green power at industrial strength and scale.”
Home versus away
While freedom of movement is a concept cryptocurrency miners have at their
disposal to take advantage of, there are a number of reasons why some prefer
to keep their rigs running closer to home in the UK, rather than ship their kit
overseas where they could potentially make more profit.
“If you place your kit in Norway, for example, there is the cost involved with
getting everything over there for a trend that might fall over in a year or two, so
they don’t mind paying a little bit more to keep it local,” says McCulloch.
There is also an element of “server hugging” involved, he adds, as people still
like to have the option to come and visit their kit, with relative ease, should they
want to.
Barker shares this view, and recounts anecdotal tales of users who paid to have
their hardware shipped over to the continent to take advantage of cheaper
power prices, only to find it never arrived.
“If you’ve spent hundreds of thousands of pounds on mining equipment, do you
really want to trust that investment to somewhere you have never seen?” says
Barker.
“That tends to be the main driver for people looking to retain their miners in the
UK: they’re willing to accept a slightly higher price point on power for that
security of knowing where their equipment is.”
There are a good number of reputable colocation providers dotted across
Europe with a track record of catering to the needs of cryptocurrency miners,
but it does pay to be wary, adds Barker.
“There are people out there who have seen the strong demand for this and have
just bought office space or warehouses and are advertising it as cryptocurrency
colocation, but they haven’t got the experience of running the facilities or
enough staff to deal with enquiries from customers,” he continues.
The decision to host locally or overseas is not necessarily an either/or
conversation, adds McCulloch, as some of the miners on Aegis Data’s books are
running rigs in both the UK and the Nordics to keep a lid on costs.
“It is not a dissimilar arrangement to the one some of our enterprise clients have
where they are running workloads in traditional datacentres, in the cloud or a
colocation facility, and getting a bit of a mix,” he says.
The blockchain opportunity
All things considered, 4D Data Centres’ Barker says UK cryptocurrency miners
probably have around a year to 18 months to maximise their profitability, before
market consolidation, high power prices and hardware refresh costs really start
to take their toll.
“Mining cryptocurrencies is probably a good thing to do for the next 12 to 18
months, but beyond that – and where bitcoin, litecoin, dash and the other
currencies people are mining in the datacentre are concerned – it is going to
become too centralised. Long term, I don’t think the market for mining is in the
UK,” he adds.
For this reason, Wallage predicts blockchain, rather than cryptocurrency mining,
is where the long-term opportunity in all this lies for the datacentre community
as a whole.
The open source community, in particular, has expanded the functionality of the
blockchain code used to underpin cryptocurrency transactions to extend the
usefulness of this distributed ledger technology to a much larger pool of users
and industries, including retail, financial services and legal.
“Cryptocurrency mining to secure the transactions on the network is only one
blockchain application, the same as email is only one application on the
internet,” says Hydro66’s Morrison.
“The blockchain is simply a decentralised ledger where cryptography replaces
the need for trusted intermediaries. So any situation or process which depends
on middlemen can potentially be improved by implementing a blockchain
system.”
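Morrison’s description of a blockchain as a ledger secured by cryptography rather than by a trusted intermediary can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not how bitcoin or any production system is implemented: each block simply commits to the SHA-256 hash of the previous block, so altering any historical entry breaks every later link. All function and field names here are hypothetical, chosen for illustration only.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def verify(chain):
    """Re-derive every link; any tampering breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "genesis")
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(verify(chain))   # True: every link is intact
chain[1]["data"] = "alice pays bob 500"
print(verify(chain))   # False: tampering is detected
```

Real networks add a consensus mechanism (such as the proof-of-work mining the miners in this article perform) on top of this hash-chaining, which is what makes the ledger tamper-evident without a middleman.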
Indeed, according to research from benchmarking firm McLagan, the world’s
eight biggest investment banks could cut the costs of their IT infrastructure by
up to $12bn a year by replacing their fragmented ecosystem of database
systems with a single blockchain-based digital ledger.
IBM is one of a number of tech giants focused on building out its blockchain
proposition at present, having brought to market its software-as-a-service
(SaaS) IBM Blockchain Platform offering in August 2017, which is geared
towards making the technology accessible to a much wider range of industries.
Oracle also has a cloud-based blockchain offering that forms part of its wider
platform-as-a-service (PaaS) portfolio, while Microsoft is courting developers of
blockchain-based applications to run them on its Azure public cloud platform.
Cloud giant Amazon Web Services (AWS) has also committed investment to
help members of its partner community create blockchain-based services for
the healthcare, life sciences, supply chain management, security and
compliance industries.
If the work the hyperscale cloud provider community is doing around blockchain
starts to take off, the colocation community could benefit indirectly, as the
suppliers may need additional datacentre capacity to keep up with demand.
4D Data Centres is going one step further, and is currently in the throes of
building a Hyperledger Fabric platform of its own, in anticipation of enterprise
demand for blockchain-based services increasing in future.
“We’re testing the water because, longer term, I think that holds more value for
us. The underlying blockchain technology is going to be disruptive, and it could
take five, 10 or 15 years for its full potential to become known,” says Barker.
“It is not going to be an overnight thing, but I think blockchain will have a much
wider impact and presents more of an opportunity to our business than mining
cryptocurrencies in the long term.”
Images: stock.adobe.com
© 2019 TechTarget. No part of this publication may be transmitted or reproduced in any form or by any means without
written permission from the publisher.