Wednesday, 25 March 2020

International Data Centre Day - 25th March 2020

First of all, a massive shout out to all those involved in Data Centres, be it operations, construction or consultancy, who are working so hard to keep the internet running globally; the connected world depends on it.

Not to forget the millions of healthcare professionals globally fighting the coronavirus.

Data Centres process data: the data that runs banks, logistics, academia, governments and almost every aspect of modern life. Use a contactless card to buy groceries? The transaction will route through a data centre. Buy on the web? The web and data centres enable not only the ability to view millions of items but also to pay for them. Conducting research, perhaps on the coronavirus? Data centres house the data and do the number crunching. Doing maths homework? The application will be running in a data centre and may route through multiple smaller data centres to reach your home.

Here at Carbon3IT Ltd we support data centres with a number of services, including ad-hoc training, consultancy, compliance with the EU Code of Conduct for Data Centres (Energy Efficiency) (EUCOC), and ISOs including ISO9001 (Quality), ISO14001 (Environmental), ISO22301 (Business Continuity), ISO27001 (Information Security) and ISO50001 (Energy) management systems.

We are also ESOS Lead Assessors.

We can provide guidance on EN50600. We work in standards development, primarily the EUCOC and EN 50600.

We also conduct data centre audits, for EUCOC, CEEDA (Certified Energy Efficiency Data Centres Award) and for EN 50600 Gap Analysis.

We wish all data centres, their employees, the supply chain and users the very best for International Data Centre Day 2020!

Keep safe, and wash those hands!


Wednesday, 4 March 2020

EU Code of Conduct for Data Centres (Energy Efficiency) v11 Update

The 2020 version of the EU Code of Conduct for Data Centres (Energy Efficiency) was published in January; however, due to an oversight, the reporting form was not updated until earlier today.

Below is a concise list of EVERY change made to the reporting form.

New applicants are advised to download all the guidance documentation before completing the forms (or to seek expert advice from ourselves). Existing participants should use the latest 2020 reporting form (v11), when published, to advise the EU-JRC of progress against all best practices and any action plans, as well as to provide updated energy data.

Given the recent EU announcement that "data centres can and should be climate neutral by 2030", and that measures are being considered, it is highly recommended that all data centres in the EU and UK consider becoming a participant in the EUCOC as soon as possible.

This recommendation applies to all hyperscale, colocation and enterprise sites.

All the documents can be downloaded from this link

Carbon3IT Ltd has been working with the EU-JRC since 2012; we sit on the best practices committee and provide, under contract, the review services for all participants.

However, we also provide pre-EUCOC reviews and can complete the application and reporting forms on your behalf for a set fee (currently £3000+VAT per site; discounts are available for multiple sites), in which case we will inform the EU-JRC that we have provided this service and an alternate reviewer will be engaged.
To date we have completed 40 EUCOC applications on behalf of global clients and all have been accepted on initial application.

The best practices for 2020 have been updated as follows:


3.3.2 Practice Updated
3.2.4 Practice Updated
3.2.5 Practice Updated
3.2.6 Practice Updated
3.2.8 Practice Updated
3.2.12 Practice Updated 
3.2.13 Practice Updated
3.2.14 Practice has become MANDATORY and Note Updated
3.2.15 Practice has become MANDATORY and Note Updated

4.1.1 Practice Updated
4.1.2 Practice Updated
4.1.3 Practice Updated
4.1.4 Practice Updated
4.1.6 Practice Updated
4.1.10 Practice Updated
4.1.11 Practice Updated
4.1.14 Practice Updated
4.1.15 Practice Updated

4.2.6 Practice Updated
4.2.8 *New Practice* 

4.3.2 Practice Updated
4.3.5 Practice Updated
4.3.6 Practice Updated

4.4.4 Practice Updated

5.1.3 Practice has become MANDATORY and Updated

5.2.6 Practice Updated

5.6.1 Practice renumbered (previously 5.4.2.9)

5.7.1 Practice renumbered (previously 5.6.1)
5.7.2 Practice renumbered (previously 5.6.2)
5.7.3 Practice renumbered (previously 5.6.3)
5.7.4 Practice renumbered (previously 5.6.4)

8.3.3 Practice Updated
9.2.2 Practice has become MANDATORY for new build/retrofit and Updated

Please also note that there are 3 new best practices scheduled for publication next year; these are:

5.7.5 Capture Ready Infrastructure - 

Consider installing ‘Capture Ready’ Infrastructure to take advantage of, and distribute, available waste heat during new build and retrofit projects.

This is scheduled to become a mandatory best practice from 2021.

11.2 Network Energy Use - 

When purchasing new cloud services or assessing a cloud strategy, assess the impact on network equipment usage and the potential increase or decrease in energy consumption, with the aim of informing purchasing decisions.

The minimum scope should include the inside of the data centre only.
The ambition is to include overall energy consumption and energy efficiency, including that related to multi-site operation and the network energy use between those sites.
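
As a very rough illustration of the kind of assessment this practice asks for, here is a minimal sketch comparing a workload served on-premise with the same workload served from a cloud site, including the network transfer energy. Every figure in it (the kWh values and the energy-per-GB factor) is a hypothetical assumption made up for the example, not a value taken from the Code.

```python
# Illustrative sketch only: compare the energy of a workload served on-premise
# with the same workload served from a cloud site, including network transfer.
# All figures are hypothetical assumptions for the example, not EUCOC values.

KWH_PER_GB_NETWORK = 0.05    # assumed network energy per GB transferred
ON_PREM_IT_KWH = 120.0       # assumed monthly IT energy of the on-premise service
CLOUD_IT_KWH = 90.0          # assumed monthly IT energy of the same service in the cloud
GB_TRANSFERRED = 400.0       # assumed monthly data moved between the site and the cloud

network_kwh = GB_TRANSFERRED * KWH_PER_GB_NETWORK
cloud_total_kwh = CLOUD_IT_KWH + network_kwh

print(f"On-premise: {ON_PREM_IT_KWH:.0f} kWh/month")
print(f"Cloud including network: {cloud_total_kwh:.0f} kWh/month "
      f"({network_kwh:.0f} kWh of which is network transfer)")
```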

11.3 Smart Grid - 

Continue to evaluate the use of energy storage and usage to support Smart Grid. A Smart Grid is a solution that employs a broad range of information technology resources, allowing for a potential reduction in electricity waste and energy costs.

Wednesday, 26 February 2020

The Phoney War is over....

Back in 2008, the British Computer Society (the Chartered Institute for IT), the Department for Environment, Food and Rural Affairs (Defra) and the EU Joint Research Centre launched the EU Code of Conduct for Data Centres (Energy Efficiency), also known as the EUCOC. It comprises two parts: the first is a series of best practices for data centre operators to use to really optimise their facilities, through a combination of tangible measures (those that you can touch, such as blanking panels, or change, such as temperature settings) and intangible measures (such as changes to policy, processes and procedures covering procurement, service take-on and management). The second is a scheme for those using (participants) or promoting (endorsing) the EUCOC.

To date, there are some 350+ Data Centre "participants" from various organisations, a combination of colocation, hyperscale and enterprise organisations, and 250+ "endorsers" covering consultancies, organisations, trade bodies and suppliers to the industry.
There are annual reporting requirements for both participants and endorsers.
Participants are required to report (by the end of February) energy consumption information, whether any new best practices have been adopted, and progress against an action plan which is formulated on initial application.
Endorsers are required to report any "endorsing" actions taken place annually from the anniversary of their initial application.

From the initial application, data is extracted from the reporting forms and is used as the input into a report from the reviewer (currently ourselves) to the EU-JRC on a regular but not annual basis.
Energy consumption data is tabulated and forms an input into research conducted by the working group, where it is used to provide support to policy makers within the EC and EP.

As we are not party to the participants' annual update data or the endorsers' annual reports, we cannot comment on the frequency or accuracy of any information passed to the EU-JRC.

However, many of our readers will know that we provide pre-EUCOC reviews and in some cases actually complete the reporting forms and participation applications on behalf of clients as a paid-for service, currently £3000+VAT per site, although discounts are available for multiple sites.

Readers will also know that the EUCOC best practices, but not the scheme, have been adopted by the CEN/CENELEC/ETSI standards bodies as a technical report under the EN 50600 series of data centre design, build and operate standards, and are available from national standards bodies as "PD CLC/TR 50600-99-1:2019".

So, why sign up and provide the energy data etc.? Well, it was felt, and in some EU member states actually implemented, that public sector procurement of data centres and data centre services (such as cloud) would require, as a scoring element of the tender process, that the data centre be a participant in the EUCOC. This is certainly the case for the UK G-Cloud and for other EU national governments, although it is not actively policed.

It should also be noted that the scheme is VOLUNTARY!

From our own work with CEEDA (an external, paid-for assessment based upon a subset of the EUCOC and the ISO30134/EN50600 KPIs), we have over the last 8 years or so visited 60 CEEDA sites, compiled around 20 EUCOC reporting forms on behalf of clients, and reviewed some 200 participant application forms on behalf of the EU-JRC. Combined with discussions with colleagues in the industry about energy efficiency and the implementation of the best practices, especially the intangible elements, it has to be said that some, but not all, of our clients are paying lip service to the EUCOC; it could be argued they are merely ticking boxes, with little or no intention of acting upon the action plan that we create.

There are many reasons for this: primarily client SLAs, a lack of skills within the organisation, a lack of funding, or indifference from senior managers. This is understandable when there is no external compulsion such as regulation or the need for an operating licence.

The past 12 years or so can now be considered the "phoney war", as it is clear, in light of the recent announcement from the EU regarding the Green Deal, especially the following statement...

"Yet it is also clear that the ICT sector also needs to undergo its own green transformation. The environmental footprint of the sector is significant, estimated at 5-9% of the world’s total electricity use and more than 2% of all emissions. Data centres and telecommunications will need to become more energy efficient, reuse waste energy, and use more renewable energy sources. They can and should become climate neutral by 2030. 

that the phoney war is over and that we can expect measures from the EU to persuade data centres to actively up their game. They will be required to become more energy efficient (there are a couple of methods for this, but at the core are the EU Code of Conduct for Data Centres (Energy Efficiency) and other EU policies such as EMAS and BEMP), to reuse waste heat and to use more renewable energy sources.

All three of these elements are contained in the CATALYST project, which we are working on on behalf of Green IT Amsterdam; more information is on the website www.project-catalyst.eu

We'll be on the road talking about the project and wider aspects of the potential policy changes at the following events:

Data Centre World London March 11/12th 
Data Centre Forum Oslo March 19th
DCD Energy Smart Stockholm April 27/28th
DDI/UN Copenhagen May 15th

Check out the CATALYST project website for more event info here

We've been working in this area for the past 10 years and have built up a wealth of experience and knowledge in the field, so if you need any practical advice contact us on info@carbon3it.com or send us a message on LinkedIn or Twitter.




Monday, 30 December 2019

Air V Liquid - Part 4 - Ecosystems

Following on from the previous articles (and over a year late!), we're now going to look at the relative costs of providing an ecosystem for IT equipment, essentially the rationale for data centres.

I think it's important to recognise that, back in the past, the delivery of IT systems was a lot different to the way we do it today, but it does have a bearing on data centre ecosystem architectures.
Back in the day, businesses used a central mainframe and dumb terminals. The mainframes were heavy bits of kit, and I can remember some installations where floors were strengthened to take the weight; thus rooms in buildings were specifically set aside for IT equipment, cooling solutions were installed and, Bob's your uncle, you had a computer room.

These were normally over-provisioned to allow for expansion, and I've personally been asked to build a new room that needed to cover the existing kit plus 100% expansion.
Well, that's all very well, but 100% of what exactly? Floor space, power density, network capacity, cooling capacity? Normally everything was doubled up, just to cover ourselves, but it was never going to be enough. Why? Because IT was getting smaller, more equipment was needed, power densities rose and more network was needed. So these rooms soon became not fit for purpose, for a variety of reasons: insufficient cooling capacity, not enough power and, in some cases, not enough space.

So IT managers were in a dilemma: without visibility of future IT needs, it became impossible to provide expansion space without either spending a great deal of capital on future-proofing (with the risk of getting it completely wrong) or failing to meet the business requirements.
I've seen row upon row of racks, all empty, because the business decided to use blade servers, which of course have a higher power density than standard servers, and there wasn't sufficient power available, so power was taken from other racks, rendering them useless. This of course leads to hot spots, because you're concentrating your IT (a blade chassis is about 7.5kW) into an area that was designed for a standard 2kW rack.
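
To put some rough numbers on that, the following minimal sketch uses only the figures quoted above (a ~7.5kW blade chassis dropped into a room designed around ~2kW racks):

```python
# Rough arithmetic using the figures quoted above: a ~7.5 kW blade chassis
# installed in a room whose racks were designed for ~2 kW each.
blade_chassis_kw = 7.5    # approximate blade chassis load
design_rack_kw = 2.0      # design load per rack in the old room

racks_worth = blade_chassis_kw / design_rack_kw     # 3.75 racks' worth of power
extra_racks_robbed = racks_worth - 1                # budgets "borrowed" beyond its own rack

print(f"One blade chassis draws {racks_worth:.2f} racks' worth of design power,")
print(f"so roughly {extra_racks_robbed:.2f} neighbouring racks lose their power budget "
      "and sit empty, and the heat is concentrated into one spot.")
```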

Today, businesses have options other than keeping their IT on premise: they can use colocation facilities or cloud services. They will still need a room on premise to provide networking access to the colocation/cloud services, and they may have some on-site compute (those services that can't go into the cloud for reasons such as latency or data transfer rates).

All we've done, though, is transfer the problem of the ecosystem to someone else: now it's the colocation provider that has to think about capacity in terms of space, power and cooling. The thing is, they are always behind the curve, insofar as they are reactive rather than proactive; they respond to customers' requirements in a building that was designed in the past, with the past's interpretation of power, space and cooling requirements, and that leads to the same problems, i.e. a lack of power, problems with cooling, and the risk of having empty racks.

It's understandable, though: if you are a colocation or hosting provider, you don't have a crystal ball to see into the future, so you have to deal with what you know, or you can take a gamble on what the future looks like.

The future, to them, is very much like the past: if 99% of systems are designed for air cooling, then an air cooling infrastructure is what they will build.

Hence, the market is dominated by air cooled systems, and so we should build for air.

Building for air means a raised floor (perhaps), it means CRAC/Hs, it means pipework, it means chillers or external units, in whatever flavour you desire; you have to provide an infrastructure for what the market needs, and at the present time that is air.


But it doesn't have to be that way...

The data centre of the "future" is very much like the data centre of today, given that we are building them today (as discussed with my friend and colleague Mark Acton). However, what would the data centre of the future look like if we did adopt some of the more outlandish suggestions coming out of academia and some design consultancies, and what if we decided to adopt more liquid cooled options?

In November I attended the DCD London event, where not one but two immersed liquid cooled solutions were on show, both using the single immersion technique. This is where the server is immersed in a bath full of an engineered dielectric (electrically non-conductive) liquid; the heat generated by the servers is carried by the liquid to the top of the bath and transferred via a heat exchanger to an external water circuit, which is connected to an external dry cooler, with the heat vented into the atmosphere. When compared with an air cooled solution, some of the capital plant items, namely the raised floor (baths don't need one) and the CRAC/Hs, are no longer required; as a result the capex and opex costs will be lower.
But we can go one step further and earn revenue, thus potentially reducing our costs even further. How? Simple: the heat rejected by the system is warmer, and is in a medium where it can be captured better than from air, and can thus be directed to provide, or offset, energy use elsewhere, such as hot water or heating locally (within the building), or passed to a low temperature district heating system for use over a wider area. There are some commercial aspects that need to be ironed out with this approach, such as contractual agreements, cost and service levels.
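
As a rough, hypothetical sketch of the potential: almost all of the electrical power drawn by the immersed IT ends up as heat in the fluid, so the recoverable heat tracks the IT load closely. The capture fraction, load and heat tariff below are illustrative assumptions only, not measured or contracted figures.

```python
# Hypothetical sketch: how much waste heat an immersed system might make available
# for reuse, and what offsetting heat elsewhere could be worth.
it_load_kw = 200.0          # assumed average IT load
capture_fraction = 0.9      # assume ~90% of IT power is recoverable via the water circuit
hours_per_year = 8760
heat_value_per_kwh = 0.03   # assumed value of delivered heat in GBP, purely illustrative

recoverable_kw = it_load_kw * capture_fraction
annual_heat_kwh = recoverable_kw * hours_per_year
annual_value_gbp = annual_heat_kwh * heat_value_per_kwh

print(f"Recoverable heat: ~{recoverable_kw:.0f} kW continuous")
print(f"Annual heat available: ~{annual_heat_kwh:,.0f} kWh, "
      f"worth ~GBP {annual_value_gbp:,.0f} at the assumed tariff")
```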

This approach, where waste heat is used to offset energy requirements elsewhere, is a fundamental aspect of Green Data Centres and from our research it appears that liquid immersed systems can contribute, and we're not the only ones thinking this..

The whole concept of data centres as engaged players in the energy transition towards the decarbonisation of society is within the remit of the EU funded Catalyst project http://project-catalyst.eu/

So, in terms of the capital and operational costs of air v liquid, where do we stand?

There are, in effect, three types of cooling for data centres. The first uses a chilled (or cold) water loop: this basically transfers the heat from the air cycle to the liquid in the CRAC unit, which is then pumped to a chiller where the retained heat is dissipated into the atmosphere.
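
For readers who like the numbers, the water flow needed to carry a given heat load falls out of the standard sensible heat relationship Q = m × cp × ΔT; a minimal sketch, with the heat load and loop temperature rise assumed for illustration:

```python
# Standard sensible-heat calculation for a chilled water loop: Q = m_dot * cp * dT.
# The heat load and loop temperature rise are assumed figures for illustration.
heat_load_kw = 300.0    # assumed heat to be removed from the data hall
cp_water = 4.186        # specific heat of water, kJ/(kg.K)
delta_t_k = 6.0         # assumed loop temperature rise, e.g. 12C flow / 18C return

mass_flow_kg_s = heat_load_kw / (cp_water * delta_t_k)   # kW = kJ/s
litres_per_second = mass_flow_kg_s                       # ~1 litre per kg for water

print(f"Required chilled water flow: ~{mass_flow_kg_s:.1f} kg/s (~{litres_per_second:.1f} l/s)")
```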

The second is evaporative cooling; Wikipedia provides a good description of how it works, and this is the text:

"An evaporative cooler (also swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from typical air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling uses the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants.
The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits of mechanical evaporative cooling systems without the complexity of equipment and ductwork."
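
To make the "wet-bulb depression" idea in the quote concrete: a direct evaporative cooler can only approach the wet-bulb temperature, and how close it gets is its effectiveness. A minimal sketch, with all input temperatures and the effectiveness assumed for the example:

```python
# Illustrative direct evaporative cooling estimate based on wet-bulb depression:
# supply ~= dry_bulb - effectiveness * (dry_bulb - wet_bulb).
# All input figures are assumptions for the example.
dry_bulb_c = 32.0       # assumed outdoor dry-bulb temperature
wet_bulb_c = 20.0       # assumed outdoor wet-bulb temperature
effectiveness = 0.85    # assumed saturation effectiveness of the evaporative media

depression_k = dry_bulb_c - wet_bulb_c
supply_c = dry_bulb_c - effectiveness * depression_k

print(f"Wet-bulb depression: {depression_k:.1f} K")
print(f"Approximate supply air temperature: {supply_c:.1f} C")
```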

Some social media and search engine hyperscalers use this type of cooling technology.

The third is emerging liquid technologies, which include "liquid to chip", cold plate and immersive.

Liquid to chip and cold plate, in effect, extend the chilled water loops into the rack and, in the case of liquid to chip, into the server.

Immersed technologies however are a very different kettle of fish.

This is where a server is actually immersed in a dielectric fluid, in either single mode (a direct bath) or dual mode (the server is encased in a blade-type enclosure filled with the dielectric fluid and installed into a chassis with the liquid cooling loops).

The heat is transferred to the fluid, then via a heat exchanger to water, and then to a dry cooler or another mode of use; these are the waste heat reuse scenarios often discussed: heating office areas, residential heating, swimming pools and greenhouses.

An air cooled data centre needs the following:

Raised Floor (not always)
CRAC/H's
Chiller (or dry cooler, other method of rejecting heat)
Power train (HV/LV boards, PDU's)
UPS
Batteries

In an immersed liquid data centre, you reduce some of these elements as follows:

Raised Floor (we don't need to pump air under the floor, but you might still want to run power and network cables under it, although we're seeing a lot of overhead cable routes now, so maybe not!)

CRAC/Hs are not required.
Chillers are not required, although if you don't have an easily available user for your waste heat, you might want to include a dry cooler for summer running.
Power train - most immersed units are already equipped with full 2N power, and only need a standard connection.
UPS would still be required, but as you're only going to need it for power and not cooling, you can downsize it.
Batteries - again, you can reduce the number of batteries needed.

All in all, we think that moving to a fully immersed solution could save around 50% of a standard data centre's build costs. Couple that with reduced operating costs and your data centre is already saving lots of money; add the CATALYST project and you may even begin to make money from selling that waste heat and providing grid services.
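
To show how a figure of that order could be arrived at, here is a purely illustrative back-of-envelope comparison; every cost weighting below is an assumption invented for the example, not a quotation or benchmark.

```python
# Purely illustrative back-of-envelope capex comparison, air cooled v immersed.
# The relative cost weightings are invented for the example; they are not quotations.
air_cooled = {
    "raised_floor": 5, "crac_crah": 20, "chillers": 20,
    "power_train": 25, "ups": 20, "batteries": 10,
}
immersed = {
    "raised_floor": 0,    # not needed for baths
    "crac_crah": 0,       # not required
    "chillers": 5,        # perhaps a dry cooler for summer running
    "power_train": 25,    # still needed
    "ups": 12,            # downsized: supporting power only, not cooling
    "batteries": 6,       # reduced accordingly
}

air_total = sum(air_cooled.values())
immersed_total = sum(immersed.values())
saving = 1 - immersed_total / air_total

print(f"Illustrative build cost: air {air_total}, immersed {immersed_total} "
      f"-> ~{saving:.0%} saving")
```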

We genuinely believe that in the future ALL data centres will use immersed technologies and be integrated with smart grids, and that the CATALYST project will do EXACTLY what it says on the tin!

That's the Air v Liquid skirmish put to bed. We've been a strong supporter of the technology since 2010, when we saw the first immersed demo unit from ICEOTOPE; since then we've been following and writing about this technology in a number of articles, one of which was an update to the original article. I recall Martin from Asperitas telling me that I would need to update it sooner than 2021, and I think he's right, so look out for that update to an update!

Friday, 20 December 2019

Carbon3IT 2019 Update and 2020 Forecast

This year has been AMAZING!

Absolutely bloody amazing. I said, and I quote from last year, "I'm not going to gaze into my crystal ball at this time, except to say that 2019 is going to be a VERY interesting year", and so it proved.
It was AMAZING!
Did I say AMAZING? It was, and at the risk of repeating myself, it was AMAZING!

So, why was it amazing? Well, I'm going to follow our usual format of a month by month commentary so here goes....

So, January 2019 saw us visiting a new client's premises to begin work on a whitepaper on IST; this was published by them in Q2 and will feature in a forthcoming edition of a European DC-related publication, as well as a feature on the CATALYST project, more on that later! We also attended kickstart.eu in Amsterdam again, followed by a quick visit to Brussels to speak at the ICT Footprint event. At the end of the month I went to Boden, Lulea, Sweden to visit the DC that was the topic of a recent post. They also WON an award at the DCD Global Awards, the.....

We also had a few meetings with a client for ISO50001 certification, more on that later as well!


February is usually a fairly quiet month, but we had a few calls about potential future projects, most of them planned to start in the new financial year, and I spoke at the ENTress event in Wolverhampton on climate resilient infrastructure, specifically DCs, citing the example of the City of Lancaster after Storm Desmond in 2015. We, as SFL, also put in a funding round and had loads of meetings to decide approach, content and finances; sadly we didn't get through to the next round, but we did gain very valuable experience.

March saw us visit the DCW event in London, which proved to be a turning point as we met up with Vicki from Green IT Amsterdam for a handover of the CATALYST project; basically, we are now the in-house data centre consultants for Green IT Amsterdam working on the CATALYST project, more info here

April saw us visit Oslo for the Data Centre Forum, where I spoke about the CATALYST project, the second of what turned out to be numerous trips to the Nordic region in 2019.
We also went to Zurich for a GRIG meeting; this is Green IT Global, which consists of a number of organisations based in the UK, the Netherlands, France, Switzerland and Finland promoting the use of sustainable ICT. We also had a couple of meetings with a few clients.

Early May saw our MD getting excited as his team, Charlton, got into a playoff position in League 1, eventually winning against Sunderland at Wembley. Oh, and a visit to Helsinki to speak at the Data Centre Forum on CATALYST and the EUCOC.

We love June; it might be because we get to go to Monaco for the DataCloud Europe event. As usual we were invited as the guest of the EU-JRC, and this year we were partly funded by our friends at Rentaload, if you call going back to a villa high up in the mountains above Monaco and staying in a dorm being funded! It's always a good event for us, and we picked up 3 new clients! A planned visit to the Netherlands was cancelled due to a family illness.

In July, our MD made his first visit to Poland and his 3rd to India (Bangalore) to speak about CATALYST.

August is usually pretty quiet due to the holiday season, and we all went to Amsterdam for a mini break; we also had a meeting with Green IT Amsterdam, and we did a lot of work on the NHSD GP IT F project.

In September, which was probably the busiest month this year, I visited Lincoln to finalise the coolDC design and build CEEDA award, which also saw success at the DCD Global Awards, winning the...

This was closely followed by a visit to Valencia to assess another DC for CEEDA, another GOLD; a visit to another one of their facilities is scheduled for Q1/2 2020.


Mid-month it was off to Manchester for the DCA Data Centre Retransformation event, where we held our 2nd CATALYST project Green Data Centre Stakeholder Group meeting and presented on the CATALYST project itself at the main event.

The Ops Director was also on the NEBOSH Health and Safety course, a pretty intensive health and safety course taken on behalf of an existing client; she followed this up with a visit to their site for a review (required for the assessment part of the course).

I also spoke at Solar and Storage Live about ICT energy and DCs with Tim Chambers and Emma Fryer, from coolDC and techUK respectively.

The week after was my epic euro tour, where I visited 4 countries in 10 days: first to Amsterdam for a Green IT Amsterdam participants meeting, then an epic railway journey to Copenhagen to speak at DCF, then another train to Stockholm to assist with the Green IT Amsterdam study trip for Dutch authorities (the guys responsible for DC planning approvals; you may recall that Amsterdam has a DC construction ban in place at the present time!). Finally, a quick flight to Brussels for the CATALYST project preparation and EU review meeting (we passed!); my last train journey was back to London for a BSI TCT7/3/1 meeting.

We always tend to schedule the EUCOC Annual Best Practices meeting around this time of year, and this year it was scheduled for early October; this was followed by a visit to London for IP Expo.

November is conference season, 3 this year! The month started with the DCD Converged event at London's Old Billingsgate, where our MD had 2 speaking engagements, the first to promote a new modular UPS solution and the 2nd on the first 10 years of the CEEDA programme.
Our ISO50001 client decided to amend the process and concentrate on their ESOS qualification due to some internal issues; this was completed in December (before the due date). We've completed all the policies, processes and procedures, and will review where we go from here in the new year.
Our MD had to leave DCD London early and make his way to Amsterdam to speak at the DC Innovation Day organised by Mercer and Saval, followed by a Green IT Amsterdam team meeting; the following week was all about CATALYST.

The week after, I made my way over to Dublin for Data Centres Ireland, where we had arranged the "heat" track as part of our CATALYST work, followed by the 3rd Green Data Centres Stakeholder Group meeting.

The following week we attended Data Center Forum in Stockholm, where our MD both ran a room, and delivered another presentation on CATALYST.

December is nearly always quite quiet, but I love going to the DCD Awards, and this year was excellent; I was very pleased to see 2 projects we've been involved in get the recognition they deserve (see above).
This was followed by a visit to Birmingham City University to begin preparations for a new Data Centre Module to be included in the Computing and Networking degrees; we are very honoured to be part of this project. It starts in January and we do have an option for a limited number of "industry experts" to join us as guest lecturers on specific subjects; we'll be in touch, but please contact us on the email below if you'd like to join. For people who work in the industry in specific areas and want to get an overall picture, a data centre 101 as it were, again get in touch, but places will be extremely limited.

We then went to Amsterdam to meet with two clients as part of our work with Green IT Amsterdam and picked up 3 new pieces of work!

We also had a call with a potential partner on a new H2020 project, more on that in the new year.



As I stated earlier, we think 2020 is going to be very interesting indeed, and as we have no idea how it's all going to pan out POLITICALLY, we're going to keep our powder dry for the time being.

But, that said, we will continue to offer the following services:

EU Code of Conduct for Data Centres Review and Preparation
CEEDA assessments (with our DCD partners)
ISO Management Systems for ISO9001/14001/22301/27001 and 50001
Data Centre Audits (with our M and E partners)
Data Centre Training (on site, and tailored to your requirements)
Data Centre Support Services - Compliance
Health and Safety Services

Special Services - if you have a problem that needs solving, let us know; through our wide network of consultants, supply chains and operators we've probably come across the problem before and therefore may be able to help.

So far, we already have a number of assignments scheduled for Q1/2/3 and 4, but we'll always find time and space to add some more.


Finally, we'd like to wish all our customers, suppliers and industry colleagues the very best wishes for 2020.

PS Last year we said that our next blog post would be the 4th in our series "Air v Liquid"; this, ahem, was delayed. We hope that the next article, on the cost side of things, will be published early in the new year, honest!

As always, until next time.

If you need to get in touch with us, please use the following:

info@carbon3it.com
www.carbon3it.com
@carbon3it (Twitter/Skype)

Sunday, 24 November 2019

Boden - The Arctic Circle

Last Jan/Feb I spent some time in Boden, Lulea, Sweden to visit the Boden One Data Centre; this is an EU funded project and you can find out all about it here, as well as the RI:SE data centre laboratories, which you can find out about here.

I'm giving a presentation at Data Center Forum in Stockholm later this week (more info here), so I thought it was time I got around to posting this article, which has been 10 months in preparation!

This next section was written upon my return from the coldest place I'd ever been to; it was minus 37, yes, you read that correctly, -37, which is to be expected when you're about 100 yards inside the Arctic Circle!

Boden One DC

Before I write about my impressions of the Boden One DC I have to present some background, essentially, "The ultimate goal for the project Boden Type DC One is to build the prototype of the most energy and cost efficient data centre in the world in Boden, Sweden."

The project seeks to be "Resource efficient throughout its lifecycle" and goes on to state that
"Data centres consist of two distinct elements; IT-equipment and the surrounding service envelope. The innovation of Boden Type DC One is a new, holistic design approach for the latter. The goal is to bring a novel combination of existing techniques and locations to the market. This combination does not exist and has never been tested.

The unique solution offers a sustainable datacentre building which is energy and resource efficient throughout its lifecycle, cheaper to build and operate and brings jobs and knowledge to remote places in large parts of Europe. The cornerstones of the concept Boden Type DC One are:
  • Efficient fresh air cooling.
  • Modular building design.
  • Renewable energy.
  • Favourable location.
The 600kW prototype facility will be a living lab and demonstration site, as it will be tested by providers and end-users in a real operation environment with all aspects of its operations measured. With the prototype, the project stakeholders will be able to:
  • Validate that the datacenter concept meets the energy efficiency, financial reliability, and other targets in real operational environments.
  • Validate and improve the design software tools for modeling and simulating the operation of the facility and cooling equipment.
  • Demonstrate through accurate simulation that the prototype can be replicated in other European sites with less favourable conditions.
The name Boden Type Data Centre One is naturally created for the first Type DC in the location of Boden, in the north of Sweden. Close to the clean and high quality electricity supply by renewable energy source, ideal climate and service infrastructure."

These are laudable aspirations and goals and well in keeping with Carbon3IT's own view on data centres for the future.

However, and this is not a criticism, more an observation, there is no reference to existing data centre design and build standards, nor has any classification in terms of an Uptime Institute Tier or EN50600 Class been applied to the site. In my opinion, and contrary views are welcome, at best this site and its current infrastructure would be classed as a Tier 1/Class 1 facility, but it is none the poorer for being classed as such; it is, after all, a research project.

So, when I visited, this EU funded project was still a work in progress: the building was 98% complete and undergoing some final AC commissioning during my visit, and only 1 of the 3 pods had any active IT equipment installed. The configuration is 3 pods, which will be fitted with an OCP stack (this is the active pod), an HPC cluster and finally a blockchain (cryptocurrency) mining operation.

It does not have a UPS nor a generator, but there is space for a UPS within the power room, and the use of a generator is a moot point when you have a hydropower station and a biogas plant literally metres away; the connection to the biogas plant is still under discussion, and information on the hydropower station can be found here.
It should also be noted that the site is literally metres away from https://www.hydro66.com/

The DC uses 100% free cooling: there is a cold air corridor which feeds ambient air into the eco-cooling equipment; depending on the season, this air is either tempered with hot return air (in winter) to provide a suitable range of inlet temperatures for the IT equipment, or (in summer) passed straight through to the IT equipment, with the hot air being vented out through a vaulted roof and then outside.
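
A minimal sketch of the winter tempering described above: it is a simple mixed-air calculation, working out what fraction of hot return air must be recirculated to lift very cold outdoor air to a target supply temperature. The -37 figure is the outdoor temperature mentioned earlier in this post; the return and target temperatures are assumed for the example.

```python
# Simple mixed-air calculation for winter tempering: what fraction of hot return
# air must be recirculated to lift very cold outdoor air to a target supply
# temperature. Return and target temperatures are assumed for illustration.
outdoor_c = -37.0    # outdoor temperature mentioned earlier in the post
return_c = 35.0      # assumed hot return air temperature
target_c = 18.0      # assumed target IT inlet temperature

# target = f * return + (1 - f) * outdoor  =>  f = (target - outdoor) / (return - outdoor)
recirc_fraction = (target_c - outdoor_c) / (return_c - outdoor_c)

print(f"Recirculated return air fraction: {recirc_fraction:.0%}")
print(f"Fresh air fraction: {1 - recirc_fraction:.0%}")
```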

This next section was written today (24/11/2019), and the site is now fully operational and generating some "sweet" data which you can view here; the current PUE is shown at the top left of the page (the reason I posted the link is that this is live, real time data and I didn't want to "date" this post).

This is very impressive; however, it has to be tempered with the fact that this is an unmanned test facility with some of the latest equipment, heavily optimised by the research team, and, as stated earlier, it would be classed under both the UTI Tier Topology and the EN50600 Classification as a Tier/Class 1 site.

Yes, it is clearly a VERY energy efficient site: the ratio of total energy consumption to IT energy consumption is around the 1.007 PUE mark.
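
For readers unfamiliar with the metric, PUE is simply total facility energy divided by IT equipment energy, so 1.007 means the non-IT overhead is only about 0.7% of the IT load. A minimal sketch, with the kWh figures assumed purely to illustrate the ratio:

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.007 means the facility overhead is only ~0.7% of the IT energy.
# The kWh figures below are assumed purely to illustrate the ratio.
it_energy_kwh = 100_000.0
facility_overhead_kwh = 700.0   # cooling, fans, distribution losses, etc.

total_energy_kwh = it_energy_kwh + facility_overhead_kwh
pue = total_energy_kwh / it_energy_kwh

print(f"PUE = {pue:.3f}")       # -> 1.007
```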

So, some observations: the cold aisle temperatures in Pods 1 and 3 appear to be below the ASHRAE recommended range, at around 13.2 degrees in Pod 1 (the OCP pod) and 13.6 in Pod 3 (the ASICs pod). I'll make a mental note to ask the RI:SE team in Stockholm later this week why this is the case, but it raises an interesting question about supply temperatures.

It is interesting that there is no UPS or generator; could, or would, a commercial DC operator consider this? I doubt it, although the prudent application of the EU Code of Conduct for Data Centres (Energy Efficiency) best practices to an organisation, in relation to the "actual" mission critical IT functions via BP 4.3.x, may yield an interesting view.

At best, the Boden One is a "hyper-edge" facility and could be deployed to link "edge" sites in a hub/spoke configuration.

My conclusion is that research has to be done, and the mission parameters are laudable:

Efficient fresh air cooling. Modular building design. Renewable energy. Favourable location.

However, DCs cannot always take advantage of fresh air cooling, on-site or local renewable energy or favourable locations, and I'm sure that many in the community will be dismissive of the project and its results, stating that it's not real world; that is missing the point.

If we strip some of the historical rhetoric out of DC design and educate users, we could build DCs using this research, and build them cheaply, using less energy and saving time in deploying compute to remote locations. If we have to give up some uptime and resilience, is this necessarily a bad thing?

I'll be available at the Data Center Forum on the 28th November in Stockholm to defend my corner, I'll look forward to some robust discussions.

7th December 2019 - Update
FANTASTIC News from the recent DCD Global Awards held in London on the 5th December, Boden won an award the... 



Well Done!


Energy Efficiency - The 5th Fuel

I saw this BBC article ages ago and wrote a piece on it but forgot to post it; however, I thought it was worthy of further discussion as it does have some interesting insights. I decided to post it in full below; our comments are in italics.

"Installing a single low-energy LED bulb may make a trivial contribution to cutting the carbon emissions that are overheating the planet.
But if millions choose LEDs, then with a twist of the collective wrist, their efforts will make a small but significant dent in the UK's energy demand.
Studies show making products more efficient has - along with other factors - already been slightly more effective than renewable energy in cutting CO2 emissions.
The difference is that glamorous renewables grab the headlines.
The "Cinderella" field of energy efficiency, however, is often ignored or derided. 

Yes, unfortunately, I've been to many data centres where the basics of energy efficiency are often ignored or derided; hopefully this article may save me some time in the future!

Who says this?

The new analysis of government figures comes from the environmental analysis website Carbon Brief.
Its author says EU product standards on light bulbs, fridges, vacuum cleaners and other appliances have played a substantial part in reducing energy demand.
Provisional calculations show that electricity generation in the UK peaked around 2005. But generation per person is now back down to the level of 1984 (around 5 megawatt hours per capita).

Wow!

How much carbon has been reduced?

It’s widely known that the great switch from coal power to renewables has helped the UK meet ambitions to cut carbon emissions.
The report says the use of renewables reduced fossil fuel energy by the equivalent of 95 terawatt hours (TWh) between 2005 and now. And last year renewables supplied a record 33% share of UK electricity generation.
But in the meantime, humble energy efficiency has contributed to cutting energy demand by 103 TWh. In other words, in the carbon-cutting contest, efficiency has won – so far. And what’s more, efficiency is uncontroversial, unlike wind and solar.

Yes, efficiency is avoided cost; it also means that new plant doesn't need to be built.

What role has industry played?

The energy efficiency story doesn’t just apply to households. There have been major strides amongst firms, too. Big supermarkets have worked hard to improve the performance of their lighting and refrigeration.
And because firms and individuals are using less energy, that has offset the rise in energy prices. So whilst the prices have gone up, often bills have gone down.
The issue is complicated, though. Other factors have to be taken into account, such as energy imported via cables from mainland Europe, population growth and shifts from old energy-intensive industries.

UK data centres represent a small amount of the total overall energy consumption, or so they would have us believe; here at Carbon3IT we think the actual amount that can be attributed to the sector is somewhat higher. That said, we include all IT energy, and we think it's around 12% of total consumption. Let us know if you think differently, and we can discuss.

Should 'Cinderella' efficiency be allowed to shine?

Simon Evans from Carbon Brief told BBC News: “Although the picture is complex it’s clear that energy efficiency has played a huge role in help the UK to decarbonise – and I don’t think it’s got the recognition it should have.
"Say you change from a B or C-rated fridge to an A++ rated fridge. That can halve your energy use from the appliance, so it’s pretty significant.”
The UK government has consistently said it champions energy efficiency, but campaigners say it could do more. The UN’s climate body also supports energy efficiency as a major policy objective, although the issue features little in media coverage.
But supporters of efficiency argue that ratcheting up efficiency standards for everything from planes and cars to computer displays and freezers offers the best-value carbon reductions without the pain of confronting the public with restrictions on their lifestyle choices.
Joanne Wade from the Association for Conservation of Energy told BBC News: “I haven’t seen these figures before but I’m not surprised.
“The huge improvement in energy efficiency tends to be completely ignored. People haven’t noticed it because if efficiency improves, they are still able to have the energy services that they want. I suppose I should reluctantly agree that the fact that no-one notices it is part of its appeal.”
Scientists will be keen to point out that government-imposed energy efficiency is just one of a host of cures needed to tackle the multi-faceted problem of an overheating planet.

We totally agree! The use of the EUCOC can reduce energy consumption in data centres by between 25% and 50%, and possibly more if you adopt some of the latest technologies and optional best practices.

If you want any assistance in reducing your ICT energy consumption, let us know by emailing info@carbon3it.com or by following and then messaging us on our Twitter feed @carbon3it.