Sunday, 24 November 2019

Boden - The Arctic Circle

Last Jan/Feb I spent some time in Boden, Luleå, Sweden, to visit the Boden One Data Centre. This is an EU-funded project and you can find out all about it here, as well as the RI:SE data centre laboratories, which you can find out about here.

I'm giving a presentation at Data Center Forum in Stockholm later this week (more info here), so I thought it was time I got around to posting this article, which has been 10 months in preparation!

This next section was written upon my return from the coldest place I'd ever been to. It was minus 37, yes, you read that correctly, -37, which is to be expected when you're about 100yds inside the Arctic Circle!

Boden One DC

Before I write about my impressions of the Boden One DC I have to present some background. Essentially, "The ultimate goal for the project Boden Type DC One is to build the prototype of the most energy and cost efficient data centre in the world in Boden, Sweden."

The project seeks to be "Resource efficient throughout its lifecycle" and goes on to state that
"Data centres consist of two distinct elements; IT-equipment and the surrounding service envelope. The innovation of Boden Type DC One is a new, holistic design approach for the latter. The goal is to bring a novel combination of existing techniques and locations to the market. This combination does not exist and has never been tested.

The unique solution offers a sustainable datacentre building which is energy and resource efficient throughout its lifecycle, cheaper to build and operate and brings jobs and knowledge to remote places in large parts of Europe. The cornerstones of the concept Boden Type DC One are:
  • Efficient fresh air cooling.
  • Modular building design.
  • Renewable energy.
  • Favourable location.
The 600kW prototype facility will be a living lab and demonstration site, as it will be tested by providers and end-users in a real operation environment with all aspects of its operations measured. With the prototype, the project stakeholders will be able to:
  • Validate that the datacenter concept meets the energy efficiency, financial reliability, and other targets in real operational environments.
  • Validate and improve the design software tools for modeling and simulating the operation of the facility and cooling equipment.
  • Demonstrate through accurate simulation that the prototype can be replicated in other European sites with less favourable conditions.
The name Boden Type Data Centre One is naturally created for the first Type DC in the location of Boden, in the north of Sweden. Close to the clean and high quality electricity supply by renewable energy source, ideal climate and service infrastructure."

These are laudable aspirations and goals and well in keeping with Carbon3IT's own view on data centres for the future.

However, and this is not a criticism, more of an observation, there is no reference to existing data centre design and build standards, nor has any classification in terms of an Uptime Institute Tier or EN50600 Class been applied to the site. In my opinion, and contrary views are welcome, at best this site and its current infrastructure would be classed as a Tier 1/Class 1 facility, but it is none the poorer for being classed as such; it is, after all, a research project.

So, when I visited, this EU-funded project was still a work in progress: the building was 98% complete and undergoing some final AC commissioning, and only one of the three pods had any active IT equipment installed. The configuration is three pods, which will be fitted with an OCP stack (this is the active pod), an HPC cluster, and finally a blockchain (cryptocurrency) mining operation.

It does not have a UPS nor a generator, but there is space for a UPS within the power room, and the use of a generator is a moot point when you have a hydropower station and a biogas plant literally metres away. The connection to the biogas plant is still under discussion, and information on the hydropower station can be found here.
It should also be noted that the site is literally metres away from https://www.hydro66.com/

The DC uses 100% free cooling. There is a cold air corridor which feeds ambient air into the eco-cooling equipment; this air is then (depending on season) either tempered with hot return air in winter to provide a suitable range of inlet temperatures for the IT equipment, or in summer passed straight through to the IT equipment, with the hot air being vented out to a vaulted roof and then outside.
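To illustrate the tempering principle, here is a minimal sketch of the mixing arithmetic (illustrative figures only, and a simple sensible-heat balance; this is not the site's actual control logic):

```python
def recirculation_fraction(t_outdoor, t_return, t_supply_target):
    """Fraction of hot return air to blend back into the incoming ambient air
    to hit a target supply temperature (simple sensible-heat mixing,
    constant specific heat assumed)."""
    if t_return <= t_outdoor:
        return 0.0  # outdoor air is already warm enough, no tempering needed
    frac = (t_supply_target - t_outdoor) / (t_return - t_outdoor)
    return min(max(frac, 0.0), 1.0)

# Example: -37 C outside, 30 C return air, 18 C target supply
print(recirculation_fraction(-37.0, 30.0, 18.0))  # ~0.82, i.e. ~82% recirculated air
```

In other words, on a -37 day the "cooling" problem is really a mixing problem: most of the supply air is recycled heat from the IT equipment.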

This next section was written today (24/11/2019). The site is now fully operational and generating some "sweet" data, which you can view here; the current PUE is on the top left of the page. (The reason I posted the link is that this is live, real-time data and I didn't want to "date" this post.)

This is very impressive, however it has to be tempered with the fact that this is an unmanned test facility with some of the latest equipment, heavily optimised by the research team, and as stated earlier it would be classed under both the Uptime Institute Tier Topology and the EN50600 Classification as a Tier/Class 1 site.

Yes, it is clearly a VERY energy efficient site: the ratio of total energy consumption to IT energy consumption is around the 1.007 PUE mark.
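For anyone unfamiliar with the metric, PUE is simply total facility energy divided by IT energy. A quick sketch with illustrative numbers (not the live Boden feed):

```python
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return total_facility_kwh / it_kwh

# Illustrative only: 100.7 kWh total against 100 kWh of IT load gives a PUE of 1.007,
# i.e. only 0.7% overhead for fans, controls, lighting and power distribution losses.
print(pue(100.7, 100.0))  # 1.007
```

Industry surveys typically report average PUEs somewhere in the 1.5-2.0 region for conventional facilities, which puts 1.007 in perspective.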

So, some observations: the cold aisle temperatures in Pods 1 & 3 appear to be below the ASHRAE recommended range, at around 13.2 degrees in Pod 1, the OCP pod, and 13.6 in Pod 3, the ASICs pod. I'll make a mental note to ask the RI:SE team in Stockholm later this week why this is the case, but it raises an interesting question about supply temperatures.

It is interesting that there is no UPS or generator; could, or would, a commercial DC operator consider this? I doubt it, although the prudent application of the EU Code of Conduct for Data Centres (Energy Efficiency) best practices to an organisation's "actual" mission critical IT functions, via BP 4.3.x, may yield an interesting view.

At best, the Boden One is a "hyper-edge" facility and could be deployed to link "edge" sites in a hub/spoke configuration.

My conclusion is that research has to be done, and the mission parameters are laudable:

  • Efficient fresh air cooling.
  • Modular building design.
  • Renewable energy.
  • Favourable location.

However, DCs cannot always take advantage of fresh air cooling, on-site or local renewable energy or favourable locations, and I'm sure that many in the community will be dismissive of the project and its results, stating that it's not real world. That is missing the point.

If we strip some of the historical rhetoric out of DC design and educate users, we could use this research to build DCs cheaply, using less energy, and save time deploying compute to remote locations. If we have to give up some uptime and resilience, is this necessarily a bad thing?

I'll be available at the Data Center Forum on the 28th November in Stockholm to defend my corner, I'll look forward to some robust discussions.

7th December 2019 - Update
FANTASTIC news from the recent DCD Global Awards held in London on the 5th December: Boden won an award, the...

Well Done!
Energy Efficiency - The 5th Fuel

Saw this BBC article ages ago and wrote a response but forgot to post it. However, I thought it was worthy of further discussion as it does have some interesting insights, so I've decided to post it in full below. Our comments are in italics.

"Installing a single low-energy LED bulb may make a trivial contribution to cutting the carbon emissions that are overheating the planet.
But if millions choose LEDs, then with a twist of the collective wrist, their efforts will make a small but significant dent in the UK's energy demand.
Studies show making products more efficient has - along with other factors - already been slightly more effective than renewable energy in cutting CO2 emissions.
The difference is that glamorous renewables grab the headlines.
The "Cinderella" field of energy efficiency, however, is often ignored or derided. 

Yes, unfortunately, I've been to many data centres where the basics of energy efficiency are often ignored or derided; hopefully this article will save me some time in the future!

Who says this?

The new analysis of government figures comes from the environmental analysis website Carbon Brief.
Its author says EU product standards on light bulbs, fridges, vacuum cleaners and other appliances have played a substantial part in reducing energy demand.
Provisional calculations show that electricity generation in the UK peaked around 2005. But generation per person is now back down to the level of 1984 (around 5 megawatt hours per capita).

Wow!

How much carbon has been reduced?

It’s widely known that the great switch from coal power to renewables has helped the UK meet ambitions to cut carbon emissions.
The report says the use of renewables reduced fossil fuel energy by the equivalent of 95 terawatt hours (TWh) between 2005 and now. And last year renewables supplied a record 33% share of UK electricity generation.
But in the meantime, humble energy efficiency has contributed to cutting energy demand by 103 TWh. In other words, in the carbon-cutting contest, efficiency has won – so far. And what’s more, efficiency is uncontroversial, unlike wind and solar.

Yes, efficiency is avoided cost; it also means that new plant doesn't need to be built.

What role has industry played?

The energy efficiency story doesn’t just apply to households. There have been major strides amongst firms, too. Big supermarkets have worked hard to improve the performance of their lighting and refrigeration.
And because firms and individuals are using less energy, that has offset the rise in energy prices. So whilst the prices have gone up, often bills have gone down.
The issue is complicated, though. Other factors have to be taken into account, such as energy imported via cables from mainland Europe, population growth and shifts from old energy-intensive industries.

UK data centres represent a small amount of the total overall energy consumption, or so they would have us believe. Here at Carbon3IT we think the actual amount that can be attributed to the sector is somewhat higher; that said, we include all IT energy and think it's around 12% of total consumption. Let us know if you think differently, and we can discuss.

Should 'Cinderella' efficiency be allowed to shine?

Simon Evans from Carbon Brief told BBC News: “Although the picture is complex it’s clear that energy efficiency has played a huge role in helping the UK to decarbonise – and I don’t think it’s got the recognition it should have.
"Say you change from a B or C-rated fridge to an A++ rated fridge. That can halve your energy use from the appliance, so it’s pretty significant.”
The UK government has consistently said it champions energy efficiency, but campaigners say it could do more. The UN’s climate body also supports energy efficiency as a major policy objective, although the issue features little in media coverage.
But supporters of efficiency argue that ratcheting up efficiency standards for everything from planes and cars to computer displays and freezers offers the best-value carbon reductions without the pain of confronting the public with restrictions on their lifestyle choices.
Joanne Wade from the Association for Conservation of Energy told BBC News: “I haven’t seen these figures before but I’m not surprised.
“The huge improvement in energy efficiency tends to be completely ignored. People haven’t noticed it because if efficiency improves, they are still able to have the energy services that they want. I suppose I should reluctantly agree that the fact that no-one notices it is part of its appeal.”
Scientists will be keen to point out that government-imposed energy efficiency is just one of a host of cures needed to tackle the multi-faceted problem of an overheating planet.

We totally agree! The use of the EUCOC can reduce energy consumption in data centres by between 25% and 50%, and possibly more if you adopt some of the latest technologies and optional best practices.
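As a rough back-of-the-envelope (purely illustrative figures, not a client case), here is what that range can mean in practice:

```python
def annual_saving(baseline_kwh, reduction_pct, tariff_per_kwh):
    """Annual energy and cost saving for a given percentage reduction."""
    saved_kwh = baseline_kwh * reduction_pct / 100.0
    return saved_kwh, saved_kwh * tariff_per_kwh

# Illustrative: a facility drawing 500 kW year-round (~4,380,000 kWh),
# a 25% reduction and an assumed tariff of 0.12 GBP/kWh
kwh, cost = annual_saving(500 * 8760, 25, 0.12)
print(f"{kwh:,.0f} kWh and GBP {cost:,.0f} saved per year")  # 1,095,000 kWh, GBP 131,400
```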

If you want any assistance in reducing your ICT energy consumption, let us know by emailing "info@carbon3it.com" or by following and then messaging us on our twitter feed @carbon3it 

Sunday, 20 October 2019

Wow, so long since our last post!

Well, they say that the devil makes work for idle hands, so you can assume that we've been very busy. Seeing as it's over 9 months since we last posted, we need to give you an update!

In our last post we spoke a little bit about Brexit. Well, as expected, it's turned out to be a little more complex than most people thought; the current situation is that the UK is being run by the very people that promoted it, and even they can't find a way through. We reserve further comment until the situation becomes a little clearer.

So, what have we been up to?

It makes sense to list our current projects (where we are not bound by NDAs!)

FT NHSD GP IT Futures Project

Towards the end of last year we were asked to review the list of data centre standards applicable to NHS hosting for various projects. We consolidated this list down from over 500 individual points to the use of international standards, specifically the EN50600 series of data centre design, build and operate standards developed by industry professionals over the last 7 years, and a few others.
As a result of this successful project we were asked to assist in the Market Engagement and Assurance element of a new NHSD tender for GP IT Hosting Services; this work continues.

CATALYST Project


The EU Funded Catalyst Project www.project-catalyst.eu aspires to turn data centres into flexible multi-energy hubs, which can sustain investments in renewable energy sources and energy efficiency. Leveraging on results of past projects, CATALYST will adapt, scale up, deploy and validate an innovative technological and business framework that enables data centres to offer a range of mutualized energy flexibility services to both electricity and heat grids, while simultaneously increasing their own resiliency to energy supply.
We have been retained as in-house consultants by www.greenitamsterdam.nl to assist Green IT Amsterdam to deliver their project elements, which are: to play an active role in defining innovative business models, to design the impact creation strategy, and to lead the establishment of the Green Data Centres Stakeholder Group so as to share information on environmentally sustainable data centres and foster knowledge and experience exchange. In addition, we will be responsible for assessing the CATALYST framework and introducing the CATALYST Green DC Assessment toolkit, which will offer support services to data centre operators and developers, certification bodies and other stakeholders to identify, measure and assess relevant KPIs and metrics in a modular, structured and organized way.
This project continues and is scheduled to end in September 2020.
As part of this project, our MD has been travelling widely across the EU and beyond, and expects to do a lot of travel over the next 11 months. So far this year he has been to Oslo, Amsterdam, Helsinki, Poznan, Copenhagen, Stockholm, Brussels and Bangalore, India on behalf of the project, and has trips scheduled to Milan, Dublin and Stockholm for the rest of 2019, with occasional trips to Amsterdam for project updates. Next year's calendar has yet to be scheduled, but keep an eye on the @catalyst-dc and @carbon3it twitter feeds for news. We'd like to invite all followers to our next CATALYST Green Data Centres Stakeholder Group event taking place in Dublin, Ireland on the 20th November; full details and registration can be found on the http://project-catalyst.eu/events-and-news/ page, where other information on the project can also be found.

CEEDA

It's been a relatively quiet year for CEEDA, with only 2 assessments taking place: the coolDC Lincoln Discovery Centre, which was the "operate" element of a CEEDA design and operate assessment, and one in Spain (which is covered by an NDA). Both are in the final stages of reporting and all will be revealed at the DCD London event in early November; you can register for the event here https://www.datacenterdynamics.com/conferences/london/2019/
We'll have 2 speaking slots at this year's conference, one yet to be finalised and one on "10 Years of CEEDA: Recommendations and Insights from the Field", so register to attend!
We have 3 re-assessments taking place in 2020, some held over from 2019 due to operational issues; these are in Germany, Gibraltar, and Scotland, plus a new client in Saudi Arabia. We understand from DCD that there will be a greater push on the CEEDA product moving forward, largely due to the climate emergency and the dual aspect of ICT globally: the first to mitigate the impact of ICT on the environment itself, and the second to use ICT systems to monitor, predict and mitigate other aspects of the emergency. We are working very closely with DCD on this topic.
We've also been invited to judge this year's DCD Awards and we are currently looking through the submissions for our particular category; you'll have to wait until the 5th December to hear the final award winners!

Training
We are delivering some ad-hoc training on "energy and cost management in data centres", an old course no longer carried by the original supplier; our client has specifically asked us to deliver it, based on our knowledge and connections in the industry, but also because we'll be visiting their DC and pointing out where potential energy savings can be made. We'll report back once we've delivered the training (late Oct).

ISOs
We provide assistance in the development of ISO management systems, specifically in the data centre field, namely ISO9001, 14001, 22301, 27001, 45001 & 50001.
Our Operations Director has recently completed the NEBOSH General Certificate in Health and Safety and is awaiting the results.
We have been providing assistance to a local DC for ISO50001/ESOS.

Standards
We still continue our work with the global standards bodies, and have recently attended the BSI TCT7/3 group, been seconded to the TCT7/3/1 group working on a revamped EN50600-4-X standard and presented at the JCG - GDC TC215 committee on the CATALYST project (as part of our work!) 

Speaking Engagements
We've spoken at a number of events this year on behalf of the CATALYST project and the EUCOC, mostly at https://datacenter-forum.com/ events in Oslo, Helsinki, and Copenhagen, and we are also scheduled for Stockholm in late November.
You'll also see us at DCD London, DC Ireland and potentially in Copenhagen and Berlin, but we're in the early stages and nothing is confirmed yet.


So, in summary, we said in our last post that we felt this year was going to be special, and so it has proved.

We still continue to provide the following services and one of our major tasks over the coming months is to undertake a serious revamp of our website, so look out for that!


EU Code of Conduct for Data Centres Review and Preparation
CEEDA assessments (with our DCD partners)
ISO Management Systems for ISO9001/14001/22301/27001/45001 and 50001
Data Centre Audits (with our M&E partners)
Data Centre Training (on site, and tailored to your requirements)
Data Centre Support Services - Compliance

Special Services: if you have a problem that needs solving, let us know; through our wide network of consultants, supply chains and operators we've probably come across the problem before and therefore may be able to help.


Until next time (and we promise it won't be 10 months!!)

Friday, 28 December 2018

Carbon3IT Ltd 2018 Update

I'll start by stating that this has been a very diverse year in terms of sales. We appear to have broken new ground in various ways, our income streams have essentially quadrupled, and financially this year has been marginally better than last year. We have a number of projects already booked for 2019, but these are essentially restricted to Q1, and we (and it seems the Government and 100% of UK inhabitants) have no real clue as to what will happen after the 29th March 2019 (Brexit Day).

At the time of writing this article, Parliament is in recess for Christmas and no real progress has been made at all. The EU Withdrawal Agreement that T May has come back with has no prospect of making it through the House, so all the talk is "my deal, no deal or no Brexit". We'd prefer "no Brexit", but there are some who wish to jump out of the UK aeroplane without a parachute; thank you, but we'd rather not.
I'm not going to gaze into my crystal ball at this time, except to say that 2019 is going to be a VERY interesting year.

So, January 2018 saw us preparing for our final EURECA project sessions, attending kickstart.eu and presenting at the BroadGroup DataCloud event on the topic of energy and energy efficiency; as you'd expect, the EUCOC featured heavily, as did some of the UK energy legislation, standards and thoughts for the future. We were quite busy at this time preparing some BRIGHTTALK seminars to support our final EURECA sessions. We also had a conversation with some chaps at the Building Research Establishment (BRE), but more on that later.

February saw the final EURECA session and it was a bumper event: we had a day at the UEL for around 75 delegates, then attended the Data Centre Summit where we had a number of presentations scheduled, finally rounded off with a series of webinars and an event at the Houses of Parliament. We also managed to fit in a number of client meetings, the first to discuss the use of our services for a large global data centre supply chain supplier, the second to discuss a project in North Africa, and the third to review some EUCOC updates for a large global telco/colo provider. We also found the time to visit the HoP for a Policy Connect launch meeting on sustainable ICT including data centres, for which we and EURECA had provided some content.

March was an interesting month. We met up with our good friends at Ekkosense to discuss items of mutual interest (air flow in data centres, mainly), and we attended the Hydrogen and Fuel Cell show at the NEC, mainly to ascertain whether the technology could be used for data centres (yes, it can, but it requires a major rethink and redesign of the data centre!). The month was rounded off by our annual visit to Data Centre World; my main memory of this is that it was very, very cold! We also went to Amsterdam for an ICTFootprint event where we spoke on the topics of Green IT and Sustainable Data Centres, and we undertook a data centre audit for a Midlands university.

April was a fairly quiet month due to Easter but we did have some EUCOC work and some work via our Dutch partners Green IT Amsterdam.

Early May was the Data Centres North event, and we managed to fit in 2 trips to the Netherlands (the 3rd and 4th this year): the first to provide some basic data centre foundation training to another Dutch partner, Asperitas, and the second to meet up with a Green IT Amsterdam client to develop their Green Data Centre Campus Sustainability Framework. We also attended the Data Centre Solutions Awards and a couple of webinars on the emerging Data Centre EMAS criteria, where we provided some content and discussion.

We love June; it might be because we get to go to Monaco for the DataCloud Europe event. As usual we were invited as the guest of the EU-JRC and partly funded by our friends at the DCA. We love it because the weather improves and it tends to get hot; this year was no exception, it got really hot and stayed that way for a long time. We also presented to a selected audience for ABB on the topic of "The EUCOC and other animals" (I may have been watching too much of "The Durrells" in the couple of weeks spent building the presentation), and we attended the BICSI and MIXingIT events as well as England's first WC game (thank you Keysource!)

In July, we made our way to Manchester for the Data Centre Transformation conference (and another England WC game), attended the datacentre.me networking event and the launch of Interxion's London 3 facility at the London Assembly/Mayor's Office. That was a very good night; the view is magnificent, with the Tower of London, Tower Bridge and all the City laid out in front of you, and the weather was so good that you could see for miles! The next week saw me in Brussels to visit a new client to prepare 3 EUCOC applications. Later that week there was also a family event, a christening that coincided with the school holidays, so we went up to Manchester, stayed the night, and met up with 2 clients the following day. We also managed a trip to Cambridge to visit another potential partner/client and the girls enjoyed the city (again, it was very hot!)

August is always very quiet, but not this year! We had a meeting with a landlord for a very interesting data centre site very close to us, in fact the closest, and I've been dying to get in there and mooch around; it appeared that my wish was about to come true! We also visited a client in London at their Class 2 facility (sadly no business from them!). We rounded the month off by providing some training to BRE personnel (the result of that phone call in January and a meeting with them at DCW in March, which shows how long our sales cycle can be!), topped off by a visit to a data centre (we went to Interxion's London site). I've always felt that data centre training is too one-sided, but that's not a topic for discussion now; rest assured that plans are afoot!

In September I was back in Amsterdam for a brief visit to attend the launch of the Catalyst project with our partner Green IT Amsterdam; this project is seeking to promote the reuse of data centre waste heat for other purposes, such as district heat systems and the like. It's going to be a very interesting project and we are assisting them. We then flew to Helsinki for a CEEDA assessment; this was a "Design Operate CEEDA" and we were reviewing the operational elements. I can state that this was an exceptional facility and well worth its Gold accreditation.
We also found the time to attend the DCA golf day where once again we managed to just miss the "worst golfer" prize.
Later in the month we visited Leeds with a client and their client, and then attended the DCA Annual Members meeting coupled with a couple of steering group meetings, namely the "energy efficiency" and the new "liquid alliance" groups.

We always tend to schedule the EUCOC Annual Best Practices meeting around this time of year, and this year it was scheduled for early October. This was followed by a visit to London for IP Expo and then a trip to Bristol to meet up with the Council to discuss their "Energy Leap" project. We also visited the data centre closest to me to conduct the audit on behalf of the landlord; my dream had come true! It's always good to visit data centres and check out the technology and management systems in use, but this was a joy to behold. The site was built when I was 11 years old (I'm 54 now, but feel 24!) and I have to say that for its age it was in exceptional condition; true, it did look a bit tired, but hey, this girl has been working her socks off 24/7/365 for over 40 years. Hopefully she'll get a makeover soon (watch this space!)

November is conference season, but we managed to fit in a CEEDA assessment and an EUCOC preparation meeting with a new client, as well as attend a few non-standard (i.e. non data centre) events. The month started with the DCD Converged event at London's Old Billingsgate, and then I attended the Low Carbon Britain event. We then met up with a client with regard to assisting them with their proposed ISO50001 certification (we got the job, and are currently preparing draft documents for them to review and amend to suit their needs; later we'll check the policies, processes and procedures and review the evidence prior to the certification process, which we'll attend).
Later that week I travelled to Lincoln to visit an old client who has finally managed to get a site and is starting to build a very interesting data centre, more on that project next year!
The week after, I made my way over to Dublin for Data Centres Ireland and then to London for a CBRE briefing on standards (a subject I know a little about!). We spent the rest of the month preparing and writing our report on the data centre audit we'd completed for our client in Warwick. We also managed to get tickets for the Networking Event of the Year, the DataCentre.ME Christmas Party, held on the Tattershall Castle, a boat moored between the Houses of Parliament and Hungerford Bridge, with excellent views of the London Eye and of course the river.

December is nearly always quite quiet, but not this year. We were invited to the Man Utd v Arsenal match at Old Trafford at the Captain's Table; this is corporate hospitality at its finest, with a 3 course meal, drinks, programme, gift and a very good evening's entertainment (not on the football side of things unfortunately; although it was a 2-2 draw, the game wasn't that great!). This was followed by a quick trip to London to attend the DCD Awards. I love going to the DCD Awards, and this year was excellent: the EURECA project WON the Initiative of the Year award, so many thanks to the entire EURECA team for winning this very prestigious award, which has made and will continue to make data centres in the EU energy efficient and sustainable.
The next week I met with a potential new partner who is interested in heat recovery and reuse in the DC sector, this was followed by a small networking event run by the excellent Caroline Hitchens and her DataCentre.Me team. The day after I attended the BSI TCT7/3 standards committee meeting which of course develops the EN50600 series of data centre design, build and operate standards.
We had an update meeting with our ISO50001 client and then a couple of meetings in London to sort out some new work (potentially) for next year.

As I stated earlier, next year is going to be very interesting indeed, and as we have no idea how it's all going to pan out, we're going to keep our powder dry for the time being.

But, that said, we will continue to offer the following services:

EU Code of Conduct for Data Centres Review and Preparation
CEEDA assessments (with our DCD partners)
ISO Management Systems for ISO9001/14001/22301/27001 and 50001
Data Centre Audits (with our M&E partners)
Data Centre Training (on site, and tailored to your requirements)
Data Centre Support Services - Compliance

Special Services: if you have a problem that needs solving, let us know; through our wide network of consultants, supply chains and operators we've probably come across the problem before and therefore may be able to help.


Finally, we'd like to wish all our customers, suppliers and industry colleagues the very best wishes for 2019.

The next blog post will be the 4th in our series "Air v Liquid"; this article will be on the cost side of things and is scheduled to be published early in the new year.

As always, until next time.

If you need to get in touch with us, please use the following:

info@carbon3it.com
www.carbon3it.com
@carbon3it (Twitter/Skype)

Sunday, 2 September 2018

Air v Liquid - Part 3 - Cooling v Heat rejection

Following on from the previous 2 articles, I'm now going to look at cooling v heat rejection in the data centre environment.

I stated with some authority that cooling per se is a misnomer; to date I've not had any comments refuting my assertion, so I'll continue.

Cooling, by the strict technical definition, is "the transfer of thermal energy via thermal radiation, heat conduction or convection".
Heat rejection is a component part of these processes.
So, when we cool something we effectively remove heat, and that's precisely what we do in a data centre.

The question, and the topic of these articles is....

Air v Liquid

If we were to look at the data centre ecosystems in almost every continent on this planet we will find that in 99.99% of cases, the medium for heat transfer is air, good old air.

And the principal reason for this is that most computer equipment, servers, storage and networking equipment, is designed around the use of air as a cooling medium.

Let's look at air.

So, the main concept here is the "air cycle". For the purposes of this article we're going to start the cycle at the exit of the CRAC/H unit, but we could easily start anywhere in the cycle.

Air is pushed by a fan into a floor void at positive pressure. The air escapes into the room through the prudent placement of floor tiles (in front of the rack, please refer to the EUCOC for further guidance!), but could easily escape from a whole host of gaps, holes and other routes (hence why the best practices recommend the stopping of all potential sources of leaks). From the tile, air is forced upwards and, hopefully, into the main inlet of the server; air then passes over the heat-producing components and is exhausted through the rear of the server and moves upwards (hot air tends to rise, recall your physics lessons from school). The air then rises to ceiling level and may, if coerced, find its way back to the top of the CRAC/H unit. What happens inside the CRAC/H unit is a mystery to me!
Nah, it's not, just jesting. The air is passed over a cooling coil and heat is transferred to the coil. Inside the coil is a liquid; this is pumped to the outside unit, and the heat disappears into the ether by a host of different methods: dry coolers, evaporative cooling, cooling towers or a chiller. What is key is that the liquid transfers its heat to the outside and thus becomes cooler; this liquid is then returned to the internal unit, and the air at the bottom of the coil is considerably cooler than the air at the top. Thus the heat generated by the IT equipment is removed and cooler air is supplied.

[NOTE: Some systems will differ in the approach and method of heat rejection but the principle is the same]

The temperatures of the air will differ depending on the desired temperature, but if you recall, average room temperatures from my students are between 18-21℃.

Let's introduce the concept of supply and return temperatures, and the control of them. Supply is the temperature of the air from the bottom of the CRAC/Hs being pushed into the room; return is the temperature as it enters the top of the CRAC/Hs. So when my students speak of a range of 18-21℃, this may be controlled either by a supply temperature of 18-21℃ or by a return temperature of between 30-35℃ which equates to a supply temperature of 18-21℃, the key being something called the delta T, or the difference between the cold air and the hot air.

The optimum delta T is around 15℃, so a 33℃ return will provide an 18℃ supply.

Understanding this key concept is important. Many facilities unfortunately have AC systems that have no user controls, and the control points are factory-set to a standard range. Some facilities operate on return temperatures, and sometimes these are set quite low, which in turn means that the delta T forces the supply temperature lower; in some cases (anecdotally from colleagues) to temperatures where meat could be stored, around 5-8℃, causing a considerable amount of energy to be consumed and potentially causing problems for IT kit at the lower end of the operational range (5℃).
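A minimal sketch of the relationship (assuming a fixed delta T for simplicity; real CRAC/H controls are more sophisticated than this):

```python
def supply_from_return(t_return_setpoint, delta_t=15.0):
    """Supply air temperature implied by a return-air setpoint and a fixed delta T."""
    return t_return_setpoint - delta_t

# A sensible return setpoint keeps the supply in a comfortable range...
print(supply_from_return(33.0))  # 18.0 C supply
# ...but a low return setpoint drags the supply down towards "meat storage" territory
print(supply_from_return(22.0))  # 7.0 C supply
```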

Got it? Good, now to liquid...

There are 4 main liquid cooled solutions in use today, listed as follows:

1. Rear door cooling: this is where the cooling loop for the CRAC/Hs is extended to the rack, where it meets heat exchangers in the door frames; thus the hot air from the servers is cooled immediately before it leaves the rack footprint. Normally the room itself is not cooled.

2. Cold plate: this is where the heat-producing components have copper piping to remove the heat at source; the heat is then treated similarly to rear door cooling and taken away using conventional methods. Again, the room is not cooled.

3. Immersion (1): this is where server motherboards are actually immersed in baths filled with a non-conductive fluid, and there is a heat exchanger situated near the bath to remove the heat; natural convection moves the heated liquid to the heat exchanger. Power and connectivity are provided by a common bar. In this and the following scenarios, the rooms are not cooled.

4. Immersion (2): this is where individual motherboards are encased in a cartridge which contains the non-conductive fluid; these slot into a rack with a cooling loop integrated within it, and valves and other connections provide power and connectivity to the board.

The key thing here is that the liquid in 2, 3 and 4 above is hotter than the air that leaves the rear of a server, at around 50℃, and is therefore potentially more useful: it can be used for other processes, such as heating office areas, or transferred to adjacent buildings for heating swimming pools or greenhouses.
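To give a feel for the heat that becomes available for reuse, here is a hedged sketch; the flow rate and temperatures are illustrative, and water-like coolant properties are assumed:

```python
def recoverable_heat_kw(flow_litres_per_s, t_out, t_in, cp_kj_per_kg_k=4.18):
    """Recoverable heat (kW) from a liquid loop: Q = m_dot * cp * deltaT.
    Assumes roughly water-like density, so 1 litre ~= 1 kg."""
    return flow_litres_per_s * cp_kj_per_kg_k * (t_out - t_in)

# Illustrative: 2 l/s leaving the racks at 50 C and returning at 40 C
print(recoverable_heat_kw(2.0, 50.0, 40.0))  # ~84 kW available for offices, pools or greenhouses
```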


There is a current EU funded project that is looking into heat re-use from data centres (both air and liquid) called Catalyst, more information on this link

In the next article, I'm going to look at the relative costs of both ecosystems, the air cooled ecosystem, the non immersed systems and the immersed ecosystems.

Friday, 1 June 2018

Air v Liquid - Part 2 Data Centre Basics

So, continuing on from the previous post, we're going to look at the basics of a data centre, and for this I'm going to cover all types for all sorts of organisations.

As a user of digital services (as covered in the previous post) I don't really care about how my digital services get to me, as long as they do. But, just because I don't care doesn't mean that someone else doesn't.

Let's go back to basics. In order for me to access a digital service, someone needs to provide 3 things: the first, a network, physical or wireless, that transmits my outbound and any inbound signals to a server somewhere; the second, that destination server or servers to process and deal with my request, be it a financial transaction, a look at my bank balance or the latest news updates; and thirdly, some power so that both the server and the network can run.

At the most basic level, I could use domestic power points in my garage to power my server and connect it to the internet via my wireless connection or broadband. Is that a data centre? Well, technically it isn't, although some organisations back in the early days did precisely that!

The official definition of a data centre (for me) is that contained within the EU Code of Conduct for Data Centres (Energy Efficiency), aka the EUCOC guidance document, which can be downloaded from this link.

It states "For the purposes of the Code of Conduct, the term “data centres” includes all buildings, facilities and rooms which contain enterprise servers, server communication equipment, cooling equipment and power equipment, and provide some form of data service (e.g. large scale mission critical facilities all the way down to small server rooms located in office buildings)

Very clear, insofar as the building, facility or room must contain servers, server communication equipment (network), power equipment and, lastly, cooling equipment, and provide some sort of data service; and those buildings, facilities or rooms can range from large scale mission critical facilities all the way down to small server rooms located in office buildings.

Thus the EUCOC covers all types of communication cupboards, server rooms, machine rooms, mini data centres, medium data centres, telco switch locations, hyperscale data centres etc and those belonging to all types of organisations, in fact any organisation that operates in this increasingly digital world. The only thing I didn't discuss was the cooling element.

Cooling is actually a bit of a misnomer. What we're actually doing when we "cool" is rejecting heat, that is, carrying the heat away from the servers, then cooling it (the air) down and reinjecting that cooled air back into the loop. There is plenty of stuff available online if you really want to know about the airflow cycle BUT...

And, it is a BIG BUT...

What are we actually rejecting the heat from?
 

In essence, servers generate a lot of waste heat, and if we don't manage the air flow we can get thermal problems, but where is this heat coming from?

In a server, there are 2 main heat sources: the first is the processor itself (more on that later), and the second is the power supply unit, the device that converts 220-240V AC power into the low DC voltages found on the motherboard and other components.

Server chipsets (the processors) undergo Computational Fluid Dynamics (CFD) modelling to channel the heat away from the core to the heatsinks; the surface of a chip can reach temperatures in excess of 140°C, and by the time the heat gets to the end of the fins it can be around 50-60°C.

Most servers themselves undergo CFD modelling to determine the air flow across the heat-producing components listed above within the chassis of the server, so the air flow is optimised as it enters through the front of the device, passes over the processor/power supply and is then exhausted through the rear of the unit. The fans in the device actually pull the air through the unit, assisted by any positive pressure provided by the AC units.

Most servers are actually designed to operate quite happily in ambient air, and as such can operate between 5°C and 40°C; thermal monitoring controls the fan speed. It is only when we cluster a great many servers together, for instance in a rack, that heat problems can appear.
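As a hedged illustration of that thermal monitoring (a generic proportional ramp, not any particular vendor's firmware; the thresholds are assumptions):

```python
def fan_duty(inlet_temp_c, t_min=25.0, t_max=40.0, duty_min=0.2, duty_max=1.0):
    """Map an inlet temperature to a fan duty cycle: idle speed below t_min,
    full speed at t_max, and a linear ramp in between."""
    if inlet_temp_c <= t_min:
        return duty_min
    if inlet_temp_c >= t_max:
        return duty_max
    span = (inlet_temp_c - t_min) / (t_max - t_min)
    return duty_min + span * (duty_max - duty_min)

print(fan_duty(22.0))  # 0.2  -> quiet at a normal ambient temperature
print(fan_duty(35.0))  # ~0.73 -> ramping up as the inlet warms
```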

When I teach the EU Code of Conduct for Data Centres (Energy Efficiency) I ask the students 3 questions, as follows:

1. What is the target temperature in your facility (server room etc) ?
2. What is the target humidity range in your facility?
3. Why these numbers specifically?  

The answers (with some exceptions) are:

1. 18-21℃
2. 50%RH +/-5%
3. Er, we don't know!

Well, those specific numbers relate to the use of paper tapes and cards back in the 50s and 60s, and then to old magnetic tapes that needed cool, dry environments in the 80s and 90s. (I could do a whole blog post on this!)

The thing is that these temperature and humidity ranges belong in the 20th century; IT equipment can run at higher temperatures today, so the question is, do we need cooling or simply heat rejection?

Well, we'll cover that in the next blog post. 

Before we do though, we'll just go through the absolute basics for a server room and, by extension, the rest of the data centre ecosystem types, as the only real difference is scale and risk profile/appetite.

Scale is, clearly, the amount of IT equipment you are using. For smaller organisations it may not be too big an IT estate, whilst for the hyperscale search engines or social media platforms it may number tens of thousands, if not millions, of servers, storage units and network switches. This amount of equipment will create a great deal of heat, which needs to be managed.

The risk profile is essentially your own appetite for the risk of the IT going down. If it is absolutely mission critical that your IT systems stay up all the time, or as we say in the sector 24 hours a day, 7 days a week, 365 days a year (24/7/365), then you will require some duplicate systems to deal with any failure, and not just in the IT but in your power, network and cooling solutions as well; this will add cost, complexity and an increased maintenance regime to your calculations. There are certain classifications that can be applied, such as the EN50600 (ISO22237) Classes and the Uptime Institute Tiers, as well as others; I don't want to go into much detail on these at the present time, but will cover them later in this series.

Let's take a quick look at the minimums:

Power: we'll need electricity to power the servers, networking equipment and storage solutions.
The power train is, at its most basic, a standard 13 Amp socket on the wall, but if you have more than one server and you're using a rack it may be prudent to consider other options.

Space: computer racks come in a number of different sizes and configurations; most prevalent today are 800mm x 800mm and approximately 1.8m high for a 42U rack, and you'll need to access the front and rear of the rack with the doors open, so allow at least 1m around the rack for access. You could just use a table, and I've seen that in a number of locations!

Technically, that's it, but it may be prudent to allow the hot air to escape the room, and a standard extract ventilator fan can do the job (bear in mind though that outside air can also come in through this fan unit, so some filtration equipment may be useful).

If you want cooling, then you may want a false floor to allow the cool air to surround your rack and some tiles to direct it to where it needs to be (please refer to EUCOC section 5, cooling, for the myriad of best practices about air flow direction and the containment of that air in the right place). You can get away with not having any air flow management (indeed, many sites did until very recently (circa 2008), and some still do today), but this will come at the risk of hotspots, thermal overload failures and increased energy bills.

You'll also need some sort of cooling unit (in which case you don't really need the extract fan). The most basic of these is a standard domestic DX cooler; this unit provides cold air (it should have controls to specify the exact temperature) and rejects the heat via pipework and an external unit to the outside. Again, there are a multitude of cooling options available on the market today, some even optimised for data centres!
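If you want to size that extract fan or cooling unit, a rough airflow estimate is straightforward. This is a sketch with assumed air properties and an assumed delta T; get a proper M&E design for anything beyond a single rack:

```python
def required_airflow_m3h(it_load_kw, delta_t=12.0, rho=1.2, cp=1.005):
    """Airflow (m3/h) needed to carry away an IT heat load at a given air delta T.
    Sensible heat balance: Q[kW] = rho[kg/m3] * V[m3/s] * cp[kJ/kgK] * deltaT[K]."""
    v_m3s = it_load_kw / (rho * cp * delta_t)
    return v_m3s * 3600

# Illustrative: a single 5 kW rack with a 12 K air temperature rise
print(f"{required_airflow_m3h(5.0):,.0f} m3/h")  # ~1,244 m3/h
```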

In the next post, I'll be looking at the cooling v heat rejection.




Thursday, 31 May 2018

Air v Liquid - Part 1 The direction of travel!

This is going to be the first in a series of articles on liquid cooling in the data centre environment, with a view to sparking some debate.
The use of liquid cooled technologies in the data centre has been akin to the debate about nuclear fusion, insofar as it's always 5-10 years from adoption.
We've been tracking the use of liquid cooled technologies for some time now and we are party to some developments that have been taking place recently, and it is my belief that we are going to see some serious disruption taking place in the sector sooner rather than later.

We'll be looking at a unique way of looking at how to implement liquid cooled solutions in the next few posts, but before that we need to understand the direction of travel outside of the DC Space.

Customers are increasingly using cloud services, and it appears that organisations that are buying physical equipment are only doing so because "that's the way we've always done IT", or because they are using specialist applications that cannot be provided via cloud services, or because they are wedded to some out-of-date procurement process. This is supported by the fact that conventional off-the-shelf server sales are in decline, whilst specialist cloud-enabled servers are on the up but are being purchased by cloud vendors and the hyperscalers (albeit that they are designing and building their own servers, which by and large are not suitable for conventional on-premise deployments). There is also the Open Compute Project, which is mostly being used for pilots by development teams.
 
So, with cloud servers, the customer, in fact any customer, has no say in the type of physical hardware deployed to provide that SaaS, PaaS or IaaS solution, and that is the right approach. IT is increasingly becoming a utility, just like electricity, water and gas. As a customer of these utilities I have no desire to know how my electricity reaches my house; all I want is that when I flick the light switch or turn on the TV, the light comes on and I can watch "Game of Thrones". I certainly do not care which power station generated the electricity, or how many pylons, transformers and cables the "juice" passed through to get to my plug/switch.

For digital services, I access a "service" on my smartphone, tablet, desktop etc. and, via a broadband or wifi connection, access the "internet", then route through various buildings containing network transmission equipment to the physical server(s) that the "service" resides upon (which can be located anywhere on the planet). Everything apart from my own access device is not my asset; I merely "pay" for the use of it. And I don't pay directly: I pay my ISP, or Internet Service Provider, a monthly fee for access and the supplier directly for the service that I am accessing, and somehow they pay everybody else.

So, in essence, digital services are (to me) an app I select on my digital device; information is then sent and received over digital infrastructure, either fixed, such as a broadband connection, or via a wifi network. I have no knowledge of, nor do I care, how the information is sent or received, merely that it is.

What does this have to do with the use of air or liquid in a data centre?

Well, it's actually quite important, but before we cover that, we'll be covering the basics of data centres, and we'll be doing that in the next post.