- EVROPROM
November 19 2025

HVAC System Upgrade at Two Scandinavian Data Centers

Northern Europe is emerging as one of the fastest-growing data centre markets in EMEA thanks to its climate, generation mix and high energy availability. According to industry data, installed capacity in the EMEA data centre segment increased by 21% in the first half of 2025 to over 8,200 MW. In the Nordic countries, growth remains in the range of 12 to 18 per cent annually, driven directly by rising load and the expansion of AI clusters. Against this background, EVROPROM supplied chillers to the Bluefjords AS and Sarek Oy sites.

Market growth is accelerating due to the availability of renewable energy. In Norway, hydro accounts for over 88 per cent of generation; in Iceland, clean electricity reaches 99 per cent; and in Sweden, annual wind and hydro output exceeds 170 TWh. This keeps PUE in the 1.15-1.18 range while forming sustainable campuses for AI systems. AI loads require rapid access to capacity from 5 to 100 MW. The classic FLAP regions are experiencing grid constraints, so demand is shifting to locations where large AI clusters can be connected within 12-24 months.
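The PUE figures quoted throughout this article follow from a single standard ratio. A minimal illustrative sketch (the function name and sample numbers are assumptions, not site data):

```python
# PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness of a data centre."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example: a 10 MW IT load with 1.6 MW of cooling and overhead power
# lands in the Nordic range cited above.
print(round(pue(11_600, 10_000), 2))  # 1.16
```

A campus drawing 11.6 MW in total for a 10 MW IT load sits at PUE 1.16, squarely inside the 1.15-1.18 band the cold Nordic climate makes achievable.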

The investment cycle also confirms the scale of the acceleration. In Sweden, projects worth more than $1.3 billion have been announced, including an AI centre in Strängnäs with over 120 MW of connected capacity. Finland is intensively developing heat supply integration: Helsinki and Espoo have implemented data centre heat recovery schemes in the 15-60 MW range. Data centre electricity consumption in Denmark is projected to reach 6-9 TWh annually by 2030, making the industry one of the key loads on the Nordic power grid.

Iceland is one of the best-suited regions for dense computing thanks to its nearly one hundred per cent renewable generation and a year-round temperature profile of -5 to 10 °C. These conditions support AI clusters with densities of 40-70 kW per rack and thermal loads of over 600 W per server module at a stable PUE below 1.20. Data transmission is provided by submarine cables with over 60 Tbps of bandwidth, planned to grow to 90 Tbps in the coming years, which makes it possible to host clusters with heavy east-west traffic within Europe.

New campuses are being designed for 20-50 MW connections at launch, scaling up to 70-80 MW, including dedicated liquid-cooled and hybrid HVAC configurations. Against this background, the cooling system upgrades delivered by EVROPROM match the pan-European shift to densities above 40 kW per rack, continuous loads above 90 per cent of nominal, and the energy efficiency requirements of next-generation AI infrastructure.

Load growth, the structural resilience of the North, and the risks driving energy retention

Northern Europe forms one of the fastest-growing data centre segments: installed EMEA capacity grew 21% in H1 2025 to exceed 8.2 GW, with the Nordics maintaining a pace of 12-18% annually thanks to a high share of renewable generation and a climate that stays between -5 and 10 °C for most of the year. These conditions keep PUE in the 1.15-1.20 range and support densities of 30-45 kW per rack, up to 70 kW in Iceland, with heat loads in excess of 600 W per module. Against this backdrop, operators are expanding connection capacity to 20-50 MW per campus, with the potential to scale to 70-80 MW for next-generation AI clusters.

The region’s energy base reinforces demand shifting: Norway generates over 88% of its electricity from hydropower, Sweden provides over 170 TWh of annual wind and hydro generation, and Iceland runs on almost 100% renewables. This allows rapid provisioning of 5-100 MW for new AI deployments that require stable temperatures, high computational density and readiness for liquid-cooled HVAC configurations. Data transmission is provided by 60 Tbps submarine cables, with plans to expand beyond 90 Tbps by 2026, essential for clusters with heavy east-west traffic.

Site sustainability is underpinned by long-term renewable PPA contracts, fixing per-MW costs and reducing OPEX over a ten-year horizon. Finland demonstrates mature thermal integration: 15-60 MW of data centre heat is routed into the Hamina and Espoo municipal heating networks, reducing generation needs and turning data centres into an infrastructure asset. After such integration, relocating compute clusters becomes uneconomic: moving 1 MW of AI load can cost $2-4 million or more including engineering, logistics and networking.

Despite these advantages, the region faces interconnection constraints: in Norway, the connection queue for consumers over 10 MW grows year on year, and the system operator records a shortage of available capacity. In Denmark, data centre consumption may reach 6-9 TWh by 2030, creating competition for megawatts and stretching the issuance of technical specifications to 18-36 months. In parallel, water, noise and carbon footprint requirements are tightening, raising demand for hybrid HVAC systems, liquid cooling and heat reuse projects. High-density deployments require the ability to cool 40-70 kW per rack, sustain loads above 90 per cent, and manage the thermal profile precisely while minimising power consumption.

Scandinavian local technology specifics

Sweden 🇸🇪:

– Infrastructure funds allocate over $1 billion per year for new data centre projects; AI clusters reach 50-70 kW/rack;

– Average PUE of new campuses is 1.15-1.20, thanks to the climate and free cooling; heat reuse covers up to 20-35% of district heating demand in several municipalities;

– The renewable generation share exceeds 65%, including ~40 TWh of hydro and over 30 TWh of wind each year; campuses are built for 20-120 MW of interconnection capacity per site.

Finland 🇫🇮:

– Google’s Hamina project utilises heat recovery, returning more than 50 MWth to the grid. Fortum’s urban heat networks can receive 15-30 MWth of heat from data centre sites;

– Finland is one of the world leaders in heat reuse: individual sites cover up to 25-40% of a city’s heating demand. The server-room temperature profile allows a stable PUE of 1.16-1.22;

– Campuses are connected at 10-60 MW, with preparation for expansion to 100 MW. Electricity costs are 40-55 USD/MWh, below the EU average.

Norway 🇳🇴:

– More than 88% of electricity comes from hydro generation, over 140 TWh annually. Connections for new campuses are limited: the queue of 10 MW+ consumers grows by 15-20% annually;

– Selective policy: low-value loads are limited to 0-5 MW or refused; new campuses plan for 30-50 kW/rack with provision for liquid-cooling expansion;

– The average cost of electricity is 25-35 USD/MWh, one of the lowest in Europe. Grid constraints reduce available power in some regions to below 50 MW of free reserve.

Denmark 🇩🇰:

– Capacity planning is intensifying, with new data centres taking 18-36 months to connect. Wind generation exceeds 50 per cent of the mix, providing up to 45-50 TWh of electricity;

– Campuses are most often designed for the 15-40 MW range, with expansion phases up to 70 MW; AI clusters run at 40-60 kW/rack, requiring a precise HVAC profile;

– Data centre consumption is expected to reach 6-9 TWh by 2030, putting pressure on the power grid. Thermal footprint integration is mandatory for new campuses above 10 MW.

Iceland 🇮🇸:

– Campuses are being built at 20-50 MW, with phases scaling up to 70-80 MW. AI cluster density is 40-70 kW/rack, above the EMEA average.

– The -5…10 °C temperature profile keeps PUE at 1.12-1.18. Submarine links provide over 60 Tbps, with planned expansion to 90 Tbps.

– Energy is 99% renewable: hydro and geothermal generation totalling over 20 TWh. Energy costs are 25-40 USD/MWh, among the lowest for infrastructure projects.

Chiller selection for Bluefjords AS

Bluefjords AS is located in Norway’s hydro generation zone, with output of over 140 TWh/year and energy costs of 25-35 USD/MWh. A climate of 0-10 °C holds for 250-280 days/year, which maintains a PUE of 1.15-1.20 at loads of 30-45 kW/rack and heat flows of 500-700 W/module. A CARRIER 30RQ 0522 chiller, rated 465 kW cooling and 560 kW heating with an EER of 2.8-3.1 and COP up to 3.4 on R410A refrigerant, was selected for the retrofit. The unit serves a thermal circuit at the 1.2-1.5 MW level with the ability to redistribute load across system zones.
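The quoted EER band implies a specific electrical input for the CARRIER unit, since EER is simply cooling capacity divided by electrical power drawn. A rough illustrative cross-check (the arithmetic is standard; the script itself is not a vendor calculation):

```python
# EER = cooling capacity (kW) / electrical power input (kW),
# so the input power implied by 465 kW of cooling at EER 2.8-3.1 is:
cooling_kw = 465.0
for eer in (2.8, 3.1):
    input_kw = cooling_kw / eer
    print(f"EER {eer}: ~{input_kw:.0f} kW electrical input")
# Roughly 150-166 kW, consistent with the 240-260 kW of installed
# compressor capacity running at partial load.
```

The implied 150-166 kW draw leaves comfortable headroom below the compressor group’s installed capacity, which matches the 30-90% modulation range described below.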

Two refrigeration circuits each carry 50-60% of the thermal load, giving the site 99.98% SLA-level availability. Eight Danfoss SH300A4ACA compressors, with a total installed capacity of 240-260 kW, operate in the 30-90% range of rated capacity. The shell-and-tube heat exchanger is designed for a temperature head of 5-7 °C and an operating duty of 7,200-7,800 h/year. The copper-aluminium condenser maintains efficiency at average outdoor temperatures of up to 10-15 °C.
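A 99.98% availability figure translates into a concrete annual downtime budget via standard availability arithmetic (this is a generic illustration, not a contractual SLA calculation):

```python
# Convert an availability percentage into the maximum downtime it permits.
HOURS_PER_YEAR = 8760

def max_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year allowed at the given availability."""
    return HOURS_PER_YEAR * 60 * (1 - availability_pct / 100)

print(round(max_downtime_minutes(99.98)))  # 105 minutes per year
```

So the dual-circuit design has to limit total cooling interruptions to under about 105 minutes per year, which is why each circuit is sized to carry the majority of the load on its own.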

The Salmson DIL 206-19/11 hydraulic module delivers 180-210 kPa of head and 70-110 l/min of circulation, preventing localised thermal failures in dense installations. The ventilation unit includes 8 fans with a total airflow of 45,000-65,000 m³/h, 2 of which are driven by frequency converters. This reduces the energy consumption of the air circuit by 12-18% and the total HVAC system load by around 12% at typical data centre utilisation of 85-90%.

The overall parameters (465 kW of cooling, 560 kW of heating, 2 circuits, 8 compressors, 8 fans, an integrated hydronic module and the regional temperature profile) support operation at a thermal profile of 1.0-1.2 MW with the possibility of returning 20-40 MWh of heat to local networks. The configuration suits densities of 50-70 kW/rack and is rated for a service life of 60,000-80,000 operating hours under continuous AI loads.

Chiller selection for Sarek Oy

Sarek Oy is located in northern Sweden, where a -5 to 8 °C climate holds for 200-230 days/year, allowing operation at a PUE of 1.12-1.18 with HPC loads of 10-25 kW/rack and a total thermal capacity of 50-250 kW. Over 70% of the region’s electricity is generated from renewables, reducing the carbon footprint of HPC clusters by 40-55% and minimising operating costs during high-load periods. An AERMEC NRGI382X A M 03 chiller with a cooling capacity of 87 kW at 12/7 °C water and 35 °C condensing conditions is installed at the site, designed for stable operation under HPC modules with irregular thermal spikes of up to 130% of rating and load swings of up to ±20% per hour.

The system uses R32 refrigerant, which reduces the climate impact by 65-70% and improves efficiency at partial HPC loads by 8-12%. One refrigeration circuit is equipped with 2 Copeland compressors; the inverter channel provides 20-100% capacity regulation with modulation accuracy better than 1%, holding the target temperature within a ±0.3-0.5 °C corridor while the computational load of HPC nodes varies between 50-150 kW. This dynamic behaviour is critical for rendering, simulation and ML inference tasks on GPU servers, where the load profile changes in waves and requires tight stability control.
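The 20-100% inverter modulation described above can be pictured as a capacity controller tracking the chilled water setpoint. A minimal proportional-control sketch, with hypothetical gains and names (this is not AERMEC’s actual control logic; the 7 °C setpoint is taken from the 12/7 °C water regime above):

```python
def compressor_capacity_pct(supply_temp_c: float,
                            setpoint_c: float = 7.0,
                            gain_pct_per_k: float = 40.0,
                            min_pct: float = 20.0,
                            max_pct: float = 100.0) -> float:
    """Proportional controller: raise capacity as supply water drifts
    above setpoint, clamped to the 20-100% modulation range."""
    error_k = supply_temp_c - setpoint_c
    demand = 50.0 + gain_pct_per_k * error_k  # assume 50% capacity at setpoint
    return max(min_pct, min(max_pct, demand))

# Within the ±0.3-0.5 °C corridor the controller trims gently;
# a 1.5 K excursion drives it to full capacity.
print(compressor_capacity_pct(7.0))            # 50.0
print(round(compressor_capacity_pct(7.4), 1))  # 66.0
print(compressor_capacity_pct(8.5))            # 100.0
```

A real inverter drive would add integral action and ramp limiting, but the clamped proportional response is enough to show why continuous modulation holds a far tighter corridor than on/off staging.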

The plate heat exchanger is designed for temperature heads of 3-5 °C and provides stable heat dissipation at HPC densities of 5-20 kW/rack, including periods of abrupt thermal power swings typical of parallel computing and distributed ML clusters. The copper-aluminium condenser maintains 85-90% efficiency at outdoor temperatures up to 20 °C, allowing it to handle peak HPC loads without performance degradation. Two frequency-controlled fans generate an airflow of 8,000-12,000 m³/h, keeping cooling system power consumption between 0.8-1.0 kW per kW of cooling at partial HPC loads of 30-70%.

The Lowara CIE370/3V/D hydraulic module generates 140-180 kPa of head and provides 40-70 l/min of circulation to compensate for thermal spikes in HPC racks with a total load of 80-150 kW. The accumulation tank stabilises the hydraulic circuit under computational profile changes of ±10-20%, reduces the number of compressor starts by 25-35% and extends compressor group life to 45,000-60,000 motor hours. This configuration ensures stable operation under the high computational loads typical of intensive server tasks, from parallel calculations and graphics operations to streaming analytics, where the thermal profile changes dynamically.
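The accumulation tank’s effect on compressor cycling follows from a standard sizing relation: the tank must hold enough water for one compressor run to shift the loop temperature by no more than an allowed band. A generic sketch with assumed run time and temperature swing (not the actual Sarek Oy sizing):

```python
# Physical constants for water.
CP_KJ_PER_KG_K = 4.186   # specific heat capacity
RHO_KG_PER_M3 = 1000.0   # density

def buffer_tank_volume_m3(chiller_kw: float,
                          min_run_time_s: float,
                          delta_t_k: float) -> float:
    """Water volume needed so a compressor run of `min_run_time_s`
    shifts the loop temperature by no more than `delta_t_k`."""
    energy_kj = chiller_kw * min_run_time_s
    mass_kg = energy_kj / (CP_KJ_PER_KG_K * delta_t_k)
    return mass_kg / RHO_KG_PER_M3

# 87 kW chiller, 5-minute minimum run, 5 K allowed swing (assumed values):
print(round(buffer_tank_volume_m3(87, 300, 5), 2))  # 1.25 m³
```

Roughly 1.25 m³ of buffered water lets the compressor complete a full minimum run even when the HPC load momentarily drops, which is the mechanism behind the 25-35% reduction in starts quoted above.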

Why do data centres choose EVROPROM?

Both customers, Bluefjords AS and Sarek Oy, chose EVROPROM for its measurement-proven parameters. Before dispatch, each unit is inspected by HVAC engineers: circuit pressure 28-32 bar, vibration 0.3-0.7 mm/s, compressor current 8-24 A, flow rate 40-110 l/min, temperatures of 5-12 °C at the inlet and 7-15 °C at the outlet, and load tests at 30-90% capacity. Electrical performance, thermal profile, automation stability and leak tightness are recorded to an accuracy of ±5 g of refrigerant. All data is summarised in a 10-14 page report that meets operational requirements.
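Acceptance ranges like these lend themselves to a simple automated pass/fail check. A hypothetical sketch (the ranges are taken from the article; the script and parameter names are illustrative, not EVROPROM’s actual tooling):

```python
# Pre-dispatch acceptance ranges described above: (low, high) per parameter.
ACCEPTANCE_RANGES = {
    "circuit_pressure_bar": (28, 32),
    "vibration_mm_s": (0.3, 0.7),
    "compressor_current_a": (8, 24),
    "flow_rate_l_min": (40, 110),
    "inlet_temp_c": (5, 12),
    "outlet_temp_c": (7, 15),
}

def inspect(measurements: dict) -> list:
    """Return the names of parameters missing or outside their range."""
    failures = []
    for name, (lo, hi) in ACCEPTANCE_RANGES.items():
        value = measurements.get(name)
        if value is None or not (lo <= value <= hi):
            failures.append(name)
    return failures

unit = {"circuit_pressure_bar": 30.1, "vibration_mm_s": 0.5,
        "compressor_current_a": 16, "flow_rate_l_min": 85,
        "inlet_temp_c": 9, "outlet_temp_c": 12}
print(inspect(unit))  # [] — all parameters within range
```

A unit with, say, vibration at 1.2 mm/s would come back with `["vibration_mm_s"]` and be held back from dispatch.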

Delivery from stock takes 1-3 days of preparation and 3-7 days of logistics, including servicing, a 1.5-2.0 MΩ insulation test, 380-400 V voltage control, a 63-72 dB(A) noise level test, ±0.3-0.5 °C temperature stability, 180-210 kPa hydraulic pressure and verification of the N+1 circuit configuration. EVROPROM engineers accompany integration under loads of 30-70 kW/rack, ensuring the equipment is ready for commissioning without stopping the thermal circuits of the refrigeration system.

Equipment inspection and maintenance standards from EVROPROM:

– Inspection by HVAC engineers: checking circuits at 28-32 bar pressure, measuring vibration at 0.3-0.7 mm/s, diagnosing the compressor group, refrigerant charge control to ±5 g accuracy, and leak testing of lines and assemblies.

– Parameter testing: inlet temperature 5-12 °C and outlet 7-15 °C, low/high-side pressures, power consumption, compressor current, flow rate 40-110 l/min, automation correctness, load modulation at 30-90%, condensation stability at 35-45 °C.

– Pre-sales service: cleaning of the heat exchange packs, flushing of the refrigeration and hydraulic circuits, checking the tightening force of the fittings, checking the oil level, calibration of the pressure and temperature sensors.

– Confirmation of electrical characteristics: input supply 380-400 V, operating currents 8-24 A, measurement of inrush currents, checking phasing, voltage symmetry and tripping of protection modules.

– Warranty: 6-36 months of coverage, including compressors, fan units, condenser sections, heat exchangers and automation elements.

– Document generation: test reports, certificates of origin, technical data sheet, Local PFI, Packing List, declarations of conformity and a full set of export documentation.

– Preparing for transport: reinforced packing, fixing the equipment on the frame, checking vibration resistance, labelling of components, photo and video recording of the unit’s condition before loading.

– Taxes and duties: execution of internal fiscal documents, preparation of the export package, compliance with EU regulations on the movement of equipment with refrigerants.

– Customs clearance: preparation of classification codes, invoices, certificates; accelerated border crossing due to European warehouse status.

– Loading and shipping: placement of the equipment on the carrier’s site, securing for transport loads, control of frame rigidity and fixing points.

– Integration support: advice on connection to the hydraulic network, setting the required flow rate, correct start-up, selection of redundancy schemes.

– Confirmation of readiness for start-up: check of start-up modes, analysis of performance at 70-90% load, recording of operating temperatures, pressures, currents and stability of the equipment under test load.

Final evaluation of the implemented HVAC-engineering solutions:

The Bluefjords AS and Sarek Oy sites are different in terms of scale, load profile and infrastructure architecture, but both chose EVROPROM equipment due to proven performance, test protocols and full technical preparation prior to commissioning. The equipment was integrated without additional modifications and provided stable operation under the high computational and thermal loads required by modern data centres.

EVROPROM is a supplier to industry, energy and data centres:

EVROPROM provides tested 20-1200 kW cooling systems that have undergone advanced engineering verification, load testing and documented proof of performance parameters. The company ensures stable deliveries, technical support and commissioning readiness.

Contact EVROPROM for the best solution:

🌐: evroprom.com

📞: 48 799 355 595

📥: sales@evroprom.com

Author of the article:
Svyatoslav Ovcharenko, sales manager
19.11.2025