In the data center, efficient power distribution is critical to maintaining uptime and lowering costs. The data center power distribution system should be designed with fault tolerance to ensure power is available at every rack cabinet.

Data Center Power Distribution

While there are many different power distribution systems, a feeder-distribution center (FDC) is a common arrangement. These systems consist of a high-voltage feeder cable that usually runs along the top of the racks and connects to each cabinet through a branch circuit. The branch circuit cable then delivers low-voltage power (typically 120 V or 208 V in North America) to each outlet.

These systems can be installed in either a 1+1 or N+1 configuration. In a 1+1 setup, each active feeder is paired with a fully redundant duplicate, while an N+1 configuration provides a single spare for a group of N active feeders. A capacity planning tool should be used when designing the system to prevent overloading branch circuits: it takes the maximum load of each cabinet into account and determines how many feeders and branch circuits are needed before the system is connected to the grid.
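That capacity check can be sketched in a few lines. The 20 A breaker rating and the 80% derating factor below are assumptions drawn from common North American electrical practice, not from any specific planning tool; adjust them for your local code.

```python
import math

def branch_circuits_needed(cabinet_loads_watts, circuit_voltage=120,
                           breaker_amps=20, derating=0.8):
    """Minimum number of branch circuits for the given cabinet loads."""
    # Derate each circuit to 80% of its breaker rating (continuous load).
    usable_watts = circuit_voltage * breaker_amps * derating
    # Round up: a partial circuit is still a whole circuit.
    return math.ceil(sum(cabinet_loads_watts) / usable_watts)

row = [1400, 1800, 1600, 1200]  # assumed max load per cabinet, in watts
print(branch_circuits_needed(row))  # -> 4
```

A real tool would also check per-circuit concentration (one 1,800 W cabinet cannot share a 1,920 W circuit with much else), not just the row total.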

Power distribution is one of the most important things to consider when building a data center. With so much equipment requiring power, you want to ensure your loads are distributed as efficiently as possible.

There are three main ways to do this. The first two apply during data center design and should be completed before construction begins; the third is used during the operation of the data center.

Query power requirements by individual device

As data center power efficiency becomes a top priority for most companies, the need to understand the power requirements of each device becomes more critical than ever. When an organization has multiple tiers of storage and computing, it is crucial to know how much power is being used by each tier.

Although this information can be gathered from any number of sources, it is often a manual process that requires a team of engineers to pore over reports from various sources and logbooks.

The process typically starts with a survey of all equipment in the data center, recording each device's power draw with a power meter. Devices are then classified by similar power requirements and grouped for analysis. The analysis aims to build an accurate picture of how much energy each appliance uses, so every device must be accounted for in one way or another.
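The classification step can be sketched as follows. The device names, meter readings, and the 250 W bucket size are invented for illustration:

```python
# Sketch: group surveyed devices into coarse power classes and
# summarize the draw of each class. All readings are made up.
from collections import defaultdict

survey = [  # (device, measured watts) from a power-meter walk-through
    ("web-01", 350), ("web-02", 360), ("db-01", 740),
    ("db-02", 760), ("nas-01", 510),
]

def classify(watts, bucket=250):
    """Bucket a reading into a coarse power class (0-249 W, 250-499 W, ...)."""
    lo = (watts // bucket) * bucket
    return f"{lo}-{lo + bucket - 1} W"

groups = defaultdict(list)
for name, watts in survey:
    groups[classify(watts)].append((name, watts))

for power_class, members in sorted(groups.items()):
    total = sum(w for _, w in members)
    print(power_class, [n for n, _ in members], f"total {total} W")
```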

Query power requirements based on rack location

Querying the power requirements of data center racks by location helps determine the layout of additional racks and the placement of future equipment. The data gathered can be used to assess the capacity of power distribution units (PDUs) needed at specific rack locations, and is used in conjunction with sub-metering to manage power usage.

This query will allow you to get close enough to your facility’s current power draw for it to be used as a baseline for future power usage projections.

To calculate the required power for rack locations, you first need to determine the maximum power of each location. Factors that affect power requirements include:

  • Number of servers (1U, 2U or 3U)
  • Type of servers (Dell, HP, etc.)
  • Type of workload (e.g., Microsoft Exchange, or a database running on Linux or Windows)
  • Server CPU power (Intel Xeon E5-4650 vs Intel Xeon E3-1245)

Then you can calculate the power requirement for each rack location and add them to get a total.
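Putting those factors together, a per-rack estimate might look like the sketch below. The form factors, wattages, and rack contents are illustrative assumptions; real planning should use vendor data sheets and measured draw.

```python
# Sketch: estimate per-rack power from assumed nameplate figures,
# then total them across rack locations. All numbers are invented.

SERVER_WATTS = {"1U": 350, "2U": 500, "3U": 750}  # assumed draw per server

racks = {  # rack location -> count of servers by form factor
    "A1": {"1U": 10, "2U": 4},
    "A2": {"2U": 8},
    "B1": {"1U": 6, "3U": 2},
}

def rack_power(contents):
    """Sum the assumed draw of every server in one rack."""
    return sum(SERVER_WATTS[form] * count for form, count in contents.items())

per_rack = {loc: rack_power(c) for loc, c in racks.items()}
total = sum(per_rack.values())
print(per_rack, total)  # -> {'A1': 5500, 'A2': 4000, 'B1': 3600} 13100
```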

Scripted load balancing through server virtualization

Server virtualization consolidates workloads onto fewer physical servers, which translates directly into lower energy use. Consolidation also allows for better load balancing, which means fewer periods of peak demand for electricity.

This has added benefits for both the customer and vendor. Customers can receive better service at a lower cost because their facilities operate more efficiently, and vendors can profit more from a more efficient operation that uses fewer resources.

A common load-balancing method is the network-based load balancer. These systems sit in the traffic path and direct each incoming packet or connection to one of several servers based on criteria such as the client address or the requested content. This works well for distributing traffic, but it has limitations that make it unsuitable for some use cases.
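One such dispatch criterion can be sketched as a stable hash of the client address. The server names are placeholders, and production load balancers use richer policies (least-connections, health checks, content routing):

```python
# Sketch: pick a backend server by hashing the client address, so the
# same client consistently reaches the same backend.
import hashlib

SERVERS = ["srv-a", "srv-b", "srv-c"]  # placeholder backend names

def pick_server(client_ip):
    """Map a client address to one backend via a stable hash."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# The same client always lands on the same backend:
assert pick_server("10.0.0.7") == pick_server("10.0.0.7")
```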

Power Distribution Unit Best Practice

The power distribution unit (PDU) is the backbone of a data center’s power infrastructure. It distributes power to the various racks of equipment and is a critical component of a data center’s overall efficiency and reliability.

First, let’s understand how a PDU works. A PDU is not itself an uninterruptible power supply (UPS); rather, it typically sits downstream of a UPS, distributing the UPS’s protected output to connected equipment through its outlets. The UPS is what shields that equipment from unexpected power interruptions by providing a backup supply of electricity.

In addition, intelligent PDUs allow per-outlet thresholds to be configured. For example, if an outlet’s circuit is rated for ten amps, you can set an alarm threshold of nine amps, leaving one amp of headroom so that a sudden spike in energy use raises an alert before the breaker trips.
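A minimal sketch of such a per-outlet check follows. The thresholds and current readings are invented for illustration; in practice, intelligent PDUs expose these values through their management interfaces.

```python
# Sketch: flag PDU outlets whose measured draw exceeds the configured
# alarm threshold. All values are made-up examples.

outlets = {
    # outlet number: (alarm threshold in amps, measured amps)
    1: (9.0, 7.2),
    2: (9.0, 9.4),
    3: (12.0, 10.1),
}

def over_threshold(readings):
    """Return the outlets currently drawing more than their threshold."""
    return [o for o, (limit, amps) in readings.items() if amps > limit]

print(over_threshold(outlets))  # -> [2]
```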


Centralized and Decentralized Power Distribution

There are two main approaches to designing a power distribution scheme: centralized and decentralized. In a centralized method, all IT loads share a single power feed connected to one or more UPS units.

Centralized Power Distribution

Centralized power distribution is used when the data center has relatively few users or little equipment. In this type of system, a single main circuit breaker panel distributes power to all outlets and contains all the necessary protection devices, such as fuses or circuit breakers. This system is easy to design and install but has several drawbacks that limit its scalability:

  1. It only allows a single connection point for all users, making expansion difficult.
  2. It limits the amount of power that can be drawn because there must always be enough left over to run crucial systems such as emergency lighting and fire suppression.
  3. This system is susceptible to outages because a single component failure can cause an entire section to go offline.

Decentralized Power Distribution

Decentralized power distribution is used when the data center’s load requires more than one main circuit breaker panel, or when multiple panels are needed for redundancy.

Conclusion

The data center power distribution practices described in this article are intended to reduce the amount of energy used by the facility, which can help reduce the overall costs of running it. In addition, green practices like these result in less environmental impact from the facility’s operation.

Developing a green, high-efficiency data center is crucial, and applying good power distribution practices is a key part of that effort.
