Data center infrastructure management



Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Implemented through specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.[1]

The efficiencies of this type of integrated management have led technology providers such as Hewlett-Packard, Dell and IBM to build out and complement their product-centric infrastructure and management offerings with Converged Infrastructure environments that converge servers, storage, networking, security, management and facilities.[2] This type of environment allows enterprises to use fewer resources to manage the operations of these independent components.[3]

Driving factors

According to Forrester, DCIM is expected to grow to 60 percent market penetration by 2014, up from one percent in 2010. Several trends are driving the adoption of DCIM, including:[4]

  • Increased power and heat density
  • Data center consolidation
  • Virtualization and cloud computing
  • Increased reliance on critical IT systems[5]
  • Energy efficiency or Green IT initiatives

Features

To address data center availability and reliability requirements, DCIM can identify and eliminate sources of risk to increase availability of critical IT systems. DCIM tools can be used to identify interdependencies between facility and IT infrastructures to alert the facility manager to gaps in system redundancy.

To reduce energy usage and increase energy efficiency, DCIM enables data center managers to measure energy use, enabling safe operation at higher densities. According to Gartner Research, DCIM can lead to energy savings that reduce a data center's total operating expenses by up to 20 percent. In addition to measuring energy use, computational fluid dynamics (CFD) modeling is used to create a Virtual Facility that maximizes the use of airflow, which further drives down cooling infrastructure costs.

DCIM allows optimal server placement with regard to power, cooling and space requirements.
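Placement of this kind reduces to a constraint check against each rack's remaining headroom. A minimal sketch, with hypothetical field names and a simple first-fit policy (real DCIM tools use richer models):

```python
# Illustrative first-fit server placement: return the first rack with enough
# remaining power, cooling and space headroom. All names and units are
# assumptions for the sketch, not any vendor's actual data model.

def place_server(racks, needed_kw, needed_btu, needed_u):
    """Return the ID of the first rack that can host the server, else None."""
    for rack in racks:
        if (rack["spare_kw"] >= needed_kw          # power headroom
                and rack["spare_btu"] >= needed_btu  # cooling headroom
                and rack["spare_u"] >= needed_u):    # rack-unit space
            return rack["id"]
    return None  # no rack can host the server without breaching a limit

racks = [
    {"id": "R1", "spare_kw": 0.3, "spare_btu": 900, "spare_u": 4},
    {"id": "R2", "spare_kw": 1.2, "spare_btu": 4000, "spare_u": 10},
]
print(place_server(racks, needed_kw=0.5, needed_btu=1700, needed_u=2))  # R2
```

A production tool would typically optimize placement (e.g. balancing heat density across the room) rather than take the first fit, but the constraint structure is the same.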

DCIM software is used to benchmark current power consumption through real-time feeds and equipment ratings, then model the effects of "green" initiatives on the data center's power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE) before committing resources to an implementation.
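Both metrics are simple ratios of metered power. A minimal sketch of how they are computed (function names and the example figures are illustrative):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    Lower is better; 1.0 would mean all power reaches IT equipment."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: the inverse of PUE,
    expressed as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Example: a facility drawing 1500 kW in total, of which 1000 kW
# reaches the IT equipment.
print(pue(1500, 1000))   # 1.5
print(dcie(1500, 1000))  # about 66.7 (percent)
```

A DCIM tool feeds these ratios from real-time metering, so the effect of a proposed change (say, raising the cold-aisle setpoint) can be modeled against the current baseline before it is implemented.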

Evolution of tools

Data center monitoring systems were initially developed to track equipment availability and to manage alarms. While these systems evolved to provide insight into the performance of equipment by capturing real-time data and organizing it into a proprietary user interface, they lacked the functionality needed to effectively monitor and adjust interdependent systems across the physical infrastructure in response to changing business and technology needs.

More sophisticated integrated monitoring and management tools were later developed to connect this equipment and provide a holistic view of the facility's data center infrastructure. In addition to enabling comprehensive real-time monitoring, these tools were equipped with additional modeling and management functionality to facilitate long-term capacity planning; dynamic optimization of critical systems performance and efficiency; and efficient asset utilization.[6]

In response to the rapid growth of business-critical IT applications, server virtualization became a popular method for increasing a data center's IT application capacity without making additional investments in physical infrastructure. Server virtualization also enabled rapid provisioning cycles, as multiple applications could be supported by a single provisioned server.

Modern data centers are challenged with disconnects between the facility and IT infrastructure architectures and processes. These challenges have become more critical as virtualization creates a dynamic environment within a static environment, where rapid changes in compute load translate to increased power consumption and heat dispersal.[7] If unanticipated, rapid increases in heat densities can place additional stress on the data center's physical infrastructure, resulting in a lack of efficiency, as well as an increased risk for overloading and outages.[8] In addition to increasing risks to availability, inefficient allocation of virtualized applications can increase power consumption and concentrate heat densities, causing unanticipated "hot spots" in server racks and areas. These intrinsic risks, as well as the aforementioned drivers, have resulted in an increase in market demand for integrated monitoring and management solutions capable of "bridging the gap between IT and facilities" systems.[9]

In 2010, analyst firm Gartner, Inc. issued a report on the state of DCIM implementations and speculated on future evolutions of the DCIM approach. According to the report, widespread adoption of DCIM over time will lead to the development of "intelligent capacity planning" solutions that support synchronized monitoring and management of both physical and virtual infrastructures.[10]

Intelligent capacity planning will enable the aggregation and correlation of real-time data from heterogeneous infrastructures to provide data center managers with a common repository of performance and resource utilization information. It will also enable data center managers to automate the management of IT applications based on server capacity—as well as conditions within a data center's physical infrastructure—optimizing the performance, reliability and efficiency of the entire data center infrastructure.
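The correlation step described above can be sketched as joining facility-side power readings with virtualization-side compute load per rack, then flagging racks approaching a limit on either axis. All names, thresholds and readings below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical sketch of correlating physical (power) and virtual (CPU load)
# telemetry per rack, as an intelligent-capacity-planning tool might.
from dataclasses import dataclass

@dataclass
class RackReading:
    rack_id: str
    power_kw: float        # from facility-side metering
    cpu_utilization: float # aggregated from virtualized hosts, 0.0-1.0

def flag_at_risk(readings, power_budget_kw=8.0, util_ceiling=0.85):
    """Return IDs of racks whose power draw or compute load exceeds a threshold."""
    return [r.rack_id for r in readings
            if r.power_kw > power_budget_kw or r.cpu_utilization > util_ceiling]

readings = [
    RackReading("A1", 6.2, 0.55),
    RackReading("A2", 8.9, 0.62),  # over the power budget
    RackReading("B1", 7.5, 0.91),  # over the utilization ceiling
]
print(flag_at_risk(readings))  # ['A2', 'B1']
```

In a real deployment the flagged racks would feed an automated response, such as migrating virtual machines away from rack B1 before its load concentrates into a hot spot.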

In 2011, Future Facilities and nlyte partnered to bring simulation-based airflow management (Virtual Facility and Datacenter CFD) to a DCIM platform. This combination of cooling-capacity information and intelligent capacity planning enables a proactive approach to identifying potential hot spots within the data center while improving airflow management to reclaim stranded cooling capacity.[11]

In 2011, Emerson Network Power introduced "Trellis," described as the industry's first DCIM platform capable of bridging the data center's physical and IT infrastructure layers in real time to facilitate holistic data center monitoring and management.[12] According to Reuters, the platform would allow enterprises to "cut their energy bills, infrastructure spending and better allocate staff at the giant server farms that manage everything from bank transactions to corporate payrolls."[13]

References

  1. ^ "DCIM: Going Beyond IT", Gartner, Inc., March 29, 2010, Page 2
  2. ^ Huff, Lisa, “The Battle for the Converged Data Center Network,” Data Center Knowledge, August 18, 2011. [1]
  3. ^ Oestreich, Ken, "Converged Infrastructure," The CTO Forum, November 15, 2010. [2]
  4. ^ "Put DCIM into Your Automation Plans", Forrester Research, December 2009 [3]
  5. ^ "Infrastructure Monitoring and Management Tops List of Data Center User Issues", Information Management [4]
  6. ^ "Data Center Management and Efficiency Software", 451 Group
  7. ^ "Emerson Power Bringing Its Perspective to Data Center Management", eWeek, October 19, 2010
  8. ^ "A Look At Data Center Infrastructure Management Software & Its Impact", Processor Magazine, July 2, 2010 [5]
  9. ^ "Bridging the Gap between IT and Facilities", Data Center Knowledge, June 8, 2010 [6]
  10. ^ "DCIM: Going Beyond IT", Gartner, Inc., March 29, 2010, Page 4
  11. ^ "nlyte Software and Future Facilities Partner" [7]
  12. ^ [8]
  13. ^ [9]
