IBM N series Hardware Manual

IBM System Storage
N series Hardware Guide
Select the right N series hardware for
your environment
Understand N series unified
storage solutions
Take storage efficiency to the
next level
ibm.com/redbooks

Front cover

Roland Tretau
Jeff Lin
Dirk Peitzmann
Steven Pemberton
Tom Provost
Marco Schwarz


Summary of Contents for IBM N series

  • Page 1: Front Cover

    Front cover IBM System Storage N series Hardware Guide Select the right N series hardware for your environment Understand N series unified storage solutions Take storage efficiency to the next level Roland Tretau Jeff Lin Dirk Peitzmann Steven Pemberton Tom Provost Marco Schwarz ibm.com/redbooks...
  • Page 3 International Technical Support Organization IBM System Storage N series Hardware Guide May 2014 SG24-7840-03...
  • Page 4 Fourth Edition (May 2014) This edition applies to the IBM System Storage N series portfolio as of October 2013. © Copyright International Business Machines Corporation 2012, 2014. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule...
  • Page 5: Table Of Contents

    1.2 IBM System Storage N series hardware ........
  • Page 6 4.3.1 IBM N series N7x50T slot configuration ....... . 39...
  • Page 7 8.7.2 N series and expansion unit failure ........
  • Page 8 11.5 N series read caching techniques ........
  • Page 9 Chapter 14. Designing an N series solution ....... . .
  • Page 10 18.1 Overview ............238 18.2 Configuring SAN boot for IBM System x servers ......239 18.2.1 Configuration limits and preferred configurations .
  • Page 11 Updating Data ONTAP ........... . 319 Obtaining the Data ONTAP software from the IBM NAS website ....320 Installing Data ONTAP system files .
  • Page 12 IBM System Storage N series Hardware Guide...
  • Page 13: Notices

    IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead.
  • Page 14: Trademarks

    IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml...
  • Page 15: Preface

    This IBM® Redbooks® publication provides a detailed look at the features, benefits, and capabilities of the IBM System Storage® N series hardware offerings. The IBM System Storage N series systems can help you tackle the challenge of effective data management by using virtualization technology and a unified storage architecture. The N series delivers low- to high-end enterprise storage and data management capabilities with midrange affordability.
  • Page 16: Now You Can Become A Published Author, Too

    IT solution architect, pre-sales specialist, consultant, instructor, and enterprise IT customer. He is a member of the IBM Technical Experts Council for Australia and New Zealand (TEC A/NZ), has multiple industry certifications, and is co-author of seven previous IBM Redbooks.
  • Page 17: Comments Welcome

    Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html Preface...
  • Page 19: Summary Of Changes

    New information The following new information is included: The N series hardware portfolio was updated to reflect the October 2013 status quo. Information about changes in Data ONTAP 8.1.x has been included. High availability and MetroCluster information was updated to include SAS shelf technology.
  • Page 21: Part 1. Introduction To N Series Hardware

    Introduction to N series Part hardware This part introduces the N series hardware, including the storage controller models, disk expansion shelves, and cabling recommendations. It also describes some of the hardware functions, including active/active controller clusters, MetroCluster, NVRAM and cache memory, and RAID-DP protection.
  • Page 23: Chapter 1. Introduction To Ibm System Storage N Series

    Chapter 1. Introduction to IBM System Storage N series. The IBM System Storage N series offers more choices to organizations that face the challenges of enterprise data management. It is designed to deliver high-end value with midrange affordability. Built-in enterprise serviceability and manageability features support customer efforts to increase reliability, simplify and unify storage infrastructure and maintenance, and deliver exceptional economy.
  • Page 24: Overview

    1.1 Overview This section introduces the IBM System Storage N series and describes its hardware features. The IBM System Storage N series provides a range of reliable, scalable storage solutions for various storage requirements. These capabilities are achieved by using network access protocols, such as Network File System (NFS), Common Internet File System (CIFS), HTTP, FTP, and iSCSI.
  • Page 25: Ibm System Storage N Series Hardware

    The following sections address the N series models that are available at the time of this writing. Figure 1-2 shows all of the N series models that were released by IBM to date that belong to the N3000, N6000, and N7000 series line.
  • Page 26 – Supports attachment to IBM Enterprise Storage Server® (ESS) series, IBM XIV® Storage System, and IBM System Storage DS8000® and DS5000 series. Also supports a broad range of IBM, EMC, Hitachi, Fujitsu, and HP storage subsystems. IBM System Storage N series Hardware Guide...
  • Page 27 – Protects against data loss because of double disk failures and media bit errors that occur during drive rebuild processes. SecureAdmin: – Authenticates the administrative user and the N series system, which creates a secure, direct communication link to the N series system. – Protects administrative logins, passwords, and session commands from cleartext snooping by replacing RSH and Telnet with the encrypted SSH protocol.
  • Page 28 – Enables cost-effective, long-term retention of rapidly restorable disk-based backups. Storage Encryption Provides support for Full Disk Encryption (FDE) drives in N series disk shelf storage and integration with License Key Managers, including IBM Tivoli® Key Lifecycle Manager. SyncMirror: –...
  • Page 29: Software Licensing Structure

    For more information about N series software features, see IBM System Storage N series Software Guide, SG24-7129, which is available at this website: http://www.redbooks.ibm.com/abstracts/sg247129.html?Open All N series systems support the storage efficiency features, as shown in Figure 1-3. Storage Efficiency features...
  • Page 30: Entry-Level

    1.3.1, “Mid-range and high-end” on page 9. The following changes apply: All protocols (CIFS, NFS, Fibre Channel, iSCSI) are included with entry-level systems Gateway feature is not available MetroCluster feature is not available IBM System Storage N series Hardware Guide...
  • Page 31: Data Ontap 8 Supported Systems

    1.4 Data ONTAP 8 supported systems Figure 1-5 provides an overview of systems that support Data ONTAP 8. The listed systems reflect the N series product portfolio as of June 2011, and some older N series systems that are suitable to run Data ONTAP 8.
  • Page 33: Chapter 2. Entry-Level Systems

    Entry-level systems Chapter 2. This chapter describes the IBM System Storage N series 3000 systems, which address the entry-level segment. This chapter includes the following sections: Overview N32x0 common features N3150 model details N3220 model details N3240 model details N3000 technical specifications...
  • Page 34: Overview

    CIFS, NFS, iSCSI iSCSI, FCP iSCSI, FCP a. All specifications are for dual-controller, active-active configurations. b. Based on optional dual-port 10 GbE or 8 Gb FC mezzanine card and single slot per controller. IBM System Storage N series Hardware Guide...
  • Page 35: N32X0 Common Features

    2.2 N32x0 common features Table 2-2 provides ordering information for N32x0 systems. Table 2-2 N3150 and N32x0 configurations Model Form factor Select Process Control Module N3150-A15, A25 2U chassis 12 SAS 3.5” One or two controllers, each with no mezzanine card N3220-A12, A22 2U chassis 24 SFF SAS 2.5”...
  • Page 36: N3150 Model Details

    2.3 N3150 model details This section describes the N series 3150 models. Note: Be aware of the following points regarding N3150 models: N3150 models do not support the Fibre Channel protocol. Compared to N32xx systems, the N3150 models have newer firmware and no mezzanine card option is available.
  • Page 37 Figure 2-2 shows the front view of the N3150. Figure 2-2 N3150 front view. Figure 2-3 shows the N3150 Single-Controller in chassis (Model A15). Figure 2-3 N3150 Single-Controller in chassis. Figure 2-4 shows the N3150 Dual-Controller in chassis (Model A25). Figure 2-4 N3150 Dual-Controller in chassis. Note: The N3150 supports IP protocols only because it lacks any FC ports.
  • Page 38: N3220 Model Details

    2.4 N3220 model details This section describes the N series 3220 models. 2.4.1 N3220 model 2857-A12 N3220 Model A12 is a single-node storage controller. It is designed to provide HTTP, iSCSI, NFS, CIFS, and FCP support through optional features. Model A12 is a 2U storage controller that must be mounted in a standard 19-inch rack.
  • Page 39: N3240 Model Details

    Figure 2-7 N3220 Dual-Controller in chassis (including optional mezzanine card) 2.5 N3240 model details This section describes the N series 3240 models. 2.5.1 N3240 model 2857-A14 N3240 Model A14 is designed to provide a single-node storage controller with HTTP, iSCSI, NFS, CIFS, and FCP support through optional features.
  • Page 40: N3240 Hardware

    – Redundant hot-swappable, auto-ranging power supplies and cooling fans Figure 2-8 shows the front view of the N3240 Figure 2-8 N3240 front view Figure 2-9 shows the N3240 Single-Controller in chassis. Figure 2-9 N3240 Single-Controller in chassis IBM System Storage N series Hardware Guide...
  • Page 41 Figure 2-10 shows the front and rear view of the N3240. Figure 2-10 N3240 Dual-Controller in chassis. Figure 2-11 shows the controller with the 8 Gb FC Mezzanine card option. Figure 2-11 Controller with 8 Gb FC Mezzanine card option. Figure 2-12 shows the controller with the 10 GbE Mezzanine card option. Figure 2-12 Controller with 10 GbE Mezzanine card option. Chapter 2.
  • Page 42: N3000 Technical Specifications

    The NVRAM on the N3000 models uses a portion of the controller memory, which results in correspondingly less memory being available for Data ONTAP. For more information about N series 3000 systems, see this website: http://www.ibm.com/systems/storage/network/n3000/appliance/index.html IBM System Storage N series Hardware Guide...
  • Page 43: Chapter 3. Mid-Range Systems

    Mid-range systems Chapter 3. This chapter describes the IBM System Storage N series 6000 systems, which address the mid-range segment. This chapter includes the following sections: Overview N62x0 model details N62x0 technical specifications © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 44: Overview

    I/O technologies. Maximize storage efficiency and growth and preserve investments in staff expertise and capital equipment with data-in-place upgrades to more powerful IBM System Storage N series. Improve your business efficiency by using the N6000 series capabilities, which are also available with a Gateway feature.
  • Page 45: Functions And Features Common To All Models

    These options are designed to allow deployment in multiple environments, including data retention, NearStore, disk-to-disk backup scenarios, and high-performance, mission-critical I/O intensive operations. The IBM System Storage N series supports the following expansion units: EXN1000 SATA storage expansion unit (no longer available) EXN2000 and EXN4000 FC storage expansion units...
  • Page 46: N62X0 Model Details

    RLM can communicate with the partner node instance of Data ONTAP. This capability was available in other N series models before the N6000 series. However, the internal Ethernet switch makes the configuration much easier and facilitates quicker cluster failover, with some failovers occurring within 15 seconds.
  • Page 47 All of the N62x0 controller modules provide the same type and number of onboard I/O ports and PCI slots. The Exx models include the IOXM, which provides more PCI slots. Figure 3-3 shows the IBM N62x0 Controller I/O module. Figure 3-3 IBM N62x0 Controller I/O The different N62x0 models also support different chassis configurations.
  • Page 48 IBM N62x0 I/O configuration flexibility is shown in Figure 3-4. Figure 3-4 IBM N62x0 I/O configuration flexibility IBM N62x0 I/O Expansion Module (IOXM) is shown in Figure 3-5 and features the following characteristics: Components are not hot swappable: – Controller panics if it is removed –...
  • Page 49 Figure 3-6 shows the IBM N62x0 system board layout. Figure 3-6 IBM N62x0 system board layout Figure 3-7 shows the IBM N62x0 USB Flash Module, which has the following features: It is the boot device for Data ONTAP and the environment variables...
  • Page 50: Ibm N62X0 Metrocluster And Gateway Models

    Supported MetroCluster N62x0 configuration The following MetroCluster two-chassis configurations are supported: Each chassis single-enclosure stand-alone: • IBM N6220 controller with blank. The N6220-C25 with MetroCluster ships the second chassis, but does not include the VI card. • IBM N6250 controller with IOXM...
  • Page 51: N62X0 Technical Specifications

    Max volume size 60 TB (64-bit) 60 TB (64-bit) 70 TB (64-bit) I/O scalability PCIe slots Max. FC ports Max. Enet ports Max. SAS ports For more information about N series 6000 systems, see this website: http://www.ibm.com/systems/storage/network/n6000/appliance/index.html Chapter 3. Mid-range systems...
  • Page 53: Chapter 4. High-End Systems

    High-end systems Chapter 4. This chapter describes the IBM System Storage N series 7000 system, which addresses the high-end segment. This chapter includes the following sections: Overview N7x50T hardware IBM N7x50T configuration rules N7000T technical specifications © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 54: Overview

    Technology Attachment (SATA) disk expansion units Figure 4-1 N7x50T modular disk storage systems The IBM System Storage N7950T (2867 Model E22) system is an active/active dual-node base unit. It consists of two cable-coupled chassis with one controller and one I/O expansion module per node.
  • Page 55: Hardware Summary

    Support for 8 Gbps Fibre Channel speed N7550T The IBM System Storage N7550T includes the Model C20 storage controller. This controller uses a dual-node active/active configuration, which is composed of two controller units, in either one or two chassis (as required for Metrocluster configuration).
  • Page 56: Controller Module Components

    N7550T and N7950T provide the same onboard I/O connections. The N7950T also includes an I/O expansion module (IOXM) to provide more I/O capacity. Figure 4-5 on page 37 shows the IBM N series N7x50T controller I/O. IBM System Storage N series Hardware Guide...
  • Page 57 Figure 4-5 N7x50 controller Figure 4-6 shows an internal view of the IBM N series N7x50T Controller module. The N7550T and N7950T differ in number of processors and installed memory. Figure 4-6 N7x50 internal view Chapter 4. High-end systems...
  • Page 58: I/O Expansion Module Components

    This provides another 20 PCIe expansion slots (2x 10 slots) to the N7950T relative to the N7550T. The IOXM is not supported on the N7550T model. Figure 4-7 shows the IBM N series N7950T I/O Expansion Module (IOXM). Figure 4-7 IBM N series N7950T I/O Expansion Module (IOXM) The N7950T IOXM features the following characteristics: All PCIe v2.0 (Gen 2) slots: Vertical slots have different form factor
  • Page 59: Ibm N7X50T Configuration Rules

    4.3 IBM N7x50T configuration rules This section describes the configuration rules for N7x50 systems. 4.3.1 IBM N series N7x50T slot configuration This section describes the configuration rules for the vertical I/O slots and horizontal PCIe slots. Vertical I/O slots The vertical I/O slots include the following characteristics: Vertical slots use custom form-factor cards: –...
  • Page 60: N7X50T Cooling Architecture

    – SLDIAG runs from maintenance mode: SYSDIAG booted with a separate binary – SLDIAG has a CLI interface: SYSDIAG used menu tables SLDIAG used on all new IBM N series platforms going forward 4.3.5 MetroCluster, Gateway, and FlexCache MetroCluster and Gateway configurations include the following characteristics:...
  • Page 61: N7X50T Sfp+ Modules

    Figure 4-9 shows the use of the SAS Card in I/O Slot 1. Figure 4-9 Using SAS Card in I/O Slot 1 NVRAM8 and SAS I/O system boards use the QSFP connector: – Mixing the cables does not cause physical damage, but the cables do not work –...
  • Page 62 Figure 4-11 shows the 10 GbE SFP+ modules. Figure 4-11 10 GbE SFP+ modules IBM System Storage N series Hardware Guide...
  • Page 63: N7000T Technical Specifications

    1000 1000 Max volume size 70 TB (64-bit) 100 TB (64-bit) I/O scalability PCIe slots Max. FC ports Max. Enet ports Max. SAS ports For more information about N series 7000 systems, see this website: http://www.ibm.com/systems/storage/network/n7000/appliance/index.html Chapter 4. High-end systems...
  • Page 65: Chapter 5. Expansion Units

    Chapter 5. Expansion units. This chapter describes the IBM N series expansion units, which are also called disk shelves. This chapter includes the following sections: Shelf technology overview Expansion unit EXN3000 Expansion unit EXN3200 Expansion unit EXN3500 Self-Encrypting Drive Expansion unit technical specifications
  • Page 66: Shelf Technology Overview

    The EXN3000 SAS/SATA expansion unit is designed to provide SAS or SATA disk expansion capability for the IBM System Storage N series systems. The EXN3000 is a 4U disk storage expansion unit. It can be mounted in any industry standard 19-inch rack. The EXN3000...
  • Page 67 EXN3000 are installed by IBM in the plant before shipping. Requirement: For an initial order of an N series system, at least one of the storage expansion units must be ordered with at least five disk drive features.
  • Page 68: Supported Exn3000 Drives

    48 disk drives per unit. The EXN3200 is a disk storage expansion unit for mounting in any industry standard 19-inch rack. The EXN3200 provides low-cost, high-capacity SAS disk storage for the IBM N series system storage family. The EXN3200 must be ordered with a full complement of (48) disks.
  • Page 69: Overview

    5.3.1 Overview The IBM System Storage EXN3200 SATA expansion unit is available for attachment to all N series systems, except N3300, N3700, N5200, and N5500. The EXN3200 provides low-cost, high-capacity SATA disk storage for the IBM N series system storage.
  • Page 70: Supported Exn3000 Drives

    PSU Total input current 3 TB 8.71 3.29 6.57 4.59 1.73 3.46 measured, A 4 TB 8.54 3.40 6.79 4.25 1.69 3.38 Total input power 3 TB measured, W 4 TB IBM System Storage N series Hardware Guide...
  • Page 71: Expansion Unit Exn3500

    19-inch rack. The EXN3500 provides low-cost, high-capacity SAS disk storage with slots for 24 hard disk drives for the IBM N series system storage family. The EXN3500 SAS expansion unit is shipped with no disk drives unless they are included in the order.
  • Page 72: Overview

    450 GB and 600 GB physical capacity, and must be ordered as features of the EXN3500. Requirement: For an initial order of an N series system, at least one of the storage expansion units must be ordered with at least five disk drive features.
  • Page 73: Intermix Support

    Figure 5-9 shows the IOM differences. Figure 5-9 IOM differences 5.4.2 Intermix support EXN3000 and EXN3500 can be combined in the following configurations: Intermix of EXN3000 and EXN3500 shelves: EXN3000 and EXN3500 shelves cannot be intermixed on the same stack. Mixing EXN3500 shelves and EXN3000 shelves with IOM3 or IOM6 modules in the same system is supported on the N3150 and N32x0 platforms only, not on other platforms.
  • Page 74: Environmental And Technical Specification

    – Encryption that is enabled through disk drive firmware (same drive as what is shipping with different firmware) Available in EXN3500 and EXN3000 expansion shelf and N3220 (internal drives) controller: Only fully populated (24 drives) and N3220 controller IBM System Storage N series Hardware Guide...
  • Page 75: Sed Overview

    Requires DOT 8.1 minimum Only allowed with HA (dual node) systems Provides storage encryption capability (key manager interface) 5.5.2 SED overview Storage Encryption is the implementation of full disk encryption (FDE) by using self-encrypting drives from third-party vendors, such as Seagate and Hitachi. FDE refers to encryption of all blocks in a disk drive, whether by software or hardware.
  • Page 76: Key Management

    PEM format, and can be self-signed or signed by a certificate authority (CA). Supported key managers Self-encryption with Data ONTAP 8.1 supports IBM Tivoli Key Lifecycle Management Version 2 server for key management (others follow). Other KMIP-compliant key managers are evaluated as they are released into the market.
  • Page 77 Because it demands no changes to applications and servers, it is a seamless fit for virtually any IT infrastructure. For these reasons, IBM led the IT industry in developing and promoting an exciting new security standard: Key Management Interoperability Protocol (KMIP). KMIP is an open standard that is designed to support the full lifecycle of key management tasks from key creation to key retirement.
  • Page 78: Expansion Unit Technical Specifications

    4 RU 2 RU Drives per shelf Drive form factor 3.5-inch 3.5-inch 2.5-inch Drive carrier Single drive Dual drive Single drive Storage tiers supported Ultra Perf. SSD High Perf. HDD High Capacity Self encrypting IBM System Storage N series Hardware Guide...
  • Page 79: Chapter 6. Cabling Expansions

    Cabling expansions Chapter 6. This chapter describes the multipath cabling of expansions and includes the following sections: EXN3000 and EXN3500 disk shelves cabling EXN4000 disk shelves cabling Multipath HA cabling © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 80: Exn3000 And Exn3500 Disk Shelves Cabling

    MetroClusters. The example that is used throughout is an HA pair with two 4-port SAS-HBA controllers in each N series controller. The configuration includes two SAS stacks, each of which has three SAS shelves. Important: We recommend that you always use HA (dual path) cabling for all shelves that are attached to N series heads.
  • Page 81: Sas Shelf Interconnects

    Connecting the Quad-port SAS HBAs adhere to the following rules for connecting to SAS shelves: HBA port A and port C always connect to the top storage expansion unit in a stack of storage expansion units. HBA port B and port D always connect to the bottom storage expansion unit in a stack of storage expansion units.
  • Page 82 Figure 6-3 shows how the SAS shelves are interconnected for two stacks with three shelves each. Figure 6-3 SAS shelf interconnect IBM System Storage N series Hardware Guide...
  • Page 83: Top Connections

    6.1.3 Top connections The top ports of the SAS shelves are connected to the HA pair controllers, as shown in Figure 6-4. Figure 6-4 SAS shelf cable top connections Chapter 6. Cabling expansions...
  • Page 84: Bottom Connections

    SAS connections. Complete the following steps to verify that the storage expansion unit IOMs have connectivity to the controllers: 1. Enter the following command at the system console: sasadmin expander_map Tip: For Active/Active (high availability) configurations, run this command on both nodes. IBM System Storage N series Hardware Guide...
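On the Data ONTAP 7-Mode console, the verification step above might look like the following sketch; the hostname, adapter name, and output layout are illustrative assumptions, not an exact transcript:

```
itsotuc1> sasadmin expander_map
Expanders on channel 3a:
  Level 1: WWN 500a0980000840ff, slot A, IOM6, shelf 1
  Level 2: WWN 500a09800008413f, slot A, IOM6, shelf 2
  Level 3: WWN 500a098000084200, slot A, IOM6, shelf 3
```

If all IOMs of the stack appear on both nodes, the dual-path cabling is complete; a missing entry points at the miscabled IOM described in the next step.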
  • Page 85: Connecting The Optional Acp Cables

    2. Review the output and perform the following tasks: – If the output lists all of the IOMs, the IOMs have connectivity. Return to the cabling procedure for your storage configuration to complete the cabling steps. – IOMs might not be shown because the IOM is cabled incorrectly. The incorrectly cabled IOM and all of the IOMs downstream from it are not displayed in the output.
  • Page 86: Exn4000 Disk Shelves Cabling

    Verify that the ACP cabling is correct by entering the following command: storage show acp For more information about cabling SAS stacks and ACP to an HA pair, see IBM System Storage EXN3000 Storage Expansion Unit Hardware and Service Guide, which is available at this website: http://www.ibm.com/storage/support/nas...
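A quick ACP check on each node might look like this sketch; the field names and layout are approximate, from memory of 7-Mode output:

```
itsotuc1> storage show acp
Alternate Control Path:  Enabled
Ethernet Interface:      e0P
ACP Status:              Active
```

An ACP status other than Active usually means the daisy-chained ACP Ethernet cabling is broken at some shelf in the stack.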
  • Page 87: Non-Multipath Fibre Channel Cabling

    6.2.1 Non-multipath Fibre Channel cabling Figure 6-7 shows EXN4000 disk shelves that are connected to a HA pair with non-multipath cabling. A single Fibre Channel cable or shelf controller failure might cause a takeover situation. Figure 6-7 EXN4000 dual controller non-multipath Attention: Do not mix Fibre Channel and SATA expansion units in the same loop.
  • Page 88: Multipath Fibre Channel Cabling

    Tip: For N series controllers to communicate with an EXN4000 disk shelf, the Fibre Channel ports on the controller or gateway must be set for initiator. Changing the behavior of the Fibre Channel ports on the N series system can be performed by using the fcadmin command.
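As a sketch, checking and changing port behavior with fcadmin might look like the following; the hostname and adapter names are illustrative and vary by model, and the change requires a reboot before it takes effect:

```
itsotuc1> fcadmin config
Local
Adapter  Type       State         Status
-----------------------------------------
  0a     target     CONFIGURED    online
  0b     initiator  CONFIGURED    online

itsotuc1> fcadmin config -t initiator 0a
A reboot is required for the new adapter configuration to take effect.
```

Ports that connect to EXN4000 shelves must be initiators; ports that serve hosts over FCP remain targets.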
  • Page 89: Multipath Ha Cabling

    6.3 Multipath HA cabling A standard N series clustered storage system has multiple single points of failure on each shelf that can trigger a cluster failover (see Example 6-1). Cluster failovers can disrupt access to data and put an increased workload on the surviving cluster node.
  • Page 91: Chapter 7. Highly Available Controller Pairs

    Chapter 7. Highly Available controller pairs. An IBM System Storage N series Highly Available (HA) pair configuration consists of two nodes that can take over and fail over their resources or services to their counterpart nodes. This function assumes that all resources can be accessed by each node. This chapter describes aspects of determining HA pair status, and HA pair management.
  • Page 92: Ha Pair Overview

    Nondisruptive software upgrades: When you halt one node and allow takeover, the partner node continues to serve data for the halted node while you upgrade the node you halted. IBM System Storage N series Hardware Guide...
  • Page 93: Characteristics Of Nodes In An Ha Pair

    Nondisruptive hardware maintenance: When you halt one node and allow takeover, the partner node continues to serve data for the halted node. You can then replace or repair hardware in the node you halted. Figure 7-2 shows an HA pair where Controller A failed and Controller B took over services from the failing node.
  • Page 94: Preferred Practices For Deploying An Ha Pair

    They each have mailbox disks or array LUNs on the root volume: – Two if it is an N series controller system (four if the root volume is mirrored by using the SyncMirror feature). – One if it is an N series gateway system (two if the root volume is mirrored by using the SyncMirror feature).
  • Page 95 Table 7-1 Configuration types HA pair If A-SIS Distance between Failover possible Notes configuration active nodes after loss of entire type node (including storage) Standard HA pair Up to 500 meters Use this configuration to provide configuration higher availability by protecting against many hardware single points of failure.
  • Page 96: Ha Pair Types And Requirements

    Data from the NVRAM of one node is mirrored by its partner. Each node can take over the partner’s disks or array LUNs if the partner fails. IBM System Storage N series Hardware Guide...
  • Page 97 See the Data ONTAP Release Notes for the list of supported systems, which is available at this website: http://www.ibm.com/storage/support/nas For systems with two controller modules in a single chassis, both nodes of the HA pair configuration are in the same chassis and have internal cluster interconnect.
  • Page 98: Mirrored Ha Pairs

    – There must be sufficient spares in each pool to account for a disk or array LUN failure. – Avoid having both plexes of a mirror on the same disk shelf because that configuration results in a single point of failure. IBM System Storage N series Hardware Guide...
  • Page 99: Stretched Metrocluster

    If you are using third-party storage, paths to an array LUN must be redundant. License requirements The following licenses must be enabled on both nodes: syncmirror_local 7.2.3 Stretched MetroCluster Stretch MetroCluster includes the following characteristics: Stretch MetroClusters provide data mirroring and the ability to start a failover if an entire site becomes lost or unavailable.
  • Page 100: Fabric-Attached Metrocluster

    The main difference from a Stretched MetroCluster is that all connectivity between the controllers, the disk shelves, and the two sites is carried over IBM/Brocade Fibre Channel switches, which are called the back-end switches.
  • Page 101 A fabric-attached MetroCluster connects the two controller nodes and the disk shelves through four SAN switches that are called the Back-end Switches. The Back-end Switches are IBM/Brocade Fibre Channel switches in a dual-fabric configuration for redundancy. Figure 7-5 shows a simplified Fabric-attached MetroCluster. Use a single disk shelf per Fibre Channel switch port.
  • Page 102: Configuring The Ha Pair

    Configuration Guide, which is available at this website: http://www.ibm.com/storage/support/nas Strict rules also apply for which firmware versions are supported on the back-end switches. For more information, see the latest IBM System Storage N series and TotalStorage NAS interoperability matrixes that are found at this website: http://www.ibm.com/support/docview.wss?uid=ssg1S7003897 7.3 Configuring the HA pair...
  • Page 103: Configuration Variations For Standard Ha Pair Configurations

    Attention: Use VIFs with HA pairs to reduce single points of failure (SPOFs). If you do not want to configure your network for use in an HA pair when you run the setup command for the first time, you can configure it later. You can do so by running the setup command again, or by using the ifconfig command and editing the /etc/rc file manually.
  • Page 104: Enabling Licenses On The Ha Pair Configuration

    HA pair configuration. 7.3.4 Configuring Interface Groups The setup process guides the N series administrator through the configuration of Interface Groups. In the setup wizard, they are called VIFs.
  • Page 105: Configuring Interfaces For Takeover

    Please enter the IP address for Network Interface vif1 []: 9.11.218.173 Please enter the netmask for Network Interface vif1 [255.0.0.0]:255.0.0.0 The Interface Groups can also be configured by using Data ONTAP FilerView or IBM System Manager for IBM N series.
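Outside the setup wizard, the same Interface Group can be created from the command line; the following is a minimal sketch with assumed member ports, and note that on Data ONTAP 8 the vif command family is renamed ifgrp:

```
vif create multi vif1 e0a e0b
ifconfig vif1 9.11.218.173 netmask 255.0.0.0 up
```

Adding the same two lines to /etc/rc makes the Interface Group persist across reboots.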
  • Page 106: Setting Options And Parameters

    A network interface performs this role if it has a local IP address but not a partner IP address. You can assign this role by using the partner option of the ifconfig command. Example 7-6 shows how to configure a dedicated interface for the N series. Example 7-6 Configuring a dedicated interface Please enter the IP address for Network Interface e0b []: 9.11.218.160...
  • Page 107: Testing Takeover And Giveback

    For more information about the options, see the na_options man page at this website: http://www.ibm.com/storage/support/nas/ Parameters that must be the same on each node The parameters that are listed in Table 7-2 must be the same so that takeover is smooth and data is transferred between the nodes correctly.
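The requirement above, that certain options hold identical values on both nodes, can be sketched as a simple consistency check. The option names below are illustrative examples, not the authoritative list from Table 7-2:

```python
# Sketch: flag Data ONTAP options that differ between HA partner nodes,
# since mismatched parameters prevent a smooth takeover.

def find_mismatches(node_a_options, node_b_options):
    """Return the option names whose values differ between the two nodes."""
    shared = set(node_a_options) & set(node_b_options)
    return sorted(name for name in shared
                  if node_a_options[name] != node_b_options[name])

node1 = {"timed.proto": "ntp", "autosupport.enable": "on", "raid.timeout": "24"}
node2 = {"timed.proto": "ntp", "autosupport.enable": "off", "raid.timeout": "24"}

print(find_mismatches(node1, node2))  # any differing option must be reconciled
```

Running such a comparison before testing takeover makes mismatches visible before they cause a failover problem.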
  • Page 108: Eliminating Single Points Of Failure With Ha Pair Configurations

    If the FC-AL card for the secondary loop fails, the failover capability is disabled. However, both storage systems continue to serve data to their respective applications and users, with no effect or delay. IBM System Storage N series Hardware Guide...
  • Page 109: Managing An Ha Pair Configuration

    Data ONTAP command-line interface (CLI) Data ONTAP FilerView IBM System Manager for N series Operations Manager 7.4.1 Managing an HA pair configuration At a high level, the following tasks are involved in managing an HA pair configuration:...
  • Page 110: Halting A Node Without Takeover

    Halting a node without takeover Performing a takeover For more information about managing an HA pair configuration, see IBM System Storage N series Data ONTAP 8.0 7-Mode High-Availability Configuration Guide, which is available at this website: http://www.ibm.com/storage/support/nas 7.4.2 Halting a node without takeover You can halt the node and prevent its partner from taking over.
  • Page 111: Basic Ha Pair Configuration Management

    Copyright (C) 2000,2001,2002,2003 Broadcom Corporation. Portions Copyright (c) 2002-2006 Network Appliance, Inc. CPU type 0xF29: 2800MHz Total memory: 0x80000000 bytes (2048MB) CFE> The same result can be accomplished by using the command cf disable followed by the halt command. From the CFE prompt or the boot LOADER prompt (depending on the model), the system can be rebooted by using the boot_ontap command.
  • Page 112 Example 7-11 cf status: Verification if takeover completed itsonas2(takeover)> cf status itsonas2 has taken over itsonas1. itsonas1 is ready for giveback. Takeover due to negotiated failover, reason: operator initiated cf takeover itsonas2(takeover)>
  • Page 113 In the example, the N series itsonas1 rebooted when you ran the cf takeover command. When one N series storage system node is in takeover mode, the partner N series node does not reboot until the cf giveback command is run.
  • Page 114 GUI. The example demonstrates how to perform these tasks by using System Manager. System Manager is a tool for managing IBM N series systems that is available at no extra cost. System Manager can be downloaded from the IBM NAS support site that is available at this website: http://www.ibm.com/storage/support/nas...
  • Page 115 Tip: Under normal conditions, you do not need to perform takeover or giveback on an IBM N series system. Usually, you need it only when a controller must be halted or rebooted for maintenance. Complete the following steps: 1. As shown in Figure 7-6, you can perform the takeover by using System Manager and clicking Active/Active Configuration ...
  • Page 116 2. Figure 7-7 shows the Active/Active takeover wizard step 1. Click Next to continue. Figure 7-7 System Manager initiating takeover: Step 1 3. Figure 7-8 shows the Active/Active takeover wizard step 2. Click Next to continue. Figure 7-8 System Manager initiating takeover: Step 2
  • Page 117 4. Figure 7-9 shows the Active/Active takeover wizard step 3. Click Finish to continue. Figure 7-9 System Manager initiating takeover: Step 3 5. Figure 7-10 shows the Active/Active takeover wizard final step where takeover was run successfully. Click Close to continue. Figure 7-10 System Manager takeover successful Chapter 7.
  • Page 118 Figure 7-11 System Manager itsonas2 taken over by itsonas1 Starting giveback by using System Manager Figure 7-12 shows how to perform the giveback by using System Manager. Figure 7-12 FilerView: Start giveback
  • Page 119 Figure 7-13 shows a successfully completed giveback. Figure 7-13 System Manager giveback successful Figure 7-14 shows that System Manager now reports the systems back to normal after a successful giveback. Figure 7-14 System Manager with systems back to normal Chapter 7. Highly Available controller pairs...
  • Page 120: Ha Pair Configuration Failover Basic Operations

    You halt one of the HA pair nodes without using the -f flag. The -f flag applies only to storage systems in an HA pair configuration. If you enter the halt -f command on an N series, its partner does not take over. You start a takeover manually.
  • Page 121 Failover because of disk mismatch Communication between HA pair nodes is first established through the HA pair configuration interconnect adapters. At this time, the nodes exchange a list of disk shelves that are visible on the A loop and B loop of each node. If the B loop shelf count on its partner is greater than its local A loop shelf count, the system concludes that it is impaired.
  • Page 122
  • Page 123: Chapter 8. Metrocluster

    This chapter includes the following sections: Overview of MetroCluster Business continuity solutions Stretch MetroCluster Fabric Attached MetroCluster Synchronous mirroring with SyncMirror MetroCluster zoning and TI zones Failure scenarios © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 124: Overview Of Metrocluster

    8.1 Overview of MetroCluster IBM N series MetroCluster, as shown in Figure 8-1, is a solution that combines N series local clustering with synchronous mirroring to deliver continuous availability. MetroCluster expands the capabilities of the N series portfolio. It works seamlessly with your host and storage environment to provide continuous data availability between two sites while eliminating the need to create and maintain complicated failover scripts.
  • Page 125 (because of SyncMirror) in the second location. A MetroCluster system is made up of the following components: Two N series storage controllers, HA configuration: These controllers provide the nodes for serving the data in a failure. N62x0 and N7950T systems are supported in MetroCluster configurations, whereas N3x00 is not supported.
  • Page 126 Figure 8-2 Logical view of MetroCluster SyncMirror Geographical separation of N series nodes is implemented by physically separating controllers and storage, which creates two MetroCluster halves. For distances under 500 m (campus distances), long cables are used to create Stretch MetroCluster configurations.
  • Page 127: Business Continuity Solutions

    The N series offers several levels of protection with several different options. MetroCluster is one of the options that is offered by the N series. MetroCluster fits into the campus-level distance requirement of business continuity, as shown in Figure 8-3.
  • Page 128: Planning Stretch Metrocluster Configurations

    – Two onboard FC ports + dual port FC initiator adapter – Quad port FC initiator HBA (frees up onboard FC ports) All slots are used and the N6210 cannot be upgraded with other adapters.
  • Page 129: Cabling Stretch Metroclusters

    Mixed SATA and FC configurations are allowed if the following requirements are met: – There is no intermixing of Fibre Channel and SATA shelves on the same loop. – Mirrored shelves must be of the same type as their parents. The Stretch MetroCluster heads can be separated by up to 500 m (at 2 Gbps).
  • Page 130: Fabric Attached Metrocluster

    (back-end traffic and FC/VI communication). It does not support sharing this infrastructure with other systems. A minimum of four FibreBridges is required when SAS shelves (EXN3000 and EXN3500) are used in a MetroCluster environment.
  • Page 131: Planning Fabric Metrocluster Configurations

    Storage must be symmetric (for example, same storage on both sides). For storage that is not symmetric but is similar, file an RPQ/SCORE. N series native disk shelf disk drives are not supported by MetroClusters. Four Brocade/IBM B-Type Fibre Channel Switches are needed. For more information...
  • Page 132 Brocade 5100 switches. For more information about shared-switches configuration, see the Data ONTAP High Availability Configuration Guide. Attention: Always see the MetroCluster Interoperability Matrix on the IBM Support site for the latest information about components and compatibility. IBM System Storage N series Hardware Guide...
  • Page 133: Cabling Fabric Metroclusters

    8.4.2 Cabling Fabric MetroClusters Figure 8-7 shows an example of a Fabric MetroCluster with two EXN4000 FC shelves on each site. Figure 8-7 Fabric MetroCluster cabling with EXN4000 Fabric MetroCluster configurations use Fibre Channel switches as the means to separate the controllers by a greater distance.
  • Page 134: Synchronous Mirroring With Syncmirror

    The FibreBridge enables connectivity between Fibre Channel initiators and SAS shelves, as shown in Figure 8-8. Figure 8-8 Cabling a Fabric MetroCluster with FibreBridges and SAS Shelves For more information about SAS Bridges, see the SAS FibreBridges Chapter in the N series Hardware book. 8.5 Synchronous mirroring with SyncMirror
  • Page 135 Figure 8-9 Synchronous mirroring SyncMirror is used to create aggregate mirrors. When you are planning for SyncMirror environments, remember the following considerations: Aggregate mirrors must be on the remote site (geographically separated) In normal mode (no takeover), aggregate mirrors cannot be served out Aggregate mirrors can exist only between like drive types When the SyncMirror license is installed, disks are divided into pools (pool0: local, pool1: remote/mirror).
  • Page 136 0c.17 FC:B 0 FCAL 15000 0/0 137104/280790184 partner 0c.27 11 FC:B 0 FCAL 15000 0/0 137104/280790184 partner 0c.18 FC:B 0 FCAL 15000 0/0 137104/280790184 partner 0a.23 FC:A 1 FCAL 15000 0/0 137104/280790184
  • Page 137: Syncmirror Without Metrocluster

    partner 0a.28 12 FC:A 1 FCAL 15000 0/0 137104/280790184 partner 0a.24 FC:A 1 FCAL 15000 0/0 137104/280790184 partner 0c.19 FC:B 0 FCAL 15000 0/0 274845/562884296 8.5.2 SyncMirror without MetroCluster SyncMirror local (without MetroCluster) is a standard cluster with one or both controllers mirroring their RAID to two separate shelves.
  • Page 138: Metrocluster Zoning And Ti Zones

    Fabric MetroCluster, which makes switch management minimal. The TI zone feature of Brocade/IBM B type switches (FOS 6.0.0b or later) allows you to control the flow of interswitch traffic. You do so by creating a dedicated path for traffic that flows from a specific set of source ports.
  • Page 139 You can benefit from using two ISLs per fabric (instead of one ISL per fabric) to separate out high-priority cluster interconnect traffic from other traffic. This configuration prevents contention on the back-end fabric, and provides additional bandwidth in some cases. The TI feature is used to enable this separation.
  • Page 140: Failure Scenarios

    (black) by TI zones. Figure 8-13 TI Zones in MetroCluster environment 8.7 Failure scenarios This section describes some possible failure scenarios and the resulting configurations when MetroCluster is used.
  • Page 141: Metrocluster Host Failure

    8.7.1 MetroCluster host failure In this scenario, N series N1 (Node 1) failed. CFO/MetroCluster takes over the services and access to its disks, as shown in Figure 8-14. The fabric switches provide the connectivity for the N series N2 and the hosts to continue to access data without interruption.
  • Page 142: Metrocluster Interconnect Failure

    During this period, data access is uninterrupted to all hosts. No automated controller takeover occurs. Both controller heads continue to serve their LUNs and volumes. However, mirroring and failover are disabled, which reduces data protection. When the interconnect failure is resolved, mirrors are resynced.
  • Page 143: Metrocluster Site Failure

    8.7.4 MetroCluster site failure In this scenario, a site disaster occurred and all switches, storage systems, and hosts are lost, as shown in Figure 8-17. To continue data access, a cluster failover must be started by using the cfo -d command. Both primaries now exist at data center 2, and hosting of Host1 is done at data center 2.
  • Page 144: Metrocluster Site Recovery

    A cf giveback command is run to resume normal operations, as shown in Figure 8-18. Mirrors are resynchronized, and primaries and mirrors revert to their previous roles. Figure 8-18 MetroCluster recovery
  • Page 145: Chapter 9. Metrocluster Expansion Cabling

    Chapter 9. This chapter describes two options for using MetroCluster with SAS connected expansion shelves. This chapter includes the following sections: FibreBridge 6500N Stretch MetroCluster with SAS shelves and SAS cables © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 146: Fibrebridge 6500N

    FibreBridge provides a complete highly available connectivity solution for MetroCluster. 9.1.1 Description MetroCluster adds great availability to N series systems but is limited to Fibre Channel drive shelves only. Before 8.1, both SATA and Fibre Channel drive shelves were supported in active-active configurations in stretch MetroCluster configurations.
  • Page 147 Attention: At the time of this writing, Data ONTAP 8.1 has the following limitations: The FibreBridge does not support mixing EXN3000 and EXN3500 in same stack. FibreBridge configurations do not support SSD drives. The FibreBridge does not support SNMP. Table 9-1 Shelf combinations in a FibreBridge stack Shelf EXN3000 EXN3000...
  • Page 148 For example, if the spindle limit for N series N62x0 is n, the spindle limit for a N62x0 fabric MetroCluster configuration remains n despite the two controllers.
  • Page 149 Figure 9-4 shows an example of an N series Stretch MetroCluster environment. Fibre Channel ports of the N series nodes are connected to the Fibre Channel ports on the FibreBridge (FC1 and FC2). SAS ports of the first and last shelf in a stack are connected to the SAS ports (SAS port A) on the FibreBridge.
  • Page 150: Administration And Management

    SAS shelves in a stack are each connected through one SAS link to a bridge. Figure 9-5 Fabric MetroCluster with FibreBridges N series gateway configurations do not use the FibreBridge. Storage is presented through FCP as LUNs from whatever back-end array the gateway head is front ending.
  • Page 151: Stretch Metrocluster With Sas Shelves And Sas Cables

    Your system platform, disk shelves, and version of Data ONTAP that your system is running must support SAS optical cables. The most current support information can be found at the following IBM N series support site: http://www.ibm.com/support/docview.wss?uid=ssg1S7003897 SAS optical multimode QSFP-to-QSFP cables can be used for controller-to-shelf and shelf-to-shelf connections, and are available in lengths up to 50 meters.
  • Page 152 SAS optical cables. – If the shelf-to-shelf connections are SAS copper cables, the shelf-to-controller connections to that stack can be SAS optical cables or SAS copper cables. IBM System Storage N series Hardware Guide...
  • Page 153: Installing A New System With Sas Disk Shelves By Using Sas Optical Cables

    About these procedures The following general information applies to the procedures that are described in this IBM Redbooks® publication: The use of SAS optical cables in a stack that is attached to FibreBridge 6500N bridges is not supported. Disk shelves that are connected with SAS optical cables require a version of disk shelf firmware that supports SAS optical cables.
  • Page 154 Example: 0151 is the disk shelf firmware version for shelf number one (Slot A/IOM A) in the storage system: Expanders on channel 4a: Level 3: WWN 500a0980000840ff, ID 1, Serial Number ' SHU0954292G114C', Product 'DS424IOM6', Rev '0151', Slot A IBM System Storage N series Hardware Guide...
  • Page 155: Replacing Sas Cables In A Multipath Ha Configuration

    Compare the firmware information in the command output with the disk shelf firmware information at the IBM N series support site to determine the most current disk shelf firmware version. 13.The next step depends on how current the disk shelf firmware is: –...
  • Page 156 If the firmware version in the command output is the same as or later than the most current version on the N series Support Site, no disk shelf firmware update is needed. If the firmware version in the command output is an earlier version than the most current version on the N series Support Site, you must update the disk shelf firmware.
  • Page 157: Hot-Adding An Sas Disk Shelf By Using Sas Optical Cables

    The output should be the same as Step 1; the system should be Multi-Path HA, and the SAS port and attached disk shelf information should be the same. If the output is something other than Multi-Path HA, you must identify the cabling error, correct it, and run the sysconfig command again.
  • Page 158 Change the shelf ID to a valid ID that is unique from the other SAS disk shelves in the storage system. b. Power-cycle the disk shelf to make the shelf ID take effect. IBM System Storage N series Hardware Guide...
  • Page 159 For more information, see “Changing the disk shelf ID” in the Disk Shelf Installation and Service Guide. Cabling the hot-added disk shelf Cabling the hot-added disk shelf involves cabling the SAS connections and, if applicable, assigning disk drive ownership. About this task This procedure is written with the assumption that you originally cabled your system so that the controllers connect to the last disk shelf in the stack through the disk shelf’s circle ports instead of the square ports.
  • Page 160 3 - 5 minutes (the time it takes to update downrev firmware on a disk drive), which shows the firmware update progress. IBM System Storage N series Hardware Guide...
  • Page 161: Replacing Fibrebridge And Sas Copper Cables With Sas Optical Cables

    Product 'DS424IOM6', Rev '0151', Slot A c. Compare the firmware information in the command output with the disk shelf firmware information at the IBM N series support site to determine the most current disk shelf firmware version. 3. The next step depends on how current the disk shelf firmware is: –...
  • Page 162 You installed the SAS HBAs in Step 9 of this procedure after you halt your system. For more information about SAS ports, see the Universal SAS and ACP Cabling Guide. You must download the Universal SAS and ACP Cabling Guide from the N series Support Site.
  • Page 163 Figure 9-6 Stretch MetroCluster using FibreBridge and SAS copper cables Figure 9-7 on page 144 is an example of how a 62xx looks after the system is cabled with SAS optical cables (which were replaced the FibreBridge 6500N bridges and SAS copper cables).
  • Page 164 Locate the disk shelf firmware information for the disk shelves in the output. Example: 0151 is the disk shelf firmware version for shelf number one (for each IOM6) in the storage system: Shelf 1: IOM6 Firmware rev. IOM6 A: 0151 IOM6 B: 0151 IBM System Storage N series Hardware Guide...
  • Page 165 – If the firmware version in the command output is the same as or later than the most current version on the N series support site, no disk shelf firmware update is needed. – If the firmware version in the command output is an earlier version than the most current version on the N series support site, download and install the new disk shelf firmware file.
  • Page 166 FC ports (connected through the bridges) to SAS ports on the controllers. 16.Enable controller failover by entering the following command on either node: cf enable 17.Verify that controller failover is enabled by entering the following command on either node: cf status IBM System Storage N series Hardware Guide...
  • Page 167: Chapter 10. Data Protection With Raid Double Parity

    RAID-DP allows the RAID group to continue serving data and re-create the data on the two failed disks. This chapter includes the following sections: Background Why use RAID-DP RAID-DP overview RAID-DP and double parity Hot spare disks © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 168: Background

    FlexVols offer flexible and unparalleled functionality that is housed in a construct that is known as an aggregate. For more information about FlexVol and thin provisioning, see N series Thin Provisioning, REDP-4747, which is available at this website: http://www.redbooks.ibm.com/abstracts/redp4747.html?Open Traditional single-parity RAID technology offers protection from a single disk drive failure.
  • Page 169: Why Use Raid-Dp

    10.2 Why use RAID-DP Traditional single-parity RAID offers adequate protection against a single event. This event can be a complete disk failure or a bit error during a read. In either event, data is re-created by using parity data and data that remains on unaffected disks in the array or volume. If the event is a read error, re-creating data happens almost instantaneously and the array or volume remains in an online mode.
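The single-parity recovery described above is simple XOR arithmetic: parity is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the survivors. A minimal sketch:

```python
# Single-parity RAID in miniature: one lost block per stripe is
# recoverable by XOR-ing the remaining blocks with the parity block.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # three data blocks
parity = xor_blocks(stripe)

# Disk 1 fails: rebuild its block from the remaining data plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == stripe[1])  # True: a single failure is recoverable
```

A second simultaneous failure leaves two unknowns in every stripe equation, which is exactly the gap that RAID-DP's second parity closes.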
  • Page 170: Single-Parity Raid Using Larger Disks

    RAID reliability from storage vendors. To meet this demand, a new type of RAID protection called RAID Double Parity (RAID-DP) was developed, as shown in Figure 10-4 on page 151.
  • Page 171: Raid-Dp Overview

    RAID-DP is available at no additional fee and with no special hardware requirements. By default, IBM System Storage N series storage systems are included with the RAID-DP configuration. However, IBM System Storage N series Gateways are not. The initial configuration has three drives that are configured, as shown in Figure 10-5.
  • Page 172: Raid-Dp And Double Parity

    The additional RAID-DP parity disk stores diagonal parity across the disks in a RAID-DP group, as shown in Figure 10-6. These two parity stripes in RAID-DP provide data protection if two disk failures occur in the same RAID group. Figure 10-6 RAID 4 and RAID-DP
  • Page 173: Internal Structure Of Raid-Dp

    10.4.1 Internal structure of RAID-DP With RAID-DP, the traditional RAID 4 horizontal parity structure is still employed and becomes a subset of the RAID-DP construct; that is, how RAID 4 works on storage is not modified with RAID-DP. Data is written out in horizontal rows with parity calculated for each row in RAID-DP, which is considered the row component of double parity.
  • Page 174: Adding Raid-Dp Double-Parity Stripes

    The first condition is that each diagonal parity stripe misses one (and only one) disk, but each diagonal misses a different disk. Figure 10-9 shows the omitted diagonal parity stripe (white blocks), which is not stored on the second diagonal parity disk.
  • Page 175: Raid-Dp Reconstruction

    Omitting the one diagonal stripe does not affect RAID-DP’s ability to recover all data in a double-disk failure as shown in the reconstruction example. The same RAID-DP diagonal parity conditions that are described in this example are true in real storage deployments. It works even in deployments that involve dozens of disks in a RAID group and millions of rows of data that is written horizontally across the RAID 4 group.
  • Page 176 1, row 1, disk 3 parity (9 - 3 - 2 - 1 = 3). This process is shown in Figure 10-12. Figure 10-12 RAID-DP reconstruction of first horizontal block IBM System Storage N series Hardware Guide...
  • Page 177 The algorithm continues determining whether more diagonal blocks can be re-created. The upper left block is re-created from row parity, and RAID-DP can proceed in re-creating the gray diagonal block in column two, row two, as shown in Figure 10-13. Figure 10-13 RAID-DP reconstruction simulation of gray block column two RAID-DP recovers the gray diagonal block in column two, row two.
  • Page 178 When the missing diagonal block in the gold stripe is re-created, enough information is available to re-create the missing horizontal block from row parity, as shown in Figure 10-16. Figure 10-16 RAID-DP reconstruction simulation of gold horizontal block IBM System Storage N series Hardware Guide...
  • Page 179: Protection Levels With Raid-Dp

    After the missing block in the horizontal row is re-created, reconstruction switches back to diagonal parity to re-create a missing diagonal block. RAID-DP can continue in the current chain on the red stripe, as shown in Figure 10-17. Figure 10-17 RAID-DP reconstruction simulation of Red diagonal block Again, after the recovery of a diagonal block, the process switches back to row parity because it has enough information to re-create data for the one horizontal block.
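The alternating row/diagonal chain described above can be exercised end to end in a small simulation. The layout below is a simplified, hedged model of the RAID-DP scheme (prime p = 5, row parity included in the diagonals, one diagonal omitted), not Data ONTAP's actual on-disk format:

```python
# Miniature model of RAID-DP recovery from a double disk failure.
# Block (disk c, row r) lies on diagonal (c + r) mod p; the diagonal
# parity disk stores diagonals 0..p-2, and diagonal p-1 is omitted.
import random

p = 5
COLS, ROWS = p, p - 1                      # 4 data disks + row parity disk

random.seed(7)
disks = [[random.randrange(256) for _ in range(ROWS)] for _ in range(COLS - 1)]
disks.append([0] * ROWS)                   # row parity disk
for r in range(ROWS):                      # row parity = XOR of data in the row
    for c in range(COLS - 1):
        disks[COLS - 1][r] ^= disks[c][r]
diag = [0] * (p - 1)                       # stored diagonal parities
for c in range(COLS):
    for r in range(ROWS):
        g = (c + r) % p
        if g < p - 1:
            diag[g] ^= disks[c][r]

original = [col[:] for col in disks]
failed = {1, 3}                            # double disk failure
missing = {(c, r) for c in failed for r in range(ROWS)}
for c in failed:
    disks[c] = [0] * ROWS

# Recover by repeatedly applying whichever parity equation has exactly
# one unknown left, alternating between rows and stored diagonals.
while missing:
    for r in range(ROWS):                  # row equations (XOR of row == 0)
        gaps = [(c, r) for c in range(COLS) if (c, r) in missing]
        if len(gaps) == 1:
            c, r2 = gaps[0]
            disks[c][r2] = 0
            for c2 in range(COLS):
                if c2 != c:
                    disks[c][r2] ^= disks[c2][r2]
            missing.remove((c, r2))
    for g in range(p - 1):                 # stored diagonal equations
        cells = [(c, r) for c in range(COLS) for r in range(ROWS)
                 if (c + r) % p == g]
        gaps = [cell for cell in cells if cell in missing]
        if len(gaps) == 1:
            c, r = gaps[0]
            disks[c][r] = diag[g]
            for cell in cells:
                if cell != (c, r):
                    disks[c][r] ^= disks[cell[0]][cell[1]]
            missing.remove((c, r))

print(disks == original)  # True: both failed disks fully rebuilt
```

As in the narrated example, each diagonal misses exactly one disk, so some diagonal always starts the chain with a single unknown, and row and diagonal recovery then alternate until every block is rebuilt.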
  • Page 180 [aggr | vol] options name raidtype raid_dp command. Figure 10-20 shows the example itso volume as a traditional RAID4 volume. Figure 10-20 The vol status command showing itso volume as traditional RAID4 volume IBM System Storage N series Hardware Guide...
  • Page 181 When the command is entered, the aggregate or traditional volumes (as in the following examples) are instantly denoted as RAID-DP. However, all diagonal parity stripes still must be calculated and stored on the second parity disk. Figure 10-21 shows the use of the command to convert the volume.
  • Page 182 Figure 10-26 shows the completed process. If a RAID-DP group is converted to RAID4, each RAID group’s second diagonal parity disk is released and put back into the spare disk pool. Figure 10-26 RAID4 conversion instantaneous completion results IBM System Storage N series Hardware Guide...
  • Page 183: Hot Spare Disks

    RAID4 and RAID-DP. Therefore, little to no changes are required for standard operational procedures that are used by IBM System Storage N series administrators. The commands that you use for management activities on the storage controller are the same regardless of the mix of RAID4 and RAID-DP aggregates or traditional volumes.
  • Page 184 During reconstruction, file service can slow down. After the storage system is finished reconstructing data, replace the failed disks with new hot spare disks as soon as possible. Hot spare disks must always be available in the system. IBM System Storage N series Hardware Guide...
  • Page 185: Chapter 11. Core Technologies

    Core technologies Chapter 11. This chapter describes N series core technologies, such as the WAFL file system, disk structures, and non-volatile RAM (NVRAM) access methods. This chapter includes the following sections: Write Anywhere File Layout Disk structure NVRAM and system memory...
  • Page 186: Write Anywhere File Layout

    11.1 Write Anywhere File Layout Write Anywhere File Layout (WAFL) is the N series file system. At the core of Data ONTAP is WAFL, which is N series proprietary software that manages the placement and protection of storage data. Integrated with WAFL is N series RAID technology, which includes single and double parity disk protection.
  • Page 187: Disk Structure

    ONTAP does everything, but particularly in the operation of RAID and the operation of Snapshot technology. 11.2 Disk structure Closely integrated with N series RAID is the aggregate, which forms a storage pool by concatenating RAID groups. The aggregate controls data placement and space management activities.
  • Page 188: Nvram And System Memory

    Caching writes early in the stack allows the N series to optimize writes to disk, even when writing to double-parity RAID. Most other storage vendors cache writes at the device driver level.
  • Page 189: Intelligent Caching Of Write Requests

    This process is used because writing to memory is much faster than writing to disk. The N series provides NVRAM in all of its current storage systems. However, the Data ONTAP operating environment uses NVRAM in a much different manner than typical storage arrays.
  • Page 190: Nvram Operation

    NVRAM is not lost. After data gets to an N series storage system, it is treated in the same way whether it came through a SAN or NAS connection. As I/O requests come into the system, they first go to RAM.
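The write path described here, logging to NVRAM, acknowledging immediately, and flushing to disk later at a consistency point, can be sketched as follows. This is an illustrative model, not Data ONTAP's implementation; the flush threshold is a made-up trigger standing in for the real consistency point conditions:

```python
# Sketch of NVRAM write journaling: a write is logged and acknowledged
# without waiting for the disk, and accumulated writes are flushed to
# disk as one optimized batch at a consistency point.

class JournaledStorage:
    def __init__(self, flush_threshold=3):
        self.nvram_log = []          # battery-backed journal (modeled as a list)
        self.disk = {}               # on-disk state
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        """Log the write and acknowledge without waiting for the disk."""
        self.nvram_log.append((block, data))
        if len(self.nvram_log) >= self.flush_threshold:
            self.consistency_point()
        return "ack"                 # client sees low write latency

    def consistency_point(self):
        """Flush the journaled writes to disk as one batch."""
        for block, data in self.nvram_log:
            self.disk[block] = data
        self.nvram_log.clear()

st = JournaledStorage()
st.write(0, b"a"); st.write(1, b"b")
print(len(st.disk))                  # 0: acknowledged but not yet on disk
st.write(2, b"c")                    # threshold reached, consistency point runs
print(sorted(st.disk))               # [0, 1, 2]
```

Because the journal survives a power loss, acknowledged writes are never lost even though they reach disk later.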
  • Page 191 NVRAM can function faster if the disks can keep up. For more information about technical details of N series RAID-DP, see IBM System Storage N Series Implementation of RAID Double Parity for Data Protection, REDP-4169, which is available at this website: http://www.redbooks.ibm.com/abstracts/redp4169.html?Open...
  • Page 192: N Series Read Caching Techniques

    Read caching is the process of deciding which data to keep or prefetch into storage system memory to satisfy read requests more rapidly. The N series uses a multilevel approach to read caching to break the link between random read performance and spindle count. This...
  • Page 193 Deciding which data to prefetch into system memory The N series read ahead algorithms are designed to anticipate what data will be requested and read it into memory before the read request arrives. Because of the importance of effective read ahead algorithms, IBM performed a significant amount of research in this area.
  • Page 194
  • Page 195: Chapter 12. Flash Cache

    Chapter 12. This chapter provides an overview of Flash Cache and all of its components. This chapter includes the following sections: About Flash Cache Flash Cache module How Flash Cache works © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 196: About Flash Cache

    12.2 Flash Cache module The Flash Cache option offers a way to optimize the performance of an N series storage system by improving throughput and latency. It also reduces the number of disk spindles and shelves that are required, and the power, cooling, and rack space requirements.
  • Page 197: Data Ontap Disk Read Operation

    12.3.1 Data ONTAP disk read operation In Data ONTAP before Flash Cache, when a client or host needed data and it was not in the system’s memory, a disk read resulted. Essentially, the system asked itself if it had the data in RAM and when the answer was no, it went to the disks to retrieve it.
  • Page 198: Saving Useful Data In Flash Cache

    Data is always read from disk into memory and then stored in the module when it must be cleared from system memory, as shown in Figure 12-4. Figure 12-4 Data is stored in Flash Cache IBM System Storage N series Hardware Guide...
  • Page 199: Reading Data From Flash Cache

    12.3.4 Reading data from Flash Cache When the data is stored in the module, Data ONTAP can check to see whether the data is there the next time it is needed, as shown in Figure 12-5. Figure 12-5 Read request with Flash Cache module installed When the data is there, access to it is far faster than having to go to disk.
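The read path above, check system memory, then the flash module, then disk, with memory evictions saved into the module, behaves like a victim cache. A minimal sketch under that assumption (sizes and LRU policy are illustrative, not Flash Cache's actual algorithm):

```python
# Sketch of Flash Cache as a victim cache: blocks evicted from system
# memory land in the flash module, and reads check memory, then flash,
# before resorting to a disk read.
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, ram_size, flash_size):
        self.ram = OrderedDict()
        self.flash = OrderedDict()
        self.ram_size, self.flash_size = ram_size, flash_size

    def read(self, block):
        if block in self.ram:
            return "ram"
        if block in self.flash:              # far faster than a disk read
            self.flash.pop(block)
            self._insert_ram(block)
            return "flash"
        self._insert_ram(block)              # fetched from disk into memory
        return "disk"

    def _insert_ram(self, block):
        self.ram[block] = True
        if len(self.ram) > self.ram_size:    # evict into the flash module
            victim, _ = self.ram.popitem(last=False)
            self.flash[victim] = True
            if len(self.flash) > self.flash_size:
                self.flash.popitem(last=False)

cache = TieredReadCache(ram_size=2, flash_size=4)
print(cache.read(1))   # disk
print(cache.read(2))   # disk
print(cache.read(3))   # disk; block 1 is evicted from RAM into flash
print(cache.read(1))   # flash hit instead of a second disk read
```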
  • Page 200
  • Page 201: Chapter 13. Disk Sanitization

    This chapter includes the following sections: Data ONTAP disk sanitization Data confidentiality Data ONTAP sanitization operation Disk Sanitization with encrypted disks © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 202: Data Ontap Disk Sanitization

    13.1 Data ONTAP disk sanitization IBM N series Data ONTAP includes Disk Sanitization as a separately licensable, no-cost feature on every offered system. When enabled, this feature logically deletes all data on one or more physical disk drives. It does so in a manner that precludes recovery of that data by any known recovery methods.
  • Page 203: Technology Drivers

    As technology advances, upgrades, disk subsystem replacements, and data lifecycle management require the migration of data. To ensure that the data movement does not create a security risk by leaving data patterns behind, IBM System Storage N series offers the disk sanitization feature, as shown in Figure 13-1.
  • Page 204: Data Ontap Sanitization Operation

    By using Data ONTAP, IBM System Storage N series offers an effective sanitization method that reduces costs and risks. The disk sanitization algorithms are built into Data ONTAP and require only licensing. No other software installation is required. 13.3 Data ONTAP sanitization operation With the disk sanitize start command, Data ONTAP begins the sanitization process on each of the specified disks.
  • Page 205 If you must cancel the sanitization process, use the disk sanitize abort command. If the specified disks are undergoing the disk formatting phase of sanitization, the abort does not occur until the disk formatting is complete. At that time, Data ONTAP displays a message that the sanitization was stopped.
  • Page 206: Disk Sanitization With Encrypted Disks

    Tip: To render a disk permanently unusable and the data on it inaccessible, set the state of the disk to end-of-life by using the disk encrypt destroy command. This command works on spare disks only. IBM System Storage N series Hardware Guide...
  • Page 207: Chapter 14. Designing An N Series Solution

    Designing an N series solution Chapter 14. This chapter describes the issues to consider when you are sizing an IBM System Storage N series storage system for your environment. A complete explanation is beyond the scope of this book, so only high-level planning considerations are presented.
  • Page 208: Primary Issues That Affect Planning

    The performance that is required from the storage subsystem is driven by the number of client systems that rely on the IBM System Storage N series, and the applications that are running on those systems. Performance involves a balance of the following factors:...
  • Page 209 Raw capacity is determined by taking the number of disks that are connected and multiplying by their capacity. For example, 24 disks (the maximum in the IBM System Storage N series disk shelves) x 2 TB per drive is a raw capacity of approximately 48,000 GB, or 48 TB.
  • Page 210 – RAID-DP: Protects against a double disk failure in any RAID group and requires that two disks be reserved for RAID parity information (not user data). With the IBM System Storage N series, the maximum protection against loss is provided by using the RAID-DP facility. RAID-DP has many thousands of times better availability than traditional RAID-4 (or RAID-5), often for little or no more capacity.
  • Page 211 (regardless of disk type) behind an N Series Gateway. – Zone checksum (ZCS) • Zone checksums were used on older N Series SATA storage. • They imposed only a small capacity overhead, with 63/64 sectors remaining available for user data, but they had a negative effect on performance.
  • Page 212 In the latest version on ONTAP, the default aggregate snapshot reserve is 0% Do not change this setting unless you are using a MetroCluster or SyncMirror configuration. In those cases, change it to 5%. IBM System Storage N series Hardware Guide...
  • Page 213 Another factor that affects capacity is imposed by the file system. The Write Anywhere File Layou WAFL) file system that is used by the IBM System Storage N series has less effect than many file systems, but the effect still exists. WAFL has a memory usage equal to 10% of the formatted capacity of a drive.
  • Page 214: Other Effects Of Snapshot

    N series controller: – Negligible effect on the performance of the controller The N series snapshots use a redirect-on-write design. This design avoids most of the performance effect that is normally associated with Snapshot creation and retention (as seen in traditional copy-on-write snapshots on other platforms).
  • Page 215: Capacity Overhead Versus Performance

    Adding disk drives is one simple example. The disk drives and shelves themselves are all hot-pluggable, and can be added or replaced without service disruption. However, what if all available space in a rack is used by full disk shelves? How is a disk drive added? Chapter 14. Designing an N series solution...
  • Page 216: Application Considerations

    It is especially important in this environment to install with maximum flexibility in mind from the beginning. This environment also tends to use many Snapshot images to maximize the protection that is offered to the user. IBM System Storage N series Hardware Guide...
  • Page 217 Microsoft Exchange Microsoft Exchange has various parameters that affect the total storage that is required of N series. These parameters are shown in the following examples: Number of instances With Microsoft Exchange, you can specify how many instances of an email or document are saved.
  • Page 218 Number of storage groups Because a storage group cannot span N series storage systems, the number of storage groups affects sizing. There is no recommendation on number of storage groups per IBM System Storage N series storage system. However, the number and type of users per storage group helps determine the number of storage groups per storage system.
  • Page 219: Backup Servers

    Each IBM System Storage N series platform has different capabilities in each of these areas. The planning process must take these characteristics into account to ensure that the backup server is capable of the workload expected.
  • Page 220: Resiliency To Failure

    These agreements affect the data and applications that run on the IBM System Storage N series storage systems. If it is determined that an Active/Active configuration is needed, it affects sizing. Rather than sizing for all data, applications, and clients that are serviced by one IBM System Storage N series node, the workload is instead divided over two or more nodes.
  • Page 221 An example is a product ordering system with the data storage or application on an IBM System Storage N series storage system. Any effect on the ability to place an order affects sales.
  • Page 222: Summary

    IBM System Storage N series storage system. Other sources of specific planning templates exist or are under development. You can find them by using web search queries.
  • Page 223: Part 2. Installation And Administration

    This part includes the following chapters: Chapter 15, “Preparation and installation” on page 205 Chapter 16, “Basic N series administration” on page 213 © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 224
  • Page 225: Chapter 15. Preparation And Installation

    Preparation and installation Chapter 15. This chapter describes the N series System Manager tool. By using this tool, you can manage the N series storage system even with limited experience and knowledge of the N series hardware and software features. System Manager helps with basic setup and administration tasks, and can help you manage multiple IBM N series storage systems from a single application.
  • Page 226: Installation Prerequisites

    This section describes, at a high level, some of the planning and prerequisite tasks that must be completed for a successful N series implementation. For more information, see the N series Introduction and Planning Guide, S7001913, which is available at this website: http://www-304.ibm.com/support/docview.wss?crawler=1&uid=ssg1S7001913...
  • Page 227: Configuration Worksheet

    Sufficient people to safely install the equipment into a rack: – Two or three people are required, depending on the hardware model – See the specific hardware installation guide for your equipment 15.2 Configuration worksheet Before you power on your storage system for the first time, use the configuration worksheet (see Table 15-1) to gather information for the software setup process.
  • Page 228 Gateway name IPv4 address IPv6 address HTTP Location of HTTP directory Domain name Server address 1 Server address 2 Server address 3 Domain name Server address 1 Server address 2 Server address 3 IBM System Storage N series Hardware Guide...
  • Page 229 Type of information Your values CIFS Windows domain WINS servers (1, 2, 3) Multiprotocol or NTFS only filer? Should CIFS create default /etc/passwd and /etc/group files? Enable NIS group caching? Hours to update the NIS cache? CIFS server name (if different from default) User authentication style: (1) Active Directory domain...
  • Page 230: Initial Hardware Setup

    (if using Storage Encryption) Key tag name 15.3 Initial hardware setup The initial N series hardware setup includes the following steps: 1. Hardware Rack and Stack: Storage controllers, disk shelves, and so on 2. Connectivity: – Storage controller to disk shelves –...
  • Page 231: Troubleshooting If The System Does Not Boot

    15.4 Troubleshooting if the system does not boot This section is an excerpt from the Data ONTAP 8.1 7-mode software setup guide. If your system does not boot when you power it on, you can troubleshoot the problem by completing the following steps: 1.
  • Page 232 If your system... Then... Starts successfully Proceed to setting up the software. Does not start successfully Call IBM technical support. The system might not have the boot image downloaded on the boot device. IBM System Storage N series Hardware Guide...
  • Page 233: Chapter 16. Basic N Series Administration

    Basic N series administration Chapter 16. This chapter describes how to perform basic administration tasks on IBM System Storage N series storage systems. This chapter includes the following sections: Administration methods Starting, stopping, and rebooting the storage system © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 234: Administration Methods

    FilerView: This interface is still available for systems that are running ONTAP 7.3 or earlier, but it was removed in ONTAP 8.1. To access a pre-8.1 N series through FilerView, open your browser and go to the following URL: http://<filername or ip-address>/na_admin To proceed, specify a valid user name and password.
  • Page 235 Figure 16-1 The help and ? commands The manual pages can be accessed by entering the man command. Figure 16-2 shows a detailed description of a command and lists options (man <command>). Figure 16-2 Results of a man command Chapter 16. Basic N series administration...
  • Page 236: N Series System Manager

    For more information about System Manager, see the following IBM NAS support website: http://www.ibm.com/storage/support/nas/ 16.1.4 OnCommand OnCommand is an operations-manager solution for managing multiple N series storage systems that provides the following features: Scalable management, monitoring, and reporting software for enterprise-class...
  • Page 237: Starting The Ibm System Storage N Series Storage System

    16.2.1 Starting the IBM System Storage N series storage system The IBM System Storage N series boot code is on a CompactFlash card. After the system is turned on, IBM System Storage N series boots automatically from this card. You can enter an alternative boot mode by pressing Ctrl+C and selecting the boot option.
  • Page 238 9.1.39.107() (ITSO-N1\administrator - root) (using security signatures) itsosj-n1> With the IBM System Storage N series storage systems, you can specify which users receive CIFS shutdown messages. By running the cifs terminate command, Data ONTAP by default sends a message to all open client connections. This setting can be changed by running the following command: options cifs.shutdown_msg_level 0 | 1 | 2...
  • Page 239 When you shut down an N series, there is no need to specify the cifs terminate command. During shutdown, this command is run by the operating system automatically. Tip: Workstations that are running Windows 95, 98, or Windows for Workgroups do not see the notification unless they are running WinPopup.
  • Page 240 Halting the N series You can use the command line or FilerView interface to stop the N series. You can use the halt command on the CLI to perform a graceful shutdown. The -t option causes the system to stop after the number of minutes that you specify (for example, halt -t 5). The halt command stops all services and shuts down the system gracefully to the Common Firmware Environment (CFE) prompt.
  • Page 241 For more information about setting up netboot, see this website: http://www.ibm.com/storage/support/nas/ You often boot the N series after you run the halt command with the boot_ontap or bye command. These commands end the CFE prompt and restart the N series, as shown in Example 16-9.
  • Page 242: Rebooting The System

    The System Storage N series systems can be rebooted from the command line or from the NSM interface. Rebooting from the CLI halts the N series and then restarts it, as shown in Example 16-10. Example 16-10 Rebooting from the command-line interface...
  • Page 243: Part 3. Client Hardware Integration

    This part describes the functions and installation of the host utility kit software. It also describes how to configure a client system to SAN boot from an N series, and provides a high-level description of host multipathing on the N series platform.
  • Page 244
  • Page 245: Chapter 17. Host Utilities Kits

    This chapter provides an overview of the purpose, contents, and functions of Host Utilities Kits (HUKs) for IBM N series storage systems. It describes why HUKs are an important part of any successful N series implementation and the connection protocols that are supported. It also provides a detailed example of a Windows HUK installation.
  • Page 246: Host Utilities Kits

    17.2.2 Current supported operating environments IBM N series provides a SAN Host Utilities kit for every supported OS. This is a set of data collection applications and configuration scripts, which includes SCSI and path timeout values and path retry counts.
  • Page 247: Host Utilities Functions

    SCSI and path timeout values and HBA parameters. These timeouts are modified to ensure the best performance and to handle storage system events. Host Utilities ensure that hosts correctly handle the behavior of the IBM N series storage system. On other operating systems, such as those based on Linux and UNIX, timeout parameters must be modified manually.
  • Page 248: Preparation

    Add the iSCSI or FCP license and start the target service. The Fibre Channel and iSCSI protocols are licensed features of Data ONTAP software. If you must purchase a license, contact your IBM or sales partner representative. IBM System Storage N series Hardware Guide...
  • Page 249 Next, verify your cabling. For more information, see the FC and iSCSI Configuration Guide, which is available at this website: http://www.ibm.com/storage/support/nas/ Configuring Fibre Channel HBAs and switches Complete the following steps to install and configure one or more supported Fibre Channel...
  • Page 250 You cannot Hyper-V select the Microsoft MPIO Multipathing Support for iSCSI option. Microsoft does not support MPIO with Windows XP. A Windows XP iSCSI connection to IBM N series storage is supported only on Hyper-V virtual machines. Windows Vista...
  • Page 251: Running The Host Utilities Installation Program

    See the IBM NAS support website at: http://www.ibm.com/storage/support/nas/ b. Sign in with your IBM ID and password. If you do not have an IBM ID or password, click Register, follow the online instructions, and then sign in. Use the same process if you are adding new N series systems and serial numbers to an existing registration.
  • Page 252: Host Configuration Settings

    Select the N series software that you want to download, and then select the Download view. d. Click Software Packages on the website that is shown and follow the online instructions to download the software. 3. Run the executable file, and then follow the instructions in the window.
  • Page 253: Host Utilities Registry And Parameters Settings

    The WWPN resembles the following example: WWPN: 10:00:00:00:c9:73:5b:90 For Windows Server 2008 or Windows Server 2008 R2, use the Windows Storage Explorer application to display the WWPNs. For Windows Server 2003, use the Microsoft fcinfo.exe program. You also can use the HBA manufacturer's management software if it is installed on the Windows host.
  • Page 254: Setting Up Luns

    SnapDrive for Windows software, which automatically creates igroups. Consider the following points for initiator groups (igroups): igroups are protocol-specific. For Fibre Channel connections, create a Fibre Channel igroup by using all WWPNs for the host. IBM System Storage N series Hardware Guide...
  • Page 255: Mapping Luns For Windows Clusters

    For iSCSI connections, create an iSCSI igroup that uses the iSCSI node name of the host. For systems that use both FC and iSCSI connections to the same LUN, create two igroups: One for FC and one for iSCSI. Then, map the LUN to both igroups. There are many ways to create and manage initiator groups and LUNs on your storage system.
  • Page 256: Accessing Luns On Hosts

    17.5.5 Accessing LUNs on hosts This section addresses how to make LUNs on N series storage subsystems accessible to hosts. Accessing LUNs on hosts that use Veritas Storage Foundation To enable the host that runs Veritas Storage Foundation to access a LUN, you must make the LUN visible to the host.
  • Page 257: Chapter 18. Boot From San

    This chapter describes the process to set up a Fibre Channel Protocol (FCP) SAN boot for your server. This process uses a LUN from an FCP SAN-attached N series storage system. It explains the concept of SAN boot and general prerequisites for using this technique.
  • Page 258: Overview

    You can have data (boot image and application data) mirrored over the SAN between a primary site and a recovery site. With this configuration, servers can take over at the secondary site if a disaster occurs on servers at the primary site. IBM System Storage N series Hardware Guide...
  • Page 259: Configuring San Boot For Ibm System X Servers

    This implementation does not make any testing statements about supported configurations. For more information, see the IBM System Storage N series interoperability matrix for FC and iSCSI SAN, which is available at this website: http://www.ibm.com/systems/storage/network/interophome.html Review the supported configuration for your server and operating system.
  • Page 260: Preferred Practices

    (2865-A20) 18.2.2 Preferred practices The following guidelines help you get the most out of your N series: Fibre Channel queue depth: To avoid host queuing, the host queue depths should not exceed the target queue depths on a per-target basis. For more information about target...
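    The queue-depth guideline above — the sum of host queue depths must not exceed the target port's queue depth on a per-target basis — reduces to a simple division when the depth is shared evenly. The target depth of 256 below is a hypothetical value for illustration, not a documented N series limit:

```python
# Guideline sketch: divide the target port's queue depth evenly among
# hosts so their combined depth never exceeds it (256 is hypothetical).
def max_host_queue_depth(target_queue_depth, host_count):
    return target_queue_depth // host_count

print(max_host_queue_depth(256, 8))  # 32 per host keeps the target within its limit
```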
  • Page 261 – Some administrators that are concerned about paging performance might opt to keep the pagefile on a local disk while storing the operating system on an N series SAN. There are issues with this configuration as well.
  • Page 262: Basics Of The Boot Process

    HBA, and so on), only one can be the actual boot device. The BIOS determines the correct boot device order based on each device’s ready status and the stored configuration.
  • Page 263: Configuring San Booting Before Installing Windows Or Linux Systems

    3. Configure the PC BIOS boot order to make the LUN the first disk device. For more information about SAN booting, including restrictions and configuration recommendations, see Support for FCP/iSCSI Host Utilities on Windows at this website: https://www-304.ibm.com/systems/support/myview/supportsite.wss/selectproduct?taski nd=7&brandind=5000029&familyind=5364556&typeind=0&modelind=0&osind=0&psid=sr&conti nue.x=1 For more information about Linux Support for FCP/iSCSI Host Utilities, see this website: http://www-304.ibm.com/systems/support/myview/supportsite.wss/selectproduct?taskin...
  • Page 264 4. Record the WWPN for the HBA. Obtaining the WWPN by using QLogic Fast!UTIL To obtain the WWPN by using QLogic Fast!UTIL, complete the following steps: 1. Reboot the host. 2. Press Ctrl+Q to access BootBIOS. IBM System Storage N series Hardware Guide...
  • Page 265 3. BootBIOS displays a menu of available adapters. Select the appropriate HBA and press Enter, as shown in Figure 18-4. Figure 18-4 Selecting host adapter 4. The Fast!UTIL options are displayed. Select Configuration Settings and press Enter, as shown in Figure 18-5. Figure 18-5 Fast!UTIL Options panel Chapter 18.
  • Page 266 AMD Opteron 64-bit systems. It also enables you to designate a Fibre Channel drive, such as a storage system LUN, as the host's boot device. BootBIOS firmware is installed on the HBA that you purchased. IBM System Storage N series Hardware Guide...
  • Page 267 Requirement: Ensure that you are using the version of firmware that is required by this FCP Windows Host Utility. BootBIOS firmware is disabled by default. To configure SAN booting, you must first enable BootBIOS firmware and then configure it to boot from a SAN disk. You can enable and configure BootBIOS on the HBA by using one of the following tools: Emulex LP6DUTIL.EXE The default configuration for the Emulex expansion card for x86 BootBIOS in the Universal...
  • Page 268 4. From the Configure Adapter’s Parameters menu, select 1 to enable the BIOS, as shown in Figure 18-10. Figure 18-10 Configure the adapter’s parameters panel 5. This panel shows the BIOS disabled. Select 1 to enable the BIOS, as shown in Figure 18-11. Figure 18-11 Enable/disable BIOS panel IBM System Storage N series Hardware Guide...
  • Page 269 The BIOS is now enabled, as shown in Figure 18-12. Figure 18-12 Enable BIOS success panel 6. Press Esc to return to the configure adapter’s parameters menu, as shown in Figure 18-13. Figure 18-13 Configure adapter’s parameters panel Chapter 18. Boot from SAN...
  • Page 270 8. The eight boot entries are zero by default. The primary boot device is listed first (it is the first bootable device). Select a boot entry to configure and select 1, as shown in Figure 18-15. Figure 18-15 Configure boot device panel IBM System Storage N series Hardware Guide...
  • Page 271 Clarification: In target device failover, if the first boot entry fails because of a hardware error, the system can boot from the second bootable entry. If the second boot entry fails, the system boots from the third bootable entry, and so on, up to eight distinct entries.
  • Page 272 Use the WWPN for all boot-from-SAN configurations. Select item 1 to boot this device by using the WWPN, as shown in Figure 18-19. Figure 18-19 Selecting how the boot device is identified IBM System Storage N series Hardware Guide...
  • Page 273 13.After this process is complete, press X to exit and save your configuration, as shown in Figure 18-20. Your HBA’s BootBIOS is now configured to boot from a SAN on the attached storage device. Figure 18-20 Exit Emulex Boot Utility and saved boot device panel 14.Press Y to reboot your system, as shown in Figure 18-21.
  • Page 274 If the primary boot device is unavailable, the host boots from the next available device in the list. Select the first Fibre Channel adapter port and press Enter, as shown in Figure 18-23. Figure 18-23 QLogic Fast!UTIL menu IBM System Storage N series Hardware Guide...
  • Page 275 4. Select Configuration Settings and press Enter, as shown in Figure 18-24. Figure 18-24 Configuration settings for QLE2462 adapter panel 5. Select Adapter Settings and press Enter, as shown in Figure 18-25. Figure 18-25 Adapter Settings panel Chapter 18. Boot from SAN...
  • Page 276 Figure 18-26 Enabling host adapter BIOS 7. Press Esc to return to the Configuration Settings panel. Scroll to Selectable Boot Settings and press Enter, as shown in Figure 18-27. Figure 18-27 Accessing selectable boot settings IBM System Storage N series Hardware Guide...
  • Page 277 8. Scroll to Selectable Boot, as shown in Figure 18-28. If this option is disabled, press Enter to enable it. If this option is enabled, go to the next step. Figure 18-28 Enabling selectable boot in Selectable Boot Settings panel 9.
  • Page 278 The BIOS setup program differs depending on the type of PC BIOS that your host is using. This section shows example procedures for the following BIOS setup programs: “IBM BIOS” on page 259 “Phoenix BIOS 4 Release 6” on page 261 IBM System Storage N series Hardware Guide...
  • Page 279 PCI error allocation message during the boot process. To avoid this error, disable the boot options in the HBAs that are not being used for SAN boot installation. To configure the IBM BIOS setup program, complete the following steps: 1. Reboot the host.
  • Page 280 4. Scroll to the PCI Device Boot Priority option and select the slot in which the HBA is installed, as shown in Figure 18-35. Figure 18-35 Selecting PCI Device Boot Priority in Start Options panel IBM System Storage N series Hardware Guide...
  • Page 281: Windows 2003 Enterprise Sp2 Installation

    5. Scroll up to Startup Sequence Options and press Enter. Make sure that the Startup Sequence Option is configured, as shown in Figure 18-36. Figure 18-36 Selecting Hard Disk 0 in Startup Sequence Options panel Phoenix BIOS 4 Release 6 To configure Phoenix BIOS to boot from the Emulex HBA, complete the following steps: 1.
  • Page 282 The following advanced scenarios are not possible in Windows boot from SAN environments: No shared boot images: Windows servers cannot share a boot image. Each server requires its own dedicated LUN to boot. IBM System Storage N series Hardware Guide...
  • Page 283: Windows 2008 Enterprise Installation

    Mass deployment of boot images requires Automated Deployment Services (ADS): Windows does not support mass distribution of boot images. Although cloning of boot images can help here, Windows does not have the tools for distribution of these images. In enterprise configurations, however, Windows ADS can help. Lack of standardized assignment of LUN 0 to controller: Certain vendors’...
  • Page 284 Reboot the server as shown in Figure 18-38. Figure 18-38 Rebooting the server 2. Select an installation language, regional options, and keyboard input, and click Next, as shown in Figure 18-39 on page 265. IBM System Storage N series Hardware Guide...
  • Page 285 Figure 18-39 Selecting the language to install, regional options, and keyboard input 3. Click Install now to begin the installation process, as shown in Figure 18-40. Figure 18-40 Selecting Install now Chapter 18. Boot from SAN...
  • Page 286 4. Enter the product key and click Next, as shown in Figure 18-41. Figure 18-41 Entering the product key 5. Select I accept the license terms and click Next, as shown in Figure 18-42. Figure 18-42 Accepting the license terms IBM System Storage N series Hardware Guide...
  • Page 287 6. Click Custom (advanced) as shown in Figure 18-43. Figure 18-43 Selecting the Custom installation option 7. If the window that is shown in Figure 18-44 does not show any hard disk drives, or if you prefer to install the HBA device driver now, click Load Driver. Figure 18-44 Where do you want to install Windows? window Chapter 18.
  • Page 288 Figure 18-46 Installing Windows window When Windows Server 2008 Setup completes the installation, the server automatically restarts. 11.After Windows Server 2008 restarts, you are prompted to change the administrator password before you can log on. IBM System Storage N series Hardware Guide...
  • Page 289: Red Hat Enterprise Linux 5.2 Installation

    HBAs to the igroup, and install the FCP Windows Host Utilities. 18.2.7 Red Hat Enterprise Linux 5.2 installation This section shows how to install Red Hat Enterprise Linux 5.2 boot from SAN with an IBM System x server. Prerequisite: Always check hardware and software, including firmware and operating system compatibility, before you implement SAN boot in different hardware or software environments.
  • Page 290 WWPN for all other HBAs to the igroup and install the FCP Linux Host Utilities. LUNs that are connected that use a block protocol (for example, iSCSI or FCP) to Linux hosts might require special partition alignment for best performance. For more information, see this website: http://www.ibm.com/support/docview.wss?uid=ssg1S1002716&rs=573 IBM System Storage N series Hardware Guide...
  • Page 291: Boot From San And Other Protocols

    18.3 Boot from SAN and other protocols This section describes the other protocols that you can boot. Implementing them is similar to the boot from SAN with Fibre Channel. 18.3.1 Boot from iSCSI SAN iSCSI boot is a process in which the OS is initialized from a storage disk array across a SAN rather than from the locally attached hard disk drive.
  • Page 292
  • Page 293: Chapter 19. Host Multipathing

    Host multipathing Chapter 19. This chapter introduces the concepts of host multipathing. It addresses the installation steps and describes the management interface for the Windows, Linux, and IBM AIX operating systems. This chapter includes the following sections: Overview Multipathing software options...
  • Page 294: Overview

    19.1 Overview Multipath I/O (MPIO) provides multiple storage paths from hosts (initiators) to their IBM System Storage N series targets. The multiple paths provide redundancy against failures of hardware, such as cabling, switches, and adapters. They also provide higher performance through path aggregation or optimum path selection.
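    The two MPIO benefits named above — redundancy against path failure and distribution of I/O across paths — can be sketched with a minimal round-robin path selector. Real MPIO policies (least queue depth, weighted paths, ALUA awareness) are considerably richer than this:

```python
import itertools

# Conceptual multipath selector: round-robin over healthy paths,
# skipping any path that has failed (a sketch, not a real DSM).
class Multipath:
    def __init__(self, paths):
        self.healthy = {p: True for p in paths}   # path -> still usable?
        self._rr = itertools.cycle(paths)

    def fail(self, path):
        self.healthy[path] = False                # e.g., cable or switch fault

    def pick(self):
        for _ in range(len(self.healthy)):
            path = next(self._rr)
            if self.healthy[path]:
                return path
        raise IOError("all paths down")

mp = Multipath(["fc0", "fc1"])
print(mp.pick())  # fc0
mp.fail("fc0")
print(mp.pick())  # fc1: I/O continues on the surviving path
```

Losing one path is invisible to the application layer, which is the redundancy property the chapter describes.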
  • Page 295: Multipathing Software Options

    The multipathing solution can be provided by the following resources: Third-party vendors: – Storage vendors provide support for their own storage arrays, such as the IBM Data ONTAP DSM for Windows. These solutions often are specific to the particular vendor’s equipment.
  • Page 296: Native Multipathing Solution

    ALUA, the multipath vendor can use standard SCSI commands to determine the access characteristics. ALUA was implemented in Data ONTAP 7.2. iSCSI connections on N series controllers have no secondary path. Because link failover operates differently from Fibre Channel, ALUA is not supported on iSCSI connections.
  • Page 297 3. If SnapDrive is used, verify that there are no settings that disable the ALUA set in the configuration file. ALUA is enabled or disabled on the igroup that is mapped to a LUN on the N series controller. The default ALUA setting in Data ONTAP varies by version and by igroup type. Check the output of the igroup show -v <igroup name>...
  • Page 298
  • Page 299: Part 4. Performing Upgrades

    Performing upgrades Part This part describes the design and operational considerations for nondisruptive upgrades on the N series platform. It also provides some high-level example procedures for common hardware and software upgrades. This part contains the following chapters: Chapter 20, “Designing for nondisruptive upgrades” on page 281 Chapter 21, “Hardware and software upgrades”...
  • Page 300
  • Page 301: Chapter 20. Designing For Nondisruptive Upgrades

    Note: Upgrade the system software or firmware in the following order: 1. System firmware 2. Shelf firmware 3. Disk firmware This chapter includes the following sections: System NDU Shelf firmware NDU Disk firmware NDU ACP firmware NDU RLM firmware NDU © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 302: System Ndu

    – No change to NVRAM format
    – No change to on-disk format
    – Automatic takeover must be possible while the two controllers of the HA pair are running different versions within the same release family
  • Page 303: Supported Data Ontap Upgrades

    20.1.2 Supported Data ONTAP upgrades Support for system NDU differs slightly according to the protocols that are in use on the system. The following sections describe those protocols. Support for NFS environments Table 20-1 shows the major and minor upgrades that have NDU support in an NFS environment.
  • Page 304: System Ndu Hardware Requirements

    For more information, see 20.1.4, “System NDU software requirements” on page 284.
    20.1.3 System NDU hardware requirements
    System NDU is supported on any IBM N series storage controller or gateway hardware platform that supports the HA pair controller configuration. Both storage controllers must be identical platforms.
  • Page 305 N7750 N7950 Restriction: Major NDU from Data ONTAP 7.2.2L1 to 7.3.1 is not supported on IBM N3300 systems that contain aggregates larger than 8 TB. Therefore, a disruptive upgrade is required. Aggregates larger than 8 TB prevent the system from running a minor version NDU from Data ONTAP 7.2.2L1 to 7.2.x.
  • Page 306: Prerequisites For A System Ndu

    Reading the latest documentation
    Review the Data ONTAP Upgrade Guide for the version to which you are upgrading, not the version from which you are upgrading. These documents are available on the IBM NAS Support site at:
    http://www.ibm.com/storage/support/nas/
  • Page 307: Steps For Major Version Upgrades Ndu In Nas And San Environments

    20.1.6 Steps for major version upgrades NDU in NAS and SAN environments The procedural documentation for running an NDU is in the product documentation on the IBM Support site. See the “Upgrade and Revert Guide” of the product documentation for the destination release of the planned upgrade.
  • Page 308: System Commands Compatibility

    20.2 Shelf firmware NDU
    The IBM N series disk shelves incorporate controller modules that support firmware upgrades as a means of providing greater stability or functionality. Because clients require uninterrupted data I/O access, these firmware updates can be performed nondisruptively, depending on the model of the module that is involved.
  • Page 309: Types Of Shelf Controller Module Firmware Ndus Supported

    Manual firmware upgrade
    A manual shelf firmware upgrade before the Data ONTAP NDU operations is the preferred method. Download the most recent firmware from the IBM Support site to the controller’s /etc/shelf_fw directory, then run the storage download shelf command.
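As a hedged sketch (prompts and firmware file names depend on your release and shelf model), the manual sequence is simply the download to /etc/shelf_fw followed by:

```
n6070a> storage download shelf
n6070a> storage download shelf 0b     (optional: limit the update to shelves on adapter 0b)
```

Without an adapter argument, the command updates the shelf modules on all adapters.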
  • Page 310: Upgrading The At-Fcx Shelf Firmware During System Reboot

    HA pair configurations.
    20.3 Disk firmware NDU
    Depending on the configuration, the N series allows you to conduct disk firmware upgrades nondisruptively (without affecting client I/O). Disk firmware NDU upgrades target one disk at a time, which reduces the performance effect and results in zero downtime.
  • Page 311: Overview Of Disk Firmware Ndu

    20.3.2 Upgrading the disk firmware nondisruptively
    Nondisruptive upgrades are performed by downloading the most recent firmware from the IBM Support site to the controller’s /etc/disk_fw directory. Updates start automatically for any disk drives that are eligible for an update. Data ONTAP polls approximately once per minute to detect new firmware in the /etc/disk_fw directory.
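A minimal console sketch of the automatic method follows; the option shown is real, but verify its default value on your release:

```
n6070a> options raid.background_disk_fw_update.enable on
        (copy the new firmware files to /etc/disk_fw; Data ONTAP detects them
         within about a minute and updates eligible disks one at a time)
n6070a> sysconfig -a                  (verify the new disk firmware revisions)
```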
  • Page 312: Acp Firmware Ndu

    To upgrade disk firmware manually, you must download the most recent firmware from the IBM Support site to the controller’s /etc/disk_fw directory. The disk_fw_update command is used to start the disk firmware upgrade. This operation is disruptive to disk drive I/O. It downloads the firmware to both nodes in an HA pair configuration unless software disk ownership is enabled.
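A sketch of the manual method follows; the disk name 0a.16 is a placeholder, and running the command without arguments updates all eligible disks:

```
n6070a> disk_fw_update                (update all disks; disruptive to disk I/O)
n6070a> disk_fw_update 0a.16          (optional: update only the named disk)
```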
  • Page 313: Upgrading Acp Firmware Manually

    To upgrade ACP firmware manually, you must download the most recent firmware from the IBM Support site to the controller’s /etc/acpp_fw directory. Use the storage download acp command to start the ACP firmware upgrade. The firmware is downloaded to all ACPs in an active state unless a specific ACP is identified on the command line.
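A sketch, with the ACP module name shown as a placeholder (the naming format depends on your shelf cabling):

```
n6070a> storage download acp                  (update all ACPs in an active state)
n6070a> storage download acp 0a.001.1         (optional: update a single ACP module)
```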
  • Page 315: Chapter 21. Hardware And Software Upgrades

    Chapter 21. Hardware and software upgrades
    This chapter describes high-level procedures for some common hardware and software upgrades. This chapter includes the following sections:
    Hardware upgrades
    Software upgrades
    © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 316: Hardware Upgrades

    Attention: The high-level procedures that are described in this section are generic in nature. They are not intended to be your only guide to performing a hardware upgrade. For more information about procedures that are specific to your environment, see the IBM Support site.
  • Page 317: Upgrading A Storage Controller Head

    The storage system automatically recognizes the new expansion adapter.
    21.1.3 Upgrading a storage controller head
    An N series controller can be upgraded from an older hardware controller model without the need to migrate any data (“data in place”). For example, to replace an N5000 head with an N6000 head, complete the following steps: 1.
  • Page 318: Upgrading To Data Ontap 7.3

    21.2.1 Upgrading to Data ONTAP 7.3
    To identify the compatible IBM System Storage N series hardware for the supported releases of Data ONTAP, see the IBM System Storage N series Data ONTAP Matrix that is available at this website:
    http://www.ibm.com/storage/support/nas
    Update the installed N series storage system to the latest Data ONTAP release.
  • Page 319: Upgrading To Data Ontap 8.1

    Data ONTAP 8 (DOT 8) requires 64-bit hardware. Older 32-bit hardware is not supported. At the time of this writing, the following systems and hardware are supported:
    N series: N7900, N7700, N6070, N6060, N6040, N5600, N5300, N3040
    Performance acceleration cards (PAM)
  • Page 320 Revert considerations The N series does not support NDU for the revert process for DOT 8 7-mode. The following restrictions apply to the revert process: User data is temporarily offline and unavailable during the revert. You must plan when the data is offline to limit the unavailability window and make it fall within the timeout window for the Host attach kits.
  • Page 321 TUCSON1> revert_to 7.3 autoboot TUCSON1> version Data ONTAP Release 7.3.2: Thu Oct 15 04:39:55 PDT 2009 (IBM) TUCSON1> You can use the netboot option for a fresh installation of the storage system. This installation boots from a Data ONTAP version that is stored on a remote HTTP or Trivial File Transfer Protocol (TFTP) server.
  • Page 322 EXN4000 shelf. An upgrade is performed from DOT 7.3.7. If a clean installation is required, DOT 8.1 7-mode also supports the netboot process. First, review the current system configuration by using the sysconfig -a command. The output is shown in Example 21-4 on page 303. IBM System Storage N series Hardware Guide...
  • Page 323 Example 21-4 sysconfig command N6070A> sysconfig -a Data ONTAP Release 7.3.7: Thu May 3 04:32:51 PDT 2012 (IBM) System ID: 0151696979 (N6070A); partner ID: 0151697146 (N6070B) System Serial Number: 2858133001611 (N6070A) System Rev: A1 System Storage Configuration: Multi-Path HA System ACP Connectivity: NA slot 0: System Board 2.6 GHz (System Board XV A1)
  • Page 324 Attention: Ensure that all firmware is up to date. If you are experiencing long boot times, you can disable the auto update of disk firmware before you download Data ONTAP by using the following command: options raid.background_disk_fw_update.enable off IBM System Storage N series Hardware Guide...
  • Page 325 Figure 21-2 Boot loader Next, run the autoboot command and perform another reboot if DOT 8 did not load immediately after the flash update. After the boot process is complete, verify the version by running the version and sysconfig commands, as shown in Example 21-6. Example 21-6 Version and sysconfig post upgrade N6070A>...
  • Page 327: Part 5

    Part 5. Appendixes
    © Copyright IBM Corp. 2012, 2014. All rights reserved.
  • Page 329: Appendix A. Getting Started

    Appendix A. Getting started
    This appendix provides information to help you document, install, and set up your IBM System Storage N series storage system. This appendix includes the following sections:
    Preinstallation planning
    Start with the hardware
    Power on N series...
  • Page 330: Preinstallation Planning

    ONTAP, see the Cluster Installation and Administration Guide or Active/Active Configuration Guide GC26-7964. 4. For more information about how to set up the N series Data ONTAP, see IBM System Storage N series Data ONTAP Software Setup Guide, GC27-2206. This document describes how to set up and configure new storage systems that run Data ONTAP software.
  • Page 331 Table A-1 provides a worksheet for setting up the node. Table A-1 Initial worksheet Types of information Your values Storage system Host name If the storage system is licensed for the Network File System (NFS) protocol, the name can be no longer than 32 characters.
  • Page 332 Server address 1, 2, 3 Domain name Server address 1, 2, 3 Customer Primary Name contact Phone Alternative phone Email address or IBM Web ID Secondary Name Phone Alternative phone Email address or IBM Web ID IBM System Storage N series Hardware Guide...
  • Page 333 Machine location Business name Address City State Country code (value must be two uppercase letters) Postal code CIFS Windows domain WINS servers Multiprotocol or NTFS only storage system Should CIFS create default etc/passwd and etc/group files? Enter y here if you have a multiprotocol environment. Default UNIX accounts are created that are used when user mapping is run.
  • Page 334: Start With The Hardware

    After initial setup, you access the systems and manage the disk resources from a remote console by using a web browser or the command line; before that, use a serial port. The ASCII terminal console enables you to monitor the boot process, configure your N series system after it boots, and perform system administration.
  • Page 335: Power On N Series

    2. Connect the DB-9 null modem cable to the DB-9 to RJ-45 adapter cable. 3. Connect the RJ-45 end to the console port on the N series system and the other end to the ASCII terminal. 4. Connect to the ASCII terminal console.
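The ASCII terminal session typically uses the following serial settings; these are the usual N series console defaults, but verify them against the documentation for your model:

```
Baud rate    : 9600
Data bits    : 8
Parity       : none
Stop bits    : 1
Flow control : none
```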
  • Page 336 Fibre Channel adapter 0a. Wed May 2 03:01:27 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b. Data ONTAP Release 7.2.4L1: Wed Nov 21 06:07:37 PST 2007 (IBM) Copyright (c) 1992-2007 Network Appliance, Inc. Starting boot on Wed May 2 03:01:12 GMT 2007 Wed May 2 03:01:28 GMT [nvram.battery.turned.on:info]: The NVRAM battery is turned...
  • Page 337 d. At the 1-5 special boot menu, choose option 4 or option 4a. Option 4 creates a RAID 4 traditional volume. Selecting option 4a creates a RAID-DP aggregate with a root FlexVol. The size of the root flexvol is dependant upon platform type, as shown in Example A-3.
  • Page 338 “Updating Data ONTAP” on page 319. 3. The system begins to boot. Complete the initial setup by answering all the installation questions as in the initial worksheet. For more information, see the IBM System Storage Data ONTAP Software Setup Guide, GA32-0530.
  • Page 339: Updating Data Ontap

    6. Repeat these steps on the second filer for N series with model A20 or A21.
    Updating Data ONTAP
    To identify the compatible IBM System Storage N series hardware for the supported releases of Data ONTAP, see the IBM System Storage N series Data ONTAP Matrix that is available at this website:
    http://www.ibm.com/support/docview.wss?uid=ssg1S7001786
    Update the installed N series storage system to the latest Data ONTAP release.
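As an alternative to the manual file copy described next, the Data ONTAP software command can fetch and install the package directly from an HTTP server; in this sketch the server name and package file name are placeholders:

```
n3300a> software get http://webserver/737_setup_q.exe
n3300a> software install 737_setup_q.exe
n3300a> download
```

As with the manual method, the download command writes the new system files to the boot device; a reboot is still required to run the new release.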
  • Page 340: Obtaining The Data Ontap Software From The Ibm Nas Website

    Obtaining the Data ONTAP software from the IBM NAS website
    To obtain Data ONTAP, complete the following steps:
    1. Log in to IBM Support by using a registered user account at this website:
    https://www-947.ibm.com/support/entry/myportal/overview/hardware/system_storage/network_attached_storage_%28nas%29/n_series_software/data_ontap
    2. Enter a search query for Data ONTAP under Search support and downloads.
  • Page 341: Installing Data Ontap System Files

    3. Select the Data ONTAP version.
    4. Select the installation kit that you want to download. Select and confirm the license agreement to start downloading the software.
    Installing Data ONTAP system files
    You can install Data ONTAP system files from a UNIX client, Windows client, or HTTP server. To install from a Windows client, complete the following steps: 1.
  • Page 342 C$ directory: a. Click Tools  Map Network Drive, as shown in Figure A-1. Figure A-1 Map Network Drive IBM System Storage N series Hardware Guide...
  • Page 343 b. Enter the network mapping address, as shown in Figure A-2. Figure A-2 Mapping address c. Enter a user name and password to access the storage system, as shown in Figure A-3. Figure A-3 Storage access Appendix A. Getting started...
  • Page 344 Go to the drive to which you previously downloaded the software (see “Obtaining the Data ONTAP software from the IBM NAS website” on page 320). b. Double-click the files that you downloaded. A dialog box opens, as shown in Figure A-5.
  • Page 345 c. In the WinZip dialog box, enter the letter of the drive to which you mapped the storage system. For example, if you chose drive Y, replace DRIVE:\ETC with the following path, as shown in Figure A-6: Y:\ETC Figure A-6 Extract path d.
  • Page 346: Downloading Data Ontap To The Storage System

    Thu May 3 05:43:50 GMT [download.request:notice]: Operator requested download initiated
    download: Downloading boot device
    Version 1 ELF86 kernel detected....
    download: Downloading boot device (Service Area)
    ..n3300a*> Thu May 3 05:49:44 GMT [download.requestDone:notice]: Operator requested download completed
  • Page 347 CompactFlash card, as shown in Example A-12: sysconfig -a Example A-12 sysconfig -a n3300a*> sysconfig -a Data ONTAP Release 7.2.5.1: Wed Jun 25 11:01:02 PDT 2008 (IBM) System ID: 0135018677 (n3300a); partner ID: 0135018673 (n3300b) System Serial Number: 2859138306700 (n3300a) System Rev: B0...
  • Page 348: Setting Up The Network Using Console

    The easiest way to change the network configuration is by using the setup command. However, the new settings do not take effect until the filer is rebooted. This section describes how to change the network configuration without rebooting the filer.
  • Page 349: Changing The Ip Address

    8160 127 127.0.0.1
    3. If you want this IP address to be persistent after the N series is rebooted, update /etc/hosts with the IP address change for the associated interface. For netmask and other network parameters, update the /etc/rc file. You can modify this file from the N series console, CIFS, or NFS.
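For example, a nonpersistent IP address change might look like this sketch (the interface name and addresses are placeholders):

```
n3300a> ifconfig e0a 9.11.218.160 netmask 255.255.255.0
n3300a> ifconfig e0a                  (verify the new settings)
```

Because this change is made only in memory, mirror it into /etc/rc (and /etc/hosts) if it must survive a reboot.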
  • Page 350: Setting Up The Dns

    – To display the current DNS domain name, run the following command:
    options dns.domainname
    – To update the DNS domain name (as shown in Example A-18 on page 331), run the following command:
    options dns.domainname <domain name>
  • Page 351 Sun May 6 03:41:01 GMT [n3300a: reg.options.cf.change:warning]: Option dns.domainname changed on one cluster node. n3300a> options dns.domainname dns.domainname itso.tucson.ibm.com (value might be overwritten in takeover) 3. Check that the DNS is already enabled by running the dns info command, as shown in Example A-19: options dns.enable on...
  • Page 352 4. To make this change persistent after filer reboot, update the /etc/rc file to ensure that the name server exists, as shown in Figure A-13. Figure A-13 The /etc/rc file IBM System Storage N series Hardware Guide...
  • Page 353: Appendix B. Operating Environment

    Appendix B. Operating environment
    This appendix provides information about the physical environment and operational environment specifications of N series controllers and disk shelves. This appendix includes the following sections:
    N3000 entry-level systems
    N6000 mid-range systems
    N7000 high-end systems
    N series expansion shelves
  • Page 354: N3000 Entry-Level Systems

    Weight: Add 0.8 kg (1.8 lb) for each SAS drive
    Weight: Add 0.65 kg (1.4 lb) for each SATA drive
    The IBM System Storage N3400 features the following operating environment specifications:
    Temperature:
    – Maximum range: 10 - 40 degrees C (50 - 104 degrees F)
    –...
  • Page 355: N3240

    Weight: 25.4 kg (56 lb) (two controllers)
    The IBM System Storage N3220 Model A12/A22 features the following operating environment specifications:
    Temperature:
    – Maximum range: 10 - 40 degrees C (50 - 104 degrees F)
    – Recommended: 20 - 25 degrees C (68 - 77 degrees F)
    –...
  • Page 356: N6000 Mid-Range Systems

    – 7.2 bels @ 1 m @ 23 degrees C
    N6000 mid-range systems
    This section lists the N6000 mid-range specifications.
    N6210
    The IBM System Storage N6240 Models C10, C20, C21, E11, and E21 feature the following physical specifications:
    Width: 44.7 cm (17.6 in.)
    Depth:
    –...
  • Page 357: N6240

    – 55.5 dBa @ 1 m @ 23 degrees C
    – 7.5 bels @ 1 m @ 23 degrees C
    N6240
    The IBM System Storage N6240 Models C10, C20, C21, E11, and E21 feature the following physical specifications:
    Width: 44.7 cm (17.6 in.)
    Depth:
    –...
  • Page 358: N6270

    – 7.5 bels @ 1 m @ 23 degrees C
    N7000 high-end systems
    This section lists the N7000 high-end specifications.
    N7950T
    The IBM System Storage N7950T Model E22 features the following physical specifications:
    Width: 44.7 cm (17.6 in.)
    Depth:
    – 74.6 cm (29.4 in.) with cable management arms
    –...
  • Page 359: N Series Expansion Shelves

    Weight: 117.2 kg (258.4 lb)
    The IBM System Storage N7950T Model E22 features the following operating environment specifications:
    Temperature:
    – Maximum range: 10 - 40 degrees C (50 - 104 degrees F)
    – Recommended: 20 - 25 degrees C (68 - 77 degrees F)
    –...
  • Page 360: Exn3500

    – Non-operating: -40 - 70 degrees C (-40 - 158 degrees F)
    Relative humidity:
    – Maximum operating range: 20% - 80% (non-condensing)
    – Recommended operating range: 40 - 55%
    – Non-operating range: 10% - 95% (non-condensing)
  • Page 361: Exn4000

    Maximum wet bulb: 28 degrees C
    Maximum altitude: 3050 m (10,000 ft.)
    Warning: Operating at environmental extremes can increase failure probability.
    Wet bulb (caloric value): 1,724 Btu/hr (fully loaded shelf)
    Maximum electrical power: 100 - 240 VAC, 12 - 5.9 A
    Nominal electrical power:
    –...
  • Page 363: Related Publications

    Managing Unified Storage with IBM System Storage N series Operation Manager, SG24-7734
    Using an IBM System Storage N series with VMware to Facilitate Storage and Server Consolidation, REDP-4211
    Using the IBM System Storage N series with IBM Tivoli Storage Manager, SG24-7243...
  • Page 364: Other Publications

    Network-attached storage:
    http://www.ibm.com/systems/storage/network/
    IBM support, Documentation:
    http://www.ibm.com/support/entry/portal/Documentation
    IBM Storage – Network Attached Storage, Resources:
    http://www.ibm.com/systems/storage/network/resources.html
    IBM System Storage N series Machine Types and Models (MTM) Cross Reference:
    http://www-304.ibm.com/support/docview.wss?uid=ssg1S7001844
    IBM N series to NetApp machine type comparison table:
    http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105042
    Interoperability matrix:
    http://www-304.ibm.com/support/docview.wss?uid=ssg1S7003897
  • Page 368 TECHNICAL offerings. your environment SUPPORT The IBM System Storage N series systems can help you tackle the ORGANIZATION challenge of effective data management by using virtualization Understand N series technology and a unified storage architecture. The N series delivers...
