Public Works and Government Services Canada

Annex A2: Technical Specifications - Tier 1 (Enterprise) & Tier 2 (Mid-Range) Storage Products

1.0 INTRODUCTION

  1. This document addresses the requirement for seven (7) groups of Tier 1, Tier 2 Storage, and Converged Infrastructure Systems. The groups are as follows:
    1. Group 1.0 Small Mid-Range iSCSI
    2. Group 2.0 Small Mid-Range Fibre Channel (FC)
    3. Group 3.0 Medium Mid-Range Fibre Channel (FC)
    4. Group 4.0 Large Mid-Range Fibre Channel (FC)
    5. Group 5.0 Large Enterprise Fibre Channel (FC)
    6. Group 6.0 Scalable NAS
    7. Group 7.0 Converged Infrastructure System
  2. Systems must be fully operational and in a ready-to-use state, containing all major components, software and all requisite ancillary items when combined. These include but are not limited to: cabinet / enclosure, disk drives and shelves, array controllers, interconnect (e.g. 4 x 10GbE or 2 x 20Gb Infiniband for Group 6.0), cache / cache modules, cooling system, power supplies and PDUs, management software, device drivers / software licenses, port licenses, internal / external cables to the system, I/O cables, etc., to allow the system to satisfy the requirements.
  3. All storage software and licenses must be perpetual, transferable and must be available as part of a global pool.
  4. Systems must be purpose-built for the group requirement and must be marketed as a single product by the OEM, including documentation and support. Any system that is designed for another purpose or application will not be considered. Example #1: A standard NAS Head or Gateway turned Scale Out NAS will not be considered. Example #2: A vendor has 2 systems, X (scales to 250 Drives) and Y (scales to 500 Drives). If this same vendor cobbles together or clusters multiple model X systems and proposes the result in Group 3 (which accommodates 448 Drives), it will not be considered.
  5. Converged Infrastructure System:
    1. Converged Infrastructure Systems must be fully operational and fully integrated, containing all major components, management software, as well as all ancillary items when shipped. These include but are not limited to: system enclosure and/or rack cabinet (where appropriate), compute systems, network/storage fabric switches, disk array controllers, drives and shelves, and all power supplies and cooling systems necessary for the system. Any required management software, software or port licenses (for any of the included components), device drivers, and cabling required for the system must also be included.
    2. Converged Infrastructure Systems must be purpose-built for the Group 7.0 requirement and must be marketed as a single product by the OEM or consortium, including documentation and support. Any system designed for another purpose, or consisting of a number of disparate components cobbled together without providing a single point of management, and a single point of support for customers (e.g. single 1-800 number to place a service call) will not be considered.
    3. All software and licenses must be perpetual and must be available as part of a global pool.
    4. The vendor must have a Storage System in any Group (e.g. 1.0 - 5.0) in order to qualify in Group 7.0.
    5. For Group 7.0, vendors must have deployed no less than 100 Converged Infrastructure Systems of similar configuration in a production environment. A proof reference may be requested.

2.0 CONFIGURATIONS

Systems must meet or exceed the technical specifications outlined in this annex.

2.1 Group 1.0 Small Mid-Range iSCSI

The following describes the configuration and features of a Small Mid-Range iSCSI Storage solution.

2.1.1 Storage Platform

2.1.1.1 Capacity and Platform

Each storage platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as SAS disk drives, or
    2. using specialized shelves for these drive types;
  5. the available drive options must include at least three (3) from the following list:
    • - drives with 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • - drives with 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • - drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • - solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. It must accommodate a minimum of 120 hard disk drives;
  7. It must include lights or an LCD panel for power, activity and fault indications; and
  8. It must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided).

2.1.1.2 Cooling

Each storage platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated cabinet at the mandatory minimum storage capacity;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.1.1.3 Drives and Shelves

Each storage platform must meet the following drives and shelves requirements:

  1. the hard disk drives must be dual-ported and must operate at a minimum of 6Gbps for Serial Attach SCSI - 2 (SAS-2);
  2. it must provide a minimum of 4 active connections to the mandatory 120 hard disk drives. Bandwidth must be allocated evenly to the total number of physical drives over several channels;
  3. A channel failure must not interrupt access to attached disk drives;
  4. it must allow hot addition of storage shelves without needing to power the storage platform down and without interrupting access to existing drives and redundant arrays of inexpensive disk (RAID) groups;
  5. it must include as many back-end channels as necessary to support all the back-end shelves of disks so that a shelf component replacement or failure does not interrupt access to adjacent shelves in the platform.
  6. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  7. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration; and
  8. it must allow the allocation of hard disk drives as hot spares and or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set.

2.1.1.4 Power

Each storage platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.1.1.5 Controllers

Each storage platform must meet the following controller requirements:

  1. it must include dual redundant active/active storage controllers for handling both I/O to the attached host systems as well as disk I/O and RAID functionality;
  2. it must be redundant, so that the surviving controller automatically recovers controller subsystem failures, and service to attached hosts is continued without disruption;
  3. the storage platform must have access to all 120 of the mandatory hard disk drives in order to assign, configure, protect and share those drives;
  4. the storage controllers must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
  5. it must allow the creation and addressing of up to 256 simultaneous logical drives, where a logical drive is the logical unit of capacity presented to a client host; and
  6. it must simultaneously support all RAID types from 2.1.1.5 (d) within the storage platform.
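
NOTE (illustrative only): the following sketch is not part of the requirement. It approximates, in Python, the usable capacity of a single RAID group for the RAID types listed at 2.1.1.5 (d). The drive size (900GB) and group width (10 drives) are assumed values for illustration only.

    # Illustrative only: approximate usable capacity of one RAID group for the
    # RAID types listed at 2.1.1.5 (d). Drive size and group width are assumptions.

    DRIVE_GB = 900          # assumed drive size, not a requirement
    DRIVES_PER_GROUP = 10   # assumed RAID group width, not a requirement

    def usable_gb(raid_type, drives, drive_gb):
        """Return the approximate usable capacity of one RAID group, in GB."""
        if raid_type in ("RAID5", "RAIDZ"):               # single parity
            return (drives - 1) * drive_gb
        if raid_type in ("RAID6", "RAID-DP", "RAIDZ2"):   # dual parity
            return (drives - 2) * drive_gb
        if raid_type == "RAIDZ3":                         # triple parity
            return (drives - 3) * drive_gb
        if raid_type in ("RAID1", "RAID10", "RAID0+1"):   # mirrored
            return (drives // 2) * drive_gb
        raise ValueError("unknown RAID type: " + raid_type)

    for rt in ("RAID5", "RAID6", "RAIDZ3", "RAID10"):
        print(rt, usable_gb(rt, DRIVES_PER_GROUP, DRIVE_GB), "GB usable per group")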

2.1.1.6 Cache

Each storage platform must meet the following cache requirements:

  1. it must include a total of at least 8GB of dedicated I/O cache;
  2. the cache on the storage controller must perform both read and write I/O operations;
  3. the write cache must be mirrored cache; and
  4. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours. The caches must then complete their write operations to disk when power is restored;
    2. all pending write data must be automatically written to disk before the disk system is powered off, and the platform must provide sufficient battery power to complete this function; or
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array.

2.1.1.7 I/O Ports and Connectivity

Each storage platform must meet the following requirements for I/O ports and connectivity:

  1. it must include a minimum of 2 storage controllers that may be replaced in the event of a controller failure;
  2. it must provide a minimum of 4 10GbE iSCSI ports for connectivity to Intel and Open System host computers;
  3. all 4 iSCSI ports must be independent ports operating at 10GbE each;
  4. it must provide simultaneous connectivity to any combination of 125 or more Intel and UNIX hosts using dual NICs in each host;
  5. it must provide the necessary software to support all supported operating systems; and
  6. it must provide "no single point of failure" connectivity options, for both failover as well as load balancing under all of the mandatory operating system environments. This may be provided using add-on failover software packages or using native Operating System facilities.

2.1.1.8 Hosts

Each storage platform must meet the following requirements for host connectivity:

  1. it must connect to Intel and AMD-based host computers running the following:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X;
  2. it must connect to at least two (2) of the following UNIX and Open Systems hosts simultaneously, in addition to the previously listed Intel systems:
    1. Oracle Solaris 10 SPARC systems;
    2. Oracle Solaris 10 X86 systems;
    3. HP-UX 11i v.X systems;
    4. IBM AIX v6.X and v7.X systems;
  3. Support of additional platform types and operating systems is desirable, but not mandatory.

2.1.1.9 Clustering

Each storage platform must meet the following requirements for clustering:

  1. it must directly support clustering for at least two (2) of the following host operating environments:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X with shared access to the same logical unit numbers (LUNs) for Vmotion;
  2. Clustering support from the following host operating environments is desirable:
    1. Oracle Solaris Cluster for Solaris SPARC;
    2. Oracle Solaris Cluster for Solaris X86;
    3. MC/Serviceguard for HP-UX;
    4. PowerHA for AIX;

2.1.1.10 Software and Additional Capabilities

The storage platform must include the following software functionalities and additional capabilities. Furthermore, these must be entirely storage platform-based functionality and must not require any software or assistance from host systems on the SAN:

  1. it must provide LUN-masking functionality. This means it must mask or limit visibility of logical drive configurations within the storage platform to only specific hosts connected to the storage platform;
  2. it must synchronously replicate logical volumes remotely; OR
  3. it must asynchronously replicate logical volumes remotely;
  4. it must perform up to 4 concurrent host-less point-in-time snapshot copies of any logical volume that may be reassigned to any other host on the SAN;
  5. it must perform up to 2 concurrent host-less full block data copies of any logical volume that may be reassigned to any other host on the SAN; and
  6. it must allow online firmware upgrades to be made without disrupting the operation of the platform.
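
NOTE (illustrative only): the following is not part of the requirement. It is a minimal Python sketch of the LUN-masking behaviour described at 2.1.1.10 (a); the initiator names and LUN numbers are invented examples.

    # Illustrative only: LUN masking limits which logical drives (LUNs) each
    # connected host initiator may see. Initiator names and LUN IDs are examples.

    masking_table = {
        "iqn.2012-01.example.org:host-a": {0, 1, 2},   # host A sees LUNs 0-2
        "iqn.2012-01.example.org:host-b": {3},         # host B sees LUN 3 only
    }

    def visible_luns(initiator):
        """Return the set of LUNs presented to this initiator (empty if unknown)."""
        return masking_table.get(initiator, set())

    print(visible_luns("iqn.2012-01.example.org:host-a"))  # {0, 1, 2}
    print(visible_luns("iqn.2012-01.example.org:host-c"))  # set() - fully masked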

2.1.1.11 Management

The storage platform must provide the following management capabilities:

  1. it must provide a comprehensive graphical user interface (GUI) based management system that allows real-time monitoring of all components in the platform and reports degradation of components and failures;
  2. the GUI interface must either be a Windows-based application included with the system or a WEB or Java-based embedded function accessible using a standard browser;
  3. it must connect to an IP-based network through a direct Ethernet connection on the platform;
  4. it must issue SNMP traps or SMTP mail in the event of device degradation or failure;
  5. the GUI interface must show all installed hardware and its current operational status; and
  6. it must monitor the full performance of the storage array, including:
    1. disk, LUN or RAID group I/O’s per second for both read and write requests;
    2. cache utilization and hit rate statistics; and
    3. queuing or latency information for disks, arrays, LUNs or RAID sets.
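
NOTE (illustrative only): the following sketch is not part of the requirement. It shows, in Python, how the performance statistics named at 2.1.1.11 (f), such as I/Os per second and cache hit rate, can be derived from raw counters sampled over an interval. All counter values are invented examples.

    # Illustrative only: deriving the statistics named at 2.1.1.11 (f)
    # from raw counters sampled over an interval. All values are examples.

    interval_s = 60                            # sampling interval in seconds
    read_ops, write_ops = 120_000, 48_000      # operations completed in the interval
    cache_hits, cache_lookups = 90_000, 120_000

    read_iops = read_ops / interval_s
    write_iops = write_ops / interval_s
    hit_rate = 100.0 * cache_hits / cache_lookups

    print(f"read IOPS:  {read_iops:.0f}")
    print(f"write IOPS: {write_iops:.0f}")
    print(f"cache hit rate: {hit_rate:.1f}%")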

2.1.2 Fabric

2.1.2.1 10Gb Ethernet Switch

The storage platform must operate with 10 Gbps 24 port ethernet switches, which must be fully supported and warranted by the storage platform Manufacturer. The ethernet switches must meet the following requirements:

  1. They must have a minimum total throughput of 480 Gbps (Data rate, Full duplex; see the calculation sketch following this list);
  2. They must accommodate up to 16,000 MAC addresses;
  3. They must support a minimum of 4000 VLANs;
  4. They must provide lights or indicators for power and port status for all ethernet ports;
  5. For management purposes, switches must provide a 10/100/1000 Mbps Ethernet interface using TCP/IP as the transport protocol;
  6. They must provide redundant cooling and power;
  7. They must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  8. They must auto-negotiate for speed, duplex mode and flow control on 10GBASE-T ports;
  9. They must provide resilient High Availability stacking with a minimum of 4 switches (only SFP+ ports, 10GBASE-CX4 or SFP+ direct attach can be used for stack connections);
  10. They must include a comprehensive GUI-based or CLI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  11. They must generate SNMP traps in the event of a degraded condition in the switch;
  12. The GUI or CLI interface must show the current operational status for all installed hardware components;
  13. The GUI or CLI interface must allow configuration of all aspects of the ethernet switches including:
    1. the name,
    2. the passwords and user accounts for management,
    3. the IP addressing, and
    4. any other parameters critical to the operation of the switch;
  14. The GUI or CLI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of each ethernet port,
    3. the operational speed of ethernet ports, and
    4. the throughput in frames as well as MB per second.
  15. They must fully comply with the following standards:
    1. IEEE 802.3ae 10 Gigabit Ethernet
    2. IEEE 802.3 Ethernet
    3. IEEE 802.1Q VLAN tagging
    4. IEEE 802.1p Quality of Service (QoS)
    5. IEEE 802.3x Flow Control
    6. IEEE 802.1w Rapid Spanning Tree Protocol
    7. IEEE 802.1D Spanning Tree Protocol
    8. IEEE 802.1s Multiple Spanning Tree
    9. IEEE 802.3ad LACP Support
    10. IEEE 802.1AB Link Layer Discovery Protocol (LLDP)
    11. Jumbo Frames of sizes up to 9000 bytes
    12. Internet Group Management Protocol (IGMP) Snooping Versions 2 and 3
    13. IPv6
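
NOTE (illustrative only): the following is not part of the requirement. The 480 Gbps figure at 2.1.2.1 (a) is consistent with 24 ports at 10 Gbps counted in both directions (full duplex), as the short Python calculation below shows.

    # Illustrative only: aggregate data rate of a 24-port 10GbE switch,
    # counting both directions of each port (full duplex).

    ports = 24
    port_speed_gbps = 10
    duplex_directions = 2

    total_gbps = ports * port_speed_gbps * duplex_directions
    print(total_gbps, "Gbps")   # 480 Gbps, matching 2.1.2.1 (a)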

2.1.4 NAS Gateway

2.1.4.1 Capacity and Platform

The storage platform must include a Network Attached Storage Gateway. The NAS Gateway must meet the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 1.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution
  2. it must be a device that is tightly integrated and managed commensurately with the storage platform so that the combination of the two is viewed together under a "single pane of glass";
  3. it must be fully compatible with and supported by the base storage platform defined at 1.1. Use of this NAS Gateway with the base storage platform must not preclude the base storage platform from also servicing other iSCSI block attached hosts at the same time;
  4. it must include sufficient cooling for a fully populated configuration. All cooling for the NAS Gateway must be redundant and monitored for failures by the NAS Gateway;
  5. it must allow hot swapping of failed cooling fans; and
  6. it must be packaged in an industry-standard 19" rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19" rack.

2.1.4.2 Power

Each Network Attached Storage Gateway must meet the following power requirements:

  1. it must provide sufficient power to operate a fully loaded system with all boards and components installed;
  2. it must be fully redundant, allowing the NAS Gateway to continue operating without interruption in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.1.4.3 NAS Processor Unit

Each Network Attached Storage Gateway must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel operating system designed for providing file services to CIFS and NFS via the included Ethernet interfaces. The micro-kernel operating system may be either a Windows, Linux, Unix-based operating system or FPGA (hardware)-based operating system.
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain 2 separate redundant clustered processor units or "heads" that operate in an Active / Active or Active / Hot Standby fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically;
  4. the processor units, if separate, must both be attached via a total aggregate minimum of 4 X 10GbE or 4 X 4Gb FC interfaces to the base storage platform; and
  5. the processor units in the NAS Gateway must contain an aggregate minimum of either 6 X 1Gbps or 2 X 10Gbps Ethernet interfaces for TCP/IP client access.

2.1.4.4 Software and Additional Capabilities

Each Network Attached Storage Gateway must meet the following requirements for software functionality and additional capabilities:

  1. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS or NFS, with no requirement for additional fees or licensing;
  2. it must fully integrate, in native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  3. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files; and
  4. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets.

2.1.4.5 Management

Each Network Attached Storage Gateway must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network;
  3. it must provide GUI-based functionality to:
    1. create and manage volumes and file systems across LUN or RAID sets;
    2. work with authentication methods such as Active Directory or LDAP;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors and disk subsystems to gauge the load on those items;
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across either of the 2 processor units as needed and allow an administrator to manually failover file shares if required from 1 processor unit to the other.
  4. the GUI management system must be accessible via a single browser instance or program to manage and operate both processor units, allowing a single session to facilitate all management functions described here.

2.2 Group 2.0 Small Mid-Range (FC)

The following describes the configuration and features of a Small Mid-Range Storage solution.

2.2.1 Storage Platform

2.2.1.1 Capacity and Platform

Each storage platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at either 4Gbps for Fibre Channel (FC) drives or 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as either the FC or SAS disk drives, or
    2. using specialized shelves for these drive types;
  5. the available drive options must include at least four (4) from the following list:
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • - drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • - solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. it must accommodate a minimum of 224 hard disk drives;
  7. It must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided); and
  8. It must include lights or an LCD panel for power, activity and fault indications.

2.2.1.2 Cooling

Each storage platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated cabinet at the mandatory minimum storage capacity;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.2.1.3 Drives and Shelves

Each storage platform must meet the following drives and shelves requirements:

  1. the hard disk drives must be dual-ported and must operate at a minimum of either 4Gbps for Fibre Channel (FC) or 6Gbps for Serial Attach SCSI - 2 (SAS-2);
  2. it must provide a minimum of 4 active connections to the mandatory 224 hard disk drives. Bandwidth must be allocated evenly to the total number of physical drives over several channels;
  3. A channel failure must not interrupt access to attached disk drives;
  4. it must allow hot addition of storage shelves without needing to power the storage platform down and without interrupting access to existing drives and redundant arrays of inexpensive disk (RAID) groups;
  5. it must include as many back-end channels as necessary to support all the back-end shelves of disks so that a shelf component replacement or failure does not interrupt access to adjacent shelves in the platform.
  6. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  7. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration; and
  8. it must allow the allocation of hard disk drives as hot spares and or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set.

2.2.1.4 Power

Each storage platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.2.1.5 Controllers

Each storage platform must meet the following controller requirements:

  1. it must include dual redundant active/active storage controllers for handling both I/O to the attached host systems as well as disk I/O and RAID functionality;
  2. it must be redundant, so that the surviving controller automatically recovers controller subsystem failures, and service to attached hosts is continued without disruption;
  3. the storage platform must have access to all 224 of the mandatory hard disk drives in order to assign, configure, protect and share those drives;
  4. the storage controllers must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
  5. it must allow the creation and addressing of up to 2000 simultaneous logical drives, where a logical drive is the logical unit of capacity presented to a client host; and
  6. it must simultaneously support all RAID types from 2.2.1.5 (d) within the storage platform.

2.2.1.6 Cache

Each storage platform must meet the following cache requirements:

  1. it must include a total of at least 16GB of dedicated I/O cache;
  2. the cache on the storage controller must perform both read and write I/O operations;
  3. the write cache must be mirrored cache; and
  4. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours. The caches must then complete their write operations to disk when power is restored;
    2. all pending write data must be automatically written to disk before the disk system is powered off, and the platform must provide sufficient battery power to complete this function; or
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array.

2.2.1.7 I/O Ports and Connectivity

Each storage platform must meet the following requirements for I/O ports and connectivity:

  1. it must include a minimum of 2 storage controllers that may be replaced in the event of a controller failure;
  2. it must provide a minimum of 4 fibre channel ports for connectivity to Intel and Open System host computers;
  3. all 4 fibre channel ports must be independent ports operating at 8Gbps each and support both point-to-point and loop modes of operation;
  4. each of the 4 fibre ports must support full fabric login and must have a unique fibre channel World Wide Name;
  5. it must provide simultaneous connectivity to any combination of 250 or more Intel and UNIX hosts using dual fibre channel host bus adapters in each host;
  6. it must provide the necessary software to support all supported operating systems; and
  7. it must provide "no single point of failure" connectivity options, for both failover as well as load balancing under all of the mandatory operating system environments. This may be provided using add-on failover software packages or using native Operating System facilities; and
  8. it must provide an option of two (2) native 10GbE connections for either FCoE host connectivity that meets the ANSI T11 FC-BB-5 Fibre Channel over Ethernet (FCoE) standard, or Internet Small Computer System Interface (iSCSI) host connectivity that meets the RFC 3720 standard, for the encapsulation of FC or SCSI packets over Full Duplex and Lossless Ethernet networks.

The FCoE implementation must be compliant with the following IEEE standards:

  1. 802.1Qbb;
  2. 802.1Qaz which defines:
    1. enhanced transmission selection (ETS); and
    2. data center bridging exchange (DCBX).
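
NOTE (illustrative only): the following is not part of the requirement. Enhanced Transmission Selection (ETS, part of IEEE 802.1Qaz above) allocates a guaranteed share of link bandwidth to each traffic class by percentage; the class names and weights in the Python sketch below are assumed values for illustration only.

    # Illustrative only: ETS (IEEE 802.1Qaz) assigns a guaranteed share of link
    # bandwidth to each traffic class. The weights below are assumptions.

    link_gbps = 10
    ets_weights = {"FCoE storage": 50, "LAN": 30, "management": 20}  # percent

    for traffic_class, pct in ets_weights.items():
        print(f"{traffic_class}: {link_gbps * pct / 100:.1f} Gbps guaranteed minimum")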

The iSCSI implementation must be compliant with the following standards:

  1. iSCSI Qualified Name (IQN) as documented in RFC 3720
  2. iSCSI initiator and security authentication using the CHAP protocol
  3. Internet Storage Name Service (iSNS) as documented in RFC 4171
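
NOTE (illustrative only): the following is not part of the requirement. RFC 3720 defines the iSCSI Qualified Name (IQN) referenced in the list above as "iqn." followed by a year-month date, the reversed naming authority, and an optional colon-separated local string. The Python sketch below checks that general shape; the example names are invented.

    # Illustrative only: rough check of the RFC 3720 IQN format
    #   iqn.<yyyy-mm>.<reversed-domain>[:<unique-identifier>]
    # Example names are invented.

    import re

    IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

    def looks_like_iqn(name):
        return bool(IQN_PATTERN.match(name.lower()))

    print(looks_like_iqn("iqn.2001-04.com.example:storage.array1"))  # True
    print(looks_like_iqn("eui.02004567A425678D"))                    # False (EUI form, not IQN)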

2.2.1.8 Hosts

Each storage platform must meet the following requirements for host connectivity:

  1. it must connect to Intel and AMD-based host computers running the following:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X;
  2. it must connect to the following UNIX and Open Systems hosts simultaneously, in addition to the previously listed Intel systems:
    1. Oracle Solaris 10 systems;
    2. HP-UX 11i v.X systems;
    3. IBM AIX v6.X and v7.X systems;
  3. Support of additional platform types and operating systems is desirable, but not mandatory.

2.2.1.9 Clustering

Each storage platform must meet the following requirements for clustering:

  1. it must directly support clustering under the following host operating environments:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X with shared access to the same logical unit numbers (LUNs) for Vmotion;
  2. it must directly support clustering under the following host operating environments:
    1. MC/Serviceguard for HP-UX;
    2. PowerHA for AIX; and
    3. Oracle Solaris Cluster for Solaris with Oracle Cluster or Veritas Cluster Server for Solaris;

2.2.1.10 Software and Additional Capabilities

The storage platform must include the following software functionalities and additional capabilities. Furthermore, these must be entirely storage platform-based functionality and must not require any software or assistance from host systems on the SAN:

  1. it must provide LUN-masking functionality. This means it must mask or limit visibility of logical drive configurations within the storage platform to only specific hosts connected to the storage platform;
  2. it must synchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  3. it must asynchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  4. it must perform up to 4 concurrent host-less point-in-time snapshot copies of any logical volume that may be reassigned to any other host on the SAN;
  5. it must perform up to 2 concurrent host-less full block data copies of any logical volume that may be reassigned to any other host on the SAN;
  6. it must allow online firmware upgrades to be made without disrupting the operation of the platform; and
  7. it must perform sub-LUN auto-tiering of data written to the storage platform.

2.2.1.11 Management

The storage platform must provide the following management capabilities:

  1. it must provide a comprehensive graphical user interface (GUI) based management system that allows real-time monitoring of all components in the platform and reports degradation of components and failures;
  2. the GUI interface must either be a Windows-based application included with the system or a WEB or Java-based embedded function accessible using a standard browser;
  3. it must connect to an IP-based network either through a direct Ethernet connection on the platform or through an in-band connection via a fibre-attached host;
  4. it must issue SNMP traps, or SMTP mail in the event of device degradation or failure;
  5. the GUI interface must show all installed hardware and its current operational status; and
  6. it must monitor the full performance of the storage array, including:
    1. disk, LUN or RAID group I/O’s per second for both read and write requests;
    2. cache utilization and hit rate statistics; and
    3. queuing or latency information for disks, arrays, LUNs or RAID sets.

2.2.2 Fabric

2.2.2.1 Fibre Channel Switch

The storage platform must operate with 8 Gbps 24 port fibre channel fabric switches, which must be fully supported and warranted by the storage platform Manufacturer. The fibre channel switches must meet the following requirements:

  1. they must operate with fibre channel fabrics and must be capable of full fibre channel zoning across switched fabrics (an illustrative zoning sketch follows this list);
  2. they must support a minimum of 512 active enabled unique zones at a time per fibre channel fabric;
  3. they must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  4. they must operate at 8Gb/s and must be fully populated with small form factor pluggable optical media modules for shortwave operation;
  5. they must provide lights or indicators for power and port status for all fibre channel ports;
  6. they must provide a 10/100 Mbps or 1 Gbps Ethernet interface and must be manageable using TCP/IP as the transport protocol;
  7. they must provide redundant cooling and power;
  8. they must fully comply with the following ANSI T-11 standards:
    1. FC-FS-2 ANSI/INCITS 424:2006
    2. FC-AL-2 INCITS 332:1999
    3. FC-DA INCITS TR-36
    4. FC-SW-4 INCITS 418:2006
    5. FC-GS-5 ANSI INCITS 427:2006
    6. FC-VI INCITS 357:2002
  9. they must support fibre channel class 2 and 3 connections;
  10. they must provide full fabric support as per the ANSI standards specified at 2.2.2.1 (h);
  11. they must support cascading by connecting 2 or more switches together to form a single fabric that is compliant with the ANSI standards specified at 2.2.2.1(h);
  12. they must include a comprehensive GUI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  13. The GUI interface must be either an embedded function or a WEB or Java-based function accessible using a standard browser;
  14. they must generate SNMP traps in the event of a degraded condition in the switch;
  15. the GUI interface must show the current operational status for all installed hardware components;
  16. the GUI interface must allow configuration of all aspects of the fibre channel switches including:
    1. the name,
    2. the domain ID,
    3. the passwords and user accounts for management,
    4. the IP addressing,
    5. the modes of operation of the ports,
    6. all zone and path information, and
    7. any other parameters critical to the operation of the switch;
  17. The GUI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of each fibre channel port,
    3. the operational speed of fibre channel ports,
    4. the mode of operation of each fibre channel port (e.g. F-port, N-port, E-port), and
    5. the throughput in frames as well as MB per second.
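
NOTE (illustrative only): the following is not part of the requirement. It is a minimal Python sketch of the fabric zoning behaviour required at 2.2.2.1 (a) and (b): two ports may communicate only if at least one zone contains both of their World Wide Names. The WWNs and zone names are invented examples.

    # Illustrative only: fibre channel zoning modelled as named sets of WWNs.
    # Two ports may communicate only if at least one zone contains both.
    # WWNs and zone names are invented examples.

    zones = {
        "zone_hostA_array1": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:aa:00:01"},
        "zone_hostB_array1": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3b:aa:00:01"},
    }

    def can_communicate(wwn_a, wwn_b):
        return any(wwn_a in members and wwn_b in members for members in zones.values())

    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:aa:00:01"))  # True
    print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False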

2.2.4 NAS Gateway

2.2.4.1 Capacity and Platform

The storage platform must include a Network Attached Storage Gateway. The NAS Gateway must meet the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 2.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution
  2. it must be a discrete and independent device(s) that does not rely upon any components, functionality or software from the base storage platform defined at 2.1; however, the capacity this NAS Gateway will address and share may be provided by the base storage platform defined at 2.1;
  3. it must address and share a minimum of 128TB of usable data storage while also adhering to all other minimums; the usable storage must not be computed through the use of a deduplication feature (a capacity arithmetic sketch follows this list);
  4. it must be fully compatible with and supported by the base storage platform defined at 2.1. Use of this NAS Gateway with the base storage platform must not preclude the base storage platform from also servicing other fibre channel block attached hosts at the same time;
  5. it must include sufficient cooling for a fully populated configuration. All cooling for the NAS Gateway must be redundant and monitored for failures by the NAS Gateway;
  6. it must allow hot swapping of failed cooling fans; and
  7. it must be packaged in an industry-standard 19" rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19" rack.
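
NOTE (illustrative only): the following is not part of the requirement. The Python calculation below shows one way the 128TB usable minimum at 2.2.4.1 (c) could be provided without deduplication; the 4TB drive size and the RAID6 group width (8 data + 2 parity) are assumed values for illustration only.

    # Illustrative only: drives needed to present 128TB usable without deduplication,
    # assuming 4TB NL-SAS drives in RAID6 groups of 10 (8 data + 2 parity).
    # Drive size and group width are assumptions, not requirements.

    import math

    usable_target_tb = 128
    drive_tb = 4
    group_width, parity_drives = 10, 2

    usable_per_group_tb = (group_width - parity_drives) * drive_tb   # 32 TB per group
    groups_needed = math.ceil(usable_target_tb / usable_per_group_tb)
    print(groups_needed, "RAID6 groups =", groups_needed * group_width, "drives")  # 4 groups = 40 drives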

2.2.4.2 Power

Each Network Attached Storage Gateway must meet the following power requirements:

  1. it must provide sufficient power to operate a fully loaded system with all boards and components installed;
  2. it must be fully redundant, allowing the NAS Gateway to continue operating without interruption in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.2.4.3 Controllers and RAID

Each Network Attached Storage Gateway must utilize capacity that is provided by the base storage platform where the capacity is RAID protected by the base platform.
The NAS Gateway may rely on internal drives for booting (operating system/kernel), saving configuration data, or buffering data; however, the user data must reside on the storage provided by the base storage platform.

2.2.4.4 NAS Processor Unit

Each Network Attached Storage Gateway must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel operating system designed for providing file services to CIFS and NFS via the included Ethernet interfaces. The micro-kernel operating system may be either a Windows, Linux, Unix-based operating system or FPGA (hardware)-based operating system.
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain 2 separate redundant clustered processor units or "heads" that operate in an Active / Active or Active / Hot Standby fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically;
  4. the processor units must both be attached via a total aggregate minimum of 4 X 10GbE or 4 X 4Gb FC interfaces to the base storage platform; and
  5. the processor units in the NAS Gateway must contain an aggregate minimum of either 8 X 1Gbps or 2 X 10Gbps Ethernet interfaces for TCP/IP client access.

2.2.4.5 Software and Additional Capabilities

Each Network Attached Storage Gateway must meet the following requirements for software functionality and additional capabilities:

  1. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS or NFS, with no requirement for additional fees or licensing;
  2. it must fully integrate, in native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  3. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files; and
  4. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets.

2.2.4.6 Management

Each Network Attached Storage Gateway must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network;
  3. it must provide GUI-based functionality to:
    1. create and manage volumes and file systems across LUN or RAID sets;
    2. work with authentication methods such as Active Directory or LDAP;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors and disk subsystems to gauge the load on those items;
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across either of the 2 processor units as needed and allow an administrator to manually failover file shares if required from 1 processor unit to the other.
  4. the GUI management system must manage and operate both processor units as a single entity, allowing a single session to facilitate all management functions described here.

2.3 Group 3.0 Medium Mid-Range (FC)

The following describes the configuration and features of a Medium Mid-Range Storage solution.

2.3.1 Storage Platform

2.3.1.1 Capacity and Platform

Each storage platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at either 4Gbps for Fibre Channel (FC) drives or 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as either the FC or SAS disk drives, or
    2. using specialized shelves for these drive types;
  5. the available drive options must include at least four (4) from the following list:

    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • - drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • - solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. it must accommodate a minimum of 448 hard disk drives;
  7. It must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided); and
  8. It must include lights or an LCD panel for power, activity and fault indications.

2.3.1.2 Cooling

Each storage platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated cabinet at the mandatory minimum storage capacity;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.3.1.3 Drives and Shelves

Each storage platform must meet the following drives and shelves requirements:

  1. the hard disk drives must be dual-ported and must operate at a minimum of either 4Gbps for Fibre Channel (FC) or 6Gbps for Serial Attach SCSI - 2 (SAS-2);
  2. it must provide a minimum of 4 active connections to the mandatory 448 hard disk drives. Bandwidth must be allocated evenly to the total number of physical drives over several channels;
  3. A channel failure must not interrupt access to attached disk drives;
  4. it must allow hot addition of storage shelves without needing to power the storage platform down and without interrupting access to existing drives and redundant arrays of inexpensive disk (RAID) groups;
  5. it must include as many back-end channels as necessary to support all the back-end shelves of disks so that a shelf component replacement or failure does not interrupt access to adjacent shelves in the platform.
  6. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  7. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration; and
  8. it must allow the allocation of hard disk drives as hot spares and or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set.

2.3.1.4 Power

Each storage platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.3.1.5 Controllers

Each storage platform must meet the following controller requirements:

  1. it must include dual redundant active/active storage controllers for handling both I/O to the attached host systems as well as disk I/O and RAID functionality;
  2. it must be redundant, so that the surviving controller automatically recovers controller subsystem failures, and service to attached hosts is continued without disruption;
  3. the storage platform must have access to all 448 of the mandatory hard disk drives in order to assign, configure, protect and share those drives;
  4. the storage controllers must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
  5. it must allow the creation and addressing of up to 4096 simultaneous logical drives, where a logical drive is the logical unit of capacity presented to a client host; and
  6. it must simultaneously support all RAID types from 2.3.1.5 (d) within the storage platform.

2.3.1.6 Cache

Each storage platform must meet the following cache requirements:

  1. it must include a total of at least 32GB of dedicated I/O cache;
  2. the cache on the storage controller must perform both read and write I/O operations;
  3. the write cache must be mirrored cache; and
  4. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours. The caches must then complete their write operations to disk when power is restored;
    2. all pending write data must be automatically written to disk before the disk system is powered off, and the platform must provide sufficient battery power to complete this function; or
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array.

2.3.1.7 I/O Ports and Connectivity

Each storage platform must meet the following requirements for I/O ports and connectivity:

  1. it must include a minimum of 2 storage controllers that may be replaced in the event of a controller failure;
  2. it must provide a minimum of 8 fibre channel ports for connectivity to Intel and Open System host computers;
  3. all 8 fibre channel ports must be independent ports operating at 8Gbps each and support both point-to-point and loop modes of operation;
  4. each of the 8 fibre ports must support full fabric login and must have a unique fibre channel World Wide Name;
  5. it must provide simultaneous connectivity to any combination of 500 or more Intel and UNIX hosts using dual fibre channel host bus adapters in each host;
  6. it must provide the necessary software to support all supported operating systems; and
  7. it must provide "no single point of failure" connectivity options, for both failover as well as load balancing under all of the mandatory operating system environments. This may be provided using add-on failover software packages or using native Operating System facilities; and
  8. it must provide an option of two (2) native 10GbE connections for either FCoE host connectivity that meets the ANSI T11 FC-BB-5 Fibre Channel over Ethernet (FCoE) standard, or Internet Small Computer System Interface (iSCSI) host connectivity that meets the RFC 3720 standard, for the encapsulation of FC or SCSI packets over Full Duplex and Lossless Ethernet networks.

The FCoE implementation must be compliant with the following IEEE standards:

  1. 802.1Qbb;
  2. 802.1Qaz which defines:
    1. enhanced transmission selection (ETS); and
    2. data center bridging exchange (DCBX).

The iSCSI implementation must be compliant with the following standards:

  1. iSCSI Qualified Name (IQN) as documented in RFC 3720
  2. iSCSI initiator and security authentication using the CHAP protocol
  3. Internet Storage Name Service (iSNS) as documented in RFC 4171

2.3.1.8 Hosts

Each storage platform must meet the following requirements for host connectivity:

  1. it must connect to Intel and AMD-based host computers running the following:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X;
  2. it must connect to the following UNIX and Open Systems hosts simultaneously, in addition to the previously listed Intel systems:
    1. Oracle Solaris 10 systems;
    2. HP-UX 11i v.X systems;
    3. IBM AIX v6.X and v7.X systems;
  3. Support of additional platform types and operating systems is desirable, but not mandatory.

2.3.1.9 Clustering

Each storage platform must meet the following requirements for clustering:

  1. it must directly support clustering under the following host operating environments:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X with shared access to the same logical unit numbers (LUNs) for Vmotion;
  2. it must directly support clustering under the following host operating environments:
    1. MC/Serviceguard for HP-UX;
    2. PowerHA for AIX; and
    3. Oracle Solaris Cluster for Solaris

2.3.1.10 Software and Additional Capabilities

The storage platform must include the following software functionalities and additional capabilities. Furthermore, these must be entirely storage platform-based functionality and must not require any software or assistance from host systems on the SAN:

  1. it must provide LUN-masking functionality. This means it must mask or limit visibility of logical drive configurations within the storage platform to only specific hosts connected to the storage platform;
  2. it must synchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  3. it must asynchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  4. it must perform up to 4 concurrent host-less point-in-time snapshot copies of any logical volume that may be reassigned to any other host on the SAN;
  5. it must perform up to 2 concurrent host-less full block data copies of any logical volume that may be reassigned to any other host on the SAN;
  6. it must allow online firmware upgrades to be made without disrupting the operation of the platform; and
  7. it must perform sub-LUN auto-tiering of data written to the storage platform.
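
NOTE (illustrative only, not part of the requirement): the following minimal Python sketch models the LUN-masking behaviour described at 2.3.1.10 (a) as a table mapping a host initiator WWPN to the set of logical drives it may see. The WWPNs and LUN numbers are invented for the example.

    # Illustrative sketch only: array-based LUN masking as a table mapping a host
    # initiator WWPN to the logical drives it is allowed to see. Values are invented.

    masking_table = {
        "50:01:43:80:12:34:56:78": {0, 1, 2},   # host A is presented LUNs 0-2
        "50:01:43:80:9a:bc:de:f0": {3},         # host B is presented LUN 3 only
    }

    def visible_luns(initiator_wwpn):
        """Return the LUNs presented to this initiator; unknown initiators see none."""
        return masking_table.get(initiator_wwpn, set())

    if __name__ == "__main__":
        print(visible_luns("50:01:43:80:12:34:56:78"))   # {0, 1, 2}
        print(visible_luns("50:01:43:80:00:00:00:00"))   # set() -- masked out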

2.3.1.11 Management

The storage platform must provide the following management capabilities:

  1. it must provide a comprehensive graphical user interface (GUI) based management system that allows real-time monitoring of all components in the platform and reports degradation of components and failures;
  2. the GUI interface must either be a Windows-based application included with the system or a WEB or Java-based embedded function accessible using a standard browser;
  3. it must connect to an IP-based network either through a direct Ethernet connection on the platform or through an in-band connection via a fibre-attached host;
  4. it must issue SNMP traps or SMTP mail in the event of device degradation or failure;
  5. the GUI interface must show all installed hardware and its current operational status; and
  6. it must monitor the full performance of the storage array, including:
    1. disk, LUN or RAID group I/O’s per second for both read and write requests;
    2. cache utilization and hit rate statistics; and
    3. queuing or latency information for disks, arrays, LUNs or RAID sets.
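
NOTE (illustrative only, not part of the requirement): as one possible realization of the alerting requirement at 2.3.1.11 (d), the sketch below sends an SMTP notification for a degraded component using Python's standard smtplib. The relay host and mail addresses are placeholders; an actual platform would use its own embedded alerting facility.

    # Illustrative sketch only: reports a degraded component by SMTP mail using the
    # Python standard library. Host names and addresses are placeholders.
    import smtplib
    from email.message import EmailMessage

    def send_degradation_alert(component, detail):
        msg = EmailMessage()
        msg["Subject"] = f"Storage platform alert: {component} degraded"
        msg["From"] = "array01@example.gc.ca"
        msg["To"] = "storage-admins@example.gc.ca"
        msg.set_content(detail)
        with smtplib.SMTP("mailhost.example.gc.ca") as smtp:   # placeholder relay
            smtp.send_message(msg)

    # Example call (needs a reachable SMTP relay to actually run):
    # send_degradation_alert("Power supply 2", "PSU 2 reports loss of AC input.")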

2.3.2 Fabric

2.3.2.1 Fibre Channel Switch

The storage platform must operate with 8 Gbps 48 port fibre channel fabric switches, which must be fully supported and warranted by the storage platform Manufacturer. The fibre channel switches must meet the following requirements:

  1. they must operate with fibre channel fabrics and must be capable of full fibre channel zoning across switched fabrics;
  2. they must support a minimum of 512 active enabled unique zones at a time per fibre channel fabric;
  3. they must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  4. they must operate at 8Gb/s and must be fully populated with small form factor pluggable optical media modules for shortwave operation;
  5. they must provide lights or indicators for power and port status for all fibre channel ports;
  6. they must provide a 10/100Mbps or 1Gbps Ethernet interface and must be manageable using TCP/IP as the transport protocol;
  7. they must provide redundant cooling and power;
  8. they must fully comply with the following ANSI T-11 standards:
    1. FC-FS-2 ANSI/INCITS 424:2006
    2. FC-AL-2 INCITS 332:1999
    3. FC-DA INCITS TR-36
    4. FC-SW-4 INCITS 418:2006
    5. FC-GS-5 ANSI INCITS 427:2006
    6. FC-VI INCITS 357:2002
  9. they must support fibre channel class 2 and 3 connections;
  10. they must provide full fabric support as per the ANSI standards specified at 2.3.2.1 (h);
  11. they must support cascading by connecting 2 or more switches together to form a single fabric that is compliant with the ANSI standards specified at 2.3.2.1 (h);
  12. they must include a comprehensive GUI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  13. The GUI interface must either be embedded in the switch or be a WEB or Java-based function accessible using a standard browser;
  14. they must generate SNMP traps in the event of a degraded condition in the switch;
  15. the GUI interface must show the current operational status for all installed hardware components;
  16. the GUI interface must allow configuration of all aspects of the fibre channel switches including:
    1. the name,
    2. the domain ID,
    3. the passwords and user accounts for management,
    4. the IP addressing,
    5. the modes of operation of the ports,
    6. all zone and path information, and
    7. any other parameters critical to the operation of the switch;
  17. The GUI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of each fibre channel port,
    3. the operational speed of each fibre channel port,
    4. the mode of operation of each fibre channel port (e.g. F-port, N-port, E-port), and
    5. the throughput in frames as well as MB per second.
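
NOTE (illustrative only, not part of the requirement): the following minimal Python sketch represents fibre channel zones as sets of member WWNs, illustrating how zoning restricts which devices may communicate and how the 512 active zone figure at 2.3.2.1 (b) acts as a capacity limit. Zone names and WWNs are invented.

    # Illustrative sketch only: zones as sets of member WWNs; two WWNs may
    # communicate only when they share an active zone, and the fabric enforces a
    # ceiling on active zones. Zone names and WWNs are invented.

    MAX_ACTIVE_ZONES = 512

    active_zones = {
        "zone_host_a_array01": {"10:00:00:05:1e:aa:bb:01", "50:01:43:80:12:34:56:78"},
        "zone_host_b_array01": {"10:00:00:05:1e:aa:bb:02", "50:01:43:80:12:34:56:78"},
    }

    def same_zone(wwn_a, wwn_b):
        """True when the two WWNs share at least one active zone."""
        return any(wwn_a in z and wwn_b in z for z in active_zones.values())

    def can_activate_another_zone():
        """True while the fabric remains within its supported active-zone count."""
        return len(active_zones) < MAX_ACTIVE_ZONES

    if __name__ == "__main__":
        print(same_zone("10:00:00:05:1e:aa:bb:01", "50:01:43:80:12:34:56:78"))  # True
        print(same_zone("10:00:00:05:1e:aa:bb:01", "10:00:00:05:1e:aa:bb:02"))  # False
        print(can_activate_another_zone())                                      # True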

2.3.4 NAS Gateway

2.3.4.1 Capacity and Platform

The storage platform must include a Network Attached Storage Gateway. The NAS Gateway must meet the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 3.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution
  2. it must be a discrete and independent device(s) that does not rely upon any components, functionality or software from the base storage platform defined at 3.1; however, the capacity this NAS Gateway will address and share may be provided by the base storage platform defined at 3.1;
  3. it must address and share a minimum of 256TB of usable data storage while also adhering to all other minimums; the usable storage must not be computed through the use of a deduplication feature
  4. it must be fully compatible with and supported by the base storage platform defined at 3.1. Use of this NAS Gateway with the base storage platform must not preclude the base storage platform from also servicing other fibre channel block attached hosts at the same time;
  5. it must include sufficient cooling for a fully populated configuration. All cooling for the NAS Gateway must be redundant and monitored for failures by the NAS Gateway;
  6. it must allow hot swapping of failed cooling fans; and
  7. it must be packaged in an industry-standard 19" rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19" rack.

2.3.4.2 Power

Each Network Attached Storage Gateway must meet the following power requirements:

  1. it must provide sufficient power to operate a fully loaded system with all boards and components installed;
  2. it must be fully redundant, allowing the NAS Gateway to continue operating without interruption in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.3.4.3 Controllers and RAID

Each Network Attached Storage Gateway must utilize capacity that is provided by the base storage platform where the capacity is RAID protected by the base platform.
The NAS gateway may rely on internal drives for booting (operating system/kernel), saving configuration data, or for buffering data; however, the user data must reside on the storage provided by the base storage platform.

2.3.4.4 NAS Processor Unit

Each Network Attached Storage Gateway must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel operating system designed to provide file services to CIFS and NFS clients via the included Ethernet interfaces. The micro-kernel operating system may be either a Windows, Linux or Unix-based operating system, or an FPGA (hardware)-based operating system.
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain 2 separate redundant clustered processor units or "heads" that operate in an Active / Active or Active / Hot Standby fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically;
  4. the processor units must both be attached via a total aggregate minimum of 4 X 10GbE or 4 X 4Gb FC interfaces to the base storage platform; and
  5. the processor units in the NAS Gateway must contain an aggregate minimum of either 12 X 1Gbps or 4 X 10Gbps Ethernet interfaces for TCP/IP client access.
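
NOTE (illustrative only, not part of the requirement): the sketch below models the two-head failover behaviour described at 2.3.4.4 (c): when one processor unit fails, the survivor assumes its peer's IP addresses and continues serving clients. Head names and addresses are invented (the 192.0.2.0/24 documentation range is used).

    # Illustrative sketch only: two clustered NAS heads; on failure the survivor
    # takes over the failed head's service addresses. Names and addresses are invented.

    heads = {
        "head-1": {"alive": True, "addresses": {"192.0.2.11"}},
        "head-2": {"alive": True, "addresses": {"192.0.2.12"}},
    }

    def fail_over(failed):
        """Mark a head failed and move its service addresses to the surviving peer."""
        survivor = next(name for name in heads if name != failed)
        heads[failed]["alive"] = False
        heads[survivor]["addresses"] |= heads[failed]["addresses"]
        heads[failed]["addresses"] = set()

    if __name__ == "__main__":
        fail_over("head-1")
        print(heads["head-2"]["addresses"])   # both service addresses now on head-2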

2.3.4.5 Software and Additional Capabilities

Each Network Attached Storage Gateway must meet the following requirements for software functionality and additional capabilities:

  1. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS or NFS, with no requirement for additional fees or licensing;
  2. it must fully integrate, in native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  3. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files; and
  4. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets.

2.3.4.6 Management

Each Network Attached Storage Gateway must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network;
  3. it must provide GUI-based functionality to:
    1. create and manage volumes and file systems across LUN or RAID sets;
    2. integrate with authentication methods such as Active Directory or LDAP;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors and disk subsystems to gauge the load on those items;
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across either of the 2 processor units as needed and allow an administrator to manually failover file shares if required from 1 processor unit to the other.
  4. the GUI management system must manage and operate both processor units as a single entity, allowing a single session to facilitate all management functions described here.

2.4 Group 4.0 Large Mid-Range (FC)

The following describes the configuration and features of a Large Mid-Range Storage solution.

2.4.1 Storage Platform

2.4.1.1 Capacity and Platform

Each storage platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at either 4Gbps for Fibre Channel (FC) drives or 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as either the FC or SAS disk drives, or
    2. using specialized shelves for these drive types
  5. the available drive options must include at least four (4) from the following list:
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • - drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • - solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. it must accommodate a minimum of 960 hard disk drives;
  7. it must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided); and
  8. it must include lights or an LCD panel for power, activity and fault indications.
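
NOTE (illustrative only, not part of the requirement): the short calculation below shows the raw (pre-RAID, unformatted) capacity implied by the 960-drive minimum at 2.4.1.1 (f) for two drive sizes taken from the option list; the chosen sizes are examples only.

    # Illustrative arithmetic only: raw (pre-RAID) capacity at the 960-drive minimum
    # for two example drive sizes from the option list.
    DRIVE_SLOTS = 960

    for size_tb in (0.9, 4.0):            # 900GB FC/SAS and 4TB NL-SAS examples
        print(f"{size_tb} TB drives -> {DRIVE_SLOTS * size_tb:,.0f} TB raw")
    # 0.9 TB drives -> 864 TB raw
    # 4.0 TB drives -> 3,840 TB raw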

2.4.1.2 Cooling

Each storage platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated cabinet at the mandatory minimum storage capacity;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.4.1.3 Drives and Shelves

Each storage platform must meet the following drives and shelves requirements:

  1. the hard disk drives must be dual ported and must operate at a minimum of either 4Gbps for Fibre Channel (FC) or 6Gbps for Serial Attach SCSI - 2 (SAS-2);
  2. it must provide a minimum of 4 active connections to the mandatory 960 hard disk drives. Bandwidth must be allocated evenly to the total number of physical drives over several channels;
  3. a channel failure must not interrupt access to attached disk drives;
  4. it must allow hot addition of storage shelves without needing to power the storage platform down and without interrupting access to existing drives and redundant arrays of inexpensive disk (RAID) groups;
  5. it must include as many back-end channels as necessary to support all the back-end shelves of disks so that a shelf component replacement or failure does not interrupt access to adjacent shelves in the platform.
  6. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  7. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration; and
  8. it must allow the allocation of hard disk drives as hot spares and/or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set.

2.4.1.4 Power

Each storage platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach; and
  3. each AC power supply must connect independently to a discrete AC power source.

2.4.1.5 Controllers

Each storage platform must meet the following controller requirements:

  1. it must include dual redundant active/active storage controllers for handling both I/O to the attached host systems as well as disk I/O and RAID functionality;
  2. the controllers must be redundant, so that the surviving controller automatically recovers from controller subsystem failures and service to attached hosts continues without disruption;
  3. the storage platform must have access to all 960 of the mandatory hard disk drives in order to assign, configure, protect and share those drives;
  4. the storage controllers must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
  5. it must allow the creation and addressing of up to 4096 simultaneous logical drives, where a logical drive is the logical unit of capacity presented to a client host; and
  6. it must simultaneously support all RAID types from 2.4.1.5 (d) within the storage platform.
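
NOTE (illustrative only, not part of the requirement): the sketch below compares usable capacity for one example 8-drive group of 900GB drives under the RAID layouts permitted at 2.4.1.5 (d). It is simplified and ignores spares, formatting overhead and vendor-specific metadata.

    # Illustrative sketch only: usable capacity of one 8-drive group of 900GB drives
    # under the permitted RAID layouts. Ignores spares and formatting overhead.
    DRIVES, SIZE_GB = 8, 900

    layouts = {
        "RAID5 (single parity)": DRIVES - 1,        # one drive's worth of parity
        "RAID6 / RAID-DP (dual parity)": DRIVES - 2,
        "RAID1+0 (striped mirrors)": DRIVES // 2,
    }

    for name, data_drives in layouts.items():
        print(f"{name}: {data_drives * SIZE_GB} GB usable")
    # RAID5 (single parity): 6300 GB usable
    # RAID6 / RAID-DP (dual parity): 5400 GB usable
    # RAID1+0 (striped mirrors): 3600 GB usable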

2.4.1.6 Cache

Each storage platform must meet the following cache requirements:

  1. it must include a total of at least 64GB of dedicated I/O cache;
  2. the cache on the storage controller must perform both read and write I/O operations;
  3. the write cache must be mirrored cache; and
  4. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours, after which the cache must complete its write operations to disk when power is restored;
    2. all pending write data must be automatically written to disk before the disk system is powered off, with the platform providing sufficient battery power to complete this function; or
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array.

2.4.1.7 I/O Ports and Connectivity

Each storage platform must meet the following requirements for I/O ports and connectivity:

  1. it must include a minimum of 2 storage controllers that may be replaced in the event of a controller failure;
  2. it must provide a minimum of 16 fibre channel ports for connectivity to Intel and Open System host computers;
  3. all 16 fibre channel ports must be independent ports operating at 8Gbps each and support both point-to-point and loop modes of operation;
  4. each of the 16 fibre ports must support full fabric login and must have a unique fibre channel World Wide Name;
  5. it must provide simultaneous connectivity to any combination of 1000 or more Intel and UNIX hosts using dual fibre channel host bus adapters in each host;
  6. it must provide the necessary software to support all supported operating systems;
  7. it must provide "no single point of failure" connectivity options, for both failover as well as load balancing, under all of the mandatory operating system environments. This may be provided using add-on failover software packages or using native Operating System facilities; and
  8. it must provide an option of two (2) native 10GbE connections for either Fibre Channel over Ethernet (FCoE) host connectivity that meets the ANSI T11 FC-BB-5 standard, or Internet Small Computer System Interface (iSCSI) host connectivity that meets the RFC 3720 standard, for the encapsulation of FC or SCSI packets over full duplex, lossless Ethernet networks.

The FCoE implementation must be compliant with the following IEEE standards:

  1. 802.1Qbb;
  2. 802.1Qaz which defines:
    1. enhanced transmission selection (ETS); and
    2. data center bridging exchange (DCBX).

The iSCSI implementation must be compliant with the following IETF standards:

  1. iSCSI Qualified Name (IQN) as documented in RFC 3720
  2. iSCSI initiator and security authentication using the CHAP protocol
  3. Internet Storage Name Service (iSNS) as documented in RFC 4171

2.4.1.8 Hosts

Each storage platform must meet the following requirements for host connectivity:

  1. it must connect to Intel and AMD-based host computers running the following:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X;
  2. it must connect to the following UNIX and Open Systems hosts simultaneously, in addition to the previously listed Intel systems:
    1. Oracle DELETE 10 systems;
    2. HP-UX 11i v.X systems;
    3. IBM AIX v.6X and v.7X systems;
  3. Support of additional platform types and operating systems is desirable, but not mandatory.

2.4.1.9 Clustering

Each storage platform must meet the following requirements for clustering:

  1. it must directly support clustering under the following host operating environments:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X with shared access to the same logical unit numbers (LUNs) for Vmotion; and
  2. it must directly support clustering under the following host operating environments:
    1. MC/Serviceguard for HP-UX;
    2. PowerHA for AIX; and
    3. Oracle Solaris Cluster for Solaris

2.4.1.10 Software and Additional Capabilities

The storage platform must include the following software functionalities and additional capabilities. Furthermore, these must be entirely storage platform-based functionality and must not require any software or assistance from host systems on the SAN:

  1. it must provide LUN-masking functionality. This means it must mask or limit visibility of logical drive configurations within the storage platform to only specific hosts connected to the storage platform;
  2. it must synchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  3. it must asynchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  4. it must perform up to 4 concurrent host-less point-in-time snapshot copies of any logical volume that may be reassigned to any other host on the SAN;
  5. it must perform up to 2 concurrent host-less full block data copies of any logical volume that may be reassigned to any other host on the SAN;
  6. it must allow online firmware upgrades to be made without disrupting the operation of the platform; and
  7. it must perform sub-LUN auto-tiering of data written to the storage platform.

2.4.1.11 Management

The storage platform must provide the following management capabilities:

  1. it must provide a comprehensive graphical user interface (GUI) based management system that allows real-time monitoring of all components in the platform and reports degradation of components and failures;
  2. the GUI interface must either be a Windows-based application included with the system or a WEB or Java-based embedded function accessible using a standard browser;
  3. it must connect to an IP-based network either through a direct Ethernet connection on the platform or through an in-band connection via a fibre-attached host;
  4. it must issue SNMP traps or SMTP mail in the event of device degradation or failure;
  5. the GUI interface must show all installed hardware and its current operational status; and
  6. it must monitor the full performance of the storage array, including:
    1. disk, LUN or RAID group I/O’s per second for both read and write requests;
    2. cache utilization and hit rate statistics; and
    3. queuing or latency information for disks, arrays, LUNs or RAID sets.

2.4.2 Fabric

2.4.2.1 Fibre Channel Switch

The storage platform must operate with 8 Gbps 64 port fibre channel fabric switches, which must be fully supported and warranted by the storage platform Manufacturer. The fibre channel switches must meet the following requirements:

  1. they must operate with fibre channel fabrics and must be capable of full fibre channel zoning across switched fabrics;
  2. they must support a minimum of 512 active enabled unique zones at a time per fibre channel fabric;
  3. they must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  4. they must operate at 8Gb/s and must be fully populated with small form factor pluggable optical media modules for shortwave operation;
  5. they must provide lights or indicators for power and port status for all fibre channel ports;
  6. they must provide a 10/100Mbps or 1Gbps Ethernet interface and must be manageable using TCP/IP as the transport protocol;
  7. they must provide redundant cooling and power;
  8. they must fully comply with the following ANSI T-11 standards:
    1. FC-FS-2 ANSI/INCITS 424:2006
    2. FC-AL-2 INCITS 332:1999
    3. FC-DA INCITS TR-36
    4. FC-SW-4 INCITS 418:2006
    5. FC-GS-5 ANSI INCITS 427:2006
    6. FC-VI INCITS 357:2002
  9. They must support fibre channel class 2 and 3 connections;
  10. they must provide full fabric support as per the ANSI standards specified at 2.4.2.1 (h);
  11. they must support cascading by connecting 2 or more switches together to form a single fabric that is compliant with the ANSI standards specified at 2.4.2.1 (h);
  12. they must include a comprehensive GUI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  13. The GUI interface must either be embedded in the switch or be a WEB or Java-based function accessible using a standard browser;
  14. they must generate SNMP traps in the event of a degraded condition in the switch;
  15. the GUI interface must show the current operational status for all installed hardware components;
  16. the GUI interface must allow configuration of all aspects of the fibre channel switches including:
    1. the name,
    2. the domain ID,
    3. the passwords and user accounts for management,
    4. the IP addressing,
    5. the modes of operation of the ports,
    6. all zone and path information, and
    7. any other parameters critical to the operation of the switch;
  17. The GUI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of each fibre channel port,
    3. the operational speed of each fibre channel port,
    4. the mode of operation of each fibre channel port (e.g. F-port, N-port, E-port), and
    5. the throughput in frames as well as MB per second.

2.4.4 NAS Gateway

2.4.4.1 Capacity and Platform

The storage platform must include a Network Attached Storage Gateway. The NAS Gateway must meet the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 4.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution
  2. it must be a discrete and independent device(s) that does not rely upon any components, functionality or software from the base storage platform defined at 4.1; however, the capacity this NAS Gateway will address and share may be provided by the base storage platform defined at 4.1;
  3. it must address and share a minimum of 512TB of usable data storage while also adhering to all other minimums; the usable storage must not be computed through the use of a deduplication feature
  4. it must be fully compatible with and supported by the base storage platform defined at 4.1. Use of this NAS Gateway with the base storage platform must not preclude the base storage platform from also servicing other fibre channel block attached hosts at the same time;
  5. it must include sufficient cooling for a fully populated configuration. All cooling for the NAS Gateway must be redundant and monitored for failures by the NAS Gateway;
  6. it must allow hot swapping of failed cooling fans; and
  7. it must be packaged in an industry-standard 19" rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19" rack.

2.4.4.2 Power

Each Network Attached Storage Gateway must meet the following power requirements:

  1. it must provide sufficient power to operate a fully loaded system with all boards and components installed;
  2. it must be fully redundant, allowing the NAS Gateway to continue operating without interruption in the event of a power supply failure, until service can be performed. Redundancy may be achieved either by using:
    1. a second power supply, or
    2. an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.4.4.3 Controllers and RAID

Each Network Attached Storage Gateway must utilize capacity that is provided by the base storage platform where the capacity is RAID protected by the base platform. The NAS gateway may rely on internal drives for booting (operating system/kernel), saving configuration data, or for buffering data; however, the user data must reside on the storage provided by the base storage platform.

2.4.4.4 NAS Processor Unit

Each Network Attached Storage Gateway must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel operating system designed to provide file services to CIFS and NFS clients via the included Ethernet interfaces. The micro-kernel operating system may be either a Windows, Linux or Unix-based operating system, or an FPGA (hardware)-based operating system.
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain 2 separate redundant clustered processor units or "heads" that operate in an Active / Active or Active / Hot Standby fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically;
  4. the processor units must both be attached via a total aggregate minimum of 8 X 4Gbps fibre channel, 4 X 8Gbps fibre channel, or 4 X 10Gbps Ethernet interfaces to the base storage platform; and
  5. the processor units in the NAS Gateway must contain an aggregate minimum of 4 X 10Gbps Ethernet interfaces for TCP/IP client access;

2.4.4.5 Software and Additional Capabilities

Each Network Attached Storage Gateway must meet the following requirements for software functionality and additional capabilities:

  1. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS or NFS, with no requirement for additional fees or licensing;
  2. it must fully integrate, in native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  3. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files; and
  4. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets.

2.4.4.6 Management

Each Network Attached Storage Gateway must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network;
  3. it must provide GUI-based functionality to:
    1. create and manage volumes and file systems across LUN or RAID sets;
    2. integrate with authentication methods such as Active Directory or LDAP;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors and disk subsystems to gauge the load on those items;
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across either of the 2 processor units as needed and allow an administrator to manually failover file shares if required from 1 processor unit to the other.
  4. the GUI management system must manage and operate both processor units as a single entity, allowing a single session to facilitate all management functions described here.

2.5 Group 5.0 Large Enterprise (FC)

The following describes the configuration and features of a Large Enterprise Storage solution.

2.5.1 Storage Platform

2.5.1.1 Capacity and Platform

Each storage platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at either 4Gbps for Fibre Channel (FC) drives or 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as either the FC or SAS disk drives, or
    2. using specialized shelves for these drive types;
  5. the available drive options must include at least four (4) from the following list:
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • - drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • - drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • - solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. it must accommodate a minimum of 1536 hard disk drives;
  7. it must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided) or installed in a rack specifically designed for the storage system; and
  8. it must include lights or an LCD panel for power, activity and fault indications.

2.5.1.2 Cooling

Each storage platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated cabinet at the mandatory minimum storage capacity;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.5.1.3 Drives and Shelves

Each storage platform must meet the following drives and shelves requirements:

  1. the hard disk drives must be dual ported and must operate at a minimum of either 4Gbps for Fibre Channel (FC) or 6Gbps for Serial Attach SCSI - 2 (SAS-2);
  2. it must provide a minimum of 4 active connections to the mandatory 1536 hard disk drives. Bandwidth must be allocated evenly to the total number of physical drives over several channels;
  3. a channel failure must not interrupt access to attached disk drives;
  4. it must allow hot addition of storage shelves without needing to power the storage platform down and without interrupting access to existing drives and RAID groups;
  5. it must include as many back-end channels as necessary to support all the back-end shelves of disks so that a shelf component replacement or failure does not interrupt access to adjacent shelves in the platform;
  6. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  7. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration; and
  8. it must allow the allocation of hard disk drives as hot spares and/or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set.

2.5.1.4 Power

Each storage platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either through:
    1. use of a second power supply, or
    2. through an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.5.1.5 Controllers

Each storage platform must meet the following controller requirements:

  1. it must provide a multiprocessor architecture that provides multiple processors for handling both I/O to the attached host systems as well as disk I/O and RAID functionality. There must be sufficient processing power within the storage platform to handle the load of the hosts and to manage the 1536 mandatory hard disk drives in the storage array;
  2. it must use separate ports for front end SAN connectivity versus back end connectivity and must support full redundancy and hot swapping of all controllers;
  3. it must provide redundancy, so that processor subsystem failures can be recovered by surviving processors which can continue to service the attached hosts without disruption;
  4. the storage platform must have access to all 1536 of the mandatory hard disk drives in order to assign, configure, protect and share those drives. Implementations involving small discrete stacked storage units with individual controllers for smaller groups of drives will not be considered compliant with this requirement;
  5. the storage controllers must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
  6. it must simultaneously support all RAID types from 2.5.1.5(e) within the storage platform; and
  7. it must allow the creation and allocation of a minimum of 64000 logical units to connected hosts.

2.5.1.6 Cache

Each storage platform must meet the following cache requirements:

  1. it must include at least 384GB of dedicated I/O cache that may be shared between all storage processors. It is understood and accepted that a small portion of this memory is used for storing platform specific software as required;
  2. it must perform both read and write I/O operations;
  3. the write cache must be mirrored cache;
  4. it must be serviceable without disruption to the operation of the storage platform so that failed portions of the cache may be replaced hot; and
  5. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours, after which the cache must complete its write operations to disk when power is restored;
    2. all pending write data must be automatically written to disk before the disk system is powered off, with the platform providing sufficient battery power to complete this function; or
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array.

2.5.1.7 I/O Ports and Connectivity

Each storage platform must meet the following requirements for I/O ports and connectivity:

  1. it must utilize a system of installable cards and slots for populating the system with the customer desired combination of fibre channel and FICON ports;
  2. it must provide a minimum of 32 fibre channel ports for connectivity to Intel and Open System host computers;
  3. it must support WAN connectivity for the purposes of data mirroring to another like storage platform at a separate physical location through all of the following:
    1. Extended SAN fibre channel, and
    2. Fibre channel over IP (FCIP);
  4. it must be able to provide a minimum of 32 FICON ports for mainframe connectivity;
  5. each of the 32 fibre channel ports must meet the following requirements:
    1. each must operate at a minimum of 8Gbps;
    2. each must be compliant with ANSI T11 standards for fibre channel;
    3. each must support full fabric login;
    4. each must have its own unique fibre channel world wide name (WWN);
    5. each must act as independent ports providing aggregate separate bandwidth to host computers;
    6. each must have the ability to be configured for active/active failover with appropriate host based load balancing and failover software;
  6. it must provide simultaneous connectivity to 2000 or more Intel hosts using dual fibre channel host bus adapters in each host;
  7. it must provide the necessary software to support all supported operating systems;
  8. it must provide "no single point of failure" connectivity options, for both failover as well as load balancing, under all of the mandatory operating system environments. This may be provided using add-on failover software packages or using native Operating System facilities, such as HP-UX PVLinks; and
  9. DELETE

2.5.1.8 Hosts

Each storage platform must meet the following requirements for host connectivity:

  1. it must connect to Intel-based host computers running the following:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X;
  2. it must connect to the following UNIX and Open Systems hosts simultaneously, in addition to the previously listed Intel systems:
    1. Oracle DELETE 10 systems;
    2. HP-UX 11i v.X systems;
    3. IBM AIX v.6X and v.7X systems;
  3. it must connect to IBM AS400 systems natively or via VIOS;
  4. it must support FICON mainframe connectivity and must emulate DELETE 3390-9, 3390-27 and 3390-54 modes;
  5. support of additional platform types and operating systems is desirable but not mandatory.

2.5.1.9 Clustering

Each storage platform must meet the following requirements for clustering:

  1. it must directly support clustering under the following host operating environments:
    1. Windows Server 2008 R2;
    2. Red Hat Enterprise Linux 6 or SUSE Linux Enterprise Server 11 in 32-bit and 64-bit configurations; and
    3. VMWare ESX Server 5.X with shared access to the same logical unit numbers (LUNs) for Vmotion;
  2. it must directly support clustering under the following host operating environments:
    1. MC/Serviceguard for HP-UX;
    2. PowerHA for AIX; and
    3. Oracle Solaris Cluster for Solaris

2.5.1.10 Software and Additional Capabilities

The storage platform must provide the following software functionalities and additional capabilities. Furthermore, these must be entirely storage platform-based functionality and must not require any software or assistance from host systems on the SAN:

  1. it must provide LUN-masking functionality, meaning it masks or limits visibility of logical drive configurations within the storage platform to only specific hosts connected to the storage platform;
  2. it must synchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  3. it must asynchronously replicate logical volumes remotely via an extended network backbone, which could include TCP/IP or fibre channel;
  4. it must perform up to 8 concurrent host-less point-in-time snapshot copies of any logical volume that may be reassigned to any other host on the SAN;
  5. it must perform up to 8 concurrent host-less full block data copies of any logical volume that may be reassigned to any other host on the SAN;
  6. it must allow online firmware upgrades to be made without disrupting the operation of the platform; and
  7. it must perform sub-LUN auto-tiering of data written to the storage platform.

2.5.1.11 Management

The storage platform must provide the following management capabilities:

  1. it must provide a comprehensive GUI-based management system that allows real-time monitoring of all components in the platform and reports degradation of components and failures;
  2. several management packages may be required in order to provide the required functionality. This is acceptable provided that these packages may all be executed from one dedicated management console system and that the packages coexist and function properly together;
  3. the GUI interface must be a Windows-based application included with the system or may be a WEB or Java-based embedded function accessible using a standard browser;
  4. it must connect to an IP-based network either through a direct Ethernet connection on the platform or through an in-band connection via a fibre-attached host;
  5. it must provide the ability to issue SNMP traps or SMTP mail in the event of a storage platform device degradation or failure;
  6. the GUI interface must offer complete visibility to all installed hardware and its current operational status;
  7. it must monitor the full performance of the storage array, including:
    1. disk, LUN or RAID group I/O’s per second for both read and write requests;
    2. cache utilization and hit rate statistics;
    3. queuing or latency information for disks, arrays, LUNs or RAID sets; and
    4. I/O throughput statistics on a per-interface basis for fibre channel, FICON and remote link connections to remote replication storage platforms.
  8. the GUI interface must also provide the following:
    1. it must explicitly identify bottlenecks allowing a storage administrator to take corrective action;
    2. it must allow configuration of all aspects of the storage platform including the controllers, cache, interfaces, drives and RAID configurations as well as logical drives and associated permissions;
    3. it must be able to update all firmware and software resident on the storage platform as an integrated function and must be able to activate this new software non-disruptively for all dual attached host platforms;
    4. it must have integrated control for the mandatory snapshot and remote replication features of the storage platform allowing the creation, assignment, configuration and destruction of snapshots and remote LUN replicas;
    5. it must have a facility for graphically displaying the SAN and FICON connections from attached hosts to their target volumes to clearly illustrate to a storage administrator the relationship between hosts and LUNs;
    6. it must have provisions for the creation, configuration, allocation and management of 3390 mainframe type volumes for mainframes;
    7. it must manage both LUN allocation to hosts as well as switch zone and path configuration from a single interface for the storage platform and all fibre channel switches. This may be supplied as a product separate from the storage platform management tool, provided that both are compatible and that it has launch support for the storage platform management tool; and
    8. it must feature full SAN topology mapping showing all attached systems, switches and fibre connected hosts as well as physical and logical path information including fibre channel zoning;
  9. it must have an array-based LUN-masking feature that allows assigning explicit permissions between specific logical drives and specified SAN attached hosts. This must be configured and enforced at the storage array; and
  10. it must include the array-based ability to expand RAID sets or logical volumes presented to hosts. This must be available through the GUI interface as part of the RAID and logical volume management capabilities. Please note that this is referring to the ability to expand the size of RAID sets and LUNs at the hardware level and does not require a utility on the host to manipulate this space at the operating system level.
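
NOTE (illustrative only, not part of the requirement): the following minimal Python sketch models the array-side expansion described at 2.5.1.11 (j): a logical volume grows in place while its LUN number and WWN are unchanged, so host addressing is unaffected. The names and identifiers are invented; resizing the host file system is out of scope, as the requirement notes.

    # Illustrative sketch only: a logical volume is expanded at the array while its
    # LUN number and WWN stay the same, so host addressing is unchanged.
    # Identifiers are invented; growing the host file system is a separate step.

    class LogicalVolume:
        def __init__(self, lun_id, wwn, size_gb):
            self.lun_id, self.wwn, self.size_gb = lun_id, wwn, size_gb

        def expand(self, additional_gb):
            """Grow the volume in place; identity and addressing do not change."""
            if additional_gb <= 0:
                raise ValueError("expansion must add capacity")
            self.size_gb += additional_gb

    if __name__ == "__main__":
        vol = LogicalVolume(lun_id=12, wwn="60:06:01:60:3b:a0:2e:00", size_gb=500)
        vol.expand(250)
        print(vol.lun_id, vol.wwn, vol.size_gb)   # 12 60:06:01:60:3b:a0:2e:00 750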

2.5.2 Fabric

The following describes the configuration and features of fibre channel switches.

2.5.2.1 Director Class Fibre Channel Switches

The storage platform must operate with 256 and 384 port director class fibre channel fabric switches, which must be fully supported and warranted by the storage platform Manufacturer. The director class fibre channel switches must provide the following capabilities:

  1. the full 256 and 384 ports must be connected through a non-blocking backplane architecture;
  2. they must allow all ports to be simultaneously active and send data without traversing any hops or Inter Switch Links, either obvious or embedded;
  3. they must operate with fibre channel fabrics and must be capable of full fibre channel zoning across switched fabrics;
  4. they must support a minimum of 2048 active enabled unique zones at a time per fibre channel fabric;
  5. they must be provided in a rack mountable configuration;
  6. they must operate at 8Gb/s and must be fully populated with small form factor pluggable optical media modules for shortwave operation;
  7. they must support optional long wave small form factor pluggable fibre channel optical media modules or blades, with these modules preinstalled for creating long distance connections to a minimum of 25KM without repeaters or extenders;
  8. they must provide lights or indicators for power and port status for all fibre channel ports;
  9. they must provide a 10/100Mbps or 1Gbps Ethernet interface and must be manageable using TCP/IP as the transport protocol;
  10. they must provide the following redundant components:
    1. cooling and power,
    2. memory and processors,
    3. fibre ports and associated port circuitry connected into the backplane;
  11. they must fully comply with the following ANSI T-11 standards:
    1. FC-FS-2 ANSI/INCITS 424:2006
    2. FC-AL-2 INCITS 332:1999
    3. FC-DA INCITS TR-36
    4. FC-SW-4 INCITS 418:2006
    5. FC-GS-5 ANSI INCITS 427:2006
    6. FC-VI INCITS 357:2002
  12. they must support fibre channel class 2 and 3 connections;
  13. they must provide full fabric support as per the ANSI standards specified at 2.5.2.1 (k);
  14. they must support cascading by connecting 16 or more switches together to form a single fabric that is compliant with the ANSI standards specified at 2.5.2.1 (k);
  15. they must include a comprehensive GUI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  16. The GUI interface must either be embedded in the switch or be a WEB or Java-based function accessible using a standard browser;
  17. they must provide full failure monitoring for all components and must be thermally monitored;
  18. they must provide alerting via SNMP and the GUI console to advise a storage administrator of a failure or degraded condition;
  19. the GUI interface must show the current operational status for all installed hardware components;
  20. the GUI interface must allow configuration of all aspects of the fibre channel switches including:
    1. the name,
    2. the domain ID,
    3. the passwords and user accounts for management,
    4. the IP addressing,
    5. the modes of operation of the ports,
    6. all zone and path information, and
    7. any other parameters critical to the operation of the switch.
  21. the GUI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of each fibre channel port,
    3. the operational speed of each fibre channel port,
    4. the mode of operation of each fibre channel port (e.g. F-port, N-port, E-port), and
    5. the throughput in frames as well as MB per second.
  22. they must accept a new firmware or microcode upgrade non-disruptively.
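
NOTE (illustrative only, not part of the requirement): the short calculation below estimates how many device-facing ports remain when sixteen 384-port directors are cascaded into one fabric as contemplated at 2.5.2.1 (n). The number of inter-switch links per switch pair is an assumption made for the example, not a requirement of this annex.

    # Illustrative arithmetic only: device-facing ports left when 16 directors of
    # 384 ports each are cascaded into one fabric. The ISL count per switch pair is
    # an assumption for the example (full mesh, 2 ISLs per pair).

    def usable_ports(switches, ports_per_switch, isls_per_pair):
        # Each ISL consumes one port at each end: pairs * ISLs per pair * 2 ports.
        isl_ports = switches * (switches - 1) * isls_per_pair
        return switches * ports_per_switch - isl_ports

    if __name__ == "__main__":
        print(usable_ports(switches=16, ports_per_switch=384, isls_per_pair=2))  # 5664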

2.5.3 Virtualization

2.5.3.1 Virtualization

The storage platform must include a virtualization solution that is manufactured or OEM rebranded, warranted, supported and maintained by the Manufacturer of the base storage platform defined at 5.1. The virtualization solution must provide the following capabilities:

  1. it must be a discrete and independent device(s) that does not rely upon any components, functionality or software from the base storage platform defined at 5.1;
  2. it must be packaged in an industry-standard 19” rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19” rack;
  3. it must have redundant and hot swappable power and cooling for all electronic components of the solution. Fully redundant pairs of equipment with fixed cooling and power that allow swapping of an entire portion of the solution without interrupting host access to virtualized storage are acceptable;
  4. it must include 32 x 8Gbps fibre channel ports for connections to a client host side SAN fabric and the mandatory base storage platform;
  5. it must support the base storage platform from the offeror and must simultaneously support third party mainstream storage platforms from at least 5 of the following Manufacturers:
    1. Dell,
    2. EMC,
    3. Hitachi Data Systems,
    4. HP,
    5. IBM,
    6. Network Appliance, and
    7. Oracle.
  6. it must provide the following capabilities by using the storage LUNs from the mandatory base platform(s) and third-party storage platforms simultaneously (collectively known as external LUNs; an illustrative pooling sketch follows this list):
    1. thin provisioning capabilities on external LUNs, and
    2. creation of virtual pools of storage from external LUNs;
  7. it must present block storage from all the supported additional third party storage platforms in the form of LUNs to the mandatory supported host platforms. The underlying details of the capacity from the base and third party systems must be masked from the host systems so that LUNs may be comprised of any or all of the underlying capacity;
  8. after loading the appropriate device driver, it must allow full block copies of LUNs to be created over local SAN connections between the supported base and third party systems and the dynamic relocation of LUNs without interrupting host access, without a loss of data and without changing the addressing of those LUNs to the hosts; and
  9. it must allow synchronous and/or asynchronous copies of LUNs to be created over distance-extended SAN connections between the supported base platform and storage systems at remote location for the purpose of disaster recovery between any of the supported storage platforms.
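
The following is a minimal, non-normative Python sketch of the pooling and thin-provisioning behaviour described in items 6 and 7 above. All class and attribute names are hypothetical and do not describe any vendor's API.

  # Illustration only: aggregating external LUNs into a virtual pool and
  # carving thin-provisioned volumes from that pool.
  class ExternalLUN:
      def __init__(self, array, lun_id, size_gb):
          self.array, self.lun_id, self.size_gb = array, lun_id, size_gb

  class VirtualPool:
      def __init__(self, name):
          self.name = name
          self.luns = []          # external LUNs from base and third-party arrays
          self.volumes = []       # thin volumes presented to hosts

      def add_lun(self, lun):
          self.luns.append(lun)

      @property
      def physical_capacity_gb(self):
          return sum(l.size_gb for l in self.luns)

      def create_thin_volume(self, name, virtual_size_gb):
          # Thin provisioning: the virtual size may exceed the space actually
          # consumed; physical blocks are allocated from the pool on first write.
          vol = {"name": name, "virtual_gb": virtual_size_gb, "allocated_gb": 0}
          self.volumes.append(vol)
          return vol

  pool = VirtualPool("pool01")
  pool.add_lun(ExternalLUN("base-array", "0001", 2048))
  pool.add_lun(ExternalLUN("third-party-array", "00A7", 4096))
  vol = pool.create_thin_volume("host_lun_17", virtual_size_gb=10_240)
  print(pool.physical_capacity_gb, vol["virtual_gb"])  # 6144 GB physical, 10240 GB virtual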

2.5.4 NAS Gateway

2.5.4.1 Capacity and Platform

The storage platform must include a Network Attached Storage Gateway. The NAS Gateway must meet the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 5.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution
  2. it must be a discrete and independent device(s) that does not rely upon any components, functionality or software from the base storage platform defined at 5.1; however, the capacity this NAS Gateway will address and share may be provided by the base storage platform defined at 5.1;
  3. it must address and share a minimum of 1PB of usable data storage while also adhering to all other minimums; the usable storage must not be computed through the use of a deduplication feature
  4. it must be fully compatible with and supported by the base storage platform defined at 5.1. Use of this NAS Gateway with the base storage platform must not preclude the base storage platform from also servicing other fibre channel block attached hosts at the same time;
  5. it must include sufficient cooling for a fully populated configuration. All cooling for the NAS Gateway must be redundant and monitored for failures by the NAS Gateway;
  6. it must allow hot swapping of failed cooling fans; and
  7. it must be packaged in an industry-standard 19" rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19" rack.

2.5.4.2 Power

Each Network Attached Storage Gateway must meet the following power requirements:

  1. it must provide sufficient power to operate a fully loaded system with all boards and components installed;
  2. it must be fully redundant allowing the NAS Gateway to continue operation without interruption in the event of a power supply failure, until service can be performed. Redundancy may be achieved either through:
    1. use of a second power supply, or
    2. through an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.5.4.3 Controllers and RAID

Each Network Attached Storage Gateway must utilize capacity that is provided by the base storage platform where the capacity is RAID protected by the base platform.
The NAS Gateway may rely on internal drives for booting (operating system/kernel), saving configuration data, or for buffering data; however, the user data must reside on the storage provided by the base storage platform.

2.5.4.4 NAS Processor Unit

Each Network Attached Storage Gateway must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel operating system designed for providing file services to CIFS and NFS via the included Ethernet interfaces. The micro-kernel operating system may be either a Windows, Linux, Unix-based operating system or FPGA (hardware)-based operating system.
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain 2 separate redundant clustered processor units or "heads" that operate in an Active / Active or Active / Hot Standby fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically (an illustrative failover sketch follows this list);
  4. the processor units must both be attached via a total aggregate minimum of 8 X 4Gbps fibre channel, 8 X 8Gbps fibre channel, or 6 X 10Gbps Ethernet interfaces to the base storage platform; and
  5. the processor units in the NAS Gateway must contain an aggregate minimum of 4 X 10Gbps and 8 X 1Gbps Ethernet interfaces for TCP/IP client access.
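
The following is a non-normative Python sketch of the head-failover behaviour described in item 3 above, where the surviving processor unit assumes the IP address and identity of the failed unit. Names, addresses, and the health-check mechanism are assumptions for illustration only.

  # Illustration only: surviving NAS head takes over the identity of a failed head.
  class NasHead:
      def __init__(self, name, ip):
          self.name, self.ip = name, ip
          self.healthy = True
          self.served_identities = [(name, ip)]   # identities this head answers for

  class NasCluster:
      def __init__(self, head_a, head_b):
          self.heads = [head_a, head_b]

      def monitor_and_failover(self):
          """If one head fails, the survivor takes over its name and IP."""
          failed = [h for h in self.heads if not h.healthy]
          survivors = [h for h in self.heads if h.healthy]
          if failed and survivors:
              survivor = survivors[0]
              for h in failed:
                  for identity in h.served_identities:
                      if identity not in survivor.served_identities:
                          survivor.served_identities.append(identity)
                  h.served_identities = []
          return survivors

  a = NasHead("head-a", "10.0.0.11")
  b = NasHead("head-b", "10.0.0.12")
  cluster = NasCluster(a, b)
  a.healthy = False
  cluster.monitor_and_failover()
  print(b.served_identities)  # head-b now also answers for head-a / 10.0.0.11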

2.5.4.5 Software and Additional Capabilities

Each Network Attached Storage Gateway must meet the following requirements for software functionality and additional capabilities:

  1. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS or NFS with no requirement for additional fees or licensing;
  2. it must fully integrate, in native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  3. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files (an illustrative snapshot sketch follows this list);
  4. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets; and
  5. it must support asynchronous file level replication over TCP/IP to another NAS Gateway of the same type for data recovery or data distribution purposes.
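
The following is a purely conceptual Python sketch of the point-in-time snapshot use case in item 3 above: an administrator recovers a deleted file from an earlier snapshot. The file system model is an assumption for illustration only and does not represent any vendor's implementation.

  # Illustration only: snapshot a namespace, delete a file, then restore it.
  import copy

  class FileSystem:
      def __init__(self):
          self.files = {}      # path -> contents
          self.snapshots = {}  # snapshot name -> frozen copy of the namespace

      def snapshot(self, name):
          self.snapshots[name] = copy.deepcopy(self.files)

      def restore_file(self, snapshot_name, path):
          self.files[path] = self.snapshots[snapshot_name][path]

  fs = FileSystem()
  fs.files["/share/report.docx"] = b"Q3 report"
  fs.snapshot("hourly-0900")
  del fs.files["/share/report.docx"]          # user deletes the file
  fs.restore_file("hourly-0900", "/share/report.docx")
  print(fs.files["/share/report.docx"])       # b'Q3 report'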

2.5.4.6 Management

Each Network Attached Storage Gateway must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network;
  3. it must provide GUI-based functionality to perform the following:
    1. create and manage volumes and file systems across LUN or RAID sets;
    2. integrate with authentication methods such as Active Directory or LDAP;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors and disk subsystems to gauge the load on those items
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across either of the 2 processor units as needed and allow an administrator to manually failover file shares if required from 1 processor unit to the other (an illustrative share-placement sketch follows this list);
  4. the GUI management system must manage and operate both processor units as a single entity, allowing a single session to facilitate all management functions described here.
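
The following is a non-normative Python sketch of the share load-balancing and manual-failover behaviour in item 3, sub-item 7 above. The round-robin placement policy and the head names are assumptions for illustration only.

  # Illustration only: spreading file shares across two heads and moving one manually.
  class ShareMap:
      def __init__(self, heads=("head-a", "head-b")):
          self.heads = list(heads)
          self.assignments = {}   # share name -> head

      def add_share(self, share):
          # Simple round-robin placement: assign to the least-loaded head.
          counts = {h: list(self.assignments.values()).count(h) for h in self.heads}
          target = min(self.heads, key=lambda h: counts[h])
          self.assignments[share] = target
          return target

      def manual_failover(self, share):
          current = self.assignments[share]
          other = [h for h in self.heads if h != current][0]
          self.assignments[share] = other
          return other

  m = ShareMap()
  for s in ("finance", "engineering", "hr"):
      m.add_share(s)
  print(m.assignments)            # shares spread across head-a and head-b
  print(m.manual_failover("hr"))  # 'hr' moved to the other head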

2.6 Group 6.0 Scalable NAS

The following describes the configuration and features of a Scalable NAS Storage solution.

2.6.1 Storage Platform

2.6.1.1 Capacity and Platform

Each Scale Out NAS platform must meet the following capacity and platform requirements:

  1. the hard disk drive technologies and densities must be commercially available, meaning that the Manufacturer is continuing to manufacture and ship them to customers generally;
  2. the hard disk drive technologies and densities must be tested and fully supported within the storage platform by the storage platform Manufacturer;
  3. it must include industry-standard hard disk drives operating at either 4Gbps for Fibre Channel (FC) drives or 6Gbps for Serial Attach SCSI - 2 (SAS-2) drives;
  4. it must also include industry-standard Serial Advanced Technology Attachment (SATA) revision 3.0 or Nearline SAS (NL-SAS) hard disk drives operating at 6Gbps. This may be achieved either by:
    1. using the same shelves as either the FC or SAS disk drives, or
    2. using specialized shelves for these drive types;
  5. the available drive options must include at least three (3) from the following list:
    • drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 15000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
    • drives with 4Gbps (for FC) or 6 Gbps (for SAS) interfaces and 10000 RPM rotational speed:
      1. 300GB
      2. 450GB
      3. 600GB
      4. 900GB
      5. 1.2TB
      6. 1.5TB
      7. 1.8TB
    • drives with 6Gbps for SATA or NL-SAS interfaces and 7200 RPM rotational speed:
      1. 1TB
      2. 2TB
      3. 3TB
      4. 4TB
    • solid state drives (SSD) based on Single Level Cell (SLC) or enterprise-class Multi-Level Cell (eMLC) technology:
      1. 100GB
      2. 200GB
      3. 300GB
      4. 400GB
      5. 600GB
      6. 800GB
      7. 1.2TB
      8. 1.6TB
      9. 3.2TB
  6. it must accommodate a minimum of 864 hard disk drives;
  7. When fully configured, it must provide an aggregate minimum of 16 active connections to the mandatory 864 hard disk drives. This bandwidth must be allocated evenly to the total number of physical drives over several channels;
  8. it must provide fully redundant back-end paths to all hard disk drives. A channel failure must not interrupt access to attached disk drives;
  9. it must allow hot addition of nodes or storage shelves without needing to power the storage platform down and without interrupting access to existing drives and RAID groups;
  10. it must utilize redundant hot-pluggable components so that a node or shelf component replacement or failure does not interrupt access to adjacent nodes or shelves in the platform;
  11. the hard disk drives in the storage platform must be fully hot pluggable while the storage platform is operational. There must be no loss of data if a hard drive is removed, assuming the drive is part of a fault-tolerant configuration in the platform;
  12. it must rebuild a replaced hard disk drive automatically and without user intervention when it is inserted, assuming it is replacing a hard disk drive that was part of a fault-tolerant configuration;
  13. it must allow the allocation of hard disk drives as hot spares and/or virtual spares, which must automatically rebuild the contents of a failed hard disk drive in any fault-tolerant RAID set. This process must be fully automatic whenever a disk failure occurs in a fault-tolerant RAID set;
  14. it must include a minimum of 6 storage controllers / nodes that may be replaced in the event of a controller failure;
  15. it must be packaged in a standard 19" rack mount form factor (NOTE: it is understood that standard rack depth will be increased when "high density" disk shelves are provided) or installed in a rack specifically designed for the storage system;
  16. It must include lights or an LCD panel for power, activity and fault indications;
  17. It must scale to a minimum 1 petabyte (PB) single file system; and
  18. it must address and share a minimum of 1 petabyte (PB) of usable data storage while also adhering to all other minimums; the usable storage must not be computed through the use of a deduplication feature (an illustrative capacity calculation follows this list).
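
The following is a non-normative Python sketch of the kind of arithmetic behind items 6, 7 and 18 above: checking that a proposed drive mix reaches 1 PB usable without deduplication, and that 16 back-end connections are spread evenly across the drive population. The drive sizes, spare count and RAID overhead factor are assumptions chosen for the example only.

  # Illustration only: usable-capacity and back-end connection arithmetic.
  def usable_pb(drive_count, drive_tb, raid_overhead=0.80, spares=8):
      """Raw capacity de-rated by RAID protection; deduplication is excluded."""
      raw_tb = (drive_count - spares) * drive_tb
      return raw_tb * raid_overhead / 1000.0   # TB -> PB (decimal)

  drives = 864
  print(usable_pb(drives, 2.0))                # ~1.37 PB usable with 2 TB drives
  print(usable_pb(drives, 1.0))                # ~0.68 PB: 1 TB drives would not qualify
  print(drives / 16)                           # 54 drives per back-end connection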

2.6.1.2 NAS Processor Unit

Each Scale Out NAS platform must meet the following requirements for the NAS processor unit(s):

  1. it must include a micro-kernel or FPGA-based operating system designed for providing file services to CIFS and NFS via the included Ethernet interfaces. The micro-kernel or FPGA-based operating system may be either a Linux or Unix-based operating system;
  2. it must load the micro-kernel operating system from a fault-tolerant medium that is either RAID protected, or duplicated and included, in a second NAS processor unit that may assume operation in the event of a failure to load the operating system at boot time;
  3. it must contain a minimum of 4 clustered processor “heads” or “nodes” that operate in an active / active fashion providing network services to clients for CIFS and NFS. In the event of a failure of one of the processor units, the remaining unit must assume the IP address and identity of the failed processor unit and must continue to provide service to clients on the network automatically;
  4. the NAS units must contain an aggregate minimum of either 12 X 1Gbps or 6 X 10Gbps Ethernet interfaces for TCP/IP client access.

2.6.1.3 DELETED

2.6.1.4 Cooling

Each Scale Out NAS platform must meet the following cooling requirements:

  1. it must provide sufficient cooling for a fully populated node or cabinet;
  2. all cooling for the system controller(s) as well as all hard disk drives must be redundant and monitored for failure by the storage platform hardware;
  3. it must allow hot swapping of failed cooling fans;
  4. the cooling system within the storage platform itself must be fully redundant; and
  5. in the event of a component failure, the cooling system must allow continued operation of the storage platform until service can be performed.

2.6.1.5 Power

Each Scale Out NAS platform must meet the following power requirements:

  1. it must provide sufficient power to operate a fully populated system with all boards and cache installed, and the maximum number of hard disk drives installed;
  2. the power supplies must be fully redundant, allowing uninterrupted operation of the storage platform in the event of a power supply failure, until service can be performed. Redundancy may be achieved either through:
    1. use of a second power supply, or
    2. through an N+1 approach;
  3. each AC power supply must connect independently to a discrete AC power source.

2.6.1.6 Controllers

Each Scale Out NAS platform must meet the following controller requirements:

  1. it must include redundant storage controllers / nodes for handling both I/O to the attached host systems as well as disk I/O and RAID functionality;
  2. it must be redundant, so that the surviving controller / node automatically recovers from controller subsystem failures, and service to attached hosts continues without disruption;
  3. the storage controllers / nodes must allow configuration of hard disk drives within the storage platform as:
    1. RAID5 stripes with parity, RAID6 stripes with dual parity, RAID-DP, or triple parity RAID (RAIDZ for single parity, RAIDZ2 for dual parity, RAIDZ3 for triple parity); and
    2. RAID1, RAID4, RAID0+1 stripes with mirroring, or RAID1+0 striped mirrors (aka RAID10).
    3. or equivalent at the clustered nodes;
  4. it must simultaneously support all RAID types from 2.6.1.6(c) within the storage platform or equivalent at the clustered node (an illustrative usable-capacity comparison follows this list); and
  5. it must enable auto-tiering when appropriate drive types are selected, for a minimum of two tiers.
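
The following is a non-normative Python sketch comparing the usable capacity of the RAID levels named in item 3 above. The formulas are the standard parity and mirroring overheads; the group size of 12 drives is an example value only.

  # Illustration only: usable drives per RAID group for the listed RAID levels.
  def usable_drives(level, drives_in_group):
      overhead = {
          "RAID5": 1,        # single parity drive equivalent (also RAIDZ)
          "RAID6": 2,        # dual parity (also RAID-DP / RAIDZ2)
          "RAIDZ3": 3,       # triple parity
      }
      if level in overhead:
          return drives_in_group - overhead[level]
      if level in ("RAID1", "RAID10", "RAID0+1"):
          return drives_in_group // 2   # mirroring halves usable capacity
      raise ValueError(level)

  for level in ("RAID5", "RAID6", "RAIDZ3", "RAID10"):
      print(level, usable_drives(level, 12), "of 12 drives usable")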

2.6.1.7 Cache

Each Scale Out NAS platform must meet the following cache requirements:

  1. it must include at least 64GB of I/O cache or flash cache that may be shared between all nodes or storage processors. It is understood and accepted that a small portion of this memory is used for storing platform specific software as required;
  2. it must perform both read and write I/O operations;
  3. the write cache must be mirrored cache;
  4. it must be serviceable without disruption to the operation of the storage platform so that failed portions of the cache may be replaced without interruption of service;
  5. the write data within the cache on the storage controllers must be protected by one of these three (3) methods:
    1. a battery that allows the cache contents to be held intact for a minimum of 48 hours. The caches must then complete their write operations to disk when power is restored; or
    2. all pending write data must be automatically written to disk before the disk system is powered off, and the platform must provide sufficient battery power to complete this function.
    3. NVRAM or flash cache that is used solely for de-staging cache data in the event of power loss to the array (an illustrative cache-protection sketch follows this list).
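
The following is a non-normative Python sketch of the mirrored write cache and the three protection methods described in items 3 and 5 above. The class, the protection-mode labels and the block contents are assumptions for illustration only.

  # Illustration only: mirrored write cache with protected de-staging on power loss.
  class MirroredWriteCache:
      def __init__(self, protection="nvram"):
          self.primary, self.mirror = [], []
          self.protection = protection     # "battery", "flush-to-disk" or "nvram"
          self.nonvolatile_store = []

      def write(self, block):
          # Every write is held in two independent cache copies before being
          # acknowledged to the host.
          self.primary.append(block)
          self.mirror.append(block)

      def on_power_loss(self):
          if self.protection == "battery":
              return "contents held by battery (>= 48 h), flushed on power restore"
          # "flush-to-disk" and "nvram" both de-stage pending writes immediately.
          self.nonvolatile_store.extend(self.primary)
          self.primary.clear(); self.mirror.clear()
          return f"{len(self.nonvolatile_store)} block(s) de-staged safely"

  cache = MirroredWriteCache(protection="nvram")
  cache.write(b"block-000")
  print(cache.on_power_loss())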

2.6.1.8 Software and Additional Capabilities

The Scale Out NAS platform must provide the following software functionality and additional capabilities:

  1. it must perform up to 8 concurrent host-less point-in-time snapshot copies of files that may be reassigned to any other host. This must be entirely storage platform-based functionality and must not require any software or assistance from host systems;
  2. it must allow minor firmware version upgrades to be made online without disrupting the operation of the platform; and
  3. it must include all client access licenses for end user workstations to access and use the shared file systems via CIFS and NFS, with no requirement for additional fees or licensing;
  4. it must fully integrate, in mixed mode or native mode, with Microsoft Active Directory environments and must be manageable as a Windows server in those environments using native Microsoft tools for viewing and managing sessions, shares and open files;
  5. it must support snapshot functionality for all shared file systems allowing an administrator to create point-in-time copies of all files for the purpose of recovering deleted files; and
  6. it must include and be licensed for NDMP or support the installation of backup agents to facilitate backups of the shared file systems to fibre channel attached backup targets.

2.6.1.9 Management

Each Scale Out NAS platform must meet the following requirements for management capabilities:

  1. it must be manageable remotely via an Ethernet interface and must provide an intuitive GUI-based interface for day-to-day operations;
  2. it must include a simple and intuitive installation system allowing operators to create and provision the unit for operation on a network with only a basic knowledge of TCP/IP addresses and volume and file system management;
  3. it must provide GUI-based functionality to:
    1. create and manage volumes and file systems across RAID sets;
    2. assign and manage user permissions for CIFS and NFS users to volumes and files;
    3. view attributes of file system type and used capacity;
    4. configure all user-assigned parameters required for operation of the system;
    5. monitor utilization of network interfaces, processors or cluster latency, and disk subsystems to gauge the load on those items;
    6. backup all locally hosted data to a locally-attached tape drive or provide an agent or facility for a remote console to initiate this process directly from the NAS disk to a backup target; and
    7. load balance file shares across nodes as needed and allow an administrator to manually failover file shares if required from 1 processor / node unit to the other.

2.6.2.0 Category - Optional Virtualization Solution

If available from the product portfolio, the storage platform from Sections 2.2, 2.3, and 2.4 must include a virtualization solution that meets the following requirements:

  1. it must either:
    1. be manufactured by the same Manufacturer as the base storage platform defined in 2.2.1, 2.3.1, and 2.4.1; or
    2. be sold under the name of the same Manufacturer (sometimes referred to as rebranding) as the base storage platform, but only if that Manufacturer warrants, supports and maintains the solution.
  2. it must be a discrete and independent device(s) that does not rely on any components, functionality or software from the base storage platform defined in 2.2.1, 2.3.1, and 2.4.1, with the exception of the requirement for internal or external disk for advanced functions such as asynchronous replication;
  3. it must be packaged in an industry-standard 19” rack mount form factor and must include all accessories, cables and hardware required to mount and power the unit in an industry-standard 19” rack;
  4. it must have redundant and hot swappable power and cooling for all electronic components of the solution. Fully redundant pairs of equipment with fixed cooling and power that allow swapping of an entire portion of the solution without interrupting host access to virtualized storage is acceptable,
  5. it must include 4 x 8Gbps fibre channel ports for connections to a client host side SAN fabric and mandatory base storage platform
  6. it must virtualize the base storage platform and must simultaneously support third party storage platforms from at least 5 of the following Manufacturers:
    1. Dell,
    2. EMC,
    3. Hitachi Data Systems,
    4. HP,
    5. IBM,
    6. Network Appliance, and
    7. Oracle.
  7. it must provide the following capabilities by using the storage LUNs from the mandatory base platform(s) and third-party storage platforms simultaneously (collectively known as external LUNs): i. thin provisioning capabilities on external LUNs ii. creation of virtual pools of storage from external LUNs;
  8. it must present block storage from all the supported additional third party storage platforms in the form of LUNs to the mandatory supported host platforms. The underlying details of the capacity from the base and third party systems must be masked from the host systems so that LUNs may be comprised of any or all of the underlying capacity;
  9. after loading the appropriate device driver, it must allow full block copies of LUNs to be created over local SAN connections between the supported base and third party systems and the dynamic relocation of LUNs without interrupting host access, without a loss of data and without changing the addressing of those LUNs to the hosts; and
  10. it must allow synchronous and/or asynchronous copies of LUNs to be created over distance-extended SAN connections between the supported base platform and storage systems at remote location for the purpose of disaster recovery between any of the supported storage platforms.

2.7 Group 7.0 Converged Infrastructure System

The following describes the configuration and features of a Converged Infrastructure System solution.

2.7.1 Converged Infrastructure System Requirements

  1. Must provide and deliver an integrated system comprised of compute/server systems, network switches providing both Ethernet and Storage connectivity (detailed in section on Fabrics), shared storage (detailed in section on Storage), as well as management software to see and control all of the components listed above. The management interface must allow administrators to control (at a minimum):
    1. compute/server systems (UEFI settings, power on/restart capabilities, firmware updates for the systems themselves as well as all installed options)
    2. power and cooling components of the bid solution (allowing administrators to view any alerts or status changes)
    3. all networking components (whether ethernet, fibre channel, FCoE, Infiniband, etc., all network switches should be configurable from the provided management interface)
    4. storage array (the SAN can be configured, LUNs created and assigned, etc. from the management interface).
  2. The infrastructure must provide a highly available and scalable infrastructure that IT can evolve over time to support multiple physical and virtual application workloads. It must have no single point of failure at any level, from the server through the network, to the storage. The fabric must be fully redundant and scalable and must provide seamless traffic failover should any individual component fail at the physical or virtual layer.
  3. The Converged Infrastructure System must be certified, pre-validated and supported by the OEM or consortium as defined by its model name, technical, support and marketing documentation. The System must be branded with specific configurations that are pre-defined and pre-sized to ensure consistency and repeatability.
  4. The Converged Infrastructure System must have a portfolio of detailed documentation, information, and references to assist customers in transforming their data centers to this converged infrastructure model. This must be provided for pre-sales and post-sales instances. This portfolio must include, but is not limited to the following items:
    1. Best practice architectural design
    2. Workload sizing and scaling guidance
    3. Implementation and deployment instructions
    4. Technical specifications (rules for what is, and what is not, a converged configuration)
    5. Frequently asked questions (FAQs)
    6. Converged architecture focused on a variety of use cases
  5. Must provide a uniform approach to IT architecture and documented shared pool of resources for application workloads. Must deliver operational efficiency and consistency with the versatility to meet a variety of SLAs and IT initiatives, including at least three out of six:
    1. Application rollouts or application migrations
    2. Business continuity/disaster recovery
    3. Desktop virtualization
    4. Cloud delivery models (multi-tenancy, public, private, hybrid) and service models (IaaS, PaaS, SaaS)
    5. Asset consolidation and virtualization
    6. Application workload (eg. Database heavy environment)
  6. Any converged system consisting of a number of disparate components cobbled together without providing a single point of management, and a single point of support for customers (e.g. single 1-800 number to place a service call) will not be considered.
  7. Support for the entire system (including all constituent components) must be provided by a single vendor (via a single telephone number) in order to provide accountability for the solution as a whole.
  8. The system must be sold in a single SKU.

2.7.2 Converged Infrastructure System Configuration

2.7.2.1 Category 1.0 Entry-Level Converged Infrastructure System

  1. The compute server for the converged system must be Rack-Based form factor
  2. The computing component of the solution must be scalable to a minimum of 22 processor sockets
  3. The computing component of the solution must support sufficient RAM to allow for a minimum of 128GB per processor socket
  4. Must include management software and, if necessary, a dedicated management server to allow full functionality of the converged solution.
  5. Redundant 10GbE or 40Gb Infiniband switches supporting compute components. Bid solution must include a minimum of 2 dedicated 10GbE network paths (one per switch) and 2 x 8Gbps fibre paths to EACH compute system, OR 2 x 10GbE FCoE paths plus 2 x 1Gb Ethernet to EACH compute system OR 2 x 40Gb Infiniband paths to EACH compute system. The bid solution must also include a minimum overall bandwidth of 80Gb (e.g. 8x 10GbE ports, or 2x 40GbE ports) from the solution to external network(s) (an aggregate-bandwidth check is sketched after this list).
  6. Redundant 8Gbps FC SAN or 10GbE or 40Gbps Infiniband switches supporting compute and storage components. Bid solution must include a minimum of 2 ports to EACH compute system (one per switch).
  7. Must support a minimum of 50TB of shared storage
  8. Installation services - must include at a minimum:
    1. Ensure all components are racked, cabled, and physically installed per the customer requirements
    2. Perform the physical installation, including successful power-up and running of all systems' diagnostics
    3. Update the firmware on all components (compute, storage, switches, systems management if applicable, etc)
    4. Ensure the Systems Management software is fully enabled/licensed and accessible by the customer
    5. Inventory: ensure all components are visible to the Systems Management software
    6. Provide 1 full day of skills transfer on the configuration, customization and features of the Systems Management software
    7. Provide documentation and install record of all components to the customer
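
The following is a non-normative Python sketch of the aggregate external-bandwidth check described in item 5 above. The port mixes shown are examples only; any combination meeting the stated minimum would qualify.

  # Illustration only: verifying the 80 Gb aggregate-uplink minimum.
  def aggregate_gbps(ports):
      """ports: list of (count, speed_gbps) tuples for external-facing uplinks."""
      return sum(count * speed for count, speed in ports)

  print(aggregate_gbps([(8, 10)]) >= 80)          # True: 8 x 10GbE = 80 Gb
  print(aggregate_gbps([(2, 40)]) >= 80)          # True: 2 x 40GbE = 80 Gb
  print(aggregate_gbps([(4, 10), (2, 8)]) >= 80)  # False: 56 Gb falls short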

2.7.2.2 Category 2.0 Small-Sized Converged Infrastructure System

  1. The compute server for the converged system must be Blade-Based form factor
  2. The computing component of the solution must be scalable to a minimum of 82 processor sockets
  3. The computing component of the solution must support sufficient RAM to allow for a minimum of 192GB per processor socket (a sizing example is sketched after this list)
  4. Must include management software and, if necessary, a dedicated management server to allow full functionality of the converged solution.
  5. Redundant 10GbE switches supporting compute components. Bid solution must include a minimum of 2 dedicated 10GbE network paths (one per switch) and 2x 8Gbps fibre paths to EACH compute system, OR 4x 10GbE FCoE paths to EACH compute system. The bid solution must also include a minimum overall bandwidth of 320Gb (e.g. 8x 10GbE ports, or 2x 40GbE ports) from the solution to external network(s).
    Additionally, the solution must support scaling to 4 dedicated 10GbE paths from the bid switches to EACH node, in order to allow for future growth of the converged system.
  6. Redundant 8Gbps FC SAN switches supporting compute and storage components. Bid solution must include a minimum of 2 ports to EACH compute system (in order to provide redundancy).
  7. Must support a minimum of 200TB of shared storage
  8. Installation services - must include at a minimum:
    1. Hold installation and implementation meeting with customer to ensure site readiness, confirm configuration, and provide checklist of necessary space, power, and switch port requirements.
    2. Ensure all components are racked, cabled, and physically installed per the customer requirements
    3. Perform the physical installation, including successful power-up and running of all systems' diagnostics
    4. Update the firmware on all components (compute, storage, switches, systems management if applicable, etc)
    5. Ensure the Systems Management software is fully enabled/licensed and accessible by the customer
    6. Inventory: ensure all components are visible to the Systems Management software
    7. Configure the Systems Management software to enable all error logging and alerting features
    8. Provide 2 full days of skills transfer on the configuration, customization and features of the Systems Management software
    9. Provide documentation and install record of all components to the customer
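
The following is a non-normative Python sketch of the sizing arithmetic behind items 2 and 3 of this category: the minimum supported memory is the socket count multiplied by the per-socket requirement. The node example is an assumption for illustration only.

  # Illustration only: per-socket RAM sizing for Category 2.0.
  def minimum_supported_ram_gb(sockets, gb_per_socket):
      return sockets * gb_per_socket

  print(minimum_supported_ram_gb(82, 192))    # 15,744 GB across the full compute tier
  print(minimum_supported_ram_gb(2, 192))     # 384 GB for a single 2-socket node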

2.7.2.3 Category 3.0 Medium-Sized Converged Infrastructure System

  1. The compute server for the converged system must be Blade-Based form factor
  2. The computing component of the solution must be scalable to a minimum of 192 processor sockets
  3. The computing component of the solution must support sufficient RAM to allow for a minimum of 256GB per processor socket
  4. Must include management software and, if necessary, a dedicated management server to allow full functionality of the converged solution.
  5. Redundant 10GbE switches supporting compute components. Bid solution must include a minimum of 2 dedicated 10GbE network paths (one per switch) and 2x 8Gbps fibre paths to EACH compute system, OR 4x 10GbE FCoE paths to EACH compute system. The bid solution must also include a minimum overall bandwidth of 480Gb (e.g. 8x 10GbE ports, or 2x 40GbE ports) from the solution to external network(s).
    Additionally, the solution must support scaling to 4 dedicated 10GbE paths from the bid switches to EACH node, in order to allow for future growth of the converged system.
  6. Redundant 8Gbps FC SAN switches supporting compute and storage components. Bid solution must include a minimum of 2 ports to EACH compute system (in order to provide redundancy).
  7. Must support a minimum of 1PB of shared storage
  8. Installation services - must include at a minimum:
    1. Hold installation and implementation meeting with customer to ensure site readiness, confirm configuration, and provide checklist of necessary space and power requirements.
    2. Ensure all components are racked, cabled, and physically installed per the customer requirements
    3. Perform the physical installation, including successful power-up and running of all systems' diagnostics
    4. Update the firmware on all components (compute, storage, switches, systems management if applicable, etc)
    5. Ensure the Systems Management software is fully enabled/licensed and accessible by the customer
    6. Inventory: ensure all components are visible to the Systems Management software
    7. Configure the Systems Management software to enable all error logging and alerting features
    8. Provide 3 full days of skills transfer on the configuration, customization and features of the Systems Management software
    9. Hold a network planning meeting with customer to determine all networking expectations and information (how many networks the system will be attaching to, IP schema, VLANs, VNICs, etc) and provide documentation of output
    10. Implement the internal Ethernet switching per the output of the network planning meeting
    11. Hold a storage planning meeting with customer to determine all storage expectations and requirements (SAN zoning, LUNs, LUN mappings, etc) and provide documentation of output
    12. Implement the storage networking per the output of the storage planning meeting
    13. Implement and assign up to one LUN per compute server; validate accessibility of SAN from each compute server

2.7.2.4 Category 4.0 Large-Sized Converged Infrastructure System

  1. The compute server for the converged system must be Blade-Based form factor
  2. The computing component of the solution must be scalable to a minimum of 384 processor sockets
  3. The computing component of the solution must support sufficient RAM to allow for a minimum of 256GB per processor socket
  4. Must include management software and, if necessary, a dedicated management server to allow full functionality of the converged solution.
  5. Redundant 10GbE switches supporting compute components. Bid solution must include a minimum of 2 dedicated 10GbE network paths (one per switch) and 2x 8Gbps fibre paths to EACH compute system, OR 4x 10GbE FCoE paths to EACH compute system. The bid solution must also include a minimum overall bandwidth of 800Gb (e.g. 8x 10GbE ports, or 2x 40GbE ports) from the solution to external network(s).
    Additionally, the solution must support scaling to 4 dedicated 10GbE paths from the bid switches to EACH node, in order to allow for future growth of the converged system.
  6. Redundant 8Gbps FC SAN switches supporting compute and storage components. Bid solution must include a minimum of 2 ports to EACH compute system (in order to provide redundancy).
  7. Must support a minimum of 1.9PB of shared storage
  8. Installation services - must include at a minimum:
    1. Hold planning session(s) with customer consisting of:
      1. network planning meeting to determine all networking expectations and information (how many networks the system will be attaching to, IP schema, VLANs, VNICs, etc)
      2. storage planning meeting to determine all storage expectations and requirements (SAN zoning, LUNs, LUN mappings, etc)
      3. virtualization planning meeting to determine which hypervisor (and which version) will be implemented, and architecture of virtual environment (how many host pools, how will shared storage be divided/assigned, any high-availability requirements, etc.)
      4. installation and implementation meeting with customer to ensure site readiness, confirm configuration, and provide checklist of necessary space and power requirements.
    2. Documentation of all output from the planning session(s) will be provided to the customer
    3. Ensure all components are racked, cabled, and physically installed per the customer requirements
    4. Perform the physical installation, including successful power-up and running of all systems' diagnostics
    5. Update the firmware on all components (compute, storage, switches, systems management if applicable, etc)
    6. Ensure the Systems Management software is fully enabled/licensed and accessible by the customer
    7. Inventory: ensure all components are visible to the Systems Management software
    8. Configure the Systems Management software to enable all error logging and alerting features
    9. Provide 4 full days of skills transfer on the configuration, customization and features of the Systems Management software, particularly as it relates to virtual machine management.
    10. Implement the internal Ethernet switching per the output of the network planning meeting
    11. Implement the storage networking per the output of the storage planning meeting
    12. Implement and assign up to one LUN per compute server; validate accessibility of SAN from each compute server
    13. Load/install all operating systems/hypervisors on the compute servers
    14. Provide documentation and install record of all components to the customer

2.7.3 Compute / Server System

  1. For Categories 2.0, 3.0, and 4.0, all bid Compute or Server Systems must meet or exceed the Technical Specifications identified in E60EJ-11000C (all categories except for 1.0V, 1.0U, 2.0U, 3.0U, B4.0, and B4.1). Vendors must follow the appropriate specifications for the class of Compute or Server System bid (e.g. Rack vs Blade, Enterprise vs Departmental class of server, 2-Socket vs 4-Socket, etc.)
  2. For Category 1.0 Compute or Servers Systems only, it must meet or exceed the following Technical Specifications below:

2.7.3.1 Category 1.0 Rack-Based Server

  1. Be available in an industry standard 19” rack form-factor and fully compatible with the bid storage rack.
  2. Have two (2) Intel Xeon E5-2630 v2 processors or greater.
  3. Provide hardware virtualization (e.g.: Intel VT or AMD-V 2.0) capability.
  4. Support a minimum of 96 Gigabyte (GB) of Quad-Channel PC3-10600 (DDR3-1333) Registered DIMMs.
  5. Include a SAS controller with sufficient ports to support the maximum number of installable disk drives. If internal disk drives are required to meet 2.7.2.1 as a converged solution, the controller must provide minimum support for RAID 0, 1, 5 and 6 (double parity) with 512MB of ECC battery-backed write cache (BBWC).
  6. Have four (4) vacant hot-swap drive bays to accommodate the installation of SAS Hard Disk Drives.
  7. Have an integrated quad-port 100/1000Base-T or integrated dual-port 10Gb network interface adapter capable of fault tolerance (FT) and load balancing.
  8. Have one (1) internal ISO9660 compliant 8X speed DVD-ROM drive, or virtual media (e.g.: iLO or ILOM) that facilitates access to remote optical media.
    1. Have two (2) vacant 64bit PCI-Express Gen 3 (minimum 4x lane) slots or better before configuration.
  9. Provide Keyboard, Mouse, and Serial ports or three (3) USB ports.
  10. Have one (1) management port. A serial port or NIC port may be used for this function. If a NIC port is used, it must not be from item (g) above.
  11. Have an integrated video graphics controller supporting a minimum of 1024 x 768 resolution.
  12. Have a minimum of two (2) hot-swap / hot plug power supplies one of which must be redundant.
  13. Support 110 to 125 VAC or 200 to 240 VAC @ 50Hz & 60Hz.
  14. Provide hot-swap / hot-plug redundant cooling fans. These fans are in addition to the power supply fans and any CPU fans (if offered). These fans must either be constantly operational or thermostatically controlled.
  15. Provide sufficient cooling to permit full density rack mounting (without spacing).

2.7.3.2 Processor & Chipset

All processors must:

  1. Be an Intel Xeon or an AMD Opteron.
  2. Function in a symmetrical multi-processing (SMP) or Parallel mode
  3. Provide the latest release in hardware virtualization (i.e.: Intel VT or AMD V 2.0) capability
  4. Be able to support 32-bit and 64-bit applications natively and simultaneously.
  5. Be of identical stepping within each processor socket.

2.7.3.3 BIOS / Firmware

All BIOS / firmware must:

  1. Be upgradeable through flash ROM technology.
  2. Have the ability to accept a previous version of the BIOS or firmware in the event of an incompatible or corrupted version.

2.7.3.4 RAM

All RAM must:

  1. Be a minimum of 8GB per Registered DIMM (e.g.: 1 x 8GB RDIMM)
  2. Be manufactured by an ISO (International Organization for Standardization) 9001:2008 certified manufacturer. The ISO certification applies to the RAM manufacturer's manufacturing process and applies to both the RAM chip manufacturer and the DIMM assembly manufacturer.
  3. Have advanced ECC, chip-kill functionality or equivalent feature
  4. All RAM modules must either be an OEM or OEM approved component.

2.7.3.5 Hard Disk and Controller

  1. Serial Attached SCSI (SAS)
    1. If the storage platform uses Serial Attached SCSI hard disk drives, the hard disks must:
      1. Have a maximum average seek time of 10ms or less and a minimum spin rate of 7.2K revolutions per minute;
      2. Have physical bytes of storage as specified without the use of hardware or software disk compression utilities, as actual data space available to user;
      3. Support all of the capabilities and throughput of the SAS controller below;
      4. all drives must be hot-pluggable (without downing the system and without disruption of service when configured).
    2. The SAS disk controller must:
      1. Be a minimum of PCI-Express 2.0, x4 wide;
      2. Support a burst transfer rate of 600MB per second.
  2. Enterprise Multi-Level Cell – Solid State Drive (eMLC-SSD)
    1. If the storage platform uses Solid-State-Drive hard disk device, the hard disks must:
      1. Have read / write speeds (IOPS, 4K blocks) of 20,000 / 3,000;
      2. Have physical bytes of storage as specified without the use of hardware or software disk compression utilities, as actual data space available to user;
      3. Support all of the capabilities and throughput of the SAS controller below, or include a dedicated PCI-Express 2.0 x4 wide controller.
    2. The SAS Disk controller must be a 64-Bit PCI-Express 2.0 supporting a burst transfer rate of 3Gb/sec per SAS/SATA port.

2.7.3.6 Serial & Management Ports

This port must be:

  1. A USB port;
  2. An RS-232-C serial interface port; or
  3. Similar in function that will provide a method for out of band management capability.
  4. Supported by the storage vendor's management stack, and be integrated into the management solution provided by the storage vendor.

2.7.3.7 Redundant Power Supplies

  1. The power supplies must be installed and removed without requiring any tool or requiring the removal of the chassis / enclosure cover.
  2. The power supplies must have the ability to connect to 3-Phase North America N x NEMA L15-30P or Single Phase N x IEC-320 C13 or C19, where N matches the number of power supplies in the system.
  3. The power supply must run on 100 - 240 volts AC @ 60Hz or 200 - 240 volts AC @ 60 Hz.
  4. If dual power supplies are included in the system, then at least one power supply must operate in a redundant fashion to the other(s), so that if one power supply fails the other will continue to power the system without any interruption of services or performance. If three or more power supplies are included in the system, then they must be configured in an N+1 configuration so that, if one power supply fails, the others will continue to power the system without any interruption of services or performance (an illustrative redundancy check is sketched after this list).
  5. If the power supply fails there must be a provision to communicate the condition through the system management utility to alert the network administrator.
  6. If one power supply fails, the remaining functional power supply or supplies must be able to support a fully populated system on its own. A fully populated system is defined as having the maximum installed processors, all internal drive bays, all I/O slots or modules and memory slots populated.
  7. System must use a secondary system of additional cooling fans or provide sufficient cooling to support a fully configured system. If a secondary system of additional cooling fans is provided, these fans must be in addition to the power supply fan and any CPU fans (if included in Default System). These fans must either be constantly operational or thermostatically controlled.
  8. All external cabling must be positively secured and resistant to damage.
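
The following is a non-normative Python sketch of the redundancy condition in items 4 and 6 above: with one supply failed, the remaining supplies must still carry a fully populated system. The wattage figures are placeholders, not measured or required values.

  # Illustration only: checking 1+1 and N+1 power supply redundancy.
  def survives_single_failure(supply_watts, supply_count, full_load_watts):
      remaining = (supply_count - 1) * supply_watts
      return remaining >= full_load_watts

  print(survives_single_failure(1100, 2, 950))   # True: 1+1 redundant pair
  print(survives_single_failure(750, 3, 1400))   # True: N+1 (2 x 750 W remain)
  print(survives_single_failure(750, 2, 900))    # False: single survivor too small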

2.7.4 Network / Fabric

All network and fabric switches bid must meet or exceed the Technical Specifications identified in E60EJ-11000C (for all integrated switches) or in Annex A, Groups 1-5 (for all standalone switches). Vendors must follow the appropriate specifications for the class of switch bid (e.g. 10GbE, Fibre Channel, FCoE, etc.).

2.7.5 Storage

All storage systems bid must meet or exceed the specifications identified in Annex A, Groups 1-5. Vendors must follow the appropriate specifications for the class of storage system bid.

2.7.6 Management Software and Diagnostic Suite

  1. Must provide a unified and centralized management software interface (eg. single pane of glass) allowing administrators the ability to control all components of the solution including, but not limited to the compute servers (allowing UEFI settings changes, firmware updates, and power on/restart capabilities), the power and cooling components of the system (allowing administrators to view any alerts or status changes), all switching components (Ethernet, Fibre Channel, Infiniband, etc.; switches must be configurable from the management interface), and the storage array (meaning that the SAN can be configured, LUNs created and assigned, etc. from the management interface)
  2. The Management software must provide the capability to easily view system resource health and usage information. This includes but is not limited to:
    1. Available memory, both by compute system and by resource pool
    2. Health "dashboard" of both virtual and physical servers
    3. Available capacity (cores/MHz, memory, storage, etc.) both by physical system component (e.g. compute server or LUN) and by resource pool
    4. Available capacity of specified resource pool, as defined by service level
    5. Capacity by availability
    6. Pool capacity by availability
  3. The Management software must provide the ability to identify configuration changes, and compare them against pre-set configuration values/policies, sending alerts when there is a discrepancy. In addition, changes must be clearly logged, in order to provide a revision history for the system or subsystem (an illustrative compliance check is sketched after this list). These compliance policies and logs must include (but are not limited to) the following:
    1. Blade-Based: Chassis/Enclosure and Servers, Rack-Based: Servers
    2. Fabric / Network switches
    3. Storage
    4. Network configuration
    5. SAN zoning
    6. LUN configuration
  4. Firmware patches for the converged solution must be provided proactively to Canada as a single package (which may contain multiple updates for several components) for updating. Canada must not be required to find and download individual component updates for their Converged Infrastructure System.
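
The following is a non-normative Python sketch of the compliance behaviour described in item 3 above: the running configuration of a subsystem is compared against its policy, discrepancies raise alerts, and each change is logged for revision history. The subsystem name, keys and values are hypothetical.

  # Illustration only: policy comparison with alerting and a revision log.
  import datetime

  def check_compliance(subsystem, running, policy, revision_log, alerts):
      for key, expected in policy.items():
          actual = running.get(key)
          if actual != expected:
              entry = {
                  "time": datetime.datetime.now().isoformat(timespec="seconds"),
                  "subsystem": subsystem,
                  "setting": key,
                  "expected": expected,
                  "actual": actual,
              }
              revision_log.append(entry)
              alerts.append(f"{subsystem}: {key} is {actual!r}, policy requires {expected!r}")

  log, alerts = [], []
  policy = {"san_zoning": "single-initiator", "firmware": "4.2.1"}
  running = {"san_zoning": "open", "firmware": "4.2.1"}
  check_compliance("fabric-switch-01", running, policy, log, alerts)
  print(alerts)   # one alert for the zoning discrepancy; the change is also logged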

2.8 Category – Optional FCoE Switch

If available from the product portfolio, the storage platform from Sections 2.2, 2.3, 2.4, and 2.5 must include an FCoE switch that meets the following requirements:

2.8.1 FCoE – Group-2 (24 ports), Group-3 (48 ports), Group-4 (60 ports and 96 ports)

The storage platform must operate with 10 Gbps FCoE switches, which must be fully supported and warranted by the storage platform Manufacturer. The FCoE switches must meet the following requirements:

  1. They must provide a minimum total full-duplex throughput of 480Gbps (24 ports), 960Gbps (48 ports), 1200Gbps (60 ports), and 1920Gbps (96 ports);
  2. They must accommodate up to 32,000 MAC addresses;
  3. They must support minimum of 4000 VLANs;
  4. They must include full Layer 2 and Layer 3 support;
  5. They must provide lights or indicators for power and port status for all ethernet ports;
  6. For management purposes, switches must provide a 10/100/1000 Mbps Ethernet interface using TCP/IP as the transport protocol;
  7. They must provide redundant cooling and power;
  8. They must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  9. They must fully comply with the following standards:
    1. IEEE 802.3ae 10 Gigabit Ethernet
    2. IEEE 802.3 Ethernet
    3. IEEE 802.1Q VLAN tagging
    4. IEEE 802.1p Quality of Service (QoS)
    5. IEEE 802.3x Flow Control
    6. IEEE 802.1w Rapid Spanning Tree Protocol
    7. IEEE 802.1D Spanning Tree Protocol
    8. IEEE 802.1s Multiple Spanning Tree
    9. IEEE 802.3ad LACP Support
    10. IEEE 802.1AB Link Layer Discovery Protocol (LLDP)
    11. IEEE 802.1x DELETE
    12. Jumbo Frames of sizes up to 9000 bytes
    13. Internet Group Management Protocol (IGMP) Snooping Version 2
  10. They must support the following Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) standards:
    1. IEEE 802.1Qbb Priority-based Flow Control
    2. IEEE 802.1Qaz Enhanced Transmission Selection
    3. IEEE 802.1 DCB Capability Exchange Protocol
    4. FC-BB-5 FCoE (Rev 2.0) standard
    5. FIP snooping
  11. They must support the following security standards:
    1. RADIUS
    2. TACACS+
    3. SCP
    4. SSH DELETE v2
    5. Ability to connect to the web GUI via an HTTPS connection
    6. Secure interface and login
    7. Must have a password recovery mechanism capable of restoring the factory default configuration
  12. they must support cascading by connecting 4 or more switches together to form a single fabric that is compliant with the standards specified at 2.8.1 (l), (m) and (n);
  13. they must include a comprehensive GUI-based or CLI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  14. They must generate SNMP traps in the event of a degraded condition in the switch;
  15. the GUI or CLI interface must show the current operational status for all installed hardware components;
  16. the GUI or CLI interface must allow configuration of all aspects of the switches including:
    1. the name,
    2. the passwords and user accounts for management,
    3. the IP addressing, and
    4. any other parameters critical to the operation of the switch;
  17. The GUI or CLI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of the ports,
    3. the operational speed of the ports,
    4. the throughput in frames as well as MB per second.

2.8.2 FCoE – Group 5 (256 ports)

The storage platform must operate with 10Gbps 256 port FCoE switches, which must be fully supported and warranted by the storage platform Manufacturer. The FCoE switches must meet the following requirements:

  1. They must have a minimum total throughput of 3.85 Tbps per chassis / enclosure;
  2. They must have a maximum port-to-port latency under 6 microseconds;
  3. They must accommodate up to 384,000 MAC addresses;
  4. They must support a minimum of 4000 VLANs;
  5. They must include full Layer 2 and Layer 3 support;
  6. They must provide lights or indicators for power and port status for all ethernet ports;
  7. For management purposes, switches must provide a 10/100/1000 Mbps Ethernet interface using TCP/IP as the transport protocol;
  8. They must provide redundant cooling and power;
  9. They must be available in both stand-alone and rack mountable configurations. A rack mounting kit that is applied to a stand-alone switch will be accepted;
  10. They must generate SNMP traps in the event of a degraded condition in the switch;
  11. They must fully comply with the following standards:
    1. IEEE 802.3ae 10 Gigabit Ethernet
    2. IEEE 802.3 Ethernet
    3. IEEE 802.1Q VLAN tagging
    4. IEEE 802.1p Quality of Service (QoS)
    5. IEEE 802.3x Flow Control
    6. IEEE 802.1w Rapid Spanning Tree Protocol
    7. IEEE 802.1D Spanning Tree Protocol
    8. IEEE 802.1s Multiple Spanning Tree
    9. IEEE 802.3ad LACP Support
    10. IEEE 802.1AB Link Layer Discovery Protocol (LLDP)
    11. IEEE 802.1x DELETE
    12. Jumbo Frames of sizes up to 9000 bytes
    13. Internet Group Management Protocol (IGMP) Snooping Version 2
  12. They must support the following Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) standards:
    1. IEEE 802.1Qbb Priority-based Flow Control
    2. IEEE 802.1Qaz Enhanced Transmission Selection
    3. IEEE 802.1 DCB Capability Exchange Protocol
    4. FC-BB-5 FCoE (Rev 2.0) standard;
    5. FIP snooping
  13. They must support the following security standards:
    1. RADIUS
    2. TACACS+
    3. SCP
    4. Wire Speed Filtering: Allow and Deny
    5. SSH DELETE v2
    6. Ability to connect to the web GUI via an HTTPS connection
    7. Secure interface and login
    8. Must have a password recovery mechanism capable of restoring the factory default configuration
  14. they must support cascading by connecting 16 or more switches together to form a single fabric that is compliant with the standards specified at 2.8.2 (m), (n) and (o);
  15. they must include a comprehensive GUI-based or CLI-based management system that allows real-time monitoring of all components in the platform and to report failures or degraded components;
  16. they must provide full failure monitoring for all components and must be thermally monitored;
  17. they must provide alerting via SNMP to advise a storage administrator of a failure or degraded condition;
  18. They must generate SNMP traps in the event of a degraded condition in the switch;
  19. the GUI or CLI interface must show the current operational status for all installed hardware components;
  20. the GUI or CLI interface must allow configuration of all aspects of the switches including:
    1. the name,
    2. the passwords and user accounts for management,
    3. the IP addressing,
    4. any other parameters critical to the operation of the switch;
  21. The GUI or CLI interface must provide complete performance monitoring allowing a storage administrator to view:
    1. the number of frames per second, with a breakdown of which were good frames and which were error frames,
    2. the throughput (Mbps) of the ports,
    3. the operational speed of the ports, and
    4. the throughput in frames as well as MB per second.
  22. they must accept a new firmware or microcode upgrade non-disruptively.

3.0 CERTIFICATIONS

3.1 Hardware Certification:

  1. All high voltage electrical equipment supplied under the Standing Offer must be certified or approved for use in accordance with the Canadian Electrical Code, Part 1, before delivery, by an agency accredited by the Standards Council of Canada. All Systems must bear the certification logo that applies to the accredited agency. Any System not bearing a logo from an accredited agency listed below will be considered non-compliant. Current accredited agencies include, but are not limited to:
    1. Canadian Standards Association (CSA);
    2. Underwriters' Laboratory Inc. (cUL) (cULus);
    3. Underwriters' Laboratories of Canada (ULC);
    4. Entela Canada (cEntela);
    5. Intertek Testing Services (cETL);
    6. Met Laboratories (cMET);
    7. OMNI Environmental Services Inc. (cOTL); and
    8. TUV Rheinland of North America (cTUV).
  2. Systems must comply with the emission limits and labeling requirements set out in the Interference-Causing Equipment Standard ICES-003, "Digital Apparatus", published by Industry Canada. Systems that have obtained Industry Canada ICES-003 approval on the basis of tested components alone, without entire-system testing, will be considered non-compliant. All devices tested must bear the appropriate labels indicating trade name, model number, and the words indicating Industry Canada ICES-003 compliance.
  3. Systems must comply with FCC Class A certification requirements. For each Product offered that includes a digital apparatus, proof must be included that an accredited agency has certified that it does not exceed the FCC Class A limits for radio noise emissions set out in the Radio Interference Regulations, and the Products must bear the certification logo of the appropriate accredited agency.

3.2 Software Certification For Groups 2.0, 3.0, 4.0, 5.0, and 6.0:

Tier 1 & Tier 2 Storage solutions must have the following certifications:

  1. SNIA's Storage Management Initiative Specification (SMI-S) Provider Test (an illustrative provider query sketch follows this list)
  2. VMware vSphere 5 including VAAI (with the exception of Group 6.0)
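
The sketch below is illustrative only. It shows, assuming the Python pywbem package, how a management client might query an SMI-S provider's interop namespace for the profiles it registers, which is the style of interface exercised by the SMI-S Provider Test in item 1; the provider URL, credentials and namespace name are placeholders and vary by vendor.

  # Illustrative sketch: enumerate the SMI-S profiles registered by a provider.
  import pywbem

  conn = pywbem.WBEMConnection(
      "https://smi-provider.example.org:5989",   # placeholder CIM-XML endpoint
      ("admin", "password"),                     # placeholder credentials
      default_namespace="interop",               # some providers use "root/interop"
      no_verification=True,                      # lab use only; verify certificates in production
  )

  # CIM_RegisteredProfile advertises which SMI-S profiles the array implements.
  for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
      print(profile["RegisteredName"], profile["RegisteredVersion"])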

Converged Infrastructure Systems:

  1. Converged Infrastructure Systems in Categories 2.0, 3.0, and 4.0 must commit to an SDDC (Software-Defined Data Centre) initiative in which deployment, provisioning, configuration and operation of the entire infrastructure are abstracted from the hardware and implemented through software (an illustrative provisioning sketch follows this list).
  2. For Categories 2.0, 3.0, and 4.0, the vendor must be a member of the OpenStack Foundation.
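
The sketch below is illustrative only. It shows, assuming the Python openstacksdk package and a clouds.yaml entry named "example-cloud" (a placeholder), how block storage can be provisioned entirely through software in the spirit of the SDDC commitment in item 1; it is an example of the approach, not a prescribed toolset.

  # Illustrative sketch: provision and list block-storage volumes via the OpenStack API.
  import openstack

  # Authenticate against a cloud defined in a local clouds.yaml (placeholder name).
  conn = openstack.connect(cloud="example-cloud")

  # Create a 100 GiB volume through the API, with no hardware-specific manual steps.
  volume = conn.block_storage.create_volume(size=100, name="demo-volume")
  conn.block_storage.wait_for_status(volume, status="available")

  # Report what the software-defined layer now exposes.
  for vol in conn.block_storage.volumes():
      print(vol.name, vol.size, vol.status)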

4.0 GREEN PROCUREMENT INITIATIVES

  1. In support of the Canadian Federal Government’s Sustainable Development Strategy, which includes policies on Green Procurement, system manufacturers must commit to comprehensive, nationally recognized environmental standards for:
    1. The reduction or elimination of environmentally hazardous materials
    2. Design for reuse and recycling
    3. Energy efficiency
    4. End of Life Management for reuse and recycling
    5. Environmental stewardship in the manufacturing process
    6. Packaging
  2. All systems must be RoHS Certified.
  3. The OEM must be a member in good standing of EPSC – Electronic Product Stewardship of Canada.
  4. The OEM must be ISO 14001 certified.
  5. The OEM must have a plan or strategy in place for achieving the EPA’s evolving Energy Star compliance requirements for all storage systems.
  6. As technical requirements are modified and new Groups are added through the processes outlined in this NMSO, additional emerging requirements in support of Green Procurement and Sustainable Development will be introduced.

5.0 VALUE-ADDED VENDOR SUPPORT

For Storage Groups 2.0, 3.0, 4.0, 5.0, and 6.0, the technical support infrastructure must consist of no fewer than fifteen (15) support personnel available across Canada, with at least three (3) support personnel holding certifications in each of the groups identified below. For Storage Group 1.0, the corresponding requirements are ten (10) and two (2) respectively:

  • Group I:
    • VMware Certified Professionals
  • Group II:
    • Microsoft Certified Systems Engineers
    • Linux Certified Engineers
  • Group III:
    • Oracle Solaris Certified Engineers
    • HP-UX Certified or HP UNIX Trained Engineers
    • IBM AIX Certified Engineers
    • IBM z/OS Certified Engineers
  • Group IV:
    • SNIA Certified Engineers
    • Brocade Certified Professional Engineers
    • Brocade Certified Network Professionals
    • Brocade Certified Fabric Designers
    • Brocade Certified Fabric Professional (BCFP)
    • Cisco DCNI-2 Certified Engineers
    • Cisco Certified Network Associates who have completed DCUFI
    • Cisco Certified Network Infrastructure Support Specialists
    • Cisco Certified Network Architects

For Converged Infrastructure System Group 7.0:

Vendors must have an experienced support team dedicated to converged solutions, including but not limited to customer account and technical sales representatives, professional services staff, and technical support engineers. The support organization must provide customers and partners with direct access to technical experts who collaborate and have access to shared resources to resolve potential issues.