101 Terms Regarding SAN, NAS, and Cluster

1. active copper: A type of Fibre Channel physical connection that allows up to 30 meters of copper cable between adjacent devices.
2. adaptive array: A disk array that is capable of changing its algorithm for virtual data address to physical data location mapping dynamically (i.e., while the array is operating). For instance, an array which can change a given virtual disk representation from mirrored to RAID 5 on the fly is an adaptive array.
3. AL_PA: Acronym for Arbitrated Loop Physical Address.
4. asynchronous I/O: An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. The term comes up frequently in discussions of long-distance data replication.
5. automatic failover: Failover of resources from one computer in a cluster to another that occurs without human intervention.
6. availability: The amount of time that a system is available during those time periods when it is expected to be available. Availability is often measured as a percentage of an elapsed year. For example, 99.99% availability equates to roughly 52.6 minutes of downtime in a year for a system that is expected to be available 24x7. (A short worked example follows this list.)
7. block virtualization: The act of applying virtualization to one or more block-based (storage) services for the purpose of providing a new aggregated, higher-level, richer, simpler, or more secure block service. Block virtualization functions can be nested. A disk drive, RAID system, or volume manager all perform some form of block address to (different) block address mapping or aggregation.
8. bridge controller: A storage controller that forms a bridge between two external I/O buses. Bridge controllers are commonly used to connect single-ended SCSI disks to differential SCSI or Fibre Channel host I/O buses.
9. byte: An eight-bit organizational unit for data (8 bits per byte). A 100 megabit/second link (100 Mb/s) therefore yields at most 12.5 megabytes/second (12.5 MB/s). This is a limiting factor for NAS solutions without Gigabit network infrastructure. (A short conversion example follows this list.)
10. cascading: The process of connecting two or more Fibre Channel hubs or switches together to increase the number of SAN ports or extend distances.
11. cluster: A collection of computers that are interconnected (typically at high speeds) for the purpose of improving reliability, availability, serviceability, and/or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
12. Common Internet File System: A network file system access protocol originally designed and implemented by Microsoft Corporation under the name Server Message Block protocol, and primarily used by Windows clients to communicate file access requests to Windows servers. Abbreviated CIFS. Today, other implementations of the CIFS protocol allow other clients and servers to use it for intercommunication and interoperation with Microsoft operating systems.
13. concurrency: The property of overlapping in time. Usually refers to the execution of I/O operations or I/O requests.
14. disk drive: A non-volatile, randomly addressable, re-writable data storage device. This definition includes both rotating magnetic and optical disks and solid-state disks, or non-volatile electronic storage elements. It does not include specialized devices such as write-once-read-many (WORM) optical disks, nor does it include so-called RAM disks implemented using software to control a dedicated portion of a host computer’s volatile random access memory.
15. disk array: A set of disks from one or more commonly accessible disk subsystems, combined with a body of control software. The control software presents the disks’ storage capacity to hosts as one or more virtual disks. Control software is often called firmware or microcode when it runs in a disk controller. Control software that runs in a host computer is usually called a volume manager.
16. disk cache: A cache that resides in a controller or host whose primary purpose is to improve disk or array I/O performance.
17. E_Port: An expansion port on a Fibre Channel switch. E_Ports are used to link multiple Fibre Channel switches together into a fabric.
18. embedded storage controller: An intelligent storage controller that mounts in a host computer’s housing and attaches directly to a host’s internal I/O bus. Embedded controllers obviate the need for host bus adapters and external host I/O buses. Embedded storage controllers differ from host bus adapters in that they provide functions beyond I/O bus protocol conversion (e.g., RAID).
19. ESCON: Acronym for Enterprise Systems Connection, used with mainframe computer storage arrays.
20. Ethernet: The predominant local area networking technology, based on packetized transmissions between physical ports over a variety of electrical and optical media. Ethernet can transport any of several upper-layer protocols, the most popular of which is TCP/IP. Ethernet standards are maintained by the IEEE 802.3 committee. The unqualified term Ethernet usually refers to 10 Mbps transmission on multi-point copper. Fast Ethernet denotes 100 Mbps transmission, also on multi-point copper facilities. Ethernet and Fast Ethernet both use CSMA/CD physical signaling. Gigabit Ethernet (abbreviated GBE) transmits at 1250 megabaud (1 Gbit of data per second) using 8b/10b encoding with constant transmission detection. (The line-rate arithmetic is worked through in a short example after this list.)
21. Ethernet adapter: An adapter that connects an intelligent device to an Ethernet network. Usually called an Ethernet network interface card, Ethernet NIC, or just plain NIC.
22. F_Port: A port that is part of a Fibre Channel fabric. An F_Port on a Fibre Channel fabric connects to a node’s N_PORT.
23. Fabric: A Fibre Channel switch, or two or more Fibre Channel switches interconnected in such a way that data can be physically transmitted between any two N_Ports on any of the switches.
24. Failover: The automatic substitution of a functionally equivalent system component for a failed one. The term failover is often applied to intelligent controllers connected to the same storage devices and host computers. If one of the controllers fails, failover occurs, and the survivor takes over its I/O load. Also, this term applies to the relocation of a resource or service between nodes in a clustered computer configuration when a node has failed.
25. fast SCSI: A form of SCSI that provides 10 megatransfers per second. Wide fast SCSI has a 16-bit data path, and transfers 20 MBytes per second. Narrow fast SCSI transfers 10 MBytes per second.
26. FC-AL: Acronym for Fibre Channel Arbitrated Loop.
27. FCP: Acronym for Fibre Channel Protocol.
28. Fiber Distributed Data Interface: An ANSI standard for token ring Metropolitan Area Networks (MANs), based on the use of optical fiber cable to transmit data at a rate of 100 Mbits/second. Abbreviated FDDI. Both optical fiber and twisted copper pair variations of the FDDI physical standard exist. FDDI is a completely separate set of standards from Fibre Channel; the two are not directly interoperable.
29. Fibre Channel: A set of standards for a serial I/O bus capable of transferring data between two ports at up to 100 MBytes/second, with standards proposals to go to higher speeds. Fibre Channel supports point to point, arbitrated loop, and switched topologies. Fibre Channel was completely developed through industry cooperation, unlike SCSI, which was developed by a vendor and submitted for standardization after the fact.
30. Fibre Channel Arbitrated Loop: A form of Fibre Channel network in which up to 126 nodes are connected in a loop topology, with each node’s L_Port transmitter connecting to the L_Port receiver of the node to its logical right. Nodes connected to a Fibre Channel Arbitrated Loop arbitrate for the single transmission that can occur on the loop at any instant using a Fibre Channel Arbitrated Loop protocol that is different from Fibre Channel switched and point to point protocols. An arbitrated loop may be private (no fabric connection) or public (attached to a fabric by an FL_Port).
31. Filer: An intelligent network node whose hardware and software are designed to provide file services to client computers. Filers are pre-programmed by their vendors to provide file services, and are not normally user programmable.
32. FL_Port: A port that is part of a Fibre Channel fabric. An FL_Port on a Fibre Channel fabric connects to an arbitrated loop. Nodes on the loop use NL_Ports to connect to the loop. NL_Ports give nodes on a loop access to nodes on the fabric to which the loop's FL_Port is attached.
33. frame: An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network. A Fibre Channel frame consists of a Start of Frame word (SoF, 40 bits); a frame header (8 words, or 320 bits); data (0 to 528 words, or 0 to 2112 ten-bit-encoded bytes); a CRC (one word, or 40 bits); and an End of Frame word (EoF, 40 bits).
34. full duplex: Concurrent transmission and reception of data on a single link.
35. Gb, Gbit, gigabit: Shorthand for 1,000,000,000 (10^9) bits. Storage Networking Industry Association publications typically use the term Gbit to refer to 10^9 bits, rather than 1,073,741,824 (2^30) bits. In Fibre Channel contexts, a nominal gigabit link actually signals at 1,062,500,000 bits per second on the wire.
36. GB, Gbyte: Synonym for gigabyte. Shorthand for 1,000,000,000 (10^9) bytes. The Storage Networking Industry Association uses GByte to refer to 10^9 bytes, as is common in I/O-related applications, rather than the 1,073,741,824 (2^30) convention sometimes used in describing computer system random access memory. (The difference between the two conventions is illustrated in a short example after this list.)
37. GBIC: Acronym for gigabit interface converter. A modular unit that plugs into Fibre Channel and gigabit network switches and contains either a short-wavelength or long-wavelength laser and receiver pair for connecting to the gigabit (usually fiber, but also copper) network infrastructure. In essence, a transceiver that converts between the electrical signals used by host bus adapters (and similar Fibre Channel and Ethernet devices) and either electrical or optical signals suitable for transmission. Gigabit interface converters allow designers to design one type of device and adapt it for either copper or optical applications.
38. Gigabit Ethernet: A group of Ethernet standards in which data is transmitted at 1 Gbit per second.
39. hierarchical storage management: The automated migration of data objects among storage devices, usually based on inactivity. Abbreviated HSM. Hierarchical storage management is based on the concept of a cost-performance storage hierarchy. By accepting lower access performance (higher access times), one can store objects less expensively. By automatically moving less frequently accessed objects to lower levels in the hierarchy, higher cost storage is freed for more active objects, and a better overall cost-performance ratio is achieved.
40. high availability: The ability of a system to perform its function continuously (without interruption) for a significantly longer period of time than the reliabilities of its individual components would suggest. High availability is most often achieved through failure tolerance. High availability is not an easily quantifiable term. Both the bounds of a system that is called highly available and the degree to which its availability is extraordinary must be clearly understood on a case-by-case basis.
41. host bus adapter: An I/O adapter that connects a host I/O bus to a computer’s memory system. Abbreviated HBA. Host bus adapter is the preferred term in SCSI contexts. Adapter and NIC are the preferred terms in Fibre Channel contexts. The term NIC is used in networking contexts such as Ethernet and token ring.
42. in-band (transmission): Transmission of a protocol other than the primary data protocol over the same medium as the primary data protocol. Management protocols are a common example of in-band transmission.
43. in-band virtualization: Virtualization functions or services that are in the data path. In a system that implements in-band virtualization, virtualization services such as address mapping are performed by the same functional components used to read or write data.
44. initiator: The system component that originates an I/O command over an I/O bus or network. I/O adapters, network interface cards, and intelligent controller device I/O bus control ASICs are typical initiators.
45. JBOD: Acronym for “Just a Bunch Of Disks.” Originally used to mean a collection of disks without the coordinated control provided by control software; today the term JBOD most often refers to a cabinet of disks whether or not RAID functionality is present.
46. L_Port: A port used to connect a node to a Fibre Channel arbitrated loop.
47. LAN-free backup: A disk backup methodology in which a SAN appliance performs the actual backup I/O operations, thus freeing the LAN server to perform I/O operations on behalf of LAN clients. Differentiated from serverless by the requirement of an additional SAN appliance to perform the backup I/O operations.
48. latency: Synonym for I/O request execution time, the time between the making of an I/O request and completion of the request’s execution.
49. logical unit (LUN): The entity within a SCSI target that executes I/O commands. SCSI I/O commands are sent to a target and executed by a logical unit within that target. A SCSI physical disk typically has a single logical unit. Tape drives and array controllers may incorporate multiple logical units to which I/O commands can be addressed. Each logical unit exported by an array controller corresponds to a virtual disk.
50. long wavelength laser: A laser with a wavelength of 1300 nm or longer (usually 1300 or 1550 nanometers); widely used in the telecommunications industry.
51. mean time to repair: The average time between a failure and completion of repair in a large population of identical systems, components, or devices. Mean time to repair comprises all elements of repair time, from the occurrence of the failure to restoration of complete functionality of the failed component. This includes time to notice and respond to the failure, time to repair or replace the failed component, and time to make the replaced component fully operational. In mirrored and RAID arrays, for example, the mean time to repair a disk failure includes the time required to reconstruct user data and check data from the failed disk on the replacement disk. With clustered, highly available systems it can be used to indicate the time taken to automatically detect a failure and relocate a resource or service to an alternate node, typically seconds to a few minutes.
52. metropolitan area network (MAN): A network that connects nodes distributed over a metropolitan (city-wide) area as opposed to a local area (campus) or wide area (national or global). Abbreviated MAN. From a storage perspective, MANs are of interest because there are MANs over which block storage protocols (e.g., ESCON, Fibre Channel) can be carried natively, whereas most WANs that extend beyond a single metropolitan area do not currently support such protocols.
53. mirroring: A form of storage array in which two or more identical copies of data are maintained on separate media. Also known as RAID level 1.
54. N_Port: A port that connects a node to a fabric or to another node. Nodes' N_Ports connect to fabrics' F_Ports or to other nodes' N_Ports. N_Ports handle creation, detection, and flow of message units to and from the connected systems. N_Ports are end points in point-to-point links.
55. network attached storage (NAS): A class of systems that provide file services to host computers. A host system that uses network attached storage uses a file system device driver to access data using file access protocols such as NFS or CIFS. NAS systems interpret these commands and perform the internal file and device I/O operations necessary to execute them. A NAS Storage Element consists of an engine, which implements the file services, and one or more devices, on which data is stored.
56. Network Data Management Protocol (NDMP): A communications protocol that allows intelligent devices on which data is stored, robotic library devices, and backup applications to intercommunicate for the purpose of performing backups. Abbreviated NDMP. An open standard protocol for network-based backup of NAS devices. NDMP allows a network backup application to control the retrieval of data from, and backup of, a server without third-party software. The control and data transfer components of backup and restore are separated. NDMP is intended to support tape drives, but can be extended to address other devices and media in the future.
57. Network File System (NFS): A distributed file system and its associated network protocol originally developed by Sun Microsystems Computer Corporation and commonly implemented in UNIX systems, although most other computer systems have implemented NFS clients and/or servers.
58. non-volatile random access memory (NVRAM): Computer system random access memory that has been made impervious to data loss due to power failure through the use of UPS, batteries, or implementation technology such as flash memory.
59. out-of-band virtualization: Virtualization functions or services that are not in the data path. Examples are functions related to meta data, the management of data or storage, security management, backup of data, etc.
60. pcnfsd: A daemon that permits personal computers to access file systems made available through the NFS protocol.
61. RAID: Acronym for Redundant Array of Independent Disks, a family of techniques for managing multiple disks to deliver desirable cost, data availability, and performance characteristics to host environments. (A short worked parity example for the parity RAID levels follows this list.)
62. RAID 0, RAID Level 0: Synonym for data striping.
63. RAID 1, RAID Level 1: Synonym for mirroring.
64. RAID 2, RAID Level 2: A form of RAID in which a Hamming code computed on stripes of data on some of an array’s disks is stored on the remaining disks and serves as check data.
65. RAID 3, RAID Level 3: A form of parity RAID in which all disks are assumed to be rotationally synchronized, and in which the data stripe size is no larger than the exported block size.
66. RAID 4, RAID Level 4: A form of parity RAID in which the disks operate independently, the data strip size is no smaller than the exported block size, and all parity check data is stored on one disk.
67. RAID 5, RAID Level 5: A form of parity RAID in which the disks operate independently, the data strip size is no smaller than the exported block size, and parity check data is distributed across the array’s disks.
68. RAID 6, RAID Level 6: Any form of RAID that can continue to execute read and write requests to all of an array’s virtual disks in the presence of two concurrent disk failures. Both dual check data computations (parity and Reed Solomon) and orthogonal dual parity check data have been proposed for RAID Level 6.
69. SAN (storage area network): A network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file access services. Note: The SNIA definition specifically does not identify the term SAN with Fibre Channel technology. When the term SAN is used in connection with Fibre Channel technology, use of a qualified phrase such as “Fibre Channel SAN” is encouraged. According to this definition an Ethernet-based network whose primary purpose is to provide access to storage elements would be considered a SAN.
70. SCSI: Acronym for Small Computer System Interface. SCSI is a collection of ANSI standards and proposed standards which define I/O buses primarily intended for connecting storage subsystems or devices to hosts through host bus adapters. Originally intended primarily for use with small (desktop and desk-side workstation) computers, SCSI has been extended to serve most computing needs, and is arguably the most widely implemented I/O bus in use today.
71. SCSI adapter: An adapter that connects an intelligent device to a SCSI bus.
72. SCSI bus: Any parallel (multi-signal) I/O bus that implements some version of the ANSI SCSI standard. A wide SCSI bus may connect up to 16 initiators and targets. A narrow SCSI bus may connect up to eight initiators and targets.
73. serverless backup: A disk backup methodology in which either the disk being backed up or the tape device receiving the backup manages and performs the actual backup I/O operations. Serverless backup frees the LAN server to perform I/O operations on behalf of LAN clients and reduces the number of trips the backup data takes through processor memory. Differentiated from LAN-free backup in that no additional SAN appliance is required to offload backup I/O operations from the LAN server.
74. server based virtualization: Virtualization implemented in a host computer rather than in a storage subsystem or storage appliance. Virtualization can be implemented in host computers, in storage subsystems or storage appliances, or in specific virtualization appliances in the storage interconnect fabric.
75. Server Message Block (protocol): A network file system access protocol designed and implemented by Microsoft Corporation and used by Windows clients to communicate file access requests to Windows servers. Abbreviated SMB. Current versions of the SMB protocol are usually referred to as CIFS, the Common Internet File System.
76. snapshot: A fully usable copy of a defined collection of data that contains an image of the data as it appeared at the point in time at which the copy was initiated. A snapshot may be either a duplicate or a replicate of the data it represents.
77. single mode (fiber optic cable): A fiber optic cabling specification that provides for up to 10 kilometers of distance between devices.
78. soft zone: A zone consisting of zone members that are permitted to communicate with each other via the fabric. Soft zones are typically implemented through a combination of name server and Fibre Channel protocol: when a port contacts the name server, the name server returns information only about Fibre Channel ports in the same zone(s) as the requesting port. This prevents ports outside the zone(s) from being discovered, and hence the Fibre Channel protocol will not attempt to communicate with such ports. In contrast to hard zones, soft zones are not enforced by hardware; e.g., a frame that is erroneously addressed to a port that should not receive it will nonetheless be delivered. Well-known addresses are implicitly included in every zone.
79. storage virtualization: The act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, compute servers, or general network resources for the purpose of enabling application- and network-independent management of storage or data. Also, the application of virtualization to storage services or devices for the purpose of aggregating them, hiding complexity, or adding new capabilities to lower-level storage resources. Storage can be virtualized simultaneously in multiple layers of a system, for instance to create HSM-like systems.
80. synchronous operations: Operations which have a fixed time relationship to each other. Most commonly used to denote I/O operations which occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
81. Synchronous Optical Network (SONET): A standard for optical network elements. SONET provides modular building blocks, fixed overheads, integrated operations channels, and flexible payload mappings. Basic SONET provides a bandwidth of 51.840 megabits/second. This is known as OC-1. Higher bandwidths that are n times the basic rate are available (known as OC-n). OC-3, OC-12, OC-48, and OC-192 are currently in common use.
82. target: The system component that receives a SCSI I/O command.
83. target ID: The SCSI bus address of a target device or controller.
84. terabyte: Shorthand for 1,000,000,000,000 (10^12) bytes. SNIA publications typically use the 10^12 convention commonly found in I/O literature rather than the 1,099,511,627,776 (2^40) convention sometimes used when discussing random access memory. Equivalent to 1,000 gigabytes.
85. third party copy: A protocol for performing tape backups using minimal server resources by copying data directly from the source device (disk or array) to the target device (tape transport) without passing through a server.
86. Total cost of ownership (TCO): The comprehensive cost of a particular capability such as data processing, storage access, file services, etc. TCO includes acquisition, environment, operations, management, service, upgrade, loss of service, and residual value.
87. tunneling: A technology that enables one network protocol to send its data via another network protocol’s connections. Tunneling works by encapsulating the first network protocol within packets carried by the second protocol. A tunnel may also encapsulate a protocol within itself (e.g., an IPsec gateway operates in this fashion, encapsulating IP in IP and inserting additional IPsec information between the two IP headers).
88. Ultra SCSI: A form of SCSI capable of 20 megatransfers per second. Single ended Ultra SCSI supports bus lengths of up to 1.5 meters. Differential Ultra SCSI supports bus lengths of up to 25 meters. Ultra SCSI specifications define both narrow (8 data bits) and wide (16 data bits) buses. A narrow Ultra SCSI bus transfers data at a maximum of 20 MBytes per second. A wide Ultra SCSI bus transfers data at a maximum of 40 MBytes per second.
89. Ultra2 SCSI: A form of SCSI capable of 40 megatransfers per second. There is no single-ended Ultra2 SCSI specification. Low voltage differential (LVD) Ultra2 SCSI supports bus lengths of up to 12 meters. High voltage differential Ultra2 SCSI supports bus lengths of up to 25 meters. Ultra2 SCSI specifications define both narrow (8 data bits) and wide (16 data bits) buses. A narrow Ultra2 SCSI bus transfers data at a maximum of 40 MBytes per second. A wide Ultra2 SCSI bus transfers data at a maximum of 80 MBytes per second.
90. Ultra3 SCSI: A form of SCSI capable of 80 megatransfers per second. There is no single-ended Ultra3 SCSI specification. Low voltage differential (LVD) Ultra3 SCSI supports bus lengths of up to 12 meters. There is no high voltage differential Ultra3 SCSI specification. Ultra3 SCSI specifications define only wide (16 data bits) buses. A wide Ultra3 SCSI bus transfers data at a maximum of 160 MBytes per second. (The megatransfers-to-MBytes arithmetic for the SCSI family is worked through in a short example after this list.)
91. virtual device: A device presented to an operating environment by control software or by a volume manager. From an application standpoint, a virtual device is equivalent to a physical one. In some implementations, virtual devices may differ from physical ones at the operating system level (e.g., booting from a host based disk array may not be possible).
92. virtual disk: A set of disk blocks presented to an operating environment as a range of consecutively numbered logical blocks with disk-like storage and I/O semantics. The virtual disk is the disk array object that most closely resembles a physical disk from the operating environment’s viewpoint.
93. Virtualization: The act of integrating one or more (back end) services or functions with additional (front end) functionality for the purpose of providing useful abstractions. Typically virtualization hides some of the back end complexity, or adds or integrates new functionality with existing back end services. Examples of virtualization are the aggregation of multiple instances of a service into one virtualized service, or to add security to an otherwise insecure service. Virtualization can be nested or applied to multiple layers of a system.
94. Wave Division Multiplexing (WDM): The splitting of light into a series of “colors” from a few (sparse) to many with a narrow wavelength separation (Dense WDM) for the purpose of carrying simultaneous traffic over the same physical fiber (9 micron usually). Each “color” is a separate data stream.
95. Windows Internet Naming Service (WINS): A facility of the Windows NT operating system that translates between IP addresses and symbolic names for network nodes and resources.
96. world wide name (WWN): A 64-bit unsigned Name_Identifier that is worldwide unique. More generally, a unique 48- or 64-bit number assigned by a recognized naming authority (often via block assignment to a manufacturer) that identifies a connection or a set of connections to the network. A WWN is assigned for the life of a connection (device). Most networking technologies (e.g., Ethernet, FDDI) use a world wide name convention.
97. write back cache: A caching technique in which the completion of a write request is signaled as soon as the data is in cache, and actual writing to non-volatile media occurs at a later time. Write-back cache includes an inherent risk that an application will take some action predicated on the write completion signal, and a system failure before the data is written to non-volatile media will cause media contents to be inconsistent with that subsequent action. For this reason, good write-back cache implementations include mechanisms to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time.
98. write through cache: A caching technique in which the completion of a write request is not signaled until data is safely stored on non-volatile media. Write performance with a write-through cache is approximately that of a non-cached system, but if the data written is also held in cache, subsequent read performance may be dramatically improved. (A short sketch contrasting write-through and write-back caching follows this list.)
99. zone: A collection of Fibre Channel N_Ports and/or NL_Ports (i.e., device ports) that are permitted to communicate with each other via the fabric. Any two N_Ports and/or NL_Ports that are not members of at least one common zone are not permitted to communicate via the fabric. Zone membership may be specified by: 1) port location on a switch (i.e., Domain_ID and port number); 2) the device's N_Port_Name; 3) the device's address identifier; or 4) the device's Node_Name. Well-known addresses are implicitly included in every zone. (A short zone-membership sketch follows this list.)
100. zone set: A set of zone definitions for a fabric. Zones in a zone set may overlap (i.e., a port may be a member of more than one zone). Fabric management may support switching between zone sets to enforce different access restrictions (e.g., at different times of day).
101. zoning: A method of subdividing a SAN into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
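A few of the more quantitative terms above reward a short worked example. The sketches that follow are minimal Python illustrations written for this article, not implementations taken from any product or standard. First, availability (term 6): converting an availability percentage into a yearly downtime budget for 24x7 operation.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget_minutes(availability_pct: float) -> float:
    """Minutes of allowable downtime per year for a system expected to be up 24x7."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_budget_minutes(99.99), 1))   # 52.6 -- "four nines"
print(round(downtime_budget_minutes(99.999), 1))  # 5.3  -- "five nines"
```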
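Term 9 (byte) turns on the 8-bits-per-byte conversion between link rates and payload rates; the same arithmetic as a sketch:

```python
def link_mbytes_per_sec(mbits_per_sec: float) -> float:
    """Peak payload rate in megabytes/second for a link rated in megabits/second."""
    return mbits_per_sec / 8  # 8 bits per byte

print(link_mbytes_per_sec(100))   # 12.5  -- Fast Ethernet
print(link_mbytes_per_sec(1000))  # 125.0 -- Gigabit Ethernet
```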
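Term 20 notes that Gigabit Ethernet signals at 1250 megabaud while carrying 1 Gbit/s of data. That gap is a consequence of 8b/10b encoding, in which every 8 data bits are transmitted as a 10-bit code group; a quick check of the arithmetic:

```python
line_rate_mbaud = 1250          # transmission symbols per second, in millions
data_bits_per_code_group = 8
line_bits_per_code_group = 10

data_rate_mbit = line_rate_mbaud * data_bits_per_code_group / line_bits_per_code_group
print(data_rate_mbit)           # 1000.0, i.e. 1 Gbit/s of user data
```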
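Terms 35, 36, and 84 all hinge on the decimal (10^n) versus binary (2^n) unit conventions. The gap between the two is why a device sized in decimal gigabytes looks smaller when reported in binary units:

```python
GB_DECIMAL = 10**9   # 1,000,000,000 bytes (the convention used for storage and I/O)
GIB_BINARY = 2**30   # 1,073,741,824 bytes (the convention often used for RAM)

print(GIB_BINARY - GB_DECIMAL)            # 73741824 bytes of difference
print(round(GB_DECIMAL / GIB_BINARY, 4))  # 0.9313 -- about 7% smaller in binary units
```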
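The parity check data behind the parity RAID levels (terms 65 through 67) is, in the common case, a bytewise XOR across the data strips of a stripe. The sketch below is a minimal illustration of the idea, not any controller's actual implementation; the strip contents and sizes are made up.

```python
from functools import reduce

def xor_parity(strips: list[bytes]) -> bytes:
    """Bytewise XOR across equal-length strips yields the parity strip."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

# Three hypothetical data strips belonging to one stripe.
strips = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(strips)

# If one strip is lost, XOR-ing the survivors with the parity strip rebuilds it.
rebuilt = xor_parity([strips[0], strips[2], parity])
assert rebuilt == strips[1]
```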
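The SCSI transfer rates quoted in terms 25 and 88 through 90 all follow from the same arithmetic: megatransfers per second multiplied by the bytes moved per transfer (one byte on a narrow 8-bit bus, two on a wide 16-bit bus).

```python
def scsi_mbytes_per_sec(megatransfers_per_sec: int, bus_width_bits: int) -> float:
    """Burst bandwidth: transfers per second times bytes moved per transfer."""
    return megatransfers_per_sec * (bus_width_bits / 8)

print(scsi_mbytes_per_sec(10, 16))  # 20.0  -- wide fast SCSI
print(scsi_mbytes_per_sec(20, 16))  # 40.0  -- wide Ultra SCSI
print(scsi_mbytes_per_sec(40, 16))  # 80.0  -- wide Ultra2 SCSI
print(scsi_mbytes_per_sec(80, 16))  # 160.0 -- wide Ultra3 SCSI
```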
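Terms 97 and 98 differ only in when write completion is signaled. The toy classes below sketch that difference under simplifying assumptions: the `backing` object and its `write()` method are hypothetical stand-ins for non-volatile media, and real controllers add battery backup, flushing policies, and error handling.

```python
class WriteThroughCache:
    """Completion is signaled only after the data reaches non-volatile media."""
    def __init__(self, backing):
        self.backing = backing   # hypothetical object with a write(block, data) method
        self.cache = {}

    def write(self, block, data):
        self.backing.write(block, data)  # hit the non-volatile media first...
        self.cache[block] = data         # ...then keep a copy to speed later reads
        return "complete"


class WriteBackCache:
    """Completion is signaled as soon as the data is in cache; the media is
    updated later, so cache contents must survive power and system failures."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
        self.dirty = set()

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.add(block)
        return "complete"                # the caller may act on this immediately

    def flush(self):
        """Write dirty blocks to non-volatile media, e.g. on a timer or at restart."""
        for block in self.dirty:
            self.backing.write(block, self.cache[block])
        self.dirty.clear()
```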
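Finally, zoning (terms 99 through 101) boils down to a membership test: two device ports may communicate via the fabric only if they share at least one zone. The zone names and WWN-style identifiers below are entirely hypothetical.

```python
# A hypothetical zone set; each zone lists the device ports (by WWN) it contains.
zone_set = {
    "zone_database": {"wwn_host_a", "wwn_array_1"},
    "zone_backup":   {"wwn_host_b", "wwn_tape_1", "wwn_array_1"},
}

def can_communicate(port_1: str, port_2: str) -> bool:
    """Two ports may talk via the fabric only if some zone contains both."""
    return any(port_1 in members and port_2 in members
               for members in zone_set.values())

print(can_communicate("wwn_host_a", "wwn_array_1"))  # True  -- share zone_database
print(can_communicate("wwn_host_a", "wwn_tape_1"))   # False -- no common zone
```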
