
NAS vs. SAN: Differences and Use Cases

NAS and SAN are as much complementary as they are competitive, filling different needs and use cases within the organization. Many larger organizations own both.

NAS “versus” SAN doesn’t tell the whole story when comparing these two popular storage architectures. Each fills needs the other does not, which is why the choice is rarely either/or.
However, enterprise IT budgets are not infinite, and organizations need to optimize their storage expenditures to suit their priority requirements. This article will help you do that by defining NAS and SAN, calling out their distinctions, and presenting usage cases for both architectures.

Network Attached Storage (NAS)
NAS is a file-level data storage device attached to a TCP/IP network, usually Ethernet. It typically uses the NFS or SMB/CIFS protocols, although other choices such as HTTP are available.
NAS appears to the operating system as a shared folder, and employees access files on the NAS as they would any other file on the network. NAS is LAN-dependent: if the LAN goes down, so does access to the NAS.
NAS is not typically as fast as block-based SAN, but high-speed LANs can overcome most performance and latency issues.
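Because a mounted NAS share is just another path, applications read and write it with ordinary file APIs. A minimal Python sketch illustrates this; the share root passed in is a placeholder (on Linux an NFS share might be mounted at /mnt/nas), and any local directory behaves identically:

```python
import os

def save_to_share(share_root: str, name: str, data: bytes) -> str:
    """Write a file to a NAS share exactly as if it were local storage.

    `share_root` is wherever the NFS or SMB/CIFS share is mounted;
    the path is a placeholder, not a real mount point.
    """
    path = os.path.join(share_root, name)
    with open(path, "wb") as f:
        f.write(data)
    return path

def load_from_share(share_root: str, name: str) -> bytes:
    """Read a file back from the share -- no storage-specific API needed."""
    with open(os.path.join(share_root, name), "rb") as f:
        return f.read()
```

The point of the sketch is what is absent: nothing in the code knows or cares that the directory is network storage, which is exactly why NAS is so easy to adopt.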
Storage Area Network (SAN)
SAN is a dedicated high-performance network for consolidated block-level storage. The network interconnects storage devices, switches, and hosts. High-end enterprise SANs may also include SAN directors for higher performance and efficient capacity usage.
Servers connect to the SAN fabric using host bus adapters (HBAs). Servers see SAN storage as locally attached, so multiple servers can share a storage pool. SANs do not depend on the LAN and relieve pressure on the local network by offloading data traffic from attached servers.
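Because a LUN presents as locally attached storage, a server addresses it by block offsets rather than file names. A hedged Python sketch of block-level reads follows; it uses a plain file to stand in for a raw device node (a name like /dev/sdb is only a placeholder), and assumes the classic 512-byte logical block size:

```python
import os

BLOCK_SIZE = 512  # assumed logical block size; modern LUNs may use 4096

def read_blocks(device_path: str, lba: int, count: int) -> bytes:
    """Read `count` logical blocks starting at logical block address `lba`.

    On a real SAN LUN, `device_path` would be a raw device node
    (placeholder: /dev/sdb); any seekable file works for illustration.
    """
    fd = os.open(device_path, os.O_RDONLY)
    try:
        # Seek to the byte offset of the requested LBA, then read whole blocks.
        os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, count * BLOCK_SIZE)
    finally:
        os.close(fd)
```

Note there are no file names or directories at this layer; imposing that structure on raw blocks is the job of whatever file system the server puts on top of the LUN.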

NAS vs. SAN: 7 Big Differences

1)  Fabric. NAS uses TCP/IP networks, most commonly Ethernet. Traditional SANs typically run on high-speed Fibre Channel networks, although more SANs are adopting IP-based fabrics because of FC’s expense and complexity. High performance remains a SAN requirement, and flash-based fabric protocols are helping to close the gap between FC speeds and slower IP networks.
2)  Data processing. The two storage architectures process data differently: NAS processes file-based data, while SAN processes block data. The story is not quite that straightforward, of course: NAS may operate with a global namespace, and SANs can use a specialized SAN file system. A global namespace aggregates multiple NAS file systems to present a consolidated view. By default, each server in a SAN maintains its own dedicated, non-shared LUN; a SAN file system lets multiple servers safely share data by coordinating file-level access to the same LUN.
3)  Protocols. NAS connects directly to an Ethernet network via a cable into an Ethernet switch, and can use several protocols to communicate with servers, including NFS, SMB/CIFS, and HTTP. On the SAN side, servers communicate with storage devices using the SCSI protocol. The network is formed using SAS/SATA fabrics or mapping layers to other protocols, such as Fibre Channel Protocol (FCP), which maps SCSI over Fibre Channel, and iSCSI, which maps SCSI over TCP/IP.
4)  Performance. SANs are the higher performers for environments that need high-speed traffic such as high transaction databases and ecommerce websites. NAS generally has lower throughput and higher latency because of its slower file system layer, but high-speed networks can make up for performance losses within NAS.
5)  Scalability. Entry-level NAS devices are not highly scalable, but high-end NAS systems scale to petabytes using clusters or scale-out nodes. In contrast, scalability is a major driver for purchasing a SAN: its network architecture lets admins scale performance and capacity in scale-up or scale-out configurations.
6)  Price. Although a high-end NAS will cost more than an entry-level SAN, NAS is generally less expensive to purchase and maintain. NAS devices are considered appliances and have fewer hardware and software management components than a storage area network. Administrative costs also figure into the equation: SANs are more complex to manage, with FC SANs at the top of the complexity heap. A common rule of thumb is to budget 10 to 20 percent of the purchase cost for annual maintenance.
7)  Ease of management. In a one-to-one comparison, NAS wins the ease of management contest. The device easily plugs into the LAN and offers a simplified management interface. SANs require more administration time than the NAS device. Deployment often requires making physical changes to the data center, and ongoing management typically requires specialized admins. The exception to the SAN-is-harder argument is multiple NAS devices that do not share a common management console.
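The file-versus-block distinction in point 2 ultimately comes down to addressing: a NAS request names a file, while a SAN request names numbered blocks. A small illustrative Python helper (assuming 512-byte blocks, a simplification) shows the translation a file system performs before talking to block storage:

```python
BLOCK_SIZE = 512  # assumed logical block size for illustration

def byte_range_to_lbas(offset: int, length: int) -> list:
    """Return the logical block addresses (LBAs) covering a byte range.

    A NAS serves requests like "read report.txt"; the equivalent SAN
    request is "read LBAs 0-1". This helper sketches that mapping.
    """
    first = offset // BLOCK_SIZE                 # block holding the first byte
    last = (offset + length - 1) // BLOCK_SIZE   # block holding the last byte
    return list(range(first, last + 1))
```

For example, a 100-byte read starting at byte 500 straddles two blocks, so the SAN must fetch LBAs 0 and 1 even though only 100 bytes were requested.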

NAS and SAN Use Cases

NAS and SAN serve different needs and use cases. Understand what you need and where you need it.
NAS: When you need to consolidate, centralize, and share.
· File storage and sharing. This is NAS’s major use case in SMB, mid-sized, and enterprise remote offices. A single NAS device allows IT to consolidate multiple file servers for simplicity, ease of management, and space and energy savings.
· Active archives. Long-term archives are best stored on less expensive storage like tape or cloud-based cold storage. NAS is a good choice for searchable and accessible active archives, and high capacity NAS can replace large tape libraries for archives.
· Big data. Businesses have several choices for big data: scale-out NAS, distributed JBOD nodes, all-flash arrays, and object-based storage. Scale-out NAS is good for processing large files, ETL (extract, transform, load), intelligent data services like automated tiering, and analytics. NAS is also a good choice for large unstructured data such as video surveillance and streaming, and post-production storage.
· Virtualization. Not everyone is sold on using NAS for virtualization, but the use case is growing: VMware and Hyper-V both support datastores on NAS. It is a popular choice for new or small virtualization environments when the business does not already own a SAN.
· Virtual desktop infrastructure (VDI). Mid-range and high-end NAS systems offer native data management features that support VDI, such as fast desktop cloning and data deduplication.
SAN: When you need to accelerate, scale, and protect.
·  Databases and ecommerce websites. General file serving or NAS will do for smaller databases, but high-speed transactional environments need the SAN’s high I/O processing speeds and very low latency. This makes SANs a good fit for enterprise databases and high traffic ecommerce websites.
·  Fast backup. Server operating systems view the SAN as attached storage, which enables fast backup to the SAN. Backup traffic does not travel over the LAN since the server is backing up directly to the SAN. This makes for faster backup without increasing the load on the Ethernet network.  
·  Virtualization. NAS supports virtualized environments, but SANs are better suited to large-scale and/or high-performance deployments. The storage area network quickly transfers multiple I/O streams between VMs and the virtualization host, and high scalability enables dynamic processing.
·  Video editing. Video editing applications need very low latency and very high data transfer rates. SANs deliver this performance by cabling directly to the video editing desktop client, dispensing with an extra server layer. Video editing environments need a third-party SAN distributed file system and per-node load balancing control.
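The fast-backup point above can be sketched in a few lines of Python: because the LUN presents as locally attached storage, a block-for-block copy to a backup image never crosses the Ethernet LAN. This is an illustrative sketch, not a real backup tool; the paths are placeholders, and a plain file stands in for the LUN device node:

```python
CHUNK = 1 << 20  # copy in 1 MiB chunks to keep memory use flat

def lan_free_backup(lun_path: str, image_path: str) -> int:
    """Copy a SAN LUN block-for-block to a backup image file.

    `lun_path` would be a raw device node on a real system (placeholder);
    returns the number of bytes copied.
    """
    copied = 0
    with open(lun_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied
```

The copy loop touches only the SAN fabric and local buses, which is why backup windows shrink without adding load to the production network.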

SAN and NAS Convergence

Unified (or multi-protocol) SAN/NAS combines file and block storage into a single storage system. These unified systems support up to four protocols. The storage controllers allocate physical storage for NAS or SAN processing.
They are popular with mid-range enterprises that need both SAN and NAS but lack the data center space and specialized admins to run separate systems. Converged SAN/NAS systems are a much smaller part of the market than distinct deployments but show steady growth.
