
An In-Depth Guide to the Differences Between SAN and NAS

An explanation of storage area networks and network attached storage


An illustration of the differences between SAN and NAS.

Storage area networks (SANs) and network attached storage (NAS) both provide networked storage. A NAS is a single device that serves files over the network, while a SAN is a dedicated network of multiple devices that provides block-level storage.
The differences between NAS and SAN show up in their cabling, in how they connect to the rest of the system, and in how other devices communicate with them. The two are sometimes combined into what's known as a unified SAN.

SAN vs. NAS Technology

A NAS unit is a dedicated hardware device that connects to a local area network, usually over Ethernet. The NAS server authenticates clients and manages file operations in much the same manner as a traditional file server, using well-established network protocols such as NFS and SMB.
To keep costs below those of a traditional file server, NAS devices generally run an embedded operating system on simplified hardware. They lack peripherals such as a monitor or keyboard and are instead managed through a browser-based tool.
A SAN commonly uses Fibre Channel interconnects to link a set of storage devices that can share data with one another.
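To make the protocol side concrete, here is a sketch of what /etc/fstab entries for mounting a NAS from a Linux client might look like. The hostname, export path, share name, mount points, and credentials file are all placeholders, not values from this article:

```
# NFS export from the NAS
nas.example.com:/export/shared  /mnt/nas-nfs  nfs   defaults,_netdev  0  0

# SMB/CIFS share from the same NAS
//nas.example.com/shared        /mnt/nas-smb  cifs  credentials=/etc/nas-creds,_netdev  0  0
```

The `_netdev` option tells the system to wait for the network before attempting the mount, which matters for any network-backed filesystem.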

Important NAS and SAN Benefits

The administrator of a home or small business network can connect one NAS device to a local area network. The device itself is a network node, much like computers and other TCP/IP devices, all of which maintain their own IP address and can effectively communicate with other networked devices.
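Because a NAS is an ordinary TCP/IP node, an administrator can verify it is reachable the same way as any other host. A minimal sketch in Python; the hostname is a placeholder, and the ports shown are the standard ones for SMB (445) and NFS (2049):

```python
import socket

def service_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP service on the given host accepts connections."""
    try:
        # create_connection resolves the name and performs the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example calls (hostname is a placeholder for your NAS):
# service_reachable("nas.local", 445)   # SMB file sharing
# service_reachable("nas.local", 2049)  # NFS
```

This only confirms that the service port accepts connections; authentication and share permissions are checked by the file-sharing protocol itself.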
Given that the network attached storage device is attached to the network, all the other devices on that same network have easy access to it (given that proper permissions are set up). Because of their centralized nature, NAS devices offer an easy way for multiple users to access the same data, which is important in situations where users are collaborating on projects or utilizing the same company standards.
Using a software program provided with the NAS hardware, a network administrator can set up automatic or manual backups and file copies between the NAS and all the other connected devices. A NAS device is therefore also useful in the opposite direction: offloading local data to the network storage device's much larger storage pool.
This is useful not only to ensure that users don't lose data, since the NAS can be backed up on a regular schedule regardless of whether end users remember to back up, but also to give other network devices a place to keep large files, especially ones that are frequently shared among network users.
Without a NAS, users have to find another (often slower) way to send data to other devices on the network, such as email or physically moving flash drives. A NAS can hold many gigabytes or terabytes of data, and administrators can add storage capacity to the network by installing additional NAS devices, although each NAS operates independently.
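The backup software bundled with a NAS is vendor-specific, but the core copy-if-newer logic behind such tools can be sketched in a few lines of Python. The paths are placeholders; a real NAS share might be mounted at /mnt/nas:

```python
import shutil
from pathlib import Path

def sync_to_nas(source: Path, nas_dir: Path) -> list:
    """Copy files from source into nas_dir when they are missing there
    or the local copy is newer. Returns the list of files copied."""
    copied = []
    nas_dir.mkdir(parents=True, exist_ok=True)
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = nas_dir / src.relative_to(source)
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves timestamps
            copied.append(dest)
    return copied

# Example (placeholder paths):
# sync_to_nas(Path.home() / "Documents", Path("/mnt/nas/backups/documents"))
```

Real backup tools add scheduling, deletion handling, and versioning on top of this basic comparison loop.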
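Since capacity planning drives the decision to add devices, administrators often monitor how full a mounted NAS volume is. The Python standard library makes this a one-liner; the mount point below is a placeholder:

```python
import shutil

def usage_percent(mount_point: str) -> float:
    """Return the percentage of space used on the given mount point."""
    usage = shutil.disk_usage(mount_point)
    return 100.0 * usage.used / usage.total

# Example (placeholder path for a NAS mounted on the local system):
# if usage_percent("/mnt/nas") > 90:
#     print("NAS volume nearly full; consider adding capacity")
```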
Administrators of large enterprise networks may require many terabytes of centralized file storage or extremely high-speed file transfer operations. Installing an army of independent NAS devices is rarely practical, so administrators can instead deploy a SAN containing a high-performance disk array to provide the needed scalability and performance.
However, SANs are not always physical. You can also create virtual SANs (VSANs) that are defined entirely in software. Virtual SANs are easier to manage and scale because they're hardware-independent and reconfigurable without rewiring anything.

SAN/NAS Convergence

As internet technologies like TCP/IP and Ethernet have proliferated worldwide, some SAN products are making the transition from Fibre Channel to the same IP-based approach NAS uses, such as iSCSI, which carries block storage commands over ordinary TCP/IP networks. At the same time, rapid improvements in disk storage technology mean that today's NAS devices offer capacities and performance that were once possible only with a SAN.
These two industry factors have led to a partial convergence of the NAS and SAN approaches to network storage, effectively creating high-speed, high-capacity, centrally located network devices.
When SAN and NAS capabilities are combined in one device this way, the result is sometimes called a "unified SAN"; often it is essentially a NAS device that also exposes block storage using the same technology behind SAN.
