To discuss this subject, we first need to understand that NFS (Network File System) and VMFS (Virtual Machine File System) are different file systems. NFS is a file-level file system, while VMFS is a block-level file system. NFS, VMFS (which includes LUNs/disks), vSAN, and more recently vVols (Virtual Volumes) are the types of Datastores that we can use in VMware.
A brief history of NFS and VMFS file systems:
NFS (Network File System):
NFS is a network file system that has existed since 1984. It was developed by Sun Microsystems and was initially built and used only on UNIX-based systems.
Since 1984 NFS has changed considerably. Version 1 was used only internally at Sun for development; version 2 introduced some changes and started to spread to other UNIX and Linux servers. In the mid-90s NFS v3 arrived, adding support for 64-bit file sizes and offsets, so it could now handle files larger than 2GB. NFS v3 extended to more UNIX/Linux servers and also to NAS, with storage vendors implementing NFS technology on their systems.
It was mainly adopted by NetApp, a significant contributor that developed the next NFS version together with Sun.
That is also when Windows systems started working with NFS alongside Server Message Block (SMB), also known as CIFS.
Main version changes:
• NFS v4
1. TCP over IPv4 is now used as a standard
2. Was ratified as an Internet standard, described in RFC 3530
3. Client/server interactions are secured using the GSS-API framework, adding three security mechanisms to NFSv4: Kerberos, LIPKEY, and SPKM-3
4. The exported pseudo-file system on the server is mountable as a single common root on the client
• NFS v4.1
1. One of the most significant changes in v4.1 was the addition of multipathing, bringing better performance and availability through load balancing across multiple paths
2. Kerberos and thus non-root user authentication are now supported.
3. Parallel NFS (pNFS) support
4. Directory Delegation and other file types
5. Improved compatibility with Microsoft Windows for Access Control Lists (ACLs)
6. LIPKEY and SPKM-3 are no longer required security mechanisms
7. Firewall-friendly single port operations
8. Replication and migration facilities
9. Support for parallelism and data striping
Important Note: With NFSv4.1 VMware doesn’t support these features:
1. Storage I/O control
2. Site Recovery Manager
3. Storage DRS
• NFS v4.2
(this version is still under discussion and in development, even though some of its features are already implemented)
• Server-Side Clone and Copy
• Application Input/Output (I/O) Advise
• Sparse Files
• Space Reservation
• Application Data Block (ADB) Support
• Labeled NFS
• Layout Enhancements
In vCenter, when selecting an NFS Datastore, we can also check some of the above differences between versions.
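To make that check repeatable outside the vCenter UI, here is a minimal sketch using pyVmomi (the Python SDK for the vSphere API); it lists every NAS-backed datastore and reports whether it was mounted with NFS v3 (type "NFS") or NFS v4.1 (type "NFS41"). The vCenter address and credentials are placeholders, not values from this article.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection: certificate verification is skipped for brevity.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",            # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    if ds.summary.type in ("NFS", "NFS41"):
        # For NAS-backed datastores, info.nas carries the export details.
        nas = ds.info.nas
        print(f"{ds.summary.name}: {ds.summary.type} "
              f"({nas.remoteHost}:{nas.remotePath})")

Disconnect(si)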
NFS in VMware:
An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it for its storage needs. vSphere supports versions 3 and 4.1 of the NFS protocol.
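As a hedged, illustrative sketch of that mount operation (the host names, export path, and datastore name below are assumptions, not values from this article), the pyVmomi call that creates such a datastore is HostDatastoreSystem.CreateNasDatastore():

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                      # lab only
si = SmartConnect(host="esxi01.example.com", user="root",   # connect straight to one ESXi host
                  pwd="password", sslContext=ctx)

# On a standalone host connection the first (and only) host object sits here.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="nas01.example.com",    # NFS server (placeholder)
    remotePath="/export/vmware",       # exported path (placeholder)
    localPath="nfs-datastore01",       # datastore name as vSphere will show it
    accessMode="readWrite",
    type="NFS")                        # "NFS41" would request an NFS v4.1 mount

datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted", datastore.summary.name)
Disconnect(si)

The same call works against vCenter; you would simply look up the HostSystem object through the inventory instead of taking the first host of a standalone connection.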
VMFS (Virtual Machine File System):
As opposed to NFS, VMFS is a block-level file system. VMware created VMFS with the primary purpose of storing Virtual Machines and their snapshots. VMFS is a high-performance clustered file system for virtualization; it can be shared between multiple ESXi hosts, which can simultaneously write and read data to and from a single storage location.
In the earlier versions, VMFS1 (a flat file system with no directory structure) and VMFS2, VMFS was created around the idea of a Virtual Infrastructure (VI).
However, only with VMFS3 (around 2006) did VMFS start to go further, with a directory structure in the file system, more improvements, and a more mature file system that would be used until vSphere 4.x.
With vSphere 5.x, a new VMFS5 version (arriving around 2011) improved most of the v3 attributes and introduced many performance enhancements.
Main version changes:
VMFS3 (vSphere 3.x/4.x):
1. Maximum single extent size: 2TB
2. Raw Device Mapping size (virtual and physical): 2TB minus 512 bytes
3. Partition style: MBR
4. Files per volume: approximately 30,720
5. Block size: up to 8MB
VMFS5 (vSphere 5.x/6.x):
• Maximum single extent size: 64TB
• Raw Device Mapping size (virtual compatibility): 62TB
• Raw Device Mapping size (physical compatibility): 62TB
• Partition style: GPT (GUID Partition Table)
• Spanned volume size: 64TB (32 extents in any size combination)
• Files per volume: approximately 130,690
• Standard 1MB file system block size, with support for 2TB virtual disks
• A file located on a VMFS5 volume can be opened in shared mode by a maximum of 32 hosts
• ATS-only locking mechanism on hardware that supports ATS (Atomic Test and Set)
• Online VMFS version upgrades
• Manual reclamation of physical storage space on thin-provisioned storage devices
VMFS6 (vSphere 6.5/6.7):
• Support for 4Kn and 512n storage devices, including local devices
• Automatic space reclamation
• Full space reclamation from the guest OS
• New default snapshot format, SEsparse (VMFS5 uses SEsparse only when the VMDK size goes beyond 2TB)
• Content-Based Read Cache (CBRC), also known as View Storage Accelerator, now supports a 32GB cache
VMFS5 and VMFS6 can coexist. However, due to the changes made in the VMFS6 metadata structures to make them 4K aligned, you cannot upgrade in place (online or offline) from VMFS5 to VMFS6.
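Because there is no in-place upgrade, the usual migration path is to create a fresh VMFS6 datastore and Storage vMotion the virtual machines onto it. The pyVmomi sketch below illustrates that idea under assumed datastore names ("old-vmfs5-ds", "new-vmfs6-ds"); it reads the VMFS major version of the source datastore and then relocates every VM registered on it:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

old_ds = find_by_name(vim.Datastore, "old-vmfs5-ds")   # placeholder names
new_ds = find_by_name(vim.Datastore, "new-vmfs6-ds")

# info.vmfs.majorVersion reports the VMFS generation (5 or 6) of a block datastore.
print("Source datastore is VMFS", old_ds.info.vmfs.majorVersion)

# Relocate (Storage vMotion) every VM that currently lives on the VMFS5 datastore.
for vm in old_ds.vm:
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=new_ds))
    print("Storage vMotion started for", vm.name)

Disconnect(si)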
Note: Most of the new features, maximums, and improvements between VMFS versions are bound to the vSphere version. Even if a feature is not supported in one vSphere version, the next vSphere release may bring that feature or improvement, carried by vSphere itself rather than by a new VMFS version.
In vCenter, when selecting a VMFS Datastore, we can also check some of the above differences between versions.
VMFS in VMware:
Datastores that you deploy on block storage devices use the vSphere Virtual Machine File System (VMFS) format. VMFS is a special high-performance file system format that is optimized for storing virtual machines.
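To make that concrete, here is a hedged pyVmomi sketch (the host address, credentials, and datastore name are placeholders) that formats a free disk/LUN with VMFS by asking the host for a suggested create specification and then calling CreateVmfsDatastore():

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=ctx)

host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
ds_system = host.configManager.datastoreSystem

# Disks visible to the host that do not yet carry a VMFS datastore.
free_disks = ds_system.QueryAvailableDisksForVmfs()
disk = free_disks[0]                         # take the first free LUN for the sketch

# The host returns ready-made create options for that disk; we reuse its spec.
options = ds_system.QueryVmfsDatastoreCreateOptions(disk.devicePath)
spec = options[0].spec
spec.vmfs.volumeName = "vmfs-datastore01"    # placeholder datastore name

datastore = ds_system.CreateVmfsDatastore(spec)
print("Created", datastore.summary.name, "-", datastore.summary.type)
Disconnect(si)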
Main Differences between VMFS and NFS Datastores in VMware:
As we stated above, VMFS is a block-level file system, while NFS is a file-level file system. When creating a VMFS Datastore, the VMFS file system is created by vSphere, while the NFS file system lives on the storage side and is only mounted as a shared folder by vSphere. In the next image, we can see how each Datastore type accesses Virtual Machines using a different type of network connectivity.
VMFS / NFS Pros vs. Cons:
Next, we can check the features and options that VMFS and NFS can provide when used in VMware.
VMFS main benefits:
• Enables Automated CFS Capability
• Optimizes Virtual Machine Access
• Encapsulates the Entire Virtual Machine State in a Single Directory
• Simplifies Provisioning and Administration of Virtual Machines
• Provides Distributed Infrastructure Services for Multiple vSphere Hosts
• Facilitates Dynamic Growth
• Provides Intelligent Cluster Volume Management
• Optimizes Storage Utilization
• Enables High Availability with Lower Management Overhead
• Simplifies Disaster Recovery
NFS main benefits:
• VMDK Thin Provisioning by default
• De-duplication (file, block, variable-block, and application-aware)
• Backup and Restore Granularity
• Virtual Machines stored and managed in groups
• Use of ILM and HSM, or QoS (can also be used with VMFS)
• Maximum Volume Size (depending on the Storage Array vendor)
Conclusion:
The final question is: is VMFS or NFS storage better for your VMware environment? In my opinion, both file systems are mature enough to use in VMware, but it always depends on the needs of your VMware environment. By analyzing the storage management features you need and the final pros and cons, you can decide what fits best in your environment.
However, moving to NFS 4.1 is currently on hold for many companies using VMware environments, because of the restrictions we enumerated above.
When there is a need for shared folders over CIFS (for Windows, or to store users' home directories), then the choice is NFS. If not, you need to create a File Server to provide the shared folders and the access/permissions to those files/folders.
Abd El-Rahman Oreiby
Senior Data Center Engineer
Al Thuraya Security Egypt
www.abdelrahmanoreiby.weebly.com