Session: FUT3040BU, Richard Brunner
Analysis speed depends on storage and database access latency. A key component is local storage latency – nothing can compete with DRAM
What if you could move storage closer to the processing? You can, with byte-addressable persistent memory (PMEM)
A future vSphere release will bring PMEM support using a virtualized NVDIMM device
What is PMEM? A few hundred nanoseconds of latency; byte-level access; regular non-privileged access via load/store CPU instructions
Storage is updated at a finer granularity – read/modify/write latency is vastly lower than with block storage (see the sketch below)
Uses: fast caching layer; database logs; etc.
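A minimal sketch of what that byte-level access looks like from an application, assuming the PMEM region is exposed as a file on a DAX-mounted filesystem (the path /mnt/pmem/counter is hypothetical): an 8-byte field is read, modified, and written in place with ordinary CPU loads/stores, with no block-level read/modify/write cycle.

```c
/* Minimal sketch: update one 8-byte field on PMEM in place.
 * Assumes the region is exposed as a file on a DAX-mounted
 * filesystem; the path below is hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/mnt/pmem/counter", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    /* Map the file; loads and stores now reach the media directly. */
    uint64_t *counter = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED) { perror("mmap"); return 1; }

    *counter += 1;   /* the whole read/modify/write, at byte granularity */

    /* Portable way to force the update to media; native PMEM code
     * uses CPU cache-flush instructions instead (see further below). */
    if (msync(counter, 4096, MS_SYNC) != 0) perror("msync");

    printf("counter = %llu\n", (unsigned long long)*counter);
    munmap(counter, 4096);
    close(fd);
    return 0;
}
```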
With a future vSphere release, no VM driver is needed – guest storage is transparently mapped directly to PMEM.
PMEM is vMotion and FT compatible
DIMM size can range from 8 GiB to 100s of GiB
PMEM Tech: 3D XPoint; HPE DIMMs; HybriDIMM
All PMEM solutions need a way of ensuring that the last set of updates has ‘made it’ to the persistent media (sketch below)
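A hedged sketch of that "made it to the media" step on x86: flush the cache lines covering the updated range (CLWB), then fence so later stores are ordered after the flushes. This is the same recipe libraries such as libpmem use internally; the 64-byte cache-line size and CLWB support (compile with -mclwb) are assumptions here.

```c
/* Sketch: make a range of PMEM stores durable by flushing CPU caches.
 * Assumes x86 with CLWB (compile with -mclwb) and 64-byte cache lines. */
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

static void pmem_persist_range(const void *addr, size_t len) {
    uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + len;
    for (; p < end; p += CACHE_LINE)
        _mm_clwb((void *)p);   /* write this cache line back to media */
    _mm_sfence();              /* order the flushes before later stores */
}
```

With a direct mapping (as in the DAX sketch at the end of these notes), a store followed by a call like pmem_persist_range() is the entire cost of a durable update.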
This requires server hardware, firmware, and software support
VMware Implementation of PMEM
Concept: virtualize and manage NVDIMMs; accelerate both legacy and modified applications; virtual disks stored on PMEM; byte-addressable virtual hardware
Two access methods: vSCSI with a VMDK; vNVDIMM with a modern OS (e.g., Windows Server 2016)
FT will be compatible with vNVDIMM for high availability
vCenter & DRS will support PMEM and manage it at the cluster level
Maintenance mode will also evacuate powered-off VMs and move their data
New VM creation workflow will have a PMEM storage option
Add new device will now have an NVDIMM option – up to 64 per VM
Storage migration will also support NVDIMM
Modes of operation:
NVMe SSD – requires emulation and multiple layers to access the device – slowest mode
vPMEMDISK – no storage stack needed – faster performance
vNVDIMM – only a filesystem needed – very fast
vNVDIMM DAX (direct access mode) – maps persistent memory directly into the application's address space – fastest (sketch below)
DAX mode – 35 GB of persistent data written in one second (512 KB random writes)
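A sketch of what DAX mode looks like to a Linux guest application, assuming a file on a DAX-capable filesystem (the path is hypothetical; Linux 4.15+ and a recent glibc are assumed): MAP_SYNC guarantees the page tables point straight at the persistent media, so stores plus a cache flush (as sketched above) are durable with no filesystem call per update.

```c
/* Sketch: obtain a direct-access (DAX) mapping.  mmap() fails here
 * unless the filesystem really maps the application straight onto
 * persistent media, so success means plain stores plus a CPU cache
 * flush (pmem_persist_range above) are enough for durability. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/mnt/pmem/data", O_RDWR);   /* hypothetical DAX file */
    if (fd < 0) { perror("open"); return 1; }

    void *base = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                      MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (base == MAP_FAILED) { perror("mmap(MAP_SYNC)"); return 1; }

    /* ... the application reads and writes base[] directly ... */

    munmap(base, 1 << 20);
    close(fd);
    return 0;
}
```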