VMware NSX-T Data Center is the next-generation product that provides a scalable network virtualization and micro-segmentation platform for multi-hypervisor environments, container deployments, and native workloads.
Under this umbrella there are two main products: NSX Data Center for vSphere (NSX-v) and NSX-T Data Center (NSX-T).
Although NSX-T has not yet reached feature parity with NSX-v, it is becoming clear that, in the future, it will replace NSX-v and become the only NSX Data Center product line.
NSX-T provides a variety of new features for virtualized networking and security across private, public, and hybrid clouds. Highlights include a new intent-based networking user interface, a context-aware firewall, guest and network introspection features, IPv6 support, highly-available clustered management, profile-based NSX installation for vSphere compute clusters, a rebootless maintenance upgrade mode for vSphere compute, a new in-place upgrade mode for vSphere compute, and a migration coordinator for migrating from NSX Data Center for vSphere to NSX-T Data Center.
The gap between the two products is closing fast, and there are also several new features and capabilities available ONLY on NSX-T. NSX-T also has a better and more flexible architecture, including a different tunneling protocol than NSX-v (GENEVE instead of VXLAN).
And the product is evolving quickly: NSX-T Data Center 2.2.0 was released in June 2018, NSX-T Data Center 2.3.0 in September 2018, and now there is the new NSX-T Data Center 2.4.0 release (see the release notes).
The new product version includes a lot of new features and capabilities:
The first big news is related to the management and control planes: both can now be collapsed onto a single cluster of appliances, making the implementation of NSX-T much easier.
But there is specific news related to the management plane that represents a huge improvement: NSX-T Data Center 2.4 now supports the ability to create a cluster of managers for high availability of the user interface and API. This clustering supports either an external load balancer for redundancy and load distribution, or an NSX-provided virtual IP for redundancy. In addition, the management plane function and the central control plane function have been collapsed into this new management cluster, reducing the number of virtual appliances that need to be deployed and managed by the NSX administrator. The NSX Manager appliance is available in three different sizes for different deployment scenarios: a small appliance for lab or proof-of-concept deployments, a medium appliance for deployments of up to 64 hosts, and a large appliance for customers who deploy to a large-scale environment. For details on configuration maximums, see the VMware configuration maximums tool at: https://configmax.vmware.com
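The NSX-provided virtual IP option can be pictured as a simple ownership handoff: the VIP is held by exactly one healthy manager node at a time, and it moves when that node fails. The following is only a toy Python model of that behavior (node names and failover logic are illustrative, not the NSX implementation):

```python
# Toy model of a cluster virtual IP: one healthy manager owns the VIP,
# and ownership fails over when that manager goes down.
# Node names and selection logic are hypothetical, for illustration only.

class ManagerCluster:
    def __init__(self, nodes):
        self.healthy = list(nodes)        # nodes currently up
        self.vip_owner = self.healthy[0]  # VIP starts on the first node

    def fail(self, node):
        """Mark a node as failed; move the VIP if that node owned it."""
        self.healthy.remove(node)
        if not self.healthy:
            raise RuntimeError("cluster down: no healthy managers")
        if self.vip_owner == node:
            self.vip_owner = self.healthy[0]  # VIP fails over

cluster = ManagerCluster(["mgr-01", "mgr-02", "mgr-03"])
print(cluster.vip_owner)   # mgr-01 owns the VIP
cluster.fail("mgr-01")
print(cluster.vip_owner)   # VIP has moved to a surviving node
```

With an external load balancer, by contrast, all three managers can serve traffic at once; the VIP approach provides redundancy only, not load distribution.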
On the networking side there is an important milestone: NSX-T now fully supports IPv6. NSX-T 2.4 supports the following unicast IPv6 addresses:
- Global unicast: globally unique IPv6 addresses, routable on the internet
- Link-local: link-specific IPv6 addresses, used as the next hop by IPv6 routing protocols
- Unique local: site-specific unique IPv6 addresses (based on RFC 4193), used for inter-site communication but not routable on the internet
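These three address types can be distinguished with Python's standard ipaddress module; the sample addresses below are arbitrary examples, not NSX defaults:

```python
import ipaddress

# Unique-local addresses (RFC 4193) live in fc00::/7
ULA = ipaddress.ip_network("fc00::/7")

def classify(addr: str) -> str:
    """Return the unicast IPv6 address type for a given address string."""
    ip = ipaddress.ip_address(addr)
    if ip.is_link_local:   # fe80::/10, link-scoped
        return "link-local"
    if ip in ULA:          # fc00::/7, site-specific unique
        return "unique-local"
    if ip.is_global:       # internet-routable
        return "global-unicast"
    return "other"

print(classify("fe80::1"))          # link-local
print(classify("fd12:3456::1"))     # unique-local
print(classify("2606:4700::1111"))  # global-unicast
```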
For more information on IPv6 support, see this blog post.
There is also a lot of news on the firewall side, which was still weaker than the corresponding part of NSX-v… Now there are several new features:
- Identity Firewall – Identity (user ID) based rules for the Distributed Firewall, quite similar to the same feature in NSX-v. Firewall administrators can now configure distributed rules on virtual machines based on Active Directory groups. This feature allows firewall administrators to apply firewall rules based on the users logged into the virtual machines: NSX auto-detects user log-in and log-off events, and the specific rules for those users are enabled accordingly. The Identity Firewall can detect and enforce rules for a single user per VM, or even track multiple users with separate sessions in the same VM. Firewall administrators create NSX-T groups using Active Directory groups as a membership criterion, and NSX-T Manager automatically retrieves the list of Active Directory groups from the provided domain controllers. Firewall administrators can thus control east-west access for users, especially in virtual desktop environments or remote desktop sessions with terminal services enabled.
- L7 Application Signatures for the Context-Aware Distributed Firewall – L7-based application signatures in Distributed Firewall rules. Users can either combine L3/L4 rules with L7 application signatures or create rules based on L7 application signatures only. Application signatures with various sub-attributes are currently supported for server-to-server or client-to-server communications only. In NSX-T Data Center 2.4, this is available for ESXi-based transport nodes only.
- FQDN/URL Whitelisting for the Context-Aware Distributed Firewall – An innovation that uses distributed DNS snooping to give each connection from each VM its own resolution of the URL/FQDN. Firewall administrators can use pre-canned URL domains and apply them to rules in the distributed firewall. Hybrid applications that access SaaS or cloud-based services can be micro-segmented based on the URLs they access, and client applications or browsers accessing SaaS applications can be granted access on a granular basis. In NSX-T Data Center 2.4, this is available for ESXi-based transport nodes only.
- Service Insertion – Besides the broad array of native security functionality delivered by the Distributed and Gateway Firewall (such as Layer 7 application identity, FQDN whitelisting, and the Identity Firewall, which enable even more granular micro-segmentation), the NSX Service Insertion framework allows various types of partner services, like IDS/IPS, NGFW, and network monitoring solutions, to be inserted transparently into the data path and consumed from within NSX without making changes to the topology. In NSX-T Data Center 2.4, Service Insertion now supports east-west traffic (i.e., traffic between the VMs in the data center): all of it can be redirected to a dynamic chain of partner services. The east-west service plane provides its own forwarding mechanism that allows policy-based redirection of traffic along chains of services. Forwarding along the service plane is entirely automated by the platform: failures are detected and existing/new flows are redirected as appropriate, flow pinning is performed to support stateful services, and multiple path-selection policies are available to optimize for throughput/latency or density.
- Guest Introspection – A service platform that enables VMware partners to provide policy-based, agent-less antivirus and anti-malware offload capabilities for Windows-based guest VM workloads on vSphere ESXi hypervisors.
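Conceptually, the Identity Firewall described above keys distributed rules to AD group membership and activates them per login event on each VM. A rough Python sketch of that idea, with entirely made-up users, groups, and rules:

```python
# Illustrative model of identity-based rules: when a user logs into a VM,
# the rules bound to that user's AD groups become active on that VM.
# All group names, rules, and the session source are hypothetical.

AD_GROUPS = {"alice": {"HR-Users"}, "bob": {"Finance-Users"}}
RULES_BY_GROUP = {
    "HR-Users": ["allow hr-app:443"],
    "Finance-Users": ["allow fin-db:1433"],
}

def active_rules(vm_sessions):
    """vm_sessions: {vm: set of logged-in users} -> {vm: active rules}."""
    result = {}
    for vm, users in vm_sessions.items():
        rules = []
        for user in users:
            for group in AD_GROUPS.get(user, ()):
                rules.extend(RULES_BY_GROUP.get(group, ()))
        result[vm] = rules
    return result

# A VDI host with two concurrent sessions gets both users' rules:
print(active_rules({"vdi-01": {"alice"}, "vdi-02": {"alice", "bob"}}))
```

When a log-off event is detected, the user simply disappears from the VM's session set and the corresponding rules are no longer active.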
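The distributed DNS snooping behind FQDN whitelisting can be pictured as a per-VM table of snooped name resolutions: a connection to an IP is allowed only if that same VM resolved the IP from a whitelisted FQDN. A simplified sketch, with hypothetical domain names and documentation-range IPs:

```python
# Per-VM DNS snooping table: each VM's own resolutions decide which IPs
# a whitelisted FQDN rule allows for that VM. Names and IPs are made up.
from fnmatch import fnmatch

WHITELIST = {"*.office365.com", "app.example-saas.com"}

snooped = {}  # (vm, ip) -> fqdn, learned from snooped DNS responses

def on_dns_response(vm, fqdn, ip):
    """Record that this VM resolved this FQDN to this IP."""
    snooped[(vm, ip)] = fqdn

def allow_connection(vm, ip):
    """Allow only if this VM resolved the IP from a whitelisted FQDN."""
    fqdn = snooped.get((vm, ip))
    if fqdn is None:
        return False  # no snooped resolution for this VM
    return any(fnmatch(fqdn, pattern) for pattern in WHITELIST)

on_dns_response("vm-1", "outlook.office365.com", "203.0.113.10")
print(allow_connection("vm-1", "203.0.113.10"))  # True
print(allow_connection("vm-2", "203.0.113.10"))  # False: vm-2 never resolved it
```

The per-VM keying is the point of the "distributed" snooping: two VMs resolving the same name to different IPs each get their own enforcement.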
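The behavior of the east-west service plane, with ordered chains, flow pinning to a service instance, and automatic bypass of failed instances, can be sketched as follows. This is a conceptual toy, not the NSX forwarding implementation; the service names and hash-based pinning are assumptions:

```python
# Toy east-west service chain: each flow is pinned to one instance of each
# service (so stateful services see every packet of a flow), and failed
# instances are skipped. Purely illustrative of the redirection idea.
import hashlib

class Service:
    def __init__(self, name, instances):
        self.name = name
        self.instances = instances  # e.g. ["ips-1", "ips-2"]
        self.failed = set()

    def pick(self, flow_id):
        """Pin the flow to a healthy instance via a stable hash."""
        healthy = [i for i in self.instances if i not in self.failed]
        if not healthy:
            return None  # whole service bypassed
        h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
        return healthy[h % len(healthy)]

def service_path(chain, flow_id):
    """Ordered list of service instances this flow traverses."""
    return [inst for svc in chain
            if (inst := svc.pick(flow_id)) is not None]

ids = Service("ids", ["ids-1", "ids-2"])
ngfw = Service("ngfw", ["ngfw-1"])
chain = [ids, ngfw]
print(service_path(chain, "10.0.0.1->10.0.0.2:443"))
ids.failed.update({"ids-1", "ids-2"})
print(service_path(chain, "10.0.0.1->10.0.0.2:443"))  # ids step bypassed
```

In the real platform, a failed instance would trigger redirection of existing and new flows rather than a simple bypass; the sketch only shows why stable pinning matters for stateful services.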
For more information on all the other improvements, see the release notes or this blog post.