I/Overview:

– Software-Defined Networking: differs from traditional networking by separating the control plane from the data plane.

+ Makes decisions more intelligently.

+ Scales much larger.

+ Provides more advanced capabilities.

– Deploying NSX-T does not require a dedicated hardware appliance; only an OVA is needed.

– Fully integrated with vSphere to secure VMs and provide networking functions for them.

– Furthermore, it supports workloads on bare-metal servers, in the public cloud, and in containers on Kubernetes (K8s).

Example: one VM runs an Application Server and another runs a Web Server. Both VMs are on the same VLAN, and only traffic over TCP/8080 should be allowed => filter exactly that traffic with an NSX-T policy rule.
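As an illustration of that kind of rule, below is a minimal sketch of a distributed firewall policy created through the NSX-T Policy REST API (assuming NSX-T 3.x; the policy name web-to-app and the group paths web-vms/app-vms are hypothetical placeholders that would already exist in your environment):

curl -k -u admin -X PATCH \
  https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/web-to-app \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "web-to-app",
    "category": "Application",
    "rules": [
      {
        "display_name": "allow-tcp-8080",
        "source_groups": ["/infra/domains/default/groups/web-vms"],
        "destination_groups": ["/infra/domains/default/groups/app-vms"],
        "service_entries": [
          {
            "resource_type": "L4PortSetServiceEntry",
            "display_name": "tcp-8080",
            "l4_protocol": "TCP",
            "destination_ports": ["8080"]
          }
        ],
        "scope": ["ANY"],
        "action": "ALLOW"
      }
    ]
  }'

In practice the policy would end with a drop rule (or rely on a default deny) so that everything other than TCP/8080 is blocked between the two VMs.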

– With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 through Layer 7 networking services (for example, switching, routing, access control, firewalling, QoS) in software.

*Overlay Networking:

– An overlay network is a logical connection between two endpoints built on top of the physical infrastructure network. The overlay can include multiple components such as routing, switching, and security, while the physical infrastructure knows nothing about the overlay network.

– NSX-T uses the GENEVE protocol for overlay networking (traffic is encapsulated for data-plane communication). Geneve works by creating Layer 2 logical networks that are encapsulated in UDP packets. A Segment ID in every frame identifies the Geneve logical network without the need for VLAN tags. As a result, many isolated Layer 2 networks can coexist on a common Layer 3 infrastructure, even using the same VLAN ID. GENEVE works similarly to the VXLAN protocol used by the Cisco ACI overlay network.
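A quick way to see this encapsulation on the wire is to capture GENEVE traffic, which uses UDP port 6081, on an ESXi transport node (a hedged example; vmk10 is only a common default for the TEP VMkernel adapter and may differ in your environment):

tcpdump-uw -i vmk10 -nn -v udp port 6081

The outer headers carry the TEP source/destination addresses, while the inner frame is the original VM-to-VM packet.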

– In the vSphere architecture, the encapsulation is performed between the virtual NIC of the guest VM and the logical port on the virtual switch, making the Geneve overlay transparent to both the guest virtual machines and the underlying Layer 3 network.

– The Tier-0 Gateway performs gateway services between overlay and non-overlay hosts, for example, a physical server or the Internet router. The NSX-T Edge virtual machine translates overlay segment IDs to VLAN IDs, so that non-overlay hosts can communicate with virtual machines on an overlay network.

– For NSX-T to work, you must provide an MTU of 1600 or greater on any network that carries Geneve overlay traffic, and enable dynamic routing on the upstream Layer 3 devices (BGP on the upstream Layer 3 devices to establish routing adjacency with the Tier-0 SRs).
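A minimal sketch of how to verify that MTU end to end between two TEPs from an ESXi host (assuming the TEP VMkernel adapters live in the vxlan netstack, which is what NSX-T uses on ESXi, and that <remote-TEP-IP> is the TEP of another transport node):

vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>

The -d flag sets the don't-fragment bit, and -s 1572 plus 28 bytes of IP/ICMP headers produces a 1600-byte packet; if this fails while a default-size ping succeeds, the underlay MTU is too small for Geneve.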

– NSX-T also supports creating logical networks stretched across multiple sites, so VMs that are on the same VLAN but at different sites can now be placed on the same overlay network. A packet generated by a host running a VM is encapsulated with source and destination TEP (Tunnel End Point) addresses and the VNI (defined for a network segment on the logical switch), wrapped around the original packet that contains the addresses of the source and destination VMs (so the physical network never sees the source and destination IPs of the two VMs).

*NSX-T Manager Architecture:

– The NSX-T Manager is just a VM deployed from an OVA. It hosts the management plane and control plane in the same VM; the two planes are converged on each node. NSX Manager provides a Web GUI and a REST API for management purposes. The standard deployment is three VMs in a cluster for redundancy. If all three managers go down, traffic keeps flowing normally, but no configuration changes can be made and objects cannot be moved around.

– No data packets pass through the management plane. Configuration changes are made in the manager, and the manager pushes the configuration to any affected nodes. The NSX-T control plane is always active on all nodes, while the management plane runs active/standby. Each NSX Manager virtual appliance holds the Manager role, Controller role, and Policy role. Requests from users and integrated systems such as vRA, arriving through the Web GUI and the API, can be handled by any manager node in the cluster.

– NSX Manager can also be consumed by a Cloud Management Platform (CMP) such as vRealize Automation to integrate SDN into cloud automation platforms. NSX-T Manager can also connect to the vSphere infrastructure through integration with vCenter Server (Compute Manager).
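The same REST API that a CMP consumes can be queried directly with basic authentication, for example (a hedged sketch; as far as I know this endpoint returns the manager node properties on NSX-T 2.x/3.x):

curl -k -u admin https://<nsx-manager>/api/v1/node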

http://www.velements.net/wp-content/uploads/2020/03/image-1.png

*NSX-T Control Plane Architecture:

– The NSX-T Manager maintains an ARP table, a MAC table, and a TEP table. The ARP table stores the MAC address, IP address, and VNI of the defined network segment for each VM, tied to its transport node (vSphere host or bare-metal server). The TEP table maps the MAC address and IP address of each TEP interface (vSphere host). When a VM sends a packet to another VM, the packet must be encapsulated with the TEP MAC and IP looked up in these three tables. The TEP IP interface is a VMkernel adapter on the vSphere host.

– The control plane is split into two parts in NSX-T Data Center, the central control plane (CCP), which runs on the NSX Controller cluster nodes, and the local control plane (LCP), which runs on the transport nodes.

– The data plane forwards network traffic based on the configuration advertised by the control plane and also reports topology information back to the control plane. NSX-T Data Center supports ESXi and KVM hypervisor transport nodes and Linux bare-metal servers.

– On the NSX-T Manager CLI, use the following commands to show the entries in these tables:

# get logical-switch <UUID of logical switch> arp-table
# get logical-switch <UUID of logical switch> mac-table
# get logical-switch <UUID of logical switch> vtep
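To find the UUID to plug into those commands, the logical switches can usually be listed first (this is an assumption on my part that the plural form of the command is available in your NSX-T version):

# get logical-switches
# get logical-switch <UUID from the output above> mac-table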

*Transport Node:

– Transport nodes are hypervisor hosts, such as ESXi and KVM hosts, and NSX Edges that participate in an NSX-T overlay network. When an ESXi host is configured as a transport node from the NSX-T Manager cluster, the NSX VIB packages are automatically deployed to that host. These packages enable forwarding and encapsulation of traffic onto the NSX-T overlay network and also add some NSX-T CLI commands.
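A quick, hedged way to confirm that the VIBs landed on a host (the exact VIB names vary between NSX-T versions) is to list the installed VIBs from the ESXi shell:

esxcli software vib list | grep -i nsx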

– On a bare-metal server, the code can be pushed directly from the NSX-T Manager, or the agent can be installed manually.

– Edge transport nodes are deployed as appliances.

*N-VDS:

– The N-VDS is a unique type of distributed switch created by NSX-T Manager. The N-VDS teaming policy supports only two policies in NSX-T, Failover Order and Source Port (the latter only on ESXi), and does not support load-based or IP-hash teaming. An N-VDS requires dedicated uplinks on the transport node. With vSphere 7.0+ and VDS 7.0+, you can retain the existing VDS and essentially install NSX on top of it: when configuring NSX on the transport node, choose the VDS option in the New Node Switch menu (no need to remove/migrate uplinks from the VDS to an N-VDS), which turns the VDS into a converged VDS.

– The NSX Edge appliance has a built-in N-VDS. After the VIBs are pushed and the N-VDS is created, network segments can be deployed.

*NSX-T Transport Zone:

– Overlay network segments are defined in a transport zone, and when a transport zone is added to a host, all of its network segments become available to that host. A transport zone is a logical container that controls which hosts/VMs can participate in a particular network by limiting which logical switches a host can see. A transport zone can span multiple host clusters. In NSX-T there are two different types of transport zones, Overlay and VLAN.

– Overlay transport zones: both ESXi/KVM hosts and Edge nodes can be part of an overlay transport zone. When hosts/Edges are added to an overlay transport zone, an N-VDS is installed on them, and traffic between transport nodes is tunneled using TEP information.

– VLAN transport zones: again, both hosts and Edges can be part of this type of transport zone, which supports VLAN-backed segments (management VLANs or traditional VLAN traffic, as on a VDS).

– Separate overlay transport zones can be used for security: one zone for production VMs and one for dev VMs (even when they use the same VLAN). An Edge transport node is a member of both an overlay transport zone and a VLAN transport zone, because its role is to de-encapsulate packets between the overlay network and the regular network. New segments with a VLAN ID are added to the VLAN transport zone. When editing the network of a VM on a host, only segments from transport zones already configured on that host are shown.
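As a hedged sketch of the "add a segment with a VLAN ID to the VLAN transport zone" step through the Policy API (the segment name vlan100-seg, VLAN 100, and the transport-zone UUID placeholder are all illustrative):

curl -k -u admin -X PATCH \
  https://<nsx-manager>/policy/api/v1/infra/segments/vlan100-seg \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "vlan100-seg",
    "vlan_ids": ["100"],
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<VLAN-TZ-UUID>"
  }'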

II/NSX-T Deployment:

1/Deploy NSX-T Manager using nsx-unified-appliance-version.ova:

– Configure the management network settings for the NSX-T Manager.

– After the deployment succeeds, power on the NSX-T Manager appliance.

– Log in to the NSX-T Manager GUI using the admin account.

– Click menu System > License > add a trial license for NSX-T Data Center.

2/Add Compute Manager (vCenter):

– After adding vCenter to NSX-T Manager, its inventory and resources can be imported into the NSX-T GUI.

– Click menu System > Fabric > Compute Managers > Add Compute Manager.

– Check that all ESXi nodes have been imported into the Transport Nodes inventory (NSX is not configured on them yet).
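The same step can also be scripted against the management API (a hedged sketch; the vCenter address, credentials, and thumbprint are placeholders, and the thumbprint must match the vCenter certificate):

curl -k -u admin -X POST \
  https://<nsx-manager>/api/v1/fabric/compute-managers \
  -H 'Content-Type: application/json' \
  -d '{
    "display_name": "vcenter-01",
    "server": "<vcenter-fqdn-or-ip>",
    "origin_type": "vCenter",
    "credential": {
      "credential_type": "UsernamePasswordLoginCredential",
      "username": "administrator@vsphere.local",
      "password": "<password>",
      "thumbprint": "<vcenter-ssl-thumbprint>"
    }
  }'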

3/Deploying Additional NSX-T Manager Node For High Availability:

Menu System > Appliances

– Click Add NSX Appliance and enter the information for the new NSX-T appliance.
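After the additional nodes are deployed, the cluster health can be checked from the NSX-T Manager CLI or the API (the exact output format differs between versions):

# get cluster status

curl -k -u admin https://<nsx-manager>/api/v1/cluster/status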

4/Configure ESXi Transport Node With Converged VDS:

– Define a new uplink profile: the uplink profile configures how many uplinks connect a transport node to NSX-T logical switches (these are not the uplinks from the VDS to the external switch) or an NSX Edge node to the top-of-rack switches, which NIC teaming policy is used (active/active with source-port teaming, or active/standby with failover order), and the TEP VLAN (transport VLAN).

– TEPs are a critical component of NSX-T; each hypervisor participating in NSX-T (known as a host transport node) and each Edge appliance has at minimum one TEP IP address assigned to it. The transport VLAN is just what NSX-T calls the VLAN used for the TEPs. Host transport node TEP IP addresses can be assigned statically, via DHCP, or from predefined IP pools in NSX-T. Edge appliances can have their TEP IP addresses assigned statically or from a pool. The TEP IPs of each transport node are used as source/destination for the GENEVE-encapsulated packets.

– Create a new uplink profile: System tab > Fabric > Profiles.

– Click Add Profile
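For reference, here is a hedged sketch of the same uplink profile created through the management API (the profile name, uplink names, and transport VLAN 150 are illustrative; use LOADBALANCE_SRCID instead of FAILOVER_ORDER for source-port active/active teaming):

curl -k -u admin -X POST \
  https://<nsx-manager>/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "transport_vlan": 150,
    "teaming": {
      "policy": "FAILOVER_ORDER",
      "active_list":  [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ],
      "standby_list": [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ]
    }
  }'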

– Configure NSX-T on each ESXi transport node:

Menu System > Nodes > check the ESXi hosts in the cluster > click Configure NSX.

– Repeat this task for all ESXi hosts in the cluster.
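Once the hosts are prepared, a hedged way to verify that the TEP VMkernel adapters were created (assuming NSX-T placed them in the vxlan netstack, which is its default behavior on ESXi) is:

esxcli network ip interface list -N vxlan
esxcfg-vmknic -l

The new vmk adapters should show IP addresses from the TEP IP pool and an MTU of at least 1600.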
