
Hello community, I have two questions at the end based on the planned config below.

vSAN is currently connected to a single 10Gb switch (bad design, I know, hence the upgrade). I will be implementing two new HPE SN2010M TOR switches in the near future.

Planned config:
- 3-host vSAN 6.7 U3 cluster (adding a 4th node in the near future)
- Dual-port 25Gb SFP28 NIC in each host (for vSAN and vMotion)
- Two new HPE SN2010M TOR switches (planning to isolate them for vSAN and vMotion traffic only)
- vmnic1 dedicated to vSAN (25Gb SFP28 port)
- vmnic0 dedicated to vMotion (25Gb SFP28 port)
- No LAG/LACP configured on any of the vSAN or vMotion ports connected to the TOR switches (for simplicity)
- VM traffic will be a LAG across 3 Ethernet ports to a third switch (Cisco)
- Management traffic will be a single Ethernet port to a third switch (Cisco)

In review, two 25Gb SFP28 links from each host would connect, one link to each TOR switch, so vSAN and vMotion each get their own dedicated 25Gb link. Teaming settings for the vSAN and vMotion port groups:
- Network failure detection: link status only
- Load balancing: use explicit failover order
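In case it helps to see it concretely, here is roughly what I'm planning, expressed as a pyVmomi sketch. The host name, credentials, vSwitch name, and port group names are placeholders I made up, and pinning the other 25Gb port as standby is my assumption; only the active uplinks above are decided.

```python
# Minimal pyVmomi sketch: apply "use explicit failover order" with
# "link status only" detection to the vSAN and vMotion port groups.
# All names/credentials below are hypothetical placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOST = "esxi01.example.local"   # placeholder ESXi host
USER = "root"
PASSWORD = "changeme"
VSWITCH = "vSwitch1"            # placeholder vSwitch carrying the SFP28 uplinks

# Per-port-group uplink plan: vSAN pinned to vmnic1, vMotion to vmnic0.
# Using the other 25Gb port as standby is an assumption on my part, not
# something I've finalized.
PORTGROUPS = {
    "vSAN":    {"active": ["vmnic1"], "standby": ["vmnic0"]},
    "vMotion": {"active": ["vmnic0"], "standby": ["vmnic1"]},
}

ctx = ssl._create_unverified_context()  # lab only; validate certs in prod
si = SmartConnect(host=HOST, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    # Naive inventory walk: first datacenter, first cluster, first host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem

    for pg_name, uplinks in PORTGROUPS.items():
        # "Use explicit failover order" + "link status only" (no beacon probing)
        teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
            policy="failover_explicit",
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=uplinks["active"],
                standbyNic=uplinks["standby"],
            ),
            failureCriteria=vim.host.NetworkPolicy.NicFailureCriteria(
                checkBeacon=False,  # link status only
            ),
        )
        spec = vim.host.PortGroup.Specification(
            name=pg_name,
            vlanId=0,               # set the real VLAN IDs here
            vswitchName=VSWITCH,
            policy=vim.host.NetworkPolicy(nicTeaming=teaming),
        )
        net_sys.UpdatePortGroup(pg_name, spec)
finally:
    Disconnect(si)
```

The intent is that each traffic type normally owns its own 25Gb link (and TOR switch), and only fails over to the other port if link state drops.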