vSphere 5.0 Virtual Networking Explained (Study Notes, Part 1)

1. The Importance of Virtual Networks

Clearly, virtual networking within ESXi is a key area for every vSphere administrator to understand fully. (Virtual networking is the lifeblood of VM communication; without the network, none of the other features are of any use.)

2. Putting Together a Virtual Network

Designing a vSphere network has much in common with designing a physical network. The main building blocks:

  • vSphere Standard Switch
  • vSphere Distributed Switch
  • Port/Port Group:

A logical object on a vSwitch that provides specialized services for the VMkernel or VMs. A virtual switch can contain a VMkernel port or a VM port group. On a vSphere Distributed Switch, these are called dvPort groups.

  • VMkernel Port:

A specialized virtual switch port type that is configured with an IP address to allow vMotion, iSCSI storage access, network attached storage (NAS) or Network File System (NFS) access, or vSphere Fault Tolerance (FT) logging. Now that vSphere 5 includes only VMware ESXi hosts, a VMkernel port also provides management connectivity for managing the host. A VMkernel port is also referred to as a vmknic.

  • VM Port Group :

A group of virtual switch ports that share a common configuration and allow VMs to access other VMs or the physical network.

  • Virtual LAN
  • Trunk Port (Trunking)
  • Access Port

A port on a physical switch that passes traffic for only a single VLAN. Unlike a trunk port, which maintains the VLAN identification for traffic moving through the port, an access port strips away the VLAN information for traffic moving through the port.

  • Network Interface Card (NIC) Team: (a group of physical network adapters that provides redundancy and load balancing)
  • vmxnet Adapter

A virtualized network adapter operating inside a guest operating system (guest OS). The vmxnet adapter is a high-performance, 1 Gbps virtual network adapter that operates only if the VMware Tools have been installed. The vmxnet adapter is sometimes referred to as a paravirtualized driver. The vmxnet adapter is identified as Flexible in the VM properties.

  • vlance Adapter

A virtualized network adapter operating inside a guest OS. The vlance adapter is a 10/100 Mbps network adapter that is widely compatible with a range of operating systems and is the default adapter used until the VMware Tools installation is completed.

  • e1000 Adapter

A virtualized network adapter that emulates the Intel e1000 network adapter. The Intel e1000 is a 1 Gbps network adapter. The e1000 network adapter is the most common in 64-bit VMs.


  • Comparing Virtual Switches and Physical Switches
    • Like its physical counterpart, a vSwitch functions at Layer 2, maintains MAC address tables, forwards frames to other switch ports based on the MAC address, supports VLAN configurations, is capable of trunking by using IEEE 802.1q VLAN tags, and is capable of establishing port channels.

    • A vSwitch does not support the use of dynamic negotiation protocols for establishing 802.1q trunks or port channels, such as Dynamic Trunking Protocol (DTP) or Port Aggregation Protocol (PAgP).

    • A vSwitch cannot be connected to another vSwitch, thereby eliminating a potential loop configuration. Because there is no possibility of looping, vSwitches do not run Spanning Tree Protocol (STP). (Loops cannot form, so STP and its link overhead are unnecessary.)

    • You cannot use a vSwitch to interconnect other devices, because traffic that enters through one uplink is never forwarded out another uplink.
3. Creating and Configuring Virtual Switches

By default, every virtual switch is created with 128 ports. However, only 120 of the ports are available, and only 120 are displayed when looking at a vSwitch configuration through the vSphere Client. Reviewing a vSwitch configuration via the vicfg-vswitch command shows the entire 128 ports. The 8-port difference is attributed to the fact that the VMkernel reserves 8 ports for its own use.

After a virtual switch is created, you can adjust the number of ports to 8, 24, 56, 120, 248, 504, 1016, 2040, or 4088. These are the values that are reflected in the vSphere Client. But, as noted, there are 8 ports reserved, and therefore the command line will show 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096 ports for virtual switches.

Note: Changing the number of ports requires a reboot of the ESXi host.
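The mapping between what the CLI reports and what the vSphere Client displays is simple arithmetic. A quick sketch in plain POSIX shell (nothing ESXi-specific here) reproduces both lists above:

```shell
# The VMkernel reserves 8 ports on every vSwitch, so the vSphere Client
# displays 8 fewer ports than the command line reports.
RESERVED=8
for cli_ports in 16 32 64 128 256 512 1024 2048 4096; do
    echo "CLI: ${cli_ports} ports -> vSphere Client: $((cli_ports - RESERVED)) ports"
done
```

The default of 128 ports at the CLI corresponds to the familiar 120 ports shown in the client.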

  • A closer look at the vSwitch

Note: A vSwitch contains two types of ports: VMkernel ports and VM port groups.

You can create them in the vSphere Client:


4. Understanding Uplinks

Although a vSwitch provides for communication between VMs connected to the vSwitch, it cannot communicate with the physical network without uplinks.

    No Uplink, No vMotion

    VMs communicating through an internal-only vSwitch do not pass any traffic through a physical adapter.


    A vSwitch with a single network adapter allows VMs to communicate with physical servers and other VMs on the network.


A vSwitch can also be bound to multiple physical network adapters. In this configuration, the vSwitch is sometimes referred to as a NIC team, but in this book I'll use the term NIC team or NIC teaming to refer specifically to the grouping of network connections together, not to refer to a vSwitch with multiple uplinks.

Uplink Limits

Although a single vSwitch can be associated with multiple physical adapters as in a NIC team, a single physical adapter cannot be associated with multiple vSwitches. ESXi hosts can have up to 32 e1000 network adapters, 32 Broadcom TG3 Gigabit Ethernet network ports, or 16 Broadcom BNX2 Gigabit Ethernet network ports. ESXi hosts support up to four 10 Gigabit Ethernet adapters. (A physical NIC can be used by only one vSwitch, never by several vSwitches at once.)

A single vSwitch supports at most 32 uplink NICs.


Note: Use a NIC team to add bandwidth, provide load balancing, and add redundancy.

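As a sketch of how an uplink is added to a team from the command line (assumptions: esxcli is available on the host, the vSwitch is named vSwitch0, and vmnic1 is a spare adapter; adapt the names to your environment and verify the options with --help on your build):

```shell
# List the physical NICs the host has detected (name, driver, link state).
esxcli network nic list

# Add vmnic1 as a second uplink to vSwitch0, turning its single uplink
# into a NIC team.
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Confirm: the Uplinks column of the vSwitch listing should now show both NICs.
esxcli network vswitch standard list
```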

5. Configuring Management Networking

Two ways to configure it:

    • During installation
    • From the DCUI (Direct Console User Interface) after installation
6. Configuring VMkernel Networking

Functions:

    • VMkernel networking carries management traffic
    • VMkernel ports are used for vMotion, iSCSI, NAS/NFS access, and vSphere FT
    • VMkernel ports have a one-to-one relationship with an interface: each VMkernel NIC, or vmknic, requires a matching VMkernel port on a vSwitch.

A VMkernel port is associated with an interface and assigned an IP address for accessing iSCSI or NFS storage devices or for performing vMotion with other ESXi hosts.


The port labels for VMkernel ports should be as descriptive as possible.


Configuration methods:

    • Use the GUI: log in to vCenter or the ESXi host with the vSphere Client or vSphere Web Client
    • Use PowerShell (PowerCLI): the host name passed to the cmdlets must be an FQDN, not an IP address
    • Use SSH to connect to the ESXi host and configure it from the command line

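For the SSH route, the steps map to a handful of esxcli calls. A sketch (assumptions: the port group name vMotion, the interface name vmk1, and the IP settings are placeholders for your own values; check the exact options with --help on your ESXi build):

```shell
# 1. Create a port group on the standard vSwitch to hold the VMkernel port.
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0

# 2. Create the VMkernel NIC (vmknic) attached to that port group.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion

# 3. Assign a static IP address to the new interface.
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static
```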

7. Configuring VM Networking

A VM port group, on the other hand, does not have a one-to-one relationship, and it does not require an IP address.

A vSwitch with a VM port group uses an associated physical network adapter to establish a switch-to-switch connection with a physical switch.


8. Configuring VLANs

The traditional roles of a VLAN: segmenting traffic effectively, improving security, containing broadcast storms, and so on.

Note: VLANs utilize the IEEE 802.1Q standard for tagging. (VMware virtual networking uses conventional dot1q tagging.)

Normally the VLAN ID will range from 1 to 4094. In the ESXi environment, however, a VLAN ID of 4095 is also valid.

Where VLAN access is needed:

    • The management network needs access to the network segment carrying management traffic.
    • Other VMkernel ports, depending upon their purpose, may need access to an isolated vMotion segment or the network segment carrying iSCSI and NAS/NFS traffic.
    • VM port groups need access to whatever network segments are applicable for the VMs running on the ESXi hosts.

A comparison of designs without and with VLANs.


    Note:

The relationship between VLANs and port groups is not a one-to-one relationship; a port group can be associated with only one VLAN at a time, but multiple port groups can be associated with a single VLAN.

    The physical switch ports must be configured as trunk ports in order to pass the VLAN information to the ESXi hosts for the port groups to use:

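Tagging a port group with a VLAN ID from the command line is a one-liner. A sketch (assumptions: a port group named ProductionLAN already exists, and VLAN 100 is allowed on the physical trunk; the names are placeholders):

```shell
# Assign VLAN 100 to the port group; frames leaving the host on this
# port group are 802.1Q-tagged with VLAN ID 100.
esxcli network vswitch standard portgroup set --portgroup-name=ProductionLAN --vlan-id=100

# VLAN 0 disables tagging; the special ID 4095 passes all VLANs through
# to the VM (Virtual Guest Tagging). Review the result:
esxcli network vswitch standard portgroup list
```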

9. Configuring NIC Teaming

NIC teaming involves connecting multiple physical network adapters to a single vSwitch. NIC teaming provides redundancy and load balancing of network communications to the VMkernel and VMs.

      Virtual switches with multiple uplinks offer redundancy and load balancing:


      Note:

Building a functional NIC team requires that all uplinks be connected to physical switches in the same broadcast domain. If VLANs are used, then all the switches should be configured for VLAN trunking, and the appropriate subset of VLANs must be allowed across the VLAN trunk. In a Cisco switch, this is typically controlled with the switchport trunk allowed vlan statement.

      All the physical network adapters in a NIC team must belong to the same Layer 2 broadcast domain.


Note: In the illustrated example, the NIC team on vSwitch0 works correctly because all of its uplinks are in the same VLAN (VLAN 100); the NIC team on vSwitch1 does not work correctly because its uplinks are not all in the same VLAN.

Load-balancing policies:

The load-balancing algorithm for NIC teams in a vSwitch balances the number of connections, not the amount of traffic. NIC teams on a vSwitch can be configured with one of the following four load-balancing policies:

    • vSwitch port-based load balancing (default)

    The policy setting ensures that traffic from a specific virtual network adapter connected to a virtual switch port will consistently use the same physical network adapter.


NOTE: You can see how this policy does not provide dynamic load balancing, but it does provide redundancy.

This could create a situation in which one physical network adapter is much more heavily utilized than some of the other network adapters in the NIC team.

The vSwitch port-based policy is best used when the number of virtual network adapters is greater than the number of physical network adapters.

    • Source MAC-based load balancing


Note: The two physical switches must still be in the same Layer 2 broadcast domain.


    • IP hash-based load balancing

The IP hash-based policy uses the source and destination IP addresses to calculate a hash. The hash determines the physical network adapter to use for communication.

    Balancing for Large Data Transfers

Although the IP hash-based load-balancing policy can more evenly spread the transfer traffic for a single VM, it does not provide a benefit for large data transfers occurring between the same source and destination systems. Because the source-destination hash will be the same for the duration of the data load, it will flow through only a single physical network adapter.

The IP hash-based policy is a more scalable load-balancing policy that allows VMs to use more than one physical network adapter when communicating with multiple destination hosts.
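The pair-affinity behavior described above can be illustrated with a simplified stand-in for the hash (an assumption for illustration only: the real VMkernel computation differs in detail, but XOR-and-modulo captures the property that a given source/destination pair always maps to the same uplink):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    set -- $(echo "$1" | tr '.' ' ')
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Pick an uplink index for a src/dst pair: hash both addresses together,
# then take the result modulo the number of uplinks in the team.
select_uplink() {
    src=$(ip_to_int "$1"); dst=$(ip_to_int "$2")
    echo $(( (src ^ dst) % $3 ))
}

select_uplink 10.0.0.5 10.0.0.9  2   # the same pair always maps to the same uplink
select_uplink 10.0.0.5 10.0.0.10 2   # a different peer may hash to another uplink
```

A single large transfer keeps one source/destination pair and hence one uplink; many peers spread the connections across the team.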

    Note:

    Unless the physical hardware supports it, a vSwitch with the NIC teaming load-balancing policy set to use the IP-based hash must have all physical network adapters connected to the same physical switch. Some newer switches support link aggregation across physical switches, but otherwise all the physical network adapters will need to connect to the same switch. In addition, the switch must be configured for link aggregation. ESXi supports standard 802.3ad teaming in static (manual) mode.

Another consideration to point out when using the IP hash-based load-balancing policy is that all physical NICs must be set to active instead of configuring some as active and some as passive.


The physical switches must also be configured to support the IP hash-based load-balancing policy.


Note: In the Cisco world this configuration is called EtherChannel; the IEEE standard refers to it as Link Aggregation.

    • Explicit failover order (not a true load-balancing policy; it is geared toward failover and redundancy rather than balancing)

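On a Standard vSwitch these teaming policies can also be set from the command line. A sketch (assumptions: esxcli on an ESXi 5.x host, vSwitch0 with uplinks vmnic0 and vmnic1; verify option names with esxcli network vswitch standard policy failover set --help on your build):

```shell
# Select the load-balancing policy for the team on vSwitch0.
# Values include: portid (default), mac, iphash, explicit.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash

# For iphash, every uplink must be active (no standby adapters),
# and the physical switch side needs a matching static EtherChannel.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1

# Review the resulting teaming policy.
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```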

    • Configuring Failover Detection and Failover Policy

Link-status failure detection:

Failure of an uplink is identified by the link status provided by the physical network adapter. In this case, failure is identified for events like removed cables or power failures on a physical switch.

    Other Ways of Detecting Upstream Failures

Link state tracking (a Cisco feature that propagates upstream link failures down to the server-facing ports)

Beacon-probing failure detection:

    The beacon-probing failover-detection setting, which includes link status as well, sends Ethernet broadcast frames across all physical network adapters in the NIC team. These broadcast frames allow the vSwitch to detect upstream network connection failures and will force failover when Spanning Tree Protocol blocks ports, when ports are configured with the wrong VLAN, or when a switch-to-switch connection has failed.

When a beacon is not returned on a physical network adapter, the vSwitch triggers the failover notice and reroutes the traffic from the failed network adapter through another available network adapter based on the failover policy.


Note: in this scenario the three NICs in the team connect to different physical switches.

Ways to set up the NIC team:

1. Both adapters active: traffic flows through both NICs.

2. Active/standby: traffic flows only through the active NIC; the standby NIC carries no traffic, but standby adapters automatically activate when an active adapter fails.

    • The Failback option controls how ESXi will handle a failed network adapter when it recovers from failure.

    Using Failback with VMkernel Ports and IP-Based Storage

I recommend setting Failback to No for VMkernel ports you've configured for IP-based storage. Otherwise, in the event of a "port-flapping" issue (a situation in which a link may repeatedly go up and down quickly), performance is negatively impacted. Setting Failback to No in this case protects performance in the event of port flapping.

    By default, a vSwitch using NIC teaming has Failback enabled (set to Yes)


    • vSwitch includes a Notify Switches configuration setting

If Notify Switches is enabled, the physical switches are promptly informed whenever any of the following occurs:

    • A VM is powered on (or any other time a client registers itself with the vSwitch)
    • A vMotion occurs
    • A MAC address is changed
    • A NIC team failover or failback has occurred

Turning Off Notify Switches

The Notify Switches option should be set to No when the port group has VMs using Microsoft Network Load Balancing (NLB) in Unicast mode.


Note:

In any of these events, the physical switch is notified of the change using the Reverse Address Resolution Protocol (RARP).

    • VMware recommends taking the following actions to minimize networking delays:
      • Disable Port Aggregation Protocol (PAgP) and Link Aggregation Control Protocol (LACP) on the physical switches.
      • Disable Dynamic Trunking Protocol (DTP) or trunk negotiation.
      • Disable Spanning Tree Protocol (STP).

    Virtual Switches with Cisco Switches

    VMware recommends configuring Cisco devices to use PortFast mode for access ports or PortFast trunk mode for trunk ports.

10. Using and Configuring Traffic Shaping

    Traffic shaping involves the establishment of hard-coded limits for peak bandwidth, average bandwidth, and burst size to reduce a VM’s outbound bandwidth capability.

    Traffic shaping reduces the outbound bandwidth available to a port group

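As a sketch of enabling shaping from the command line (assumptions: esxcli on ESXi 5.x; the bandwidth values are placeholders, with average/peak in Kbps and burst size in KB; confirm units and option names with --help on your build):

```shell
# Cap outbound traffic on vSwitch0: average 10 Mbps, peak 50 Mbps,
# allowing bursts of up to roughly 50 MB of data at the peak rate.
esxcli network vswitch standard policy shaping set \
    --vswitch-name=vSwitch0 --enabled=true \
    --avg-bandwidth=10000 --peak-bandwidth=50000 --burst-size=51200

# Inspect the shaping policy currently in effect.
esxcli network vswitch standard policy shaping get --vswitch-name=vSwitch0
```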

11. Bringing It All Together (a comprehensive example)

    It is true, however, to say that the greater the number of physical network adapters in an ESXi host, the more flexibility you will have in your virtual networking architecture.

    With the use of port groups and VLANs in the vSwitches, even fewer vSwitches and uplinks are required.


Note: the comparison here is between different virtual network design practices.

    This time, you’re able to provide NIC teaming to all the traffic types involved — Management, vMotion, IP storage, and VM traffic — using only a single vSwitch with multiple uplinks.

    Configuration maximums for ESXi networking components (vSphere Standard Switches)


12. Working with vSphere Distributed Switches

    There are a number of similarities between a vSphere Distributed Switch and a Standard vSwitch:

    • Like a vSwitch, a vSphere Distributed Switch provides connectivity for VMs and VMkernel interfaces.
    • Like a vSwitch, a vSphere Distributed Switch leverages physical network adapters as uplinks to provide connectivity to the external physical network.
    • Like a vSwitch, a vSphere Distributed Switch can leverage VLANs for logical network segmentation.

Differences:

    • The biggest difference is that a vSphere Distributed Switch spans multiple servers in a cluster instead of each server having its own set of vSwitches.
    • This greatly reduces complexity in clustered ESXi environments and simplifies the addition of new servers to an ESXi cluster.

Limitations:

vSphere Distributed Switches Require vCenter Server

This may seem obvious, but it's important to point out that because of the shared nature of a vSphere Distributed Switch, vCenter Server is required. That is, you cannot have a vSphere Distributed Switch in an environment that is not being managed by vCenter Server.

    The vSphere Client won’t allow a host to be removed from a dvSwitch if a VM is still attached.(還有虛擬機掛載的話)


VLAN behavior on the dvSwitch in detail:


    The big difference here is that with a dvSwitch, you can apply traffic-shaping policies to both ingress and egress traffic. With vSphere Standard Switches, you could apply traffic-shaping policies only to egress (outbound) traffic:


A new load-balancing policy available on the dvSwitch is Load-Based Teaming; its requirements:

    Load-Based Teaming (LBT) requires that all upstream physical switches be part of the same Layer 2 (broadcast) domain. In addition, VMware recommends that you enable the PortFast or PortFast Trunk option on all physical switch ports connected to a dvSwitch that is using Load-Based Teaming.

The Block All Ports setting is set to either Yes or No. If you set the Block policy to Yes, then all traffic to and from that dvPort group is dropped. Don't set the Block policy to Yes unless you are prepared for network downtime for all VMs attached to that dvPort group! (Blocking all ports is most likely useful for troubleshooting.)


A built-in migration tool can be used to migrate from Standard vSwitches to a dvSwitch.


Using NetFlow on vSphere Distributed Switches (NetFlow is a traffic-analysis tool that gives administrators visibility into the types of traffic passing through the switch)

    Configuring NetFlow is a two-step process:

    1. Configure the NetFlow properties on the dvSwitch.
    2. Enable or disable NetFlow (the default is disabled) on a per–dvPort group basis.

Configuring PVLANs (Private VLANs):

    By using PVLANs, you can isolate hosts from each other while keeping them on the same IP subnet. Figure 5.67 provides a graphical overview of how PVLANs work.

    Private VLANs can help isolate ports on the same IP subnet.


    Note:

    PVLANs are configured in pairs: the primary VLAN and any secondary VLANs. The primary VLAN is considered the downstream VLAN; that is, traffic to the host travels along the primary VLAN. The secondary VLAN is considered the upstream VLAN; that is, traffic from the host travels along the secondary VLAN.

13. Configuring Virtual Switch Security

The security settings include the following three options:

    • Promiscuous Mode (default: Reject)
      1. Promiscuous mode means a machine can receive all of the traffic passing by it, regardless of whether that machine is the destination.
      2. A NIC's promiscuous mode exists to support network analysis.


Note: port groups serving intrusion detection systems generally need Promiscuous Mode enabled.

    • MAC Address Changes (default: Accept)
      1. You can see the 6-byte, randomly generated MAC address for a VM in the configuration file (.vmx) of the VM.
      2. Manually configuring a MAC address in the configuration file of a VM does not work unless the first three bytes are a VMware-provided prefix and the last three bytes are unique. If a non-VMware MAC prefix is entered in the configuration file, the VM will not power on.
      3. A VM's source MAC address is the effective MAC address, which by default matches the initial MAC address configured in the VMX file. The guest OS, however, may change the effective MAC address. (Every VM in effect has two MAC addresses: the initial one recorded in its configuration and the effective one it actually uses to communicate.)
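The "VMware-provided prefix" rule from item 2 can be sketched as a simple check (assumption for illustration: 00:50:56, 00:0C:29, and 00:05:69 are the well-known VMware OUIs; the function and addresses below are hypothetical examples, not part of ESXi):

```shell
# Return success if the MAC address starts with a VMware OUI.
is_vmware_mac() {
    case "$(echo "$1" | tr 'A-F' 'a-f')" in
        00:50:56:*|00:0c:29:*|00:05:69:*) return 0 ;;
        *) return 1 ;;
    esac
}

# A VM whose .vmx carries a non-VMware prefix will refuse to power on.
if is_vmware_mac "00:50:56:12:34:56"; then echo "prefix OK"; fi
if ! is_vmware_mac "02:42:ac:11:00:02"; then echo "VM would not power on"; fi
```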


    1. Both of these security policies are concerned with allowing or denying differences between the initial MAC address in the configuration file and the effective MAC address in the guest OS.
    2. If the MAC Address Changes option is set to Reject, traffic will not be passed through the vSwitch to the VM (incoming) if the initial and the effective MAC addresses do not match. If the Forged Transmits option is set to Reject, traffic will not be passed from the VM to the vSwitch (outgoing) if the initial and the effective MAC addresses do not match. (MAC Address Changes governs incoming traffic; Forged Transmits governs outgoing traffic.)


    • Forged Transmits (default: Accept)

    Note:

    For vSphere Standard Switches, you can apply security policies at the vSwitch or at the port group level. For vSphere Distributed Switches, you apply security policies only at the dvPort group level.

    Default Security profile:


    • Virtual Switch Policies for Microsoft Network Load Balancing

    For VMs that will be configured as part of a Microsoft Network Load Balancing (NLB) cluster set in Unicast mode, the VM port group must allow MAC address changes and forged transmits.
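For Standard vSwitches, this NLB requirement can be applied from the command line as well. A sketch (assumptions: esxcli on ESXi 5.x, applied here at the vSwitch level, though port-group-level overrides also exist; verify option names with --help on your build):

```shell
# Microsoft NLB in Unicast mode needs MAC Address Changes and
# Forged Transmits allowed; Promiscuous Mode can stay rejected.
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=false \
    --allow-mac-change=true \
    --allow-forged-transmits=true

# Show the effective security policy.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```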
