Traditional switches forward packets based on a MAC table, so OVS supports the MAC-learning feature as well. But because OVS also supports the OpenFlow protocol as its control plane, it is much more than a Layer 2 switch.
A brief look at OpenFlow
Today's networks expose more and more shortcomings while performance demands keep rising, so researchers have had to push ever more complex functionality into the router architecture: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, and so on. Routers and other switching devices have grown bloated, and the room for further performance improvement keeps shrinking.
The computing field, in sharp contrast to this impasse in networking, has advanced at a remarkable pace. Looking back at its development, the key was that computing found a simple, usable hardware substrate: the x86 instruction set. With such a common hardware substrate in place, software, both applications and operating systems, developed rapidly. Many advocates of redesigning the network architecture argue that networking can replicate this success to solve the problems networks face today. Under this view, the future network looks like this: the underlying data path (switches and routers) is "dumb, simple, and minimal," exposing an open, public API around flow tables, while a controller controls the whole network. Researchers can then freely program against the low-level API on the controller and innovate in the network.
OpenFlow is a powerful driver of this vision of network innovation. An OpenFlow switch splits packet forwarding, formerly handled entirely by the switch/router, between the OpenFlow Switch and a Controller, thereby separating data forwarding from routing control. Through a predefined interface the controller manipulates the flow table inside the OpenFlow switch and thus controls how data is forwarded.
In OpenFlow's design, match fields go beyond Layer 2 (eth_src, eth_dst, eth_type, vlan, etc.): there are Layer 3 matches on source/destination IP and IP protocol type, and even Layer 4 matches on port numbers. With OpenFlow and a controller behind it, OVS is therefore far more than a Layer 2 switch.
An overview of the OpenFlow-related interfaces
static enum ofperr
ofproto_flow_mod_start(struct ofproto *ofproto, struct ofproto_flow_mod *ofm)
    OVS_REQUIRES(ofproto_mutex)
{
    enum ofperr error;

    rule_collection_init(&ofm->old_rules);
    rule_collection_init(&ofm->new_rules);

    switch (ofm->command) {
    case OFPFC_ADD:
        error = add_flow_start(ofproto, ofm);
        break;
    case OFPFC_MODIFY:
        error = modify_flows_start_loose(ofproto, ofm);
        break;
    case OFPFC_MODIFY_STRICT:
        error = modify_flow_start_strict(ofproto, ofm);
        break;
    case OFPFC_DELETE:
        error = delete_flows_start_loose(ofproto, ofm);
        break;
    case OFPFC_DELETE_STRICT:
        error = delete_flow_start_strict(ofproto, ofm);
        break;
    default:
        OVS_NOT_REACHED();
    }

    /* Release resources not needed after start. */
    ofproto_flow_mod_uninit(ofm);

    if (error) {
        rule_collection_destroy(&ofm->old_rules);
        rule_collection_destroy(&ofm->new_rules);
    }
    return error;
}
The ofproto module receives flow_mod messages; whether a flow comes from the controller or is added via the command-line client, this function is called. OVS stores the received flows in the userspace oftable data structure. What follows is the core of Open vSwitch: finding, in userspace, the matching flow entry for packets sent up from the kernel.
An overview of the data structures and optimizations
The design of the data structures and algorithms faces several difficulties:
- OpenFlow's extremely flexible match rules challenge the pipeline implementation: a rule may carry metadata, L2, L3, and L4 match fields, possibly all mixed in one table, and rule priorities must be honored as well
- Under concurrency, how to guarantee that modifying, adding, and deleting entries does not affect the final result
- How to guarantee high performance
To meet these requirements, Open vSwitch ended up with a fairly elaborate set of data structures and safeguards.
The figure above shows how the core userspace data structures relate. The central one is the Classifier; each table in the pipeline corresponds to one Classifier. The Classifier groups the rules of a whole table into subtables, each with its own mask, which is the mask that the rules in that subtable match against. This grouping alone obviously does not improve matching efficiency, so the data structure adds several optimization strategies:
- Subtables are kept in priority order, tracked with a priority vector. Lookup starts from the highest-priority subtable, and once a match is found many lower-priority subtables can be skipped.
- Staged lookup. A subtable can be split further into several hash tables: if a subtable matches on many fields at once, spanning metadata, L2, L3, and L4, it is divided into those four groups. The first stage matches on metadata only; then metadata plus L2; then metadata, L2, L3; and finally metadata, L2, L3, L4. For rules that do match, this adds no efficiency, but for misses it can save a great deal, since the remaining stages are skipped as soon as one stage finds no hit.
- Prefix tracking. Prefix tracking lets the classifier skip subtables whose masks are longer than necessary for the given address, producing better wildcarding for datapath flows. It helps when the flow table contains IP address matches with different prefix lengths. For example, when a flow table contains both a full IP address match and an address-prefix match, the full address match usually causes that field to be un-wildcarded (all non-wildcard bits) for this table, depending on rule priority. In that case every packet with a different address has to be handed up to userspace and generates its own datapath flow. With prefix tracking enabled, userspace can generate a shorter address-prefix match for the problem packets, wildcarding the irrelevant address bits, so that a single rule handles all of them. Many trips up to userspace are thus avoided and overall performance improves. This is purely a performance optimization: packets get exactly the same treatment with or without prefix tracking. Prefix tracking also works together with staged lookup; the trie tracks the number of rules at each address prefix across the whole Classifier. More aggressive table skipping is even possible by maintaining a list of the prefix lengths encountered during trie traversal, or by maintaining separate tries for subsets of rules separated by metadata. Prefix tracking is configured via the OVSDB "Flow_Table" table's "fieldspec" column: "fieldspec" is a string map in which the "prefix" key tells which fields should be used for prefix tracking.
Concurrency support
Clearly, for performance reasons, concurrent access to the pipeline is a must, and in forwarding scenarios there are many readers and few writers. For this workload, OVS combines a version-based concurrency-control mechanism (in the spirit of multi-version concurrency control) with protection modeled on the Linux RCU mechanism. Both are introduced below:
Version-based concurrency support
Classifier lookups are always performed at a specific version, where a version is a natural number. When a new rule is added to the Classifier, it is set to become visible at a specific version. If the version used at insert time is larger than any version currently used by lookups, the rule is temporarily invisible: lookups will not find it, but it is immediately available to Classifier iteration.
Similarly, a rule can be scheduled for deletion at some future version. A rule must not be removed while ongoing lookups may still find it: first the rule is made invisible at a specific version, and only when every lookup uses a version later than the removal version is the rule actually removed from the Classifier.
The Classifier supports versioning for two reasons:
- Atomicity: versioned modifications make a series of Classifier changes one atomic transaction; intermediate states between versions are never visible to lookups. Also, when rules are added for a future version, those changes can be reverted without any visible effect on current lookups.
- Performance: adding or deleting a set of rules can cost time proportional to the number of rules already in the Classifier. When many rules are added in one go, this cost can be avoided as long as the whole batch may remain invisible until the change is complete.
The Linux RCU mechanism
RCU (Read-Copy Update) is named after how it works: read, copy, then update. Readers access an RCU-protected shared data structure without taking any lock. A writer first makes a copy, modifies the copy, and finally uses a callback mechanism to switch the pointer from the old data to the newly modified data at the right moment, namely once every CPU referencing the data has finished its access to the shared structure.
Taking deletion as an example (see the figure above): after the remove operation the data is not actually destroyed; destruction waits until the threads already reading the data have left their read-side critical sections. This interval is called the grace period. Threads arriving during the grace period read the newly modified value.
Note that the RCU mechanism and the versioning mechanism are used together. Because Classifier rules are RCU-protected, destroying a rule after the remove operation must be RCU-postponed. And when versioning is in use, the removal itself must typically also be RCU-postponed (the removal must wait until the given version has passed). In that case the rule's destruction is doubly RCU-deferred: the second ovsrcu_postpone() call, which destroys the rule, is made from the first RCU callback, the one that performs the removal.
The core of userspace forwarding lives in the Classifier, so the detailed comment from its source is attached below for the reader to study.
/* Flow classifier.
*
*
* What?
* =====
*
* A flow classifier holds any number of "rules", each of which specifies
* values to match for some fields or subfields and a priority. Each OpenFlow
* table is implemented as a flow classifier.
*
* The classifier has two primary design goals. The first is obvious: given a
* set of packet headers, as quickly as possible find the highest-priority rule
* that matches those headers. The following section describes the second
* goal.
*
*
* "Un-wildcarding"
* ================
*
* A primary goal of the flow classifier is to produce, as a side effect of a
* packet lookup, a wildcard mask that indicates which bits of the packet
* headers were essential to the classification result. Ideally, a 1-bit in
* any position of this mask means that, if the corresponding bit in the packet
* header were flipped, then the classification result might change. A 0-bit
* means that changing the packet header bit would have no effect. Thus, the
* wildcarded bits are the ones that played no role in the classification
* decision.
*
* Such a wildcard mask is useful with datapaths that support installing flows
* that wildcard fields or subfields. If an OpenFlow lookup for a TCP flow
* does not actually look at the TCP source or destination ports, for example,
* then the switch may install into the datapath a flow that wildcards the port
* numbers, which in turn allows the datapath to handle packets that arrive for
* other TCP source or destination ports without additional help from
* ovs-vswitchd. This is useful for the Open vSwitch software and,
* potentially, for ASIC-based switches as well.
*
* Some properties of the wildcard mask:
*
* - "False 1-bits" are acceptable, that is, setting a bit in the wildcard
* mask to 1 will never cause a packet to be forwarded the wrong way.
* As a corollary, a wildcard mask composed of all 1-bits will always
* yield correct (but often needlessly inefficient) behavior.
*
* - "False 0-bits" can cause problems, so they must be avoided. In the
* extreme case, a mask of all 0-bits is only correct if the classifier
* contains only a single flow that matches all packets.
*
* - 0-bits are desirable because they allow the datapath to act more
* autonomously, relying less on ovs-vswitchd to process flow setups,
* thereby improving performance.
*
* - We don't know a good way to generate wildcard masks with the maximum
* (correct) number of 0-bits. We use various approximations, described
* in later sections.
*
* - Wildcard masks for lookups in a given classifier yield a
* non-overlapping set of rules. More specifically:
*
* Consider a classifier C1 filled with an arbitrary collection of rules
* and an empty classifier C2. Now take a set of packet headers H and
* look it up in C1, yielding a highest-priority matching rule R1 and
* wildcard mask M. Form a new classifier rule R2 out of packet headers
* H and mask M, and add R2 to C2 with a fixed priority. If one were to
* do this for every possible set of packet headers H, then this
* process would not attempt to add any overlapping rules to C2, that is,
* any packet lookup using the rules generated by this process matches at
* most one rule in C2.
*
* During the lookup process, the classifier starts out with a wildcard mask
* that is all 0-bits, that is, fully wildcarded. As lookup proceeds, each
* step tends to add constraints to the wildcard mask, that is, change
* wildcarded 0-bits into exact-match 1-bits. We call this "un-wildcarding".
* A lookup step that examines a particular field must un-wildcard that field.
* In general, un-wildcarding is necessary for correctness but undesirable for
* performance.
*
*
* Basic Classifier Design
* =======================
*
* Suppose that all the rules in a classifier had the same form. For example,
* suppose that they all matched on the source and destination Ethernet address
* and wildcarded all the other fields. Then the obvious way to implement a
* classifier would be a hash table on the source and destination Ethernet
* addresses. If new classification rules came along with a different form,
* you could add a second hash table that hashed on the fields matched in those
* rules. With two hash tables, you look up a given flow in each hash table.
* If there are no matches, the classifier didn't contain a match; if you find
* a match in one of them, that's the result; if you find a match in both of
* them, then the result is the rule with the higher priority.
*
* This is how the classifier works. In a "struct classifier", each form of
* "struct cls_rule" present (based on its ->match.mask) goes into a separate
* "struct cls_subtable". A lookup does a hash lookup in every "struct
* cls_subtable" in the classifier and tracks the highest-priority match that
* it finds. The subtables are kept in a descending priority order according
* to the highest priority rule in each subtable, which allows lookup to skip
* over subtables that can't possibly have a higher-priority match than already
* found. Eliminating lookups through priority ordering aids both classifier
* primary design goals: skipping lookups saves time and avoids un-wildcarding
* fields that those lookups would have examined.
*
* One detail: a classifier can contain multiple rules that are identical other
* than their priority. When this happens, only the highest priority rule out
* of a group of otherwise identical rules is stored directly in the "struct
* cls_subtable", with the other almost-identical rules chained off a linked
* list inside that highest-priority rule.
*
* The following sub-sections describe various optimizations over this simple
* approach.
*
*
* Staged Lookup (Wildcard Optimization)
* -------------------------------------
*
* Subtable lookup is performed in ranges defined for struct flow, starting
* from metadata (registers, in_port, etc.), then L2 header, L3, and finally
* L4 ports. Whenever it is found that there are no matches in the current
* subtable, the rest of the subtable can be skipped.
*
* Staged lookup does not reduce lookup time, and it may increase it, because
* it changes a single hash table lookup into multiple hash table lookups.
* It reduces un-wildcarding significantly in important use cases.
*
*
* Prefix Tracking (Wildcard Optimization)
* ---------------------------------------
*
* Classifier uses prefix trees ("tries") for tracking the used
* address space, enabling skipping classifier tables containing
* longer masks than necessary for the given address. This reduces
* un-wildcarding for datapath flows in parts of the address space
* without host routes, but consulting extra data structures (the
* tries) may slightly increase lookup time.
*
* Trie lookup is interwoven with staged lookup, so that a trie is
* searched only when the configured trie field becomes relevant for
* the lookup. The trie lookup results are retained so that each trie
* is checked at most once for each classifier lookup.
*
* This implementation tracks the number of rules at each address
* prefix for the whole classifier. More aggressive table skipping
* would be possible by maintaining lists of tables that have prefixes
* at the lengths encountered on tree traversal, or by maintaining
* separate tries for subsets of rules separated by metadata fields.
*
* Prefix tracking is configured via OVSDB "Flow_Table" table,
* "fieldspec" column. "fieldspec" is a string map where a "prefix"
* key tells which fields should be used for prefix tracking. The
* value of the "prefix" key is a comma separated list of field names.
*
* There is a maximum number of fields that can be enabled for any one
* flow table. Currently this limit is 3.
*
*
* Partitioning (Lookup Time and Wildcard Optimization)
* ----------------------------------------------------
*
* Suppose that a given classifier is being used to handle multiple stages in a
* pipeline using "resubmit", with metadata (that is, the OpenFlow 1.1+ field
* named "metadata") distinguishing between the different stages. For example,
* metadata value 1 might identify ingress rules, metadata value 2 might
* identify ACLs, and metadata value 3 might identify egress rules. Such a
* classifier is essentially partitioned into multiple sub-classifiers on the
* basis of the metadata value.
*
* The classifier has a special optimization to speed up matching in this
* scenario:
*
* - Each cls_subtable that matches on metadata gets a tag derived from the
* subtable's mask, so that it is likely that each subtable has a unique
* tag. (Duplicate tags have a performance cost but do not affect
* correctness.)
*
* - For each metadata value matched by any cls_rule, the classifier
* constructs a "struct cls_partition" indexed by the metadata value.
* The cls_partition has a 'tags' member whose value is the bitwise-OR of
* the tags of each cls_subtable that contains any rule that matches on
* the cls_partition's metadata value. In other words, struct
* cls_partition associates metadata values with subtables that need to
* be checked with flows with that specific metadata value.
*
* Thus, a flow lookup can start by looking up the partition associated with
* the flow's metadata, and then skip over any cls_subtable whose 'tag' does
* not intersect the partition's 'tags'. (The flow must also be looked up in
* any cls_subtable that doesn't match on metadata. We handle that by giving
* any such cls_subtable TAG_ALL as its 'tags' so that it matches any tag.)
*
* Partitioning saves lookup time by reducing the number of subtable lookups.
* Each eliminated subtable lookup also reduces the amount of un-wildcarding.
*
*
* Classifier Versioning
* =====================
*
* Classifier lookups are always done in a specific classifier version, where
* a version is defined to be a natural number.
*
* When a new rule is added to a classifier, it is set to become visible in a
* specific version. If the version number used at insert time is larger than
* any version number currently used in lookups, the new rule is said to be
* invisible to lookups. This means that lookups won't find the rule, but the
* rule is immediately available to classifier iterations.
*
* Similarly, a rule can be marked as to be deleted in a future version. To
* delete a rule in a way to not remove the rule before all ongoing lookups are
* finished, the rule should be made invisible in a specific version number.
* Then, when all the lookups use a later version number, the rule can be
* actually removed from the classifier.
*
* Classifiers can hold duplicate rules (rules with the same match criteria and
* priority) when at most one of these duplicates is visible in any given
* lookup version. The caller responsible for classifier modifications must
* maintain this invariant.
*
* The classifier supports versioning for two reasons:
*
* 1. Support for versioned modifications makes it possible to perform an
* arbitrary series of classifier changes as one atomic transaction,
* where intermediate versions of the classifier are not visible to any
* lookups. Also, when a rule is added for a future version, or marked
* for removal after the current version, such modifications can be
* reverted without any visible effects to any of the current lookups.
*
* 2. Performance: Adding (or deleting) a large set of rules can, in
* pathological cases, have a cost proportional to the number of rules
* already in the classifier. When multiple rules are being added (or
* deleted) in one go, though, this pathological case cost can be
* typically avoided, as long as it is OK for any new rules to be
* invisible until the batch change is complete.
*
* Note that the classifier_replace() function replaces a rule immediately, and
* is therefore not safe to use with versioning. It is still available for the
* users that do not use versioning.
*
*
* Deferred Publication
* ====================
*
* Removing a large number of rules from a classifier can be costly, as the
* supporting data structures are torn down, in many cases just to be
* re-instantiated right after. In the worst case, as when each rule has a
* different match pattern (mask), the maintenance of the match patterns can
* have cost O(N^2), where N is the number of different match patterns. To
* alleviate this, the classifier supports a "deferred mode", in which changes
* in internal data structures needed for future version lookups may not be
* fully computed yet. The computation is finalized when the deferred mode is
* turned off.
*
* This feature can be used with versioning such that all changes to future
* versions are made in the deferred mode. Then, right before making the new
* version visible to lookups, the deferred mode is turned off so that all the
* data structures are ready for lookups with the new version number.
*
* To use deferred publication, first call classifier_defer(). Then, modify
* the classifier via additions (classifier_insert() with a specific, future
* version number) and deletions (use cls_rule_make_removable_after_version()).
* Then call classifier_publish(), and after that, announce the new version
* number to be used in lookups.
*
*
* Thread-safety
* =============
*
* The classifier may safely be accessed by many reader threads concurrently
* and by a single writer, or by multiple writers when they guarantee mutually
* exclusive access to classifier modifications.
*
* Since the classifier rules are RCU protected, the rule destruction after
* removal from the classifier must be RCU postponed. Also, when versioning is
* used, the rule removal itself needs to be typically RCU postponed. In this
* case the rule destruction is doubly RCU postponed, i.e., the second
* ovsrcu_postpone() call to destruct the rule is called from the first RCU
* callback that removes the rule.
*
* Rules that have never been visible to lookups are an exception to the above
* rule. Such rules can be removed immediately, but their destruction must
* still be RCU postponed, as the rule's visibility attribute may be examined
* parallel to the rule's removal. */