OVS Source Code -- vswitchd Startup (21)

Bridge reconfiguration

Reconciling bridges

When vswitchd starts, the bridge module must run reconfigure so that the configuration actually in effect matches what is stored in the database.

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct bridge *br;

    /* Destroy "struct bridge"s, "struct port"s, and "struct iface"s according
     * to 'ovs_cfg', with only very minimal configuration otherwise.
     *
     * This is mostly an update to bridge data structures. Nothing is pushed
     * down to ofproto or lower layers. */
    add_del_bridges(ovs_cfg);
    HMAP_FOR_EACH (br, node, &all_bridges) {
        bridge_collect_wanted_ports(br, &br->wanted_ports);
        bridge_del_ports(br, &br->wanted_ports);
    }
    .....
}

First, add_del_bridges adds and deletes bridges according to the database record ovs_cfg: it adds bridges that exist in the database but not yet in the process, and deletes bridges that exist in the process but no longer in the database. It then iterates over each bridge and calls bridge_del_ports to add and delete ports in the same way. In practice, for the reconfigure performed at vswitchd startup, the process starts out with no bridges or ports at all, so this step only creates objects according to ovs_cfg.

Deleting stale ofprotos

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct bridge *br;
    ......
    /* Start pushing configuration changes down to the ofproto layer:
     *
     *   - Delete ofprotos that are no longer configured.
     *
     *   - Delete ports that are no longer configured.
     *
     *   - Reconfigure existing ports to their desired configurations, or
     *     delete them if not possible. */
    bridge_delete_ofprotos();
    HMAP_FOR_EACH (br, node, &all_bridges) {
        if (br->ofproto) {
            bridge_delete_or_reconfigure_ports(br);
        }
    }
}

As the comment makes clear, this step reconciles the ofprotos. bridge_delete_ofprotos iterates over all ofprotos; any ofproto that no longer has a corresponding bridge, or whose type does not match its bridge's, is deleted via ofproto_delete.

Once the ofprotos themselves are reconciled, the stale ofports recorded on each remaining ofproto must be deleted as well.

Creating missing ofprotos

The previous step removed the stale ofprotos; newly created bridges, conversely, need corresponding ofprotos created for them.

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct bridge *br, *next;
    ......
    /* Finish pushing configuration changes to the ofproto layer:
     *
     *     - Create ofprotos that are missing.
     *
     *     - Add ports that are missing. */
    HMAP_FOR_EACH_SAFE (br, next, node, &all_bridges) {
        if (!br->ofproto) {
            int error;

            error = ofproto_create(br->name, br->type, &br->ofproto);
            if (error) {
                VLOG_ERR("failed to create bridge %s: %s", br->name,
                         ovs_strerror(error));
                shash_destroy(&br->wanted_ports);
                bridge_destroy(br, true);
            } else {
                /* Trigger storing datapath version. */
                seq_change(connectivity_seq_get());
            }
        }
    }
}

For each missing ofproto, ofproto_create is called to create it. Note that the arguments passed in are the datapath_name and datapath_type.

int
ofproto_create(const char *datapath_name, const char *datapath_type,
               struct ofproto **ofprotop)
{
    const struct ofproto_class *class;
    struct ofproto *ofproto;
    int error;

    datapath_type = ofproto_normalize_type(datapath_type);
    class = ofproto_class_find__(datapath_type);

    ofproto = class->alloc();
    /* Initialize. */
    ......
    error = ofproto->ofproto_class->construct(ofproto);
    ......
    init_ports(ofproto);
    *ofprotop = ofproto;
    return 0;
}

ofproto_create first finds the class for the given datapath_type; this will find ofproto_dpif_class (currently the only class in ovs). It then calls ofproto_dpif_class->alloc. Looking at its implementation, the structure actually allocated is not a plain ofproto but an ofproto_dpif; you can think of ofproto_dpif as a derived class of ofproto, while the alloc interface still returns the standard ofproto. The ovs code uses this trick in many places: allocate the big structure, return the small embedded one. Upper layers pass the standard small structure around as a parameter, and the implementation converts it back to the big structure internally. For example, the construct interface invoked here actually resolves to ofproto_dpif_class->construct().

static int
construct(struct ofproto *ofproto_)
{
    struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_);
    int error;

    error = open_dpif_backer(ofproto->up.type, &ofproto->backer);
    .....
}

Here, open_dpif_backer is called to open a backer, which is stored in ofproto->backer. Note that the argument is the type: each type has exactly one backer, i.e. no matter how many ofprotos of a given type exist, there is only ever one backer for that type.
[Figure: backer-related data structures (dpif_backer, udpif, dpif)]
The figure above shows the backer-related data structures: dpif_backer, udpif, and dpif. Since there is one backer per type, there is likewise one udpif and one dpif per type. Note that dpif is only a base class; the structures actually used are dpif_netdev and dpif_netlink, corresponding to the two types ovs currently supports, netdev and system. open_dpif_backer is not expanded in this article; interested readers can look it up themselves.

Back in bridge_reconfigure, you can see that it also performs a great deal of per-bridge configuration. There is too much of it to cover, so it is likewise not expanded here:

bridge_configure_mirrors(br);
bridge_configure_forward_bpdu(br);
.....

Finally, bridge_reconfigure calls bridge_run__. This was mentioned earlier: at that point vswitchd had only just started, so the call had no real effect, but now things are different.

static void
bridge_run__(void)
{
    struct bridge *br;
    struct sset types;
    const char *type;

    /* Let each datapath type do the work that it needs to do. */
    sset_init(&types);
    ofproto_enumerate_types(&types);
    SSET_FOR_EACH (type, &types) {
        ofproto_type_run(type);
    }
    sset_destroy(&types);

    /* Let each bridge do the work that it needs to do. */
    HMAP_FOR_EACH (br, node, &all_bridges) {
        ofproto_run(br->ofproto);
    }
}

As you can see, the key is to call ofproto_type_run for each supported type and ofproto_run for each bridge. Let's look at them one by one.

ofproto_type_run ultimately calls ofproto_dpif_class->type_run():

static int
type_run(const char *type)
{
    struct dpif_backer *backer;

    backer = shash_find_data(&all_dpif_backers, type);

    if (dpif_run(backer->dpif)) {
        backer->need_revalidate = REV_RECONFIGURE;
    }

    backer->recv_set_enable = true;
    dpif_recv_set(backer->dpif, backer->recv_set_enable);

    udpif_set_threads(backer->udpif, n_handlers, n_revalidators);
    ....
}

Recall from the flow subsystem: when the kernel datapath receives a packet, it looks up the flow table, and if no matching entry exists, it sends the packet up to the user-space vswitchd process. The most important job of type_run here is to start the n_handlers receiver threads that receive these messages from the kernel datapath.

The other call, ofproto_run, runs the other protocols active on the bridge; we will look at the details when they come up.

Conclusion

This article and the previous part described the startup flow of the vswitchd process in ovs. Many side branches were omitted, but the main trunk is preserved.

Original article: https://blog.csdn.net/chenmo187J3X1/article/details/83304845
