
OVS source code: vswitchd startup (part 21)

程序员文章站 2022-04-30 10:32:58

Bridge reconfiguration

Bridge reconciliation

When vswitchd starts, the bridge module must go through reconfigure so that the configuration actually in effect matches what is in the database.

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct bridge *br;

    /* Destroy "struct bridge"s, "struct port"s, and "struct iface"s according
     * to 'ovs_cfg', with only very minimal configuration otherwise.
     *
     * This is mostly an update to bridge data structures.  Nothing is pushed
     * down to ofproto or lower layers. */
    add_del_bridges(ovs_cfg);
    HMAP_FOR_EACH (br, node, &all_bridges) {
        bridge_collect_wanted_ports(br, &br->wanted_ports);
        bridge_del_ports(br, &br->wanted_ports);
    }
    ...
}

First, add_del_bridges adds and deletes bridges according to the database records in ovs_cfg: bridges that exist in the database but not yet in the process are created, and bridges that exist in the process but no longer appear in the database are destroyed. Then, for each bridge, bridge_del_ports adds and deletes ports in the same way. For the reconfigure that runs at vswitchd startup, the process does not yet contain any bridges or ports, so this pass only performs creations according to ovs_cfg.
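The add/delete reconciliation idea can be sketched in plain C. This is not the real add_del_bridges (which works over an hmap keyed by the database records); the structures, sizes, and names below are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_BR 8

/* Stand-in for the set of bridges currently known to the process. */
struct running_set {
    const char *names[MAX_BR];
    int n;
};

static bool
set_contains(const struct running_set *s, const char *name)
{
    for (int i = 0; i < s->n; i++) {
        if (!strcmp(s->names[i], name)) {
            return true;
        }
    }
    return false;
}

/* Make 'running' match 'wanted': delete extras, then add what is missing. */
static void
reconcile(struct running_set *running,
          const char *const wanted[], int n_wanted)
{
    /* Delete bridges that are no longer in the wanted config. */
    for (int i = 0; i < running->n; ) {
        bool keep = false;
        for (int j = 0; j < n_wanted; j++) {
            if (!strcmp(running->names[i], wanted[j])) {
                keep = true;
                break;
            }
        }
        if (keep) {
            i++;
        } else {
            /* Swap-remove: replace with the last element. */
            running->names[i] = running->names[--running->n];
        }
    }
    /* Add bridges that are wanted but not running. */
    for (int j = 0; j < n_wanted; j++) {
        if (!set_contains(running, wanted[j])) {
            running->names[running->n++] = wanted[j];
        }
    }
}
```

At startup the running set is empty, so only the "add what is missing" half of this loop does any work, as noted above.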

Deleting redundant ofprotos

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct bridge *br;
    ...
    /* Start pushing configuration changes down to the ofproto layer:
     *
     *   - Delete ofprotos that are no longer configured.
     *
     *   - Delete ports that are no longer configured.
     *
     *   - Reconfigure existing ports to their desired configurations, or
     *     delete them if not possible. */
    bridge_delete_ofprotos();
    HMAP_FOR_EACH (br, node, &all_bridges) {
        if (br->ofproto) {
            bridge_delete_or_reconfigure_ports(br);
        }
    }
}

As the comment makes clear, this step reconciles the ofprotos. bridge_delete_ofprotos iterates over all ofprotos; any ofproto that has no corresponding bridge, or whose type does not match its bridge's, is deleted with ofproto_delete.

Once the ofprotos themselves have been reconciled, the ofports recorded on each ofproto must also be deleted where they are no longer configured.

Creating missing ofprotos

The previous step deleted the redundant ofprotos; for newly created bridges, the corresponding ofprotos naturally need to be created.

static void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    ...
    /* Finish pushing configuration changes to the ofproto layer:
     *
     *     - Create ofprotos that are missing.
     *
     *     - Add ports that are missing. */
    HMAP_FOR_EACH_SAFE (br, next, node, &all_bridges) {
        if (!br->ofproto) {
            int error;

            error = ofproto_create(br->name, br->type, &br->ofproto);
            if (error) {
                VLOG_ERR("failed to create bridge %s: %s", br->name,
                         ovs_strerror(error));
                shash_destroy(&br->wanted_ports);
                bridge_destroy(br, true);
            } else {
                /* Trigger storing datapath version. */
                seq_change(connectivity_seq_get());
            }
        }
    } 
}

Missing ofprotos are created by calling ofproto_create. Note that the arguments passed in are the datapath_name and the datapath_type.

int
ofproto_create(const char *datapath_name, const char *datapath_type,
               struct ofproto **ofprotop)
{
    const struct ofproto_class *class;
    struct ofproto *ofproto;
    int error;

    datapath_type = ofproto_normalize_type(datapath_type);
    class = ofproto_class_find__(datapath_type);

    ofproto = class->alloc();
    /* Initialize. */
    ...
    error = ofproto->ofproto_class->construct(ofproto);
    ...
    init_ports(ofproto);
    *ofprotop = ofproto;
    return 0;
}

ofproto_create first finds the class corresponding to datapath_type; it will find ofproto_dpif_class (currently the only class in OVS). It then calls ofproto_dpif_class->alloc. Looking at that implementation, the structure actually created is not a plain ofproto but an ofproto_dpif, which can be regarded as a derived class of ofproto; the alloc interface still returns the standard ofproto. OVS uses this trick in many places: allocate the large structure but return the small embedded one. Callers pass the standard small structure as the parameter, and the provider recovers the large structure internally. For example, calling the construct interface here actually invokes ofproto_dpif_class->construct().
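The "allocate the big struct, return the small one" pattern can be shown in a self-contained sketch. The names ofproto/ofproto_dpif mirror the OVS structures, but every field below is invented for illustration, and the cast helper uses offsetof the same way the real ofproto_dpif_cast() uses CONTAINER_OF:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct ofproto {            /* the generic "base class" */
    char type[16];
};

struct ofproto_dpif {       /* the "derived class" */
    struct ofproto up;      /* base class embedded as a member */
    int backer_id;          /* hypothetical provider-private state */
};

/* alloc(): allocate the big structure, hand back the embedded base. */
static struct ofproto *
dpif_alloc(void)
{
    struct ofproto_dpif *ofproto = calloc(1, sizeof *ofproto);
    return &ofproto->up;
}

/* Recover the big structure from the base pointer, as
 * ofproto_dpif_cast() does via CONTAINER_OF. */
static struct ofproto_dpif *
dpif_cast(struct ofproto *up)
{
    return (struct ofproto_dpif *) ((char *) up
                                    - offsetof(struct ofproto_dpif, up));
}

/* construct(): takes the standard small structure, works on the big one. */
static int
dpif_construct(struct ofproto *ofproto_)
{
    struct ofproto_dpif *ofproto = dpif_cast(ofproto_);
    ofproto->backer_id = 42;    /* stand-in for open_dpif_backer() */
    return 0;
}
```

Because the base is recovered by offset rather than by assuming it is the first member, the pattern works no matter where the base struct is embedded.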

static int
construct(struct ofproto *ofproto_)
{
    struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_);
    int error;

    error = open_dpif_backer(ofproto->up.type, &ofproto->backer);
    ...
}

Here, open_dpif_backer is called to open a backer and store it in ofproto->backer. Note that the argument is the type: each type has exactly one backer, so no matter how many ofprotos of a given type exist, there is always only one backer.
[Figure: backer-related data structures]
The figure above shows the backer-related data structures: dpif_backer, udpif, and dpif. Since there is one backer per type, there is likewise one udpif and one dpif per type. Note that dpif is only a base class; the structures actually used are dpif_netdev and dpif_netlink, corresponding to the two types OVS currently supports, netdev and system. open_dpif_backer is not expanded in this article; interested readers can look into it themselves.
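The "one backer per type" sharing can be sketched as a lookup-or-create over a table keyed by the type string. In OVS this is a shash (all_dpif_backers) and the backer carries a reference count; the tiny fixed array and field names below are illustrative only:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct dpif_backer {
    char type[16];
    int refcount;
};

static struct dpif_backer backers[4];   /* stand-in for all_dpif_backers */
static int n_backers;

/* Return the backer for 'type', creating it on first use.  Every later
 * ofproto of the same type just takes a reference to the shared backer. */
static struct dpif_backer *
open_backer(const char *type)
{
    for (int i = 0; i < n_backers; i++) {
        if (!strcmp(backers[i].type, type)) {
            backers[i].refcount++;      /* reuse the shared backer */
            return &backers[i];
        }
    }
    struct dpif_backer *b = &backers[n_backers++];
    snprintf(b->type, sizeof b->type, "%s", type);
    b->refcount = 1;                    /* first ofproto of this type */
    return b;
}
```

So two bridges of type netdev share one backer, while a bridge of type system gets a separate one.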

Back in bridge_reconfigure, each bridge then goes through a great deal of further configuration. There is far too much of it to cover here, so it is likewise not expanded:

bridge_configure_mirrors(br);
bridge_configure_forward_bpdu(br);
...

Finally, bridge_reconfigure calls bridge_run__. This was mentioned earlier: back then, with vswitchd freshly started, it had no real effect, but now things are different.

static void
bridge_run__(void)
{
    struct bridge *br;
    struct sset types;
    const char *type;

    /* Let each datapath type do the work that it needs to do. */
    sset_init(&types);
    ofproto_enumerate_types(&types);
    SSET_FOR_EACH (type, &types) {
        ofproto_type_run(type);
    }
    sset_destroy(&types);

    /* Let each bridge do the work that it needs to do. */
    HMAP_FOR_EACH (br, node, &all_bridges) {
        ofproto_run(br->ofproto);
    } 
}

As you can see, the key steps are calling ofproto_type_run for each supported type and ofproto_run for each bridge. Let's look at them one at a time.

ofproto_type_run eventually calls ofproto_dpif_class->type_run():

static int
type_run(const char *type)
{
    struct dpif_backer *backer;

    backer = shash_find_data(&all_dpif_backers, type);

    if (dpif_run(backer->dpif)) {
        backer->need_revalidate = REV_RECONFIGURE;
    }

    backer->recv_set_enable = true;
    dpif_recv_set(backer->dpif, backer->recv_set_enable);

    udpif_set_threads(backer->udpif, n_handlers, n_revalidators);
    ...
}

Recall from the flow subsystem: when the kernel datapath receives a packet and finds no matching flow-table entry, it sends the packet up to the user-space vswitchd process. The most important job of type_run here is to start n_handlers receiver threads that receive these messages from the kernel datapath.
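The handler-thread idea can be sketched with plain pthreads. This is not the real udpif code: there each handler blocks waiting for upcalls on its own channels, whereas here the "upcalls" are just a shared counter drained under a mutex, and every name below is illustrative.

```c
#include <pthread.h>

#define N_HANDLERS 4            /* stand-in for n_handlers */

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int pending_upcalls = 100;   /* stand-in for queued kernel upcalls */
static int handled;

/* Each handler thread loops, "receiving" one upcall at a time. */
static void *
handler_main(void *arg)
{
    (void) arg;
    for (;;) {
        pthread_mutex_lock(&mutex);
        if (pending_upcalls == 0) {
            pthread_mutex_unlock(&mutex);
            return NULL;
        }
        pending_upcalls--;          /* "receive" one upcall */
        handled++;                  /* stand-in for flow install/execute */
        pthread_mutex_unlock(&mutex);
    }
}

/* Start N_HANDLERS threads and wait for them to drain the queue. */
static int
run_handlers(void)
{
    pthread_t threads[N_HANDLERS];

    for (int i = 0; i < N_HANDLERS; i++) {
        pthread_create(&threads[i], NULL, handler_main, NULL);
    }
    for (int i = 0; i < N_HANDLERS; i++) {
        pthread_join(threads[i], NULL);
    }
    return handled;
}
```

The point of running several handlers is throughput: upcall processing (flow translation and installation) can proceed in parallel across threads.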

The other call, ofproto_run, drives the other protocols running on the bridge; we will look at the details when we need them.

Conclusion

This article and the previous one described the startup flow of the vswitchd process in OVS. Many side branches were omitted, but the main trunk has been preserved.

Original article: https://blog.csdn.net/chenmo187J3X1/article/details/83304845
