An analysis of spice-server and spice-gtk traffic in non-video scenarios

Server-side encoding:

SPICE image compression currently relies mainly on QUIC, GLZ, and JPEG. QUIC and GLZ are lossless: QUIC is used mainly for photographic content, GLZ for synthetic (artificial) images. JPEG also targets photographic content but is lossy. JPEG can save roughly 50% of the bandwidth, while GLZ saves only about 20%; however, JPEG has a higher CPU cost, so it cannot be used for everything.
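The trade-off above can be sketched as a small selector. This is an illustrative reconstruction, not spice's actual code: the `is_gradual` flag stands in for spice's bitmap "graduality" (photo-likeness) test, and the codec enum names are hypothetical.

```c
#include <assert.h>

/* Hypothetical codec IDs; names illustrative, not spice's enums. */
typedef enum { CODEC_GLZ, CODEC_QUIC, CODEC_JPEG } codec_t;

/* Simplified sketch of the server's choice: photo-like ("gradual")
 * bitmaps go to QUIC (lossless) or JPEG (lossy, when enabled for
 * WAN links); synthetic images go to GLZ. */
static codec_t pick_codec(int is_gradual, int jpeg_enabled)
{
    if (!is_gradual)
        return CODEC_GLZ;             /* artificial image: GLZ */
    return jpeg_enabled ? CODEC_JPEG  /* photo, lossy path */
                        : CODEC_QUIC; /* photo, lossless path */
}
```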

To reduce traffic, the first experiment is to force JPEG compression wherever possible.

In red-worker there is already a switch for JPEG: worker->jpeg_state = reds_get_jpeg_state(reds);

reds_get_jpeg_state(reds) simply returns the value set in spice_server_new(void) in reds.c: reds->config->jpeg_state = SPICE_WAN_COMPRESSION_AUTO;

In theory, changing SPICE_WAN_COMPRESSION_AUTO to SPICE_WAN_COMPRESSION_ALWAYS should be enough.

In display-channel.c, display_channel_update_compression(DisplayChannel *display, DisplayChannelClient *dcc) decides whether to use JPEG based on the setting made earlier in spice_server_new(). Changing it to ALWAYS did not appear to take effect, however, so here we simply hard-code it: display->priv->enable_jpeg = 1;
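The decision behind enable_jpeg can be sketched as follows. This is a hedged reconstruction of the logic, not the exact source: with AUTO, JPEG is only enabled when the client connection is classified as low-bandwidth, which would explain why AUTO appears to do nothing on a LAN and why forcing ALWAYS (or hard-coding enable_jpeg = 1) changes behavior.

```c
#include <assert.h>
#include <stdbool.h>

/* Enum names mirror spice's SPICE_WAN_COMPRESSION_* values;
 * the function itself is an illustrative sketch. */
typedef enum {
    WAN_COMPRESSION_AUTO,
    WAN_COMPRESSION_ALWAYS,
    WAN_COMPRESSION_NEVER
} wan_compression_t;

static bool compute_enable_jpeg(wan_compression_t state,
                                bool client_is_low_bandwidth)
{
    switch (state) {
    case WAN_COMPRESSION_ALWAYS: return true;
    case WAN_COMPRESSION_NEVER:  return false;
    default:                     /* AUTO: depend on link quality */
        return client_is_low_bandwidth;
    }
}
```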

Then, in dcc-send.c, when marshall_qxl_drawable selects the non-video transmission path with JPEG enabled, it takes the lossy path marshall_lossy_qxl_drawable. By far the most frequent command there is red_lossy_marshall_qxl_draw_copy, which in the end still falls through to the lossless red_marshall_qxl_draw_copy; both paths ultimately end up in fill_bits, where about 90% of the calls hit

case SPICE_IMAGE_TYPE_BITMAP:, which contains the key image-encoding function dcc_compress_image. Inside that function, the JPEG setting made earlier is only used when QUIC is the selected compression method. How do you select QUIC? In the spicy window (non-fullscreen), choose Options -> Preferred image compression -> quic. The client then transmits the choice to the server over the protocol; on the server side, in dcc.c, the message handler bool dcc_handle_message(RedChannelClient *rcc, uint16_t type, uint32_t size, void *msg) calls dcc_handle_preferred_compression, which changes the server-side encoding by setting: dcc->priv->image_compression = pc->image_compression;
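The preference handoff described above is essentially a one-field copy. The sketch below uses simplified stand-in types (the real message and DCC structs are larger); only the final assignment mirrors the actual line dcc->priv->image_compression = pc->image_compression.

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for spice's types. */
typedef enum { IMG_COMP_OFF, IMG_COMP_QUIC, IMG_COMP_GLZ, IMG_COMP_LZ } img_comp_t;

typedef struct { img_comp_t image_compression; } dcc_state_t;
typedef struct { img_comp_t image_compression; } msg_pref_comp_t;

/* Analogue of dcc_handle_preferred_compression: copy the client's
 * requested compression into the per-client server state. */
static void handle_preferred_compression(dcc_state_t *dcc,
                                         const msg_pref_comp_t *pc)
{
    dcc->image_compression = pc->image_compression;
}
```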

To reduce traffic in the non-video case, the key is to analyze the code below: when no video is playing, 99% of the data traffic comes from here! This is the draw-copy command mentioned above. These commands can be merged to some degree: within a small time window, a command whose target region is about to be overwritten by a later command in the same region need not be sent, which can potentially reduce bandwidth consumption.

    case SPICE_IMAGE_TYPE_BITMAP: {
        spice_debug("fill_bits_SPICE_IMAGE_TYPE_BITMAP");
        SpiceBitmap *bitmap = &image.u.bitmap;

#ifdef DUMP_BITMAP
        //dump_bitmap_dcc(&simage->u.bitmap, drawable->red_drawable->bbox);
#endif
        /* Images must be added to the cache only after they are compressed
           in order to prevent starvation in the client between pixmap_cache and
           global dictionary (in cases of multiple monitors) */
        if (reds_stream_get_family(red_channel_client_get_stream(rcc)) == AF_UNIX ||
            !dcc_compress_image(dcc, &image, &simage->u.bitmap,
                                drawable, can_lossy, &comp_send_data)) {
            SpicePalette *palette;

            red_display_add_image_to_pixmap_cache(rcc, simage, &image, FALSE);

            *bitmap = simage->u.bitmap;
            bitmap->flags = bitmap->flags & SPICE_BITMAP_FLAGS_TOP_DOWN;

            palette = bitmap->palette;
            dcc_palette_cache_palette(dcc, palette, &bitmap->flags);

            spice_marshall_Image(m, &image,
                                 &bitmap_palette_out, &lzplt_palette_out);
            spice_assert(lzplt_palette_out == NULL);

            if (bitmap_palette_out && palette) {
                spice_marshall_Palette(bitmap_palette_out, palette);
            }

            /* 'drawable' owns this bitmap data, so it must be kept
             * alive until the message is sent. */
            for (unsigned int i = 0; i < bitmap->data->num_chunks; i++) {
                drawable->refs++;
                spice_marshaller_add_by_ref_full(m, bitmap->data->chunk[i].data,
                                                 bitmap->data->chunk[i].len,
                                                 marshaller_unref_drawable, drawable);
            }

            pthread_mutex_unlock(&dcc->priv->pixmap_cache->lock);
            return FILL_BITS_TYPE_BITMAP;
        }

This raises a couple of questions:

    1. How can the bandwidth of actual image transfers be reduced at the source? In other words, is there redundant image data that does not need to be sent?

    2. What kind of mechanism or algorithm does the client-side image cache actually use?
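The merging idea described above can be sketched concretely. All names below are hypothetical, not spice-server API: a queue of pending draw-copy commands is scanned newest-to-oldest, and any older command whose destination rectangle is fully covered by a newer command inside the time window is marked as not-to-send.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { int x1, y1, x2, y2; } rect_t;

typedef struct {
    rect_t bbox;  /* destination rectangle of the draw-copy */
    long   t_ms;  /* enqueue time, milliseconds */
    bool   send;  /* false => suppressed, will not be sent */
} draw_cmd_t;

static bool rect_contains(const rect_t *outer, const rect_t *inner)
{
    return outer->x1 <= inner->x1 && outer->y1 <= inner->y1 &&
           outer->x2 >= inner->x2 && outer->y2 >= inner->y2;
}

/* For each pair (newer, older) within window_ms, drop the older
 * command if the newer one fully covers it. Returns the number of
 * commands suppressed. */
static int suppress_covered(draw_cmd_t *q, int n, long window_ms)
{
    int dropped = 0;
    for (int newer = n - 1; newer > 0; newer--) {
        for (int older = newer - 1; older >= 0; older--) {
            if (!q[older].send)
                continue;
            if (q[newer].t_ms - q[older].t_ms > window_ms)
                continue;
            if (rect_contains(&q[newer].bbox, &q[older].bbox)) {
                q[older].send = false;
                dropped++;
            }
        }
    }
    return dropped;
}
```

A real implementation would also have to handle partial overlap, transparency, and surfaces other than the primary, so this sketch only captures the simplest "fully covered within the window" case.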

Client side:

Initialization happens first.

In channel-display.c, display_handle_surface_create(SpiceChannel *channel, SpiceMsgIn *in) creates a base surface. Inside it, create_canvas does the main canvas creation, which initializes three decoders (GLZ, zlib, and JPEG) and then calls canvas_create_for_data. That in turn involves two functions:

pixman_image_create_bits

canvas_create_common->canvas_base_init...

In addition, a global function in sw_canvas.c performs canvas_base_init_ops, which points the function pointers of a struct declared in canvas_base.h at their real definitions.

    ops->draw_fill = canvas_draw_fill;
    ops->draw_copy = canvas_draw_copy;
    ops->draw_opaque = canvas_draw_opaque;
    ops->copy_bits = canvas_copy_bits;
    ops->draw_blend = canvas_draw_blend;
    ops->draw_blackness = canvas_draw_blackness;
    ops->draw_whiteness = canvas_draw_whiteness;
    ops->draw_invers = canvas_draw_invers;
    ops->draw_transparent = canvas_draw_transparent;
    ops->draw_alpha_blend = canvas_draw_alpha_blend;
    ops->draw_stroke = canvas_draw_stroke;
    ops->draw_rop3 = canvas_draw_rop3;
    ops->draw_composite = canvas_draw_composite;
    ops->group_start = canvas_base_group_start;
    ops->group_end = canvas_base_group_end;
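The ops-table pattern used above can be demonstrated in miniature. The backend functions and names below are illustrative stand-ins, not spice code: an init routine fills a struct of function pointers, and all drawing calls then dispatch through the table, which is how sw_canvas binds the generic canvas interface to its software implementation.

```c
#include <assert.h>

/* A tiny analogue of the canvas ops table. */
typedef struct canvas_ops {
    int (*draw_fill)(int color);
    int (*draw_copy)(int src_id);
} canvas_ops_t;

/* Stand-in "software canvas" backend functions. */
static int sw_draw_fill(int color)  { return color; }
static int sw_draw_copy(int src_id) { return src_id + 1; }

/* Analogue of canvas_base_init_ops: bind the table to one backend. */
static void init_ops(canvas_ops_t *ops)
{
    ops->draw_fill = sw_draw_fill;
    ops->draw_copy = sw_draw_copy;
}
```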

After that the client starts receiving data. The image packets all arrive through the command handlers at the end of channel-display.c. Looking at the one we care about most, display_handle_draw_copy ends at the DRAW(copy); macro, which clearly dispatches to the canvas_draw_copy function in canvas_base.c.

#define DRAW(type) {                                                    \
        display_surface *surface =                                      \
            find_surface(SPICE_DISPLAY_CHANNEL(channel)->priv,          \
                op->base.surface_id);                                   \
        g_return_if_fail(surface != NULL);                              \
        surface->canvas->ops->draw_##type(surface->canvas, &op->base.box, \
                                          &op->base.clip, &op->data);   \
        if (surface->primary) {                                         \
            emit_invalidate(channel, &op->base.box);                    \
        }                                                               \
}

Inside canvas_draw_copy, the call src_image = canvas_get_image(canvas, copy->src_bitmap, FALSE); is what actually decodes the image.

It calls canvas_get_image_internal(canvas, image, want_original, TRUE);, and inside that, surface = get_surface_from_canvas(canvas, image, want_original); contains a series of cases:

    switch (image->descriptor.type) {
    case SPICE_IMAGE_TYPE_QUIC:
        return canvas_get_quic(canvas, image, want_original);

#if defined(SW_CANVAS_CACHE)
    case SPICE_IMAGE_TYPE_LZ_PLT:
    case SPICE_IMAGE_TYPE_LZ_RGB:
        return canvas_get_lz(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_GLZ_RGB:
        return canvas_get_glz(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_ZLIB_GLZ_RGB:
        return canvas_get_zlib_glz_rgb(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_FROM_CACHE_LOSSLESS:
        return canvas->bits_cache->ops->get_lossless(canvas->bits_cache, image->descriptor.id);
#endif

    case SPICE_IMAGE_TYPE_JPEG:
        return canvas_get_jpeg(canvas, image);

    case SPICE_IMAGE_TYPE_JPEG_ALPHA:
        return canvas_get_jpeg_alpha(canvas, image);

    case SPICE_IMAGE_TYPE_LZ4:
#ifdef USE_LZ4
        return canvas_get_lz4(canvas, image);
#else
        spice_warning("Lz4 compression algorithm not supported.\n");
        return NULL;
#endif

    case SPICE_IMAGE_TYPE_FROM_CACHE:
        return canvas->bits_cache->ops->get(canvas->bits_cache, image->descriptor.id);

    case SPICE_IMAGE_TYPE_BITMAP:
        return canvas_get_bits(canvas, &image->u.bitmap, want_original);

    default:
        spice_warn_if_reached();
        return NULL;
    }

Finally, if canvas_get_jpeg is selected, it performs the decode via canvas->jpeg->ops->decode(canvas->jpeg, dest, stride, SPICE_BITMAP_FMT_32BIT);.

With that, the whole flow has essentially been traced end to end.
