An analysis of spice-server and spice-gtk transmission traffic in the non-video case

Server-side encoding:

SPICE image compression currently relies mainly on quic, glz and jpeg. quic and glz are lossless algorithms: quic is used mostly for photographic images, glz for synthetic (computer-drawn) images. jpeg also targets photographic content but is lossy. jpeg can save about 50% of the bandwidth while glz saves only about 20%, but jpeg brings noticeably higher overhead, so not everything can be compressed with jpeg.

To reduce traffic, the first experiment is to use jpeg image compression wherever possible.

In red-worker, whether jpeg is used can be selected: worker->jpeg_state = reds_get_jpeg_state(reds);

reds_get_jpeg_state(reds) simply returns what spice_server_new(void) in reds.c set: reds->config->jpeg_state = SPICE_WAN_COMPRESSION_AUTO;

In theory it should be enough to change SPICE_WAN_COMPRESSION_AUTO to SPICE_WAN_COMPRESSION_ALWAYS.
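A minimal sketch of that change (the field and enum names are the ones quoted above from reds.c; the public setter shown as an alternative is assumed to be the one declared in spice.h and should be checked against the spice-server version in use):

    /* reds.c, spice_server_new(): force JPEG instead of letting the server
     * decide from the link type.  Original value: SPICE_WAN_COMPRESSION_AUTO. */
    reds->config->jpeg_state = SPICE_WAN_COMPRESSION_ALWAYS;

    /* Alternative without patching the source: an embedder can request the
     * same behaviour through the public API (assumed to be available):
     *     spice_server_set_jpeg_compression(server, SPICE_WAN_COMPRESSION_ALWAYS);
     */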

In display-channel.c, display_channel_update_compression(DisplayChannel *display, DisplayChannelClient *dcc) decides whether jpeg is used based on the setting made earlier in spice_server_new. Changing that setting to ALWAYS did not seem to take effect here, however, so for this experiment we simply hard-code it: display->priv->enable_jpeg = 1;
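For reference, the hard-coded override looks roughly like this inside display_channel_update_compression() (a paraphrase; the upstream logic that normally derives the value from jpeg_state is only summarised in the comment):

    void display_channel_update_compression(DisplayChannel *display,
                                            DisplayChannelClient *dcc)
    {
        /* Upstream derives enable_jpeg from the client's jpeg_state
         * (AUTO: only on slow/WAN links, ALWAYS: on, NEVER: off).
         * For this experiment we simply force lossy JPEG on: */
        display->priv->enable_jpeg = 1;
        /* ... rest of the function unchanged ... */
    }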

After that, in dcc-send.c, when marshall_qxl_drawable takes the non-video transmission path it picks jpeg, i.e. the lossy path marshall_lossy_qxl_drawable. The command handled most often there is red_lossy_marshall_qxl_draw_copy, which eventually falls back to the lossless red_marshall_qxl_draw_copy; both end up in fill_bits, and roughly 90% of the calls there hit the SPICE_IMAGE_TYPE_BITMAP case.
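The call chain just described, summarised (a simplified, paraphrased view of dcc-send.c, not the exact upstream control flow):

    /*
     * marshall_qxl_drawable()                        dcc-send.c
     *   -> marshall_lossy_qxl_drawable()             taken when enable_jpeg is set
     *        -> red_lossy_marshall_qxl_draw_copy()   by far the most frequent command
     *             -> red_marshall_qxl_draw_copy()    shared with the lossless path
     *                  -> fill_bits()                marshals the actual image data
     */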

That case SPICE_IMAGE_TYPE_BITMAP: branch contains the key image-encoding function dcc_compress_image, and inside it the jpeg encoder set up above is only reached when the quic compression method is selected. How is quic selected? In the spicy window (non-fullscreen mode) choose Options -> Preferred image compression -> quic. The client sends this choice to the server over the protocol; on the server side it is handled in dcc.c by bool dcc_handle_message(RedChannelClient *rcc, uint16_t type, uint32_t size, void *msg), where dcc_handle_preferred_compression changes the server-side encoding, ultimately doing: dcc->priv->image_compression = pc->image_compression;
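Schematically, the gating inside dcc_compress_image looks like this (a paraphrased sketch: the helper names bitmap_is_photo_like, dcc_compress_image_jpeg and dcc_compress_image_quic, and the exact "photo-like bitmap" test, vary between spice-server versions and are assumptions here):

    /* Sketch of how the client-selected image_compression gates JPEG use. */
    switch (dcc->priv->image_compression) {
    case SPICE_IMAGE_COMPRESSION_QUIC:
        /* Only on the quic branch can the enable_jpeg flag set earlier take
         * effect: lossy JPEG for photo-like RGB bitmaps, QUIC otherwise. */
        if (can_lossy && display->priv->enable_jpeg &&
            bitmap_is_photo_like(&simage->u.bitmap)) {           /* assumed helper */
            return dcc_compress_image_jpeg(dcc, drawable,
                                           &simage->u.bitmap,
                                           &comp_send_data);     /* assumed helper */
        }
        return dcc_compress_image_quic(dcc, &simage->u.bitmap,
                                       &comp_send_data);         /* assumed helper */
    default:
        /* The glz / lz / lz4 branches never use the JPEG encoder. */
        break;
    }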

To lower the traffic in the non-video case, the most important thing is to analyse the code below: when no video is playing, 99% of the data volume comes from here! It is exactly the draw copy command discussed above. These commands could be merged to some degree: within a small time window, a command whose target region is about to be completely overwritten by a later command in the same region would simply not be sent, which could reduce bandwidth consumption (a hypothetical sketch of this idea follows, and the fill_bits excerpt comes after it).
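A hypothetical sketch of that merging idea. None of the names below exist in spice-server; PendingDraw, rect_contains, coalesce_draw_copies and COALESCE_WINDOW_US are purely illustrative, and a real implementation would also have to verify that the later command is opaque and not clipped before dropping anything:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { int32_t left, top, right, bottom; } Rect;

    typedef struct PendingDraw {
        struct PendingDraw *next;
        uint32_t surface_id;
        Rect bbox;            /* destination rectangle of the draw_copy */
        uint64_t queued_us;   /* time the command entered the queue */
        bool drop;            /* set when a later command fully covers it */
    } PendingDraw;

    #define COALESCE_WINDOW_US 20000   /* ~20 ms holding window */

    static bool rect_contains(const Rect *outer, const Rect *inner)
    {
        return outer->left <= inner->left && outer->top <= inner->top &&
               outer->right >= inner->right && outer->bottom >= inner->bottom;
    }

    /* Mark queued draw_copy commands that will be fully overdrawn by a later
     * command on the same surface within the window; the send loop would then
     * skip the marked ones instead of marshalling them. */
    static void coalesce_draw_copies(PendingDraw *head, uint64_t now_us)
    {
        for (PendingDraw *cur = head; cur != NULL; cur = cur->next) {
            if (now_us - cur->queued_us > COALESCE_WINDOW_US)
                continue;   /* too old to hold back, must be sent */
            for (PendingDraw *later = cur->next; later != NULL; later = later->next) {
                if (later->surface_id == cur->surface_id &&
                    rect_contains(&later->bbox, &cur->bbox)) {
                    cur->drop = true;
                    break;
                }
            }
        }
    }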

case SPICE_IMAGE_TYPE_BITMAP: {
        spice_debug("fill_bits_SPICE_IMAGE_TYPE_BITMAP");
        SpiceBitmap *bitmap = &image.u.bitmap;
#ifdef DUMP_BITMAP
        //dump_bitmap_dcc(&simage->u.bitmap, drawable->red_drawable->bbox);
#endif
        /* Images must be added to the cache only after they are compressed
           in order to prevent starvation in the client between pixmap_cache and
           global dictionary (in cases of multiple monitors) */
        if (reds_stream_get_family(red_channel_client_get_stream(rcc)) == AF_UNIX ||
            !dcc_compress_image(dcc, &image, &simage->u.bitmap,
                                drawable, can_lossy, &comp_send_data)) {
            SpicePalette *palette;

            red_display_add_image_to_pixmap_cache(rcc, simage, &image, FALSE);

            *bitmap = simage->u.bitmap;
            bitmap->flags = bitmap->flags & SPICE_BITMAP_FLAGS_TOP_DOWN;

            palette = bitmap->palette;
            dcc_palette_cache_palette(dcc, palette, &bitmap->flags);

            spice_marshall_Image(m, &image,
                                 &bitmap_palette_out, &lzplt_palette_out);
            spice_assert(lzplt_palette_out == NULL);

            if (bitmap_palette_out && palette) {
                spice_marshall_Palette(bitmap_palette_out, palette);
            }

            /* 'drawable' owns this bitmap data, so it must be kept
             * alive until the message is sent. */
            for (unsigned int i = 0; i < bitmap->data->num_chunks; i++) {
                drawable->refs++;
                spice_marshaller_add_by_ref_full(m, bitmap->data->chunk[i].data,
                                                 bitmap->data->chunk[i].len,
                                                 marshaller_unref_drawable, drawable);
            }

            pthread_mutex_unlock(&dcc->priv->pixmap_cache->lock);
            return FILL_BITS_TYPE_BITMAP;
        }

This raises a few questions:

    1. How can the bandwidth of the actual image transfer be reduced at the source? In other words, is there redundant image data that need not be sent at all?

    2. What exactly is the mechanism or algorithm behind the local image cache? (A rough conceptual sketch follows below.)
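On question 2, the pixmap cache seen in the excerpt above works roughly as follows: the server and client keep caches that are synchronised by image id, so when an id is already cached the server can send a tiny SPICE_IMAGE_TYPE_FROM_CACHE reference instead of pixel data. A conceptual sketch only; pixmap_cache_hit and pixmap_cache_add are illustrative names, not the exact spice-server functions:

    /* Conceptual sketch of the server-side decision (illustrative names). */
    if (pixmap_cache_hit(dcc->priv->pixmap_cache, simage->descriptor.id)) {
        /* Client already holds the pixels: send only a cache reference. */
        image.descriptor.type = SPICE_IMAGE_TYPE_FROM_CACHE;
    } else {
        /* Send the (compressed) pixels and remember the id on both sides. */
        image.descriptor.type = SPICE_IMAGE_TYPE_BITMAP;
        pixmap_cache_add(dcc->priv->pixmap_cache, simage->descriptor.id);
    }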

Client-side data:

Initialization comes first.

In channel-display.c, display_handle_surface_create(SpiceChannel *channel, SpiceMsgIn *in) creates the base display surface. Inside it, create_canvas does the main canvas creation; while creating the canvas, the glz, zlib and jpeg decoders are initialised, and canvas_create_for_data is then called to create it. Two functions are involved there:

pixman_image_create_bits

canvas_create_common->canvas_base_init...

In addition, a global function in sw_canvas.c performs canvas_base_init_ops, which points the function pointers of a struct declared in canvas_base.h at their real definitions:

    ops->draw_fill = canvas_draw_fill;
    ops->draw_copy = canvas_draw_copy;
    ops->draw_opaque = canvas_draw_opaque;
    ops->copy_bits = canvas_copy_bits;
    ops->draw_blend = canvas_draw_blend;
    ops->draw_blackness = canvas_draw_blackness;
    ops->draw_whiteness = canvas_draw_whiteness;
    ops->draw_invers = canvas_draw_invers;
    ops->draw_transparent = canvas_draw_transparent;
    ops->draw_alpha_blend = canvas_draw_alpha_blend;
    ops->draw_stroke = canvas_draw_stroke;
    ops->draw_rop3 = canvas_draw_rop3;
    ops->draw_composite = canvas_draw_composite;
    ops->group_start = canvas_base_group_start;
    ops->group_end = canvas_base_group_end;

After that, data starts arriving. The image data packets all come from the command handlers at the end of channel-display.c. Looking at the one we care about most, display_handle_draw_copy, it ends in the DRAW(copy); macro, and that macro clearly dispatches to the canvas_draw_copy function in canvas_base.c.

#define DRAW(type) {                                                    \
        display_surface *surface =                                      \
            find_surface(SPICE_DISPLAY_CHANNEL(channel)->priv,          \
                op->base.surface_id);                                   \
        g_return_if_fail(surface != NULL);                              \
        surface->canvas->ops->draw_##type(surface->canvas, &op->base.box, \
                                          &op->base.clip, &op->data);   \
        if (surface->primary) {                                         \
            emit_invalidate(channel, &op->base.box);                    \
        }                                                               \
}
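For context, the handler that uses it looks roughly like this (paraphrased from channel-display.c; spice_msg_in_parsed() is assumed here to be the spice-gtk accessor for the parsed message body):

    static void display_handle_draw_copy(SpiceChannel *channel, SpiceMsgIn *in)
    {
        SpiceMsgDisplayDrawCopy *op = spice_msg_in_parsed(in);
        DRAW(copy);   /* expands to surface->canvas->ops->draw_copy(...) */
    }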

Inside canvas_draw_copy there is src_image = canvas_get_image(canvas, copy->src_bitmap, FALSE); and this is where the image is actually decoded. The call goes through:

canvas_get_image_internal(canvas, image, want_original, TRUE);

surface = get_surface_from_canvas(canvas, image, want_original); — and this function contains a series of cases:

switch (image->descriptor.type) {
    case SPICE_IMAGE_TYPE_QUIC:
        return canvas_get_quic(canvas, image, want_original);

#if defined(SW_CANVAS_CACHE)
    case SPICE_IMAGE_TYPE_LZ_PLT:
    case SPICE_IMAGE_TYPE_LZ_RGB:
        return canvas_get_lz(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_GLZ_RGB:
        return canvas_get_glz(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_ZLIB_GLZ_RGB:
        return canvas_get_zlib_glz_rgb(canvas, image, want_original);

    case SPICE_IMAGE_TYPE_FROM_CACHE_LOSSLESS:
        return canvas->bits_cache->ops->get_lossless(canvas->bits_cache, image->descriptor.id);

#endif
    case SPICE_IMAGE_TYPE_JPEG:
        return canvas_get_jpeg(canvas, image);

    case SPICE_IMAGE_TYPE_JPEG_ALPHA:
        return canvas_get_jpeg_alpha(canvas, image);

    case SPICE_IMAGE_TYPE_LZ4:
#ifdef USE_LZ4
        return canvas_get_lz4(canvas, image);
#else
        spice_warning("Lz4 compression algorithm not supported.\n");
        return NULL;
#endif
    case SPICE_IMAGE_TYPE_FROM_CACHE:
        return canvas->bits_cache->ops->get(canvas->bits_cache, image->descriptor.id);

    case SPICE_IMAGE_TYPE_BITMAP:
        return canvas_get_bits(canvas, &image->u.bitmap, want_original);

    default:
        spice_warn_if_reached();
        return NULL;
    }

Finally, if canvas_get_jpeg is selected, it performs the decode via canvas->jpeg->ops->decode(canvas->jpeg, dest, stride, SPICE_BITMAP_FMT_32BIT); and with that the whole path has essentially been traced end to end.
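For completeness, the rough shape of canvas_get_jpeg() in canvas_base.c (simplified: error handling and the multi-chunk case are omitted, and the destination allocation is shown with plain pixman calls rather than the internal helper the upstream code actually uses):

    static pixman_image_t *canvas_get_jpeg(CanvasBase *canvas, SpiceImage *image)
    {
        int width, height;

        /* Parse the JPEG header to learn the output dimensions. */
        canvas->jpeg->ops->begin_decode(canvas->jpeg,
                                        image->u.jpeg.data->chunk[0].data,
                                        image->u.jpeg.data->chunk[0].len,
                                        &width, &height);

        /* Allocate the destination image and decode straight into it. */
        pixman_image_t *surface = pixman_image_create_bits(PIXMAN_x8r8g8b8,
                                                           width, height, NULL, 0);
        uint8_t *dest = (uint8_t *)pixman_image_get_data(surface);
        int stride = pixman_image_get_stride(surface);

        canvas->jpeg->ops->decode(canvas->jpeg, dest, stride, SPICE_BITMAP_FMT_32BIT);
        return surface;
    }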
