https://blog.csdn.net/liumangxiong/article/details/10279089
Usage:
/**
 * generic_writepages - walk the list of dirty pages of the given address space and writepage() all of them.
 * @mapping: address space structure to write
 * @wbc: subtract the number of written pages from *@wbc->nr_to_write
 *
 * This is a library function, which implements the writepages()
 * address_space_operation.
 *
 * Return: %0 on success, negative error code otherwise
 */
int generic_writepages(struct address_space *mapping,
		       struct writeback_control *wbc)
{
	struct blk_plug plug;
	int ret;

	/* deal with chardevs and other special file */
	if (!mapping->a_ops->writepage)
		return 0;

	blk_start_plug(&plug);
	ret = write_cache_pages(mapping, wbc, __writepage, mapping);
	blk_finish_plug(&plug);

	return ret;
}
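For context, the __writepage() callback handed to write_cache_pages() simply invokes the filesystem's writepage() for each dirty page and records any error on the mapping. In mm/page-writeback.c of this era it reads roughly:

static int __writepage(struct page *page, struct writeback_control *wbc,
		       void *data)
{
	struct address_space *mapping = data;
	int ret = mapping->a_ops->writepage(page, wbc);

	/* remember any writeback error on the mapping */
	mapping_set_error(mapping, ret);
	return ret;
}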
blk_plug sets up a per-task queue that buffers small, fragmented I/O requests so that adjacent sequential requests can be merged into one large request. After merging, the requests are moved in a batch from the per-task list to the device request queue, which reduces contention on the device request queue lock and thus improves efficiency.
Using blk_plug is simple:
1. Call blk_start_plug to enable request merging (plugging) for the current thread.
2. Call blk_finish_plug to turn it off again.
How the requests are merged and dispatched is handled entirely by the kernel (see the sketch below).
So when is blk_plug appropriate? Since it exists specifically to optimize request merging, it is best suited to streams of small, contiguous requests.
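As a minimal sketch of the pattern (the helper function and its arguments are hypothetical; blk_start_plug(), blk_finish_plug() and the Linux 3.x-era submit_bio(int rw, struct bio *) are the real kernel APIs):

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Hypothetical helper: issue many small sequential reads under one plug.
 * Bios submitted between blk_start_plug() and blk_finish_plug() are
 * buffered on the per-task plug list, where the block layer can merge
 * them before moving them to the device request queue in one batch. */
static void issue_sequential_reads(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		submit_bio(READ, bios[i]);	/* buffered on the plug list */
	blk_finish_plug(&plug);		/* merge, then dispatch in a batch */
}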
Below are the results of one test.
Test environment:
SATA controller: Intel 82801JI
OS: Linux 3.6, Red Hat
RAID5: 4 x ST31000524NS drives
Without blk_plug:
Total (8,16):
 Reads Queued:     309811,  1239MiB    Writes Queued:         0,     0KiB
 Read Dispatches:  283583,  1189MiB    Write Dispatches:      0,     0KiB
 Reads Requeued:        0              Writes Requeued:       0
 Reads Completed:  273351,  1149MiB    Writes Completed:      0,     0KiB
 Read Merges:       23533, 94132KiB    Write Merges:          0,     0KiB
 IO unplugs:            0              Timer unplugs:         0
With blk_plug:
Total (8,16):
 Reads Queued:     428697,  1714MiB    Writes Queued:         0,     0KiB
 Read Dispatches:    3954,  1714MiB    Write Dispatches:      0,     0KiB
 Reads Requeued:        0              Writes Requeued:       0
 Reads Completed:    3956,  1715MiB    Writes Completed:      0,     0KiB
 Read Merges:      424743,  1698MiB    Write Merges:          0,     0KiB
 IO unplugs:            0              Timer unplugs:      3384
The effect is clear: read requests were merged on a massive scale before being dispatched. Read Merges rose from 23533 to 424743, while Read Dispatches fell from 283583 to 3954, so almost every small read was folded into a much larger request.
Notes on the other fields of struct blk_plug:
magic: used to detect whether the blk_plug is valid (e.g. uninitialized use)
list: the list on which requests are buffered
cb_list: a list of callbacks that are invoked when the buffered requests are flushed
should_sort: whether the requests should be sorted before they are dispatched
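For reference, this matches the definition in include/linux/blkdev.h of the Linux 3.x era (comments paraphrased):

struct blk_plug {
	unsigned long magic;		/* detects uninitialized/invalid plugs */
	struct list_head list;		/* the buffered requests */
	struct list_head cb_list;	/* unplug callbacks, used e.g. by md */
	unsigned int should_sort;	/* sort the list before flushing? */
};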