Reading the Linux kernel source: Facebook's disk-acceleration flashcache, part 3


The previous installment showed that flushing the cache calls new_kcached_job to create a kcached_job, which already hints at how cache blocks correspond to data on disk. Previous post: http://blog.csdn.net/liumangxiong/article/details/11726651
Now let's keep mining new_kcached_job for useful information, namely: how does a cache block correspond to sectors on disk? In other words, why does line 329 set disk.sector to the value below?
          job->disk.sector = dmc->cache[index].dbn;
Now is the time to unveil the variable dmc, that is, struct cache_c. Read dmc as "device mapper context" or "device mapper cache". First, struct cache_c:
134struct cache_c {
135	struct dm_target	*tgt;
136	
137	struct dm_dev 		*disk_dev;   /* Source device */
138	struct dm_dev 		*cache_dev; /* Cache device */
139
140#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,27)
141	struct kcopyd_client *kcp_client; /* Kcopyd client for writing back data */
142#else
143	struct dm_kcopyd_client *kcp_client; /* Kcopyd client for writing back data */
144	struct dm_io_client *io_client; /* Client memory pool*/
145#endif
146
147	spinlock_t		cache_spin_lock;
148
149	struct cacheblock	*cache;	/* Hash table for cache blocks */
150	struct cache_set	*cache_sets;
151	struct cache_md_sector_head *md_sectors_buf;
152	
153	sector_t size;			/* Cache size */
154	unsigned int assoc;		/* Cache associativity */
155	unsigned int block_size;	/* Cache block size */
156	unsigned int block_shift;	/* Cache block size in bits */
157	unsigned int block_mask;	/* Cache block mask */
158	unsigned int consecutive_shift;	/* Consecutive blocks size in bits */
159
160	wait_queue_head_t destroyq;	/* Wait queue for I/O completion */
161	/* XXX - Updates of nr_jobs should happen inside the lock. But doing it outside
162	   is OK since the filesystem is unmounted at this point */
163	atomic_t nr_jobs;		/* Number of I/O jobs */
164	atomic_t fast_remove_in_prog;
165
166	int	dirty_thresh_set;	/* Per set dirty threshold to start cleaning */
167	int	max_clean_ios_set;	/* Max cleaning IOs per set */
168	int	max_clean_ios_total;	/* Total max cleaning IOs */
169	int	clean_inprog;
170	int	sync_index;
171	int	nr_dirty;
172
173	int	md_sectors;		/* Numbers of metadata sectors, including header */
174
175	/* Stats */
176	unsigned long reads;		/* Number of reads */
177	unsigned long writes;		/* Number of writes */
178	unsigned long read_hits;	/* Number of cache hits */
179	unsigned long write_hits;	/* Number of write hits (includes dirty write hits) */
180	unsigned long dirty_write_hits;	/* Number of "dirty" write hits */
181	unsigned long replace;		/* Number of cache replacements */
182	unsigned long wr_replace;
183	unsigned long wr_invalidates;	/* Number of write invalidations */
184	unsigned long rd_invalidates;	/* Number of read invalidations */
185	unsigned long pending_inval;	/* Invalidations due to concurrent ios on same block */
186	unsigned long cached_blocks;	/* Number of cached blocks */
187#ifdef FLASHCACHE_DO_CHECKSUMS
188	unsigned long checksum_store;
189	unsigned long checksum_valid;
190	unsigned long checksum_invalid;
191#endif
192	unsigned long enqueues;		/* enqueues on pending queue */
193	unsigned long cleanings;
194	unsigned long noroom;		/* No room in set */
195	unsigned long md_write_dirty;	/* Metadata sector writes dirtying block */
196	unsigned long md_write_clean;	/* Metadata sector writes cleaning block */
197	unsigned long pid_drops;
198	unsigned long pid_adds;
199	unsigned long pid_dels;
200	unsigned long expiry;
201	unsigned long front_merge, back_merge;	/* Write Merging */
202	unsigned long uncached_reads, uncached_writes;
203	unsigned long disk_reads, disk_writes;
204	unsigned long ssd_reads, ssd_writes;
205	unsigned long ssd_readfills, ssd_readfill_unplugs;
206
207	unsigned long clean_set_calls;
208	unsigned long clean_set_less_dirty;
209	unsigned long clean_set_fails;
210	unsigned long clean_set_ios;
211	unsigned long set_limit_reached;
212	unsigned long total_limit_reached;
213
214	/* Errors */
215	int	disk_read_errors;
216	int	disk_write_errors;
217	int	ssd_read_errors;
218	int	ssd_write_errors;
219	int	memory_alloc_errors;
220
221#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20)
222	struct work_struct delayed_clean;
223#else
224	struct delayed_work delayed_clean;
225#endif
226
227	/* State for doing readfills (batch writes to ssd) */
228	int readfill_in_prog;
229	struct kcached_job *readfill_queue;
230	struct work_struct readfill_wq;
231
232	unsigned long pid_expire_check;
233
234	struct flashcache_cachectl_pid *blacklist_head, *blacklist_tail;
235	struct flashcache_cachectl_pid *whitelist_head, *whitelist_tail;
236	int num_blacklist_pids, num_whitelist_pids;
237	unsigned long blacklist_expire_check, whitelist_expire_check;
238	
239	struct cache_c	*next_cache;
240
241	char cache_devname[DEV_PATHLEN];
242	char disk_devname[DEV_PATHLEN];
243};
That is a lot of fields. Reading them one after another would put me to sleep. It is like reading a book: going straight from the first chapter to the last usually leaves the mind blank afterwards. Skimming the table of contents first, then hunting for the interesting parts with questions in mind and pausing now and then to ask why things are the way they are, is a more pleasant and effective way to read.
So before walking through the structure, a quick brainstorm raises these questions:
1) What are the source and destination devices, and what does the data flow look like after mapping?
2) How large is the cache? How large is a block? How are blocks organized?
3) How does dirty-data writeback work, and where are the watermarks?
Lines 137 and 138 are the disk and the SSD, i.e. the backing (source) device and the cache device. Be very clear about the notion of "cache" here: a cache normally lives in memory, but whenever flashcache talks about a cache block, remember it lives on the SSD. The change to the data flow is one extra flashcache device layer: on a hit the request is served from the SSD and never reaches the disk.
The cache size is held in size on line 153, but note that it is measured neither in bytes nor in sectors, but in cache blocks, each block_size in size. Cache blocks are organized into sets of assoc blocks each. Think of it as a two-dimensional array: the first index selects a set, the second a block within that set. To see this layout in action, look at flashcache_lookup:
543/* 
544 * dbn is the starting sector, io_size is the number of sectors.
545 */
546static int 
547flashcache_lookup(struct cache_c *dmc, struct bio *bio, int *index)
548{
549	sector_t dbn = bio->bi_sector;
550#if DMC_DEBUG
551	int io_size = to_sector(bio->bi_size);
552#endif
553	unsigned long set_number = hash_block(dmc, dbn);
554	int invalid, oldest_clean = -1;
555	int start_index;
556
557	start_index = dmc->assoc * set_number;
558	DPRINTK("Cache lookup : dbn %llu(%lu), set = %d",
559		dbn, io_size, set_number);
560	find_valid_dbn(dmc, dbn, start_index, index);
561	if (*index >= 0) {
562		DPRINTK("Cache lookup HIT: Block %llu(%lu): VALID index %d",
563			     dbn, io_size, *index);
564		/* We found the exact range of blocks we are looking for */
565		return VALID;
566	}
567	invalid = find_invalid_dbn(dmc, start_index);
568	if (invalid == -1) {
569		/* We didn't find an invalid entry, search for oldest valid entry */
570		find_reclaim_dbn(dmc, start_index, &oldest_clean);
571	}
572	/* 
573	 * Cache miss :
574	 * We can't choose an entry marked INPROG, but choose the oldest
575	 * INVALID or the oldest VALID entry.
576	 */
577	*index = start_index + dmc->assoc;
578	if (invalid != -1) {
579		DPRINTK("Cache lookup MISS (INVALID): dbn %llu(%lu), set = %d, index = %d, start_index = %d",
580			     dbn, io_size, set_number, invalid, start_index);
581		*index = invalid;
582	} else if (oldest_clean != -1) {
583		DPRINTK("Cache lookup MISS (VALID): dbn %llu(%lu), set = %d, index = %d, start_index = %d",
584			     dbn, io_size, set_number, oldest_clean, start_index);
585		*index = oldest_clean;
586	} else {
587		DPRINTK_LITE("Cache read lookup MISS (NOROOM): dbn %llu(%lu), set = %d",
588			dbn, io_size, set_number);
589	}
590	if (*index < (start_index + dmc->assoc))
591		return INVALID;
592	else {
593		dmc->noroom++;
594		return -1;
595	}
596}
Line 549: dbn is the starting sector of the bio. Line 553: set_number is the cache set that this sector maps to. A quick look at hash_block:
444/*
445 * Map a block from the source device to a block in the cache device.
446 */
447static unsigned long 
448hash_block(struct cache_c *dmc, sector_t dbn)
449{
450	unsigned long set_number, value;
451
452	value = (unsigned long)
453		(dbn >> (dmc->block_shift + dmc->consecutive_shift));
454	set_number = value % (dmc->size >> dmc->consecutive_shift);
455	DPRINTK("Hash: %llu(%lu)->%lu", dbn, value, set_number);
456	return set_number;
457}
As the comment says, this maps a block on the source device to a block on the cache device. Look at line 452: 1 << dmc->block_shift is the block size (in sectors) and 1 << dmc->consecutive_shift is the set size (in blocks), so value is which set-sized region of the source device the sector falls into. Returning that value directly is not enough: the source device is normally much larger than the cache, so many regions of the source device must map onto a single cache set. Hence line 454 takes value modulo (dmc->size >> dmc->consecutive_shift), which is the number of sets in the cache.
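To make the arithmetic concrete, here is a small stand-alone sketch of the same mapping. The numeric values (block_shift, consecutive_shift, the cache size and the sample dbn) are made-up examples rather than flashcache defaults, and the sketch assumes the set size 1 << consecutive_shift equals assoc, as described above; only the formulas mirror hash_block and the start_index computation in flashcache_lookup.

#include <stdio.h>

typedef unsigned long long sector_t;

int main(void)
{
	/* Illustrative values only: 8-sector (4 KB) blocks, 512-block sets,
	 * a cache of 65536 blocks. Real values come from the cache config. */
	unsigned int block_shift = 3;
	unsigned int consecutive_shift = 9;
	unsigned long assoc = 1UL << consecutive_shift;   /* blocks per set */
	sector_t size = 65536;                            /* cache size, in blocks */
	sector_t dbn = 1234567;                           /* starting sector of some bio */

	/* hash_block line 452: which set-sized region of the source device */
	unsigned long value = (unsigned long)(dbn >> (block_shift + consecutive_shift));
	/* hash_block line 454: fold it onto the cache's (size >> consecutive_shift) sets */
	unsigned long set_number = value % (unsigned long)(size >> consecutive_shift);
	/* flashcache_lookup line 557: index of the first cache block in that set */
	unsigned long start_index = assoc * set_number;

	printf("dbn %llu -> set %lu, cache indexes [%lu, %lu)\n",
	       dbn, set_number, start_index, start_index + assoc);
	return 0;
}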
Back in flashcache_lookup, line 557 computes start_index, the index of the first cache block of that set. Line 560 calls find_valid_dbn to check for a hit: on a hit the matching block is returned through index, otherwise index is set to -1. Line 561 tests for the hit and returns immediately; on a miss, line 567 searches for a usable cache block. Line 578 is the case where an invalid (free) block was found, line 582 the case where a clean block is reclaimed, and line 586 the case where no usable block was found in the set.
Back to struct cache_c, and on to cleaning. Writeback is driven by the work item on line 224, struct delayed_work delayed_clean;. Why is this a delayed_work? Searching for where it is scheduled leads to flashcache_clean_set:
          if (do_delayed_clean)
               schedule_delayed_work(&dmc->delayed_clean, 1*HZ);
So why is it scheduled with a one-second delay? Look at where do_delayed_clean is set:
          if (dmc->cache_sets[set].nr_dirty > dmc->dirty_thresh_set)
               do_delayed_clean = 1;
The idea is: when the set is still over the dirty threshold, check it again one second later. Why delay instead of cleaning immediately? Looking a little earlier in the same function makes it clear: the cleaning requests already issued have reached a limit, so no more can be issued right now.
Besides this work item, a few thresholds govern cleaning; lines 166 to 171 hold the related fields (a condensed sketch of how they fit together follows this list):
nr_dirty is the number of dirty cache blocks in the whole cache (struct cache_set keeps a per-set counter of the same name)
dirty_thresh_set is the per-set watermark above which dirty blocks start being written back to disk
max_clean_ios_set is the maximum number of cleaning (writeback) IOs that may be outstanding per set
max_clean_ios_total is the maximum number of cleaning IOs outstanding for the whole cache
clean_inprog is the number of cleaning IOs already issued
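Here is the condensed sketch promised above. It is not the real flashcache_clean_set, only an illustration of how these fields interact around the scheduling call quoted earlier; the selection of which dirty blocks to write back is elided.

/* Condensed illustration, not the verbatim flashcache_clean_set. */
static void clean_set_sketch(struct cache_c *dmc, int set)
{
	int do_delayed_clean = 0;

	if (dmc->cache_sets[set].clean_inprog >= dmc->max_clean_ios_set ||
	    dmc->clean_inprog >= dmc->max_clean_ios_total) {
		/* Per-set or total cleaning IO limit reached: issue nothing now,
		 * but if the set is still too dirty, re-check in one second. */
		if (dmc->cache_sets[set].nr_dirty > dmc->dirty_thresh_set)
			do_delayed_clean = 1;
	} else {
		/* ... pick dirty blocks in this set and issue writebacks,
		 * bumping clean_inprog for each IO sent down ... */
	}

	if (do_delayed_clean)
		schedule_delayed_work(&dmc->delayed_clean, 1 * HZ);
}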
Scanning back over struct cache_c from here, the remaining fields are mostly IO statistics and error counters.
Every good show saves its best act for last, and cache_c is no exception. Enter the big three:
     struct cacheblock     *cache;     /* Hash table for cache blocks */
     struct cache_set     *cache_sets;
     struct cache_md_sector_head *md_sectors_buf;
The first is the in-memory representation of a cache block; its on-SSD counterpart is flash_cacheblock. The second, cache_set, is the set we have been discussing all along. The third is used when flushing metadata, i.e. when state is written from the in-memory cacheblock out to the on-SSD flash_cacheblock. Let's take the three structures one at a time:
111/* Cache block metadata structure */
112struct cacheblock {
113	u_int16_t	cache_state;
114	int16_t 	nr_queued;	/* jobs in pending queue */
115	u_int16_t	lru_prev, lru_next;
116	sector_t 	dbn;	/* Sector number of the cached block */
117#ifdef FLASHCACHE_DO_CHECKSUMS
118	u_int64_t 	checksum;
119#endif
120	struct pending_job *head;
121};

cache_state: state of the cache block
nr_queued: number of jobs in the pending queue
lru_prev, lru_next: previous and next block in LRU order; note that these are indices, not pointers
dbn: sector number on the source disk that this block caches (see the small sketch after this list)
checksum: block checksum (only with FLASHCACHE_DO_CHECKSUMS)
head: pending jobs waiting on this block
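The dbn field is also the answer to the question at the top of this post: the disk-side sector of a writeback job is simply whatever dbn the in-memory cacheblock records. A tiny illustration (not from the flashcache source) of that relationship and of the two-dimensional indexing:

/* Illustration only: how a cache index relates to the disk and to its set. */
static sector_t writeback_disk_sector(struct cache_c *dmc, int index)
{
	/* Exactly the value new_kcached_job stores in job->disk.sector. */
	return dmc->cache[index].dbn;
}

static int index_to_set(struct cache_c *dmc, int index)
{
	/* Inverse of start_index = dmc->assoc * set_number in flashcache_lookup. */
	return index / dmc->assoc;
}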
The second structure:
123struct cache_set {
124	u_int32_t		set_fifo_next;
125	u_int32_t		set_clean_next;
126	u_int32_t		clean_inprog;
127	u_int32_t		nr_dirty;
128	u_int16_t		lru_head, lru_tail;
129};

The third structure:
344/* 
345 * We have one of these for *every* cache metadata sector, to keep track
346 * of metadata ios in progress for blocks covered in this sector. Only
347 * one metadata IO per sector can be in progress at any given point in 
348 * time
349 */
350struct cache_md_sector_head {
351	u_int32_t		nr_in_prog;
352	struct kcached_job	*pending_jobs, *md_io_inprog;
353};
As the comment says, there is one struct cache_md_sector_head for every cache metadata sector, used to track metadata IO in progress on that sector; such IO is triggered by state changes of the cache blocks the sector covers, and only one metadata IO per sector may be in flight at any time. At initialization nr_in_prog is 0 and both job lists are empty. When one of the covered cache blocks changes state and the change must be persisted to the SSD, a job is created and linked onto pending_jobs; when it is issued, nr_in_prog is set to 1 and the job moves from pending_jobs to md_io_inprog. If more jobs arrive while that IO is in flight, they queue on pending_jobs and are issued in the next round once md_io_inprog completes.
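A minimal sketch of that serialization follows. The helper names are hypothetical (they are not flashcache functions), and it assumes kcached_job can be chained into a singly linked list through a next pointer; the actual metadata-write submission is elided.

/* Hypothetical helpers illustrating the one-IO-per-metadata-sector rule. */
static void md_write_sketch(struct cache_md_sector_head *head,
			    struct kcached_job *job)
{
	if (head->nr_in_prog == 0) {
		/* Nothing in flight for this sector: issue the job right away. */
		head->nr_in_prog = 1;
		head->md_io_inprog = job;
		/* ... submit the metadata sector write to the SSD ... */
	} else {
		/* An IO is already in flight: queue behind it
		 * (assumes a 'next' link in kcached_job). */
		job->next = head->pending_jobs;
		head->pending_jobs = job;
	}
}

static void md_write_done_sketch(struct cache_md_sector_head *head)
{
	if (head->pending_jobs) {
		/* Fold the jobs that queued up into the next metadata write. */
		head->md_io_inprog = head->pending_jobs;
		head->pending_jobs = NULL;
		/* ... submit the next metadata sector write ... */
	} else {
		head->md_io_inprog = NULL;
		head->nr_in_prog = 0;
	}
}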
That covers flashcache's important data structures. The next installment will walk through flashcache's data flow.