I mainly needed the adaptation part of this repo.
1. For adaptation, go to the adaptation directory. Put the WebCaricature dataset into "CariFaceParsing/adaptation/datasets/face_webcaricature". Link "trainA" and "val" to "photo", and link "trainB", "trainC", "trainD", "trainE", "trainF", "trainG", "trainH", "trainI" to "caricature". Then download the provided "landmark_webcaricature" and put it in "CariFaceParsing/adaptation/datasets/".
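The linking step above can be sketched in Python (a minimal sketch; the folder names follow the README, and `make_links` assumes the "photo" and "caricature" folders already exist under the base directory — on Windows, creating symlinks may require admin rights, in which case copying the folders also works):

```python
import os
import tempfile

def link_map():
    """Which face_webcaricature entries point at photo vs caricature."""
    links = {"trainA": "photo", "val": "photo"}
    for s in "BCDEFGHI":            # trainB .. trainI are all caricature
        links["train" + s] = "caricature"
    return links

def make_links(base):
    """Create the symlinks described in step 1, skipping ones that exist."""
    for name, target in link_map().items():
        link = os.path.join(base, name)
        if not os.path.exists(link):
            os.symlink(os.path.join(base, target), link)

if __name__ == "__main__":
    # demo in a throwaway directory instead of the real dataset path
    base = tempfile.mkdtemp()
    os.makedirs(os.path.join(base, "photo"))
    os.makedirs(os.path.join(base, "caricature"))
    make_links(base)
    print(sorted(os.listdir(base)))
```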
Because the downloaded landmark_webcaricature and the original dataset use different naming, I guess the intent is to copy over the images corresponding to the files in each landmark_webcaricature folder, building a split dataset that way... I don't know whether there is a more convenient method; in any case, I wrote a copy script.
My trainA and trainB are left over from running MUNIT earlier, when I split WebCaricature into separate photo and caricature folders, so I just reused them here.
The code is also included below.
# split webcaricature dataset into A~I according to ldmk dataset
import os
import shutil

ldmkPath = "D:/hx/dataset/CariFaceParsing_data/CariFaceParsing_data/adaptation/datasets/landmark_webcaricature/trainI"
imgPath_C = "D:/hx/dataset/dataset/p2c/trainA"
imgPath_P = "D:/hx/dataset/dataset/p2c/trainB"
dirPath = "D:/hx/codes/CariFaceParsing-master/CariFaceParsing-master/Adaptation/datasets/face_webcaricature/trainI"

def copyFiles():
    if not os.path.exists(dirPath):
        os.makedirs(dirPath)
    for root, dirs, files in os.walk(ldmkPath):
        for eachfile in files:
            a = eachfile.split("_")
            b = a[0].replace(' ', '_')        # image folder uses "_" where the name has a space
            c = a[-1].replace('npy', 'jpg')   # landmark .npy -> image .jpg
            filename = b + "_" + c            # name as stored in the image folder
            file1 = a[0] + "_" + c            # name to write into the target split
            for img_root, img_dirs, img_files in os.walk(imgPath_C):
                for img in img_files:
                    if img == filename:
                        path0 = os.path.join(imgPath_C, filename)
                        path1 = os.path.join(dirPath, file1)
                        shutil.copy(path0, path1)
                        print(eachfile + " copy succeeded")
                        break

if __name__ == '__main__':
    copyFiles()
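The script above rescans the whole image folder for every landmark file, which is slow on large splits. A faster variant (a sketch of my own, keeping the same renaming logic but parameterizing the paths and assuming flat folders) scans the image folder once into a set:

```python
import os
import shutil

def copy_split(ldmk_path, img_path, dir_path):
    """Copy the image matching each landmark file, scanning img_path only once."""
    os.makedirs(dir_path, exist_ok=True)
    available = set(os.listdir(img_path))       # one directory scan
    for eachfile in os.listdir(ldmk_path):
        a = eachfile.split("_")
        b = a[0].replace(' ', '_')              # "First Last" -> "First_Last"
        c = a[-1].replace('npy', 'jpg')         # landmark .npy -> image .jpg
        src_name = b + "_" + c                  # name used in the image folder
        dst_name = a[0] + "_" + c               # keep the landmark-style name
        if src_name in available:
            shutil.copy(os.path.join(img_path, src_name),
                        os.path.join(dir_path, dst_name))
```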
import os
import shutil

def copyFiles(srcPath, pathA, pathB):
    print(srcPath)
    if not os.path.exists(srcPath):
        print("src path not exist!")
    if not os.path.exists(pathA):
        os.makedirs(pathA)
    if not os.path.exists(pathB):
        os.makedirs(pathB)
    for root, dirs, files in os.walk(srcPath):
        for eachfile in files:
            a = root.split('\\')
            img = a[-1] + "_" + eachfile        # prefix with the person's folder name
            # caricature files start with 'C'; everything else is a photo
            dstPath = pathA if eachfile[0] == 'C' else pathB
            shutil.copy(os.path.join(root, eachfile), os.path.join(dstPath, img))
            print(eachfile + " copy succeeded")

if __name__ == '__main__':
    copyFiles('D:/hx/dataset/webcaricature_aligned_256',
              'D:/hx/dataset/dataset/p2c/trainA/',
              'D:/hx/dataset/dataset/p2c/trainB/')
2. Then put the Helen dataset into "CariFaceParsing/adaptation/datasets/helen"; it should have three subfolders: "images", "labels", "landmark".
For this step I couldn't find the labels through the author's link... the dataset link may have been updated. I searched for a long time TAT. I'll upload it here when I have time, because downloading from that link is far too slow...
Actually, if, like me, you only need the adaptation part and don't plan to do caricature segmentation, you can skip these labels; they are just the segmentation maps. I never ended up using them... I simply put copies of the images into "labels", and then got an error about 11 channels. That error happens because "labels" contains one folder per image, each holding 11 single-class segmentation maps, and these are supposed to be merged into a single 11-channel image (this is also mentioned in the issues). I didn't merge them into one image; I just made small modifications to the code.
As far as I remember, I changed these three places...
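For reference, the merge described in the issues can be sketched with NumPy (a sketch under my assumptions: the 11 per-class maps have already been loaded as same-sized grayscale arrays, in the channel order given by their sorted filenames; the repo's actual loader may differ):

```python
import numpy as np

def merge_label_maps(maps):
    """Stack 11 single-class (H, W) segmentation maps into one (H, W, 11) array."""
    assert len(maps) == 11, "Helen provides 11 per-class maps per image"
    return np.stack(maps, axis=-1)

def to_class_index(merged):
    """Collapse the 11-channel map to a single (H, W) class-index map."""
    return merged.argmax(axis=-1)
```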
3. Put the adapted results into "CariFaceParsing/adaptation/datasets/helen_shape_adaptation"; it should have "images" and "labels". Put the provided "train_style" and "test_style" here and link "train_content" and "test_content" to images.
For this step... at first I didn't create anything and just ran the test code directly, then created folders based on the error messages. Eventually I realized that test_style holds the caricature images to use as style references (8 of them by default)... and the results are generated in the test_content folder.
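Rather than discovering the required folders one error message at a time, they can be created up front (a sketch; the folder names are taken from step 3 above, and the two "_content" folders are plain directories here rather than links to images):

```python
import os

def prepare_dirs(base):
    """Create the helen_shape_adaptation layout expected by the test code."""
    for name in ["images", "labels", "train_style", "test_style",
                 "train_content", "test_content"]:
        os.makedirs(os.path.join(base, name), exist_ok=True)
```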
I haven't trained yet; I'll update this once training is done.