1. MNIST dataset
Training set: 60,000 grayscale images, size 28×28, 10 classes (digits 0-9)
Test set: 10,000 grayscale images, size 28×28
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test: arrays of shape (60000, 28, 28) and (10000, 28, 28) respectively.
y_train, y_test: digit labels (0-9), of shape (60000,) and (10000,) respectively.
Download: http://yann.lecun.com/exdb/mnist/
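Before training, these raw uint8 arrays are usually scaled to [0, 1] and the integer labels one-hot encoded. A minimal sketch, using a small random array as a stand-in for the real download:

```python
import numpy as np

# stand-in for mnist.load_data() output: uint8 images and integer labels
x_train = np.random.randint(0, 256, size=(8, 28, 28), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(8,))

# scale pixel values from 0-255 down to 0.0-1.0
x = x_train.astype("float32") / 255.0

# one-hot encode the 10 digit classes
y = np.eye(10, dtype="float32")[y_train]

print(x.shape, y.shape)  # (8, 28, 28) (8, 10)
```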
2. CIFAR-10 dataset
CIFAR-10 is drawn from the 80 Million Tiny Images dataset. Overview:
Total | Color | Image size | Classes | Training set | Test set |
60,000 | RGB | 32×32 | 10 | 50,000 | 10,000 |
The dataset is split into 5 training batches and 1 test batch, each containing 10,000 images. The test batch holds exactly 1,000 images randomly selected from each class; the training batches contain the remaining images in random order, so a single training batch may hold more images of one class than another, but across all five batches there are exactly 5,000 images per class. The classes are completely mutually exclusive.
Download: http://www.cs.toronto.edu/~kriz/cifar.html
After extraction the archive contains data_batch_1 through data_batch_5, test_batch, and batches.meta.
The following Python 3 code opens a batch file and converts it into a dictionary:
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict_ = pickle.load(fo, encoding='bytes')
    return dict_
-----
data = unpickle('test_batch')
data.keys() # dict_keys([b'batch_label', b'labels', b'data', b'filenames'])
data[b'data'][0] # array([158, 159, 165, ..., 124, 129, 110], dtype=uint8)
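Each row of data[b'data'] is 3072 values: the first 1024 are the red channel, the next 1024 green, the last 1024 blue, each stored row-major. A sketch of unpacking one row into an image array, using a synthetic row (the real data is uint8; int32 is used here so the counting pattern does not wrap):

```python
import numpy as np

# synthetic stand-in for one row of data[b'data']
row = np.arange(3072, dtype=np.int32)

# reshape to (channel, height, width), then transpose to
# (height, width, channel) as expected by most image libraries
img = row.reshape(3, 32, 32).transpose(1, 2, 0)

print(img.shape)           # (32, 32, 3)
print(img[0, 0].tolist())  # [0, 1024, 2048]: first R, G and B values
```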
The batches.meta file contains [b'num_cases_per_batch', b'label_names', b'num_vis']; label_names holds the English names of the ten classes.
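The label names come out of batches.meta as bytes objects; decoding them gives the usual label-to-name lookup. A sketch, with the dictionary written out literally instead of unpickled from disk:

```python
# stand-in for the dictionary unpickled from batches.meta
meta = {
    b'num_cases_per_batch': 10000,
    b'label_names': [b'airplane', b'automobile', b'bird', b'cat', b'deer',
                     b'dog', b'frog', b'horse', b'ship', b'truck'],
    b'num_vis': 3072,
}

# decode the bytes names into plain strings for a label -> name lookup
label_names = [n.decode('utf-8') for n in meta[b'label_names']]
print(label_names[0], label_names[9])  # airplane truck
```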
Downloading the data from within a program:
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test: arrays of shape (50000, 32, 32, 3) and (10000, 32, 32, 3) respectively with Keras's default channels_last image format (with channels_first they are (50000, 3, 32, 32) and (10000, 3, 32, 32)).
y_train, y_test: labels in the range 0-9, of shape (50000, 1) and (10000, 1) respectively.
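Note that the labels come back as column vectors of shape (n, 1); many APIs expect 1-D labels, so they are often flattened first. A sketch with a stand-in array:

```python
import numpy as np

# stand-in for y_train as returned by cifar10.load_data()
y_train = np.array([[6], [9], [9], [4]], dtype=np.uint8)

y_flat = y_train.ravel()  # collapse (n, 1) to (n,)
print(y_flat.shape, y_flat.tolist())  # (4,) [6, 9, 9, 4]
```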
CIFAR-10 visualization:
import numpy as np
from PIL import Image
import pickle
import os
import matplotlib.image as plimg

CHANNEL = 3
WIDTH = 32
HEIGHT = 32

data = []
labels = []
classification = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                  'dog', 'frog', 'horse', 'ship', 'truck']

# the extracted dataset is assumed to sit in the same folder as this script
for i in range(5):
    with open("./cifar-10-batches-py/data_batch_" + str(i + 1), mode='rb') as file:
        data_dict = pickle.load(file, encoding='bytes')
        data += list(data_dict[b'data'])
        labels += list(data_dict[b'labels'])

img = np.reshape(data, [-1, CHANNEL, HEIGHT, WIDTH])

# create the output folders (they can also be created by hand)
for data_path in ("./pic3/", "./pic4/"):
    if not os.path.exists(data_path):
        os.makedirs(data_path)

for i in range(100):
    r = img[i][0]
    g = img[i][1]
    b = img[i][2]
    # save each channel separately
    plimg.imsave("./pic4/" + str(i) + "r" + ".png", r)
    plimg.imsave("./pic4/" + str(i) + "g" + ".png", g)
    plimg.imsave("./pic4/" + str(i) + "b" + ".png", b)
    # merge the three channels back into an RGB image
    ir = Image.fromarray(r)
    ig = Image.fromarray(g)
    ib = Image.fromarray(b)
    rgb = Image.merge("RGB", (ir, ig, ib))
    name = "img-" + str(i) + "-" + classification[labels[i]] + ".png"
    rgb.save("./pic3/" + name, "PNG")
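Merging the channels one by one as above works, but the same result can be had in one step by transposing the (channel, height, width) array into (height, width, channel) order, which PIL accepts directly. A sketch with a random stand-in image:

```python
import numpy as np
from PIL import Image

# stand-in for one CIFAR image in (channel, height, width) order
chw = np.random.randint(0, 256, size=(3, 32, 32), dtype=np.uint8)

# reorder axes to (height, width, channel) and build the RGB image directly
hwc = chw.transpose(1, 2, 0)
rgb = Image.fromarray(hwc)  # call rgb.save("img.png") to write it out

print(rgb.size, rgb.mode)  # (32, 32) RGB
```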
3. CIFAR-100
It has 100 classes, each containing 600 images: 500 training images and 100 test images per class. The 100 classes are grouped into 20 superclasses. Each image carries a "fine" label (the class it belongs to) and a "coarse" label (the superclass it belongs to).
Download: http://www.cs.toronto.edu/~kriz/cifar.html
Inspecting the CIFAR-100 data in Python:

def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict_ = pickle.load(fo, encoding='bytes')
    return dict_

data = unpickle('./cifar-100-python/train')
data.keys()  # dict_keys([b'filenames', b'batch_label', b'fine_labels', b'coarse_labels', b'data'])
CIFAR-100 visualization:
# -*- coding:utf-8 -*-
import os
import pickle as p
import numpy as np
import matplotlib.image as plimg
from PIL import Image

def load_CIFAR_batch(filename):
    """Load a single CIFAR-100 batch file."""
    with open(filename, 'rb') as f:
        datadict = p.load(f, encoding='bytes')
        X = datadict[b'data']
        # each image carries a fine label (0-99) and a coarse label (0-19);
        # keep the fine labels here
        Y = datadict[b'fine_labels']
        X = X.reshape(-1, 3, 32, 32)
        Y = np.array(Y)
        return X, Y

if __name__ == "__main__":
    imgX, imgY = load_CIFAR_batch("./cifar-100-python/train")
    print(imgX.shape)
    print("Saving images:")
    for path in ("./pic1/", "./pic2/"):
        if not os.path.exists(path):
            os.makedirs(path)
    for i in range(imgX.shape[0]):
        imgs = imgX[i]
        if i < 100:  # only handle the first 100 images; drop this check to save all of them (many images, takes a while)
            img0 = imgs[0]
            img1 = imgs[1]
            img2 = imgs[2]
            i0 = Image.fromarray(img0)
            i1 = Image.fromarray(img1)
            i2 = Image.fromarray(img2)
            img = Image.merge("RGB", (i0, i1, i2))
            name = "img" + str(i) + ".png"
            img.save("./pic1/" + name, "png")  # pic1 holds the merged RGB images
            for j in range(imgs.shape[0]):
                img = imgs[j]
                name = "img" + str(i) + str(j) + ".jpg"
                print("Saving image " + name)
                plimg.imsave("./pic2/" + name, img)  # pic2 holds the separate R/G/B channels
    print("Done.")
Downloading the data from within a program:
from keras.datasets import cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='fine')
x_train, x_test: arrays of shape (50000, 32, 32, 3) and (10000, 32, 32, 3) respectively with the default channels_last image format (with channels_first they are (50000, 3, 32, 32) and (10000, 3, 32, 32)).
y_train, y_test: labels in the range 0-99, of shape (50000, 1) and (10000, 1) respectively.
4. SVHN dataset
The Street View House Numbers (SVHN) data is a real-world image dataset for developing machine learning and object recognition algorithms, with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (the images are small cropped digits), but it contains an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved real-world problem: recognizing digits and number sequences in natural scene images. SVHN is obtained from house numbers in Google Street View images.
SVHN is a dataset for recognizing Arabic digits in images. The images come from real-world house numbers, each containing a group of digits 0-9. The training set contains 73,257 digits, the test set 26,032 digits, and there are 531,131 additional (extra) digits.
Download: http://ufldl.stanford.edu/housenumbers/
For easy conversion, download train_32x32.mat and test_32x32.mat. Each .mat file contains two variables: X, a 4-D matrix with dimensions (32, 32, 3, n), where n is the number of samples, and y, the label variable.
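After loading one of these files (e.g. with scipy.io.loadmat), two cleanups are common: moving the sample axis of X to the front, and remapping the label 10, which SVHN uses for the digit 0. A sketch using stand-in arrays in place of the loaded .mat variables:

```python
import numpy as np

# stand-ins for the X and y variables of train_32x32.mat
n = 5
X = np.random.randint(0, 256, size=(32, 32, 3, n), dtype=np.uint8)
y = np.array([[1], [10], [3], [10], [7]], dtype=np.uint8)

# move the sample axis first: (32, 32, 3, n) -> (n, 32, 32, 3)
X = X.transpose(3, 0, 1, 2)

# SVHN labels the digit 0 as 10; remap it so labels run 0-9
y = y.ravel()
y[y == 10] = 0

print(X.shape, y.tolist())  # (5, 32, 32, 3) [1, 0, 3, 0, 7]
```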
5. fashion_mnist
Training set: 60,000 grayscale images, size 28×28, 10 classes (labels 0-9)
Test set: 10,000 grayscale images, size 28×28
Each image is a 28×28 array of pixels, each pixel an 8-bit unsigned integer (uint8) in the range 0-255. Some frameworks (e.g. MXNet's NDArray) store each image as a 3-D array whose last dimension is the channel count; since the images are grayscale, the channel count is 1.
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test: arrays of shape (60000, 28, 28) and (10000, 28, 28) respectively.
y_train, y_test: class labels (0-9), of shape (60000,) and (10000,) respectively.
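Unlike MNIST, the labels 0-9 here denote clothing categories rather than digits; the mapping below follows the official zalandoresearch/fashion-mnist README:

```python
# label -> category name mapping from the fashion-mnist README
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

label = 9  # an example value from y_train
print(class_names[label])  # Ankle boot
```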
Download links:
Training set images: 60,000, http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Training set labels: 60,000, http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Test set images: 10,000, http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Test set labels: 10,000, http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz