Getting started with TensorFlow: reading variable-length data in batches with TFRecordDataset

The previous article, "tensorflow入門: tfrecord and tf.data.TFRecordDataset", covered how to read a tfrecord file in batches with tf.data.TFRecordDataset, using the dataset's batch method. But when records have different lengths (common in speech, video, NLP, and similar domains), the batch method cannot be used directly. There are two ways around this:

1. Pad every record to a common length before writing it to the tfrecord file. The drawback: if many records are much shorter than the maximum length, a large amount of storage space is wasted.
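To make that trade-off concrete, here is a back-of-the-envelope calculation in plain Python; the sequence lengths are hypothetical numbers chosen for illustration:

```python
# Hypothetical numbers: 10,000 float32 sequences, most far shorter than the max.
max_len = 1000
lengths = [50] * 9000 + [1000] * 1000

padded_bytes = len(lengths) * max_len * 4  # every record padded to max_len (4 bytes per float32)
actual_bytes = sum(lengths) * 4            # records stored at their true lengths

print(round(padded_bytes / actual_bytes, 1))  # padding costs ~6.9x the space here
```

The more the length distribution is skewed toward short records, the larger this ratio gets.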


2. Use the dataset's padded_batch method. Its padded_shapes argument specifies the shape each component of a record should be padded to: use [] for a scalar, [max_length] for a list, and [d1, ..., dn] for an array. If a record's components are, in order, a scalar, a list, and an array, then padded_shapes=([], [max_length], [d1, ..., dn]). The method's signature:

padded_batch(
    batch_size,
    padded_shapes,
    padding_values=None    # defaults to each dtype's default value; usually safe to omit
)
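Conceptually, padded_batch groups batch_size records and pads each component of every record out to the shape given in padded_shapes. A minimal pure-Python sketch of that idea for the list-valued case (pad_batch is an illustrative stand-in, not a TensorFlow API):

```python
def pad_batch(records, max_length, pad_value=0.0):
    """Pad each variable-length record in one batch to max_length.

    A toy stand-in for what padded_batch does to a list-valued component.
    """
    return [r + [pad_value] * (max_length - len(r)) for r in records]

batch = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
padded = pad_batch(batch, max_length=4)
# every record in the batch now has length 4
```

With the real padded_batch, this happens per batch inside the input pipeline, and the padded components come back as regular dense tensors.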

The example below uses MNIST. Before the data is written to a tfrecord file, each MNIST image is truncated by a random amount so that the records have unequal lengths:

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

mnist = read_data_sets("MNIST_data/", one_hot=True)


def get_tfrecords_example(feature, label):
	tfrecords_features = {}
	feat_shape = feature.shape
	tfrecords_features['feature'] = tf.train.Feature(float_list=tf.train.FloatList(value=feature))
	tfrecords_features['shape'] = tf.train.Feature(int64_list=tf.train.Int64List(value=list(feat_shape)))
	tfrecords_features['label'] = tf.train.Feature(float_list=tf.train.FloatList(value=label))
	return tf.train.Example(features=tf.train.Features(feature=tfrecords_features))


def make_tfrecord(data, outf_nm='mnist-train'):
	feats, labels = data
	outf_nm += '.tfrecord'
	tfrecord_wrt = tf.python_io.TFRecordWriter(outf_nm)
	ndatas = len(labels)
	print(feats[0].dtype, feats[0].shape, ndatas)
	assert len(labels[0]) > 1  # labels must be one-hot vectors
	for inx in range(ndatas):
		ed = random.randint(1, 3)  # drop 1-3 trailing values so record lengths differ
		exmp = get_tfrecords_example(feats[inx][:-ed], labels[inx])
		exmp_serial = exmp.SerializeToString()
		tfrecord_wrt.write(exmp_serial)
	tfrecord_wrt.close()

import random

nDatas = len(mnist.train.labels)
inx_lst = list(range(nDatas))  # range() itself cannot be shuffled in Python 3
random.shuffle(inx_lst)
ntrains = int(0.85 * nDatas)

# make training set
data = ([mnist.train.images[i] for i in inx_lst[:ntrains]], \
	[mnist.train.labels[i] for i in inx_lst[:ntrains]])
make_tfrecord(data, outf_nm='mnist-train')

# make validation set
data = ([mnist.train.images[i] for i in inx_lst[ntrains:]], \
	[mnist.train.labels[i] for i in inx_lst[ntrains:]])
make_tfrecord(data, outf_nm='mnist-val')

# make test set
data = (mnist.test.images, mnist.test.labels)
make_tfrecord(data, outf_nm='mnist-test')

When loading batches with a dataset, parse the variable-length component with tf.VarLenFeature(tf.datatype) rather than tf.FixedLenFeature([], tf.datatype). VarLenFeature yields a SparseTensor, so it must be used together with tf.sparse_tensor_to_dense:

import tensorflow as tf

train_f, val_f, test_f = ['mnist-%s.tfrecord'%i for i in ['train', 'val', 'test']]

def parse_exmp(serial_exmp):
	feats = tf.parse_single_example(serial_exmp, features={'feature': tf.VarLenFeature(tf.float32),
		'label': tf.FixedLenFeature([10], tf.float32), 'shape': tf.FixedLenFeature([], tf.int64)})
	image = tf.sparse_tensor_to_dense(feats['feature'])  # VarLenFeature yields a SparseTensor; convert it to dense
	label = tf.reshape(feats['label'], [2, 5])  # reshape label to [2,5] to demonstrate how array data is padded
	shape = tf.cast(feats['shape'], tf.int32)
	return image, label, shape

def get_dataset(fname):
	dataset = tf.data.TFRecordDataset(fname)
	return dataset.map(parse_exmp) # use padded_batch method if padding needed

epochs = 16
batch_size = 50  
padded_shapes = ([784], [3,5], [])  # pad image to 784, pad label to [3,5]; shape is a scalar, so use []
# training dataset
dataset_train = get_dataset(train_f)
dataset_train = dataset_train.repeat(epochs).shuffle(1000).padded_batch(batch_size, padded_shapes=padded_shapes)
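The sparse-to-dense step in parse_exmp above deserves a closer look: a 1-D SparseTensor is an (indices, values, dense_shape) triple, and tf.sparse_tensor_to_dense scatters the values into a default-filled dense tensor. A minimal pure-Python sketch of that conversion (sparse_to_dense_1d is an illustrative helper, not a TensorFlow function):

```python
def sparse_to_dense_1d(indices, values, dense_size, default=0.0):
    """Scatter sparse (index, value) pairs into a default-filled dense list,
    mimicking what tf.sparse_tensor_to_dense does for a 1-D SparseTensor."""
    dense = [default] * dense_size
    for i, v in zip(indices, values):
        dense[i] = v
    return dense

# A VarLenFeature stores its values at consecutive indices 0, 1, 2, ...,
# so converting to dense simply recovers the original variable-length vector.
print(sparse_to_dense_1d([0, 1, 2], [0.5, 0.25, 1.0], 5))
# [0.5, 0.25, 1.0, 0.0, 0.0]
```

In the pipeline above the dense image still has a per-record length (784 minus the truncated amount), which is exactly why padded_batch is needed to assemble uniform batches.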

