Preface
The raw ImageNet data (roughly one million images) is about 148 GB, and the converted TFRecord files take about another 144 GB, so prepare at least 300 GB of disk space.
Reference: https://github.com/tensorflow/models/tree/master/research/inception#getting-started
1 Register at the ImageNet site http://image-net.org and note your username and password.
2 Download the code. It lives in the TensorFlow model zoo, https://github.com/tensorflow/models; the data-preparation scripts are under models/research/inception/inception/data.
3 Edit the download_and_preprocess_imagenet.sh script; my modified version is shown below. The main changes are the three variables WORK_DIR, BUILD_SCRIPT, and OUTPUT_DIRECTORY. In addition, make BUILD_SCRIPT executable and add #!/usr/bin/env python at the top of build_imagenet_data.py, because the script invokes BUILD_SCRIPT directly as an executable (by default it expects a shell-executable file).
# usage:
# ./download_and_preprocess_imagenet.sh [data-dir]
set -e
if [ -z "$1" ]; then
  echo "Usage: download_and_preprocess_imagenet.sh [data dir]"
  exit
fi
# Create the output and temporary directories.
DATA_DIR="${1%/}"
SCRATCH_DIR="${DATA_DIR}/raw-data/"
#mkdir -p "${DATA_DIR}"
#mkdir -p "${SCRATCH_DIR}"
#WORK_DIR="$0.runfiles/inception/inception"
WORK_DIR="/home/linlf/project/linlf/mlperf/reference/image_classification/dataset/inception"
# Download the ImageNet data.
LABELS_FILE="${WORK_DIR}/data/imagenet_lsvrc_2015_synsets.txt"
DOWNLOAD_SCRIPT="${WORK_DIR}/data/download_imagenet.sh"
"${DOWNLOAD_SCRIPT}" "${SCRATCH_DIR}" "${LABELS_FILE}"
# Note the locations of the train and validation data.
TRAIN_DIRECTORY="${SCRATCH_DIR}train/"
VALIDATION_DIRECTORY="${SCRATCH_DIR}validation/"
# Preprocess the validation data by moving the images into the appropriate
# sub-directory based on the label (synset) of the image.
echo "Organizing the validation data into sub-directories."
PREPROCESS_VAL_SCRIPT="${WORK_DIR}/data/preprocess_imagenet_validation_data.py"
VAL_LABELS_FILE="${WORK_DIR}/data/imagenet_2012_validation_synset_labels.txt"
"${PREPROCESS_VAL_SCRIPT}" "${VALIDATION_DIRECTORY}" "${VAL_LABELS_FILE}"
# Convert the XML files for bounding box annotations into a single CSV.
echo "Extracting bounding box information from XML."
BOUNDING_BOX_SCRIPT="${WORK_DIR}/data/process_bounding_boxes.py"
BOUNDING_BOX_FILE="${SCRATCH_DIR}/imagenet_2012_bounding_boxes.csv"
BOUNDING_BOX_DIR="${SCRATCH_DIR}bounding_boxes/"
"${BOUNDING_BOX_SCRIPT}" "${BOUNDING_BOX_DIR}" "${LABELS_FILE}" \
| sort > "${BOUNDING_BOX_FILE}"
echo "Finished downloading and preprocessing the ImageNet data."
# Build the TFRecords version of the ImageNet data.
#BUILD_SCRIPT="${WORK_DIR}/build_imagenet_data"
BUILD_SCRIPT="${WORK_DIR}/data/build_imagenet_data_newdataset.py"
#OUTPUT_DIRECTORY="${DATA_DIR}"
# mine -- write output to a new directory
OUTPUT_DIRECTORY="/home/linlf/dataset"
IMAGENET_METADATA_FILE="${WORK_DIR}/data/imagenet_metadata.txt"
"${BUILD_SCRIPT}" \
--train_directory="${TRAIN_DIRECTORY}" \
--validation_directory="${VALIDATION_DIRECTORY}" \
--output_directory="${OUTPUT_DIRECTORY}" \
--imagenet_metadata_file="${IMAGENET_METADATA_FILE}" \
--labels_file="${LABELS_FILE}" \
--bounding_box_file="${BOUNDING_BOX_FILE}"
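For reference, the reorganization that preprocess_imagenet_validation_data.py performs on the flat validation directory can be sketched roughly as follows. The ILSVRC2012_val_%08d.JPEG naming and the one-synset-per-line label file follow the ILSVRC2012 convention this pipeline uses; the script in the repo remains the authoritative version.

```python
import os
import shutil

def organize_validation(data_dir, labels_file):
    """Move flat validation images into per-synset subdirectories.

    Assumes line i (1-based) of labels_file holds the synset label of
    ILSVRC2012_val_%08d.JPEG, which is the layout the repo's
    preprocess_imagenet_validation_data.py expects.
    """
    with open(labels_file) as f:
        labels = [line.strip() for line in f if line.strip()]
    for i, synset in enumerate(labels, start=1):
        name = 'ILSVRC2012_val_%08d.JPEG' % i
        dst_dir = os.path.join(data_dir, synset)
        os.makedirs(dst_dir, exist_ok=True)
        shutil.move(os.path.join(data_dir, name),
                    os.path.join(dst_dir, name))
```

After this step, the validation images sit in the same label-named subdirectory layout as the training images, which is what the TFRecord builder expects.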
4 In build_imagenet_data.py you can freely choose how many training and validation TFRecord shards to produce, at around line 110 of the file. Here I changed it to 512 training shards and 64 validation shards:
tf.app.flags.DEFINE_integer('train_shards', 512,
'Number of shards in training TFRecord files.')
tf.app.flags.DEFINE_integer('validation_shards', 64,
'Number of shards in validation TFRecord files.')
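With these flags the builder writes shard files named like train-00000-of-00512. As a rough sketch (the '%s-%.5d-of-%.5d' naming pattern matches build_imagenet_data.py, and the divisibility check mirrors that script's assertion that each shard count must be divisible by --num_threads):

```python
def shard_filename(name, shard, num_shards):
    # Same naming pattern build_imagenet_data.py uses for its output files.
    return '%s-%.5d-of-%.5d' % (name, shard, num_shards)

def check_shards(train_shards=512, validation_shards=64, num_threads=8):
    # build_imagenet_data.py asserts each shard count is divisible by
    # --num_threads, so choose values accordingly (512 and 64 both are).
    assert train_shards % num_threads == 0
    assert validation_shards % num_threads == 0

check_shards()
print(shard_filename('train', 0, 512))       # train-00000-of-00512
print(shard_filename('validation', 63, 64))  # validation-00063-of-00064
```

So if you change the shard counts, keep them multiples of the thread count, or the build will fail at startup.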