In deep learning research, the training process of a network model is mostly presented to us as a black box. What features the convolutional layers learn from the training data, and what the resulting feature maps look like, is opaque. To address this, we can visualize the feature maps: render the feature maps of a chosen layer of the network as images.
Environment: TensorFlow 1.x
Required package: matplotlib, imported as `import matplotlib.pyplot as plt` (the plotting API lives in the `pyplot` submodule, so `import matplotlib as plt` alone would not work).
The procedure: first, return the tensor of the layer to be visualized from the model definition to the file that loads the model and runs inference; then run that tensor in a session to obtain a NumPy array, which can be displayed as an image. The details are as follows:
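The core data manipulation here is independent of TensorFlow and can be sketched with NumPy alone (the shapes below are made up for illustration): an activation batch of shape (1, H, W, C) is squeezed to (H, W, C) and split into C single-channel 2-D maps, each of which can then be shown with `imshow`.

```python
import numpy as np

# Hypothetical activation batch: batch size 1, 8x8 spatial size, 4 channels
batch = np.random.rand(1, 8, 8, 4).astype(np.float32)

# Drop the batch dimension: (1, 8, 8, 4) -> (8, 8, 4)
feature_map = np.squeeze(batch, axis=0)

# Split into one 2-D grayscale image per channel
channels = [feature_map[:, :, i] for i in range(feature_map.shape[2])]
print(len(channels), channels[0].shape)  # 4 maps of shape (8, 8)
```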
1. Model file model.py
```python
import tensorflow as tf

class vgg16:
    def __init__(self, imgs):
        self.imgs = imgs
        self.convlayers()
        self.fc_layers()
        # Expose the layer we want to visualize as an attribute
        self.c3_3 = self.conv3_3

    def saver(self):
        return tf.train.Saver()

    def maxpool(self, name, input_data):
        out = tf.nn.max_pool(input_data, [1, 2, 2, 1], [1, 2, 2, 1],
                             padding="SAME", name=name)
        return out

    def conv(self, name, input_data, out_channel):
        in_channel = input_data.get_shape()[-1]
        with tf.variable_scope(name):
            kernel = tf.get_variable("weights", [3, 3, in_channel, out_channel],
                                     dtype=tf.float32)
            biases = tf.get_variable("biases", [out_channel], dtype=tf.float32)
            conv_res = tf.nn.conv2d(input_data, kernel, [1, 1, 1, 1], padding="SAME")
            res = tf.nn.bias_add(conv_res, biases)
            out = tf.nn.leaky_relu(res, name=name)
        return out

    def convlayers(self):
        self.conv1_1 = self.conv("conv1re_1", self.imgs, 64)
        self.conv1_2 = self.conv("conv1_2", self.conv1_1, 64)
        # ...
        self.conv3_2 = self.conv("convrwe3_2", self.conv3_1, 256)
        self.conv3_3 = self.conv("convrew3_3", self.conv3_2, 256)  # suppose this is the conv layer to visualize
        self.pool3 = self.maxpool("poolre3", self.conv3_3)
```
2. Model-loading and inference file Predict.py
```python
import tensorflow as tf
import numpy as np
import VGG16_Model as model
import matplotlib.pyplot as plt

#####################################

def get_row_col(num_pic):
    # Choose a near-square subplot grid large enough to hold num_pic images
    squr = num_pic ** 0.5
    row = round(squr)
    col = row + 1 if squr - row > 0 else row
    return row, col

def visualize_feature_map(img_batch):
    # img_batch: NumPy array of shape (1, H, W, C); show each channel as its own subplot
    feature_map = np.squeeze(img_batch, axis=0)
    print(feature_map.shape)
    feature_map_combination = []
    plt.figure()
    num_pic = feature_map.shape[2]
    row, col = get_row_col(num_pic)
    for i in range(0, num_pic):
        feature_map_split = feature_map[:, :, i]
        feature_map_combination.append(feature_map_split)
        plt.subplot(row, col, i + 1)
        plt.imshow(feature_map_split)
        plt.axis('off')
        # plt.title('feature_map_{}'.format(i))
    plt.savefig('feature_map.png')
    plt.show()

def visualize_feature_map_sum(feature_batch):
    '''
    Sum the per-channel feature maps into a single image.
    :param feature_batch: NumPy array of shape (1, H, W, C)
    :return:
    '''
    feature_map = np.squeeze(feature_batch, axis=0)
    feature_map_combination = []
    # number of feature maps (channels)
    num_pic = feature_map.shape[2]
    for i in range(0, num_pic):
        # To sum only a subset of the maps (say maps 100-200 out of 512),
        # restrict the channel indices iterated over here instead of using all of them
        feature_map_split = feature_map[:, :, i]
        feature_map_combination.append(feature_map_split)
    # element-wise sum over all collected channel maps
    feature_map_sum = sum(one for one in feature_map_combination)
    plt.imshow(feature_map_sum)
    plt.show()

################################################

pic_width, pic_height = 64, 128
x = tf.placeholder(tf.float32, [None, pic_height, pic_width, 3])
sess = tf.Session()
vgg = model.vgg16(x)
ConVis = vgg.c3_3            # tensor of the layer to visualize
fc8_finetuining = vgg.probs
saver = tf.train.Saver()
print("model restoring")
# saver = tf.train.import_meta_graph('./model/epoch000601.ckpt.meta')
saver.restore(sess, "./modeldir/epoch000602.ckpt")  # key step: restore the trained model weights

image_contents = tf.read_file('C:/Users/featuremap/1_3385.png')  # key step: read the image to visualize
image = tf.image.decode_png(image_contents, channels=3)  # the file is a PNG, so decode_png rather than decode_jpeg
image = tf.cast(image, tf.float32)
image.set_shape((pic_height, pic_width, 3))
image = sess.run(image)  # convert the image tensor to a NumPy array so it can be fed into the network
prob = sess.run(fc8_finetuining, feed_dict={x: [image]})  # unrelated to visualization: the prediction result
ConVis = sess.run(ConVis, feed_dict={x: [image]})  # key step: run the tensor of the layer to visualize
visualize_feature_map(ConVis)      # show each feature map separately
visualize_feature_map_sum(ConVis)  # show the summed feature map
max_index = np.argmax(prob)
print("Pred:", max_index)  # print the predicted class
```
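The loop-and-sum in `visualize_feature_map_sum` is equivalent to a single NumPy reduction over the channel axis, which can be verified on a toy array (the shape below is arbitrary):

```python
import numpy as np

batch = np.random.rand(1, 8, 8, 4).astype(np.float32)
feature_map = np.squeeze(batch, axis=0)

# Loop version, as in visualize_feature_map_sum
maps = [feature_map[:, :, i] for i in range(feature_map.shape[2])]
loop_sum = sum(one for one in maps)

# Equivalent vectorized reduction over the channel axis
vec_sum = feature_map.sum(axis=2)

print(np.allclose(loop_sum, vec_sum))  # True
```

Writing it as `feature_map.sum(axis=2)` avoids building the intermediate list when only the summed image is needed.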
3. Summary
In short: return the tensor of the layer you want to visualize from the model definition, run it with sess.run() to obtain the feature-map data, and finally display it with visualize_feature_map(ConVis) and visualize_feature_map_sum(ConVis).
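As a quick sanity check of the grid-sizing logic, `get_row_col` (reproduced here on its own) always returns a subplot grid large enough to hold all the channel images, whatever the channel count:

```python
def get_row_col(num_pic):
    # Near-square grid: round the square root for rows, add a column if needed
    squr = num_pic ** 0.5
    row = round(squr)
    col = row + 1 if squr - row > 0 else row
    return row, col

# Typical channel counts in a VGG-style network
for n in (3, 5, 64, 256, 512):
    row, col = get_row_col(n)
    assert row * col >= n  # the grid always fits every subplot
    print(n, (row, col))
```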