TFLearn Beginner Notes

import tflearn
conv1 = tflearn.conv_2d(x, 32, 5, activation='relu', name='conv1')
fc2 = tflearn.fully_connected(fc1, 32, activation='tanh', regularizer='L2')
# The line above is equivalent to:
fc2 = tflearn.fully_connected(fc1, 32)
tflearn.add_weights_regularizer(fc2, loss='L2')
fc2 = tflearn.tanh(fc2)
# Optimizer, Objective and Metric
reg = tflearn.regression(fc4, optimizer='rmsprop', metric='accuracy', loss='categorical_crossentropy')
# Ops can also be defined externally for finer customization:
momentum = tflearn.optimizers.Momentum(learning_rate=0.1, lr_decay=0.96, decay_step=200)
top5 = tflearn.metrics.Top_k(k=5)  # for explanations of top_k see https://blog.csdn.net/uestc_c2_403/article/details/73187915 and https://blog.csdn.net/Enchanted_ZhouH/article/details/77200592
reg = tflearn.regression(fc4, optimizer=momentum, metric=top5, loss='categorical_crossentropy')
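For intuition, a top-k metric counts a prediction as correct when the true label appears among the k highest-scoring classes. A minimal NumPy sketch of that idea (plain NumPy, not the TFLearn metric itself):

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of rows whose true label is among the k highest scores."""
    # Sort class indices by descending score and keep the top k per row
    topk = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

scores = np.array([[0.1, 0.3, 0.6],   # true label 2 -> top-1 hit
                   [0.5, 0.4, 0.1]])  # true label 1 -> only a top-2 hit
labels = np.array([2, 1])
print(top_k_accuracy(scores, labels, 1))  # 0.5
print(top_k_accuracy(scores, labels, 2))  # 1.0
```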

# Training, evaluating, predicting
network = ...  # (some layers)
network = tflearn.regression(network, optimizer='sgd', loss='categorical_crossentropy')
model = tflearn.DNN(network)
model.fit(X, Y)
# A trained model can be loaded and used for prediction directly
network = ...
model = tflearn.DNN(network)
model.load('model.tflearn')
model.predict(X)

# TFLearn can also manage TensorBoard logs
model = tflearn.DNN(network, tensorboard_verbose=3)
# tensorboard_verbose levels:
# 0: Loss & Metric only
# 1: Loss, Metric & Gradients
# 2: Loss, Metric, Gradients & Weights
# 3: Loss, Metric, Gradients, Weights, Activations & Sparsity (best visualization)

tflearn.layers.merge_ops.merge(tensors_list, mode, axis=1, name='Merge') merges a list of tensors into one. The merge mode must be specified; supported values for mode are:
'concat': concatenate outputs along specified axis
'elemwise_sum': outputs element-wise sum
'elemwise_mul': outputs element-wise multiplication
'sum': outputs element-wise sum along specified axis
'mean': outputs element-wise average along specified axis
'prod': outputs element-wise multiplication along specified axis
'max': outputs max elements along specified axis
'min': outputs min elements along specified axis
'and': `logical and` between outputs elements along specified axis
'or': `logical or` between outputs elements along specified axis

axis: an integer, the axis used by the merging mode. In most cases: 0 for 'concat' and 1 for the other modes.
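To make a few of these modes concrete, here is a NumPy sketch of what they compute on two small tensors (illustrative only; the arrays a and b stand in for layer outputs):

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5., 6.], [7., 8.]])

concat = np.concatenate([a, b], axis=1)  # 'concat' along axis 1 -> shape (2, 4)
esum   = a + b                           # 'elemwise_sum'
emul   = a * b                           # 'elemwise_mul'
emax   = np.maximum(a, b)                # element-wise max of the two inputs

print(concat.shape)
```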

 

# Saving and loading a model is just as simple
model.save('my_model.tflearn')
model.load('my_model.tflearn')

 

# Weights can also be read and written with get_weights and set_weights
import numpy as np
input_data = tflearn.input_data(shape=[None, 784])
fc1 = tflearn.fully_connected(input_data, 64)
fc2 = tflearn.fully_connected(fc1, 10, activation='softmax')
net = tflearn.regression(fc2)
model = tflearn.DNN(net)
# Get the weight values of fc2
model.get_weights(fc2.W)
# Assign new random weights to fc2 (shape must match: 64x10)
model.set_weights(fc2.W, np.random.rand(64, 10))

 

# Retrieving layer variables is also simple
fc1 = tflearn.fully_connected(input_layer, 64, name='fc_layer_1')
fc1_weights_var = fc1.W
fc1_biases_var = fc1.b
# Or retrieve them by layer name
fc1_vars = tflearn.get_layer_variables_by_name('fc_layer_1')
fc1_weights_var = fc1_vars[0]
fc1_biases_var = fc1_vars[1]
# When transferring or fine-tuning a model, the restore argument controls whether
# a layer's weights are restored from the checkpoint; restore only applies to weights
fc_layer = tflearn.fully_connected(input_layer, 32)                 # weights restored
fc_layer = tflearn.fully_connected(input_layer, 32, restore=False)  # weights not restored
An example that restores the weights of every layer except the fully connected one: https://github.com/tflearn/tflearn/blob/master/examples/basics/finetuning.py
# Data preprocessing and data augmentation. TFLearn's data flow is designed as a
# computing pipeline: while the GPU trains the model, the data is processed on the CPU.
# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
# Zero-center (mean computed over the whole dataset)
img_prep.add_featurewise_zero_center()
# Standard normalization (std computed over the whole dataset)
img_prep.add_featurewise_stdnorm()
# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
# Randomly flip images left/right
img_aug.add_random_flip_leftright()
# Attach these methods to the input layer
network = input_data(shape=[None, 32, 32, 3], data_preprocessing=img_prep, data_augmentation=img_aug)
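For intuition, featurewise zero-centering and std normalization amount to subtracting the dataset-wide mean and dividing by the dataset-wide standard deviation. A NumPy sketch of that computation (an approximation of what the two preprocessing steps do, not TFLearn code):

```python
import numpy as np

# Toy "dataset": 4 grayscale 2x2 images
X = np.arange(16, dtype=np.float64).reshape(4, 2, 2)

mean = X.mean()  # scalar mean over the whole dataset (zero-center)
std = X.std()    # scalar std over the whole dataset (stdnorm)
X_prep = (X - mean) / std

# After preprocessing the dataset has mean ~0 and std ~1
print(X_prep.mean(), X_prep.std())
```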

See the Data Preprocessing and Data Augmentation docs for more details.

 

# Mixing TensorFlow and TFLearn: TFLearn layers can be combined with raw TensorFlow ops
X = tf.placeholder(shape=(None, 784), dtype=tf.float32)
net = tf.reshape(X, [-1, 28, 28, 1])
# A TFLearn convolutional layer
net = tflearn.conv_2d(net, 32, 3, activation='relu')
# A raw TensorFlow max-pooling op
net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
Example: https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/layers.py
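The max-pooling op above takes the maximum over each 2x2 window with stride 2. A pure-NumPy sketch of the same operation on a single-channel image:

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max pooling with stride 2 on an (H, W) array; H and W must be even."""
    h, w = img.shape
    # Split into non-overlapping 2x2 blocks, then take the max of each block
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 1, 2, 3],
                [1, 1, 4, 4]])
print(max_pool_2x2(img))
# [[4 8]
#  [9 4]]
```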




# Built-in ops: TFLearn's built-in ops are compatible with any TensorFlow expression; see https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/builtin_ops.py

 

Trainer / Evaluator / Predictor
# TFLearn uses a TrainOp class to represent an optimization process
trainop = TrainOp(net=my_network, loss=loss, metric=accuracy)
# Any number of TrainOps can then be passed to a Trainer class, which handles the
# whole training process, treating all TrainOps together as one model
model = Trainer(train_ops=trainop, tensorboard_dir='tmp/tflearn')
model.fit(feed_dicts={input_placeholder: X, target_placeholder: Y})
# Most models have a single optimization process, but more complex models can
# handle several at once
model = Trainer(train_ops=[trainop1, trainop2])
model.fit(feed_dicts=[{in1: X1, label1: Y1}, {in2: X2, label2: Y2}])
See http://tflearn.org/helpers/trainer/ for TrainOp and Trainer; example:
https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/builtin_ops.py

 

# For prediction, TFLearn uses the Evaluator class. It works much like Trainer:
# it takes the network as a parameter and returns predictions
model = Evaluator(network)
model.predict(feed_dict={input_placeholder: X})
# For layers that behave differently at training and testing time (such as dropout
# and batch normalization), Trainer uses a boolean variable (is_training) that
# indicates whether the network is being used for training or testing. It is stored
# under tf.GraphKeys.IS_TRAINING as its first (and only) element, so such layers
# should be defined using a conditional op:
# Dropout example
def apply_dropout():
    return tf.nn.dropout(x, keep_prob)

is_training = tflearn.get_training_mode()  # retrieve the is_training variable
tf.cond(is_training, apply_dropout, lambda: x)  # apply dropout only at training time
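The train/test switch matters because dropout must be a no-op at inference time. A pure-NumPy sketch of (inverted) dropout gated by an is_training flag, analogous to the tf.cond pattern above:

```python
import numpy as np

def dropout(x, keep_prob, is_training, rng):
    """Inverted dropout: active only in training mode, identity at test time."""
    if not is_training:
        return x
    # Keep each unit with probability keep_prob, rescaling survivors by 1/keep_prob
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones((2, 3))
train_out = dropout(x, 0.5, True, rng)   # some units zeroed, rest scaled to 2.0
test_out = dropout(x, 0.5, False, rng)   # unchanged
print((test_out == x).all())  # True
```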

 

# For convenience, TFLearn provides functions to retrieve that variable or change its value
tflearn.is_training(True)   # switch to training mode
tflearn.is_training(False)  # switch to inference mode
# During the training cycle, TFLearn can track training metrics and interact with
# them through the set of methods of the Callback interface. To simplify metric
# retrieval, each callback method receives a TrainingState object that tracks the
# state (e.g. current epoch, step, batch iteration) and metrics (e.g. current
# validation accuracy, global accuracy, ...).

class MonitorCallback(tflearn.callbacks.Callback):
    def __init__(self, api):
        self.my_monitor_api = api

    def on_epoch_end(self, training_state):
        self.my_monitor_api.send({
            'accuracy': training_state.global_acc,
            'loss': training_state.global_loss,
        })

# Then pass it to the model.fit call
monitor_callback = MonitorCallback(api)  # api is your own API class
model = ...
model.fit(..., callbacks=monitor_callback)
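The callback mechanism boils down to the trainer invoking on_epoch_end with a state object at the end of each epoch. A pure-Python sketch of that flow, with a hypothetical TrainingState stand-in and a plain list in place of a real monitoring API:

```python
class TrainingState:
    """Minimal stand-in for TFLearn's TrainingState (only the two fields used here)."""
    def __init__(self, acc, loss):
        self.global_acc = acc
        self.global_loss = loss

class MonitorCallback:
    def __init__(self, send_fn):
        self.send = send_fn  # anything callable: an API client, a logger, ...

    def on_epoch_end(self, training_state):
        self.send({'accuracy': training_state.global_acc,
                   'loss': training_state.global_loss})

records = []
cb = MonitorCallback(records.append)
# The trainer would call this once per epoch:
cb.on_epoch_end(TrainingState(0.9, 0.25))
print(records)  # [{'accuracy': 0.9, 'loss': 0.25}]
```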

 

# Variables: defining a variable in TFLearn is simple
import tflearn.variables as vs
my_var = vs.variable('W', shape=[784, 12], initializer='truncated_normal', regularizer='L2', device='/gpu:0')
Example: https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/variables.py
# Summaries: when using the Trainer class, managing summaries is simple. Just store
# the activations to monitor in tf.GraphKeys.ACTIVATIONS, then specify a verbosity
# level to control the visualization depth
model = Trainer(network, loss=loss, metric=acc, tensorboard_verbose=3)
# TFLearn ops can also be used to quickly add summaries to the current TensorFlow graph
import tflearn.helpers.summarizer as s
s.summarize_variables(train_vars=[...])
# Regularization can be added with TFLearn's regularizers; weight and activation
# regularization are currently supported
# Add L2 regularization to a variable
W = tf.Variable(tf.random_normal([784, 256]), name="W")
tflearn.add_weights_regularizer(W, 'L2', weight_decay=0.001)
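For reference, an L2 weight penalty of this kind is weight_decay times half the sum of squared weights (TensorFlow's l2_loss convention; TFLearn's exact scaling may differ). A NumPy sketch:

```python
import numpy as np

def l2_penalty(w, weight_decay):
    """L2 weight regularization term: weight_decay * sum(w**2) / 2."""
    return weight_decay * np.sum(w ** 2) / 2.0

w = np.array([[1.0, -2.0], [3.0, 0.0]])
print(l2_penalty(w, 0.001))  # 0.001 * (1 + 4 + 9) / 2 = 0.007
```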
# Data preprocessing utilities: http://tflearn.org/data_utils/

 

 

 

 

 
