Creating custom Estimators

Pre-made Estimators are subclasses of the tf.estimator.Estimator base class, while custom Estimators are instances of tf.estimator.Estimator itself.


A model function (or model_fn) implements the ML algorithm. The only difference between working with pre-made Estimators and custom Estimators is:
with pre-made Estimators, someone already wrote the model function for you
with custom Estimators, you must write the model function yourself


The model function we'll use has the following call signature:
###
def my_model_fn(
    features,  # batch of features returned by the input function
    labels,    # batch of labels returned by the input function
    mode,      # an instance of tf.estimator.ModeKeys
    params):   # additional configuration passed to the Estimator constructor
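

The features and labels arguments are supplied by your input function, mode is set by whichever Estimator method you call (train, evaluate, or predict), and params is whatever you pass to the Estimator constructor. As a minimal sketch, instantiating a custom Estimator around this model_fn might look as follows; the feature names, model_dir path, and params keys ('feature_columns', 'hidden_units', 'n_classes') are illustrative choices, not required names:
###
import tensorflow as tf

# illustrative feature columns for a three-class flower classifier
feature_columns = [tf.feature_column.numeric_column(key=key)
                   for key in ['SepalLength', 'SepalWidth',
                               'PetalLength', 'PetalWidth']]

classifier = tf.estimator.Estimator(
    model_fn=my_model_fn,          # the model function defined above
    model_dir='models/iris',       # hypothetical checkpoint directory
    params={
        'feature_columns': feature_columns,
        'hidden_units': [10, 10],  # two hidden layers of 10 nodes each
        'n_classes': 3,            # the model distinguishes 3 classes
    })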


The first line of the model_fn calls tf.feature_column.input_layer to convert the feature dictionary and feature_columns into input for your model, as follows:
###
net = tf.feature_column.input_layer(features, params['feature_columns'])


If you are creating a deep neural network, you must define one or more hidden layers. The Layers API (tf.layers) provides a rich set of functions to define all types of hidden layers, including convolutional, pooling, and dropout layers.
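

For example, a fully connected network's hidden layers and output logits might be built like the sketch below, which assumes the illustrative 'hidden_units' and 'n_classes' params keys from earlier and the net tensor produced by input_layer:
###
# build one fully connected (dense) hidden layer per entry in hidden_units
for units in params['hidden_units']:
    net = tf.layers.dense(net, units=units, activation=tf.nn.relu)

# output layer: one raw, un-normalized value (a logit) per class
logits = tf.layers.dense(net, units=params['n_classes'], activation=None)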


Later on, these logits will be transformed into probabilities by the tf.nn.softmax function.


When you call the Estimator's train method, the Estimator framework invokes your model function with mode set to ModeKeys.TRAIN.


For each mode value, your code must return an instance of tf.estimator.EstimatorSpec, which contains the information the caller requires.
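

In outline, then, the body of a model_fn branches on mode roughly as in the sketch below; the details of each branch are filled in later in this section:
###
if mode == tf.estimator.ModeKeys.PREDICT:
    ...  # return an EstimatorSpec carrying the predictions

# compute the loss here; it is needed for both evaluation and training

if mode == tf.estimator.ModeKeys.EVAL:
    ...  # return an EstimatorSpec carrying the loss and evaluation metrics

if mode == tf.estimator.ModeKeys.TRAIN:
    ...  # return an EstimatorSpec carrying the loss and a training op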


When the Estimator's predict method is called, the model_fn receives mode=ModeKeys.PREDICT. In this case, the model function must return a tf.estimator.EstimatorSpec containing the prediction.


The model must have been trained prior to making a prediction. The trained model is stored on disk in the model_dir directory established when you instantiated the Estimator.


The predictions dictionary holds the following three key/value pairs:
class_ids holds the class id (0, 1, or 2) representing the model's prediction of the most likely species for this example
probabilities holds the three class probabilities
logits holds the raw logit values
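

Put together, the PREDICT branch might look like the following sketch; the dictionary keys match the three pairs just described, and predicted_classes is derived from the logits defined earlier:
###
# the index of the largest logit is the most likely class
predicted_classes = tf.argmax(logits, axis=1)

if mode == tf.estimator.ModeKeys.PREDICT:
    predictions = {
        'class_ids': predicted_classes[:, tf.newaxis],  # most likely class per example
        'probabilities': tf.nn.softmax(logits),         # logits converted to probabilities
        'logits': logits,                               # raw logit values
    }
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)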


For both training and evaluation we need to calculate the model's loss. This is the objective that will be optimized.


We calculate loss by calling tf.losses.sparse_softmax_cross_entropy. This function returns the average over the whole batch:
###
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)


TensorFlow provides a Metrics module, tf.metrics, to calculate common metrics.


The tf.metrics.accuracy function compares our predictions against the true values, that is, against the labels provided by the input function.
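

For example, the EVAL branch might report accuracy as in the sketch below, which reuses the predicted_classes and loss tensors from the earlier snippets:
###
# compute an accuracy metric and expose it under the name 'accuracy'
accuracy = tf.metrics.accuracy(labels=labels,
                               predictions=predicted_classes,
                               name='acc_op')
metrics = {'accuracy': accuracy}

if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)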


The tf.train package provides many other optimizers; feel free to experiment with them.


The minimize method also takes a global_step parameter. TensorFlow uses this parameter to count the number of training steps that have been processed (to know when to end a training run).
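

A sketch of the TRAIN branch using one such optimizer; Adagrad is just one possible choice here, and the learning rate is illustrative:
###
# minimize the loss; global_step is incremented on every training step
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())

if mode == tf.estimator.ModeKeys.TRAIN:
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)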