MXNet Model API

The model API in mxnet is not really an API; it is just a wrapper over ndarray that makes model training and prediction easier to use.

Training a Model

To train a model, follow two steps: first configure the network using symbol, then call model.FeedForward.create to create the model. The following code creates a two-layer neural network.
# configure a two-layer neural network
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(fc1, name='relu1', act_type='relu')
fc2 = mx.symbol.FullyConnected(act1, name='fc2', num_hidden=64)
softmax = mx.symbol.SoftmaxOutput(fc2, name='sm')
# create a model
model = mx.model.FeedForward.create(
     softmax,
     X=data_set,
     num_epoch=num_epoch,
     learning_rate=0.01)
You can also construct and fit a model in a scikit-learn-style two-step way:
# create a model using sklearn-style two step way
model = mx.model.FeedForward(
     softmax,
     num_epoch=num_epoch,
     learning_rate=0.01)

model.fit(X=data_set)
For more functionality, see the Model API Reference below.

Saving a Model

# save a model to mymodel-symbol.json and mymodel-0100.params
prefix = 'mymodel'
iteration = 100
model.save(prefix, iteration)

# load model back
model_loaded = mx.model.FeedForward.load(prefix, iteration)
We usually train on the data in one script, saving the model as a prefix plus epoch number (e.g. mymodel-0100.params), and then load the model in another script to run prediction.
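As a hedged sketch of such a prediction script (val_iter is an assumed DataIter matching the network's input; nothing here is prescribed by the library beyond load and predict):
import mxnet as mx

# load the model saved as mymodel-symbol.json / mymodel-0100.params
model = mx.model.FeedForward.load('mymodel', 100)
# run prediction over the assumed DataIter
probs = model.predict(X=val_iter)
print(probs.shape)  # e.g. (num_examples, num_classes) for a softmax output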

Periodic Checkpointing

It is often necessary to checkpoint the model periodically. To do so, simply add a do_checkpoint(path) callback when creating the model. Training will then automatically checkpoint the model to the specified location at the end of each epoch.
prefix='models/chkpt'
model = mx.model.FeedForward.create(
     softmax,
     X=data_set,
     epoch_end_callback=mx.callback.do_checkpoint(prefix),
     ...)
You can load the checkpointed model later using FeedForward.load.

Using Multiple Devices

Simply set ctx to the list of devices (CPU, GPU) you want to train on:
devices = [mx.gpu(i) for i in range(num_device)]
model = mx.model.FeedForward.create(
     softmax,
     X=dataset,
     ctx=devices,
     ...)
Training will then run in a data-parallel fashion across the GPUs you specified.

Model API Reference

The MXNet model module.

mxnet.model.BatchEndParam

alias of BatchEndParams



mxnet.model.save_checkpoint(prefix, epoch, symbol, arg_params, aux_params)

Checkpoint the model data into file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – The epoch number of the model.
  • symbol (Symbol) – The input symbol
  • arg_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s auxiliary states.

Notes

  • prefix-symbol.json will be saved for symbol.
  • prefix-epoch.params will be saved for parameters.

Note: prefix may include a directory path. Here epoch is the epoch number of the model; an epoch is one full pass over all the training samples (forward and backward propagation), a much coarser unit than a single iteration.

A model has a single symbol file, but typically one params file per epoch. Because parameters from later epochs are usually better, you can delete the unneeded params files and keep only the last one.
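For illustration, a minimal sketch of calling save_checkpoint directly (assuming model is the trained FeedForward from the earlier example and softmax is its output symbol):
# writes chkpt-symbol.json and chkpt-0010.params
mx.model.save_checkpoint('chkpt', 10, softmax, model.arg_params, model.aux_params)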

mxnet.model.load_checkpoint(prefix, epoch)

Load model checkpoint from file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – Epoch number of model we would like to load.
Returns:

  • symbol (Symbol) – The symbol configuration of computation network.
  • arg_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray) – Model parameter, dict of name to NDArray of net’s auxiliary states.
When loading, epoch is the epoch number of the checkpoint you want to load, usually the latest one.
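A matching sketch for load_checkpoint (the prefix and epoch are assumed to come from a previous save_checkpoint call):
# rebuild a FeedForward model by hand from the three returned values
sym, arg_params, aux_params = mx.model.load_checkpoint('chkpt', 10)
model = mx.model.FeedForward(sym, arg_params=arg_params, aux_params=aux_params)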

class mxnet.model.FeedForward(symbol, ctx=None, num_epoch=None, epoch_size=None, optimizer='sgd', initializer=<mxnet.initializer.Uniform object>, numpy_batch_size=128, arg_params=None, aux_params=None, allow_extra_params=False, begin_epoch=0, **kwargs)

Model class of MXNet for training and predicting feedforward nets. This class is designed for a single-data single output supervised network.

Parameters:
  • symbol (Symbol) – The symbol configuration of computation network.
  • ctx (Context or list of Context, optional) – The device context of training and prediction. To use multi GPU training, pass in a list of gpu contexts.
  • num_epoch (int, optional) – Training parameter, the number of training epochs.
  • epoch_size (int, optional) – Number of batches in an epoch. By default, it is set to ceil(num_train_examples / batch_size).
  • optimizer (str or Optimizer, optional) – Training parameter, name or optimizer object for training.
  • initializer (initializer function, optional) – Training parameter, the initialization scheme used.
  • numpy_batch_size (int, optional) – The batch size of training data. Only needed when input array is numpy.
  • arg_params (dict of str to NDArray, optional) – Model parameter, dict of name to NDArray of net’s weights.
  • aux_params (dict of str to NDArray, optional) – Model parameter, dict of name to NDArray of net’s auxiliary states.
  • allow_extra_params (boolean, optional) – Whether to allow extra parameters that are not needed by the symbol to be passed via aux_params and arg_params. If True, no error is thrown when aux_params or arg_params contain more parameters than needed.
  • begin_epoch (int, optional) – The beginning training epoch.
  • kwargs (dict) – The additional keyword arguments passed to optimizer.
Note: begin_epoch is the epoch to resume training from, so every epoch after it will be trained (again).
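A hedged sketch of constructing a FeedForward with several of these arguments set explicitly (softmax is the symbol from the earlier example; the Xavier initializer is just an illustrative choice):
model = mx.model.FeedForward(
     softmax,
     ctx=mx.gpu(0),
     num_epoch=10,
     optimizer='sgd',
     initializer=mx.initializer.Xavier(),
     learning_rate=0.1,  # extra kwargs like these are forwarded to the optimizer
     momentum=0.9)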

predict(X, num_batch=None, return_data=False, reset=True)

Run the prediction; always uses only one device.

Parameters:
  • X (mxnet.DataIter) – The data to predict on.
  • num_batch (int or None) – The number of batches to run. Goes through all batches if None.

Returns: y – The predicted value of the output.
Return type: numpy.ndarray or a list of numpy.ndarray if the network has multiple outputs.
This runs prediction on a single device; with num_batch=None every batch is processed, and the predicted outputs are returned.
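A minimal usage sketch (val_iter is an assumed DataIter):
probs = model.predict(X=val_iter)
# a network with multiple outputs would return a list of numpy arrays instead
print(probs[:5])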

score(X, eval_metric='acc', num_batch=None, batch_end_callback=None, reset=True)

Run the model on X and calculate the score with eval_metric.

Parameters:
  • X (mxnet.DataIter) – The data to evaluate on.
  • eval_metric (metric.metric) – The metric for calculating the score.
  • num_batch (int or None) – The number of batches to run. Goes through all batches if None.

Returns: s – the final score
Return type: float
This runs the model on X and computes the final score with the given evaluation metric.
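For example (again assuming val_iter):
acc = model.score(X=val_iter, eval_metric='acc')
print('validation accuracy: %f' % acc)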

fit(X, y=None, eval_data=None, eval_metric='acc', epoch_end_callback=None, batch_end_callback=None, kvstore='local', logger=None, work_load_list=None, monitor=None, eval_batch_end_callback=None)

Fit the model.

Parameters:
  • X (DataIter, or numpy.ndarray/NDArray) – Training data. If X is a DataIter, the name (or, if not available, the position) of its outputs should match the corresponding variable names defined in the symbolic graph.
  • y (numpy.ndarray/NDArray, optional) – Training set label. If X is numpy.ndarray/NDArray, y is required to be set. While y can be 1D or 2D (with 2nd dimension as 1), its 1st dimension must be the same as X, i.e. the number of data points and labels should be equal.
  • eval_data (DataIter or numpy.ndarray/list/NDArray pair) – If eval_data is numpy.ndarray/list/NDArray pair, it should be (valid_data, valid_label).
  • eval_metric (metric.EvalMetric or str or callable) – The evaluation metric, name of evaluation metric. Or a customize evaluation function that returns the statistics based on minibatch.
  • epoch_end_callback (callable(epoch, symbol, arg_params, aux_states)) – A callback that is invoked at end of each epoch. This can be used to checkpoint model each epoch.
  • batch_end_callback (callable(epoch)) – A callback that is invoked at the end of each batch, for printing purposes.
  • kvstore (KVStore or str, optional) – The KVStore or a string kvstore type: ‘local’, ‘dist_sync’, ‘dist_async’. Defaults to ‘local’; often no need to change for a single machine.
  • logger (logging logger, optional) – When not specified, default logger will be used.
  • work_load_list (float or int, optional) – The list of work load for different devices, in the same order as ctx
In practice, y must have the same first dimension as X, kvstore can almost always stay 'local' on a single machine, and epoch_end_callback is the usual hook for checkpointing.
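A hedged sketch of a typical fit call (train_iter and val_iter are assumed DataIters; the batch size passed to Speedometer should match theirs):
model.fit(
     X=train_iter,
     eval_data=val_iter,
     eval_metric='acc',
     epoch_end_callback=mx.callback.do_checkpoint('chkpt'),
     batch_end_callback=mx.callback.Speedometer(batch_size=128, frequent=50),
     kvstore='local')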

save(prefix, epoch=None)

Checkpoint the model into file. You can also use pickle if you only work in Python. The advantage of load/save is that the file is language agnostic: a model saved with save can be loaded by mxnet's other language bindings. You also get the benefit of being able to load/save directly from cloud storage (e.g. S3, HDFS).

Parameters: prefix (str) – Prefix of model name.

Notes

  • prefix-symbol.json will be saved for symbol.
  • prefix-epoch.params will be saved for parameters.
static load(prefix, epoch, ctx=None, **kwargs)

Load model checkpoint from file.

Parameters:
  • prefix (str) – Prefix of model name.
  • epoch (int) – epoch number of model we would like to load.
  • ctx (Context or list of Context, optional) – The device context of training and prediction.
  • kwargs (dict) – other parameters for model, including num_epoch, optimizer and numpy_batch_size
Returns:

model – The loaded model that can be used for prediction.

Return type:

FeedForward

Saving and loading are straightforward, so I won't elaborate on them.

static create(symbol, X, y=None, ctx=None, num_epoch=None, epoch_size=None, optimizer='sgd', initializer=<mxnet.initializer.Uniform object>, eval_data=None, eval_metric='acc', epoch_end_callback=None, batch_end_callback=None, kvstore='local', logger=None, work_load_list=None, eval_batch_end_callback=None, **kwargs)

Functional style to create a model. This function will be more consistent with functional languages such as R, where mutation is not allowed.

Parameters:
  • symbol (Symbol) – The symbol configuration of computation network.
  • X (DataIter) – Training data
  • y (numpy.ndarray, optional) – If X is numpy.ndarray y is required to set
  • ctx (Context or list of Context, optional) – The device context of training and prediction. To use multi GPU training, pass in a list of gpu contexts.
  • num_epoch (int, optional) – Training parameter, the number of training epochs.
  • epoch_size (int, optional) – Number of batches in an epoch. By default, it is set to ceil(num_train_examples / batch_size).
  • optimizer (str or Optimizer, optional) – Training parameter, name or optimizer object for training.
  • initializer (initializer function, optional) – Training parameter, the initialization scheme used.
  • eval_data (DataIter or numpy.ndarray pair) – If eval_set is numpy.ndarray pair, it should be (valid_data, valid_label)
  • eval_metric (metric.EvalMetric or str or callable) – The evaluation metric, name of evaluation metric. Or a customize evaluation function that returns the statistics based on minibatch.
  • epoch_end_callback (callable(epoch, symbol, arg_params, aux_states)) – A callback that is invoked at end of each epoch. This can be used to checkpoint model each epoch.
  • batch_end_callback (callable(epoch)) – A callback that is invoked at end of each batch For print purpose
  • kvstore (KVStore or str, optional) – The KVStore or a string kvstore type: ‘local’, ‘dist_sync’, ‘dist_async’. Defaults to ‘local’; often no need to change for a single machine.
  • logger (logging logger, optional) – When not specified, default logger will be used.
  • work_load_list (list of float or int, optional) – The list of work load for different devices, in the same order as ctx
This model-creation API is much the same as what we have already seen.

The APIs below are less commonly used.


Initializer API Reference

class mxnet.initializer.Initializer

Base class for Initializer.

__call__(name, arr)

Override () function to do Initialization

Parameters:
  • name (str) – name of the corresponding ndarray
  • arr (NDArray) – ndarray to be Initialized
class mxnet.initializer.Load(param, default_init=None, verbose=False)

Initialize by loading pretrained param from file or dict

Parameters:
  • param (str or dict of str->NDArray) – param file or dict mapping name to NDArray.
  • default_init (Initializer) – default initializer when name is not found in param.
  • verbose (bool) – log source when initializing.
class mxnet.initializer.Mixed(patterns, initializers)

Initialize with mixed Initializer

Parameters:
  • patterns (list of str) – list of regular expression patterns to match parameter names.
  • initializers (list of Initializer) – list of Initializers corresponding to patterns
class mxnet.initializer.Uniform(scale=0.07)

Initialize the weight with uniform [-scale, scale]

Parameters: scale (float, optional) – The scale of uniform distribution
class mxnet.initializer.Normal(sigma=0.01)

Initialize the weight with normal(0, sigma)

Parameters: sigma (float, optional) – Standard deviation for gaussian distribution.
class mxnet.initializer.Orthogonal(scale=1.414, rand_type='uniform')

Initialize the weight as an orthogonal matrix

Parameters:
  • scale (float optional) – scaling factor of weight
  • rand_type (string optional) – use “uniform” or “normal” random number to initialize weight
Reference: “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks”, arXiv preprint.
class mxnet.initializer.Xavier(rnd_type='uniform', factor_type='avg', magnitude=3)

Initialize the weight with Xavier or similar initialization scheme.

Parameters:
  • rnd_type (str, optional) – Use `gaussian` or `uniform` to init
  • factor_type (str, optional) – Use `avg`, `in`, or `out` to init
  • magnitude (float, optional) – scale of random number range
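As a sketch of how these initializers plug into model training (the Mixed patterns below are illustrative assumptions, not required values):
# Xavier initialization for every parameter
init = mx.initializer.Xavier(rnd_type='gaussian', factor_type='in', magnitude=2)

# or mix initializers by parameter-name pattern: biases from Normal, the rest Uniform
init = mx.initializer.Mixed(
     ['.*_bias', '.*'],
     [mx.initializer.Normal(sigma=0.01), mx.initializer.Uniform(scale=0.07)])

model = mx.model.FeedForward(softmax, num_epoch=10, initializer=init)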



Evaluation Metric API

Online evaluation metric module.

mxnet.metric.check_label_shapes(labels, preds, shape=0)

Check to see if the two arrays are the same size.

class mxnet.metric.EvalMetric(name, num=None)

Base class of all evaluation metrics.

update(labels, preds)

Update the internal evaluation.

Parameters:
  • labels (list of NDArray) – The labels of the data.
  • preds (list of NDArray) – Predicted values.
reset()

Clear the internal statistics to initial state.

get()

Get the current evaluation result.

Returns:
  • name (str) – Name of the metric.
  • value (float) – Value of the evaluation.
get_name_value()

Get zipped name and value pairs

class mxnet.metric.CompositeEvalMetric(**kwargs)

Manage multiple evaluation metrics.

add(metric)

Add a child metric.

get_metric(index)

Get a child metric.

class mxnet.metric.Accuracy

Calculate accuracy

class mxnet.metric.TopKAccuracy(**kwargs)

Calculate top k predictions accuracy

class mxnet.metric.F1

Calculate the F1 score of a binary classification problem.

class mxnet.metric.MAE

Calculate Mean Absolute Error loss

class mxnet.metric.MSE

Calculate Mean Squared Error loss

class mxnet.metric.RMSE

Calculate Root Mean Squred Error loss

class mxnet.metric.CrossEntropy

Calculate Cross Entropy loss

class mxnet.metric.Torch

Dummy metric for torch criterions

class mxnet.metric.CustomMetric(feval, name=None, allow_extra_outputs=False)

Custom evaluation metric that takes a NDArray function.

Parameters:
  • feval (callable(label, pred)) – Customized evaluation function.
  • name (str, optional) – The name of the metric
  • allow_extra_outputs (bool) – If true, the prediction outputs can have extra outputs. This is useful in RNN, where the states are also produced in outputs for forwarding.
mxnet.metric.np(numpy_feval, name=None, allow_extra_outputs=False)

Create a customized metric from numpy function.

Parameters:
  • numpy_feval (callable(label, pred)) – Customized evaluation function.
  • name (str, optional) – The name of the metric.
  • allow_extra_outputs (bool) – If true, the prediction outputs can have extra outputs. This is useful in RNN, where the states are also produced in outputs for forwarding.
mxnet.metric.create(metric, **kwargs)

Create an evaluation metric.

Parameters: metric (str or callable) – The name of the metric, or a function providing statistics given pred, label NDArray
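A sketch of the different ways to obtain a metric described above (the numpy RMSE function is an assumed example, not part of the library):
import numpy as np

metric = mx.metric.create('acc')             # create by name

composite = mx.metric.CompositeEvalMetric()  # manage several metrics at once
composite.add(mx.metric.Accuracy())
composite.add(mx.metric.MSE())

# a custom metric built from a numpy function
def numpy_rmse(label, pred):
    return np.sqrt(np.mean((label - pred) ** 2))
rmse = mx.metric.np(numpy_rmse, name='rmse')
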
Optimizer API

Common Optimization algorithms with regularizations.

class mxnet.optimizer.Optimizer(rescale_grad=1.0, param_idx2name=None, wd=0.0, clip_gradient=None, learning_rate=0.01, lr_scheduler=None, sym=None)

Base class of all optimizers.

static register(klass)

Register optimizers to the optimizer factory

static create_optimizer(name, rescale_grad=1, **kwargs)

Create an optimizer with specified name.

Parameters:
  • name (str) – Name of required optimizer. Should be the name of a subclass of Optimizer. Case insensitive.
  • rescale_grad (float) – Rescaling factor on gradient.
  • kwargs (dict) – Parameters for optimizer
Returns:

opt – The result optimizer.

Return type:

Optimizer

create_state(index, weight)

Create additional optimizer state, such as momentum. Override in implementations.

update(index, weight, grad, state)

Update the parameters. Override in implementations.

set_lr_scale(args_lrscale)

set_lr_scale is deprecated. Use set_lr_mult instead.

set_lr_mult(args_lr_mult)

Set individual learning rate multipliers for parameters

Parameters: args_lr_mult (dict of string/int to float) – set the lr multiplier for name/index to float. Setting the multiplier by index is supported for backward compatibility, but we recommend using name and symbol.
set_wd_mult(args_wd_mult)

Set individual weight decay multipliers for parameters. By default the wd multiplier is 0 for all params whose name doesn't end with _weight, if param_idx2name is provided.

Parameters: args_wd_mult (dict of string/int to float) – set the wd multiplier for name/index to float. Setting the multiplier by index is supported for backward compatibility, but we recommend using name and symbol.
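For instance, a hedged sketch of setting per-parameter multipliers (the names fc1_weight and fc1_bias are assumptions based on the network defined earlier):
opt = mx.optimizer.SGD(learning_rate=0.01, momentum=0.9)
opt.set_lr_mult({'fc1_weight': 0.1})  # train the fc1 weights 10x slower
opt.set_wd_mult({'fc1_bias': 0.0})    # no weight decay on the fc1 bias
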
mxnet.optimizer.register(klass)

Register optimizers to the optimizer factory

class mxnet.optimizer.SGD(momentum=0.0, **kwargs)

A very simple SGD optimizer with momentum and weight regularization.

Parameters:
  • learning_rate (float, optional) – learning_rate of SGD
  • momentum (float, optional) – momentum value
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
  • param_idx2name (dict of string/int to float, optional) – special treatment of weight decay for parameters whose names end with bias, gamma, or beta
create_state(index, weight)

Create additional optimizer state such as momentum.

Parameters: weight (NDArray) – The weight data
update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.NAG(**kwargs)

SGD with Nesterov momentum. It is implemented according to https://github.com/torch/optim/blob/master/sgd.lua

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.SGLD(**kwargs)

Stochastic Langevin Dynamics Updater to sample from a distribution.

Parameters:
  • learning_rate (float, optional) – learning_rate of SGD
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
  • param_idx2name (dict of string/int to float, optional) – special treatment of weight decay for parameters whose names end with bias, gamma, or beta
create_state(index, weight)

Create additional optimizer state such as momentum.

Parameters: weight (NDArray) – The weight data
update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.ccSGD(momentum=0.0, **kwargs)

A very simple SGD optimizer with momentum and weight regularization. Implemented in C++.

Parameters:
  • learning_rate (float, optional) – learning_rate of SGD
  • momentum (float, optional) – momentum value
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.Adam(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, decay_factor=0.99999999, **kwargs)

Adam optimizer as described in [King2014].

[King2014] Diederik Kingma, Jimmy Ba, Adam: A Method for Stochastic Optimization, http://arxiv.org/abs/1412.6980

the code in this class was adapted from https://github.com/mila-udem/blocks/blob/master/blocks/algorithms/__init__.py#L765

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.001.
  • beta1 (float, optional) – Exponential decay rate for the first moment estimates. Default value is set to 0.9.
  • beta2 (float, optional) – Exponential decay rate for the second moment estimates. Default value is set to 0.999.
  • epsilon (float, optional) – Default value is set to 1e-8.
  • decay_factor (float, optional) – Default value is set to 1 - 1e-8.
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
create_state(index, weight)

Create additional optimizer state: mean, variance

Parameters: weight (NDArray) – The weight data
update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.AdaGrad(eps=1e-07, **kwargs)

AdaGrad optimizer of Duchi et al., 2011,

This code follows the version in http://arxiv.org/pdf/1212.5701v1.pdf Eq(5) by Matthew D. Zeiler, 2012. AdaGrad will help the network to converge faster in some cases.

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.05.
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • eps (float, optional) – A small float number to make the updating processing stable Default value is set to 1e-7.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
class mxnet.optimizer.RMSProp(gamma1=0.95, gamma2=0.9, **kwargs)

RMSProp optimizer of Tieleman & Hinton, 2012,

This code follows the version in http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.

Parameters:
  • learning_rate (float, optional) – Step size. Default value is set to 0.002.
  • gamma1 (float, optional) – decay factor of moving average for gradient, gradient^2. Default value is set to 0.95.
  • gamma2 (float, optional) – “momentum” factor. Default value is set to 0.9.
  • wd (float, optional) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
create_state(index, weight)

Create additional optimizer state: mean, variance.

Parameters: weight (NDArray) – The weight data

update(index, weight, grad, state)

Update the parameters.

Parameters:
  • index (int) – A unique integer key used to index the parameters
  • weight (NDArray) – weight ndarray
  • grad (NDArray) – grad ndarray
  • state (NDArray or other objects returned by init_state) – The auxiliary state used in optimization.
class mxnet.optimizer.AdaDelta(rho=0.9, epsilon=1e-05, **kwargs)

AdaDelta optimizer as described in Zeiler, M. D. (2012). ADADELTA: An adaptive learning rate method.

http://arxiv.org/abs/1212.5701

Parameters:
  • rho (float) – Decay rate for both squared gradients and delta x
  • epsilon (float) – The constant as described in the thesis
  • wd (float) – L2 regularization coefficient add to all the weights
  • rescale_grad (float, optional) – rescaling factor of gradient.
  • clip_gradient (float, optional) – clip gradient in range [-clip_gradient, clip_gradient]
class mxnet.optimizer.Test(**kwargs)

For test use

create_state(index, weight)

Create a state to duplicate weight

update(index, weight, grad, state)

performs w += rescale_grad * grad

mxnet.optimizer.create(name, rescale_grad=1, **kwargs)

Create an optimizer with specified name.

Parameters:
  • name (str) – Name of required optimizer. Should be the name of a subclass of Optimizer. Case insensitive.
  • rescale_grad (float) – Rescaling factor on gradient.
  • kwargs (dict) – Parameters for optimizer
Returns:

opt – The result optimizer.

Return type:

Optimizer

mxnet.optimizer.get_updater(optimizer)

Return a closure of the updater needed for kvstore

Parameters: optimizer (Optimizer) – The optimizer
Returns: updater – The closure of the updater
Return type: function
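To close, a small sketch tying the factory and updater together (the weight and grad NDArrays are stand-ins for real parameters and gradients):
opt = mx.optimizer.create('adam', learning_rate=0.001, rescale_grad=1.0 / 128)
updater = mx.optimizer.get_updater(opt)

weight = mx.nd.zeros((10,))  # assumed parameter
grad = mx.nd.ones((10,))     # assumed gradient
updater(0, grad, weight)     # updates the parameter with index 0 in place
print(weight.asnumpy())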


