TensorFlow (and Keras)


Basic TensorFlow operations

shape

Official docs: returns a 1-D tensor containing the shape of `input`.

tf.shape(
    input,
    out_type=tf.dtypes.int32,
    name=None
)

Args

  • **`input`**: A `Tensor` or `SparseTensor`.
  • **`out_type`**: (Optional) The specified output type of the operation (`int32` or `int64`). Defaults to [`tf.int32`](https://www.tensorflow.org/api_docs/python/tf#int32).
  • **`name`**: A name for the operation (optional).

Returns

A `Tensor` of type `out_type`.
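A quick sketch in eager mode (the tensor values here are just illustrative):

```python
import tensorflow as tf

t = tf.zeros([2, 3, 4])

# tf.shape returns the runtime shape as a 1-D tensor, int32 by default.
s = tf.shape(t)
print(s.numpy())  # [2 3 4]

# out_type switches the result to int64.
s64 = tf.shape(t, out_type=tf.int64)
```

Unlike the static `t.shape` attribute, `tf.shape` also works when dimensions are only known at run time.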

constant

Creates a constant tensor.

tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const'
)
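A minimal sketch of `tf.constant` (the values are chosen arbitrarily):

```python
import tensorflow as tf

# dtype is inferred from the value unless given explicitly.
c = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)  # shape (2, 2)

# With shape set, a scalar value is broadcast to fill the tensor.
filled = tf.constant(0.0, shape=(2, 3))
```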

logical

and / or

tf.logical_and(a, b)  # elementwise a && b
tf.logical_or(a, b)   # elementwise a || b
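Both operate elementwise on boolean tensors; a small sketch:

```python
import tensorflow as tf

a = tf.constant([True, True, False])
b = tf.constant([True, False, False])

# Elementwise a && b and a || b.
both = tf.logical_and(a, b)   # [True, False, False]
either = tf.logical_or(a, b)  # [True, True, False]
```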

image

resize

Resizes images to `size`.

tf.image.resize(
    images,
    size,
    method=ResizeMethod.BILINEAR,
    preserve_aspect_ratio=False,
    antialias=False,
    name=None
)

Arguments

  • **`images`**: 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor of shape `[height, width, channels]`.
  • **`size`**: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
  • **`method`**: ResizeMethod. Defaults to `bilinear`.
  • **`preserve_aspect_ratio`**: Whether to preserve the aspect ratio. If this is set, then `images` will be resized to a size that fits in `size` while preserving the aspect ratio of the original image. Scales up the image if `size` is bigger than the current size of the `image`. Defaults to False.
  • **`antialias`**: Whether to use an anti-aliasing filter when downsampling an image.
  • **`name`**: A name for this operation (optional).
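A sketch with a random batch (the image sizes are arbitrary):

```python
import tensorflow as tf

# A batch of 2 images, 100x80 with 3 channels.
imgs = tf.random.uniform([2, 100, 80, 3])

# Bilinear by default; the output dtype is float32.
out = tf.image.resize(imgs, [50, 40])  # shape (2, 50, 40, 3)

# Other methods can be selected by name.
nearest = tf.image.resize(imgs, [50, 40], method='nearest')
```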

tf.keras

Input

Input is used to instantiate a Keras tensor.

tf.keras.Input(
    shape=None,
    batch_size=None,
    name=None,
    dtype=None,
    sparse=False,
    tensor=None,
    ragged=False,
    **kwargs
)

Arguments

  • **`shape`**: A shape tuple (integers), not including the batch size. For instance, `shape=(32,)` indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be `None`; `None` elements represent dimensions where the shape is not known.
  • **`batch_size`**: optional static batch size (integer).
  • **`name`**: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
  • **`dtype`**: The data type expected by the input, as a string (`float32`, `float64`, `int32`…)
  • **`sparse`**: A boolean specifying whether the placeholder to be created is sparse. Only one of `ragged` and `sparse` can be True.
  • **`tensor`**: Optional existing tensor to wrap into the `Input` layer. If set, the layer will not create a placeholder tensor.
  • **`ragged`**: A boolean specifying whether the placeholder to be created is ragged. Only one of `ragged` and `sparse` can be True. In this case, values of `None` in the `shape` argument represent ragged dimensions. For more information about RaggedTensors, see https://www.tensorflow.org/guide/ragged_tensors.
  • **`**kwargs`**: deprecated arguments support.

Returns

A tensor.
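A minimal sketch; note that `shape` excludes the batch dimension:

```python
import tensorflow as tf

# Expects batches of 32-dimensional vectors; the batch size is left open.
x = tf.keras.Input(shape=(32,))
print(x.shape)  # (None, 32)
```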

layers

input

tf.keras.layers.Input

This is identical to `tf.keras.Input`.

LayerNormalization

`__init__`

__init__(
    axis=-1,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer='zeros',
    gamma_initializer='ones',
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    trainable=True,
    name=None,
    **kwargs
)

Arguments

  • **`axis`**: Integer or list of integers, the axis or axes that should be normalized (typically the features axis). Defaults to `-1`, the last dimension of the input.
  • **`epsilon`**: Small float added to the variance to avoid dividing by zero.
  • **`center`**: If True, add offset of `beta` to the normalized tensor. If False, `beta` is ignored.
  • **`scale`**: If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. [`nn.relu`](https://www.tensorflow.org/api_docs/python/tf/nn/relu)), this can be disabled since the scaling will be done by the next layer.
  • **`beta_initializer`**: Initializer for the beta weight. Defaults to zeros.
  • **`gamma_initializer`**: Initializer for the gamma weight. Defaults to ones.
  • **`beta_regularizer`**: Optional regularizer for the beta weight.
  • **`gamma_regularizer`**: Optional regularizer for the gamma weight.
  • **`beta_constraint`**: Optional constraint for the beta weight.
  • **`gamma_constraint`**: Optional constraint for the gamma weight.
  • **`trainable`**: Boolean, if `True` the variables will be marked as trainable.

call arguments

  • **`inputs`**: Input tensor (of any rank). Unlike `BatchNormalization`, layer normalization computes its statistics from the current input alone, so there are no moving statistics and no separate training/inference behavior.
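A small sketch of normalizing over the last axis (the input values are arbitrary):

```python
import tensorflow as tf

ln = tf.keras.layers.LayerNormalization(axis=-1, epsilon=1e-3)
x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

# With the default gamma=ones and beta=zeros, each row is normalized
# to approximately zero mean and unit variance.
y = ln(x)
```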

The Model class

Instantiation

Functional API

With the functional API, you build the network from its inputs and outputs:

import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

Subclassing Model

With this approach, you must define your layers in `__init__` and the forward pass in **call**.

import tensorflow as tf

class MyModel(tf.keras.Model):

  def __init__(self):
    super(MyModel, self).__init__()
    self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
    self.dropout = tf.keras.layers.Dropout(0.5)

  def call(self, inputs, training=False):
    x = self.dense1(inputs)
    if training:  # optional: apply dropout only in training mode
      x = self.dropout(x, training=training)
    return self.dense2(x)

model = MyModel()

You can use the `training` argument of **call** to run different computation during training and inference.

`__init__`

__init__(
    *args,
    **kwargs
)

Methods

compile

Configures the model for training.

compile(
    optimizer='rmsprop',
    loss=None,
    metrics=None,
    loss_weights=None,
    sample_weight_mode=None,
    weighted_metrics=None,
    target_tensors=None,
    distribute=None,
    **kwargs
)
Arguments
  • **`optimizer`**: String (name of optimizer) or optimizer instance. See [`tf.keras.optimizers`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers).
  • **`loss`**: String (name of objective function), objective function or [`tf.losses.Loss`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Loss) instance. See [`tf.losses`](https://www.tensorflow.org/api_docs/python/tf/losses). If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
  • **`metrics`**: List of metrics to be evaluated by the model during training and testing. Typically you will use `metrics=['accuracy']`. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. You can also pass a list (len = len(outputs)) of lists of metrics such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or `metrics=['accuracy', ['accuracy', 'mse']]`.
  • **`loss_weights`**: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the `loss_weights` coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
  • **`sample_weight_mode`**: If you need to do timestep-wise sample weighting (2D weights), set this to `"temporal"`. `None` defaults to sample-wise weights (1D). If the model has multiple outputs, you can use a different `sample_weight_mode` on each output by passing a dictionary or a list of modes.
  • **`weighted_metrics`**: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
  • **`target_tensors`**: By default, Keras will create placeholders for the model’s target, which will be fed with the target data during training. If instead you would like to use your own target tensors (in turn, Keras will not expect external Numpy data for these targets at training time), you can specify them via the `target_tensors` argument. It can be a single tensor (for a single-output model), a list of tensors, or a dict mapping output names to target tensors.
  • **`distribute`**: NOT SUPPORTED IN TF 2.0, please create and compile the model under distribution strategy scope instead of passing it to compile.
  • **`**kwargs`**: Any additional arguments.
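A minimal sketch of compiling with string names (the optimizer, loss, and metric chosen here are just one option):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Strings or instances both work for optimizer, loss, and metrics.
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
```

After `compile`, the model can be trained with `fit` and evaluated with `evaluate`.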

summary

Prints a string summary of the network.

summary(
    line_length=None,
    positions=None,
    print_fn=None
)
Arguments:
  • **`line_length`**: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).
  • **`positions`**: Relative or absolute positions of log elements in each line. If not provided, defaults to `[.33, .55, .67, 1.]`.
  • **`print_fn`**: Print function to use. Defaults to `print`. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
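`print_fn` makes it easy to capture the summary as text instead of printing it; a sketch:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Collect each summary line instead of sending it to stdout.
lines = []
model.summary(print_fn=lines.append)
text = '\n'.join(lines)
```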