A common set of hyperparameters shared by the training jobs of all Model
Maker tasks.
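For example, a task's hyperparameters can be constructed directly, overriding only the fields of interest. This is a minimal sketch; the exact import path depends on the task, and the `object_detector` task is used here as an assumed example:

```python
from mediapipe_model_maker import object_detector

# Override a few shared hyperparameters; unspecified fields keep their
# task defaults. (Sketch: the object_detector task is an assumption.)
hparams = object_detector.HParams(
    learning_rate=0.01,           # gradient descent step size
    batch_size=32,                # examples per training step
    epochs=10,                    # passes over the training dataset
    export_dir='exported_model',  # where checkpoint files are written
)
```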
Attributes
learning_rate
The learning rate to use for gradient descent training.
batch_size
Batch size for training.
epochs
Number of training iterations over the dataset.
steps_per_epoch
An optional integer indicating the number of training steps
per epoch. If not set, the training pipeline calculates the default steps
per epoch as the training dataset size divided by the batch size (see the
sketch after this list).
class_weights
An optional mapping from class indices to weights, used to weight the
loss function during training.
shuffle
True if the dataset is shuffled before training.
repeat
True if the training dataset is repeated indefinitely, which supports
training without needing to determine the dataset size.
export_dir
The location of the model checkpoint files.
distribution_strategy
A string specifying which Distribution Strategy to
use. Accepted values (case-insensitive) are 'off', 'one_device', 'mirrored',
'parameter_server', 'multi_worker_mirrored', and 'tpu'. 'off' means
Distribution Strategy is not used; 'tpu' means TPUStrategy is used with the
address given in tpu. See the tf.distribute.Strategy
documentation for more details:
https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy.
num_gpus
The number of GPUs to use at each worker with the
DistributionStrategies API. Defaults to 0.
tpu
The TPU resource to use for training. This should be either the
name used when creating the Cloud TPU, a grpc://ip.address.of.tpu:8470
URL, or an empty string if using a local TPU.
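To make the defaults above concrete, the sketch below reproduces the documented steps_per_epoch fallback and shows class_weights and distribution_strategy being set. It assumes the task's HParams inherits these shared fields; the dataset size and weight values are hypothetical, and object_detector is again an assumed task:

```python
from mediapipe_model_maker import object_detector

hparams = object_detector.HParams(
    batch_size=8,
    epochs=5,
    # Hypothetical weights: make class 1 count 2.5x in the loss.
    class_weights={0: 1.0, 1: 2.5},
    # Case-insensitive; 'off' would disable Distribution Strategy entirely.
    distribution_strategy='mirrored',
    num_gpus=2,
    export_dir='exported_model',
)

# When steps_per_epoch is unset, the pipeline falls back to the
# dataset size divided by the batch size, as documented above.
train_dataset_size = 1000  # hypothetical
steps = hparams.steps_per_epoch or train_dataset_size // hparams.batch_size
print(steps)  # 1000 // 8 = 125
```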
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[]]