learning_rate
Learning rate to use for gradient descent training.
batch_size
Batch size for training.
epochs
Number of training iterations over the dataset.
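Together, batch_size and epochs determine how many gradient-descent steps training runs for. A minimal sketch of that arithmetic (the helper names here are hypothetical, not part of the library):

```python
import math

def steps_per_epoch(num_train_samples: int, batch_size: int) -> int:
    # One epoch is one pass over the dataset, so the number of steps
    # is the sample count divided by the batch size, rounded up.
    return math.ceil(num_train_samples / batch_size)

def total_training_steps(num_train_samples: int, batch_size: int, epochs: int) -> int:
    # Total gradient-descent steps across all epochs.
    return steps_per_epoch(num_train_samples, batch_size) * epochs
```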
do_fine_tuning
If true, the base module is trained together with the
classification layer on top.
l1_regularizer
A regularizer that applies an L1 regularization penalty.
l2_regularizer
A regularizer that applies an L2 regularization penalty.
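As a rough sketch of what these two penalties compute (following the convention used by Keras regularizers, where the coefficient multiplies the sum over weights; the function names here are illustrative):

```python
def l1_penalty(weights, l1: float) -> float:
    # L1: coefficient times the sum of absolute weight values.
    # Encourages sparse weights.
    return l1 * sum(abs(w) for w in weights)

def l2_penalty(weights, l2: float) -> float:
    # L2: coefficient times the sum of squared weight values.
    # Encourages small, diffuse weights.
    return l2 * sum(w * w for w in weights)
```

The penalty is added to the training loss, so larger coefficients push the optimizer harder toward small weights.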
label_smoothing
Amount of label smoothing to apply. See tf.keras.losses for
more details.
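Label smoothing replaces hard one-hot targets with slightly softened ones. A minimal sketch of the standard Keras-style formula, y * (1 - s) + s / num_classes (the helper name is illustrative):

```python
def smooth_labels(one_hot, label_smoothing: float):
    # Shrink each target toward the uniform distribution:
    # the true class gets (1 - s) + s/K, every other class gets s/K.
    k = len(one_hot)
    return [y * (1.0 - label_smoothing) + label_smoothing / k for y in one_hot]
```

With label_smoothing=0.1 and three classes, a target of [0, 0, 1] becomes roughly [0.033, 0.033, 0.933], which discourages overconfident predictions.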
do_data_augmentation
A boolean controlling whether the training dataset is
augmented by randomly distorting input images, e.g. random cropping
and flipping. See the utils.image_preprocessing documentation for details.
decay_samples
Number of training samples used to calculate the decay steps
and create the training optimizer.
warmup_steps
Number of warmup steps for a linearly increasing warmup schedule
on the learning rate. Used by model_util.WarmUp to set up the warmup schedule.
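The two schedule-related parameters above can be sketched as follows. This is an illustrative approximation, not the library's implementation: a linear warmup that ramps the learning rate from zero over warmup_steps, and a hypothetical conversion of the decay_samples budget into optimizer decay steps.

```python
def warmup_lr(step: int, base_lr: float, warmup_steps: int) -> float:
    # Linear warmup: scale the learning rate by step / warmup_steps
    # until warmup finishes, then hold it at base_lr.
    if warmup_steps > 0 and step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

def decay_steps_from_samples(decay_samples: int, batch_size: int) -> int:
    # Assumed conversion: a sample budget divided by the batch size
    # gives the number of optimizer steps over which to decay.
    return decay_samples // batch_size
```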
Last updated 2024-05-07 UTC.