See https://arxiv.org/abs/1810.04805 (BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding) for more details.
Attributes:
seq_len: Length of the input token sequence fed to the model.
do_fine_tuning: If true, the BERT backbone is unfrozen and its weights are updated during training; otherwise the backbone stays frozen and only the layers on top of it are trained.
dropout_rate: The rate at which units are dropped during training.
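To make the role of these attributes concrete, below is a minimal sketch of how they might wire into a Keras text classifier. It is illustrative only, not the library's actual implementation: the `BertModelOptions` dataclass mirrors the attributes above, `build_classifier` is a hypothetical helper, and the `encoder` is assumed to follow the TF Hub BERT convention (dict inputs `input_word_ids`/`input_mask`/`input_type_ids`, dict output containing `pooled_output`).

```python
from dataclasses import dataclass

import tensorflow as tf


@dataclass
class BertModelOptions:
    """Mirrors the documented attributes (values here are illustrative defaults)."""
    seq_len: int = 128           # Length of the input token sequence.
    do_fine_tuning: bool = True  # If True, the BERT backbone is trainable.
    dropout_rate: float = 0.1    # Dropout rate applied during training.


def build_classifier(encoder: tf.keras.Model,
                     options: BertModelOptions,
                     num_classes: int) -> tf.keras.Model:
    """Builds a classifier head on a BERT encoder (hypothetical sketch)."""
    # do_fine_tuning controls whether the backbone's weights are updated.
    encoder.trainable = options.do_fine_tuning

    # Standard BERT inputs, padded/truncated to seq_len.
    word_ids = tf.keras.Input(shape=(options.seq_len,), dtype=tf.int32,
                              name="input_word_ids")
    mask = tf.keras.Input(shape=(options.seq_len,), dtype=tf.int32,
                          name="input_mask")
    type_ids = tf.keras.Input(shape=(options.seq_len,), dtype=tf.int32,
                              name="input_type_ids")

    outputs = encoder({"input_word_ids": word_ids,
                       "input_mask": mask,
                       "input_type_ids": type_ids})
    pooled = outputs["pooled_output"]  # [batch, hidden] sentence embedding.

    # dropout_rate regularizes the pooled representation before the head.
    x = tf.keras.layers.Dropout(options.dropout_rate)(pooled)
    logits = tf.keras.layers.Dense(num_classes)(x)

    return tf.keras.Model(inputs=[word_ids, mask, type_ids], outputs=logits)
```

With `do_fine_tuning=False`, only the dropout-plus-dense head is trained, which is faster and less prone to overfitting on small datasets; setting it to true fine-tunes the whole encoder and typically yields higher accuracy at greater compute cost.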
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-05-07 UTC."],[],[]]