I was wondering if this behavior is intended. For instance, when I run run_finetune.py with the following command:
python run_finetune.py --model_type dna --tokenizer_name=dna$KMER --model_name_or_path $MODEL_PATH --task_name dnaprom --do_train --data_dir $DATA_PATH --per_gpu_eval_batch_size=32 --per_gpu_train_batch_size=32 --learning_rate 2e-4 --output_dir $OUTPUT_PATH --logging_steps 100 --save_steps 4000 --warmup_percent 0.1 --overwrite_output --weight_decay 0.01 --n_process 8 --max_seq_length 59 --hidden_dropout_prob 0.1 --num_train_epochs 5.0
The config.json file still has "max_length": 20. Should I be editing the config.json file prior to finetuning?
"max_length": 20
Thanks so much for your help!