I am trying to use the Trainer to fine-tune a BERT model, but it keeps trying to connect to wandb. I don't know what that is and just want it turned off. Is there a config option I am missing?
import os
os.environ["WANDB_DISABLED"] = "true"
This works for me.
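One caveat: make sure this runs at the top of your script, before you construct the Trainer, since (as far as I can tell) the wandb integration reads the environment variable when the Trainer is set up.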
Alternatively, you can disable the Weights & Biases (wandb) callback in the TrainingArguments directly:
# None disables all integrations
args = TrainingArguments(report_to=None, ...)
It should be report_to="none" (the string), not None. Passing None falls back to the default, which enables all installed integrations.
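With that correction, the no-wandb setup looks like this (a minimal sketch; output_dir is a placeholder and the rest of the training setup is elided):

from transformers import TrainingArguments

# "none" (the string) disables all logging integrations, including wandb
args = TrainingArguments(
    output_dir="out",  # placeholder output directory
    report_to="none",
)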
Hi, after doing that I get the following error: "Error: You must call wandb.init() before wandb.log()"
Hi @hiramcho, check out the docs on the logger to solve that issue. You just need to call wandb.init(project='your_project_name') somewhere before you start using the logger: https://docs.wandb.ai/guides/integrations/huggingface
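For example (a minimal sketch; the project name is a placeholder and the Trainer construction is elided):

import wandb
from transformers import TrainingArguments

# initialize the wandb run first, so the Trainer's wandb callback can log to it
wandb.init(project="your_project_name")  # placeholder project name

args = TrainingArguments(output_dir="out", report_to="wandb")
# ... build your Trainer with these args and call trainer.train() as usual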
For me, I want to keep wandb, but I don't like the many prints during training. Even though I set the logging level to WARNING, I still get many of these:
Generate config GenerationConfig {
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.26.0"
}
How do I turn this print off? (The print comes from configuration_utils.py:543.)
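One thing worth trying (assuming the message goes through the transformers logger rather than a bare print, which seems to be the case given the file it comes from) is lowering the library's verbosity globally:

from transformers import logging as hf_logging

# show only error-level messages from the transformers library;
# info messages like the GenerationConfig dump are suppressed
hf_logging.set_verbosity_error()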
I get these annoying prints when running on Kaggle
Generate config GenerationConfig {
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.26.0"
}
It prints this at every step once evaluation starts. How do I turn it off, please?
I also had the same issue with the annoying Generate config GenerationConfig
lines. In my case, upgrading from transformers 4.26.0 to 4.27.2 solved the issue.
I also faced the same issue; it is caused by a log call in transformers/configuration_utils.py (see the file in the huggingface/transformers repo on GitHub). Comment out that part and the issue should be solved.
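If you'd rather not edit the installed library, a less invasive alternative is to raise the level of the specific logger that emits the message. A sketch, assuming the message goes through Python's standard logging under a transformers.* logger name (the exact name may vary by version):

import logging

# silence info-level messages from the module that prints the generation
# config; the logger name here is an assumption and may differ by version
logging.getLogger("transformers.generation.configuration_utils").setLevel(logging.ERROR)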
import wandb
wandb.init(mode="disabled")
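As far as I understand, mode="disabled" makes wandb.init() return a dummy run, so all subsequent wandb calls become no-ops and no login or network access is needed.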
Thanks! This is so weird. I was banging my head on it for too long lol.
This method works on Kaggle. Thank you!