Topic | Replies | Views | Activity
Is it possible to set initial_prompt and condition_on_previous_text with a whisper_pipeline? | 4 | 2129 | February 8, 2024
Possible fix for trainer evaluation with object detection | 0 | 318 | February 7, 2024
Nice inference trick | 0 | 122 | February 7, 2024
Getting the error: "ValueError: The following model_kwargs are not used by the model:....." | 2 | 3842 | February 7, 2024
Loading PEFT model from checkpoint leading to size mismatch | 6 | 10665 | February 7, 2024
AutoModel from_pretrained not releasing memory and causing a memory leak | 7 | 7837 | February 7, 2024
Overhead caused by moving eos_token_id to GPU memory | 14 | 441 | February 7, 2024
How to stop at 512 tokens when sending text to pipeline? | 2 | 1493 | February 7, 2024
PatchTSMixerForPrediction error with prediction of len 1 | 2 | 137 | February 6, 2024
Move model with device_map="balanced" to CPU | 1 | 6359 | February 5, 2024
Cannot change training arguments when resuming from a checkpoint | 0 | 215 | February 5, 2024
LayoutLMv3 token classification on repeated values | 0 | 105 | February 5, 2024
ImportError: cannot import name 'pipeline' from 'transformers' | 3 | 3751 | February 5, 2024
When using AutoModelForCausalLM, THUDM/cogagent-vqa-hf and load_in_8bit I get this error: self and mat2 must have the same dtype, but got Half and Char | 0 | 228 | February 4, 2024
How to do inference with fine-tuned Hugging Face models? | 3 | 822 | February 4, 2024
Facing issue importing pipelines from transformers | 8 | 22000 | February 4, 2024
How to modify model class with AutoModelForCausalLM.from_config | 0 | 335 | February 4, 2024
How to use GPU when using transformers.AutoModel | 0 | 1740 | February 3, 2024
Encoding multiple sentences | 2 | 190 | February 3, 2024
Combining LoRA and prompt tuning | 1 | 925 | February 3, 2024
BetterTransformer with HF Trainer | 2 | 285 | February 3, 2024
Fine-tuning CLIP transformer for downstream task | 1 | 3275 | February 2, 2024
Training BERT from scratch (MLM+NSP) on a new domain | 10 | 6138 | February 2, 2024
What database should I use to store PDF metadata and keywords? | 0 | 81 | February 2, 2024
How to do classification fine-tuning of quantized models? | 0 | 486 | February 2, 2024
Multi-GPU training: model parallelism | 1 | 1915 | February 2, 2024
Speeding up custom data collator | 0 | 287 | February 2, 2024
KeyError 'siglip' | 2 | 805 | February 1, 2024
Whisper fine-tuning Dutch: weird double characters | 2 | 346 | February 1, 2024
Generate answer to a query starting from decoder embeddings | 0 | 266 | February 1, 2024