Which parameter is causing the decrease in learning rate every epoch?

Hey, I have been trying to train my model on MNLI, and the learning rate keeps decreasing for no apparent reason. Can someone help me?

train_args = TrainingArguments(
    output_dir='./resultsv3/output',
    logging_dir='./resultsv3/output/logs',
    learning_rate=3e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=4,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    fp16=True,
    fp16_full_eval=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=5,
    logging_strategy="epoch",
    report_to="all",
)

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    tokenizer=tokenizer,
    args=train_args,
    data_collator=data_collator,
    train_dataset=encoded_dataset_train,
    eval_dataset=encoded_dataset_test,
    compute_metrics=compute_metrics,
)

Which parameter is causing the decrease in learning rate every epoch?

The learning_rate parameter only sets the initial learning rate; the scheduler usually changes it over the course of training.

You can find the default values of TrainingArguments in the Trainer documentation. There you can see that lr_scheduler_type is linear by default.
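If you want the learning rate to stay fixed instead, you can override the default scheduler. A minimal sketch, keeping only the relevant arguments from your original post (the other arguments would stay as you had them):

```python
from transformers import TrainingArguments

train_args = TrainingArguments(
    output_dir='./resultsv3/output',
    learning_rate=3e-6,
    lr_scheduler_type="constant",  # no decay; learning rate stays at 3e-6
)
```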

As specified in its documentation (Optimization), linear creates a schedule with a learning rate that decreases linearly from the initial learning rate, after an initial warmup period during which it increases linearly from 0 to the initial learning rate.
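The behavior described above can be sketched in plain Python. This mirrors the multiplier that the linear schedule applies to the initial learning rate at each optimizer step (a sketch, not the library's actual code; the function name is mine):

```python
def linear_lr_factor(current_step, num_warmup_steps, num_training_steps):
    """Multiplier applied to the initial learning rate at a given step."""
    if current_step < num_warmup_steps:
        # Warmup: ramp up linearly from 0 to 1.
        return current_step / max(1, num_warmup_steps)
    # After warmup: decay linearly from 1 down to 0 at the last step.
    return max(
        0.0,
        (num_training_steps - current_step)
        / max(1, num_training_steps - num_warmup_steps),
    )

initial_lr = 3e-6
# With e.g. 100 warmup steps and 1000 total steps, the effective
# learning rate shrinks a little on every step after warmup:
print(initial_lr * linear_lr_factor(100, 100, 1000))  # 3e-06 (warmup done)
print(initial_lr * linear_lr_factor(550, 100, 1000))  # 1.5e-06 (halfway decayed)
print(initial_lr * linear_lr_factor(1000, 100, 1000))  # 0.0 (end of training)
```

Since logging happens once per epoch with logging_strategy="epoch", you see this steady decrease in the logged learning rate each epoch.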

Ok, thank you for your answer. Can you tell me whether the model mentioned here, microsoft/deberta-v2-xxlarge-mnli · Hugging Face, also uses linear as the scheduler type?