Auto Train stuck repeating Application Startup

For context, I am training an image classification model. Whenever I start training, the container log shows the following and keeps repeating. Here is the log:

===== Application Startup at 2024-01-20 18:42:32 =====

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

INFO: Will watch for changes in these directories: ['/app']
WARNING: "workers" flag is ignored when reloading is enabled.
INFO: Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
INFO: Started reloader process [34] using StatReload

INFO Authenticating user...
WARNING Parameters not supplied by user and set to default: text_column, rejected_text_column, apply_chat_template, model_max_length, save_total_limit, scheduler, valid_split, dpo_beta, prompt_text_column, logging_steps, data_path, lora_alpha, username, evaluation_strategy, optimizer, train_split, save_strategy, disable_gradient_checkpointing, batch_size, seed, weight_decay, push_to_hub, warmup_ratio, merge_adapter, lora_r, gradient_accumulation, use_flash_attention_2, add_eos_token, token, project_name, lr, model, max_grad_norm, lora_dropout, auto_find_batch_size, repo_id, trainer, model_ref
WARNING Parameters not supplied by user and set to default: text_column, weight_decay, push_to_hub, warmup_ratio, epochs, save_total_limit, scheduler, gradient_accumulation, valid_split, token, logging_steps, log, project_name, data_path, username, max_seq_length, target_column, lr, model, max_grad_norm, auto_find_batch_size, evaluation_strategy, optimizer, train_split, save_strategy, repo_id, batch_size, seed
WARNING Parameters not supplied by user and set to default: weight_decay, push_to_hub, warmup_ratio, epochs, save_total_limit, scheduler, gradient_accumulation, valid_split, token, logging_steps, log, project_name, data_path, username, target_column, lr, model, max_grad_norm, auto_find_batch_size, evaluation_strategy, optimizer, image_column, train_split, save_strategy, repo_id, batch_size, seed
WARNING Parameters not supplied by user and set to default: text_column, target_modules, epochs, save_total_limit, scheduler, valid_split, logging_steps, data_path, username, lora_alpha, max_seq_length, evaluation_strategy, quantization, optimizer, train_split, save_strategy, batch_size, seed, weight_decay, warmup_ratio, lora_r, gradient_accumulation, token, project_name, target_column, lr, model, max_grad_norm, lora_dropout, peft, auto_find_batch_size, max_target_length, repo_id, push_to_hub
WARNING Parameters not supplied by user and set to default: task, target_columns, push_to_hub, time_limit, valid_split, categorical_columns, token, project_name, data_path, username, numerical_columns, model, num_trials, id_column, train_split, repo_id, seed
WARNING Parameters not supplied by user and set to default: sample_batch_size, scale_lr, center_crop, bf16, allow_tf32, warmup_steps, class_prompt, resume_from_checkpoint, adam_beta1, epochs, adam_beta2, scheduler, prior_loss_weight, logging, validation_prompt, adam_epsilon, username, num_class_images, local_rank, checkpoints_total_limit, rank, checkpointing_steps, seed, validation_epochs, prior_preservation, text_encoder_use_attention_mask, tokenizer_max_length, dataloader_num_workers, pre_compute_text_embeddings, xl, token, class_image_path, project_name, lr_power, num_validation_images, class_labels_conditioning, validation_images, model, max_grad_norm, prior_generation_precision, adam_weight_decay, revision, tokenizer, repo_id, num_cycles, push_to_hub, image_path
INFO: Started server process [36]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 10.16.18.44:16993 - "GET /?logs=container HTTP/1.1" 200 OK
INFO: 10.16.18.44:16993 - "GET /model_choices/llm%3Asft HTTP/1.1" 200 OK
INFO Task: llm:sft
INFO: 10.16.18.44:57416 - "GET /params/llm%3Asft HTTP/1.1" 200 OK
INFO: 10.16.18.44:41098 - "GET /model_choices/image-classification HTTP/1.1" 200 OK