AutoTrain not working

warn("The installed version of bitsandbytes was compiled without GPU support. " /app/env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32 You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors. {‘rescale_betas_zero_snr’, ‘sample_max_value’, ‘dynamic_thresholding_ratio’, ‘variance_type’, ‘thresholding’, ‘timestep_spacing’, ‘prediction_type’, ‘clip_sample_range’} was not found in config. Values will be initialized to default values. {‘projection_class_embeddings_input_dim’, ‘num_attention_heads’, ‘conv_in_kernel’, ‘time_embedding_type’, ‘num_class_embeds’, ‘time_embedding_dim’, ‘addition_embed_type’, ‘reverse_transformer_layers_per_block’, ‘encoder_hid_dim_type’, ‘class_embed_type’, ‘use_linear_projection’, ‘resnet_time_scale_shift’, ‘upcast_attention’, ‘mid_block_type’, ‘addition_embed_type_num_heads’, ‘encoder_hid_dim’, ‘timestep_post_act’, ‘dual_cross_attention’, ‘addition_time_embed_dim’, ‘mid_block_only_cross_attention’, ‘dropout’, ‘time_cond_proj_dim’, ‘cross_attention_norm’, ‘resnet_out_scale_factor’, ‘time_embedding_act_fn’, ‘attention_type’, ‘class_embeddings_concat’, ‘conv_out_kernel’, ‘only_cross_attention’, ‘transformer_layers_per_block’, ‘resnet_skip_time_act’} was not found in config. Values will be initialized to default values.