Models taking too long and not giving the desired answers

Hello all,

I am trying to use the codellama/CodeLlama-7b-Instruct-hf and meta-llama/Llama-2-7b-hf models to summarize a given dataset. I am facing a couple of problems:

  1. My code creates a pipeline with the Transformers library for each of these models. When I pass a prompt to the pipeline, the checkpoint shards load successfully in my command prompt, but generation then takes a very long time to answer my prompt. Is there a smaller or faster CodeLlama model I can use, or another way to mitigate this?

  2. I have experimented with different NLP tasks, but have not found a way to pass a CSV dataset and obtain a summary of it. Specifically, I tried the table-question-answering task, passing in a pandas DataFrame of my dataset along with a query asking for common trends within the data, but it didn't return the expected response (actual response attached).
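For reference, my setup for point 1 looks roughly like this (a sketch, not my exact script; the `torch_dtype` and `device_map` arguments are the speed-ups I have been trying, and `device_map="auto"` needs the `accelerate` package installed):

```python
# Sketch of how I build the pipeline (model name from above).
# Imports are inside the function so the sketch stands alone.

def build_pipeline(model_name="codellama/CodeLlama-7b-Instruct-hf"):
    import torch
    from transformers import pipeline

    # float16 halves memory use and is much faster than the default
    # float32 on GPU; device_map="auto" places layers across devices.
    return pipeline(
        "text-generation",
        model=model_name,
        torch_dtype=torch.float16,
        device_map="auto",
    )

# Usage (slow the first time: it downloads ~13 GB of weights):
# pipe = build_pipeline()
# print(pipe("Summarize this dataset: ...", max_new_tokens=128))
```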

Is there another task/model combination that will help me achieve the desired result?
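As a workaround for point 2, something like this plain-pandas summary seems to surface the trends directly, with no model involved (toy data below standing in for my CSV, which I would normally load with `pd.read_csv`):

```python
# A plain-pandas "summary" of a dataset, no LLM involved.
# Toy data standing in for my CSV (normally: df = pd.read_csv("data.csv")).
import pandas as pd

df = pd.DataFrame({
    "price": [10.0, 12.5, 11.0, 13.5, 12.0],
    "units": [100, 80, 95, 70, 85],
})

# Per-column distribution summary (count, mean, std, quartiles, ...)
stats = df.describe()

# Pairwise correlations often surface "common trends" directly
corr = df.corr(numeric_only=True)

print(stats.loc["mean"])           # mean of each numeric column
print(corr.loc["price", "units"])  # strongly negative for this toy data
```

This only covers numeric trends, though, so I would still like a model-based approach for a natural-language summary.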