Chat with a PDF

Hi All,

I am a new forum member. Recently, I have become interested in AI, machine learning, and related topics. I have studied documents and tutorials around the web, and I am also following the Hugging Face course on the platform. I completed section 1 and started doing some experiments. At the moment, I consider myself an absolute beginner.

The first thing I attempted is a small chatbot for a PDF. Basically, you give a PDF to the chatbot, and then you can start asking questions about it.

My project is here.

I took inspiration from similar projects on the web. However, all these projects use ChatGPT, and I don't want to do that because I don't have free credits anymore and I don't want to spend money on this kind of experiment.

The README.md contains the procedure to install and run it.

How does it work?

The main program is app.py; here is how it works:

  1. First of all, I read the PDF. The GitHub project includes a PDF about Robinson Crusoe.
  2. I split it into 1000-character chunks.
  3. Then I convert them into embeddings (my understanding is that models only operate on tensors of ids, not raw text) and store them in a Chroma database using the all-MiniLM-L6-v2 model.
  4. There is a chatbot loop.
  5. The user asks a question, and I use it to retrieve the top-k documents relevant to it. I use them as context.
  6. Then these lines use the context and the question to generate a response using the google/flan-t5-large model (a rough sketch of the whole flow is shown after this list).
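
For reference, here is a minimal sketch of this flow. It is not my exact app.py; the PDF path, collection name, prompt, and top-k value are just placeholders, and it assumes pypdf, sentence-transformers, chromadb, and transformers are installed:

```python
# Minimal sketch of the app.py flow (not the exact project code).
# Assumes: pip install pypdf sentence-transformers chromadb transformers
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer
from transformers import pipeline
import chromadb

# 1. Read the PDF (the file name is a placeholder).
reader = PdfReader("data/robinson_crusoe.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Split into 1000-character chunks.
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]

# 3. Embed the chunks with all-MiniLM-L6-v2 and store them in Chroma.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection("pdf_chunks")
collection.add(
    documents=chunks,
    embeddings=embedder.encode(chunks).tolist(),
    ids=[f"chunk-{i}" for i in range(len(chunks))],
)

# 6. Answer generation with google/flan-t5-large.
generator = pipeline("text2text-generation", model="google/flan-t5-large")

# 4./5. Chatbot loop: retrieve the top-k chunks and use them as context.
while True:
    question = input("Ask Question about your PDF: ")
    results = collection.query(
        query_embeddings=embedder.encode([question]).tolist(),
        n_results=3,
    )
    context = "\n".join(results["documents"][0])
    prompt = f"Answer the question using the context.\nContext: {context}\nQuestion: {question}"
    print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```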

The problem is that the chatbot is not very exciting. I mean, I tested the code and it seems that the PDF is correctly loaded, correctly split into chunks, and probably correctly saved in Chroma (I don't know how to verify this). The real problem is that the quality of the answers is very poor. Here is an example of output:

Ask Question about your PDF: who is robinson crusoe?
a teenager
Ask Question about your PDF: can you tell me more about him?
Yes
Ask Question about your PDF: please tell me more about him
He is a good student. He is clever. He improves quickly.
Ask Question about your PDF: can you tell me the member of his family?
Father
Ask Question about your PDF: can you tell me a bit of robinson crusoe story
Yes

As you can see, the first answer is too short and the second answer is even worse. I would like more complete answers (something similar to ChatGPT). Moreover, it seems that it doesn't generate an answer but simply reuses text from the PDF. I would like the chatbot to produce an answer on its own, using only the top-k chunks as context to generate the response.

Since I am a beginner, I am not familiar with which model to use for text generation in my specific scenario. Can anyone help me figure out what's wrong with my code?

Another thing I don't understand is that, to use the HuggingFaceHub class, I needed to create a Hugging Face token. Can you explain why? How is this service billed, and what are the rate limits? I tried to understand it from the platform, but had no luck.

Another improvement I want to make is to persist my Chroma database on disk, so that I can register my chunk embeddings there once and then reuse them N times by simply loading the content from disk. I tried several code snippets and read the docs, but I didn't find a way to do it.
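
Conceptually, this is what I would like to achieve (just a sketch, assuming a chromadb version that provides PersistentClient; the path and collection name are placeholders):

```python
import chromadb

# First run: create a persistent client, embed the chunks once,
# and store them under ./chroma_db on disk.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("pdf_chunks")
# collection.add(documents=chunks, embeddings=..., ids=...) as usual.

# Later runs: reopen the same directory and query it directly,
# without re-reading or re-embedding the PDF.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("pdf_chunks")
print(collection.count())  # quick check that the chunks are really there
```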

Thank you in advance for your help.


Thanks for the detailed explanation @sasadangelo. I have a couple of suggestions which might improve the retrieved answers:
  • The embedding model used for converting the chunks into embeddings.
  • The number of chunks we generate also matters a lot.
  • The LLM model used in the retrieval pipeline.

I'm happy to discuss this further and look into potential opportunities for improvement. Thanks!

Thanks for the explanation.

  • Try different embedding models.
  • Try different LLM models as well.
  • I would suggest you try FAISS rather than Chroma, since FAISS is very good at finding similar text (see the sketch below).
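
For example, a minimal FAISS retrieval sketch might look like this (assuming faiss-cpu and sentence-transformers; the model, chunks, and k are just examples):

```python
import faiss
from sentence_transformers import SentenceTransformer

# Example chunks; in practice these come from the PDF splitter.
chunks = [
    "Robinson Crusoe leaves home against his father's wishes.",
    "He is shipwrecked and survives alone on an island.",
    "He later meets Friday and they live together on the island.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(chunks, convert_to_numpy=True)

# Build a flat L2 index over the chunk embeddings.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

# Retrieve the top-k chunks most similar to the question.
question_emb = embedder.encode(["who is robinson crusoe?"], convert_to_numpy=True)
distances, indices = index.search(question_emb, 3)
top_chunks = [chunks[i] for i in indices[0]]
```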

Hi All,

Thank you for your replies. I analyzed the issue a bit and found that:

  1. FAISS and Chroma are quite similar in this scenario; the extracted docs are good enough to produce a good answer.
  2. I noticed that, before splitting the PDF into chunks, it should be cleaned up in some way; there is a lot of rubbish (index, introductory pages, and so on), but I do not know how to define a generic “cleanup procedure” that would be valid for every PDF.
  3. The size of the chunks matters. If I set a chunk size of 1000 with 200 overlap, the extracted docs are quite good. If I set 200 with 50 overlap, they are very bad (see the splitter sketch after this list).
  4. The reason I tried 200/50 in point 3 is that I tried the gpt2 LLM, and if the chunk is larger than 200 it gives me the error “ValueError: Error raised by inference API: Input is too long for this model, shorten your input or use ‘parameters’: {‘truncation’: ‘only_first’} to run the model only on the first part.”
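
For context, this is how the chunk size and overlap are set (a sketch assuming LangChain's RecursiveCharacterTextSplitter is used for the splitting; my real code may differ slightly):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1000-character chunks with 200 characters of overlap gave good retrieved docs;
# 200/50 produced much worse ones.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(text)  # `text` is the cleaned-up PDF text
```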

Since I am quite inexperienced with LLMs, you told me:
“Try different LLM models as well.”

Can you suggest one or two I can try that accept 1000-character chunks and provide good answers in a human-like manner?

In general, I see people on the web always use ChatGPT, but this has a cost; moreover, not all organizations allow the use of ChatGPT (mine, for example, doesn't allow it).

I made some small progress.
Consider that you have the chatbot in a Streamlit interface where you can upload the PDF. There you can do two things to improve the PDF quality:

  1. insert into a text box the list of pages to exclude
  2. insert into a text area the list of lines to exclude from the PDF

I simulated this with some code, just for demo purposes.

The code filters out the excluded pages and the unwanted lines, and also removes the extra blank spaces.
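
Roughly, the idea is something like this (a simplified sketch, not the exact code from the repository; the function and variable names are placeholders):

```python
# Simplified sketch of the cleanup step (not the exact project code).
# pages_to_exclude comes from the text box, lines_to_exclude from the text area.
import re
from pypdf import PdfReader

def clean_pdf_text(pdf_path, pages_to_exclude, lines_to_exclude):
    reader = PdfReader(pdf_path)
    kept_lines = []
    for page_number, page in enumerate(reader.pages, start=1):
        if page_number in pages_to_exclude:
            continue  # skip index pages, introduction pages, etc.
        for line in (page.extract_text() or "").splitlines():
            if line.strip() in lines_to_exclude:
                continue  # skip headers, footers, and other unwanted lines
            # collapse runs of blank spaces
            kept_lines.append(re.sub(r"\s+", " ", line).strip())
    return "\n".join(line for line in kept_lines if line)
```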

I verified that the input chunks (before embedding) are really good.

I verified that FAISS and Chroma extract the same documents, so changing them doesn't bring any improvement. I think they are only vector databases; what really matters is the model used for the embeddings (see line 59 of my code).

I don't know if changing it can bring improvements.

However, I analyzed a few different models for answer generation. My first doubt was: should I use text-generation or text2text-generation models? After a bit of analysis, I think the second one is the option to choose.
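
For anyone with the same doubt, the practical difference with the transformers pipeline API looks roughly like this (the models and prompts are just examples):

```python
from transformers import pipeline

# text-generation models (e.g. gpt2) continue the prompt left to right,
# so the prompt itself tends to be echoed back as part of the output.
causal = pipeline("text-generation", model="gpt2")
print(causal("Robinson Crusoe is", max_new_tokens=30)[0]["generated_text"])

# text2text-generation models (e.g. flan-t5) map an input text to an output
# text, which fits instruction-style prompts like "answer using this context".
seq2seq = pipeline("text2text-generation", model="google/flan-t5-large")
print(seq2seq("Question: who is Robinson Crusoe?\nContext: ...",
              max_new_tokens=30)[0]["generated_text"])
```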

I selected meta-llama/Llama-2-7b, which seems quite promising. I did some tests in a live chat and the results are amazing. Now I can use it in two ways:

  • locally
  • on the Hugging Face Hub

The second option seems easier because I only need to swap line 73 with line 74 (see my new code).
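
To make the two options concrete, here is a rough sketch (using google/flan-t5-large as a stand-in repo_id while I wait for Llama-2 access, and assuming the LangChain HuggingFaceHub wrapper, which reads the token from the HUGGINGFACEHUB_API_TOKEN environment variable):

```python
from transformers import pipeline
from langchain.llms import HuggingFaceHub

# Option 1: run the model locally (downloads the weights, needs enough RAM/GPU).
local_llm = pipeline("text2text-generation", model="google/flan-t5-large")
print(local_llm("Question: who is Robinson Crusoe?",
                max_new_tokens=100)[0]["generated_text"])

# Option 2: call the hosted Inference API through the LangChain wrapper.
# This is the part that requires the Hugging Face token
# (read from the HUGGINGFACEHUB_API_TOKEN environment variable).
hub_llm = HuggingFaceHub(repo_id="google/flan-t5-large",
                         model_kwargs={"temperature": 0.5, "max_length": 200})
print(hub_llm("Question: who is Robinson Crusoe?"))
```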

I asked for authorization on the Meta website and I am waiting for approval on the Hugging Face Hub. Is there a way to accelerate the approval?

Just another update.

I tried meta-llama/Llama-2-7b, but no luck. If I use this model I get the error:
“Error raised by inference API: meta-llama/Llama-2-7b does not appear to have a file named config.json”

Looking for a solution on the web, I found I need to use meta-llama/Llama-2-7b-hf, but then I got:
“Model requires a Pro subscription; check out Hugging Face – Pricing to learn more”

It requires a Pro subscription (like ChatGPT). I am going around in circles. Any suggestions?

Good work with that, @sasadangelo. We also made a chatpdf tool; be sure to give us your feedback.

You can use a quantised Llama-2 model provided by TheBloke. For the GPU use case, you can choose the GPTQ model provided here. If you choose to run it on CPU only or CPU + GPU, you can choose the GGUF quantised model here.

To use it, you can refer to the documentation at that link. Also note that I suggest you follow the Llama-2 prompt template (also provided in the model card) for the best answer generation.
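
For example, with a GGUF model it could look roughly like this (llama-cpp-python and the model file name are my assumptions, not the only option; the prompt template is the one from the Llama-2 chat model card):

```python
# Rough example with a GGUF quantised model and llama-cpp-python
# (pip install llama-cpp-python); the model file name is just an example.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# Placeholders: in the chatbot these come from retrieval and user input.
context = "...top-k chunks retrieved from the vector store..."
question = "who is robinson crusoe?"

# Llama-2 chat prompt template from the model card
# (llama.cpp adds the leading BOS token itself).
prompt = (
    "[INST] <<SYS>>\n"
    "Answer the question using only the provided context.\n"
    "<</SYS>>\n\n"
    f"Context: {context}\n"
    f"Question: {question} [/INST]"
)

output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```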
