Use embeddings stored in a vector DB to reduce work for the LLM when generating a response

I’m trying to understand the correct strategy for storing and using embeddings in a vector database alongside an LLM. My goal is to reduce the amount of work the LLM has to do when generating a response. (Think of a RAG implementation where I’ve stored text, embeddings I’ve created using an LLM, and metadata about the text.) I then want to generate responses to queries about the data using, say, an OpenAI model, and I don’t want to spend a bunch of money and time chunking up the text and creating embeddings for it every time I want to answer a query about it.

Say I create a vector database, for example a Chroma database, use an LLM to create embeddings for a corpus I have, and save those embeddings into the vector database along with the text and metadata. Would the database use those embeddings I created to find the relevant text chunks, or would it make more sense for the vector database to use its own query process to find the relevant chunks (not using the embeddings the LLM created)? A rough sketch of the flow I have in mind is below.
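
For context, here is roughly what I mean, just a sketch assuming the `chromadb` and `openai` Python packages; the model names, collection name, and chunks are placeholders:

```python
import chromadb
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection(name="my_corpus")

# One-time ingestion: chunk the corpus, embed each chunk once, store everything.
chunks = ["chunk one of my corpus...", "chunk two of my corpus..."]  # placeholder chunks
embeddings = [
    d.embedding
    for d in openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    ).data
]
collection.add(
    ids=[f"chunk-{i}" for i in range(len(chunks))],
    embeddings=embeddings,  # the embeddings I created, stored alongside...
    documents=chunks,       # ...the raw text...
    metadatas=[{"source": "my_corpus"} for _ in chunks],  # ...and metadata
)

# At query time: embed only the query, then have Chroma search the stored embeddings.
query = "What does the corpus say about X?"
query_embedding = openai_client.embeddings.create(
    model="text-embedding-3-small", input=[query]
).data[0].embedding
results = collection.query(query_embeddings=[query_embedding], n_results=3)
relevant_chunks = results["documents"][0]  # the text of the nearest chunks
```

Is that the right idea, i.e. the corpus embeddings are computed once at ingestion time and only the query gets embedded afterwards?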

Also, do I want to pass the embeddings from the vector database to the LLM to generate the response, or do I pass the text that the vector database found most relevant, along with the original text query, to the LLM so it can then generate a response?
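
In other words, is the second half of the flow something like this (again just a sketch continuing from the one above; the model name and prompt wording are placeholders), where only the retrieved text plus my original question goes to the chat model, and the embeddings themselves never leave the database step?

```python
# Pass the retrieved text (not the raw embeddings) plus the original
# question to the chat model to generate the final answer.
context = "\n\n".join(relevant_chunks)
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Answer the question using only the context provided.",
        },
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ],
)
print(response.choices[0].message.content)
```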
