Good afternoon. I'm requesting assistance. I'm working on a program for querying documents using LangChain and Hugging Face on Domino Lab, and I've already loaded the Hugging Face embedding model and the Hugging Face LLM into the Lab.
I was able to test the embedding model, and everything is working properly.
However, since the embedding model is local, how do I call it in the following code?
# imports
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# load the document
loader = PyPDFLoader("INC.PC.001.ITG_Incident_Process_Sheet_EN 2022-07-29 (2).pdf")
documents = loader.load()
# split the documents into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
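For intuition, CharacterTextSplitter with chunk_size=1000 and chunk_overlap=0 cuts the text into non-overlapping windows of at most 1000 characters. A dependency-free sketch of that idea (the helper name split_text is mine, not LangChain's, and LangChain's real splitter also prefers to break on separators such as blank lines):

```python
# Illustrative fixed-size chunking, assuming chunk_overlap < chunk_size.
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 0) -> list[str]:
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("a" * 2500, chunk_size=1000)
print(len(chunks))      # 3 chunks: 1000 + 1000 + 500 characters
print(len(chunks[-1]))  # 500
```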
# select which embeddings we want to use (this was my issue: how to call the local model)
# HuggingFaceEmbeddings wraps a sentence-transformers model; model_name can be a
# Hub name or a local path, so the already-downloaded local copy is used
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
# create the vectorstore to use as the index
db = Chroma.from_documents(texts, embeddings)
# expose this index in a retriever interface
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
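For intuition, the "similarity" retriever with k=2 ranks the stored chunk vectors by similarity to the query vector and returns the two closest. A dependency-free sketch of that ranking using cosine similarity (the function names here are mine, not Chroma's API):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    # indices of the k document vectors most similar to the query, best first
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

docs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(top_k([1.0, 0.0], docs, k=2))  # [0, 2]
```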
# create a chain to answer questions
qa = RetrievalQA.from_chain_type(
    llm=local_llm, chain_type="refine", retriever=retriever, return_source_documents=True)
query = "who is frederic Strauss?"
result = qa({"query": query})
To test the sentence embedding model, I used the following code: