Looking for a solution for training my own LLM

I am a beginner in this field and I'm looking for guidance on how I can train my own LLM.

Over the past few months I have built a chatbot using OpenAI with prompt engineering, LangChain, and Pinecone as a vector database to store my personal data.

But now I need some help on how to do this without prompt engineering or a vector database. How can I train my own model and get results from it?

Can anyone help me out here with some insights on this, such as which base model I should try and how much data I would need?

I’m also new to :hugs: and interested in training a model.

Somewhat related: I wrote a short article that disambiguated the different methods for me. It's very basic, but it might be useful:
https://brainwavelabs.substack.com/p/what-does-it-mean-to-train-an-ai

I wonder if you might find fine-tuning helpful. I'm sure there are much better (and probably cheaper) options here with :hugs:. Maybe the training isn't that difficult, but OpenAI has a pretty straightforward approach for fine-tuning:
https://platform.openai.com/docs/guides/fine-tuning
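
To give a rough idea of what that looks like, here's an untested sketch using the OpenAI Python SDK (v1.x). The file name, base model, and training-data format are placeholders; check the guide above for the exact JSONL format they expect:

```python
# Rough sketch of kicking off an OpenAI fine-tuning job (Python SDK v1.x).
# "train.jsonl" and the base model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a base model that supports fine-tuning
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# You can check on it later with client.fine_tuning.jobs.retrieve(job.id)
print(job.id, job.status)
```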

There are lots of tutorials out there depending on your favorite language, and I've seen no-code options as well.
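
If you'd rather stay on the :hugs: side, a minimal causal-LM fine-tuning run with the Trainer API could look something like the sketch below. This is only an illustration: the base model ("gpt2"), the data file ("my_corpus.txt"), and the hyperparameters are all placeholders you would adapt to your own data.

```python
# Minimal sketch: fine-tune a small causal LM on a plain-text corpus
# with Hugging Face Transformers. All names and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small base model; swap in any causal LM you prefer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes a plain-text file with one training example per line
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Collator builds labels for causal language modeling (mlm=False)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="my-finetuned-model",
    per_device_train_batch_size=2,
    num_train_epochs=3,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("my-finetuned-model")
```

After training, you can load the saved model with AutoModelForCausalLM.from_pretrained("my-finetuned-model") and generate from it like any other checkpoint.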

Hi Abhishek, did you figure out how to achieve your goal? I need to train a model using our local tool …
Thanks,
regards