Model for Postgres

I have a Postgres database with 200+ tables. Each table contains information about my supply inventory. Some tables also contain JSON columns, including nested JSON. There are relationships between tables, and some of these relationships are based on values in a JSON column of one table pointing to a column in another table.

I used to run an ETL process to flatten the data, then query it and generate reports.

Recently I tried to solve this problem with an LLM. I installed the Llama-2 model on one of my EC2 machines, but I am not able to achieve the results I expected. I want to be able to chat with my database.

I have the following questions in order to complete this activity. It would be great if you could guide me on my next steps and any other relevant information.

  • One of the major problems I face is the token-limit error.
    • I understand that each LLM has a token limit, but I still do not understand whether I really need to explicitly create embeddings for my data, whether I need to normalize or de-normalize the data, or whether something else is required.
  • The existing solution I have created works, but it is not consistent.
    • How can I make my solution more consistent?
  • The existing solution sometimes gives correct results and sometimes incorrect ones, or fails to understand the data.
    • How can I make the model understand my data, for example complex JSON columns, joins, etc.?
  • Increasing data size
    • How can I adapt my model or solution to handle the growing data size in the near future?
  • Is there something to do with the model itself?
    • Is there another model I should try that is better suited to my use case?

These are some of the high-level questions I have. It would be great if someone could help me understand the details.

I appreciate any help here.

You could create embeddings from your data and store them in a database. For each user query, you would then fetch the text closest to it in the embedding space and pass that text to the LLM as context in the prompt. If each embedded passage of text fits within the LLM's token limit, you avoid the token-limit error.
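Here is a minimal sketch of that retrieval step. The `embed_text` function is a toy hashing bag-of-words embedding that stands in for a real embedding model (e.g. one from sentence-transformers), and the passage texts are hypothetical descriptions of your tables:

```python
import math
import zlib
from collections import Counter

def embed_text(text, dim=64):
    """Toy deterministic hashing embedding; replace with a real model."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Index: one embedding per passage, each passage kept small enough
# to fit within the LLM's token limit.
passages = [
    "Table inventory stores item_id, quantity and a JSON attributes column.",
    "Table suppliers links to inventory via a supplier_id key in the JSON.",
    "Reports are generated monthly from the flattened ETL output.",
]
index = [(p, embed_text(p)) for p in passages]

def retrieve(query, k=2):
    """Return the k passages closest to the query in embedding space."""
    q = embed_text(query)
    ranked = sorted(index, key=lambda pv: -cosine(q, pv[1]))
    return [p for p, _ in ranked[:k]]

# The retrieved passages would be prepended to the LLM prompt as context.
context = retrieve("Which JSON key joins inventory to suppliers?")
```

In practice you would precompute the index and store it in a vector store (for example, the pgvector extension inside Postgres itself), and re-embed passages as your data grows.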

Thanks for the response.