Best way to use a model to extract parameters from a question?

TL;DR: what is the best way to give an LLM additional context about a list of games and categories, so that when a user asks a question it can identify what the user is asking about and extract those properties, almost as parameters?

Hi all! Thank you for humoring my beginner question.

I am familiar with LLMs, though I have never fine-tuned one (and it seems I may have to go that route).

I have a bot I’ve made for a speedrunning community; right now it uses commands (such as !foo). I’d like to move the bot to an LLM, specifically a chat model, so users can ask it questions naturally. It would need to identify and extract key pieces of information from the user’s question: the subject (e.g., “best_time”), the game, and the category (e.g., 100%).

The LLM should return an output similar to the following: {"game": "GAME", "category": "CATEGORY", "subject": "best_time"}. I am unsure if this structure makes sense.
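To make that concrete, here is roughly how I picture wiring the extracted JSON back into my existing command handlers (the handler, game, and category here are all made up for illustration):

```python
import json

# Hypothetical handlers mirroring my current !commands; the names are made up.
def best_time(game, category):
    return f"Best {category} time for {game}: ..."

HANDLERS = {"best_time": best_time}

# The kind of output I'd want the model to produce for:
#   "What's the best 100% time in Super Mario 64?"
raw = '{"game": "Super Mario 64", "category": "100%", "subject": "best_time"}'

params = json.loads(raw)
reply = HANDLERS[params["subject"]](params["game"], params["category"])
print(reply)
```

So the model only has to fill in the three fields; everything downstream stays the same as the command-based bot.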

I have been able to get LLMs to summarize the user’s query into a subject, but they cannot figure out the game and category.

What could I do to improve an LLM’s ability to recognize a game and a category? I have access to all the games and categories that are run, but I would need to construct training data out of that list.
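In case it helps, this is roughly the sort of prompt I’ve been trying: injecting the known games and categories directly so the model only has to pick from a closed list (the lists here are placeholders; mine come from the leaderboard data):

```python
# Placeholder lists standing in for the real game/category data.
GAMES = ["Super Mario 64", "Super Metroid"]
CATEGORIES = ["Any%", "100%"]

def build_prompt(question: str) -> str:
    """Build an extraction prompt that constrains answers to known values."""
    return (
        'Reply with only JSON: {"game": ..., "category": ..., "subject": ...}.\n'
        f"game must be one of: {', '.join(GAMES)}\n"
        f"category must be one of: {', '.join(CATEGORIES)}\n"
        f"Question: {question}"
    )

print(build_prompt("What's the best 100% time in Super Mario 64?"))
```

Even with the allowed values spelled out like this, the game and category fields come back unreliable.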

Do I need to fine-tune? Use embeddings? Token classification? Essentially: what is the best route to give my LLM context about what a game and a category are, so it can more reliably return the information I need? I feel I have exhausted my prompt-tuning options.
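On the embeddings idea, one cheap variant I’ve been sketching is matching the user’s words against the known lists myself and letting the LLM handle only the subject. A minimal stand-in using stdlib difflib instead of real embeddings (the game list is again a placeholder):

```python
from difflib import get_close_matches

# Placeholder data; in my case this comes from the speedrun leaderboards.
GAMES = ["Super Mario 64", "Super Metroid", "Ocarina of Time"]
CATEGORIES = ["Any%", "100%", "Low%"]

def match_known(text, options, cutoff=0.5):
    """Fuzzy-match a user-supplied string against a known list.

    A crude stand-in for embedding similarity: compare lowercased strings
    and return the canonical spelling of the closest option, or None.
    """
    lowered = {opt.lower(): opt for opt in options}
    hits = get_close_matches(text.lower(), lowered.keys(), n=1, cutoff=cutoff)
    return lowered[hits[0]] if hits else None

print(match_known("mario 64", GAMES))   # resolves to the canonical game name
print(match_known("halo", GAMES))       # no close match -> None
```

Would swapping difflib for actual embedding similarity (and keeping the LLM only for the subject) be a reasonable direction, or is fine-tuning still the better bet?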

Thank you!