Looking for help finetuning Llama2

Note: I am doing all of this work inside SageMaker.

I am attempting to finetune a Llama2 model with some custom company information. As a proof of concept, I created some fake data to train the model on. I believe I have the data format correct: a CSV file with a single column (‘text’). The data looks like this:

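For reference, this is roughly how I am sanity-checking the shape of the file before uploading it (the file name below is just a placeholder for my local copy):

```python
import pandas as pd

# Placeholder name -- the real file gets uploaded to my S3 training bucket.
df = pd.read_csv("train.csv")

# What I expect to see: a single 'text' column and 1000 rows.
print(df.columns.tolist())   # ['text']
print(len(df))               # 1000
print(df["text"].iloc[0])    # one example row
```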
The fake data has 1000 rows, each in the same format. I have successfully finetuned both llama2-7b and llama2-70b (three epochs) within SageMaker on this data (and by ‘successfully’ I mean that the training finished without error). However, I cannot access any of this ‘new’ information through the finetuned model. I would like to pose questions such as “How many circuits are in King County?” and “What is the total length of circuit C25?”, but the model just gives a generic “I don’t have this information” response.
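This is roughly the setup I am using, pieced together from the JumpStart example notebook. The S3 path is a placeholder, and the exact hyperparameter names and EULA handling may differ a bit depending on the SDK and model version:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Placeholder S3 path to the folder containing train.csv
train_data_location = "s3://my-bucket/llama2-finetune/train/"

# Domain-adaptation fine-tuning of the 7B model via JumpStart.
# Hyperparameter names are as I understand them from the JumpStart
# example notebook and may vary by model/SDK version.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},
)
estimator.set_hyperparameters(
    instruction_tuned="False",  # plain text rows, not prompt/response pairs
    epoch="3",
)
estimator.fit({"training": train_data_location})

# Deploy the fine-tuned model and ask one of the questions above.
# (Depending on the model version, the predict call may also need
# custom_attributes="accept_eula=true".)
predictor = estimator.deploy()
response = predictor.predict({
    "inputs": "How many circuits are in King County?",
    "parameters": {"max_new_tokens": 128},
})
print(response)
```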

I know I am doing something fundamentally wrong here, but I just can’t figure out what. Is it the data format? Do I need more examples? Am I completely missing the point of finetuning? I know that people generally finetune to get a model to respond in a given tone, or to follow specific instructions, but I don’t need either of those. I just need it to have access to some new information. RAG is not the correct solution here, because I want to be able to ask summary questions of the new data, rather than simply look up facts contained in it.

Any help you may have is much appreciated. Thanks in advance.