We are working on a Question Answering model that, given any natural-language question, should infer a SPARQL query to run against a knowledge graph stored in GraphDB.
In our case, these queries are quite complex and contain many characters.
So far we have been using BART, trying to understand whether it suits our needs.
However, after fine-tuning it both on the "lc_quad" dataset and on a custom dataset specifically designed for our Knowledge Base (ca. 50,000 training examples), BART does not seem able to predict SPARQL queries correctly. Even when the first part of a query is correct, at some point the syntax breaks, and the resulting query cannot be executed against the DB.
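To make the failure mode concrete, here is a minimal sketch of the kind of naive well-formedness check we use to count broken generations. It only verifies balanced braces/parentheses and the presence of a SPARQL query-form keyword, not the full grammar; the helper name and example strings are ours, for illustration:

```python
# Naive well-formedness check used to count broken generations.
# It only tests for balanced ()/{}  and a query-form keyword --
# it is NOT a full SPARQL parser.
def looks_like_valid_sparql(query: str) -> bool:
    forms = ("SELECT", "ASK", "CONSTRUCT", "DESCRIBE")
    if not any(f in query.upper() for f in forms):
        return False
    pairs = {")": "(", "}": "{"}
    stack = []
    for ch in query:
        if ch in "({":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # True only if everything was closed

# Typical failure mode we observe: the query starts correctly,
# then degenerates before the closing brace.
good = "SELECT ?x WHERE { ?x rdf:type dbo:City }"
bad = "SELECT ?x WHERE { ?x rdf:type dbo:City . ?x dbo:"
print(looks_like_valid_sparql(good))  # True
print(looks_like_valid_sparql(bad))   # False
```

Roughly, the second case is what we see: the model's output passes the first tokens but leaves clauses unterminated, so GraphDB rejects it with a parse error.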
Is BART not suitable for complex text generation?
To tackle this issue, could anybody suggest an architecture that, after fine-tuning on suitable labeled data, could successfully infer SPARQL queries from natural-language questions?
For instance, would the T5 model perform better than BART?
Many thanks for any hint!