As part of prototype development, I’m working on a use case where I have a CSV or XLSX input containing data about various incidents. The data has many fields such as Incident ID, Priority, Creation Date, Incident Description, Solved-by Resource, etc. The prototype would take this Excel or CSV file as input and automatically generate inferences like “30% of the incidents were of Critical priority”, or longer inferences like “Overall 25% increase in inflow of incidents, due to an increased inflow of critical-priority incidents in the month of Jan’21”.
So far I have divided the problem into two phases. The first phase involves generating logic/rules from the data (i.e., key facts about the data), and the second phase involves feeding these logic/rules to a T5 text-to-text model to generate inferences.
I was planning to generate the logic/rules in the form of RDF triples, as in WebNLG. For example, if I upload a spreadsheet, the first phase would generate important facts about the data as RDF triples like “25% | Priority | Critical” or “25% | Ticketinflow | Increase && Jan’21 | Priority | Critical”.
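To make the first phase concrete, here is a minimal sketch of extracting priority-distribution facts as triples, assuming pandas and an incident table with a `Priority` column (the column names and the `extract_priority_facts` helper are my own illustrative assumptions, not a fixed schema):

```python
import pandas as pd

def extract_priority_facts(df: pd.DataFrame) -> list[str]:
    """Turn the priority distribution into 'subject | relation | object' triples."""
    shares = df["Priority"].value_counts(normalize=True)
    facts = []
    for priority, share in shares.items():
        facts.append(f"{share:.0%} | Priority | {priority}")
    return facts

# Toy data standing in for the uploaded CSV/XLSX.
incidents = pd.DataFrame({
    "Incident ID": [1, 2, 3, 4],
    "Priority": ["Critical", "Low", "Critical", "Medium"],
})

print(extract_priority_facts(incidents))
```

In the real pipeline this would read the uploaded file (`pd.read_csv` / `pd.read_excel`) and similar rule functions would cover the other fields (month-over-month inflow, resolver workload, and so on).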
These triples would be input to a custom-trained T5 transformer, which would then generate inferences based on the input triples, for example:
Input → “25% | Priority | Critical”
Output → “25% of the incidents were of critical priority”
Input → “25% | Ticketinflow | Increase && Jan’21 | Priority | Critical”
Output → “Overall 25% increase in ticket inflow was observed. This increase was due to a high inflow of critical-priority incidents in the month of Jan’21”
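For feeding such rules to T5, one common choice in WebNLG-style data-to-text fine-tuning is to linearize each triple with marker tokens and a task prefix before tokenization. The `<H>`/`<R>`/`<T>` markers and the prefix below are conventions from that line of work, not anything T5 requires out of the box; a sketch:

```python
def linearize_triples(rule: str, prefix: str = "translate Graph to Text: ") -> str:
    """Turn 'subj | rel | obj && subj | rel | obj' into a single T5 input string."""
    parts = []
    for triple in rule.split("&&"):
        # Each triple is 'subject | relation | object'; strip stray whitespace.
        subj, rel, obj = (field.strip() for field in triple.split("|"))
        parts.append(f"<H> {subj} <R> {rel} <T> {obj}")
    return prefix + " ".join(parts)

print(linearize_triples("25% | Ticketinflow | Increase && Jan'21 | Priority | Critical"))
# translate Graph to Text: <H> 25% <R> Ticketinflow <T> Increase <H> Jan'21 <R> Priority <T> Critical
```

The resulting string would then go through the T5 tokenizer and `model.generate(...)` at inference time, with (linearized triples, reference sentence) pairs used for fine-tuning.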
It would be really kind of you to let me know whether I’m on the right track with this problem, or whether there is a better approach I could use.