Finetuning AI for political analysis and strategising

Hi, I am first of all a political analyst, with only basic knowledge of AI.
I want to experiment with fine-tuning a medium-sized AI model (for example Llama 3.3 70B) for political analysis and strategising. I plan to obtain an NVIDIA DGX Spark with 128 GB of memory (or two devices with 256 GB) for experimentation.
I have a couple of questions to start:

  • Foremost: should fine-tuning (LoRA, e.g. served via LoRAX) be used in this case, with the aim of training different adapters for different functions (analysis, scenario-building, strategising, risk assessment, etc.) and orchestrating their cooperation, or will good context engineering do the job? (See the sketch after this list.)
  • Which medium-sized models are best suited for political analysis and strategising?
  • Would a smaller model (around 30B) be strong enough for this delicate, nuanced reasoning?
  • Where can I find datasets, or simply case studies, in the field of strategising/analysis/scenario-building, etc.?
  • Do you know of any similar research?
    Thank you, I would really appreciate some assistance.
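
For reference, here is a minimal sketch of what I mean by the multi-adapter idea, using the PEFT library. The model id, adapter names, and LoRA hyperparameters are placeholders rather than recommendations, and a 70B base would still need quantization or a second device to fine-tune on a single DGX Spark:

```python
# Minimal multi-adapter sketch with PEFT (placeholders, not a tested recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# One LoRA config per function; each adapter would be trained on its own data.
lora_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])

peft_model = get_peft_model(model, lora_cfg, adapter_name="analysis")
peft_model.add_adapter("scenario_building", lora_cfg)
peft_model.add_adapter("risk_assessment", lora_cfg)

# At inference time, an orchestrator routes each task to the matching adapter.
peft_model.set_adapter("risk_assessment")
```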

If you don’t require human-like responses, a model of around 12B parameters might work. Models known for their reasoning ability tend to be around 32B, so that size range might also be a good choice. However, if resources allow, larger models will offer higher performance.

There seem to be a lot of datasets (both on and off HF), but it will be quite difficult to organize them… (see the sketch below for searching the Hub).
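
As a rough starting point for finding candidate datasets on the Hub, something like the following, using the huggingface_hub client, can list what exists; the search terms are only illustrative and the results would still need manual curation:

```python
# Rough sketch for discovering datasets on the Hugging Face Hub.
# Search terms are illustrative examples, not a curated list.
from huggingface_hub import list_datasets

for term in ("geopolitics", "policy analysis", "scenario planning"):
    print(f"--- {term} ---")
    for ds in list_datasets(search=term, limit=10):
        print(ds.id)
```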

Welcome to posting on HF @Aries789


Is this the right place for such a post? Maybe someone can suggest a better group?


If you can use the Hugging Face Discord, that might be better. Also, for fine-tuning, I recommend Unsloth’s Discord.