Which model is best for code generation under 10 GB?

I am using a Hugging Face token and want to use an AI model for code generation, but my size limit is 10 GB. Top models like DeepSeek are too big.


This?

More likely the one that can write code, but I could be wrong… Obviously joking :grin: Did you find one?


Within 10 GB of VRAM and with no quantization, it’s virtually impossible to use a model larger than about 3B…:sweat_smile:

With 4-bit quantization in GGUF, even a 12B model would be practical within 10 GB, and at that size there are many usable models.
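A quick back-of-the-envelope sketch of why those limits hold, counting weights only. The 4.5 bits/weight figure for a typical 4-bit GGUF quant (e.g. Q4_K_M) is an approximation, and real usage adds KV cache and activation overhead on top:

```python
# Rough VRAM estimate for model weights alone (ignores KV cache and
# activation overhead, which add extra memory in practice).
def weight_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# fp16 (16 bits/weight): a 7B model needs ~14 GB for weights alone,
# so it blows past a 10 GB budget before any overhead.
print(round(weight_size_gb(7, 16), 1))    # 14.0

# ~4.5 bits/weight (approx. Q4_K_M): a 12B model fits in ~6.8 GB,
# leaving headroom for context within 10 GB.
print(round(weight_size_gb(12, 4.5), 1))  # 6.8
```

By the same arithmetic, an unquantized fp16 3B model is ~6 GB, which is roughly the ceiling for a 10 GB card once overhead is included.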

With the 10 GB limit I didn’t find any model that can solve tricky problems without missing edge cases; I am currently using Mistral-7B-Instruct-v0.3.
