Why Kayra-1 exists: a small Turkish model experiment

Kayra-1 is a ~100M parameter Turkish language model.

This project does NOT aim to compete with large LLMs.

Its goal is to study how small Turkish models behave under instruction tuning,

where they fail, and how much improvement is possible with limited resources.

Current observations:

- Simple factual questions often succeed.

- Open-ended questions often trigger hallucinations.

- Tokenization issues are visible in morphologically complex Turkish words.

- Reasoning is weak by design.

Kayra-1 is intentionally kept small to make iteration fast

and improvements measurable.

This model is experimental and shared openly

to document the learning process.


What about the model link? Is this the right one: sixfingerdev/kayra-1 · Hugging Face?


Yes, that’s correct. The official link to the Kayra-1 model is: sixfingerdev/kayra-1 · Hugging Face.

Just to clarify, there are two versions:

- Kayra-1: instruction-tuned

- Kayra-1-exp: not instruction-tuned, experimental
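For anyone who wants to try either version from Python, here is a minimal sketch using the `transformers` `pipeline` API. The main repo id comes from this thread; the repo id for the experimental version, the sample prompt, and the generation settings are assumptions, not confirmed details:

```python
# Minimal sketch for trying Kayra-1 locally with the Transformers library.
# MODEL_ID is taken from this thread; MODEL_ID_EXP is assumed from the
# version name, and max_new_tokens is an illustrative setting.
from transformers import pipeline

MODEL_ID = "sixfingerdev/kayra-1"          # instruction-tuned version
MODEL_ID_EXP = "sixfingerdev/kayra-1-exp"  # assumed repo id for the experimental version

def generate(prompt: str, model_id: str = MODEL_ID) -> str:
    """Run a short text-generation pass on the chosen Kayra-1 variant."""
    generator = pipeline("text-generation", model=model_id)
    return generator(prompt, max_new_tokens=50)[0]["generated_text"]

if __name__ == "__main__":
    # "Türkiye'nin başkenti neresidir?" = "What is the capital of Türkiye?"
    print(generate("Türkiye'nin başkenti neresidir?"))
```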


You can test it at https://sixfingerdev-sixfinger-api.hf.space/
