Annif - toolkit for multilabel text classification

We are pleased to announce the release of Annif 1.1!

Annif is a multi-algorithm automated subject indexing tool intended for libraries, archives and museums. It suggests subjects or topics from a predefined vocabulary, which can be a thesaurus, an ontology or just a list of subjects. The vocabulary can be large, tens of thousands of subjects or even more, so the task Annif performs can be called extreme multilabel classification.

Annif uses traditional machine learning techniques rather than LLMs, which makes inference very fast: it typically suggests subjects for a text the length of a PDF of tens of pages in less than one second. Annif has a CLI for administrative tasks and a REST API for end users. Its development started and continues at the National Library of Finland, but everyone is welcome to join in!

Regarding Hugging Face, Annif 1.1 introduced the annif upload and annif download commands, which can be used to push and pull a selected set of projects and vocabularies to and from a Hugging Face Hub repository.
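As a rough sketch (the repository name and project ID pattern below are placeholders; check annif upload --help for the exact argument order):

    # push projects matching a pattern, with their vocabularies, to a Hub repository
    annif upload "yso-*" my-org/annif-models

    # pull the same projects and vocabularies on another machine
    annif download "yso-*" my-org/annif-models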

Check out these resources:

PS Maybe someone could forward my message above to the HF posts feed? I’m still on the waitlist for it, so I can’t post there myself.

Annif 1.2 has been released!

This release introduces language detection capabilities in the REST API and CLI, improves :hugs: Hugging Face Hub integration, and also includes the usual maintenance work and minor bug fixes.

The new REST API endpoint /v1/detect-language expects POST requests containing a JSON object with the text whose language is to be analyzed and a list of candidate languages. Similarly, the CLI has a new command, annif detect-language. Annif projects are typically language-specific, so a text in a given language needs to be processed with a project intended for that language; the language detection feature can help with this. For details, see this Wiki page. The language detection is performed with the Simplemma library by @adbar et al.
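As a rough sketch of both interfaces (the JSON field names, the port and the CLI argument order are my assumptions; see the Wiki page and annif detect-language --help for the exact forms):

    # REST API: POST the text and a list of candidate languages as JSON
    curl -X POST http://localhost:5000/v1/detect-language \
         -H "Content-Type: application/json" \
         -d '{"text": "A quick brown fox jumps over the lazy dog", "languages": ["en", "fi", "sv"]}'

    # CLI: detect the language of a document among the given candidates
    annif detect-language en,fi,sv document.txt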

The annif download command has a new --trust-repo option, which needs to be used if the repository to download from has not been used previously (that is, if the repository does not appear in the local Hugging Face Hub cache). This option was introduced to raise awareness of the risks of downloading projects from the internet; projects should only be downloaded from trusted sources. For more information, see the Hugging Face Hub documentation.
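For example (repository name and project pattern are again placeholders), the first download from a previously unused repository could look like this:

    # required on the first download from a repository not yet in the local cache
    annif download "yso-*" my-org/annif-models --trust-repo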

This release also automates the download of the NLTK data package used for tokenization, which simplifies Annif installation. Maintenance work includes dependency upgrades, among them a new version of Simplemma that allows better control over memory usage. Bug fixes include restoring the --host option of the annif run command.

Python 3.12 is now fully supported (previously the NN ensemble and STWFSA backends were not supported on Python 3.12).

Supported Python versions:

  • 3.9, 3.10, 3.11 and 3.12

Backward compatibility:

  • NN ensemble projects trained with Annif v1.1 or older need to be retrained.
  • For other projects, any warnings emitted by scikit-learn are harmless.

:question: Have you been using Annif for subject indexing or classification? What do you think of it? :thinking:

We’re interested in your feedback!

:clipboard: The Annif users survey is open until November 30: https://forms.gle/P7jGoPMbEAJnD9zw9

Annif 1.3 has been released!

This release introduces a new EstNLTK analyzer, improves the performance of the MLLM backend and fixes minor bugs.

The key enhancement of this release is a new analyzer for lemmatization using EstNLTK, which supports the Estonian language. This analyzer needs to be installed separately; see Optional features and dependencies in the Wiki. Note that the indirect dependencies of EstNLTK are quite large, requiring around 500 MB of libraries.
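A minimal sketch of taking the analyzer into use, assuming the optional extra and the analyzer identifier are both named estnltk (the Wiki page has the authoritative names):

    # install Annif with the optional EstNLTK support
    pip install "annif[estnltk]"

    # then select the analyzer for an Estonian project in projects.cfg:
    #   language=et
    #   analyzer=estnltk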

Another improvement is the optimization of the ambiguity feature calculation in the MLLM algorithm. Previously, this calculation could be slow, especially with a large number of matches when using a large vocabulary such as GND. The optimization addresses the quadratic nature of the ambiguity calculation and is expected to greatly reduce the processing time of some documents.

This release also includes maintenance updates and bug fixes. The file permissions issue, where Annif did not adhere to the umask setting for data files, has been resolved, which makes it easier to use Annif in multiuser environments.

Supported Python versions:

  • 3.9, 3.10, 3.11, and 3.12

Backward compatibility:

  • Projects trained with Annif v1.2 continue to work.

Last November we organized a survey for Annif users, and now the results have been published in the Doria repository of the National Library of Finland: https://www.doria.fi/bitstream/handle/10024/190930/Annif%20Users%20Survey.pdf

The report includes an overview of:

  • The vocabularies and datasets that are used with Annif
  • The workflows that Annif is integrated with
  • The problems Annif users are facing

The report also shows the average ratings users gave for various aspects and features of Annif. On a scale from 1 to 5, the ratings are:

  • Overall: 4.4
  • Features and functions: 4.1
  • Documentation: 4.5
  • Smoothness of initial setup: 4.2
  • Usability: 4.4
  • Achieved quality of subject suggestions: 3.6

The survey also gathered user views on desired improvements and new features, which are briefly discussed in the report.
