Installation failed: Lumina-mGPT-2.0

In the YouTube videos it always looks very simple, but when I try to install things myself I keep getting error messages, some of them printed in red. For example, when installing Lumina-mGPT-2.0, I get the output below. I would be grateful if someone could show me exactly how to deal with these messages.

ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
f5-tts 0.6.2 requires accelerate>=0.33.0, but you have accelerate 0.32.1 which is incompatible.
ltx-video 0.1.2 requires huggingface-hub~=0.25.2, but you have huggingface-hub 0.29.2 which is incompatible.
ultralytics 8.3.56 requires numpy>=1.23.0, but you have numpy 1.22.0 which is incompatible.
Successfully installed Ninja-1.11.1.4 fairscale-0.4.13 gradio-4.19.0 gradio-client-0.10.0 h5py-3.13.0 pathlib-1.0.1 socksio-1.0.0 torch-2.3.0 torchao-0.9.0 torchaudio-2.3.0 torchvision-0.18.0

E:\Lumina-mGPT-2.0\Lumina-mGPT-2.0>pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl --no-build-isolation
ERROR: flash_attn-2.7.4.post1+cu12torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl is not a supported wheel on this platform.


In this case, it’s actually a little difficult. These libraries are upgraded quickly, and the situation may have changed since the video was made…
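Since versions drift so fast, a useful first step is to check what is actually installed right now, starting with the packages pip’s resolver complained about. A minimal sketch using only the standard library (the package list is just the ones from your log):

```python
from importlib.metadata import PackageNotFoundError, version

# Print the installed version of each package that pip's resolver
# flagged, so you can see which ones need upgrading or downgrading.
for pkg in ["accelerate", "huggingface-hub", "numpy", "torch"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

Comparing that output against the requirements in the error messages (e.g. `accelerate>=0.33.0`) tells you exactly which pins are in conflict.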

Especially on Windows, installing AI-related Python libraries is the first hurdle. In a bare system-wide Python installation it can easily happen that fixing one library breaks another. That is why a virtual environment such as Conda or venv is usually recommended; the Lumina-mGPT-2.0 GitHub page also describes a Conda-based setup.
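The flash_attn error in your log is a concrete example of the Windows hurdle. By the wheel filename convention (PEP 427), the last three dash-separated fields before `.whl` are the Python tag, ABI tag, and platform tag; `linux_x86_64` marks a Linux-only build, so pip on Windows correctly rejects it. A small sketch that decodes those fields (the helper name is my own):

```python
# Decode the tags at the end of a wheel filename, per PEP 427:
# name-version[-build]-pythontag-abitag-platformtag.whl
def wheel_tags(filename):
    stem = filename[: -len(".whl")]
    python_tag, abi_tag, platform_tag = stem.split("-")[-3:]
    return python_tag, abi_tag, platform_tag

print(wheel_tags(
    "flash_attn-2.7.4.post1+cu12torch2.3cxx11abiFALSE"
    "-cp310-cp310-linux_x86_64.whl"
))  # ('cp310', 'cp310', 'linux_x86_64')
```

So that wheel requires CPython 3.10 (`cp310`) on Linux x86-64. To install flash-attention on Windows you would need a wheel whose platform tag is `win_amd64` and whose Python tag matches your interpreter, or you would have to build it from source; as far as I know, the official release wheels in that GitHub repository are built for Linux only.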

If you only ever run one piece of software in a given Python installation, a virtual environment is not strictly necessary.
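Either way, a sanity check that often saves time: confirm which interpreter pip is actually installing into, and what platform this interpreter was built for. A minimal standard-library sketch:

```python
import sys
import sysconfig

# The interpreter that is actually running. Inside an activated Conda
# env or venv this path should point into the environment directory,
# not the system-wide Python installation.
print(sys.executable)

# The platform this interpreter was built for, e.g. 'win-amd64' on
# 64-bit Windows or 'linux-x86_64' on Linux. Wheels built for a
# different platform cannot be installed here.
print(sysconfig.get_platform())
```

Running pip as `python -m pip install ...` guarantees it targets the same interpreter reported by `sys.executable`, which avoids a common source of "installed but not found" confusion.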