Dependency error when building space: `ImportError: numpy.core.multiarray failed to import `

Hi,

I’ve been trying to make this space work again, to test its capabilities. I forked it here and have updated various details to make it compatible with current versions of HF (somehow this felt easier than finding the correct old versions, but if someone knows how to find those, that’d also be great).

I now encounter an issue I haven’t been able to solve:

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]
0it [00:00, ?it/s]
RuntimeError: module compiled against ABI version 0x1000009 but this version of numpy is 0x2000000
Traceback (most recent call last):
  File "/home/user/app/app.py", line 8, in <module>
    from utils.generate_synthetic import *
  File "/home/user/app/utils/generate_synthetic.py", line 18, in <module>
    from lavis.models import load_model_and_preprocess
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/__init__.py", line 15, in <module>
    from lavis.datasets.builders import *
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/datasets/builders/__init__.py", line 8, in <module>
    from lavis.datasets.builders.base_dataset_builder import load_dataset_config
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/datasets/builders/base_dataset_builder.py", line 18, in <module>
    from lavis.processors.base_processor import BaseProcessor
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/processors/__init__.py", line 10, in <module>
    from lavis.processors.alpro_processors import (
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/processors/alpro_processors.py", line 13, in <module>
    from lavis.processors.randaugment import VideoRandomAugment
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/processors/randaugment.py", line 8, in <module>
    import cv2
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cv2/__init__.py", line 8, in <module>
    from .cv2 import *
ImportError: numpy.core.multiarray failed to import

I don’t know whether pinning a version of NumPy or OpenCV in the requirements.txt might help in this case? Or has anyone come across this before? I’d be grateful for any hint.
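For reference, a quick diagnostic sketch (not the space’s code): the ABI error above is what you get when an extension module built against NumPy 1.x, like this cv2 wheel, is imported under NumPy 2.x.

```python
# Quick check: which NumPy series is installed? Extension modules compiled
# against NumPy 1.x raise exactly this ABI ImportError under NumPy 2.x.
import numpy

major = int(numpy.__version__.split(".")[0])
print(f"NumPy {numpy.__version__} (major version {major})")
if major >= 2:
    print("Wheels built against NumPy 1.x will fail with the ABI ImportError above")
```

If this prints a 2.x version in the Space, a `numpy<2` pin in requirements.txt is the usual workaround.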

Thanks in advance!

2 Likes

Resolving this dependency is tough…
The following settings brought me a little closer to a build that didn’t error, but it timed out.
Something is wrong with how pip resolves the dependencies for salesforce-lavis.

transformers
diffusers
accelerate
numpy<2
opencv-python
openai
joblib
salesforce-lavis
#git+https://github.com/pix2pixzero/pix2pix-zero.git
1 Like

Thanks for this! I’ll post here if I find a solution, but at the moment the easiest way for me to experiment would be on a Mac, and salesforce-lavis is not supported there. Alternatively, I would need to upgrade to Pro to check which versions end up being installed on the HF Space, and try to fiddle with things on the VM directly. The authors look terribly busy, so I have little hope they will answer here.

1 Like

I’m on Windows, so in any case it’s difficult to experiment locally.
In a Pro Zero GPU space the code itself needs quite a bit of tweaking to work, so it seems better to aim for a working CPU space first; if that works, the build should at least pass.
Specifically, the key seems to be finding a way to install that library.
I’ll try it in my spare time and write about it here if I make any progress.

1 Like

Just as a small follow-up, I have done a few tests on a Linux machine with GPU, but so far to no avail, and unfortunately I don’t have the bandwidth to go very deep right now. I opened an issue here, we’ll see if that leads anywhere. It feels like by far the easiest solution would be if one of the original creators could do a simple pip freeze > requirements.txt and update, but I doubt there’s much chance we can grab their attention… Thanks for having looked into this!
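For reference, this is the snapshot I mean — run from the environment where the space actually works, it records the exact installed versions (a sketch; the grep is just to spot-check the pins that matter here):

```shell
# Snapshot the exact installed versions into requirements.txt
pip freeze > requirements.txt
# Spot-check the packages relevant to this thread (|| true: no match is fine)
grep -Ei 'numpy|opencv|transformers|diffusers' requirements.txt || true
```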

1 Like

I’ve tried it since then, but the original library’s dependencies are old. And stubborn (the version range specifications are strict).
It would probably be faster to fork the GitHub repo and make your own…

1 Like

You might be right… Food for thought!

1 Like

It started up somehow.
I had made a mistake with the commit destination and the committed version.:sweat_smile:
Fixed.

2 Likes

Fabulous, thanks so much! So, it runs, but now there’s an error at the end of the generation process:

/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/base_model.py:40: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(cached_file, map_location="cpu")
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/utils/generate_synthetic.py", line 252, in launch_main
    prompt_str = model_blip.generate({"image": _image})[0]
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/blip_models/blip_caption.py", line 188, in generate
    decoder_out = self.text_decoder.generate_from_encoder(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 1360, in generate_from_encoder
    outputs = self.generate(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/utils.py", line 2246, in generate
    result = self._beam_search(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/utils.py", line 3455, in _beam_search
    outputs = self(**model_inputs, return_dict=True)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 1210, in forward
    outputs = self.bert(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 974, in forward
    encoder_outputs = self.encoder(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 592, in forward
    layer_outputs = layer_module(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 475, in forward
    cross_attention_outputs = self.crossattention(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 346, in forward
    self_outputs = self.self(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/lavis/models/med.py", line 219, in forward
    attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
RuntimeError: The size of tensor a (3) must match the size of tensor b (9) at non-singleton dimension 0
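(Side note on the FutureWarning at the top: in a local copy of LAVIS, the `torch.load` call could pass `weights_only=True`. A minimal standalone sketch of the behaviour — not the space’s actual code:)

```python
# Sketch (not the space's code): weights_only=True restricts torch.load to
# plain tensors/containers and silences the FutureWarning quoted above.
import os
import tempfile

import torch

path = os.path.join(tempfile.mkdtemp(), "demo_ckpt.pt")
torch.save({"weight": torch.zeros(2, 3)}, path)

checkpoint = torch.load(path, map_location="cpu", weights_only=True)
print(tuple(checkpoint["weight"].shape))  # -> (2, 3)
```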

Do you have a fork where it runs out of the box? I’m wondering if it’s because of the cache in mine… Don’t worry about this, though; as mentioned, I’m unfortunately not in a position to work on this at the moment :confused:

Congrats for solving the dependency maze!

1 Like

Well, we’ve made progress!
I was aware that there was an error, but on closer inspection it seems to be a problem inside LAVIS itself…:scream:
The original isn’t a CPU-based processing space, so a fork that just works probably doesn’t exist…
And a Zero GPU space, unlike a normal GPU space, would probably also require getting involved in the inference code, just like a CPU space, so this is a problem either way.

Indeed, thanks again for looking into it all. And indeed, this error is quite deep, which makes me wonder what happened between last year, where this space must have worked for a while, and now… I’ll look into it if I find the time!

1 Like

what happened between last year

Compared to then, the version of Diffusers has gone up from 0.11.0 to 0.31.0… the other libraries are similar.
AI-related libraries are updated too quickly, and libraries that aren’t maintained get left behind.:sweat_smile:

I put a copy of LAVIS into the space and modified it by about two lines. I think we’ve reached the point where it works on CPU by modifying the files in the space directly.

The pace of updates everywhere is a bit mental. Congrats on finding a way to make it work — it now runs on CPU thanks to your PR! :pray: :pray:

Sorry for being so late, swamped with too many other things.

1 Like

If it works, OKAY!:grinning:

1 Like

This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.