Unable to run ASR on a Raspberry Pi 4

Hello, I’m trying to run "facebook/wav2vec2-base-960h" on a Raspberry Pi 4 with 8 GB of RAM and a 64-bit quad-core Cortex-A72 processor. But every time I run the model, it uses the full 8 GB of RAM and crashes after a few seconds.

The code runs fine on my computer, which has 32 GB of RAM and an Intel i7-9700F. There it never uses more than 50% of the CPU or 2 GB of RAM.

I’m wondering if this is simply a limitation of the Raspberry Pi’s processing power, or whether I’m missing some optimisation.

This is the code:

        import librosa
        import torch
        from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

        # Load the pretrained processor and model once.
        self.processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
        self.model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

        # Resample the audio file to the 16 kHz rate the model expects.
        input_audio, _ = librosa.load(fname, sr=16000)
        input_values = self.processor(input_audio, return_tensors="pt",
                                      sampling_rate=16000).input_values

        # Forward pass, then greedy CTC decoding.
        logits = self.model(input_values).logits
        predicted_ids = torch.argmax(logits, dim=-1)
        transcription = self.processor.decode(predicted_ids[0]).lower()

I’m using PyTorch 1.10 and transformers 4.16.
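One detail worth checking (an assumption on my part, not something confirmed by the snippet above): the forward pass runs with autograd enabled, so PyTorch records a computation graph and keeps intermediate activations alive, which raises peak memory during inference. Wrapping the forward pass in `torch.no_grad()` avoids that. A minimal sketch with a toy module standing in for the Wav2Vec2 model:

```python
import torch

# Toy stand-in for self.model(input_values); the same principle applies
# to the real Wav2Vec2ForCTC forward pass.
model = torch.nn.Linear(16, 4)
x = torch.randn(1, 16)

# Default: autograd tracks the graph, holding extra activations in memory.
y_train = model(x)
print(y_train.requires_grad)  # True

# Inference only: no graph is recorded, so peak memory stays lower.
with torch.no_grad():
    y_infer = model(x)
print(y_infer.requires_grad)  # False
```

Whether this alone fits the model into the Pi’s memory budget I can’t say, but it is a cheap change to try first.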

Hello vollebreggie,
I managed to run the same model on a Raspberry Pi 3B+ with 1 GB of RAM (plus swap extended to 512 GB). My code is almost the same as yours.
I’m using PyTorch 1.5 and transformers 4.23.
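For reference, on Raspberry Pi OS swap is usually managed by the dphys-swapfile service, so extending it is a config change. A sketch, assuming that service is present (the 2048 MB size here is an illustrative value, not the one used above):

```shell
# Assumes Raspberry Pi OS with the dphys-swapfile service installed.
sudo dphys-swapfile swapoff
# Set the swap size (in MB) in the service's config file.
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```

Note that heavy swapping on an SD card is slow and wears the card, so it is a workaround rather than a fix.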