Hello, as the title says, I am having problems with complete crashes of the PC I develop on.
Briefly about me: I am a student who wants to build a small analysis framework that uses models from the Hugging Face Transformers library. The specs of my PC are:
Processor: i5-8400, 2.8 GHz
Docker Desktop is installed on it.
In my development environment I have installed PyTorch and the Transformers library, and my program runs smoothly there. To make the whole thing deployable, I use a base image from Docker Hub.
I also built my own Docker image, which gave me the same error with large amounts of data.
I simply use two models in pipelines that I initialize beforehand and then feed with data repeatedly.
Here is the code for my pipelines and how I use them.
```python
from transformers import pipeline

class Model1:
    def __init__(self):
        self.generator = pipeline(model="model1name/")

    def analyze(self, body):
        return self.generator(body)  # .decode('utf-8')

class Model2:
    def __init__(self):
        self.generator = pipeline(model="model2name/")

    def analyze(self, body):
        return self.generator(body)  # .decode('utf-8')
```
```python
m1 = Model1()
m2 = Model2()

if "text" in current_text:
    results.append(m1.analyze(current_text["text"]))
    results.append(m2.analyze(current_text["text"]))
```
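To rule out feeding everything to the pipelines at once, I thought about processing the data in small batches instead. A minimal pure-Python sketch of what I mean (the function name and chunk size are my own illustration, not from my actual code):

```python
def chunked(items, size):
    # Yield successive slices of `items`, each at most `size` elements long,
    # so the pipelines never receive the whole dataset in a single call.
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Example: split five documents into batches of two.
batches = list(chunked(["a", "b", "c", "d", "e"], 2))
```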
I also have a MongoDB instance and use RabbitMQ for data transfer. Unfortunately I can't explain my problem completely, because I can't find any error logs. If I give my system a large amount of data, all my cores run at 100% and after a short time the entire PC simply shuts down. Windows itself doesn't give me an error report.
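One thing I considered trying is capping the CPU thread pools before the libraries load, so the inference can't saturate every core. A sketch of what I had in mind (the value "2" is an arbitrary example for my 6-core i5-8400, not a recommendation):

```python
import os

# Limit the OpenMP / MKL thread pools that PyTorch uses for CPU inference.
# These environment variables are read when torch/transformers are imported,
# so they must be set before those imports happen.
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["MKL_NUM_THREADS"] = "2"
```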
Maybe I did something wrong during initialization, or am I letting the pipelines consume too much computing capacity?