Accessing a model through an Inference Endpoint

So if I’m understanding correctly, the computational power of the machine running the application won’t matter, as long as it can fetch the image output from the endpoint? I just want to get the generated image into a simple Flask application without downloading the model locally. Are there any restrictions, and do I need to consider the paid tier just for a personal project?
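To make the setup concrete, here is a minimal sketch of what I have in mind: a Flask route that forwards a prompt to a hosted endpoint and streams back the returned image bytes, so all the heavy computation happens remotely. The `ENDPOINT_URL` and `HF_TOKEN` values are placeholders for my own endpoint and access token, and I'm assuming the endpoint returns raw image bytes for a text-to-image model:

```python
from io import BytesIO

import requests
from flask import Flask, request, send_file

# Placeholders: substitute your own endpoint URL and access token.
ENDPOINT_URL = "https://my-endpoint.example.com"
HF_TOKEN = "my-access-token"

app = Flask(__name__)


def build_request(prompt):
    """Construct the headers and JSON payload for the endpoint call."""
    headers = {"Authorization": f"Bearer {HF_TOKEN}"}
    payload = {"inputs": prompt}
    return headers, payload


@app.route("/generate")
def generate():
    prompt = request.args.get("prompt", "")
    headers, payload = build_request(prompt)
    # The remote endpoint does the actual generation; this machine
    # only sends the prompt and relays the resulting bytes.
    resp = requests.post(ENDPOINT_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return send_file(BytesIO(resp.content), mimetype="image/png")
```

So the local machine never loads model weights at all; it just needs enough resources to run Flask and handle the HTTP round trip.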