Understanding Spaces pricing for a small model

We have a small model we would like to deploy alongside a publication. It’s small enough that CPU inference is acceptable, if not fast (although we trained on GPU). On the free tier we get 2 vCPUs. I assume that is the total for the Space, i.e. if 3 users try to use our Space at the same time, they will share just those 2 vCPUs (so it will slow down)?
If we pay for the “CPU Upgrade” to get 8 vCPUs, does each concurrent user get 8 vCPUs, or are the 8 vCPUs shared among all concurrent users? In the latter case, would we effectively need to pay per concurrent user?