Fast CPU Inference On Pegasus-Large Finetuned Model -- Currently Impossible?

What I want to try next is running inference on a machine with more CPU cores. I'm currently running on an i9-10980XE @ 3.00 GHz, which has 18 cores (Intel® Core™ i9-10980XE Extreme Edition Processor, 24.75M cache, 3.00 GHz — see the Product Specifications page). Watching glances (similar to top) while inference runs, I see 1800% CPU usage, i.e. all 18 cores saturated. Next I'll run it on a 36-core CPU and benchmark the difference.
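For anyone benchmarking this, it can help to pin the thread counts explicitly rather than relying on library defaults. A minimal sketch, assuming a PyTorch-based pipeline — the environment variables here are the standard OpenMP/MKL knobs, nothing Pegasus-specific, and `18` is just this machine's physical core count:

```python
import os

# Sketch: cap BLAS/OpenMP thread counts BEFORE importing torch/transformers,
# so the backend uses exactly the intended number of cores instead of
# oversubscribing. Adjust "18" to the core count of the machine under test.
os.environ["OMP_NUM_THREADS"] = "18"
os.environ["MKL_NUM_THREADS"] = "18"

# If using PyTorch directly, torch.set_num_threads(18) configures its
# intra-op thread pool the same way (call it before the first forward pass).

print(os.environ["OMP_NUM_THREADS"])  # → 18
```

With the thread count fixed, a simple wall-clock timing of the same input on the 18-core and 36-core machines should give a fair comparison.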
