What is the significance of exposing timesteps in DiffusionPipeline?

Hello everyone, hope you are doing great.
During my experiments, I noticed that DiffusionPipeline takes a timesteps argument, which defaults to None.
Given that num_inference_steps also exists and is used to initialize the timestep schedule (please correct me if I'm wrong), and that the majority of schedulers/samplers seem not to accept a custom schedule through this argument, is it simply deprecated, or am I missing something?
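
For context, here is roughly what I expected to be able to do; the model ID and prompt are just placeholders for my actual setup:

import torch
from diffusers import DiffusionPipeline

text2image = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # placeholder model
# What I expected: pass an explicit schedule instead of num_inference_steps
# (30 points over the first half of the trajectory, 10 over the second)
my_schedule = torch.cat([torch.linspace(0, 0.5, steps=30), torch.linspace(0.5, 1, steps=10)])
image = text2image("a photo of an astronaut", timesteps=my_schedule).images[0]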
I’ve tried LMSDiscreteScheduler, DDIMScheduler, and PNDMScheduler, all to no avail.
Here is my snippet for setting a custom timesteps tensor directly on the scheduler:

import torch
import diffusers as dfs  # text2image below is my already-loaded pipeline

# 30 steps over the first half of the trajectory, 10 over the second
timesteps = torch.cat([torch.linspace(0, 0.5, steps=30), torch.linspace(0.5, 1, steps=10)])
text2image.scheduler = dfs.schedulers.scheduling_ddim.DDIMScheduler.from_config(text2image.scheduler.config)
text2image.scheduler.set_timesteps(timesteps)  # this is where it breaks for me
# ...
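
For what it's worth, the standard integer form works fine for me, which makes me suspect the custom-tensor path just isn't supported by these schedulers (the step count here is arbitrary):

text2image.scheduler.set_timesteps(40)  # plain num_inference_steps, works as expected
print(text2image.scheduler.timesteps)   # the schedule the scheduler derives on its own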

Could someone kindly enlighten me here? Am I using the wrong schedulers, and if so, which scheduler can be used in this situation?
Thanks a lot in advance!