HF Inference API Defaulting to Stream=True

The Inference API is returning 503 errors for all models. The documentation on the InferenceClient vs. the Inference API is sketchy. Responses come back streamed with no apparent way to add parameters. And calling the same model from two different applications/accounts gives vastly different results. It doesn't seem like the same model…

Application 1 Call 1 : a large cluster of neutron star spheres in space, the cradle of the planet’s cradle is at the center, several smaller stars are scattered around the cradle, the largest sphere is in the foreground on the left side, it’s a dark and mysterious atmosphere, the planet’s cradle is red and white with visible rings, numerous small stars are scattered throughout the background, some stars have a glowing effect, the image has a high level of detail and sharpness, there is no text in the image.

Application 2 Call 1 : A front view of a Jack-in-the-box cyborg clowns performing for a children’s game. The clowns are wearing black shirts and black shorts. They are standing on a cement ground. The one on the left is wearing a black hat and is performing a stunt. It is facing to the right. It is on a rock, and there is a black cyborg in the middle. It is doing a stunt stunt. It is doing a stunt stunt. It is on a rock, and there is a blue cyborg in the middle. It is facing to the right. It is on a rock, and there is a blue cyborg in the middle. It is doing a drill on the rock. It is on a rock. It is blue, and there is a blue cyborg in the middle, and it is doing a trick. It is doing a trick on the rock. It is on a rock. It is on a rock, and there are blue and green t-shirts on. On the left, on the rock, there

Same model and different seeds, yet the outputs show the same kind of repetition across different calls.
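For what it's worth, the serverless Inference API does accept an explicit JSON body with `parameters` and `options`, which sidesteps both the no-parameters problem and (possibly) the repeated outputs, since cached responses can be replayed unless `use_cache` is disabled. A minimal sketch of how one might build such a request body (the model id, seed, and token handling here are hypothetical, and whether `stream` is honored in the body depends on the backend serving the model):

```python
import json

# Hypothetical model id; substitute the model you are actually calling.
API_URL = "https://api-inference.huggingface.co/models/some-model"

def build_payload(prompt: str, seed: int) -> dict:
    """Build a non-streaming Inference API request body with explicit parameters."""
    return {
        "inputs": prompt,
        "parameters": {
            "seed": seed,            # vary per call so outputs are not identical
            "max_new_tokens": 200,
        },
        "options": {
            "use_cache": False,      # avoid getting a cached response replayed
            "wait_for_model": True,  # wait instead of failing with 503 while loading
        },
        "stream": False,             # request a single response, not a stream
    }

payload = build_payload("a large cluster of neutron star spheres in space", seed=42)
print(json.dumps(payload, indent=2))

# The actual call would then be something like (token required):
# requests.post(API_URL, headers={"Authorization": f"Bearer {token}"}, json=payload)
```

If the cached-response theory is right, `use_cache: False` alone may explain why different seeds were producing near-identical outputs.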