The $10 withheld for my PRO membership subscription was not refunded

When I subscribed to the PRO membership, a prompt said that $10 would be deducted and then returned after “completion”.

I was charged $10 + $9: the $9 was the membership fee and the $10 was some kind of deposit. My account now shows as a PRO member, but the $10 has not been refunded.

I am a low-income person. When I was working, I earned a meager $22 a day for 8+ hours, writing a management system backend in Go at a small company.

Later I resigned. I am particularly fond of deep learning, especially NLP and embeddings. It is amazing that I can search for images by text using cosine similarity.
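The text-to-image search mentioned above can be sketched very simply: embed the query and the images in the same vector space, then rank by cosine similarity. Below is a minimal NumPy sketch; the embedding values are made up for illustration (in practice they would come from a model such as CLIP), so only the scoring logic is real.

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms;
    # 1.0 means the vectors point in exactly the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings (made up for this sketch, not from any real model):
query = np.array([0.9, 0.1, 0.3])     # text query embedding
image_a = np.array([0.8, 0.2, 0.4])   # image with similar content
image_b = np.array([-0.5, 0.9, 0.0])  # image with unrelated content

scores = {name: cosine_similarity(query, vec)
          for name, vec in [("image_a", image_a), ("image_b", image_b)]}
best = max(scores, key=scores.get)  # the image closest to the query
```

With real embeddings, the same ranking step works unchanged; only the source of the vectors differs.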

However, this cannot change my financial situation.

I plan to open a Space on Hugging Face, but I need a GPU because it hosts an LLM modeled after ChatGPT o1, and a CPU probably cannot run it.

I tried to quantize it but failed: the gguf-my-repo Space reported that the model was not supported. I can only run it on a GPU, so I had to pay for a PRO membership.

Then everything I described at the beginning happened.

I am not begging for pity; I just want to say that I don’t think I did anything wrong. There seems to be a bug in the system: I did not receive the $10 deposit back. My financial situation is not good. If I could earn $100 a day, I would not care about this.

Please look into this. I will keep following up until I receive the deposit I am owed.


@meganariley This may be a fairly fundamental payment issue.


I received a $20 refund from Claude.AI. In theory this wasn’t a program error (judging from the name of the refunding party), but I did receive the $20, which eased my disappointment.

I did stop using Claude.AI’s service, so maybe it refunded me due to some EU laws or something.

It’s also not impossible that a kind-hearted person sponsored me.

At least the refund issue with huggingface.co isn’t so urgent now. I may need to wait patiently for the bank to update. I’ll close this issue in 1–2 days (once I fill the gap from another source, I’ll be able to spend that $10 freely, whether it was a “deposit” or “a month’s subscription fee”).

So sleepy.

Thank God! Thank you all—at least now I have something to be happy about!


Update: I am still following this and have not received any refunds from “huggingface.co” as of now.


I just learned how to use huggingface.co’s inference API. I wonder if I can deploy my own model to it.


The Serverless Inference API was degraded a few months ago due to shared-resource exhaustion, so users have a hard time using it directly.
It is available through Gradio’s gr.load, so it is better to use that method instead.

BTW, PRO is inexpensive for residents of developed countries, but not for those of developing countries. Even though this is difficult to solve, I am concerned about it as a potential loss for our communities. :sweat:

It seems that the models gr.load can load are limited to specific architectures, though it can apparently load models that I trained or fine-tuned myself. I have only a preliminary understanding of it; it seems I need to adapt to Hugging Face Transformers, which is interesting and challenging.

You are right that PRO membership pricing is global, but I should try to find ways to make more money.

No matter where I am, I should strive to move forward. Even in the United States, there are poor people, and poverty is temporary.


I live in a developed country, but around half of the young population in particular has no surplus funds. The percentage of people who find their daily lives easy is probably less than 20% in any country.
I agree with your constructive way of thinking. :grinning:

“the models gr.load can load are limited to specific architectures”

Actually, this is just Gradio calling the Serverless Inference API internally. :sweat_smile:
Excluding Gradio bugs and unsupported or unusable models, you can basically use any model that is potentially available (Warm or Cold) in the Serverless Inference API and less than 10GB in size.
In the Serverless Inference API, the YAML front matter at the top of the model repo’s README.md functions as part of the configuration, so there are some tricks to writing it; but if you copy from someone else’s working model, you should be able to get by.
If you don’t understand the details, please ask me or someone else.
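To make the README point concrete, here is a hypothetical sketch of what that YAML front matter can look like. The field values are illustrative, not copied from any specific working repo; the keys (license, language, pipeline_tag, library_name, tags) are standard Hub model-card metadata fields, and pipeline_tag in particular tells the Inference API which task widget to use.

```yaml
---
# Hypothetical README.md front matter for a model repo (values are examples)
license: apache-2.0
language:
  - en
library_name: transformers
pipeline_tag: text-generation
tags:
  - text-generation
---
```

As suggested above, copying the front matter from a similar model that already works in the Inference API, then adjusting the values, is usually the easiest approach.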


I got my deposit. Cool!


This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.