I hope that this is not viewed as an inappropriate post.
I am new to the Hugging Face Hub, and I know I’m not the only one struggling with the various instruction videos, articles, and tutorials that are made by programmers, for programmers. The information they present presumes a level of understanding that some people don’t have.
There is a distinct lack of help for people who are interested in learning how to operate the models, but don’t have the knowledge, tools, and/or time needed to even get a foothold and stay interested in learning more.
Could someone please point to information geared toward people who are starting with a computer, an internet connection, and a desire to learn how to operate before we learn how to tinker? Not just for me, but for everyone else who wants to participate.
So, a critical tidbit of information that I have come across: when using Python, the pip command is not entered at the Python interpreter prompt, but at the command shell window you have to open manually. The more you know.
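A concrete way to see the difference: the sketch below (standard library only) invokes pip *through* the Python interpreter with `-m pip`, which works from any shell regardless of how Python was installed. Typing `pip install something` at the `>>>` interpreter prompt is a syntax error, because pip is a shell command, not Python code.

```python
import subprocess
import sys

# From the shell, the most reliable form is `python -m pip ...`.
# sys.executable is the path of the Python interpreter currently running,
# so this runs pip for exactly that installation:
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # e.g. "pip 24.0 from ... (python 3.11)"
```

Using `python -m pip` instead of bare `pip` also sidesteps most PATH problems, since only `python` itself needs to be findable.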
Even for coders, it’s not always clear where to start with Hugging Face! It’s useful once you get used to it, but no one knows how to get used to it, and the fact that it’s basically DIY can cause problems from time to time…
As I mentioned in the post below, your purpose will determine whether you approach Hugging Face as something to learn or simply something to use. If you just want to use models, you can treat Hugging Face as a simple free storage space and drive everything from a GUI tool. (Stable Diffusion has WebUI and ComfyUI, LLMs have Open WebUI and LM Studio…)
There are also training scripts with GUIs that use HF resources, so even if you can’t code, it doesn’t mean you can’t be creative. It’s also good to use generative AI as a tool to do other creative things.
For that matter, developers are helped just by users reporting accurate bugs and giving feedback.
Thanks for the response. I’m currently looking at using it while I learn the Python language, with the eventual goal of understanding how to tweak models as indicated by the repo owner.
The current analogy that I’m using to describe the situation I have run into repeatedly:
“I want to learn how to operate a car.”
“Awesome. The first thing you need to do is put the key in the ignition and turn on the car.”
“Ignition? I don’t even know how to open the door!”
The next step of my journey: I have discovered that if the pip command fails at the command prompt, you may have failed to tell the installation wizard to modify the PATH variable so that pip can be used from anywhere. Your options are: work your way through the command prompt to the Python folder, then to the Scripts folder inside it, and run pip from that folder every time you want to use the command; uninstall Python and reinstall it with the PATH variable modification selected; or work the computer’s properties settings to manually edit the Path variable.
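If you’re not sure whether the Scripts folder ever made it onto PATH, Python itself can tell you. This is a small standard-library sketch; `sysconfig.get_path("scripts")` is the folder where pip’s executable is installed (`Scripts\` on Windows, `bin/` elsewhere).

```python
import os
import sysconfig

# Folder where pip and other console scripts are installed:
scripts_dir = sysconfig.get_path("scripts")

# Compare it against every entry of the PATH environment variable
# (normcase makes the comparison case-insensitive on Windows):
path_entries = os.environ.get("PATH", "").split(os.pathsep)
on_path = any(os.path.normcase(p) == os.path.normcase(scripts_dir) for p in path_entries)

print("Scripts folder:", scripts_dir)
print("On PATH:", on_path)
```

If it prints `On PATH: False`, that folder is what you would add when manually editing the Path variable.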
Good news though: I now have the transformers library, the NVIDIA CUDA drivers, and PyTorch installed. Progress!
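A quick way to confirm libraries really installed is to try importing them. This is a hedged sketch; the module names checked are just the ones mentioned above, and nothing breaks if one is missing.

```python
def installed_version(module_name):
    """Return the module's version string, or None if it isn't installed."""
    try:
        module = __import__(module_name)
    except ImportError:
        return None
    return getattr(module, "__version__", "unknown")

for name in ("torch", "transformers"):
    version = installed_version(name)
    print(f"{name}: {version if version else 'not installed'}")

# If torch is present, this one-liner also tells you whether it sees the GPU:
#   import torch; print(torch.cuda.is_available())
```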
First of all, if you are only using models for inference and training, you need to know just a small part of Python syntax.
When you copy, paste, and run the sample code from a model card, you will only need to tweak the parameters a little, and you should be fine for a while.
It may be more difficult to install Python itself, the CUDA Toolkit, and the libraries. When the model changes, the required library versions change…
I mainly use Hugging Face Spaces (each one is its own virtual environment) for AI, so I don’t need a local one very often, but if you mainly work locally, you should very likely learn how to use virtual environments.
Getting a matching installation of PyTorch, the CUDA Toolkit, and cuDNN can only be described as hell. Let’s search…
(It looks like you’ve managed to install it while I was writing this! Congratulations!)
As for Python itself, Python 3.12 has a lot of problems…
Python 3.11 is the safest. Hugging Face’s default is 3.10. I’m using 3.9, but this is just by chance, and I don’t recommend it. It’s too old.
“or work the computer’s properties settings to manually edit the Path variable.”
This is recommended for Windows environments. It is the most stable.
Also, if you want to play around with Python code and JSON, it’s useful to install and use VSCode (a great improvement over Notepad). It makes programming a lot easier. If you want to use Hugging Face, it’s also useful to install git and git-lfs. Even if a git already exists on your system, the version may be old and not work properly.
I don’t hear much about Python 3.13, for better or for worse, but if something doesn’t work, you might want to check the Python version too.
In most cases, it’s probably fine.
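Checking which version you’re actually running takes one line; here is a minimal standard-library sketch that also shows how version comparisons work, which is handy as a guard at the top of a script.

```python
import sys

# sys.version_info is a tuple like (3, 11, 4, 'final', 0);
# tuple comparison works element by element.
print("Running Python", ".".join(str(n) for n in sys.version_info[:3]))

if sys.version_info < (3, 9):
    print("Quite old - many current libraries have dropped support.")
elif sys.version_info >= (3, 13):
    print("Very new - some libraries may not ship compatible wheels yet.")
```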
Virtual environments can also be used to switch Python versions, so they’re useful if you run into errors like this. I use HF Spaces as a substitute for a virtual environment, so I don’t use one myself, but as far as I can tell there are almost no disadvantages to virtual environments other than being a bit of a hassle.
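For the record, creating one is a single command. The sketch below shows the usual shell workflow in the comments, then does the same thing via the standard-library `venv` module (the folder name `demo-env` is just an example):

```python
# From a shell, the usual workflow is:
#   python -m venv .venv
#   .venv\Scripts\activate        (Windows)
#   source .venv/bin/activate     (Linux/macOS)
#
# The same environment can be created from Python with the stdlib venv module:
import pathlib
import tempfile
import venv

with tempfile.TemporaryDirectory() as tmp:
    env_dir = pathlib.Path(tmp) / "demo-env"
    venv.create(env_dir, with_pip=False)  # with_pip=True also bootstraps pip
    print("Created:", (env_dir / "pyvenv.cfg").exists())
```

Once activated, `pip install` only touches that folder, so different projects can pin different library versions without fighting each other.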
I have read up on virtual environments. I just haven’t had the understanding to implement one yet because, once again, the tutorial ran on the presupposition that its audience would already know where to go to run the commands.
I have found a couple of tutorials that came very close to requiring absolutely no previous experience, and only two explicitly stated that they were written for people with zero coding knowledge: How to AutoTrain using a Space and How to use transformers. But both of those had out-of-date information.
I think it’s fine to think about virtual environments after you’ve run into some kind of trouble. If it works without them, that’s fine too.
It’s common for the know-how to be out of date. If the information is from six months ago, the functions it mentions may have already been removed…
You should use the information on the internet as a reference, but you shouldn’t believe it… The only thing that matters is whether it works or not.
Well, since it’s made to work, it’ll usually work out somehow. As long as the GPU specs are sufficient.
Ollama has a Windows installer, and Open WebUI (used from the browser) has a Windows installer too… they seem like the path of least resistance, and you virtually don’t need to touch a command line or terminal.
Ollama is also compatible with Hugging Face, and although it’s not as fast as llama.cpp, it’s fast enough, so I also recommend it.
Also, if you’re looking for a GUI for image generation, I think WebUI Forge or reForge are easy to use. If you want to do something advanced, you can use ComfyUI, but it’s quite difficult.
Hey, I totally understand where you’re coming from! It’s definitely challenging when you’re starting out and the available resources assume a certain level of experience. I’d love to help you get more clarity on what you want to achieve.
What’s your main goal? Are you trying to learn how to use AI models for a specific project or just exploring for fun? Also, what do you bring to the table in exchange for help? Are you offering resources like time, money, labor, or maybe code? Let me know, and we can find the best way forward for you!
Which model would you like to run? What did you try already, what didn’t work? I think the best way to learn is to get something very simple to run. I’m willing to help but I need something you’re interested in to get you started
Install LM Studio, download a model, and run it as a local LLM server (on your laptop/desktop).
Learn to code in Python and use the OpenAI SDK to connect to your local LLM server.
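As a sketch of what that connection looks like: local servers like LM Studio expose an OpenAI-compatible REST endpoint (LM Studio typically listens at `http://localhost:1234/v1` — check the app’s server tab for yours, as that address is an assumption here). The snippet below builds the chat-completion request with only the standard library, without actually sending it, so you can see the shape of the API the SDK wraps:

```python
import json
from urllib import request

def build_chat_request(base_url, model, user_message):
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "local-model" is a placeholder name; local servers usually ignore it or
# map it to whatever model is loaded.
req = build_chat_request("http://localhost:1234/v1", "local-model", "Hello!")
print(req.full_url)
# With the server actually running, send it via: request.urlopen(req)
```

With the openai package installed, the equivalent is creating the client with `base_url` pointed at the same address; because the endpoint is OpenAI-compatible, switching later to a hosted provider is mostly a matter of changing `base_url` and the API key.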
Once you are familiar, shift to other LLM providers like Gemini, Anthropic, etc., or start using the Hugging Face models.
Hugging Face provides Python code (via the HF SDK) to download HF (Hugging Face) models and run inference/training on your desktop/laptop. Ensure you have an NVIDIA GPU to use them.