Longtime programmer (since 1978) in multiple languages, but new (5 months) to Python and AI, and a new HF user. I went through the NLP course in 2 days to get an appreciation for how things work, and I think I could cobble together a test program for my project to play with.

My problem comes when trying to pick the right model(s) to test. When I saw that the Hub boasted over 200K entries, my first thought was that this was data, not information. I made choices in the left-side menu on the various tabs and winnowed that down to 109. But now it looks like I'll have to manually examine all 109 to try to figure out which one(s) I should be using, and that's too many results to go through when each one requires detailed analysis.

Since I'm new to this, I don't know what additional search parameters are needed (I don't know what I don't know), but it does seem like more granularity is needed to weed out the results you don't want. Thanks.
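For anyone else hitting this wall: besides the website filters, you can narrow the Hub programmatically with the `huggingface_hub` client library and script whatever extra winnowing you need. Below is a minimal sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`) and you have network access; the specific task, library, and search terms are just illustrative placeholders, not a recommendation.

```python
# Sketch: narrowing the Hub's model list in code instead of clicking
# through the left-side menu. Requires `huggingface_hub` and network access.
from huggingface_hub import HfApi

api = HfApi()

# Combine a task filter, a framework filter, and a free-text query,
# then sort by download count so the most widely used candidates
# surface first. All filter values here are illustrative.
models = api.list_models(
    task="text-classification",  # pipeline tag, like the left-side menu
    library="pytorch",           # framework filter
    search="sentiment",          # free-text match on the model id
    sort="downloads",
    direction=-1,                # descending
    limit=10,                    # only look at the top handful
)

for m in models:
    print(m.id, m.downloads)
```

Sorting by downloads (or likes) is a quick sanity filter: a model with millions of downloads is at least worth examining before the long tail.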
Okay, did I just not notice the "Full Text" search button, or was it added after my post? This is awesome and solves my problem. So this is how it always works, right? I ask for something and 5 minutes later you guys provide it?