I was fine-tuning Gemini 1.0 Pro and noticed something odd. To be specific, I was generating some reasoning data with a fine-tuned version, so the model had to generate a question and an answer in its response. But whenever it seemed to decide that it had generated a wrong question, it would clear it, dismiss it, or do something (I don't know what), and then reply that the question was wrong and what was wrong in it. In those "wrong" cases it never produced the data, it only told what was wrong. What I want to know is: how can a model modify what it has internally generated and then generate something else?
The contents of Gemini are a black box to everyone but Google developers, but this model on HF might help with thinking it through?
I'm a complete outsider here, so there's a good chance I'm missing the point.
If possible, I think more people would see this if you used a Post rather than the Forum.
Technical considerations are often posted in Posts and Articles.
I have also worked with the CoT/Reflection approach, where the model generates tokens and then corrects itself in further generation if something went wrong. But in my case the model does it without generating any tokens, maybe internally while processing data from layer to layer. And I can't post on HF because I don't have Pro, and as an 11th-grade student I can't afford it.
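Just to make what I mean by CoT/Reflection concrete, this is roughly the generate-then-critique loop I have worked with; `call_model` is only a placeholder for whatever API the model is queried through, nothing Gemini-specific:

```python
# call_model is a placeholder; plug in your own model call.
def call_model(prompt: str) -> str:
    raise NotImplementedError("query your model here")

def generate_with_reflection(task: str, max_rounds: int = 2) -> str:
    # First draft is produced in visible tokens.
    draft = call_model(f"{task}\nRespond as 'Q: ...' followed by 'A: ...'.")
    for _ in range(max_rounds):
        # The model critiques its own output -- again in visible tokens.
        critique = call_model(
            "Check this Q/A pair for mistakes. Reply 'OK' if it is fine, "
            f"otherwise list the problems:\n{draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break
        # Correction is just another generation conditioned on the critique;
        # nothing is erased, unlike the behaviour I saw.
        draft = call_model(
            f"Rewrite the Q/A pair fixing these problems:\n{critique}\n\n{draft}"
        )
    return draft
```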
I think it could be a hallucination of the model, but I have never seen or heard of this kind of hallucination.
Then the Forum or the Discussions section is the best place to go on HF… if the model is on HF, the Discussions section of the model repo is the best place to get an accurate answer… but there is no way the Gemini model is on HF. I know there are several Spaces where individuals are using Gemini.
I've never heard of it either, however much I may not know.
The fact that a wrong answer is visible, even for a moment, means that it has been output at least once. I've never seen one get corrected on its own either; that would be a strange way for the program to be structured. Even with streaming output, once something has been output, it doesn't disappear on its own.
If the output were modified inside Gemini before being sent, no wrong answer should be visible before or during the modification process. If a wrong answer does appear, it should stay there; if it disappears, it must have been erased on our side at some point.
Could it be a Gemini-specific issue, such as one of its functions kicking in on its own after the output?
As you said, "If the output were modified inside Gemini before being sent, no wrong answer should be visible before or during the modification process." I think I missed some info in my explanation. This is the structure the model was supposed to generate the data in: "Q: {Question here} A: {Answer here}". So it doesn't give any wrong answer. Maybe what happens is this: it generates the "Q:" part, then goes to solve it for the "A:" part, and when it realises there are mistakes in the "Q:" part, it doesn't solve it and doesn't respond in the structure it was supposed to, because the question is wrong. Instead it responds by telling what is wrong with the question. But that response contains only the text about why the question can't be solved; there is no "Q:" part in it at all. That means it is either removing/erasing that text or something similar. When the "Q:" part is correct, it responds as expected.
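To make the two cases concrete, here is a rough check I could run on the responses; the regex is just my assumption about the "Q: … A: …" structure, nothing from Google:

```python
import re

# Expected structure of the generated data: "Q: {question} A: {answer}".
QA_PATTERN = re.compile(r"Q:\s*(?P<question>.+?)\s*A:\s*(?P<answer>.+)", re.DOTALL)

def classify_response(text: str) -> str:
    if QA_PATTERN.search(text):
        return "expected"   # normal case: both Q: and A: are present
    if "Q:" not in text:
        return "anomalous"  # the case I described: critique only, no Q: part at all
    return "partial"        # Q: present but no A:

print(classify_response("Q: What is 2+2? A: 4"))                      # expected
print(classify_response("The question cannot be solved because ..."))  # anomalous
```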
It's possible that, analogous to Stable Diffusion, the structure is like the sites where running NSFW checks on every generated image would overload the server, so the request is simply rejected whenever a sexual prompt is passed (an implementation that actually exists on some sites).
Anyway, I get the impression that they are doing some kind of branching process that is probably not directly related to the logic of the generative AI itself, though since it's Google, the reason is probably not a financial one.
If they were doing it within the logic of the generative AI, would that mean learning in real time, reconsidering, and giving a new answer or rejecting it? I don't think so, honestly. The server load would be too high.
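As a rough sketch of the kind of branching I mean, outside the model logic (borrowing the Stable Diffusion analogy; the keyword check is only a stand-in for whatever classifier a real service would use):

```python
# Cheap pre-generation filter vs. an expensive post-generation check.
BLOCKED_TERMS = {"example_blocked_term"}  # placeholder, not a real list

def prompt_is_blocked(prompt: str) -> bool:
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def handle_request(prompt: str, generate) -> str:
    # Pre-generation branch: rejects before any compute is spent,
    # which keeps the server load low.
    if prompt_is_blocked(prompt):
        return "Request rejected."
    # A post-generation check would go here instead; it costs a full
    # generation even when the result is ultimately discarded.
    return generate(prompt)
```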
I also thought of the thing you are talking about, some kind of "response guards", but they aren't supposed to modify content; they just classify whether it is safe or not.
I think I’m starting to get some idea of what’s going on in the programming context, but I have no idea what Google is trying to get out of this and what criteria they are using to sift through the responses.
Well, I can guess. The criteria are probably the correction mechanism mentioned above or something similar; I don't think they would build several similar ones.
The question is the motive, but perhaps, without the user's permission (or is it noted somewhere in a corner?), they're beta-testing an auto-correction feature on the actual service… they could do it…
In that case, as far as I know, the mechanism would be similar to LangChain or something like that… a series of LLMs. The LLM used as the error-detection mechanism would surely be lighter and simpler than Gemini itself.
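Purely as a guess, something like this, where generate_main and check_with_small_model are placeholders rather than real Gemini or LangChain APIs:

```python
# Sketch of the "series of LLMs" idea: a small, cheap checker gates the big
# model's output before anything reaches the user. Both functions below are
# placeholders, not real APIs.
def generate_main(prompt: str) -> str:
    raise NotImplementedError("call the main model (e.g. Gemini) here")

def check_with_small_model(text: str) -> bool:
    raise NotImplementedError("call a lightweight verifier model here")

def serve(prompt: str) -> str:
    draft = generate_main(prompt)
    if check_with_small_model(draft):
        return draft
    # The draft is discarded server-side, so the user only ever sees the
    # explanation -- which would match the behaviour described above.
    return generate_main(f"Explain what is wrong with this output:\n{draft}")
```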