Use this topic to ask your questions to Margaret Mitchell during her talk: On Values in ML Development.
How can we balance filtering out bias against freedom of speech when it comes to offensive or biased content? And to what extent can we do it?
- How do we protect against machine learning systems designed primarily without ethics in mind?
- How do we push for the adoption of more ethical datasets (not ImageNet, CC4, or LAION, for example) at major conferences?
- What about retraining models that are already trained on unethical data?
P.S. If making jokes reflects poorly, I will take that all day, every day. More power to you, Margaret! Keep doing all the good work.
If a model could be fine-tuned for human values, couldn’t it be fine-tuned for anti-values in the same way?
What’s the easiest way to determine whether a text generation model is racist or sexist? And how do we fix this quickly without losing too much training data?
Model transparency seems to be a precondition of the mentioned ethical goals. Dr. Mitchell seems to emphasize the role of input data. Isn’t the model’s way of decision-making equally important (feature importance etc.) for ethical AI?
This is answered at 1:44:40 in the main stream.
When Dr. Mitchell was talking about loss functions and what they emphasize or optimize for, I believe that was what was being addressed.
When training pretrained models, is it possible to add a step that applies debiasing algorithms (there are emerging algorithms today)? Could this make some intrinsic bias disappear?
What does the regular day-to-day work look like for you?
Would making it mandatory that every dataset participant is highly paid put a lot of pressure on researchers from low-resource institutions and low-resource countries?
(About false positives vs. false negatives): I think the choice should match the goal of our model or application. For example, in coronavirus screening or other medical-diagnosis tasks, the goal is to detect the maximum number of positive cases (people who have the illness), so we should focus on reducing false negatives (false positives matter less: even if the model predicts a healthy subject as ill, it is not very dangerous). But if it is about detecting credit card fraud, for example, we should focus on false positives, because we must be careful not to classify a non-fraudulent (negative) case as fraudulent (positive).
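The trade-off described above can be sketched by moving the decision threshold of a classifier. This is a minimal illustration with made-up toy scores (the labels, probabilities, and threshold values are all hypothetical), not any specific model from the talk:

```python
def confusion_counts(y_true, y_prob, threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = fn = 0
    for label, prob in zip(y_true, y_prob):
        pred = 1 if prob >= threshold else 0
        if pred == 1 and label == 0:
            fp += 1  # predicted positive, actually negative
        elif pred == 0 and label == 1:
            fn += 1  # predicted negative, actually positive
    return fp, fn

# Toy data: 1 = positive (e.g. ill / fraudulent), 0 = negative.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_prob = [0.9, 0.6, 0.4, 0.3, 0.55, 0.1, 0.35, 0.7]

# Medical screening: lower the threshold to catch more positives (fewer FNs).
print(confusion_counts(y_true, y_prob, threshold=0.3))

# Fraud flagging: raise the threshold to avoid flagging innocents (fewer FPs).
print(confusion_counts(y_true, y_prob, threshold=0.7))
```

With the low threshold the model misses no positives but flags more negatives; with the high threshold the reverse holds. Which direction to push depends on which error is costlier for the application, exactly as the comment argues.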
This is answered at 2:04:20 in the main stream.
This is answered at 2:02:25 of the main stream.
This is answered at 1:56:35 of the main stream.
This is answered at 1:57:55 of the main stream.
This is answered at 1:59:00 of the main stream.
This is answered at 2:00:10 of the main stream.
This is answered at 2:08:32 of the main stream.