Use this topic to ask your questions to Matthew Watson and Chen Qian during their talk: NLP workflows with Keras.
Since a lot of people use both Keras and PyTorch, especially with HuggingFace: have you considered making PyTorch code transferable to Keras (to some extent at least), and vice versa, to make things easier for everyone?
Hi, TF maintainer here! We’ve been working on adding helper functions for exactly this. For example, the recent notebooks use a to_tf_dataset() method that converts HuggingFace datasets to TF Dataset objects, which you can pass straight to Keras methods like model.fit(). This avoids the need for separate TF data-handling code in a lot of cases.
Wow, thank you for your efforts and for replying.
Can you please explain how to develop a pipeline for a language?
Hey @sugandhi, thanks for your question! Could you clarify a bit what you mean by “pipeline”? What type of NLP task do you have in mind?
Can you expand on the idea of training loss and validation loss? If the validation loss starts increasing, does that mean the model is already overfitting?
Hi, thanks for this great session. My question is: can layer-wise learning rates help when fine-tuning BERT models?
This is answered at 2:45:20 in the main stream.
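(Not a transcript of the stream answer, but as a general sketch: in Keras, a rising validation loss is commonly handled with the EarlyStopping callback, which halts training when val_loss stops improving. The random data and tiny model below are made up for illustration.)

```python
# Hedged sketch: watching validation loss for overfitting with Keras's
# EarlyStopping callback. The random data and tiny model are illustrative.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x = rng.random((200, 4)).astype("float32")
y = rng.integers(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Stop when val_loss has not improved for 3 epochs, and roll back to the
# best epoch's weights -- a sustained rise in val_loss is the usual
# overfitting signal
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True
)
history = model.fit(
    x, y, validation_split=0.25, epochs=50,
    callbacks=[early_stop], verbose=0,
)
```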
This is answered at 2:46:10 on the main stream.
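(Again, not the stream answer itself, but a rough sketch of one way to get layer-wise learning rates in plain Keras/TF: compute gradients once, then apply separate optimizers with different learning rates to different variable groups. The two-layer stand-in for a BERT backbone plus classification head is made up for illustration.)

```python
# Hedged sketch of layer-wise learning rates: a low LR for "pretrained"
# layers and a higher LR for the new head, via two optimizers. The tiny
# model and random data stand in for a real BERT fine-tuning setup.
import numpy as np
import tensorflow as tf

backbone = tf.keras.layers.Dense(8, activation="relu", name="backbone")
head = tf.keras.layers.Dense(2, name="head")

inputs = tf.keras.Input(shape=(4,))
model = tf.keras.Model(inputs, head(backbone(inputs)))

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
slow_opt = tf.keras.optimizers.Adam(1e-5)  # gentle updates for the backbone
fast_opt = tf.keras.optimizers.Adam(1e-3)  # larger updates for the fresh head

rng = np.random.default_rng(0)
x = rng.random((32, 4)).astype("float32")
y = rng.integers(0, 2, 32)

slow_vars = backbone.trainable_weights
fast_vars = head.trainable_weights

# One training step: compute all gradients together, then apply each
# optimizer to its own variable group
with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x, training=True))
grads = tape.gradient(loss, slow_vars + fast_vars)
slow_opt.apply_gradients(zip(grads[:len(slow_vars)], slow_vars))
fast_opt.apply_gradients(zip(grads[len(slow_vars):], fast_vars))
```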
I was exploring NeuralCoref for English. I want the same model for Hindi, but spaCy doesn’t have a pipeline for Hindi. Can you please give some guidance on how to develop NeuralCoref for Hindi? https://spacy.io/universe/project/neuralcoref
Hey @sugandhi, this is an interesting question! I think it would be best to ask it as a general forum topic so that others in the community can share their expertise.