Hi,
I am currently implementing a PyTorch version of the Tapas algorithm proposed by Google AI. Since the algorithm is based on BERT, I am using parts of modeling_bert.py to implement it. However, the original implementation of the Tapas algorithm (which is written in TensorFlow 1) contains operations on segmented tensors, such as:
- tf.math.unsorted_segment_mean
- tf.math.unsorted_segment_sum
- tf.math.unsorted_segment_max
- tf.math.unsorted_segment_min
These operations do not exist in PyTorch. However, there's a small extension library called pytorch_scatter which implements them, so I would like to use this library in my implementation.
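For reference, here is a minimal sketch of how these TensorFlow segment operations could map onto pytorch_scatter (assuming the torch-scatter package is installed; the toy data and the mapping of num_segments to dim_size are my own illustration, not code from the Tapas implementation):

```python
import torch
from torch_scatter import scatter_add, scatter_mean, scatter_max, scatter_min

# Toy data: four values assigned to two of three segments.
data = torch.tensor([1.0, 2.0, 3.0, 4.0])
segment_ids = torch.tensor([0, 0, 1, 1])
num_segments = 3  # plays the role of TF's num_segments argument

# dim_size corresponds to num_segments; segments with no entries keep the fill value.
sums = scatter_add(data, segment_ids, dim=0, dim_size=num_segments)    # tensor([3., 7., 0.])
means = scatter_mean(data, segment_ids, dim=0, dim_size=num_segments)  # tensor([1.5, 3.5, 0.])
maxs, _ = scatter_max(data, segment_ids, dim=0, dim_size=num_segments)  # also returns argmax indices
mins, _ = scatter_min(data, segment_ids, dim=0, dim_size=num_segments)
```

One thing I would still need to verify is how empty segments are filled: TensorFlow's unsorted_segment_max/min use the dtype's extreme values there, which may differ from pytorch_scatter's defaults.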
Is there any chance my implementation will be added to the Transformers library if it relies on libraries other than PyTorch? I read in the templates README of the Transformers repository that “the package is also designed to be as self-consistent and with a small and reliable set of packages dependencies. In consequence, additional dependencies are usually not allowed when adding a model but can be allowed for the inclusion of a new tokenizer (recent examples of dependencies added for tokenizer specificities include sentencepiece and sacremoses).”
Thanks!