But it’s been over 10 minutes and the following code is still running: `list(full_pubmed.take(20))`.
What is the point of `skip` if it just iterates over the examples one by one (which I assume is why it takes so long)? Or am I missing something? Is there a better way to skip entries?
Hi! The original PubMed data is spread across roughly 1k XML files, and unfortunately we can’t know in advance where the example at position 12M is located, so the iterator has to go through all the preceding examples before reaching it.
At some point we’ll support fast skipping when the dataset is made of supported data files like Parquet, for which the number of rows is known in advance.
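To illustrate why skipping is linear here: with no index into the XML files, skipping on a streaming dataset behaves essentially like the standard `itertools` “consume” recipe, as in this rough sketch (not the actual `datasets` implementation):

```python
from itertools import islice

def skip(iterable, n):
    """Skip the first n examples by consuming them one at a time.
    Without random access into the underlying files, the only way
    to reach example n is to iterate past examples 0..n-1."""
    it = iter(iterable)
    next(islice(it, n, n), None)  # advance the iterator n steps
    return it

# Hypothetical stream standing in for a streamed dataset
examples = (f"example-{i}" for i in range(30))
remaining = skip(examples, 25)
print(list(remaining))  # ['example-25', ..., 'example-29']
```

So `skip(12_000_000)` still costs 12M iterations; a format like Parquet with known per-file row counts is what makes true O(1) skipping possible.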