Skip-gram tokens

Is there any way to produce skip-gram tokens (i.e., k-skip-n-grams) with the tokenizers library?
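
To clarify the kind of output I'm after, here is what I currently do by post-processing the token list myself. This is just my own sketch, not anything from tokenizers: the `skipgrams` helper is hand-rolled, and the `bert-base-uncased` checkpoint is only an example (loading it needs network access).

```python
from itertools import combinations

from tokenizers import Tokenizer


def skipgrams(tokens, n=2, k=1):
    """Return k-skip-n-grams: n-token tuples that may skip up to k tokens in total."""
    grams = []
    for i in range(len(tokens)):
        # Tokens allowed to follow tokens[i] within the skip distance.
        window = tokens[i + 1 : i + n + k]
        for rest in combinations(window, n - 1):
            grams.append((tokens[i],) + rest)
    return grams


# Any tokenizer works here; bert-base-uncased is just an example.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer.encode("the quick brown fox jumps", add_special_tokens=False)

print(skipgrams(encoding.tokens, n=2, k=1))
# e.g. [('the', 'quick'), ('the', 'brown'), ('quick', 'brown'), ('quick', 'fox'), ...]
```

If tokenizers can produce these directly as part of its pipeline, I'd prefer that over post-processing like this.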