Fine-tuning a translator with a pretrained source-language encoder


I need to build a translator for a language pair where pretrained models exist only for the source language. For example, EN_KLINGON, when only EN_DE, EN_FR and EN_RU models are available. Could I take advantage of one of those trained (EN) encoders while training the decoder from scratch? Also, since such an encoder could be biased toward its original target language, could I instead use a base language model for the source language (BERT, for example) as the encoder in the Seq2Seq training pipeline?
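For reference, here is a minimal sketch of the second idea (a BERT-style encoder wired to a freshly initialized decoder) using Hugging Face's `EncoderDecoderModel`. The tiny configs, vocabulary sizes, and random inputs are placeholders so the sketch runs without downloading weights; in practice you would load the pretrained encoder with `BertModel.from_pretrained(...)` and tokenize real parallel data.

```python
import torch
from transformers import BertConfig, BertModel, BertLMHeadModel, EncoderDecoderModel

# Tiny configs so the sketch is self-contained; in practice the encoder
# would come from a pretrained checkpoint, e.g.
#   encoder = BertModel.from_pretrained("bert-base-cased")
enc_cfg = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                     num_attention_heads=2, intermediate_size=64)
dec_cfg = BertConfig(vocab_size=120, hidden_size=32, num_hidden_layers=2,
                     num_attention_heads=2, intermediate_size=64,
                     is_decoder=True, add_cross_attention=True)

encoder = BertModel(enc_cfg)        # stands in for the pretrained EN encoder
decoder = BertLMHeadModel(dec_cfg)  # randomly initialized, trained for Klingon
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# Optionally freeze the pretrained encoder and train only the decoder
# (and the cross-attention layers inside it).
for p in model.encoder.parameters():
    p.requires_grad = False

src = torch.randint(0, 100, (1, 8))   # fake EN token ids
tgt = torch.randint(0, 120, (1, 6))   # fake Klingon token ids
out = model(input_ids=src, decoder_input_ids=tgt, labels=tgt)
print(out.loss.item(), out.logits.shape)
```

Whether to freeze the encoder is a judgment call: freezing preserves the pretrained representations but may limit adaptation, so a common compromise is to train the decoder first and then unfreeze the encoder at a lower learning rate.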