Is prompting properly implemented in the Whisper model?

Hi, I use the pipelines library from transformers. The code I use to load the OpenAI Whisper model is below:

    import torch
    from transformers import pipeline

    pipe = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        torch_dtype=torch.float16,
        device="cuda:1",
        model_kwargs={"use_flash_attention_2": True,
                      "temperature": 0.6},
    )

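For completeness, this is roughly how I then call it (the audio path is a placeholder):

    # Transcribe a single file with the pipeline configured above.
    result = pipe("audio.wav")  # placeholder path
    print(result["text"])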
I have some questions. I read the OpenAI Whisper paper, which says:

For this kind of one-to-many mapping to work with a single model, some form of task specification is necessary. We use a simple format to specify all tasks and conditioning information as a sequence of input tokens to the decoder. Since our decoder is an audio-conditional language model, we also train it to condition on the history of text of the transcript in the hope that it will learn to use longer-range text context to resolve ambiguous audio. Specifically, with some probability we add the transcript text preceding the current audio segment to the decoder’s context.

I would like to know whether there is a code implementation in the library (the pipeline, or rather the Whisper model itself) for adding the transcript text preceding the current audio segment to the decoder’s context.

I have not seen this implemented in HF. If it is implemented in the transformers library, please let me know where to find it.
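For reference, the closest thing I have found is the get_prompt_ids helper on the Whisper tokenizer/processor together with the prompt_ids argument that generate accepts, which the pipeline should forward through generate_kwargs. Below is a minimal sketch of how I would expect to use it with the pipe from above; the file path and prompt text are placeholders, and I am not sure this actually matches the paper's previous-segment conditioning:

    # Encode the previous segment's transcript as prompt tokens for the decoder.
    prompt_ids = pipe.tokenizer.get_prompt_ids(
        "transcript of the preceding audio segment",  # placeholder text
        return_tensors="pt",
    )
    # Depending on the transformers version, the tensor may need to be moved
    # to the model's device, e.g. prompt_ids = prompt_ids.to(pipe.device).

    result = pipe(
        "current_segment.wav",  # placeholder path
        generate_kwargs={"prompt_ids": prompt_ids},
    )
    print(result["text"])

Is this the mechanism the paper describes, or is there something in the pipeline that feeds the previous segment's text back in automatically?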