When using an SDXL base and refiner, should LoRAs be sent to both?

For my base model, I do the following:

from diffusers import StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(model_id, vae=vae, **model_args).to('cuda')
base.load_lora_weights('models', weight_name='EasyFix.safetensors')
base.load_lora_weights('models', weight_name='EnvyBetterHiresFixXL01.safetensors')
base.fuse_lora()
base = _apply_pipe_optimizations(base)
# optimizations like "fuse_qkv_projections"

# Load the larger LoRAs and set their weights to 2.0 so their effect actually shows up
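
For what it's worth, the weighting part looks roughly like this. This assumes the PEFT-backed multi-adapter API (load_lora_weights with adapter_name plus set_adapters) available in newer diffusers; the adapter names 'easyfix' and 'hiresfix' are just labels I made up:

# Load each LoRA under its own adapter name (the names are arbitrary labels, not part of the files).
base.load_lora_weights('models', weight_name='EasyFix.safetensors', adapter_name='easyfix')
base.load_lora_weights('models', weight_name='EnvyBetterHiresFixXL01.safetensors', adapter_name='hiresfix')
# Weight both adapters at 2.0 so their effect is clearly visible at inference time.
base.set_adapters(['easyfix', 'hiresfix'], adapter_weights=[2.0, 2.0])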

My refiner is like this:

from diffusers import StableDiffusionXLImg2ImgPipeline

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
	'stabilityai/stable-diffusion-xl-refiner-1.0',
	text_encoder_2=encoder,
	vae=vae,
	**model_args
).to('cuda')
# "load_lora_weights" doesn't seem to work with the refiner, so I go through the UNet instead.
refiner.unet.load_attn_procs('models', weight_name='EasyFix.safetensors')
refiner.unet.load_attn_procs('models', weight_name='EnvyBetterHiresFixXL01.safetensors')
refiner = _apply_pipe_optimizations(refiner)
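
For completeness, the two pipelines are chained roughly like this (the 0.8 split, the step count, and the prompt variable are placeholder values for illustration):

# Base denoises the first part of the schedule and hands its latents to the refiner.
latents = base(
	prompt=prompt,
	num_inference_steps=40,
	denoising_end=0.8,
	output_type='latent',
).images
image = refiner(
	prompt=prompt,
	image=latents,
	num_inference_steps=40,
	denoising_start=0.8,
).images[0]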

My main question: with large LoRAs loaded on the base, should those same weights also be applied to the refiner?

A related question: if runtime settings like use_lu_lambdas, the sampler, strength, etc. are changed on the base pipeline, should the same changes be made on the refiner?
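
To make that concrete, here is the kind of change I mean (assuming use_lu_lambdas refers to the flag on DPMSolverMultistepScheduler):

from diffusers import DPMSolverMultistepScheduler

# Swap the sampler settings on the base...
base.scheduler = DPMSolverMultistepScheduler.from_config(
	base.scheduler.config, use_lu_lambdas=True
)
# ...and (this is the question) mirror the same change on the refiner?
refiner.scheduler = DPMSolverMultistepScheduler.from_config(
	refiner.scheduler.config, use_lu_lambdas=True
)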