Accelerate + Gemma2 + FSDP

I have a pretty standard Accelerate setup with Gemma2-9B. I am trying to finetune it on an 8-GPU A100 node with the config below, using a per-GPU batch size of 1 (global batch size 8). However, I can only get the context length up to 2048. Am I doing something silly that's holding this back? I feel like this should work for at least 8192.

compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: "no"
enable_cpu_affinity: false
fsdp_config:
  fsdp_activation_checkpointing: false
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: true
  fsdp_transformer_layer_cls_to_wrap: Gemma2DecoderLayer
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

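For reference, here is a simplified sketch of the kind of training script I mean (Trainer-based; the dataset, hyperparameters, and paths are placeholders, not my exact code), launched with accelerate launch --config_file fsdp_config.yaml train.py:

import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "google/gemma-2-9b"
MAX_LEN = 2048  # anything above this fails for me

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # eager attention, since Gemma2's logit soft-capping isn't supported by all sdpa/flash kernels
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=MAX_LEN)

train_ds = load_dataset("json", data_files="train.jsonl")["train"].map(
    tokenize, batched=True, remove_columns=["text"]
)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,  # global batch size 8 across the node
    gradient_accumulation_steps=1,
    bf16=True,
    learning_rate=1e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
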
Fwiw, I also tried fsdp_transformer_layer_cls_to_wrap: Gemma2DecoderLayer,Embedding, but wrapping the Embedding layer causes this error:

RuntimeError: size mismatch, got input (2048), mat (2048x3584), vec (114688000)
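
In case it helps with debugging: that vec size lines up exactly with the per-rank shard of the embedding weight, assuming Gemma2-9B's vocab size of 256,000 and hidden size of 3,584 sharded across my 8 GPUs, which suggests the forward is hitting the flattened FSDP shard instead of the gathered weight:

# back-of-the-envelope check on the numbers in the error (assumed Gemma2-9B dims)
vocab_size, hidden_size, num_gpus = 256_000, 3_584, 8
full_embedding = vocab_size * hidden_size    # 917,504,000 parameters
per_rank_shard = full_embedding // num_gpus  # 114,688,000 -> the "vec" in the error
# the "mat" (2048 x 3584) looks like the hidden states (seq_len x hidden_size)
print(per_rank_shard)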