Dose Prediction model with 3D input images

Need Help with Predicting Radiation Dose in a 3D Image Dataset

Hey everyone! I’m working on a project where I want to predict where radiation hits a target (like a human body) and how much energy it deposits there.
What I Have:

  1. 3D Target Matrix (64x64x64 grid)
  • A 3D voxel matrix where each voxel holds one of three density values indicating the material, e.g. air, tissue, or bone.
  2. Beam Shape Matrix (same size)
  • Shows where the radiation beam is active (1 = beam on, 0 = off).
  3. Optional Info:
  • I might also include the beam’s angle (0 to 360 degrees) later on. (One way to combine these inputs is sketched right after this list.)
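A common way to feed this kind of data into a 3D network is to stack the target matrix, the beam mask, and (if you add it) an angle encoding as channels of one input tensor. Here is a minimal NumPy sketch; the array names and values are hypothetical placeholders for your real data:

```python
import numpy as np

# Hypothetical arrays -- replace with your real data.
density = np.random.rand(64, 64, 64).astype(np.float32)            # target matrix (air/tissue/bone densities)
beam_mask = (np.random.rand(64, 64, 64) > 0.9).astype(np.float32)  # 1 where the beam is on
beam_angle_deg = 137.0                                              # optional scalar beam angle

# Encode the angle as sin/cos so 0 and 360 degrees map to the same point,
# then broadcast each value to a full 64x64x64 channel.
angle_rad = np.deg2rad(beam_angle_deg)
angle_sin = np.full_like(density, np.sin(angle_rad))
angle_cos = np.full_like(density, np.cos(angle_rad))

# Stack everything into a (channels, D, H, W) tensor a 3D CNN or transformer can consume.
x = np.stack([density, beam_mask, angle_sin, angle_cos], axis=0)
print(x.shape)  # (4, 64, 64, 64)
```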

Goal:

I want to predict how much radiation (dose) is deposited in each voxel, i.e. a value showing how much energy ends up at each coordinate of the grid.

Example output:
[x=12, y=24, dose=0.85]
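If the model predicts a dense dose volume (one value per voxel), you can convert it into records like the one above as a post-processing step. A small sketch, assuming a hypothetical `pred_dose` array of shape (64, 64, 64); in a full 3D grid each record would also carry a z coordinate:

```python
import numpy as np

# Hypothetical predicted dose volume, e.g. the raw output of a 3D U-Net.
pred_dose = np.random.rand(64, 64, 64).astype(np.float32)

# One record per voxel above a small threshold.
xs, ys, zs = np.nonzero(pred_dose > 0.05)
records = [
    {"x": int(x), "y": int(y), "z": int(z), "dose": float(pred_dose[x, y, z])}
    for x, y, z in zip(xs, ys, zs)
]
print(records[0])  # first voxel above the threshold
```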

I tested a 3D U-Net architecture and it performed quite well, but now my task is to develop a more advanced model, so I’m looking for suggestions on which architectures could be suitable.
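Since you already know roughly how transformers work, one natural step beyond a 3D U-Net is a ViT-style model that splits the volume into 3D patches, runs them through a transformer encoder, and projects each token back to a patch of dose values (this is the idea behind architectures like UNETR / Swin UNETR in MONAI). The sketch below is a minimal plain-PyTorch version of that idea, not a tuned implementation; all sizes (4 input channels, patch size 8, 6 layers) are assumptions to adjust:

```python
import torch
import torch.nn as nn

class DoseTransformer3D(nn.Module):
    """Sketch of a ViT-style regressor: 3D patches -> transformer -> per-voxel dose."""

    def __init__(self, in_ch=4, patch=8, dim=256, depth=6, heads=8, vol=64):
        super().__init__()
        self.patch = patch
        self.n = vol // patch                        # 64 / 8 = 8 patches per dimension
        n_tokens = self.n ** 3                       # 512 tokens

        # Non-overlapping 3D patch embedding (ViT-style, via Conv3d).
        self.embed = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))

        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        # Project each token back to a full patch of dose values (patch**3 voxels).
        self.head = nn.Linear(dim, patch ** 3)

    def forward(self, x):                            # x: (B, C, 64, 64, 64)
        b = x.shape[0]
        tok = self.embed(x)                          # (B, dim, 8, 8, 8)
        tok = tok.flatten(2).transpose(1, 2)         # (B, 512, dim)
        tok = self.encoder(tok + self.pos)           # (B, 512, dim)
        dose = self.head(tok)                        # (B, 512, patch**3)

        # Fold per-patch predictions back into a (B, 1, 64, 64, 64) dose volume.
        p, n = self.patch, self.n
        dose = dose.view(b, n, n, n, p, p, p)
        dose = dose.permute(0, 1, 4, 2, 5, 3, 6).reshape(b, 1, n * p, n * p, n * p)
        return dose

model = DoseTransformer3D()
x = torch.randn(2, 4, 64, 64, 64)                    # density, beam mask, sin/cos angle channels
print(model(x).shape)                                # torch.Size([2, 1, 64, 64, 64])
```

For dense prediction like dose maps, a hybrid with U-Net-style skip connections (as in UNETR) usually works better than a pure ViT, so that may be the more promising direction to try after this baseline.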

About me: I have never used the Hugging Face library before, but I am somewhat familiar with how transformers work and how you feed data to them.
