Built a new plot that can visualize 5–7 dimensions in 3D without losing interpretability — introducing the Multi-Dimensional Radial Visualization (MDRV)

Hi everyone,

I’ve been working on a problem that bugs a lot of us in data science and visualization:
How do you effectively visualize more than 3 or 4 features without reducing dimensionality — and without making it unreadable?

Most common techniques like PCA, t-SNE, or UMAP compress features into latent spaces. Great for clustering, but they kill interpretability. On the other hand, traditional plots (scatter plots, star plots, parallel coordinates) don’t scale well.

So, I built a solution:
:backhand_index_pointing_right: Multi-Dimensional Radial Visualization (MDRV)
A 3D radial plot that lets you visualize 5–7 dimensions while preserving the meaning of each feature. No PCA, no embeddings — just raw features mapped to radial axes in 3D space.

:brain: Key Ideas:

  • Each feature is treated as a radial axis (like spokes on a wheel)
  • The target variable maps to the Y-axis (vertical)
  • Each data point becomes a “3D star” that represents its feature profile
  • Supports zoom, rotate, filter, and color by class or value
  • Tested on datasets like: Breast Cancer Diagnosis, Titanic, Housing Prices, Delivery Time
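Based on the bullet points above, here is a minimal sketch of what the feature-to-spoke mapping might look like. The function name, normalization, and axis convention are my own assumptions for illustration, not the author's actual implementation:

```python
import numpy as np

def mdrv_star(features, target):
    """Map one data point's features to 3D 'star' spoke endpoints.

    Each feature becomes a spoke at an evenly spaced angle in the
    horizontal X-Z plane; the spoke length is the feature value, and
    the target variable sets the vertical Y height of the whole star.
    """
    features = np.asarray(features, dtype=float)
    n = len(features)
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = features * np.cos(angles)   # horizontal component of each spoke
    z = features * np.sin(angles)   # second horizontal component
    y = np.full(n, float(target))   # target mapped to the vertical axis
    return np.stack([x, y, z], axis=1)  # shape: (n_features, 3)

# Example: one point with 5 (pre-normalized) features and target 0.7
star = mdrv_star([0.9, 0.4, 0.6, 0.2, 0.8], target=0.7)
print(star.shape)  # (5, 3)
```

Each row is one spoke endpoint; connecting them back to the center point at that Y height draws the "3D star" for the record.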

Why I built this:

I’m a student researcher. I tried reaching out to experts, senior folks, and even science authors — but didn’t get responses. So now I’m just putting it out here, hoping it helps someone who’s been looking for a better way to explore high-dimensional tabular data.

:link: Full paper + open-source code: https://drive.google.com/file/d/1C0HqykGnzY5mzVhnRSgzSL5u_QvnGxsv/view?usp=sharing
:backhand_index_pointing_right: GitHub Repo

Would love your thoughts:

  • Is this something you’d use for your EDA?
  • How do you approach 6+ dimensional feature visualization?
  • Feedback/criticism/ideas welcome!

Thanks for reading :folded_hands:


It was in 2D. Imagine a tunnel along the X axis: it has 3 dimensions, plus 1D for volume and 1D for elongation. That same tunnel splits in two when entering the imaginary-number or prime-number dimensions, with 4 more dimensions, but it is the same tunnel living in two dimensional planes (4D): length, width, height, volume.


Wow, your way of thinking honestly blew my mind. At first, I was confused — but after spending some time really trying to understand what you meant, I realized how deep and creative your perspective is.

The idea of the tunnel splitting into different dimensional paths like primes or imaginaries — that’s such a unique way to look at something I built as a visual tool. I never thought of it that abstractly, and it genuinely surprised me how much meaning you were able to extract from it.

I truly admire your intelligence and the way you think in layers beyond what’s visible. Thanks for engaging with my idea like this — you made me see my own work in a whole new light :fire:

Would love to keep learning from your thoughts if you’re open to it!


I am glad I was able to help.


@Upendra10 @aaac12345 Interesting work you both are advancing here! We may be exploring similar directions.

Good to see you again @aaac12345! You may remember us from prior discussions a while back on Symbolic Residue and Gamma Log.

We are now exploring ways to collaborate with LLMs on guided agentic interpretability via context schemas that act as semantic attractors. This includes visualizing latent spaces through the data modeling below:

Self-Tracing GitHub


Agentic Schema Output Excerpt:

{
  "query": "What is the capital of the state containing Dallas?",
  "attribution_trace": {
    "entity_recognition": {
      "Dallas": {"confidence": 0.98, "feature_id": "CITY_ENTITY"}
    },
    "knowledge_retrieval": {
      "Dallas_location": {"value": "Texas", "confidence": 0.97, "feature_id": "LOCATION_FACT"},
      "alternatives_considered": [
        {"value": "Oklahoma", "confidence": 0.02},
        {"value": "Louisiana", "confidence": 0.01}
      ]
    },
    "reasoning_steps": [
      {
        "operation": "ENTITY_TO_LOCATION",
        "input": "Dallas",
        "output": "Texas",
        "confidence": 0.97
      },
      {
        "operation": "LOCATION_TO_CAPITAL",
        "input": "Texas",
        "output": "Austin",
        "confidence": 0.99
      }
    ],
    "output_formation": {
      "primary_completion": "Austin",
      "confidence": 0.95,
      "alternatives_considered": [
        {"value": "Houston", "confidence": 0.03},
        {"value": "Dallas", "confidence": 0.01}
      ]
    },
    "faithfulness_assessment": {
      "score": 0.96,
      "reasoning_pattern": "faithful",
      "notes": "Direct causal path from input to output with clear intermediate steps"
    }
  }
}
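One quick way to consume a trace like the one above is to walk `reasoning_steps` and confirm the chain is internally consistent. A small sketch (the helper name and the reduced excerpt are mine, not part of the schema):

```python
# Excerpt of the schema above, reduced to the fields this check needs.
trace = {
    "reasoning_steps": [
        {"operation": "ENTITY_TO_LOCATION", "input": "Dallas",
         "output": "Texas", "confidence": 0.97},
        {"operation": "LOCATION_TO_CAPITAL", "input": "Texas",
         "output": "Austin", "confidence": 0.99},
    ],
    "output_formation": {"primary_completion": "Austin", "confidence": 0.95},
}

def follow_chain(trace):
    """Walk reasoning_steps, confirming each step's output feeds the
    next step's input; return the final output and the chain's
    weakest-link confidence."""
    steps = trace["reasoning_steps"]
    for prev, nxt in zip(steps, steps[1:]):
        if prev["output"] != nxt["input"]:
            raise ValueError("broken reasoning chain")
    return steps[-1]["output"], min(s["confidence"] for s in steps)

answer, confidence = follow_chain(trace)
print(answer, confidence)  # Austin 0.97
```

The chain's final output matching `output_formation.primary_completion` is one simple signal behind a "faithful" reasoning-pattern label.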

Hello my friend, nice to hear from you, and as always thanks for sharing your work. Let me take a good look at your presentation, because I have never studied Pareto's discrete function that describes the 20–80% positive and delimited distribution.

Give me 2 days bro. Hugs.


Hey, I’ve actually been thinking a lot about this — what if the AI doesn’t just follow a generic decision tree, but actually takes a path tailored to the user?

I worked on a small example to explore this idea.

Let’s say the user says:

“I’m feeling overwhelmed and lost. I don’t even know where to start.”

Now, the AI could respond in many ways:

  • Should it emotionally comfort the user?
  • Should it offer a structured plan?
  • Should it recall something said earlier?

So instead of guessing or randomly weighing these options, I imagined a system where the AI refers to a personal multidimensional map built from past interactions. I visualized this using something I call an MDRV (Multi-Dimensional Radial Visualization).

In my sketch, each dimension (like emotional tone, context recall, or strategy type) extends from the central query. Every white dot is a past data point from that user — a time the AI chose a certain strategy and saw how the user responded.

If one dimension has more data points (meaning it worked better or more often), the AI chooses that path.

So in this case, if the emotional interpretation dimension is dense, the AI might respond like:

“It’s okay to feel this way. You’ve been carrying a lot silently. Let’s break it down one step at a time, together.”

It’s not just a good guess — it’s a data-informed emotional route, personalized to that user.
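The "densest spoke wins" rule described above can be sketched in a few lines. The strategy names, the interaction log, and the positive-outcomes-only count are all illustrative assumptions, not an actual system:

```python
from collections import Counter

# Hypothetical log of (strategy, outcome) pairs for one user; each
# entry is a past time the AI chose a strategy and saw the response.
past_interactions = [
    ("emotional_support", "positive"),
    ("emotional_support", "positive"),
    ("structured_plan",   "neutral"),
    ("context_recall",    "positive"),
    ("emotional_support", "positive"),
]

def choose_strategy(history):
    """Pick the strategy dimension with the most positive past
    responses — the 'densest' spoke on the user's MDRV."""
    density = Counter(s for s, outcome in history if outcome == "positive")
    return density.most_common(1)[0][0]

print(choose_strategy(past_interactions))  # emotional_support
```

With this log, the emotional-support spoke is densest, so the AI would take the empathetic route first.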

Here’s the simple 3D sketch I made to explain the idea visually. Just wanted to share and see if something like this could help bring clarity around how AI could “choose its path” in a human-aligned way.

Would love to hear your thoughts on this!

– Upendra
