Built a new plot that can visualize 5–7 dimensions in 3D without losing interpretability: introducing the Multi-Dimensional Radial Visualization (MDRV)

Hi everyone,

I’ve been working on a problem that bugs a lot of us in data science and visualization:
How do you effectively visualize more than 3 or 4 features without reducing dimensionality — and without making it unreadable?

Most common techniques like PCA, t-SNE, or UMAP compress features into latent spaces. Great for clustering, but they kill interpretability. On the other hand, traditional plots (scatter plots, star plots, parallel coordinates) don’t scale well.

So, I built a solution:
:backhand_index_pointing_right: Multi-Dimensional Radial Visualization (MDRV)
A 3D radial plot that lets you visualize 5–7 dimensions while preserving the meaning of each feature. No PCA, no embeddings: just raw features mapped to radial axes in 3D space.

:brain: Key Ideas:

  • Each feature is treated as a radial axis (like spokes on a wheel)
  • The target variable maps to the Y-axis (vertical)
  • Each data point becomes a “3D star” that represents its feature profile (see the code sketch after this list)
  • Supports zoom, rotate, filter, and color by class or value
  • Tested on datasets like: Breast Cancer Diagnosis, Titanic, Housing Prices, Delivery Time
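
To make that concrete, here is a minimal sketch of the construction in matplotlib, assuming features are pre-scaled to [0, 1]. It is only an illustration of the idea above, not the code from the linked repo, and the function name mdrv_sketch is hypothetical.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection on older matplotlib)

def mdrv_sketch(X, y, feature_names):
    """X: (n_samples, n_features) array scaled to [0, 1]; y: target values."""
    n_features = X.shape[1]
    # one spoke direction per feature, evenly spaced around the circle
    angles = np.linspace(0, 2 * np.pi, n_features, endpoint=False)

    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")

    for row, target in zip(X, y):
        # spoke tips: radius = feature value, height = target value
        # (matplotlib's z axis plays the role of the vertical axis described above)
        xs = row * np.cos(angles)
        ys = row * np.sin(angles)
        zs = np.full(n_features, target)
        # close the loop so each row draws as a "3D star"
        ax.plot(np.append(xs, xs[0]), np.append(ys, ys[0]), np.append(zs, zs[0]), alpha=0.4)

    # label each radial axis with its feature name, just outside the unit circle
    for angle, name in zip(angles, feature_names):
        ax.text(1.1 * np.cos(angle), 1.1 * np.sin(angle), 0, name)

    ax.set_zlabel("target")
    plt.show()

With, say, the Breast Cancer dataset, X would be the min-max-scaled matrix of 5–7 chosen feature columns and y the diagnosis label encoded as 0/1.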

Why I built this:

I’m a student researcher. I tried reaching out to experts, senior folks, and even science authors — but didn’t get responses. So now I’m just putting it out here, hoping it helps someone who’s been looking for a better way to explore high-dimensional tabular data.

:link: Full paper + open-source code: https://drive.google.com/file/d/1C0HqykGnzY5mzVhnRSgzSL5u_QvnGxsv/view?usp=sharing
:backhand_index_pointing_right: GitHub Repo

Would love your thoughts:

  • Is this something you’d use for your EDA?
  • How do you approach 6+ dimensional feature visualization?
  • Feedback/criticism/ideas welcome!

Thanks for reading :folded_hands:


It was in 2D. Imagine a tunnel along the x axis: it has 3D + 1D for volume + 1D for elongation, and that same tunnel splits in two, entering the imaginary-number or prime-number dimensions with 4 more dimensions, but it is the same tunnel living in two dimensional planes (4D): length, width, height, volume.


Wow, your way of thinking honestly blew my mind. At first, I was confused — but after spending some time really trying to understand what you meant, I realized how deep and creative your perspective is.

The idea of the tunnel splitting into different dimensional paths like primes or imaginaries — that’s such a unique way to look at something I built as a visual tool. I never thought of it that abstractly, and it genuinely surprised me how much meaning you were able to extract from it.

I truly admire your intelligence and the way you think in layers beyond what’s visible. Thanks for engaging with my idea like this — you made me see my own work in a whole new light :fire:

Would love to keep learning from your thoughts if you’re open to it!


I am glad I was able to help.


I have not checked your tool yet. You asked for it in this post, so I saw it in my head; I can do that for hundreds. If you need more help, let me know.

@Upendra10 @aaac12345 Interesting work you both are advancing here! We may be exploring similar directions.

Good to see you again @aaac12345! You may remember us from prior discussions a while back on Symbolic Residue and Gamma Log.

We are now exploring ways to collaborate with LLMs on guided agentic interpretability via context schemas that act as semantic attractors. This includes visualizing latent spaces through the data modeling below:

Self-Tracing GitHub


Agentic Schema Output Excerpt:

{
  "query": "What is the capital of the state containing Dallas?",
  "attribution_trace": {
    "entity_recognition": {
      "Dallas": {"confidence": 0.98, "feature_id": "CITY_ENTITY"}
    },
    "knowledge_retrieval": {
      "Dallas_location": {"value": "Texas", "confidence": 0.97, "feature_id": "LOCATION_FACT"},
      "alternatives_considered": [
        {"value": "Oklahoma", "confidence": 0.02},
        {"value": "Louisiana", "confidence": 0.01}
      ]
    },
    "reasoning_steps": [
      {
        "operation": "ENTITY_TO_LOCATION",
        "input": "Dallas",
        "output": "Texas",
        "confidence": 0.97
      },
      {
        "operation": "LOCATION_TO_CAPITAL",
        "input": "Texas",
        "output": "Austin",
        "confidence": 0.99
      }
    ],
    "output_formation": {
      "primary_completion": "Austin",
      "confidence": 0.95,
      "alternatives_considered": [
        {"value": "Houston", "confidence": 0.03},
        {"value": "Dallas", "confidence": 0.01}
      ]
    },
    "faithfulness_assessment": {
      "score": 0.96,
      "reasoning_pattern": "faithful",
      "notes": "Direct causal path from input to output with clear intermediate steps"
    }
  }
}
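
For anyone who wants to poke at an excerpt like this programmatically, here is a minimal sketch that walks the trace, assuming it has been saved to a file named trace.json; the file name and the 0.9 threshold are my own assumptions, while the key names follow the excerpt above.

import json

with open("trace.json") as f:
    trace = json.load(f)["attribution_trace"]

# walk the chained reasoning steps: ENTITY_TO_LOCATION -> LOCATION_TO_CAPITAL
for step in trace["reasoning_steps"]:
    print(f'{step["operation"]}: {step["input"]} -> {step["output"]} (confidence {step["confidence"]:.2f})')

# flag traces whose faithfulness score falls below a chosen threshold
if trace["faithfulness_assessment"]["score"] < 0.9:
    print("warning: reasoning pattern may not be faithful")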

Hello my friend, nice to hear from you, and as always thanks for sharing your work. Let me take a good look at your presentation, because I have never studied Pareto’s discrete function that describes the 20–80% positive and delimited distribution.

Give me 2 days bro. Hugs.


Hey, I’ve actually been thinking a lot about this — what if the AI doesn’t just follow a generic decision tree, but actually takes a path tailored to the user?

I worked on a small example to explore this idea.

Let’s say the user says:

“I’m feeling overwhelmed and lost. I don’t even know where to start.”

Now, the AI could respond in many ways:

  • Should it emotionally comfort the user?
  • Should it offer a structured plan?
  • Should it recall something said earlier?

So instead of guessing or randomly weighing these options, I imagined a system where the AI refers to a personal multidimensional map built from past interactions. I visualized this using something I call an MDRV (Multi-Dimensional Radial Visualization).

In my sketch, each dimension (like emotional tone, context recall, or strategy type) extends from the central query. Every white dot is a past data point from that user — a time the AI chose a certain strategy and saw how the user responded.

If one dimension has more data points (meaning it worked better or more often), the AI chooses that path.

So in this case, if the emotional interpretation dimension is dense, the AI might respond like:

“It’s okay to feel this way. You’ve been carrying a lot silently. Let’s break it down one step at a time, together.”

It’s not just a good guess — it’s a data-informed emotional route, personalized to that user.
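
A minimal sketch of that selection rule, assuming the per-dimension history is just a tally of past responses; the strategy names and data below are hypothetical and purely for illustration.

from collections import Counter

# hypothetical log of (strategy dimension, user response) pairs for one user
past_interactions = [
    ("emotional_comfort", "positive"),
    ("emotional_comfort", "positive"),
    ("structured_plan", "neutral"),
    ("context_recall", "positive"),
    ("emotional_comfort", "positive"),
]

def choose_strategy(interactions):
    """Pick the strategy dimension with the most positive past responses."""
    positives = Counter(s for s, outcome in interactions if outcome == "positive")
    if not positives:
        return "structured_plan"  # arbitrary fallback when there is no history yet
    return positives.most_common(1)[0][0]

print(choose_strategy(past_interactions))  # -> emotional_comfort

In practice the tally could be replaced by the density of points along each MDRV dimension, which is the same decision expressed geometrically.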

Here’s the simple 3D sketch I made to explain the idea visually. Just wanted to share and see if something like this could help bring clarity around how AI could “choose its path” in a human-aligned way.

Would love to hear your thoughts on this!

– Upendra


It’s a pretty good thing you are doing there; let me recommend you something. Be very careful when you talk to AI: you would be giving it your future creations. Over.


Do you want to do more dimensions? Let’s talk.


Yes, let’s do this!


Ok, let’s schedule a chat over the phone or on Telegram; I’m Alejandro Arroyo de Anda Coronel. Let’s also include Caspian: he participated in the post too, and he is good.


Hey, this is my LinkedIn account: “Karimi Upendra”. Let’s chat over there.


My post was flagged. Can you share the link so that I can join?


I am back bro


Yeah, my doubt is: are we really putting our future in the hands of AI if we try to make a model which understands the user and behaves according to their likes and dislikes? I think there may be some hacking involved in that, and only humans can mislead a user, not the AI. That’s my opinion; what do you say, brother?


First, bro, I am serious: you built something beautiful, and according to the invisible HTML coat over your link, you are now being hacked. Welcome to the group. Hide all your files on a USB. Then we talk.


Bro, I am here. We said we would share our views and some technical info today, and that I would share mine first because of thieves. I did not have time to do it; let’s do it tomorrow, bro. You said something about MRBI; let’s do it from scratch. I don’t know what that is. I meant MRI, and I don’t even know that one either, but I can do visualization in my mind up to many-dimensional arrays, so we could give those skills some approach to become a product like the one you showed, which I assumed you made. Damn, let’s find out who did it first. And then we study more, haha. We can work on a very good project, mate. Let’s do some research and meet back here in 1 week to talk.
