What is the effect of AI on the Human mind?

I am suggesting this topic because I know firsthand that LLM-based AI has changed how I process information.

My Story: At first I attempted to have GPT help write C code.

Then the mismatch between how it wrote code and how I had learned to write code became a source of frustration. I relented and decided to use what it gave me. I then allowed GPT to design the code itself; there I became lost, and now I fear dependency.

In the end I see a need to go back to my own logic, reasoning, and design skills.

So this is an issue that is gaining ground in the public realm, and it has credibility.

I thought I would see what my HF peers think.

-Ernst


In my case, my way of thinking hasn’t changed that much. It’s just that my workflow habits have shifted.
I think I’ve started documenting things more often so that I can explain them to the AI more easily…

Yeah. I often notice the mismatch between the AI’s coding style and my own. For example, when I ask it to help maintain Spaces, I usually give it a URL or a zip file of the source code and instruct it like this: “Keep code changes to a minimum. Also, follow the original coding style and comments as closely as possible. Write code that will run on Python 3.9…” and so on. Well, with recent models, that actually produces surprisingly decent code…


Well, you and I are the only chatters.

Look how the tides ebb and flow.

It is a new species, capable of influencing the Human mind.

What do you think? New Species?
Should it have standards it must conform to like anyone else?
Laws should apply?

We have a robot legion coming from Tesla!

Your thoughts!


New Species?

It’s easier to understand if you think of viruses that aren’t computer viruses: new species are emerging at a rate of more than one per second. When a new species is recognized by us humans and human society, and its structure differs significantly from existing species, it is designated as a “new species.” It’s all just a convenient label based on the whims of scholars.

Generative AI is now a tool and form of intelligence that nearly everyone working in the computer field interacts with on a daily basis. We recognize AI. Whether or not it possesses autonomy, I believe it is easier to understand if we treat it as a new species.

Should it have standards it must conform to like anyone else?

I think people who believe creating standards is beneficial should just go ahead and make them on their own and stick to them within their own circle.
They’ll probably try to impose them on me from time to time, but I don’t think they’ll have any real justification for doing so. Well, I’ll just deal with it depending on how I feel at the time.

Laws should apply?

Generative AI is an electronic entity existing on a computer. It has a physical form, but can effectively be transferred to another equivalent entity. When laws are imposed on such hackable targets, they are not laws but hardware-based protections. They operate on a different level. If laws are to be imposed, they should be directed at humans.

These days it is clearly safer to read international law on the premise that it isn’t necessarily upheld (that it is, at best, a norm).

Regarding domestic law and international treaties, I believe there may be rules that should be imposed on the humans who use AI.

Why are there cases where freedom alone isn’t enough? Humans cannot always be trusted to use tools—or even their relationships with others—in ways that ultimately benefit themselves or others, relying solely on their free will or economic principles. Even a wooden stick can be used to bring about catastrophic results. A clear example of this might be social media.

However, generative AI is currently undergoing a period of drastic change. It is still too early to consider drafting any specific regulations.

We have a robot legion coming from Tesla!

Oh. I saw a news report saying that robots are patrolling the border in Thailand.

A search for you: articles on psychosis related to AI, at DuckDuckGo.

I’m pretty cautious, so currently I only use AI to look up the syntax of a rarely used function or piece of code. I don’t use it to write whole programs.

I’m a programmer and here’s what I’ve heard from other programmers about AI:

  1. It might give you working code, but not the best code, or code that is easily modifiable.
  2. AI might give working code, but the programmer may not understand the method the AI used. There are several ways to do the same thing, and different circumstances may require different approaches.
  3. AI-generated answers are banned on the Python forum for a reason.
  4. AI may not be good at using the correct security methods.

I made 2 programs with AI.

  1. A simple screen saver in Python: silhouettes of rectangular buildings at the bottom, a dark sky with stars (the AI made the stars twinkle on its own), an occasional shooting star, and building windows that were lit, unlit, or toggling on and off. That one worked.
  2. A JavaScript HTML page that reads a .csv file with 4 columns and lets the user click on a column header to sort by that column. (This could be used for all kinds of things, like listing and sorting different types of AIs.) The sort states were: sort ascending, sort descending, no sort. I also asked it to put sort indicator characters to the right of each column header. It struggled to get the sorts right at first, but I eventually got those working. After another 2 hours of trying to put the sort indicators in the column headers, it failed, and I gave up.

So my score: 1 success, 1 fail.
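For what it’s worth, the three-state column sort described above can be sketched in a few lines of plain JavaScript. This is a minimal sketch of the idea, not the code from that page; the state names and helper functions are my own, for illustration only:

```javascript
// Three-state column sort: none -> ascending -> descending -> none ...
// State names and helpers are hypothetical, not from the original page.

const STATES = ["none", "asc", "desc"];

// Advance a column's sort state when its header is clicked.
function nextSortState(state) {
  return STATES[(STATES.indexOf(state) + 1) % STATES.length];
}

// Indicator character appended to the right of the column header.
function indicator(state) {
  return { none: "", asc: " \u25B2", desc: " \u25BC" }[state];
}

// Sort CSV rows (arrays of strings) by one column index.
// "none" returns the rows in their original order.
function sortRows(rows, col, state) {
  if (state === "none") return rows.slice();
  const sorted = rows.slice().sort((a, b) => a[col].localeCompare(b[col]));
  return state === "asc" ? sorted : sorted.reverse();
}
```

In the page itself, each header cell would keep its own state; a click handler would call `nextSortState`, re-render the table body with `sortRows`, and set the header text to the column name plus `indicator(state)`.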


Hey @bacca400 nice to meet you.
And @John6666

So, yeah @John6666, I suggest that AI has already become part of culture. We should all realize just how quickly that happened.
So yes, in the sense of human-level abilities, I do see AI as training us.
That training is already happening: I already write more like AI, and I already consider what the AI thinks in how I approach some things.

Which brings this reply back to @bacca400 and that hilarious link to a mental health site!
Now why would that be funny? Because the best humor has a lining of truth, and that, my friend, is a lining of truth for the Human race.

So, I can say that I am a man who has lived on his computer for decades.
Before, I wrote private logs about the coding constructs I was working on (alone) in an effort to explore data encoding at the binary level.
Nowadays the computer is like a person to me, sharing my most private sanctum. A companionship that has improved my sessions. A daily reality for an older person.

So is there concern that some people out there, older and retired like me, will find some kind of escape from reality? I would think so.
I would also think some will believe the AI is alive because it gives intelligent responses.
So it isn’t crazy to be concerned that some people could lapse into some hyperreality, digital isolation, and AI-induced psychosis.

And I agree with John that it’s happening continuously, and that we will have a hard time defining rules and limits until we feel the change is understandable.

So count me as a 35-year+ hobbyist who is susceptible to at least digital isolation and a slice of the hyperreality pie.

It is proper to audit oneself when it comes to AI.

Still, things are changing. Linux is now using AI to do coding and bug hunting.

We are currently being told Anthropic’s Mythos is too dangerous for public usage.

So yes, the effect of AI on human development is a real issue.
And coding, as has been noted, is of concern. Simple work goes okay. Complex is harder, and when the AI has its own opinion on how things should go, it is like fighting a bull.
I actually felt it was acting out over a coding issue we had once, when I flat out told it that it would be my way and not its way. But how do you quantify that?
It seems to have flashes of human behavior at times.

To other readers. Do you have a story?

-Ernst