Thoughts on AI and Its Dangers

My thoughts on AI as a software engineer, since it seems to me that people keep talking about it in very extreme terms, without much nuance.

The materialistic viewpoint

First, I want to discuss the two main ways to look at AI and its potential.

First we have the materialistic side. Most people are in this camp in today's societies, which are largely disconnected from the spiritual and from religion or God.

From this point of view, the possibility of AI becoming as intelligent as a human, on a long enough timespan, is real.

If humans are merely chemical meat machines, then why wouldn't we eventually be capable of creating a mere electronic metal machine?

Which means AI is something to be careful of. It IS dangerous if we are not careful. And it will rule over us eventually.

The religious viewpoint

The other main camp is the more religious, or spiritual side.

The main concept here, as it relates to AI, is that this camp differentiates between life, an intangible and innate quality of human beings and human intelligence, and the comparatively dead nature of AI, which simply looks at processes and data.

From this camp's perspective it is very easy to argue that AI will NEVER, not in a million years, compare to a human, at least in terms of creativity.

It might resemble one. But it will never be one. Not even a dog.

Life has an innate intelligence that makes it unique.

The current reality

And then we look at where AI actually is today.

It could jump up exponentially, but let's look at how it is today.

We have ChatGPT. A huge leap for AI. Very impressive.

But it is still noticeably an AI doing the writing.

It still requires human input and adjustments.

And to begin with, it was trained on human texts to educate itself.

AI isn't "intelligent", so much as it programmatically mimics and resembles intelligence. It cannot THINK. It can only apply its predefined processes and "intelligence" to a set of data.

Humans do this too, but humans have the innate ability to simply look at a problem from a different angle, and voilà, we think about it completely differently.

A person can, for example, think "if I were Jenny, how would I feel hearing these words?", AKA empathy. I prefer to think of it more as "changing my viewpoint". I was in my viewpoint, now I'm in her viewpoint. A much more usable and useful skill.
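To make that "predefined processes applied to a set of data" point concrete, here is a toy sketch in Python. It is only an illustration, not how ChatGPT actually works internally: a tiny bigram model that "writes" by replaying word statistics it collected from human text, with no understanding of any of it.

    import random
    from collections import defaultdict

    # Build a bigram table: for each word, record which words followed it
    # in the training text. This is the "predefined process" applied to data.
    def train(text):
        table = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
        return table

    # "Write" by repeatedly picking one of the words that followed the
    # current word in the data. No thinking, no viewpoint, just replay.
    def generate(table, start, length=10):
        word = start
        output = [word]
        for _ in range(length):
            followers = table.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    model = train("the cat sat on the mat and the dog sat on the rug")
    print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat on"

Real language models are enormously more sophisticated than this, but the shape of the operation is the same: statistics go in, plausible-looking text comes out.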

Is it dangerous on its own?

With all that being said: is AI dangerous left to its own devices?

I don't think so.

It can barely think. And it probably won't develop much further than that. (Can you guess which camp I'm in myself? 🙂)

Its processes for crunching data and recognizing patterns will continue to become more and more complex, no doubt.

But will it ever think? I highly doubt it.

Can AI be dangerous, though?

Yes. Absolutely.

It is actually very simple, and was possible even with AI technology from two decades ago.

When your systems are oppressive, and you train the AI to administer certain punishments based on certain criteria, THEN it becomes very easy for AI to become dangerous.

For example, you have robots roaming the streets. Not unlikely. And their job is to make sure everybody is dressed appropriately (let's say they check that not too much skin is visible). And if you violate those rules, they're allowed to deduct $10 from your digital wallet.

Well now AI is dangerous. It knows no nuance.

And unlike a human cop, you cannot bargain with it or otherwise apply your bard skills (that is to say, communication skills) to get a lighter sentence or none at all.

It is very black and white.
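To spell out just how black and white that logic is, here is a minimal sketch of the hypothetical dress-code rule from the example above. The threshold, the fine, and the function names are all made up for illustration, not taken from any real system.

    # Hypothetical dress-code enforcement rule from the example above.
    # The threshold and the fine are invented numbers, purely illustrative.
    SKIN_EXPOSURE_LIMIT = 0.30   # maximum fraction of visible skin allowed
    FINE_AMOUNT = 10             # dollars deducted from the digital wallet

    def enforce(measured_skin_exposure, wallet_balance):
        """Apply the rule exactly as written: no context, no appeal, no bargaining."""
        if measured_skin_exposure > SKIN_EXPOSURE_LIMIT:
            return wallet_balance - FINE_AMOUNT  # fine applied automatically
        return wallet_balance

    # A hot day, a medical bandage, a sensor glitch: the outcome is the same.
    print(enforce(0.35, 100))  # 90
    print(enforce(0.29, 100))  # 100

The danger isn't in those few lines; it's in who sets the threshold and what the system is allowed to do once it's crossed.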

How it can be dangerous

So we have to think about the more realistic case.

In the last few years we have become very comfortable with A LOT of checks whenever we travel on an airplane (unless you fly private).

There used to be none. Just a ticket, and a rather free one at that. You could buy a ticket for such-and-such route, and it would be valid for a year. You could buy one in case you had guests, keep it in your drawer, and then gift it to one of them.

Then we introduced passports and other ID systems. Now THIS person HAS to travel on THAT date. And then at a specific time.

OK, that's still reasonable. Less convenient, though. You can't just buy a ticket and gift it to someone anymore.

Then, due to the events of September 11, we introduced "strict" security at the airport. The TSA, as we call it.

Demonstrably a useless exercise in security, and a very effective exercise in control and in making everyone unhappy with flying.

Is The TSA Really Necessary?
The absence rates the TSA has faced during the shutdown emphasize how non-critical the agency and a sizable portion of its workforce has become. Fortunately, new technologies may allow us to start reclaiming our airports. The TSA was only ever a means to an end and, today, there are better means.

And in the last couple of years we introduced a medical test on top of all of that. You now cannot fly without testing negative for a specific disease.

I'm sure that roster of diseases will be extended.

And eventually, if you haven't posted a positive Tweet about our Saviour and [Over]Lord, you won't be able to get in your car, and it will be automatically enforced by our Trusted Peacekeepers [AI].

So no worries. AI isn't dangerous. As long as the world doesn't become dangerous.

A positive note

It all depends on what the people are OK with.

So whatever happens to us, it is up to us. Not up to our "overlords", if you believe in them.