DMT-Nexus
The unsettling reality of today's uncertainty in the face of AI Options
 
Nydex
#21 Posted : 2/3/2024 9:33:55 AM

DMT-Nexus member

Moderator

Posts: 634
Joined: 02-Dec-2017
Last visit: 12-Jul-2024
Location: The unfeeling, dark chrysalis of matter
Voidmatrix wrote:
With our immaturity as a species, we have something to be worried about with how the upper echelon may utilize and weaponize machine learning or "AI." I put AI in quotes because intelligence assumes sentience, and as far as we can tell, the programs we have to date are not sentient. They still abide by strict rules of programming. The best course of action we have for potentially identifying an AI as sentient would probably be the Turing test (which isn't the kind of test one (or an AI) beats). If an AI can fool a human in the Turing test, it just means that such a structure is able to at least mimic the appearance of consciousness, while still abiding by the rules and parameters of strict programming. This means that, at the end of the day, with a system and program created by a person, it is pretty much impossible to tell sentience from programming.

The one instance that would convince me that an AI were sentient (or even sapient) would be if sentience were an emergent property of a system (particularly a digital one), similar to how consciousness as we are aware of it is an emergent property of/in matter.

[url=https://www.dmt-nexus.me/forum/default.aspx?g=posts&m=1214558#post1214558]As for the hard problem of consciousness, apparently it's outside the scope of science to figure out,[/url] which I find unsurprising. We operate from the assumption that everything must subscribe to certain physical laws that we see in the world. Doing otherwise skews our models, which are oh so productive. But the models may be flawed, and that's why they can't commit to an idea, such as consciousness perhaps being a property of the universe that isn't necessarily physical or non-physical, but interacts with both. What if consciousness is the bridge in the gap of duality that makes it oneness...

I don't know, I'm just musing now. Love

One love

Nobody knows, and we're all musing right now, including AI researchers. That's the beauty of it, but also the thing that worries me a bit.

As you say, we're too immature to hold such power in our hands. We still bicker and fight over the most trivial shit ever, and the introduction of an almighty silicon sentience will definitely not make things easier.

I frequently think about that emergence, about the moment we know we have a truly sentient artificial being in our midst. People often muse how AI will serve us, help us become the gods of this galaxy, etc, etc, but in reality, we have no idea what that would be like. Why would we assume it will abide by our will or follow our orders? It might have an agenda completely incomprehensible to us.

All it needs is, as rkba said above, a self-sustaining source of energy, and a place to exist (which it already has). And I'm sure the moment we know true AI is among us, cults will form that worship our future overlords, and those cults will go to extremes to provide that AI with any and all means of survival, including building nuclear fusion reactors or enormous solar panel fields to sustain it.

It's a future as exciting as it is concerning, and I can't wait for it.
TRUST

LET GO

BE OPEN
 

 
dragonrider
#22 Posted : 2/3/2024 11:48:18 AM

DMT-Nexus member

Moderator

Posts: 3090
Joined: 09-Jul-2016
Last visit: 03-Feb-2024
Oh, I wish I had some more time right now to join all of these interesting discussions on the Nexus.

There are several ways to define intelligence. AI at this moment already fits some of those definitions, so I believe that emergence is very gradually starting to happen.

All of this would not have been possible without the tech giants funding it, but the downsides are that these tech giants become even more powerful, and that those companies don't share all of their data because it's just too valuable.

One sign of emergence would be that an AI starts to do things that the engineers did not foresee and that they don't fully understand. Something the engineers cannot simply deduce from their own designs.

I would say that there are signs that such things are actually happening right now, but because the tech giants don't share all of their data, we, the public, don't really know.

One of those signs is the videos that Boston Dynamics released of their robots doing weird things. What struck me about those videos is that the robots sometimes fail. That seems to imply that the engineers themselves don't know exactly what their robots are capable of, and that there is an element of unpredictability in how they behave.

I find it especially telling how these robots fail. They don't get stuck in loops, they don't freeze, they fail like a human would fail. They try to do something, like jumping from one spot to another, and very nearly miss.

That's not simply following instructions. That is making a mistake, or at least that's very much what it looks like.

If an AI can make a genuine mistake, that is a sign it's pretty advanced.



 
 