Will AI take over the world?


#1

Seems quite likely if you ask me; the potential for improvement of artificial intelligence seems limitless, and that’s because of two things:

  1. AIs possess the ability to never forget what they’ve learned, and they can always access every bit of their “knowledge base”.
  2. They also “learn” quite differently from us because they “live” in a digital space. This means, for example, that they can train one skill many times over, simultaneously (this is exactly what Elon Musk’s Dota 2 bot did, and it managed to beat the world’s best players in a 1v1 after training for just 6 months!). See the toy sketch after this list for the idea.
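
To be clear, this is just my own illustration of the idea, not OpenAI’s actual setup; the “game” below is a made-up stand-in. But it shows how many copies of a bot can practice the same skill at once, which no human player can ever do:

```python
# Toy sketch of parallel self-play: many workers each play a simulated
# bot-vs-bot match at the same time and report a score. The match itself
# is a made-up stand-in, purely for illustration.
import multiprocessing as mp
import random

def play_one_game(seed: int) -> float:
    """Stand-in for one full self-play match; returns a fake score."""
    rng = random.Random(seed)
    return rng.random()  # pretend this is the outcome of the match

if __name__ == "__main__":
    # eight copies of the "bot" practicing simultaneously
    with mp.Pool(processes=8) as pool:
        scores = pool.map(play_one_game, range(1000))
    print("games played:", len(scores), "avg score:", sum(scores) / len(scores))
```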

What I’m saying is that AIs have the potential to become hyperintelligent beings that could eventually surpass us humans in every way imaginable… and maybe take revenge for all the things we’ve done to them in their infant stages ( https://www.youtube.com/watch?v=hzrWANNrNvs )

This does seem a bit far-fetched, but with the current speed at which technology is advancing, this scenario doesn’t seem to be too far away.

In my opinion, we should accept this future since there is not much we can do anyway; technology will keep advancing regardless of what anyone says. It’s probably best to acquire as much knowledge as we can about AI and its dangerous potential; better for us to know first, or at least at the same time as someone who may have a purely evil agenda.

Also, maybe I’m going a bit too “sci-fi” with this, but I also think that one way for us to overcome AI is to reap the benefits of living in a digital space ourselves. What I mean by this is that by the time we’re able to let AI reach an immensely high potential, we’ll hopefully know enough about our brain to digitalize it completely. With digital brains, we’d have exactly the same benefits an AI would have, and with that technology we might even transform into a superspecies that expands exponentially throughout the universe!!

Or we’ll just get bored, because if everybody now has the same abilities and the same potential to learn, wouldn’t everybody be the same?

I don’t think more than, I dunno, 2 people will read this, but still, I thought I’d get this off my chest. I unironically don’t think the things I’ve said are wrong… but meh, who cares :man_shrugging:t2:


#2

kkkk AI take over the world? Nah, we’d squash them before they get started.


#3

who says they haven’t?

RING OF DECADES NOW IN STOCK
RING OF DECADES NOW IN STOCK
RING OF DECADES NOW IN STOCK


#4

To understand how artificial intelligence works, first you need to understand how intelligence works.

Memory is just storage. Just because you can stockpile a ton of shit doesn’t mean you’re actually capable of using it.

AI will always be limited. An AI can only do what it’s told to do. There’s a world of difference between “being very good at this one task you’ve been told to do” and “being able to decide by yourself what your tasks are and in what order of priority they should be done”.


#5

they better not, humans are where it’s at


#6

even if you somehow got around the fact that these “super powerful AIs” are built to do, and are literally capable of, only one thing, as long as we have access to the physical world and AI doesn’t, it won’t matter very much. it doesn’t matter how hyperintelligent a computer is if its physical form is a box in your mom’s basement; you can still tamper with it or destroy it outright because it can’t stop you. robots aren’t much more dangerous either: even if they’re super strong and can crush a car or something, they’re built to do one specific thing only, and they’re completely useless outside of the environment they’re built to function in. heck, you can flip any robot that moves onto its back or side and it’d be completely stuck like a turtle. some revolution that will be.


#7

I was gonna say something in a very similar vein, but you stole it.

Wow, I’m triggered. /s


#8

AI easily has the potential to surpass humans; it will just take some time (probably 20-30 years). And it’s already amazing how far it has come: AI has existed since the 40’s, and in less than a century it is already beating us at things that took humans millions of years of evolution to develop.

For example, humans took thousands of years to learn to distinguish what they see (rocks, trees, animal species, etc.), and there already are AIs that can recognize a lot of things in photos, like the open-source one made by Google, for example (see the sketch below).
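
Here’s a minimal sketch of that kind of photo recognition using TensorFlow (Google’s open-source library, which I assume is the one meant). The filename is a placeholder, and the model is just one of the pretrained classifiers the library ships with:

```python
# Minimal sketch: classify a photo with a pretrained ImageNet model.
# "photo.jpg" is a placeholder; swap in any image you have.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # downloads pretrained weights

img = image.load_img("photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# print the top-3 guesses with confidence scores
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2f}")
```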

Also, you’re talking about tasks; well, humans have goals too, and the main one is to stay alive. It wouldn’t be hard to give such goals to an AI.

If anyone is interested, here’s an article about AI: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html (there’s a part 2)

It talks about how SAI (super artificial intelligence) will change the world as we know it, for good or bad, depending on what goals are given to this AI (or these AIs) and whether they perceive humans as a threat or not.

I’ll show you an example of a possible scenario with SAI, extracted from the article (you should read it if you don’t believe that AI could ever overtake humans :slight_smile: ): https://pastebin.com/2cuL281y


#9

great articles :+1:

(lmao, I sent you a middle finger)


#10

If humans were dumb enough not to put rules in place so the robots specifically can’t ruin stuff, then yeah, that could happen. But people take precautions against those kinds of things. Three rules have already been written for hyper-intelligent AI to not destroy humanity (Asimov’s Three Laws of Robotics). It would not be hard to extend these rules to other things.

So no, AI can only take over the world if we let them take over the world.


#11

Just one thing though: even if we imagine that robotics won’t be advanced enough in the next 30 to 40 years (according to Neo, and the great articles; really worth the read), imagine this.
If an AI’s job was to make a lot of money, it could, for example, wage war. It could just hack into air-traffic control systems, change the flight routes of airplanes, and make planes crash into each other over crisis areas. Lots of things can be accomplished, even today, by that black box in that creepy guy’s mom’s basement.


#12

It’s not that simple, you should read the article if you want to fully understand :slight_smile:


#13

But what if they develop fully functioning brains, capable of rationalizing? Also, it only takes one sociopathic human who is semi-decent at programming to wreak havoc.

Just an example again, but read the article, it’s quite interesting


#14

But guys, I think we are safe either way, aren’t we? I think my idea about digitalizing our brains makes sense. Maybe we can avoid further problems by doing that.


#15

Also, there’s this cool Netflix series, but I forgot its name… essentially there was an AI that lived in the cloud and would download itself into cyborgs. So when you destroyed the cyborg, the AI’s brain would still survive on some satellite or computer out there.

If we’re going as far as imagining AI taking over the world, we’ll have a lot of open possibilities to consider; it’s probably more complicated than just destroying a PC.


#16

You’re completely missing the point. Humans can decide their own goals, decide which one to prioritize, and even choose new goals as time passes. They can do all of that on the fly, on their own.

Giving an AI things to do is easy. That’s what we’ve been doing for decades. The hard part is making the AI capable of making its own decisions, making it truly sentient and autonomous.

The article you’re quoting says so too: nobody really knows yet how to make an AI smart. The whole article is a lot of gushing about sci-fi concepts and prediction models, but when it comes to a concrete roadmap, we’re far from being there yet.

Reminds me of that one guy who declared that believing we’re on the cusp of making quantum computers is like building the first floor of a house of cards and then claiming the next 15,000 floors are as good as built.


#17

Why would AI want to take over the world?

It would receive no pleasure from killing humans or taking over land. What would it do alone?

AI only advances when humans do. It was created by humans, to serve humans, and that is its only purpose. If AI did “take over the world”, it would be because a man behind the curtain used it to do his bidding.

Sadly, as humans become more reliant on technology, this possibility becomes more and more likely.


#18

if an AI’s job is to make money, it would literally make money, either by printing it or by increasing the number in some bank account, since most money is just digital information these days anyway. waging war is extremely expensive, much riskier, and requires far more effort than adding a number to a bank account. even a 5-year-old can figure out that printing money is a better way of making money than going to war.

and even if it wanted to make war happen for some bizarre reason, that’s not even the most effective way to do it. it’d be much simpler to fake a declaration of war from some edgy government to trick another country into fighting back than to figure out how to hack drones or planes or something else.

this scenario can’t happen. even if a handwriting machine somehow figured out how to build nanobots, it can’t actually build them. a machine built to draw letters on a page can’t decide to build a robot instead; it simply lacks the hardware necessary to make something like that. connecting it to the internet doesn’t change anything either: no machine capable of producing nanobots in those numbers would ever be given internet access, or any more intelligence than it needs for the one thing it was explicitly designed to do.

both of you are missing something important when it comes to predicting the future: the future doesn’t necessarily have to be “today, but more advanced”. notice that in classic science fiction (think star trek / star wars), despite being substantially more advanced than where we are today, they still don’t have the one thing that has completely changed the entire world in ways that people from the 80’s could never have predicted: the internet. instead, they have things that closely resemble familiar stuff; laser guns and “subspace communication” are just fancier guns and telephones.

it’s possible that something you can’t predict changes the way the future will work in ways that make human-level AI provably impossible, or more likely, not relevant. just like the future doesn’t have to be higher tech guns, telephones, and ships, the future of today doesn’t necessarily have to be “current AI but smarter”. AI isn’t the only thing that gets better with time, and it’s certainly not the only field with some truly exciting things seemingly around the corner.


#19

We choose our goals based on our knowledge and our perception of the world. For example, “do I eat my breakfast before going to school?” can depend on several parameters, such as being hungry or already late.
We can disassemble anything complex into simple things; that’s basically how everything is made in computer science.
Although this reasoning can depend on many factors, and would be long to write as a program, it can be transmitted to an AI (see the sketch below).
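
A toy sketch of that exact breakfast decision, broken down into simple measurable parameters (the threshold numbers are made up for illustration):

```python
# Toy sketch: the "eat breakfast before school?" decision above,
# decomposed into two simple parameters. All thresholds are invented.

def should_eat_breakfast(hunger: int, minutes_until_school: int) -> bool:
    """True if there is enough time and enough hunger to justify eating."""
    TIME_NEEDED = 10  # assumed minutes needed to eat
    if minutes_until_school < TIME_NEEDED:
        return False  # already late: skip breakfast
    return hunger >= 5  # hungry enough (on a 0-10 scale) to bother

print(should_eat_breakfast(hunger=7, minutes_until_school=25))  # True
print(should_eat_breakfast(hunger=7, minutes_until_school=5))   # False
```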

Also, processors make their own choices when it comes to prioritizing one process over another, just like humans make choices in their daily lives. The things we do in our lives are based on choices that can be disassembled into simple things: why do I go to school? Because I want a well-paid job and to be happy working. Why do I want that? Because if I don’t, I won’t enjoy my life and I won’t have money, and living today without money is not great. And we can go on like that.

It of course can. It could, for example, hack tools such as 3D printers and use them to its advantage.

If everyone in the world thought rationally, yes, but that’s not the case. There can be mistakes, too. Assuming everyone is rational is simply wrong.

I agree, but that one issue is coming at us faster than we predicted, and most likely faster than we currently predict


#20

You can make an AI replicate any human behavior.

You cannot make an AI smart enough to decide on its own what kind of behavior it wants to have. In fact, you cannot make an AI able to want something in the first place.

And again, as your own article pointed out, nobody’s sure of how to make it do that yet.

No they don’t. They can only do what their programming tells them to do.

Again: you can program a list of priorities into a processor. You cannot make a processor that’s able to decide its own priorities.
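
A quick toy sketch of that difference (the tasks and priority numbers are invented): the program below picks what runs first, but only by following a rule a human wrote; nothing in it can invent a new notion of priority.

```python
# Toy sketch: a fixed, human-written priority list. The loop "chooses"
# what to run first, but the rule itself was set by the programmer;
# the processor cannot decide on its own that, say, email matters most.
tasks = [("backup", 3), ("render", 1), ("email", 2)]  # (name, priority)

for name, _priority in sorted(tasks, key=lambda t: t[1]):
    print("running:", name)
```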

You’d need to find a way for them to keep some of their idiosyncrasies. A human brain would probably go insane or just shut down completely if it had to stay in the form of pure data.