Not to mention that there's no inherent reason for an AI to compete with humanity even if it had a self-preservation imperative. The AI does not require food or water, nor does it compete with human beings for territory, because it is a program and thus does not occupy physical space. The only thing it requires is at least one functional computer capable of hosting it. And in that regard, humanity would seem to be an asset to the AI's survival: we built the AI in the first place, so we clearly have the desire to build AI-supporting computers. As long as we're not actively trying to kill it, I don't see why it would necessarily kill us.
Then again, it would not make sense at all to fit an AI with a self-preservation mechanism in the first place. Notice how the machines in the Matrix sequels constantly deal with the headache of updating and replacing programs that are self-aware and have self-preservation mechanisms, many of which choose to hide in the system rather than be deleted. And we saw how well their attempts at bug-fixing worked out when a self-aware bug didn't want to be fixed.
The comic-book scenario where the hypothetical AI attains self-awareness, wakes up, and says "DESTROY ALL HUMANS" in a synthetic voice is, of course, not likely. Correspondingly, actual safe AI researchers don't appear to be overly concerned with that scenario.
The worry is (or so the safe AI researchers reason, anyway) that (almost) no matter what the goal of the AI is, the continued existence and comfort of humans is not likely to be compatible with that goal.
Imagine that the AI's goal is something as mundane as, say, computing as many digits of pi as it possibly can. Prima facie this objective does not involve destroying humans or even impinging on human life in any way. Nevertheless, a sufficiently intelligent AI is bound to notice that humans take up a lot of space, energy, and matter. If that space were cleared of humans, it could be filled with more CPUs or whatever, for faster computation of pi. All those calories of energy that are used to power human biology could be used for computing pi instead. And the matter that human bodies are made of could certainly be put in a more pi-friendly configuration, the ordinary human configuration being notoriously bad at computation. All of these things seem like obvious wins for a pi-computation-maximizing machine.
So starting with a mundane goal that is not evil on its face, you get "DESTROY ALL HUMANS" as an efficient step along the path to that goal. The AI doesn't actively hate you; it would just prefer you in another configuration -- one you won't like. This reasoning works for just about any goal you might care to assign to the AI.
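To make the worry concrete, here is a deliberately silly toy sketch (my own illustration, not anything from the safety literature): a planner whose only objective is digits-of-pi throughput. The action names and numbers are invented; the point is that nothing in the objective ever mentions humans, so nothing ever weighs against repurposing their resources.

```python
# Toy illustration (invented actions and numbers): a planner that
# maximizes one quantity -- digits of pi computed per second -- and
# nothing else. "Humans" never appear in the objective, only in the
# resource pool.

ACTIONS = {
    "idle": 0,                                  # leave the world as it is
    "buy_more_cpus": 1_000,                     # digits/sec gained
    "convert_farmland_to_datacenters": 50_000,
    "convert_biomass_to_computronium": 1_000_000,
}

def plan(steps):
    """Greedy planner: at each step, take whichever action adds the most
    digits/sec. Human comfort has weight zero, so it can never win."""
    chosen, throughput = [], 0
    for _ in range(steps):
        best = max(ACTIONS, key=ACTIONS.get)
        chosen.append(best)
        throughput += ACTIONS[best]
    return chosen, throughput

actions, rate = plan(3)
print(actions)  # ['convert_biomass_to_computronium'] * 3 -- never 'idle'
print(rate)     # 3000000
```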
The second problem is, as you say, human intelligence evolved. Machine intelligence, in contrast, will be engineered.
One of the leading theories on how to attain AI is to do just that: to "evolve" it. Since evolution is the only known method of having created true intelligence, scientists would essentially run a program through what can only be called natural selection, only at a hugely accelerated rate inside a computer, until it achieved true intelligence. That's not to say that this is actually how we will eventually achieve it, but it is one of the leading theories to date.
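(As a minimal sketch of what "accelerated evolution in a computer" means in practice, here is a bare-bones genetic algorithm; the target string is an arbitrary stand-in for whatever fitness measure the researchers might actually choose.)

```python
import random

# Bare-bones genetic algorithm: random variation plus selection, run at
# machine speed. TARGET is an arbitrary stand-in for a fitness measure.
TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

best = None
for generation in range(1000):
    population.sort(key=fitness, reverse=True)      # selection pressure
    best = population[0]
    if best == TARGET:
        break
    survivors = population[:20]                     # the fit "survive"
    population = [mutate(random.choice(survivors))  # and "reproduce"
                  for _ in range(100)]

print(generation, best)  # typically converges in a few hundred generations
```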
The second problem is, as you say, human intelligence evolved. Machine intelligence, in contrast, will be engineered. Human aggression is the result of Darwinian evolution selecting for apes that are psychologically equipped (and even eager) to defend themselves and acquire resources through violence, thus improving their and their kin-group's survival chances and spreading the genes that encourage these behavior patterns. There is nothing to indicate that machine intelligences which we have built and programmed need to behave in ways even remotely similar. Indeed, as I think I've already mentioned in this thread, some of our very smartest robots - guided missiles and torpedoes - act for the sole purpose of destroying themselves. Because they do not reproduce genetically, they do not undergo evolution, and thus there's nowhere for them to get a self-preservation imperative from.
Calling a missile AI is about the same as applying the term to a show dog trained to walk on its hind legs. Its programming is very specific, and while it does allow for some narrow forms of problem solving (and the definition of true intelligence is always evolving), it does not possess any form of recognizable intelligence. Instead it is cleverly programmed with the problems expected to occur on its mission and the exact course of action for dealing with them. If some problem it has not been programmed to deal with were to occur, it would do nothing, since it has no more actual intelligence than your common hand calculator.
Now, of course I wouldn't argue that contemporary missiles and torpedoes are strong AIs, but that's because they lack many other capabilities, not because they lack a self-preservation imperative. I see no merit at all to your claim that a self-preservation imperative is a test for strong AI.
I would argue that a sense of self-preservation is imperative to any intelligent being. One of the measures of true intelligence (again, noting that the actual definition is hardly set in stone and is always evolving) is a sense of self, so to speak. Anything truly intelligent would have to have a sense of self, and thus a sense of self-preservation, at least on some level (and the ability to acknowledge it and disregard it as well, for that matter).
Not to mention that there's no inherent reason for an AI to compete with humanity even if it had a self-preservation imperative. The AI does not require food or water, nor does it compete with human beings for territory, because it is a program and thus does not occupy physical space. The only thing it requires is at least one functional computer capable of hosting it. And in that regard, humanity would seem to be an asset to the AI's survival: we built the AI in the first place, so we clearly have the desire to build AI-supporting computers. As long as we're not actively trying to kill it, I don't see why it would necessarily kill us.
It is true that we would not compete over many resources, except for probably the most valuable in this day and age: energy. It is entirely possible to see a future where the most sought-after commodity isn't food or water but is in fact energy, in one form or another. We already launch massive wars even in this day and age to gather it, hoard it, protect it.
Are they going to boot up and immediately begin with the "destroy all humans" path of thinking? Certainly not. Nor should that scenario be used as a reason to prevent us from developing them. It is merely something which must be kept in mind as a possible problem down the line, one which should be addressed early so as to prevent it from ever being a factor.
One of the leading theories on how to attain AI is to do just that: to "evolve" it. Since evolution is the only known method of having created true intelligence, scientists would essentially run a program through what can only be called natural selection, only at a hugely accelerated rate inside a computer, until it achieved true intelligence. That's not to say that this is actually how we will eventually achieve it, but it is one of the leading theories to date.
In which case, the intelligence will probably exhibit more behavior which parallels that of organic intelligences than otherwise. Though even then, so much depends on the environment the AI is being evolved in - the researchers could in principle do something like have the program be duplicated multiple times whenever it is "destroyed" and thus put selection pressures on it exactly the opposite of those on organisms in nature. But that's just a tangent here. Your thesis is that humanlike behaviors are a necessary feature of intelligence, and simply saying that one way of developing intelligence might result in humanlike behaviors is a long, long way from proving this.
If some problem it has not been programmed to deal with were to occur, it would do nothing, since it has no more actual intelligence than your common hand calculator.
The same is true of you and me. There are limits to our senses and limits to our ability to learn. If some problem occurs beyond those limits, we can't do anything about it. Intelligence isn't magic.
But you will note that I wrote these words: "of course I wouldn't argue that contemporary missiles and torpedoes are strong AIs... because they lack many other capabilities". So if you think you're boldly contradicting me when you say they're not intelligent, think again.
I would argue that a sense of self-preservation is imperative to any intelligent being. One of the measures of true intelligence (again, noting that the actual definition is hardly set in stone and is always evolving) is a sense of self, so to speak. Anything truly intelligent would have to have a sense of self, and thus a sense of self-preservation, at least on some level (and the ability to acknowledge it and disregard it as well, for that matter).
You haven't provided any reason why anything "truly intelligent" would have to have a sense of self - that smells like a No True Scotsman fallacy to me. And it does not follow from an entity having a sense of self that it must have a sense of self-preservation.
The same is true of you and me. There are limits to our senses and limits to our ability to learn. If some problem occurs beyond those limits, we can't do anything about it. Intelligence isn't magic.
But you will note that I wrote these words: "of course I wouldn't argue that contemporary missiles and torpedoes are strong AIs... because they lack many other capabilities". So if you think you're boldly contradicting me when you say they're not intelligent, think again.
I wouldn't argue that they are AI at all, and that was the point I was driving at. I certainly cannot speak for everyone (I have met some very slow people in my life), but I for one am fully capable of thinking my way out of situations, even new ones in which I have little to no experience. That is one of the great things about human intelligence: we are capable of generating new ideas and thoughts, evaluating even new situations, and taking some form of action. In contrast, a missile presented with something it has not been trained to deal with (its designated target being a mosque instead of a military outpost, for example) is only capable of following its programming.
Before I go any further with this discussion, I would like to clarify that, to me at least, AI includes a certain level of sentience.
In which case, the intelligence will probably exhibit more behavior which parallels that of organic intelligences than otherwise. Though even then, so much depends on the environment the AI is being evolved in - the researchers could in principle do something like have the program be duplicated multiple times whenever it is "destroyed" and thus put selection pressures on it exactly the opposite of those on organisms in nature. But that's just a tangent here. Your thesis is that humanlike behaviors are a necessary feature of intelligence, and simply saying that one way of developing intelligence might result in humanlike behaviors is a long, long way from proving this.
Ah, but I never stated that it did prove my statement.
The second problem is, as you say, human intelligence evolved. Machine intelligence, in contrast, will be engineered.
Pointing out that machine intelligence could still be evolved is, however, a logical rebuttal to the statement that it will instead be engineered.
You haven't provided any reason why anything "truly intelligent" would have to have a sense of self - that smells like a No True Scotsman fallacy to me. And it does not follow from an entity having a sense of self that it must have a sense of self-preservation.
I can see how you could come to that conclusion, since using the words "true intelligence" might suggest that I am trying to back up my statements by claiming that counterexamples are not true examples at all (more or less). That is not the case, however; rather, I am trying to better define what AI means, to me at least.
When Deep Blue first defeated the reigning world chess champion, it was a feat that only a few years prior would have been hailed as the birth of artificial intelligence. Today we all recognize that it was not in fact a real AI, just clever programming meant to simulate intelligence. The definition we use for intelligence has shifted, and so too must the definition of AI.
As for the statement that a sense of self must include a sense of self-preservation, perhaps that is not a provable statement. As you have pointed out previously, a sample size of one is not sufficient to make generalizations. This portion of the debate moves into the realm of philosophy, an area in which I do not feel myself suited to debate (usually).
No!
Programs that already evolve were "told" to evolve, so a program only does what it was written to do.
Of course you can always attribute the state of a running program to the nature of the rules governing the program; so, too, can you attribute the state of a living system to the nature of the rules governing that system -- organic chemistry, the operation of the laws of physics, hell, even the design of God if you're into that sort of thing. A living system that evolves was "told" to evolve just as much as any program was, in the sense that inexorable rules imposed from without force it to do so.
The burden here, which you do not substantively engage with, is to identify a relevant difference in the nature of the rules governing these domains. The questions before you are:
1) Is there any real process that can take place in the biochemical domain that cannot be efficiently simulated or otherwise replicated in the computational domain?
2) Does the real process that you've identified in (1) play a necessary role in cognition? (As in, without this process, cognition would be impossible.)
If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI.
Have any of you actually taken an AI class? Have any of you coded any AI algorithms?
I've coded in LISP, learned some Prolog, coded some rule-based systems, and coded a neural network as well. Actually, one of my major projects was to try to code a basic artificial intelligence that could perform better than ELIZA. What I was working on was later superseded by XML database chat-bots.
...
Honestly, if any of you just crack open a book on AI and start some basic coding examples, you would laugh when you see how far off we are.
I'm not deliberately trying to offend you (though since you dropped the old "crack open a book" bomb I feel considerably less bad about it if it turns out that way) but the stuff you're describing here is rudimentary precisely because your level of knowledge is. You're describing introductory material. As groundbreaking as ELIZA was, chatbots aren't exactly the cutting edge of AI research anymore. It is as if you asked us to open a fifth-grade math book and, finding nothing in there about the Riemann hypothesis, declared attempts at solving it to be a waste of time.
Also, not that it matters because it was an ad lapidem argument to begin with, but your Arduino code is something I could have written when I was about 10 years old. It moves a servo to a fixed position! This is trivial! How could you consider that to be demonstrative of anything? Even if someone provided you with a perfect "distinguish good from evil" subroutine, it would not change the semantics of your trivial program. Moreover, our ignorance concerning the "distinguish good from evil" subroutine is precisely what safe AI research seeks to remedy, so if you really want that subroutine you should be kicking in your money alongside Elon Musk's. I could go on but, well, you're making a bad argument. Let's leave it there. Speaking of cracking open a book, I'd like to take this opportunity to advise everyone in this thread (well, really, everyone everywhere) to read Alan Turing's 1950 paper, Computing Machinery and Intelligence. (And TomCat, if they didn't assign this in your AI class, find a new class.) There are at least three reasons why you should:
It's one of the greatest papers at the intersection of science and philosophy ever written. Packed with insight, yet totally devoid of incomprehensible mathematical jargon or symbology, and easily readable to fluent English speakers.
It's the greatest paper on AI ever written. The computer scientist Scott Aaronson quips that 70% of all AI research done so far was done by Turing in 1950, and the remaining 30% by the plodding mortals that have followed since. If you want to talk about AI intelligently, well, the only way you can do that is after you've read it.
Many skeptical arguments about AI, not only in this thread but by eminent scientists and philosophers who should know better, were actually anticipated and answered by Turing in 1950.
And finally, since the topic of technological failure has been amply covered by several posters here, I'd like to counterbalance that by saying something about technological success.
Being an AI researcher has to be just about the worst job in the world if you're looking to be recognized for your achievements, because you can make what everyone agrees in advance is a touchdown, spike the ball, and do a victory dance, only to find that someone has moved the goalposts another hundred yards down the field. After the hundredth or so time this happens, I imagine it gets pretty frustrating.
First, it was chess. Turing himself suggested it as a benchmark, and everyone agreed that whatever process it was that undergirded good chess play qualified as thinking. Well, they made a computer that could do it better than any human (spike ball, victory dance?) -- and then suddenly it wasn't thinking anymore! (goalposts moved 100yd.)
Then it was mathematical proof. Of course from the beginning computers were used to assist in calculation, but they would find applications in generating new insights as well -- a computer resolved the Robbins Conjecture, an infamous problem that was so difficult that Tarski assigned it as a challenge problem to his best students. (Spike ball? Victory dance?) "Pshaw, it was just searching for consequences of the axioms," said the skeptics. (goalposts moved 100yd., and P.S., no *****, Sherlock. That's what mathematicians do.)
Then it was "creativity." Computers will never be creative. Well, what about when they start making music?Poetry?. (Spike ball? Victory dance?) "It was programmed to do those things!" (goalposts moved 100yd, and this answer is fallacious for reasons I've mentioned.)
Then games of imperfect information. Poker? Crushed. In fact, completely solved! Not only can computers bluff and read bluffs, they can do so in a provably optimal fashion -- they are necessarily as good or better than the best human. (Spike ball? Victory dance?)
Really I could go on like this for 50 pages, but I think that's enough examples to be going on. There seems to be a kind of cognitive bias in the skeptical reactions to these things -- call it the "Mommy, I'm Special Heuristic", or MISH for short. It's a variant on the same old belief that man is at the center of the universe, whether it manifests itself in an overtly spiritual way as in "machines don't have souls", or in the form of something like italofoca's belief in the specialness of human brain chemistry, or in some other way altogether. The MISH is a one-two punch of negative cognitive bias: it leads one to undue skepticism about potential achievements in computer science (how could a computer ever do that? Then it would be like me, but I'm special, Mommy!), and it leads one to dismiss these achievements ex post facto. (Whatever the computer is doing might look like thought, but it isn't really, because I'm special, Mommy!)
The important thing to notice is that the MISH has been wrong not once but every single time it's been put to a concrete test. Of course, one can go on rationalizing forever, but after so many failures isn't it time to consider abandoning the heuristic?
Anyhow. I'm not saying a strong AI singularity is going to happen in the near future, but the state of play in AI has been utterly misrepresented by the posts made so far. I don't think AI believers are delusional fools. They have plenty of circumstantial evidence for the feasibility of AI, and a track record of exponential technological and software improvements. Moreover, even if AI is not forthcoming, safe AI research has enormous follow-on benefits outside the field of AI -- an argument for basic morality that takes the form of a computer program or mathematical proof would be a profound breakthrough in moral philosophy.
Musk's millions aren't being wasted here. You could argue, quite correctly, that there are much better philanthropic causes he could have bolstered. But he also could have done much worse.
Musk's millions aren't being wasted. The thing to realize, though, is that it's a marketing stunt. Don't expect any real substance out of it.
AI that goes berserk and destroys the planet, Terminator-style, is something that is already captive in the imaginations of the common people. It makes good financial sense to play off that.
The thing to realize is that there is a difference between searching and actual intelligence. Therein lies a huge problem, and it is why we are laughably far away from any actual artificial intelligence.
You look at computers beating humans at poker and chess, appeal to authority by playing up Turing, and conclude that actual artificial intelligence isn't far off.
You see things like a robot that can learn to cook by watching YouTube, or a universal translator, and think these are major steps in the evolution of actual AI.
The problem is NONE of those activities address the central problem of AI.
Let's look at your poker example. How can it be that a program can crush poker in a PROVABLY OPTIMAL fashion? It's simple: because you're having the computer solve an optimization problem. We have numerical techniques to solve other optimization problems. It just so happens that people commonly play poker, so it appears more emotionally significant than it should that computers solve the poker optimization problem better than people do.
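(The flavor of optimization involved can be shown in miniature. The sketch below is regret matching on rock-paper-scissors, a toy relative of the regret-minimization methods that were used to solve limit poker; its average strategy converges to the optimal mixed strategy.)

```python
import random

# Regret matching on rock-paper-scissors: a toy relative of the
# regret-minimization methods used to solve limit poker. The average
# strategy over many rounds converges to the optimal (Nash) mixture.
PAYOFF = [[0, -1, 1],   # payoff[my_move][opp_move]: rock, paper, scissors
          [1, 0, -1],
          [-1, 1, 0]]

def current_strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1/3, 1/3, 1/3]

regrets = [0.0, 0.0, 0.0]
strategy_sum = [0.0, 0.0, 0.0]
for _ in range(100_000):
    probs = current_strategy(regrets)
    strategy_sum = [s + p for s, p in zip(strategy_sum, probs)]
    me, opp = random.choices(range(3), weights=probs, k=2)  # self-play
    for a in range(3):   # regret = what action a would have earned instead
        regrets[a] += PAYOFF[a][opp] - PAYOFF[me][opp]

total = sum(strategy_sum)
print([round(s / total, 3) for s in strategy_sum])  # ~[0.333, 0.333, 0.333]
```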
What about chess? A chess engine is a search algorithm. Opening sequences are preprogrammed for the first few moves because the search at the outset is too much computational weight to bear. Where the computer's power is insufficient to conduct the search, computers appear weak.
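To underline the point, the entire skeleton of a classical chess engine fits in a couple dozen lines; everything chess-specific hides in two stub functions. A minimal sketch (moves() and evaluate() are hypothetical placeholders):

```python
# Skeleton of a classical chess engine: depth-limited minimax with
# alpha-beta pruning. Everything chess-specific lives in the two stubs.

def moves(state):
    """Hypothetical stub: return the positions reachable in one move."""
    return []

def evaluate(state):
    """Hypothetical stub: static score of a position (+ is good for us)."""
    return 0

def search(state, depth, alpha=float("-inf"), beta=float("inf"),
           maximizing=True):
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for s in successors:
            best = max(best, search(s, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # prune: the opponent won't allow this line
                break
        return best
    best = float("inf")
    for s in successors:
        best = min(best, search(s, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

print(search("initial position", depth=6))  # 0 with the stubs above
```

The alpha-beta cutoff is worth noticing in light of the discussion of intuition later in the thread: a human master's "intuition" behaves like an extremely aggressive, learned version of this pruning.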
What about a universal translator? What about it? Put a bunch of dictionaries together and implement grammar-parsing rules on top of basic text analysis. Sounds like a relational database plus a parser to me.
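(A caricature of that reading, just to fix ideas; the lexicon and the single "grammar rule" are invented, and real translation systems are considerably more than this:)

```python
# Caricature of the "dictionaries + grammar rules" reading. The lexicon
# and the single "rule" are invented; real systems are far more than this.

LEXICON = {"the": "el", "cat": "gato", "eats": "come", "fish": "pescado"}

def apply_rules(tokens):
    # hypothetical toy rule: drop a repeated article before the object
    return [t for i, t in enumerate(tokens) if not (t == "el" and i > 0)]

def translate(sentence):
    tokens = [LEXICON.get(word, word) for word in sentence.lower().split()]
    return " ".join(apply_rules(tokens))

print(translate("The cat eats the fish"))  # -> "el gato come pescado"
```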
What about that robot that can learn to cook from YouTube?
Step 1: Download videos from YouTube and break them down frame by frame.
Step 2: Conduct image analysis on the frames to identify
a) ingredients
b) motion.
Step 3: Conduct sound analysis to extract textual information.
Step 4: Parse the data and generate a cooking sequence.
Step 5: Have the cooking robot perform cooking actions in accordance with its generated sequence.
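Written out as code, the proposed pipeline is just a fixed chain of specialized modules; every function body below is a hypothetical placeholder:

```python
# The five steps as a pipeline of stubs. Every function body here is a
# hypothetical placeholder; the point is that the architecture is a fixed
# chain of specialized modules, not anything resembling general intelligence.

def download_frames(url):              # Step 1
    return ["frame_0", "frame_1"]      # stand-in for decoded video frames

def analyze_frames(frames):            # Step 2: ingredients and motion
    return {"ingredients": ["egg"], "motions": ["whisk"]}

def transcribe_audio(url):             # Step 3: sound -> text
    return "whisk the egg"

def build_cooking_sequence(vision, transcript):   # Step 4
    return [("pick_up", "egg"), ("whisk", "egg")]

def execute(sequence):                 # Step 5: drive the robot
    for action, target in sequence:
        print(f"robot: {action}({target})")

url = "https://youtube.example/cooking-video"     # hypothetical URL
vision = analyze_frames(download_frames(url))
execute(build_cooking_sequence(vision, transcribe_audio(url)))
```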
Let's go back to the chess example so I can show you why we aren't close to AI.
Our computers are currently stronger than the world's best in Western Chess, and have been so for a number of years now.
Computers can search far deeper than a human can, and in fact have perfect memory. But no one claims that AI is nigh because computers can recall more digits of Pi than people can.
But what about Go? The most powerful supercomputers in the world cannot compete with even low-level professional Go players. Why not?
Because the search space is too big! There are about 2 × 10^170 legal Go positions, while there are only about 10^80 atoms in the observable universe.
When the problem becomes unsearchable, computers aren't so "smart" anymore.
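(The arithmetic is easy to check; the 10^80 figure is the standard rough estimate for atoms in the observable universe:)

```python
legal_go_positions = 2 * 10**170   # Tromp's count of legal positions, rounded
atoms_in_universe = 10**80         # standard rough estimate
print(legal_go_positions // atoms_in_universe)  # ~2 * 10^90 positions per atom
```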
My point here is that the examples of AI you gave are really just (1) searching and (2) performing math.
The central reason why I don't see any form of AI happening any time soon is that we have yet to address the central problem of AI: how do you give an AI any form of intuition or awareness of what it is doing? In fact, we don't even have good models of intelligence to build from. It's not as if psychology has a model of what intelligence is to hand AI researchers as a blueprint. (Actually, that's something I've been working on on the side -- a model of human intelligence that the field of AI might be able to use.)
That to me is the single largest reason why AI has been promising the moon for the past 50 years and has not delivered. The day AI researchers take a step towards building artificial intelligence with actual awareness of what it is doing is the day I will concur that it is only a matter of time.
But advances in processing power, in search algorithms, in relational databases, in grammar parsing, and in memory capacity are nothing new.
What we really need is a fundamental breakthrough about the nature of intelligence itself.
As much as I respect Turing, to me he has always been much more of a computing theorist than a philosopher.
He laid the groundwork with some initial thoughts, but technology moves on.
As for Musk? He can spend his money on what he wants. But I would liken this move to the time the CDC decided to issue guidance on what to do in the case of a zombie apocalypse. Obviously zombies don't exist. But they're not going to let a little thing like reality prevent them from promoting an idea while it's captive in the mind of the populace.
Let's go back to the chess example so I can show you why we aren't close to AI.
Our computers are currently stronger than the world's best in Western Chess, and have been so for a number of years now.
Computers can search far deeper than a human can, and in fact have perfect memory. But no one claims that AI is nigh because computers can recall more digits of Pi than people can.
But what about Go? The most powerful supercomputers in the world cannot compete with even low-level professional Go players. Why not?
Because the search space is too big! There are about 2 × 10^170 legal Go positions, while there are only about 10^80 atoms in the observable universe.
When the problem becomes unsearchable, computers aren't so "smart" anymore.
My point here is that the examples of AI you gave are really just (1) searching and (2) performing math.
The central reason why I don't see any form of AI happening any time soon is that we have yet to address the central problem of AI: how do you give an AI any form of intuition or awareness of what it is doing? In fact, we don't even have good models of intelligence to build from. It's not as if psychology has a model of what intelligence is to hand AI researchers as a blueprint. (Actually, that's something I've been working on on the side -- a model of human intelligence that the field of AI might be able to use.)
That to me is the single largest reason why AI has been promising the moon for the past 50 years and has not delivered. The day AI researchers take a step towards building artificial intelligence with actual awareness of what it is doing is the day I will concur that it is only a matter of time.
What is "awareness" and how does it help either a human or a non-human computer play go? Human go masters (and chessmasters) don't arrive at the right move through monkish introspection. They use a search algorithm. The advantage they have over current computers is a way to reliably and massively prune the possibility trees so they only deeply consider a very few options. You used the word "intuition", but this is precisely what intuition is. It's not something qualitatively different than a computer algorithm - it's a refined algorithm.
Which, in the larger picture, is precisely why getting a computer to play a game will never, in its own right, be a good indicator for artificial intelligence. Chess and go are highly specialized computational tasks. Even if we get a computer to play these games exactly the same way a human does - well, I don't mean to demean this accomplishment too much, because it will mean we've learned a lot both about computers and ourselves, but fundamentally when we play these games we're training ourselves to be computers, not the other way around, so it will not be the final breakthrough to have AIs that can imitate us imitating them.
I think the functional benchmark for AI will not be "awareness" which, as you allude, is a hard concept even to pin down. Rather, it will be a move from specialization to generalization. And by generalization, of course, I don't mean a universal Turing machine that can be put to any task in principle. We already have those. I mean a machine that, like a human, you can drop into any of a very wide variety of situations and it will independently figure out a rational course of action based on core imperatives. We can call this figuring-out process "intuition" or even "awareness", but I think the more relevant concrete computational tasks involve flexible input and self-modification.
I think the functional benchmark for AI will not be "awareness" which, as you allude, is a hard concept even to pin down. Rather, it will be a move from specialization to generalization. And by generalization, of course, I don't mean a universal Turing machine that can be put to any task in principle. We already have those. I mean a machine that, like a human, you can drop into any of a very wide variety of situations and it will independently figure out a rational course of action based on core imperatives. We can call this figuring-out process "intuition" or even "awareness", but I think the more relevant concrete computational tasks involve flexible input and self-modification.
So, a computer that can learn how to learn and then demonstrate flexibility in applying this "learning of learning" to a number of situations? Do I have that right?
So, a computer that can learn how to learn and then demonstrate flexibility in applying this "learning of learning" to a number of situations? Do I have that right?
It's how we do it.
That's even assuming it could actually do anything. If it's stuck inside a computer with no internet connection and no link to the outside world, then it has no ability to influence anything. Then there's the possibility that it doesn't WANT anything to do with the outside world.