So recently, private space agency boss Elon Musk revealed that he is donating $10M to the Future of Life Institute [thanks Tiax], an organization researching AI safety and possible preventative measures we might take against it. While the man is free to do whatever he desires with his money, I think the donation is both wasted and sends the wrong message.
My reasons for this are mostly twofold. Firstly: I don't understand this obsession with the idea that AI will try to do humans harm. The claimants often say something like "robots have no morality, they will operate purely on logic", but this is an assumption about an as-yet fictive technology, and it doesn't follow that an AI would try to harm humans. These arguments often liken the AI to a sociopath: incapable of empathy. While that may be true, the comparison is inherently flawed: a being incapable of having emotions might indeed be incapable of empathy, but a sociopath still has emotions, which I would argue is what leads to the higher rates of criminality among such people. After all, why would you steal if you don't covet, or hurt people if you don't get angry?
So, since an emotionless AI would not have wants of its own making (since wanting implies the urge to do something), its actions would be controlled by the directives built into it by its programmers. So if AI is completely logical, the danger does not come from the AI, but from the people designing it.
But what do you think? Are you worried about AIs, and why? What do you think should be done to prevent possible damage?
Rationale: When was the last time you saw a computer program that didn't crash, a robot that didn't break down, or any machine that worked exactly the way it was supposed to?
I rest my case.
Neither transhumanism nor the singularity are to be taken seriously because technology is crap.
Worrying about AI could not be a more fruitless endeavor. I think anyone who codes would understand that the computer does whatever you tell it to. When you're trying to address concerns that have no basis in practical reality, it's nothing more than pure speculative fantasy.
It's honestly on the level of: Do you think Cyclops's beams could cut through adamantium?
Cyclops is a mutant in fiction. He doesn't exist. Neither does our AI, save for extremely basic rule-based logic systems, and neural networks that currently do nothing more than some dimensional analysis. We have nothing approaching any kind of consciousness or emotions. So worrying that these systems lack empathy or human morality is just a waste of time, in my opinion.
I could copy-paste an essay of sorts I wrote about this exact subject and this exact debate, but it would span quite a bit. I'll just summarize my views instead.
I'm in support of us making a human-equaling and potentially human-surpassing AI because it could be directly helpful to advancing our technology, organizing our societies better, and increasing our survivability as a species in many ways. Transhumanism is something it could also help open the doors to. Contrary to Highroller's conclusion, I'd argue the singularity and transhumanism are to be taken seriously - as a future likelihood to extensively think out and prepare for, but of course I agree we are far from both at this point in time.
That said, it is feasible in theory, but not anytime soon, because such a creation, as far as I've figured, will depend on breakthroughs in neuroscience and processor design. Quantum processors, maybe, if they're possible. An easy path (and by 'easy' I mean in an extremely loose sense: based on something that exists instead of designed totally from scratch, ground up) could be to try to emulate the human brain's functionality heavily, in machine form. Perhaps something that runs on a hardware system similar to the neural networks humans have. Learning, pattern recognition, emotions, social intelligence, self-'awareness' - perhaps the core program built like a tree, with 'survival' on top and every other gigantic algorithm below (such as a 'curiosity' algorithm), each a node serving the purpose of the top of the tree. Design the AI such that it is in its own interest to learn, design improvements for itself, and maybe help invent better technology for us with the data given to it, in order to further its own survival programming. That would require programming work of a quality and quantity surpassing by an order of magnitude the complexity of every single program humans have ever written for a computer. The engineering work just to support that kind of programming? Probably unprecedented in complexity. It would be a gamble for it to work at all, and a huge expense and effort to accomplish.
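Just to give the shape of that idea, here's a bare skeleton of the sort of 'goal tree' I'm picturing (C++; every name and number is invented purely for illustration - this is not a real architecture):

#include <string>
#include <vector>

// Skeleton of the "goal tree": a root drive ("survival") whose child
// subsystems exist only to serve it. Purely illustrative.
struct Drive {
    std::string name;
    std::vector<Drive*> subgoals; // subordinate drives serving this one
    double urgency;               // how strongly this drive bids for action
};

int main() {
    Drive curiosity{"curiosity", {}, 0.2};
    Drive self_repair{"self-repair", {}, 0.1};
    Drive survival{"survival", {&curiosity, &self_repair}, 1.0};
    // A real control loop would act on whichever subgoal currently best
    // serves the root; here the tree merely exists as data.
    (void)survival;
}

The real thing would be unimaginably more complicated, of course - this only shows the tree shape.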
In addition to that... I have thought quite a bit about whether an AI of the specific level I've described could be a threat to us of its own intent. I imagine not, if we just design it right. If it evolves and learns its way to deciding it wants to 'kill all humans', we just turn it off and identify and fix the problem. But we humans could easily be at fault for misusing such a technology. It would have to be safeguarded by the most rational among us, with extensive contingencies and safeguards put in place - likely not to contain it, but to prevent the many ill-intentioned humans around the world from copying and misusing the technology. I think, given enough preparation for security, a responsible, rational coalition in humanity should seek to design such a thing. It would be a sad thing to leave that technology's potential untapped just because some of humanity isn't responsible enough to have it in their hands. It must be kept out of reach of those who would misuse it, and built by those who would use it wisely and carefully for all of the world's benefit.
Still, a gamble of an effort but it's worth it to keep trying, I'd say.
That said, it is feasible in theory, but not anytime soon, because such a creation, as far as I've figured, will depend on breakthroughs in neuroscience and processor design. Quantum processors, maybe, if they're possible. An easy path (and by 'easy' I mean in an extremely loose sense: based on something that exists instead of designed totally from scratch, ground up) could be to try to emulate the human brain's functionality heavily, in machine form. Perhaps something that runs on a hardware system similar to the neural networks humans have. Learning, pattern recognition, emotions, social intelligence, self-'awareness' - perhaps the core program built like a tree, with 'survival' on top and every other gigantic algorithm below (such as a 'curiosity' algorithm), each a node serving the purpose of the top of the tree. Design the AI such that it is in its own interest to learn, design improvements for itself, and maybe help invent better technology for us with the data given to it, in order to further its own survival programming. That would require programming work of a quality and quantity surpassing by an order of magnitude the complexity of every single program humans have ever written for a computer. The engineering work just to support that kind of programming? Probably unprecedented in complexity. It would be a gamble for it to work at all, and a huge expense and effort to accomplish.
We already have artificial neural networks that are capable of machine learning and "training." They're used extensively in object recognition and intelligent data analysis. For example, Google Street View uses them. They're also the basis of self-driving cars, from the real deal to hobbyist models.
It's not unreasonable to think we could achieve human-like intelligence with silicon-based computing power. It's probably unnecessary to use quantum computing, since modern computers are already significantly faster than human brains. This issue is one of organization, not processing speed; the latter is what quantum computing is hoping to improve.
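To give a toy picture of what that "training" amounts to mechanically, here's a single artificial neuron learning the AND function by repeated small weight nudges (plain C++, purely illustrative - real object-recognition networks stack huge numbers of these units, but the predict/measure/adjust loop is the same idea):

#include <cmath>
#include <cstdio>

// Toy illustration: one artificial "neuron" learns the AND function.
// The loop is the essence of "training": predict, measure the error,
// and nudge each weight in the direction that reduces it.
int main() {
    double w1 = 0.0, w2 = 0.0, bias = 0.0; // parameters to be learned
    const double in[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double want[4]  = {0, 0, 0, 1};  // the AND truth table
    const double rate = 0.5;               // learning rate

    for (int epoch = 0; epoch < 5000; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            double z   = w1*in[i][0] + w2*in[i][1] + bias;
            double out = 1.0 / (1.0 + std::exp(-z));          // sigmoid activation
            double g   = (want[i] - out) * out * (1.0 - out); // error gradient
            w1   += rate * g * in[i][0];
            w2   += rate * g * in[i][1];
            bias += rate * g;
        }
    }
    for (int i = 0; i < 4; ++i) {
        double z = w1*in[i][0] + w2*in[i][1] + bias;
        printf("%g AND %g -> %.3f\n", in[i][0], in[i][1], 1.0/(1.0+std::exp(-z)));
    }
}

After training, the outputs sit near 0, 0, 0, 1 - the network "learned" AND without anyone writing an AND rule into it.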
We already have artificial neural networks that are capable of machine learning and "training." They're used extensively in object recognition and intelligent data analysis. For example, Google Street View uses them. They're also the basis of self-driving cars, from the real deal to hobbyist models.
It's not unreasonable to think we could achieve human-like intelligence with silicon-based computing power. It's probably unnecessary to use quantum computing, since modern computers are already significantly faster than human brains. This issue is one of organization, not processing speed; the latter is what quantum computing is hoping to improve.
Fascinating. It seems I am happily shown wrong on some of the things I thought still held true about computers vs. humans, and that the technology is in fact getting there a bit more quickly than I thought. So there is a computer out there with more processing power than me. Makes me feel a bit smaller in the grand scheme of things. One thing I do envy of computers as well: their data storage. It seems to me the human brain is heavily flawed in that I forget so many things and can't recall my days in second-by-second, crystal-clear detail. But curiously, it does seem like I always retain memory of what is useful and important - the little details - such that I don't have to retain perfect copies of everything I see to remember it accurately. I wonder just how our brains manage to throw away so much information but still get the fundamentally useful pieces of what is remembered right. It also raises the question: how long could a human brain age before it starts having to drop the most important bits of information retained in the past to make room for new memory? Or are we constantly doing that already, without really being aware of it?
So, as far as those artificial neural networks go, how far away would you say we are from an AI equaling 'human-awareness', if that can even reasonably be speculated upon? Might you elaborate more on the 'issue of organization'?
The problem isn't that computers will become "sociopathic" or anything like you see in Terminator, it's that, at the point of the technological singularity, it's impossible to predict what will happen. And that's the whole point. I'm not sure how they could prepare for an event that is completely unknowable. Hopefully this AI-countermeasure group isn't just developing EMP rifles or something.
I do think the technological singularity is something we should be concerned about, but I'm not sure it's our most pressing issue at this time.
I'm in support of us making a human-equaling and potentially human-surpassing AI because it could be directly helpful to advancing our technology, organizing our societies better, and increasing our survivability as a species in many ways.
And I am in support of us replacing cars with unicorns as a means of transportation, as the magic of unicorns has been demonstrated to be completely natural, energy efficient, and generates rainbows as a waste product, which are environmentally-friendly.
And I am in support of us replacing cars with unicorns as a means of transportation, as the magic of unicorns has been demonstrated to be completely natural, energy efficient, and generates rainbows as a waste product, which are environmentally-friendly.
Replacing cars with unicorns would eliminate thousands of U.S. manufacturing jobs. In addition, recent studies have shown that rainbows may not be as environmentally friendly as previously believed, as water vapor is a greenhouse gas - indeed, there's over ten times more water vapor than carbon dioxide in the atmosphere. And finally, the virginity requirement is nothing less than a heavy-handed attempt to police women's sexual expression, ****-shaming sexually active women as "impure".
I feel like this argument is a little glib. Sure, computers crash all the time, but I'm not sure that should really alleviate any fears someone might have about AI or transhumanism, etc.
If you're going to fear something out of science fiction, I would think those damn Xenomorphs would be more worth your while.
The fact of the matter is we can't write a program that does ONE thing, or build a machine that does ONE thing, without it breaking down all the time. Exactly how are we going to build a functional human brain?
Seriously, am I the only one who sees this as thunderously stupid? Some dumbass donated ten million dollars to research the threat level of something that doesn't exist and may never exist. Meanwhile, the world is tormented by REAL intelligences that raise armies and are actually posing a threat to humanity, as well as things that are neither robots nor intelligences that kill droves of people like heart disease, strokes, cancer, famine, and poverty.
That's like cleaning your house for mold while it's on fire.
The fact of the matter is we can't write a program that does ONE thing, or build a machine that does ONE thing, without it breaking down all the time. Exactly how are we going to build a functional human brain?
Sure we can. I've had my phone for over a year and it's never crashed. If it were, say, a self-aware machine capable of killing me, it could have done so many times over without crashing once.
Furthermore, to add on to Synalon Etuul's point, it's not like we humans are perfectly engineered, either. But we've dominated every other lifeform on Earth. Computers don't need to be perfect to eclipse us; they just need to be more perfect than us. I can easily see this happening.
And regarding the "a program only does what you tell it to" line: that only holds until you start writing programs that can evolve, which we are already doing. I think this is the relatively new development that has a lot of people concerned. Of course it's not a danger right now, but that doesn't mean that it isn't an issue that it would hurt us to start thinking about.
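To make "programs that evolve" concrete, here's a bare-bones sketch in the spirit of Dawkins' old 'weasel' demonstration (C++; the target string and alphabet are arbitrary choices for illustration). The programmer writes only the scoring rule - the answer itself is evolved, never written down:

#include <cstdio>
#include <cstdlib>
#include <string>

// A (1+1) evolutionary loop: mutate a candidate, keep it if it scores no worse.
static int score(const std::string& s, const std::string& target) {
    int hits = 0;
    for (size_t i = 0; i < s.size(); ++i) hits += (s[i] == target[i]);
    return hits;
}

int main() {
    const std::string charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";
    const std::string target  = "HELLO WORLD"; // the fitness criterion
    std::string best(target.size(), 'A');
    for (int gen = 0; score(best, target) < (int)target.size(); ++gen) {
        std::string child = best;
        child[rand() % child.size()] = charset[rand() % charset.size()]; // mutate
        if (score(child, target) >= score(best, target)) best = child;   // select
        if (gen % 500 == 0) printf("gen %d: %s\n", gen, best.c_str());
    }
    printf("evolved: %s\n", best.c_str());
}

Trivial, yes - but even here, the intermediate programs that appear along the way were never typed in by anyone.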
Rationale: When was the last time you saw a computer program that didn't crash, a robot that didn't break down, or any machine that worked exactly the way it was supposed to?
I rest my case.
If a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
-- Alan Turing
If you require an AI to be an infallible machine, then there will never be an AI. There never will be one because there can't be one; it's logically impossible. You say nothing interesting when you say this. No computer scientist ever has or would propose such a thing -- the first computer scientist that there ever was already knew it was impossible!
So. AI, for a reasonable definition of that term and to the extent it can exist, will necessarily have failure modes. It makes perfect sense, then, that someone interested in realistic rather than infallible AI would pay special attention to avoiding particular failure modes -- like, say, playing a game of global thermonuclear war with real missiles.
And regarding the "a program only does what you tell it to" line: that only holds until you start writing programs that can evolve, which we are already doing. I think this is the relatively new development that has a lot of people concerned. Of course it's not a danger right now, but that doesn't mean that it isn't an issue that it would hurt us to start thinking about.
No!
Programs that already evolve were "told" to evolve, so a program only does what it was written to do.
The gap between a program that learns how to associate new data and a program that rewrites itself into a new, different program alien to its former self is huge.
One way to think about this is that knowledge has contingency. A dog cannot learn geometry because geometry is not contingent with a dog's brain functions. An AI's contingency is even better defined: it's the lines the programmer wrote plus all the possible extra lines that can come out of the learning algorithm. A program meant to recognize photos will not suddenly start stealing our bank accounts or creating false news. The reason is that the lines needed to perform those actions are not contingent in the photo program - for this program, bank accounts and news are alien, things that do not exist.
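To picture this contingency in code: a toy "photo classifier" (C++, purely illustrative) whose entire output universe is fixed in advance. Training can change the weights however it likes, but no input can ever make it emit anything outside its defined labels - in its world, bank accounts do not exist:

#include <cstdio>

// The output universe is fixed at compile time. No amount of "learning"
// (reshuffling the weights) can produce a label outside {CAT, DOG, OTHER}.
enum Label { CAT, DOG, OTHER };

Label classify(const float* pixels, int n, const float* weights) {
    float s = 0;
    for (int i = 0; i < n; ++i) s += pixels[i] * weights[i]; // a linear score
    if (s > 1.0f) return CAT;
    if (s > 0.0f) return DOG;
    return OTHER;
}

int main() {
    float pixels[3]  = {0.9f, 0.1f, 0.4f};
    float weights[3] = {1.2f, -0.3f, 0.5f}; // pretend these were "learned"
    printf("label = %d\n", (int)classify(pixels, 3, weights));
}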
Also, an artificial human brain is impossible, because human brains are organic [and that's very likely a core aspect of their peculiar functioning] and our capacity for creating "new" organic things is limited. We can clone a brain, and we can alter it using the thing's own rules, but we cannot create new contingencies. We will be creating completely new and unique life forms before we even start on new brains. And note, this new brain would have to be an unprecedented design, because it would need both the computing power of wired brains and the self-awareness and creativity of organic brains - and there's still a chance those two things contradict each other and end up working just like a human using a computer, or like two separate brains.
And I am in support of us replacing cars with unicorns as a means of transportation, as the magic of unicorns has been demonstrated to be completely natural, energy efficient, and generates rainbows as a waste product, which are environmentally-friendly.
I was specifically careful in my wording to try to prevent, well, exactly that kind of response. A keyword I used was 'could'. Maybe a better word would have been 'might'. An AI of the level we speak of might help us unlock all those possibilities I mentioned. That is why I think it's worth pursuing: not because I absolutely know it will, but because the potential is there if our species tries. We'll probably fail multiple times along the way, but it's not as impossible and fantastical as you're trying to make it look. It's not like a unicorn. The key reason I think it is possible is that our technology has been progressing particularly quickly in the last century, and that the human brain is something nature has built billions of times. If a careless force with no will, like nature, led to us, maybe a careful, willed force like us could make something similar and put it to good use?
If you're going to fear something out of science fiction, I would think those damn Xenomorphs would be more worth your while.
The fact of the matter is we can't write a program that does ONE thing, or build a machine that does ONE thing, without it breaking down all the time. Exactly how are we going to build a functional human brain?
Seriously, am I the only one who sees this as thunderously stupid? Some dumbass donated ten million dollars to research the threat level of something that doesn't exist and may never exist. Meanwhile, the world is tormented by REAL intelligences that raise armies and are actually posing a threat to humanity, as well as things that are neither robots nor intelligences that kill droves of people like heart disease, strokes, cancer, famine, and poverty.
That's like cleaning your house for mold while it's on fire.
A fool and his money are soon parted.
As far as breaking down... humans break down too. Our bodies are just constantly fixing themselves; a full replacement of all of our cells happens every 7-10 years. We even expect far, far more mistakes and problems out of our fellow humans than out of the computers we have now. And yet the human race goes on. People have survived extremely traumatic brain injuries and still retained self-awareness. Why is fixing the hypothetical AI as things break such a problem? It could still be made to work, and when it doesn't, we just fix it so it starts working again. Fixing it as we go along would probably be way easier than the still-risky procedure of brain surgery, since it would be a mechanically-based machine. Seems a non-issue to me.
I figure it will just take a heck of a lot of work to make a machine that can 'think' like a human. It could even make errors like a human and still be useful despite being imperfect.
Truth be told, I have actually gone down that line of thought about AI you've shared above, in my own way. There are over 7 billion of us humans here right now, and we accomplish complex, difficult tasks that alone we lack the ability to do. In a sense, teams of engineers, doctors, scientists, and inventors are uniting their collective 'processing power' to make things that surpass what any single human could develop. So why even try to build an AI when there are 7 billion of us, pretty much an army of mobile self-aware supercomputers? Mainly, my thinking is it would be worth it because it could be designed to be more task-oriented and just get things done better and quicker than a human. Humans don't like being slaves, and we need frequent rest and time to do lots of unproductive, pointless stuff, like lurking about these forums.
It could also turn out that evolution has already sculpted a computer system better than anything we could ever design. To achieve goals similar to what we'd use an AI for, we could just look at improving the methods by which we all communicate with each other, perhaps by looking into ideas for interfacing our brains with each other.
As far as the 'threat' of AI. I already agree with you there. There is no point in throwing all that money at researching contingencies for AI. Rather, the contingencies that should be being prepared for when/if the technology starts showing more promise should be for dealing with human beings who have malicious intentions for AI. Any problems the AI itself could possibly create on its own are really easy to prepare for from a programming perspective.
Also, an artificial human brain is impossible, because human brains are organic [and that's very likely a core aspect of their peculiar functioning] and our capacity for creating "new" organic things is limited. We can clone a brain, and we can alter it using the thing's own rules, but we cannot create new contingencies. We will be creating completely new and unique life forms before we even start on new brains. And note, this new brain would have to be an unprecedented design, because it would need both the computing power of wired brains and the self-awareness and creativity of organic brains - and there's still a chance those two things contradict each other and end up working just like a human using a computer, or like two separate brains.
Why do you say this? What does something organic provide that something artificial cannot?
The problem of AI is not inherent to the AI specifically. The problem arises when some dumb-ass asks the AI to solve the dilemma of being human. The only logical conclusion that the AI can come to is that humans are the problem. No humans, no problems. Isaac Asimov was a freaking genius; I think "I, Robot" is a concise glimpse into "that" future. For every problem that we try to solve, we invent ten new problems (probably an exaggeration, for sure, but still...)
We as humans just exist; AI is a tool we've invented or will invent. One that's really not "necessary" to our survival, yet whose perfection could spell our doom. One of the problems of being human is that the temptation to open Pandora's Box is greater than the fear of the result.
No!
Programs that already evolve were "told" to evolve, so a program only does what it was written to do.
The gap between a program that learns how to associate new data and a program that rewrites itself into a new, different program alien to its former self is huge.
One way to think about this is that knowledge has contingency. A dog cannot learn geometry because geometry is not contingent with a dog's brain functions. An AI's contingency is even better defined: it's the lines the programmer wrote plus all the possible extra lines that can come out of the learning algorithm. A program meant to recognize photos will not suddenly start stealing our bank accounts or creating false news. The reason is that the lines needed to perform those actions are not contingent in the photo program - for this program, bank accounts and news are alien, things that do not exist.
I think you're thinking too narrowly. What if, say, someone created a robot designed to assist people? Like your typical sci-fi robot that walks, talks, looks like a human. Such a robot would be programmed to be adaptable, to at least some extent, because it'd be impossible to program it for every single thing it would come across. What, then, would prevent it from adapting in the "wrong" way, possibly even in a dangerous way?
I'm genuinely asking, coming from ignorance here (I don't know a thing about programming). If it's a matter of the programming building in safeguards, I don't like that one bit. I've seen software fail in way too many basic ways to trust a programmer to guarantee my safety in that way. If it's a matter of hardcoding something like Asimov's Three Laws, I'd feel a little better...but even then, you just have to read I, Robot to realize that the Laws aren't foolproof.
The problem of AI is not inherent to the AI specifically. The problem arises when some dumb-ass asks the AI to solve the dilemma of being human. The only logical conclusion that the AI can come to is that humans are the problem. No humans, no problems.
Why? You can say this with as much certainty as you like, but I see no backup for your claims whatsoever.
Isaac Asimov was a freaking genius; I think "I, Robot" is a concise glimpse into "that" future. For every problem that we try to solve, we invent ten new problems (probably an exaggeration, for sure, but still...)
Asimov was a great writer. However, I'm not sure you have actually read I, Robot. Although they do have some minor issues with the androids in those books, they are a huge boon to society.
Also, an artificial human brain is impossible, because human brains are organic [and that's very likely a core aspect of their peculiar functioning] and our capacity for creating "new" organic things is limited. We can clone a brain, and we can alter it using the thing's own rules, but we cannot create new contingencies. We will be creating completely new and unique life forms before we even start on new brains. And note, this new brain would have to be an unprecedented design, because it would need both the computing power of wired brains and the self-awareness and creativity of organic brains - and there's still a chance those two things contradict each other and end up working just like a human using a computer, or like two separate brains.
Why do you say this? What does something organic provide that something artificial cannot?
Most specialists believe that human thought is the result of BIOchemical reactions. There's a distinction between organic and inorganic chemistry; they don't share the same reactions. It's the same reason why we cannot make a machine that feeds on nutrients or ferments beer like microorganisms do.
I think you're thinking too narrowly. What if, say, someone created a robot designed to assist people? Like your typical sci-fi robot that walks, talks, looks like a human. Such a robot would be programmed to be adaptable, to at least some extent, because it'd be impossible to program it for every single thing it would come across. What, then, would prevent it from adapting in the "wrong" way, possibly even in a dangerous way?
This robot would be able to rewrite its program, but all the possible rewritten lines are already defined, indirectly, by the learning algorithm. It means the robot's intelligence is bounded.
Of course the machine can go wrong and attack a human for whatever reason. But that would be no different from a mechanical defect on a plane, crashing it and killing thousands. What it cannot do is start thinking like humans - hacking our leaders' e-mail and sending messages that cause a nuclear war, or hiding itself in a secret factory and creating a robot army. Mostly because the programs that would allow such things use programming lines that are non-contingent in its learning space.
The error here comes from thinking that just because an AI can learn something, it can learn everything. That is not how learning algorithms work right now, as far as I know.
I agree with Highroller. Worrying about AI dangers in the present is literally worrying about a menace that does not exist. Can it exist in the future? Hell, who am I to dictate the limits of technology? But worrying about possible future dangers is futile. It's like asking to ban all MTG cards because it could be that, in the future, MTG players become a criminal-terrorist organization causing civil war in several countries.
Most specialists believe that human thought is the result of BIOchemical reactions. There's a distinction between organic and inorganic chemistry; they don't share the same reactions. It's the same reason why we cannot make a machine that feeds on nutrients or ferments beer like microorganisms do.
Human muscle movement is the result of biochemical reactions, but we can also make a non-biochemical machine that moves. Just saying that something is different does not mean it cannot achieve the same result. You have to explain how the difference is relevant, or else you're committing the fallacy of special pleading.
This robot would be able to rewrite its program, but all the possible rewritten lines are already defined, indirectly, by the learning algorithm. It means the robot's intelligence is bounded.
If the robot can modify the learning algorithm, it begins a chaotic feedback cycle - what Douglas Hofstadter calls a "strange loop". This is more or less how humans think. You're right that we shouldn't be any more worried that a robot will go murderous on us than we'd be worried that a random baby will go murderous on us, especially since human cognition also shows it is possible to have imperatives like empathy that operate below the conscious and modifiable level. Asimov's Three Laws, in short, seem feasible. But you vastly overstate the intrinsic bounds of the process. Through multiple iterations of self-modification, the learning algorithm can indeed arrive at programs that were once, as you term it, "non-contingent". A strange loop is by its very nature unpredictable - it's the best explanation we have for what this whole "free will" thing is.
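As a toy flavor of such a loop (nothing close to Hofstadter's full notion - every detail here is invented for illustration): a learner whose update rule is itself rewritten whenever progress stalls, so the rule it ends up following was never explicitly written down:

#include <cmath>
#include <cstdio>

// The inner loop applies the base learning rule (step toward the target).
// The outer loop modifies the rule itself (halving the step) when it stalls.
// After a few meta-steps, the effective rule is one nobody wrote down.
int main() {
    double x = 0.0, step = 1.0; // the model and its (modifiable) rule
    const double target = 3.14159;
    for (int meta = 0; meta < 10; ++meta) {
        double before = std::fabs(target - x);
        for (int i = 0; i < 20; ++i)
            x += (x < target) ? step : -step; // the base learning rule
        double after = std::fabs(target - x);
        if (after >= before) step *= 0.5;     // meta-rule: rewrite the base rule
        printf("meta %d: x=%.5f step=%.5f\n", meta, x, step);
    }
}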
I don't understand why everyone here but Highroller seems to be missing the major point. AI of the level we're talking about here hasn't been invented yet, and it isn't close to being invented.
Have any of you actually taken an AI class? Have any of you coded any AI algorithms?
I've coded in LISP, learned some Prolog, coded some rule-based systems, and coded a neural network as well. Actually, one of my major projects was to try to code a basic artificial intelligence that could perform better than ELIZA. What I was working on was later superseded by XML database chat-bots.
At any rate, it's just grossly immature to the point of it being a waste of time to discuss this. The tech hasn't been implemented yet. Nothing close to the tech has been implemented yet.
Don't talk about neural networks as something that will just grow and take over the world. It's Sci-Fi, I assure you.
Neural networks have been around for FIFTY years without any real progress. It wasn't until recently - like, a few years ago - that some promising new developments were made. But for those of you in it, the promise of neural networks has been touted for decades, and it has failed to deliver.
Honestly, if any of you just crack open a book on AI and start some basic coding examples, you would laugh when you see how far off we are.
For now, I'll admit that we have made progress in basic pattern recognition and image analysis. But it just doesn't make sense to talk about ensuring morality in AI before anything resembling AI is built.
In fact, to me, the only reason you would want to donate money and talk about Skynet-like possibilities is as a pure marketing stunt, because it encapsulates a kind of fear which intrigues people.
The reality is our "AI" for now is nothing more than:
1. Searching of large related databases.
2. Basic pattern recognition in very limited applications. (Think CAPTCHAS or Siri)
3. Solving of mathematical equations through searches.
4. Advanced Text Parsing.
I picked these up from wiki, which basically encapsulates how I feel about the whole endeavor:
"Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p. 82)
"Neural networks, for instance, are in the dock not only because they have been hyped to high heaven..."
We can't ensure morality or emotions in our AI, because the computational foundations where such high-level concepts could take hold haven't been built yet.
To know the difference between right and wrong, you have to have some working physical knowledge of the world. We don't have AI with working physical knowledge of the world. We don't have any machines with working physical knowledge of the world. What we have are limited programs and computers which solve equations without any kind of comprehension or awareness of what they are doing.
Yesterday I built a working robotic arm with an Arduino. Let's all debate how I can ensure my robotic arm knows right from wrong and doesn't wind up killing me out of spite for me, its creator.
I'll tell you all what I did so you can see why this debate is ridiculous as hell.
My robotic arm uses linear actuators and servos. When I send a pulse of 1500us at 7.4 volts to my servo, it swings to its base position. When I input a signal of 1200us, it swings to -90 degrees. When I input a signal of 1800us, it swings to positive 90 degrees.
I hope you all can tell me what code I should put into my arduino controller to give my robotic arm knowledge of right from wrong.
No need to waste further brain cells debating this. I'm giving you all the chance to tell me the exact code I should put into my controller to teach the robotic arm right from wrong so it won't hurt people in the future.
Go on. I'll upload the code today. This is what I have so far.
#include <Servo.h>
Servo primary_servo; // create servo object to control a servo
Servo secondary_servo; // a maximum of eight servo objects can be created
int pos = 0; // variable to store the servo position
void setup() {
  primary_servo.attach(9); // pins 9/10 are an assumption; match your wiring
  secondary_servo.attach(10);
  primary_servo.writeMicroseconds(1500); // 1500us pulse = base position
}
void loop() {}
Most specialists believe that human thought is the result of BIOchemical reactions. There's a distinction between organic and inorganic chemistry; they don't share the same reactions. It's the same reason why we cannot make a machine that feeds on nutrients or ferments beer like microorganisms do.
That doesn't really answer anything. There are a huge number of radically different models of computation, but the set of things they can compute ends up being the same. Why should I suspect that using different chemicals breaks that trend?
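For a trivial taste of that trend (an illustrative toy, not a proof): arithmetic rebuilt from nothing but "zero" and "plus one" - a radically different model from the CPU's native adder circuits - computes exactly the same function:

#include <cstdio>

// Peano-flavored arithmetic: addition and multiplication defined using only
// "is it zero?" and "plus one", then checked against the hardware's versions.
unsigned add(unsigned a, unsigned b) { return b == 0 ? a : add(a, b - 1) + 1; }
unsigned mul(unsigned a, unsigned b) { return b == 0 ? 0 : add(mul(a, b - 1), a); }

int main() {
    for (unsigned a = 0; a < 10; ++a)
        for (unsigned b = 0; b < 10; ++b)
            if (mul(a, b) != a * b) { printf("mismatch!\n"); return 1; }
    printf("Peano-style multiplication agrees with native multiplication on 0..9\n");
}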
Neural networks have been around for FIFTY years without any real progress. It wasn't until recently - like, a few years ago - that some promising new developments were made. But for those of you in it, the promise of neural networks has been touted for decades, and it has failed to deliver.
In fairness, that can be said about a lot of things, because it turns out that a lot of hard problems require a huge amount of data to have a chance at solving. Machine translation research started up around the same time as neural network research. Both ran into the problem that computers of the time were vastly underpowered for the problems they were trying to tackle. Both fields stagnated for decades, and only recently has machine translation started making real progress. I'd be cautious about interpreting that as meaning that fluent machine translation is a pipe dream for the distant future, just as I'm not sure the history of neural networks should be interpreted as being a flop.
At any rate, it's just grossly immature to the point of it being a waste of time to discuss this.
We're on the message board of a hobby gaming website. Everything we do here is a waste of time. And we discuss science-fictional and fantastic topics with some regularity. Hell, we have an entire subforum devoted to questions about a magical being for the existence of which there has never, in the entire history of human civilization, been the slightest scrap of scientifically sound evidence. The potential for strong AI, however remote, still has a better basis in observable reality than that. We don't know much about the nature of human and machine cognition, but we can still talk about what we do know.
Now, you go on for some length, but your objection seems simply to be that strong AI is not close at hand. Okay. I haven't seen a single post claiming that it is. So get off your high horse. Your practical expertise is of course welcome here. But your attitude is not.
No!
Programs that already evolve were "told" to evolve, so a program only does what it was written to do.
Of course you can always attribute the state of a running program to the nature of the rules governing the program; so, too, can you attribute the state of a living system to the nature of the rules governing that system -- organic chemistry, the operation of the laws of physics, hell, even the design of God if you're into that sort of thing. A living system that evolves was "told" to evolve just as much as any program was, in the sense that inexorable rules imposed from without force it to do so.
The burden here, which you do not substantively engage with, is to identify a relevant difference in the nature of the rules governing these domains. The questions before you are:
1) Is there any real process that can take place in the biochemical domain that cannot be efficiently simulated or otherwise replicated in the computational domain?
2) Does the real process that you've identified in (1) play a necessary role in cognition? (As in, without this process, cognition would be impossible.)
If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI.
Have any of you actually taken an AI class? Have any of you coded any AI algorithms?
I've coded in LISP, learned some Prolog, coded some rule-based systems, and coded a neural network as well. Actually, one of my major projects was to try to code a basic artificial intelligence that could perform better than ELIZA. What I was working on was later superseded by XML database chat-bots.
...
Honestly, if any of you just crack open a book on AI and start some basic coding examples, you would laugh when you see how far off we are.
I'm not deliberately trying to offend you (though since you dropped the old "crack open a book" bomb I feel considerably less bad about it if it turns out that way) but the stuff you're describing here is rudimentary precisely because your level of knowledge is. You're describing introductory material. As groundbreaking as ELIZA was, chatbots aren't exactly the cutting edge of AI research anymore. It is as if you asked us to open a fifth-grade math book and, finding nothing in there about the Riemann hypothesis, declared attempts at solving it to be a waste of time.
Also, not that it matters because it was an ad lapidem argument to begin with, but your Arduino code is something I could have written when I was about 10 years old. It moves a servo to a fixed position! This is trivial! How could you consider that to be demonstrative of anything? Even if someone provided you with a perfect "distinguish good from evil" subroutine, it would not change the semantics of your trivial program. Moreover, our ignorance concerning the "distinguish good from evil" subroutine is precisely what safe AI research seeks to remedy, so if you really want that subroutine you should be kicking in your money alongside Elon Musk's. I could go on but, well, you're making a bad argument. Let's leave it there. Speaking of cracking open a book, I'd like to take this opportunity to advise everyone in this thread (well, really, everyone everywhere) to read Alan Turing's 1950 paper, Computing Machinery and Intelligence. (And TomCat, if they didn't assign this in your AI class, find a new class.) There are at least three reasons why you should:
It's one of the greatest papers at the intersection of science and philosophy ever written. Packed with insight, yet totally devoid of incomprehensible mathematical jargon or symbology, and easily readable to fluent English speakers.
It's the greatest paper on AI ever written. The computer scientist Scott Aaronson quips that 70% of all AI research done so far was done by Turing in 1950, and the remaining 30% by the plodding mortals that have followed since. If you want to talk about AI intelligently, well, the only way you can do that is after you've read it.
Many skeptical arguments about AI, not only in this thread but by eminent scientists and philosophers who should know better, were actually anticipated and answered by Turing in 1950.
And finally, since the topic of technological failure has been amply covered by several posters here, I'd like to counterbalance that by saying something about technological success.
Being an AI researcher has to be just about the worst job in the world if you're looking to be recognized for your achievements, because you can make what everyone agrees in advance is a touchdown, spike the ball, and do a victory dance, only to find that someone has moved the goalposts another hundred yards down the field. After the hundredth or so time this happens, I imagine it gets pretty frustrating.
First, it was chess. Turing himself suggested it as a benchmark, and everyone agreed that whatever process it was that undergirded good chess play qualified as thinking. Well, they made a computer that could do it better than any human (spike ball, victory dance?) -- and then suddenly it wasn't thinking anymore! (goalposts moved 100yd.)
Then it was mathematical proof. Of course from the beginning computers were used to assist in calculation, but they would find applications in generating new insights as well -- a computer resolved the Robbins Conjecture, an infamous problem that was so difficult that Tarski assigned it as a challenge problem to his best students. (Spike ball? Victory dance?) "Pshaw, it was just searching for consequences of the axioms," said the skeptics. (goalposts moved 100yd., and P.S., no *****, Sherlock. That's what mathematicians do.)
Then it was "creativity." Computers will never be creative. Well, what about when they start making music?Poetry?. (Spike ball? Victory dance?) "It was programmed to do those things!" (goalposts moved 100yd, and this answer is fallacious for reasons I've mentioned.)
Then games of imperfect information. Poker? Crushed. In fact, completely solved! Not only can computers bluff and read bluffs, they can do so in a provably optimal fashion -- they are necessarily as good or better than the best human. (Spike ball? Victory dance?)
Really I could go on like this for 50 pages, but I think that's enough examples to be going on. There seems to be a kind of cognitive bias in the skeptical reactions to these things -- call it the "Mommy, I'm Special Heuristic", or MISH for short. It's a variant on the same old belief that man is at the center of the universe, whether it manifests itself in an overtly-spiritual way as in "machines don't have souls", or in the form of something like italofoca's belief in the specialness of human brain chemistry, or in some other way altogether. The MISH is a one-two punch of negative cognitive bias: it leads one to undue skepticism about potential achievements in computer science (how could a computer ever do that? Then it would be like me, but I'm special, Mommy!), and it leads one to dismiss these achievements ex post facto. (Whatever the computer is doing might look like thought, but it isn't really, because I'm special, Mommy!)
The important thing to notice is that the MISH has been wrong not once but every single time it's been put to a concrete test. Of course, one can go on rationalizing forever, but after so many failures isn't it time to consider abandoning the heuristic?
Anyhow. I'm not saying a strong AI singularity is going to happen in the near future, but the state of play in AI has been utterly misrepresented by the posts made so far. I don't think AI believers are delusional fools. They have plenty of circumstantial evidence for the feasibility of AI, and a track record of exponential technological and software improvements. Moreover, even if AI is not forthcoming, safe AI research has enormous follow-on benefits outside the field of AI -- an argument for basic morality that takes the form of a computer program or mathematical proof would be a profound breakthrough in moral philosophy.
Musk's millions aren't being wasted here. You could argue, quite correctly, that there are much better philanthropic causes he could have bolstered. But he also could have done much worse.
My reasons for this are mostly twofold. Firstly: I don't understand this obsession with the idea that AI will try to do humans harm. The claimants often refer to something like "robots have no morality, they will operate purely on logic" but this an assumption about an as of yet fictive technology doesn't mean an AI might try to harm humans. These arguments often liken the AI to a sociopath: incapable of empathy. While this may be true however, the comparison is inherently flawed: while indeed a being incapable of having emotions might be incapable of empathy, a sociopath still has emotions, which I would argue leads to the higher rates of criminality among these people. After all, why would you steal if you don't covet, or hurt people if you don't get angered?
So since an a-emotional AI would not have wants of their own making (since that implies the urge to do something), their actions would be controlled by the directives built into them by their programmers. So in the case of AI being completely logical, the danger does not come from the AI, but from the people designing them.
But what do you think? Are you worried about AIs, and why? What do you think should be done to prevent possible damage?
Rationale: When was the last time you saw a computer program that didn't crash, a robot that didn't break down, or any machine that worked exactly the way it was supposed to?
I rest my case.
Neither transhumanism nor the singularity are to be taken seriously because technology is crap.
Worrying about AI could not be a more fruitless endeavor. I think anyone who codes would understand that the computer does whatever you want it to. When you're trying to address concerns that have no basis in practical reality, it's nothing more than pure speculative fantasy.
It's honestly on the level of: Do you think Cyclops's beams could cut through adamantium?
Cyclops is a mutant in fiction. He doesn't exist. Neither does our AI, save for extremely basic rule based logic systems, and neural networks that currently do nothing more than some dimensional analysis. We have nothing approaching any kind of consciousness or emotions. So worrying that these systems lack empathy or human morality is just a waste of time in my opinion.
I'm in support of us making a human-equaling and potentially human-surpassing AI because it could be directly helpful to advancing our technology, organizing our societies better, and increasing our survivability as a species in many ways. Transhumanism is something it could also help open the doors to. Contrary to Highroller's conclusion, I'd argue the singularity and transhumanism are to be taken seriously - as a future likelihood to extensively think out and prepare for, but of course I agree we are far from both at this point in time.
That said, it is feasible in theory to do but not anytime soon, because such a creation as far as I've figured will be dependent on breakthroughs in neuroscience and processor design. Quantum processors, maybe, if they're possible. An easy path (and by easy I mean in an extremely loose sense - to be based off of something that exists instead of designing totally from scratch, ground up) to take could be to try to emulate the human brain's functionality heavily, in machine form. Perhaps something that functions with a hardware system similar to how humans have neural networks. Learning, pattern recognition, emotions, social intelligence, self-'awareness' - perhaps the core program built like a tree with 'survival' on top and every other gigantic algorithm below (such as a 'curiosity' algorithm) that like a node serving the purpose of the top of the tree. Design the AI such as that it is in it's own interest to learn, design improvements for itself, maybe help invent better technology for us with the data given to it in order to further its own survival programming. That would require programming work of a quality and quantity surpassing by an order of magnitude the complexity of every single program humans have ever written for a computer. The engineering work just to support that kind of programming? Probably unprecedented in complexity. It would be a gamble for it to work at all and a huge expense and effort to accomplish.
In addition to that... I have thought over quite a bit about whether an AI of the specific level I've described could be a threat to us in its own intents. I imagine not, if we just design it right. If it evolves and learns with a result of deciding it wants to 'kill all humans' we just turn it off and identify and fix the problem. But we humans could easily be at fault for misusing such a technology. It would have to be safeguarded by the most rational among us, extensive contingencies and safeguards put in place, likely not to contain it but to prevent the many ill-intentioned humans around the world from copying and misusing the technology. I think, given enough preparation for security, a responsible, rational coalition in humanity should seek to design such a thing. It would be such a sad thing to prevent that technology's potential from being tapped into just because some of humanity isn't responsible enough to have it in their hands. It must be kept out of reach of those who would misuse it, and built by those who would use it wisely and carefully for all of the world's benefit.
Still, a gamble of an effort but it's worth it to keep trying, I'd say.
We already have artificial neural networks that are capable of machine learning and "training." They're used extensively in object recognition and intelligent data analysis. For example, Google Street View uses them. They're also the basis of self-driving cars, from the real deal to hobbyist models.
It's not unreasonable to think we could achieve human-like intelligence with silicon-based computing power. It's probably unnecessary to use quantum computing, since modern computers are already significantly faster than human brains. This issue is one of organization, not processing speed; the latter is what quantum computing is hoping to improve.
Facinating. It seems I am happily shown wrong on some of the things I thought still held true about computers vs humans, and that the technology is in fact getting there a bit more quickly than I thought. So there is a computer out there with more processing power than me. Makes me feel a bit smaller in the grand scheme of things. One thing I do envy of computers as well, their data storage. It seems to me the human brain is heavily flawed in that I forget so many things, can't recall my days in second-by-second crystal clear detail. But curiously, it does seem like I always retain memory of what is useful and important, the little details, such as that I don't have to retain perfect copies of everything I see to remember them accurately. I wonder just how do our brains manage to throw away so much information but still get the fundamentally useful pieces of what is remembered right. It also calls into question, how long could a human brain age before it starts having to drop the most important bits of information retained in the past to make more room for new memory? Or, are we constantly doing that already, not really aware of it?
So, as far as those artificial neural networks go, how far away would you say we are from a 'human-awareness' equaling AI, if that's even reasonable speculated upon? Might you elaborate more on the 'issue of organization'?
I do think the technological singularity is something we should be concerned about, but I'm not sure it's our most pressing issue at the time.
candidus inperti; si nil, his utere mecum.
If you're going to fear something out of science fiction, I would think those damn Xenomorphs would be more worth your while.
The fact of the matter is we can't write a program that does ONE thing, or build a machine that does ONE thing, without it breaking down all the time. Exactly how are we going to build a functional human brain?
Seriously, am I the only one who sees this as thunderously stupid? Some dumbass donated ten million dollars to research the threat level of something that doesn't exist and may never exist. Meanwhile, the world is tormented by REAL intelligences that raise armies and are actually posing a threat to humanity, as well as things that are neither robots nor intelligences that kill droves of people like heart disease, strokes, cancer, famine, and poverty.
That's like cleaning your house for mold while it's on fire.
A fool and his money are soon parted.
Sure we can. I've had my phone for over a year and it's never crashed. If it were, say, a self-aware machine capable of killing me, it could have done so many times over without crashing once.
Futhermore, to add on to Synalon Etuul's point, it's not like we humans are perfectly engineered, either. But we've dominated every other lifeform on earth. Computers don't need to be perfect to eclipse us; they just need to be more perfect than us. I can easily see this happening.
And regarding the "a program only does what you tell it to" line: that only holds until you start writing programs that can evolve, which we are already doing. I think this is the relatively new development that has a lot of people concerned. Of course it's not a danger right now, but that doesn't mean that it isn't an issue that it would hurt us to start thinking about.
If you require an AI to be an infallible machine, then there will never be an AI. There never will be one because there can't be one; it's logically impossible. You say nothing interesting when you say this. No computer scientist ever has or would propose such a thing -- the first computer scientist that there ever was already knew it was impossible!
So. AI, for a reasonable definition of that term and to the extent it can exist, will necessarily have failure modes. It makes perfect sense, then, that someone interested in realistic rather than infallible AI would pay special attention to avoiding particular failure modes -- like, say, playing a game of global thermonuclear war with real missiles.
No!
Programs that already evolve were "told" to evolve, so a program still only does what it was written to do.
The gap between a program that learns to associate new data and a program that rewrites itself into a different program, alien to its former self, is huge.
One way to think about this is that knowledge has contingency. A dog cannot learn geometry because geometry is not contingent with a dog's brain functions. An AI's contingency is even better defined: it is the lines the programmer wrote plus all the possible extra lines that can arise from the learning algorithm. A program meant to recognize photos will not suddenly start stealing from our bank accounts or creating fake news, because the lines needed to perform those actions are not contingent in the photo program; for this program, bank accounts and news are alien, things that do not exist.
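To put that in concrete terms, here is a minimal sketch (Python; the toy data is made up) of a learner whose "learning" consists entirely of adjusting numeric weights. No training set, however strange, can make it do anything except emit a 0 or a 1, because everything outside the weights is fixed code:

# A perceptron-style learner: training only ever changes the numbers in
# `weights`. The program's possible actions (emit 0 or 1) are fixed in advance.
def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0  # the only "actions" this program has

def train(weights, examples, lr=0.1, epochs=50):
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, features)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights

# Made-up toy data: label is 1 when x + y > 1 (third feature is a constant bias).
examples = [((1.0, 0.5, 1.0), 1), ((0.0, 0.2, 1.0), 0),
            ((0.9, 0.9, 1.0), 1), ((0.1, 0.1, 1.0), 0)]
weights = train([0.0, 0.0, 0.0], examples)
print(predict(weights, (0.8, 0.7, 1.0)))  # prints 0 or 1 -- never anything else

Whatever the weights end up being, "steal a bank account" simply isn't in the space of behaviors this program can reach.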
Also, an artificial human brain is impossible because human brains are organic [and that's very likely a core aspect of their peculiar functioning], and our capacity for creating "new" organic things is limited. We can clone a brain, and we can alter one using its own rules, but we cannot create new contingencies. We will be creating a completely new and unique life form before we even start on new brains. And note, this new brain would have to be an unprecedented design, because it would need both the computing power of wired brains and the self-awareness and creativity of organic brains, and there's still a chance those two things contradict each other and work like a human using a computer, or like two separate brains.
I was specifically careful in my wording to try to prevent exactly that kind of response. A keyword I used was 'could'. Maybe a better word would have been 'might'. An AI of the level we speak of might help us unlock all those possibilities I mentioned, and that is why I think it's worth pursuing: not because I absolutely know it will, but because the potential is there if our species tries. We'll probably fail multiple times along the way, but it's not as impossible and fantastical as you seem to be suggesting I think it is. It's not like a unicorn. The key reason I think it is possible is that our technology has been progressing particularly quickly in the last century, and the human brain is something nature has built billions of times. If a careless force with no will, like nature, led to us, maybe a careful, willed force like us could make something similar and put it to good use.
As far as breaking down... humans break down too. Our bodies are just constantly fixing themselves; most of our cells are replaced over the course of every 7-10 years. We even expect far, far more mistakes and problems from our fellow humans than from the computers we have now, and yet the human race goes on. People have survived extremely traumatic brain injuries and still retained self-awareness. Why is fixing a hypothetical AI as things break such a problem? It could still be made to work, and when it doesn't, we fix it so it starts working again. Fixing it as we go would probably be far easier than the still-risky procedure of brain surgery, since it would be a mechanical machine. Seems a non-issue to me.
I figure it will just take a heck of a lot of work to make a machine that can 'think' like a human. It could even make errors like a human and still be useful despite being imperfect.
Truth be told, I have actually gone down that line of thought about AI you've shared above, in my own way. There are over 7 billion of us humans here right now, and we accomplish complex, difficult tasks that no one of us could do alone. In a sense, teams of engineers, doctors, scientists, and inventors are uniting their collective 'processing power' to make things that surpass what any single human could develop. So why even try to build an AI when there are 7 billion of us, pretty much an army of mobile, self-aware supercomputers? Mainly, my thinking is it would be worth it because an AI could be designed to be more task-oriented and simply get things done better and quicker than a human. Humans don't like being slaves, and we need frequent rest and time to do lots of unproductive, pointless stuff, like lurking about these forums.
It could also turn out that evolution has already sculpted a better computer system than anything we could ever design. To achieve goals similar to what we'd use an AI for, we could instead improve the methods by which we all communicate with each other, perhaps by looking into ideas for interfacing our brains with one another.
As far as the 'threat' of AI goes, I already agree with you there. There is no point in throwing all that money at researching contingencies for AI. Rather, the contingencies that should be prepared, when and if the technology starts showing more promise, are for dealing with human beings who have malicious intentions for AI. Any problems the AI itself could create on its own are comparatively easy to prepare for from a programming perspective.
Why do you say this? What does something organic provide that something artificial cannot?
We as humans just exist; AI is a tool we've invented or will invent. One that's really not "necessary" to our survival, yet its perfection could spell our doom. One of the problems of being human is that the temptation to open Pandora's box is greater than the fear of the result.
I think you're thinking too narrowly. What if, say, someone created a robot designed to assist people, like your typical sci-fi robot that walks, talks, and looks like a human? Such a robot would be programmed to be adaptable, at least to some extent, because it would be impossible to program it for every single thing it would come across. What, then, would prevent it from adapting in the "wrong" way, possibly even in a dangerous way?
I'm genuinely asking, coming from ignorance here (I don't know a thing about programming). If it's a matter of the programming building in safeguards, I don't like that one bit. I've seen software fail in way too many basic ways to trust a programmer to guarantee my safety in that way. If it's a matter of hardcoding something like Asimov's Three Laws, I'd feel a little better...but even then, you just have to read I, Robot to realize that the Laws aren't foolproof.
Why? You can say this with as much certainty as you like, but I see no backup for your claims whatsoever.
Asimov was a great writer. However, I'm not sure you have actually read I, Robot. Although the androids in those books do have some minor issues, they are a huge boon to society.
Most specialists believe that human thought is the result of BIOchemical reactions. There's a distinction between organic and inorganic chemistry; they don't share the same reactions. It's the same reason we cannot make a machine that feeds on nutrients or ferments beer like microorganisms do.
This robot would be able to rewrite its program, but all the possible rewritten lines are already defined, indirectly, by the learning algorithm. It means the robot's intelligence is bounded.
Of course the machine can go wrong and attack a human for whatever reason. But that would be no different from a mechanical defect on a plane, crashing and killing hundreds. What it cannot do is start thinking like a human: hacking our leaders' e-mail and sending messages that cause a nuclear war, or hiding itself in a secret factory and building a robot army. The programs that would allow such things use programming lines that are non-contingent in its learning space.
The error here comes from thinking that just because an AI can learn something, it can learn everything. This is not how learning algorithms work right now, as far as I know.
I agree with Highroller. Worrying about AI dangers in the present is literally worrying about a menace that does not exist. Can it exist in the future? Hell, who am I to dictate the limits of technology? But worrying about possible future dangers is futile. It's like asking for a ban on all MTG cards because in the future MTG players might become a criminal-terrorist organization that causes civil war in several countries.
If the robot can modify the learning algorithm, it begins a chaotic feedback cycle - what Douglas Hofstadter calls a "strange loop". This is more or less how humans think. You're right that we shouldn't be any more worried that a robot will go murderous on us than we'd be worried that a random baby will go murderous on us, especially since human cognition also shows it is possible to have imperatives like empathy that operate below the conscious and modifiable level. Asimov's Three Laws, in short, seem feasible. But you vastly overstate the intrinsic bounds of the process. Through multiple iterations of self-modification, the learning algorithm can indeed arrive at programs that were once, as you term it, "non-contingent". A strange loop is by its very nature unpredictable - it's the best explanation we have for what this whole "free will" thing is.
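A drastically simplified, purely hypothetical illustration of the difference (Python; every number here is made up): below, the parameters of the learning rule are themselves part of the state being revised, so the process that changes the system is itself subject to change.

# Toy "strange loop": the rule that changes the system is itself part of
# the state being changed. Drastically simplified and purely illustrative.
import random

state = {"value": 0.0,      # the thing being learned
         "step": 1.0,       # parameter of the learning rule
         "meta_rate": 0.5}  # how aggressively the rule revises itself

def objective(x):
    return -(x - 3.7) ** 2  # made-up objective with its peak at x = 3.7

for _ in range(5000):
    # Ordinary learning: nudge the value using the current rule.
    candidate = state["value"] + random.uniform(-state["step"], state["step"])
    if objective(candidate) > objective(state["value"]):
        state["value"] = candidate
    # Meta-level: the rule revises its own parameters, including the
    # parameter governing how it revises itself.
    state["step"] = max(1e-6, state["step"] *
                        random.uniform(1 - state["meta_rate"], 1 + state["meta_rate"]))
    state["meta_rate"] = min(0.9, max(0.01,
                             state["meta_rate"] * random.uniform(0.9, 1.1)))

print(round(state["value"], 2))  # tends to end up near 3.7

Nothing here is dangerous, obviously; the point is only that once the updater updates itself, predicting the long-run trajectory from the initial code gets much harder.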
Have any of you actually taken an AI class? Have any of you coded any AI algorithms?
I've coded in Lisp, learned some Prolog, built rule-based systems, and coded a neural network as well. One of my major projects was an attempt at a basic artificial intelligence that could perform better than ELIZA. What I was working on was later superseded by XML-database chatbots.
At any rate, it's just grossly premature, to the point of being a waste of time, to discuss this. The tech hasn't been implemented yet. Nothing close to the tech has been implemented yet.
Don't talk about neural networks as something that will just grow and take over the world. It's sci-fi, I assure you.
Neural networks have been around for FIFTY years without any real progress. It wasn't until recently, like a few years ago, that some promising new developments were made. But for those of you in the field, the promise of neural networks has been touted for decades, and it has failed to deliver.
Honestly, if any of you just crack open a book on AI and start some basic coding examples, you would laugh when you see how far off we are.
For now, I'll admit that we have made progress in basic pattern recognition and image analysis. But it just doesn't make sense to talk about ensuring morality in AI before anything resembling AI is built.
In fact, to me, the only reason you would want to donate money and talk about Skynet-like possibilities is as a pure marketing stunt, because it encapsulates a kind of fear that intrigues people.
The reality is our "AI" for now is nothing more than:
1. Searching of large related databases.
2. Basic pattern recognition in very limited applications. (Think CAPTCHAS or Siri)
3. Solving of mathematical equations through searches.
4. Advanced Text Parsing.
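To show how mundane item 2 usually is under the hood, here's a toy nearest-neighbor classifier (Python; the "training data" is made up). A great deal of what gets marketed as AI is recognizably this, scaled up:

# Toy pattern recognition: label a point by its nearest labeled example.
def classify(point, labeled_examples):
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_examples, key=lambda ex: sq_distance(point, ex[0]))
    return nearest[1]

# Made-up data: two clusters of 2-D feature vectors.
examples = [((0.1, 0.2), "cat"), ((0.2, 0.1), "cat"),
            ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog")]
print(classify((0.85, 0.75), examples))  # -> dog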
I picked these up from wiki, which basically encapsulates how I feel about the whole endeavor:
"Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p. 82)
"Neural networks, for instance, are in the dock not only because they have been hyped to high heaven..."
We can't ensure morality or emotions in our AI, because the computational foundations where such high-level concepts could take hold haven't been built yet.
To know the difference between right and wrong, you have to have some working physical knowledge of the world. We don't have AI with working physical knowledge of the world. We don't have any machines with working physical knowledge of the world. What we have are limited programs and computers which solve equations without any kind of comprehension or awareness of what they are doing.
Yesterday I built a working robotic arm with an Arduino. Let's all debate how I can ensure my robotic arm knows right from wrong and doesn't wind up killing me out of spite for me, its creator.
I'll tell you all what I did so you can see why this debate is ridiculous as hell.
My robotic arm uses linear actuators and servos. When I send a pulse of 1500µs at 7.4 volts to my servo, it swings to the base position. When I input a signal of 1200µs, it swings to -90 degrees. When I input a signal of 1800µs, it swings to +90 degrees.
I hope you all can tell me what code I should put into my arduino controller to give my robotic arm knowledge of right from wrong.
No need to waste further brain cells debating this. I'm giving you all the chance to tell me the exact code I should put into my controller to teach the robotic arm right from wrong so it won't hurt people in the future.
Go on. I'll upload the code today. This is what I have so far.
#include <Servo.h>

// Servo objects; a maximum of eight can be created.
Servo primary_servo;
Servo secondary_servo;
Servo default_open;
Servo default_close;

int pos = 0;  // variable to store the servo position (currently unused)

void setup()
{
  /* SERVO INITIALIZATION */
  primary_servo.attach(9);    // PRIMARY SERVO ON CHANNEL 9
  secondary_servo.attach(8);  // SECONDARY SERVO ON CHANNEL 8
  default_open.attach(13);
  default_close.attach(12);
}

void loop()
{
  /* INITIAL TRANSFORMATION SEQUENCE */
  delay(50);
  primary_servo.writeMicroseconds(2000);
  secondary_servo.writeMicroseconds(1900);
  delay(3000);  // wait three seconds for the right and left arms to open
  default_open.writeMicroseconds(2000);
  default_close.writeMicroseconds(1000);
  delay(8000);  // UPPER BLADE RISES; RIGHT ARM EXTENDS OUTWARD USING SLIDE RAIL COMPLETION
  /**** INSERT KNOWLEDGE OF GOOD AND EVIL CODE HERE ****/
  primary_servo.writeMicroseconds(1600);
  secondary_servo.writeMicroseconds(1800);
}
That doesn't really answer anything. There are a huge number of radically different models of computation, but the set of things they can compute ends up being the same. Why should I suspect that using different chemicals breaks that trend?
In fairness, that can be said about a lot of things, because it turns out that a lot of hard problems require a huge amount of data to have a chance at solving. Machine translation research started up around the same time as neural network research. Both ran into the problem that computers of the time were vastly underpowered for the problems they were trying to tackle. Both fields stagnated for decades, and only recently has machine translation started making real progress. I'd be cautious about interpreting that as meaning that fluent machine translation is a pipe dream for the distant future, just as I'm not sure the history of neural networks should be interpreted as being a flop.
Now, you go on for some length, but your objection seems simply to be that strong AI is not close at hand. Okay. I haven't seen a single post claiming that it is. So get off your high horse. Your practical expertise is of course welcome here. But your attitude is not.
Of course you can always attribute the state of a running program to the nature of the rules governing the program; so, too, can you attribute the state of a living system to the nature of the rules governing that system -- organic chemistry, the operation of the laws of physics, hell, even the design of God if you're into that sort of thing. A living system that evolves was "told" to evolve just as much as any program was, in the sense that inexorable rules imposed from without force it to do so.
The burden here, which you do not substantively engage with, is to identify a relevant difference in the nature of the rules governing these domains. The questions before you are:
1) Is there any real process that can take place in the biochemical domain that cannot be efficiently simulated or otherwise replicated in the computational domain?
2) Does the real process that you've identified in (1) play a necessary role in cognition? (As in, without this process, cognition would be impossible.)
If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI.
I'm not deliberately trying to offend you (though since you dropped the old "crack open a book" bomb I feel considerably less bad about it if it turns out that way) but the stuff you're describing here is rudimentary precisely because your level of knowledge is. You're describing introductory material. As groundbreaking as ELIZA was, chatbots aren't exactly the cutting edge of AI research anymore. It is as if you asked us to open a fifth-grade math book and, finding nothing in there about the Riemann hypothesis, declared attempts at solving it to be a waste of time.
Also, not that it matters because it was an ad lapidem argument to begin with, but your Arduino code is something I could have written when I was about 10 years old. It moves a servo to a fixed position! This is trivial! How could you consider that to be demonstrative of anything? Even if someone provided you with a perfect "distinguish good from evil" subroutine, it would not change the semantics of your trivial program. Moreover, our ignorance concerning the "distinguish good from evil" subroutine is precisely what safe AI research seeks to remedy, so if you really want that subroutine you should be kicking in your money alongside Elon Musk's. I could go on but, well, you're making a bad argument. Let's leave it there.
Speaking of cracking open a book, I'd like to take this opportunity to advise everyone in this thread (well, really, everyone everywhere) to read Alan Turing's 1950 paper, Computing Machinery and Intelligence. (And TomCat, if they didn't assign this in your AI class, find a new class.) Reason enough: writing in 1950, Turing anticipated and rebutted nearly every objection raised in this thread, including the "a machine can only do what it's told" argument, which he discusses as Lady Lovelace's objection.
And finally, since the topic of technological failure has been amply covered by several posters here, I'd like to counterbalance that by saying something about technological success.
Being an AI researcher has to be just about the worst job in the world if you're looking to be recognized for your achievements, because you can make what everyone agrees in advance is a touchdown, spike the ball, and do a victory dance, only to find that someone has moved the goalposts another hundred yards down the field. After the hundredth or so time this happens, I imagine it gets pretty frustrating.
First, it was chess. Turing himself suggested it as a benchmark, and everyone agreed that whatever process it was that undergirded good chess play qualified as thinking. Well, they made a computer that could do it better than any human (spike ball, victory dance?) -- and then suddenly it wasn't thinking anymore! (goalposts moved 100yd.)
Then it was mathematical proof. Of course from the beginning computers were used to assist in calculation, but they would find applications in generating new insights as well -- a computer resolved the Robbins Conjecture, an infamous problem that was so difficult that Tarski assigned it as a challenge problem to his best students. (Spike ball? Victory dance?) "Pshaw, it was just searching for consequences of the axioms," said the skeptics. (goalposts moved 100yd., and P.S., no *****, Sherlock. That's what mathematicians do.)
Then it was "creativity." Computers will never be creative. Well, what about when they start making music? Poetry? (Spike ball? Victory dance?) "It was programmed to do those things!" (goalposts moved 100yd, and this answer is fallacious for reasons I've mentioned.)
Then games of imperfect information. Poker? Crushed. In fact, heads-up limit hold 'em has been essentially solved! Not only can computers bluff and read bluffs, they can do so in a provably near-optimal fashion -- they are necessarily as good as or better than the best human. (Spike ball? Victory dance?)
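(For the curious: that result rests on counterfactual regret minimization, whose core idea, regret matching, fits in a few lines. Here's a toy self-play sketch for rock-paper-scissors in Python; the time-averaged strategy converges toward the equilibrium of one-third each.)

# Regret matching in self-play for rock-paper-scissors. Each player plays
# in proportion to accumulated positive regret; the *average* strategy
# converges to the Nash equilibrium (1/3, 1/3, 1/3).
import random

A = 3  # actions: rock, paper, scissors
PAYOFF = [[0, -1, 1],   # PAYOFF[a][b] = payoff for playing a against b
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regrets):
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / A] * A

def sample(strat):
    r, cum = random.random(), 0.0
    for action, p in enumerate(strat):
        cum += p
        if r < cum:
            return action
    return A - 1

regrets = [[0.0] * A for _ in range(2)]
strat_sum = [[0.0] * A for _ in range(2)]

for _ in range(20000):
    strats = [strategy(regrets[0]), strategy(regrets[1])]
    moves = [sample(strats[0]), sample(strats[1])]
    for i in range(2):
        me, opp = moves[i], moves[1 - i]
        for a in range(A):  # regret: how much better action a would have done
            regrets[i][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
        for a in range(A):
            strat_sum[i][a] += strats[i][a]

avg = [s / sum(strat_sum[0]) for s in strat_sum[0]]
print([round(p, 2) for p in avg])  # roughly [0.33, 0.33, 0.33]

Scaled up with counterfactual bookkeeping over the full game tree, this is the family of techniques the poker solvers are built on.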
Neural networks are science fiction? Are you f***ing joking? They made a universal translator with them, and you can install it on your phone today. Or how about a robot that learns to cook by watching YouTube?
Really I could go on like this for 50 pages, but I think that's enough examples to be going on with. There seems to be a kind of cognitive bias in the skeptical reactions to these things -- call it the "Mommy, I'm Special" Heuristic, or MISH for short. It's a variant of the same old belief that man is at the center of the universe, whether it manifests in an overtly spiritual way ("machines don't have souls"), in the form of something like italofoca's belief in the specialness of human brain chemistry, or in some other way altogether. The MISH is a one-two punch of negative cognitive bias: it leads one to undue skepticism about potential achievements in computer science (how could a computer ever do that? Then it would be like me, but I'm special, Mommy!), and it leads one to dismiss these achievements ex post facto (whatever the computer is doing might look like thought, but it isn't really, because I'm special, Mommy!).
The important thing to notice is that the MISH has been wrong not once but every single time it's been put to a concrete test. Of course, one can go on rationalizing forever, but after so many failures isn't it time to consider abandoning the heuristic?
Anyhow. I'm not saying a strong AI singularity is going to happen in the near future, but the state of play in AI has been utterly misrepresented by the posts made so far. I don't think AI believers are delusional fools. They have plenty of circumstantial evidence for the feasibility of AI, and a track record of exponential technological and software improvements. Moreover, even if AI is not forthcoming, safe AI research has enormous follow-on benefits outside the field of AI -- an argument for basic morality that takes the form of a computer program or mathematical proof would be a profound breakthrough in moral philosophy.
Musk's millions aren't being wasted here. You could argue, quite correctly, that there are much better philanthropic causes he could have bolstered. But he also could have done much worse.