Damn, someone should probably spend $10 million to research how to protect us from Crashing00's berserk mode...
But that being said, Crashing, all of that stuff is cool, but then again, none of those tasks are all that difficult for a human to do. Think about how much work they needed to put in to create something like Watson, which was at the time an unheard-of and outlandish feat. Think of how many people worked on the project, how many hours they spent, how much money was spent on it, how many iterations they had to run...
All to learn a game a human child can figure out in mere minutes.
No, I'll tempt fate and risk the possibility of making AM more pissed than he already is by saying that Musk's millions are going to waste (unless he gets a tax deduction. Still, for ****'s sake man, donate to MEDICAL research.). And I have two reasons why:
1. There's no such thing as strong AI. A super AI, an AI that can function on a cognitive level like a human being, an AI that can do not one of the feats, but all of them; an AI that can actually form the thought, "If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI," does not exist and may never exist.
2. That's not actually where the threat lies. The threat is not in a technological singularity, or Skynet, or HAL, or a robot apocalypse.
The threat of any technology is where it always has been: in man.
If you're going to research AI safety, why the bleeding frell would you research safety against an artificial mind? To me, that's like the US military saying, "Hey, should we spend the bulk of our time researching the threat the Russians pose against us, or that General Zod poses against us?"
Isn't the FAR MORE PERTINENT threat not a super-advanced AI capable of doing everything that humans do turning on humanity, but instead a regular AI, capable of only what it was written to do, in the hands of the wrong people? That maybe, like with every single form of technology ever created, it's the people whose hands the technology is in that are the main threat?
Don't you think that instead of assessing the threat of a strong AI, we should spend our time researching the threat level of more and more complex versions of the programs that only do what they're told to, as italofoca put it, and what would happen if they got into the wrong hands? You know, an actual threat we're dealing with right now?
In fairness to the Future of Life people, I don't think Musk's comments about Terminator-style AIs really capture what's going to be done with his $10 million. I very much doubt anyone is going to be spending it on constructing the underground rave hall where we'll have our parties once we have to blight the sky to stop the evil robots.
I suspect there are a lot of ethical questions about AI that will be relevant in the near future. For example, it's certainly plausible that a self-driving car could find itself in a version of the classic Trolley Problem, where it has to choose between two potentially fatal collisions. You can look at some of their proposed research topics here:
Computer Science:
Verification: how to prove that a system satisfies certain desired formal properties. ("Did I build the system right?")
Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. ("Did I build the right system?")
Security: how to prevent intentional manipulation by unauthorized parties.
Control: how to enable meaningful human control over an AI system after it begins to operate.
Law and ethics:
How should the law handle liability for autonomous systems? Must some autonomous systems remain under meaningful human control?
Should some categories of autonomous weapons be banned?
Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? Should such trade-offs be the subject of national standards?
To what extent can/should privacy be safeguarded as AI gets better at interpreting the data obtained from surveillance cameras, phone lines, emails, shopping habits, etc.?
Economics:
Labor market forecasting
Labor market policy
How can a low-employment society flourish?
Education and outreach:
Summer/winter schools on AI and its relation to society, targeted at AI graduate students and postdocs
Non-technical mini-schools/symposia on AI targeted at journalists, policymakers, philanthropists and other opinion leaders.
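The machine-ethics item in the list above is, at bottom, an expected-cost question, and it can be made concrete in a few lines. The numbers below are entirely invented for illustration (a statistical value of life of $10M, a $50k repair bill); they come from me, not from the research agenda:

```python
# Hypothetical expected-cost comparison of the kind the machine-ethics
# item asks about. Every number here is invented for illustration.
def expected_cost(probability, cost):
    return probability * cost

# Option A: swerve, with a small chance of injuring a pedestrian
# (valued, arbitrarily, at $10M in statistical-life terms).
swerve = expected_cost(0.01, 10_000_000)   # roughly 100,000

# Option B: hold course, with near-certain large material damage.
stay = expected_cost(0.99, 50_000)         # roughly 49,500

# A bare expected-cost rule would hold course; whether a national
# standard should fix these weights is exactly the open question.
cheaper = "stay" if stay < swerve else "swerve"
```

The point of the sketch is not the answer but the question the list raises: whether regulators should dictate these weights at all.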
Of course you can always attribute the state of a running program to the nature of the rules governing the program; so, too, can you attribute the state of a living system to the nature of the rules governing that system -- organic chemistry, the operation of the laws of physics, hell, even the design of God if you're into that sort of thing. A living system that evolves was "told" to evolve just as much as any program was, in the sense that inexorable rules imposed from without force it to do so.
I agree with that. That's the reason we should be worrying about human villainy (the kind where the villainy is contingent on intelligence) and not banana, dog, or computer villainy.
No!
Programs that already evolve were "told" to evolve, so a program only does what it was written to do.
Of course you can always attribute the state of a running program to the nature of the rules governing the program; so, too, can you attribute the state of a living system to the nature of the rules governing that system -- organic chemistry, the operation of the laws of physics, hell, even the design of God if you're into that sort of thing. A living system that evolves was "told" to evolve just as much as any program was, in the sense that inexorable rules imposed from without force it to do so.
The burden here, which you do not substantively engage with, is to identify a relevant difference in the nature of the rules governing these domains. The questions before you are:
1) Is there any real process that can take place in the biochemical domain that cannot be efficiently simulated or otherwise replicated in the computational domain?
2) Does the real process that you've identified in (1) play a necessary role in cognition? (As in, without this process, cognition would be impossible.)
If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI.
1) All chemical processes, not only biochemical ones. You can understand chemistry, model it in equations, and run simulations on computers. But at the end of the day, virtual water and iron do not produce real rust. You can make a machine that mixes water and iron and produces rust - but the process itself is not done by the computer, just the mixing. The issue with biochemical phenomena is that we are not looking for the mixing process (like the machine above), but for the chemical reactions themselves. Thinking is not like pumping blood or generating motion; it's more like fermenting - something machines cannot do without microorganisms.
2) It comes from the belief that human thoughts ARE biochemical reactions. They are not the result of them. Producing human thoughts would be like producing rust: you can't compute that, you need real iron and water. Of course you can simulate something with similar or better results. I don't challenge the idea of AI. I challenge the idea that AI can go wrong in the sense that a program made to make new music will evolve into a computer supervillain that will hack our bank accounts or something like that.
First, it was chess. Turing himself suggested it as a benchmark, and everyone agreed that whatever process it was that undergirded good chess play qualified as thinking. Well, they made a computer that could do it better than any human (spike ball, victory dance?) -- and then suddenly it wasn't thinking anymore! (goalposts moved 100yd.)
Then it was mathematical proof. Of course from the beginning computers were used to assist in calculation, but they would find applications in generating new insights as well -- a computer resolved the Robbins Conjecture, an infamous problem that was so difficult that Tarski assigned it as a challenge problem to his best students. (Spike ball? Victory dance?) "Pshaw, it was just searching for consequences of the axioms," said the skeptics. (goalposts moved 100yd., and P.S., no *****, Sherlock. That's what mathematicians do.)
Then it was "creativity." Computers will never be creative. Well, what about when they start making music? Poetry? (Spike ball? Victory dance?) "It was programmed to do those things!" (goalposts moved 100yd, and this answer is fallacious for reasons I've mentioned.)
Then games of imperfect information. Poker? Crushed. In fact, heads-up limit hold'em was essentially solved! Not only can computers bluff and read bluffs, they can do so in a provably optimal fashion -- they are necessarily as good as or better than the best human. (Spike ball? Victory dance?)
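For what it's worth, that poker result was built on the counterfactual-regret-minimization (CFR) family of algorithms. Below is a toy sketch of the underlying regret-matching idea, applied to rock-paper-scissors rather than poker; the game, iteration count, and seed are all illustrative choices of mine, not anything from the actual poker work:

```python
import random

# Toy regret matching on rock-paper-scissors. This is the core update
# behind the counterfactual-regret-minimization (CFR) family used for
# poker; everything here is simplified for illustration.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]  # PAYOFF[my][opp] from the row player's view

def strategy_from_regret(regret):
    """Mix over actions in proportion to positive cumulative regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regret = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regret(regret)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        my = rng.choices(range(ACTIONS), weights=strat)[0]
        opp = rng.choices(range(ACTIONS), weights=strat)[0]  # self-play
        # Regret: how much better each action would have done vs. our play.
        for a in range(ACTIONS):
            regret[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # time-averaged strategy

avg = train()  # drifts toward the uniform equilibrium (1/3, 1/3, 1/3)
```

In self-play the average strategy converges toward the game's equilibrium (uniform play for RPS); CFR applies the same idea at every decision point of a poker game tree.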
Neural networks are science fiction? Are you f***ing joking? They made a universal translator with them, and you can install it on your phone today. Or how about a robot that learns to cook by watching YouTube?
Really, I could go on like this for 50 pages.
I will leave my skepticism behind when the poker-playing program suddenly, out of nowhere, starts playing another game. Or when the cooking robot starts making other robots by itself, with no original lines of code telling it to do so.
There seems to be a kind of cognitive bias in the skeptical reactions to these things -- call it the "Mommy, I'm Special Heuristic", or MISH for short. It's a variant on the same old belief that man is at the center of the universe, whether it manifests itself in an overtly-spiritual way as in "machines don't have souls", or in the form of something like italofoca's belief in the specialness of human brain chemistry, or in some other way altogether.
Wow, so much imagination to interpret the origin of skepticism...
Sorry for the irony, but unless I say I think we are the center of the universe, I would not like to see people claiming my arguments come from that.
Organic chemistry IS special because it's the chemistry of carbon chains and complexes, and it reaches unique results because of that. You can argue those results can be replicated, but so far they haven't been, and believing they will be in the future is wishful thinking.
Blinking Spirit's mention of 'strange loops' is also a very interesting concept. I find myself with a bit more understanding of how my self-awareness, idea generation, and decision making work, thanks to that explanation.
So maybe the better thing to be doing than trying to make a 'self-aware' AI is to continue trying to program and design machines that do specific things humans are still better at doing than computers. Following that, if it is desired for some reason to make something human-equaling or surpassing in all the tasks we've designed computers to do better than humans, all those things could be sought to be integrated into a singular machine.
We find ourselves in agreement about what computer programs are and are not good at doing; compared to humans, they both surpass us and fall short of us at certain tasks. Perhaps we would do well to define which tasks we would like a computer to be capable of doing. A good place to start would be useful tasks, of both simple and complex structure, that humans usually find tedious or otherwise don't like doing. Off the top of my head, though, it seems like most of the major ones (like math and translating) have already been worked out for computer programs to do.

One thing I think would be immensely useful isn't even something humans dislike doing: generating ideas. Ideas as in solutions to problems, concepts of inventions, all of the things that we do in a strange-loop-like manner. It would need to be more than something that just generates random permutations and combinations of the data it takes in as input, though. I wonder just how much structure of programming we'd have to put behind such a task so that it generates ideas like a human does. This could be useful in that humans have, throughout all of history, advanced themselves with ideas: how we think, what we build, what we design, what forms of art we make, what decisions we take. Not all ideas are useful, but there are often ones that propel us forward. We could seek to streamline that idea generation with filters that specify chosen categories of data combination, using the fastest processors we can design, and do it significantly faster than a million of our most creative individuals could.
Of course, my general thought about the above is that it may inherently require a self-aware, human-equaling AI to do. More to the point, it would be desirable to have a machine that generates ideas like a human as fast as theoretically possible and with a memory space that allows for something more complex than any one of us can singularly envision.
1. There's no such thing as strong AI. A super AI, an AI that can function on a cognitive level like a human being, an AI that can do not one of the feats, but all of them; an AI that can actually form the thought, "If you can't specifically identify the process that you're claiming is not computational and identify where in human cognition that magic is happening, then you're not demonstrating that organic/inorganic distinctions undermine the idea of AI," does not exist and may never exist.
Clearly strong AI isn't here today; that's certainly not being contested. My beef is with the affirmative arguments people are making for the position that strong AI is unlikely. They've all been pretty bad.
If you're going to research AI safety, why the bleeding frell would you research safety against an artificial mind? To me, that's like the US military saying, "Hey, should we spend the bulk of our time researching the threat the Russians pose against us, or that General Zod poses against us?"
Sure, let's agree that figuring out how to stop the Russians from nuking the world is more important than figuring out how to stop a totally hypothetical computer from nuking the world. Then again, let's also agree that figuring out how to stop the Russians from nuking the world is more important than producing another season of Duck Dynasty. What of it? People are not required to invest their thought and money into only the single most important problem that can be imagined at a given moment.
Moreover, I'd incorporate by reference what Tiax said above. Elon Musk's indulgence of his "fantasy of fighting SkyNet" might incidentally result in a solution for problems in, say, the behavior of self-driving cars in life-threatening situations.
Isn't the FAR MORE PERTINENT threat not a super-advanced AI capable of doing everything that humans do turning on humanity, but instead a regular AI, capable of only what it was written to do, in the hands of the wrong people? That maybe, like with every single form of technology ever created, it's the people whose hands the technology is in that are the main threat?
Don't you think that instead of assessing the threat of a strong AI, we should spend our time researching the threat level of more and more complex versions of the programs that only do what they're told to, as italofoca put it, and what would happen if they got into the wrong hands? You know, an actual threat we're dealing with right now?
Again, this is like saying people should stop devoting time to composing music, writing books, or painting because there's evil in the world that needs fixing. Human endeavor just doesn't work this way. Let the AI researchers research safe AI, let the Duck Dynasty people make Duck Dynasty, and let the people who are supposed to stop humans from destroying the world do that. We can all agree that the last of the three is the most important job right now. That doesn't abnegate or reduce the value of the other jobs.
Also, your exhortation to work on intermediate technologies appears to be just a description of what safe AI researchers actually do. Apparently they're working on real things (e.g. self-driving cars) before they get going on Skynet.
(However, if we're going to try to engage with this question in some deeper way -- I'd begin by asking which is the more quixotic pursuit: attempting to design a moral AI (a project for which, mind you, you have the luxury of starting from a clean room and empty blueprint) or fixing the evil lurking in the depths of the human psyche?)
The issue with biochemical phenomena is that we are not looking for the mixing process (like the machine above), but for the chemical reactions themselves. Thinking is not like pumping blood or generating motion; it's more like fermenting - something machines cannot do without microorganisms.
This is a totally unsubstantiated assertion! How do you know the thinking is in the organic process itself, rather than being an epiphenomenon or result of the process?
2) It comes from the belief that human thoughts ARE biochemical reactions. They are not the result of them. Producing human thoughts would be like producing rust: you can't compute that, you need real iron and water.
Thoughts are not "rust." We can infer this by an analysis of the phenomenon of language. I can write down my thoughts and send them to you and in so doing I produce corresponding thoughts in your mind, but at no time do I send you any of the "rust" from inside my brain.
Now, it could still be that the "rust" was a necessary precursor to the thoughts, but this is exactly the position I was asking you to argue for and you have yet to substantiate it.
Of course you can simulate something with similar or better results. I don't challenge the idea of AI. I challenge the idea that AI can go wrong in the sense that a program made to make new music will evolve into a computer supervillain that will hack our bank accounts or something like that.
If you agree that I can effectively simulate the organic precursors, and you agree that a mind based on the organic precursors is capable of maliciously hacking your bank account, then what, exactly, is the barrier that stops a simulated mind from maliciously hacking your bank account?
I will leave my skepticism behind when the poker-playing program suddenly, out of nowhere, starts playing another game. Or when the cooking robot starts making other robots by itself, with no original lines of code telling it to do so.
Much like you, I'll believe it when I see it.
Wow, so much imagination to interpret the origin of skepticism...
Sorry for the irony, but unless I say I think we are the center of the universe, I would not like to see people claiming my arguments come from that.
Organic chemistry IS special because it's the chemistry of carbon chains and complexes, and it reaches unique results because of that. You can argue those results can be replicated, but so far they haven't been, and believing they will be in the future is wishful thinking.
Certainly I understand your desire not to be replaced by a straw man; however, that can hardly be the case when you freely cop to the position I was putting you on in the very next line.
Your position IS based on a specialness heuristic! You've just said so! Of course, you believe your specialness heuristic is justified (though I maintain that all you've made are bare assertions along those lines), but it is no less a specialness heuristic for all that.
Organic chemistry IS special because it's the chemistry of carbon chains and complexes, and it reaches unique results because of that.
What about organic chemistry makes the thoughts generated by organic brains categorically different from the thoughts generated by silicon brains?
Is there some reason why a silicon brain could never achieve the complexity of a carbon brain?
Your argument is like saying there's something magically different about data stored in a hard disk drive versus a solid state drive. In one case, the data is represented as magnetic moments. In the other case, it's represented as states of electronic circuit loops. But the data represents the same "thing" (a program/document/image/etc.) regardless of how it's stored. If we stored the data in an organic medium (say, DNA) it would again represent the exact same thing.
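The storage analogy can be made concrete with a minimal sketch: the same string round-tripped through three arbitrary encodings. The encodings below merely stand in for "magnetic moments" and "circuit states"; nothing here is specific to any real drive hardware:

```python
# The same datum ("a thought", here a string) stored three ways: raw
# bytes, a bit string, and hex. Decoding any representation recovers
# exactly the same thing; the medium never touches the meaning.
message = "cogito ergo sum"

raw = message.encode("utf-8")                 # byte representation
bits = "".join(f"{b:08b}" for b in raw)       # stand-in for magnetic moments
hexed = raw.hex()                             # stand-in for circuit states

from_bits = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
from_hex = bytes.fromhex(hexed)

assert raw == from_bits == from_hex
assert from_bits.decode("utf-8") == message   # the identical "thought" back
```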
Clearly strong AI isn't here today; that's certainly not being contested. My beef is with the affirmative arguments people are making for the position that strong AI is unlikely. They've all been pretty bad.
I don't feel your argument has been stellar either. Saying we've been able to do a bunch of things previously thought impossible does not mean we'll ever create a strong AI.
italofoca has a point. All of these AIs that we've created are ultimately just programs doing what humans programmed them to do. In that sense, they are fundamentally on the same level as the abacus. Their computations are more complex, they are capable of doing more things, but they are still computational devices doing what human beings tell them to do. Nothing has approached the level of sentience.
And don't get me wrong, I don't in any way wish to diminish or disrespect the achievements of those involved in computer science. Quite the contrary, these accomplishments are amazing. That said, I'm quite comfortable putting the strong AI concept as "science fiction."
Sure, let's agree that figuring out how to stop the Russians from nuking the world is more important than figuring out how to stop a totally hypothetical computer from nuking the world. Then again, let's also agree that figuring out how to stop the Russians from nuking the world is more important than producing another season of Duck Dynasty. What of it? People are not required to invest their thought and money into only the single most important problem that can be imagined at a given moment.
No, of course they're not. That does not mean investing millions of dollars to fight SkyNet isn't astoundingly idiotic. SkyNet does not exist. Nor does General Zod. To go about your day actually planning to fight them is therefore idiotic.
Moreover, I'd incorporate by reference what Tiax said above. Elon Musk's indulgence of his "fantasy of fighting SkyNet" might incidentally
I think the better word is "accidentally." Yes, he might accidentally cause some good by his being a dumbass with his money.
That does not mean he is not a dumbass.
Again, this is like saying people should stop devoting time to composing music, writing books, or painting because there's evil in the world that needs fixing.
No, that's not even close to what it's like. It's instead like worrying about monsters underneath one's bed. They don't exist. To worry about them is foolishness. Why? Because it is foolish to worry about nonexistent threats.
Human endeavor just doesn't work this way. Let the AI researchers research safe AI, let the Duck Dynasty people make Duck Dynasty, and let the people who are supposed to stop humans from destroying the world do that. We can all agree that the last of the three is the most important job right now. That doesn't abnegate or reduce the value of the other jobs.
I have no problem with people researching practical applications with AI. Indeed, I have no problem with someone being a dumbass with ten million dollars. But that person is still a dumbass with his money.
Also, your exhortation to work on intermediate technologies appears to be just a description of what safe AI researchers actually do. Apparently they're working on real things (e.g. self-driving cars) before they get going on Skynet.
I'm glad some practical good is actually coming out of this man's stupidity, certainly.
(However, if we're going to try to engage with this question in some deeper way -- I'd begin by asking which is the more quixotic pursuit: attempting to design a moral AI (a project for which, mind you, you have the luxury of starting from a clean room and empty blueprint) or fixing the evil lurking in the depths of the human psyche?)
Your talent for wordplay remains as sharp as always. Nevertheless, if you're seriously asking me whether it's more unrealistic and impractical to attempt to reinvent the human mind from scratch, instead of concerning oneself with the already extant seven billion human minds, the answer is yes.
This is a totally unsubstantiated assertion! How do you know the thinking is in the organic process itself, rather than being an epiphenomenon or result of the process?
When people think, we can see changes in the brain: both electrical activity and chemical reactions. The simplest explanation is that those things are the thoughts themselves.
You can always believe thoughts are something else, something unknown, and that what we observe are precursors or results of them. But that's jumping through hoops imo.
Thoughts are not "rust." We can infer this by an analysis of the phenomenon of language. I can write down my thoughts and send them to you and in so doing I produce corresponding thoughts in your mind, but at no time do I send you any of the "rust" from inside my brain.
The phenomenon of language exists purely and entirely in the minds of those involved and nowhere else. Let me clarify my position.
Take the infinite monkey theorem (http://en.wikipedia.org/wiki/Infinite_monkey_theorem). Imagine the infinite monkeys as a machine. This machine will outdo humans and supercomputers in terms of intellectual production, because it will produce every piece of information expressible in our languages - everything that can be written will be written by this machine.
The machine, however, does not think in any meaningful sense of what we believe "thinking" is - it is literally random inputs and nothing more. In the end, the machine did nothing impressive - it just produced all the combinations of letters, which only have meaning because someone gave meaning to them. Without anyone to read them, the symbols the infinite monkey machine made are meaningless, which points out that the "information" is ultimately produced by our brains when we read the symbols, and not by the machine.
Of course, the way computers solve problems is essentially the way this machine solves them, except more efficiently. Instead of trying everything, a computer only tries the things that can possibly work (things pre-determined by the code).
This is no different for ordinary computers. They don't hold any information until some human reads what is on the screen. In the end it's the human brain that produces the information, as a reaction to seeing / reading / hearing the symbols. This is also how language works, and why, despite thoughts being "rust", they don't need to be literally moved from one mind to another for language to be possible. It's more like "rust" causing a phenomenon that produces "rust" elsewhere. We don't know precisely why and how we do that, but those chemical reactions are the best answer for it at the moment imo.
My skepticism about AI the way you see it is that, for it to exist, the computer would have to be able to interpret the symbols like we do. It cannot work on the same fundamentals the infinite monkey machine does (which are the fundamentals that all computers, at least the ones I know of, work on). But how an organism has to work for that to be possible is, at the end of the day, unknown. The only hint I know of is brain chemistry, so I feel it's reasonable to assume it is an important part of this largely unknown process.
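For what it's worth, the infinite-monkey machine itself is trivial to write down, and a sketch makes the point above concrete: the generator emits random symbols with no notion of meaning, and "noticing the target has appeared" happens entirely in the observer's check, outside the generator. The tiny alphabet and short target are my choices, made only so the toy example terminates:

```python
import random

# A miniature "infinite monkey": it emits random symbols one at a time.
# The generator assigns no meaning to anything it produces; recognizing
# that the target has appeared happens entirely in the observer's check
# (the while condition), not in the generator itself.
def monkey_typing(target, alphabet="ab", seed=7):
    rng = random.Random(seed)
    typed = ""
    while target not in typed:
        typed += rng.choice(alphabet)
    return typed

transcript = monkey_typing("abba")
# The target is guaranteed to appear eventually; with a realistic
# alphabet and a book-length target, the expected wait is astronomical.
```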
If you agree that I can effectively simulate the organic precursors, and you agree that a mind based on the organic precursors is capable of maliciously hacking your bank account, then what, exactly, is the barrier that stops a simulated mind from maliciously hacking your bank account?
I don't think a machine can effectively simulate us, in the same way we can't simulate a machine.
Overall I find it very weird that you guys are treating thinking and information as something abstract/incorporeal. It's not like reasoning alone is behind human thinking. Our feelings are part of our thinking process just like reasoning, or at least that's what the psychology people I know believe (and something I tend to agree with, from my own experience).
Certainly I understand your desire not to be replaced by a straw man; however, that can hardly be the case when you freely cop to the position I was putting you on in the very next line.
Your position IS based on a specialness heuristic! You've just said so! Of course, you believe your specialness heuristic is justified (though I maintain that all you've made are bare assertions along those lines), but it is no less a specialness heuristic for all that.
As far as I know, a specialness heuristic does not imply anthropocentrism. I can be accused of one but not the other. I could be wrong though...
This is a totally unsubstantiated assertion! How do you know the thinking is in the organic process itself, rather than being an epiphenomenon or result of the process?
When people think, we can see changes in the brain: both electrical activity and chemical reactions. The simplest explanation is that those things are the thoughts themselves.
You didn't respond to my last post, so I'll rephrase the concept: why would it be impossible to replicate the electrical structures you've identified using non-organic circuits? The "electricity and reactions" you're discussing represent physical states of matter in the brain, which obey the laws of physics. If we built a very fast and complex silicon computer that accurately simulated these physical states of matter (and thereby generated thoughts and ideas) why would this not allow the computer to "think" the way a human does?
There are probably more efficient ways to allow a computer to achieve human-like intelligence than brute-force simulation of a human brain, but why should this not work in principle?
I like how you use the term brute force here. It helps us separate the argument against this human-like AI into two parts.
First, AI could come from "brute force", a mechanical analogue of a human brain. I say this is improbable because we don't know how the human brain works, and the little we do know involves organic chemistry. Talking about a replica human brain is like talking about a machine bacterium, or a machine stomach. It could be possible, I never denied that; I just say we are pretty far from that, and maybe we will never accomplish it.
Second, AI may not work precisely like our brain but could have the same results (BS's original counter-argument to my post). My argument is that human cognition is able to give new meaning to the information it receives, while computers are limited to giving the feedback their code tells them to give. Learning algorithms are able to write extra lines or rewrite old ones in the procedure, but that doesn't change the fact that the feedback was prescribed. So even with a very intelligent AI, programmers know what it will end up doing (poker bots playing poker, music-creator bots creating music, etc.).
The way I see it, an AI turning malicious and conquering the world is as likely as someone jamming random keyboard inputs into a PC and ending up hacking world leaders' mailboxes and causing WW3. Hey, it's POSSIBLE, right? But hell, I'm not worried about that. Arguing that an AI is more likely to do that than a random program doesn't follow, because while an AI will produce working code (unlike a random program), it will produce working code and not do what it was not meant to do.
The way I see it, an AI turning malicious and conquering the world is as likely as someone jamming random keyboard inputs into a PC and ending up hacking world leaders' mailboxes and causing WW3. Hey, it's POSSIBLE, right? But hell, I'm not worried about that. Arguing that an AI is more likely to do that than a random program doesn't follow, because while an AI will produce working code (unlike a random program), it will produce working code and not do what it was not meant to do.
I don't think malicious AI are a problem right now, but they're a conceivable problem as soon as we create computers with human-like intelligence.
You acknowledge it's at least theoretically possible to create a computer with human-like intelligence, at least by using the "brute force" method I described (and probably by other methods as well).
As I mentioned in an earlier post, we currently possess computers that are significantly faster than human brains. So it should be relatively trivial to run an "overclocked" human brain on a computer, once we have a normal human brain simulated. A brain that works many times faster than any human brain would naturally be much smarter, meaning we humans can now be outsmarted by an AI. All that's needed is for the simulated brain to have malicious intentions, and we're in trouble.
I don't feel your argument has been stellar either. Saying we've been able to do a bunch of things previously thought impossible does not mean we'll ever create a strong AI.
It's entirely possible that my arguments haven't been stellar -- I'm not an AI expert or even that interested in it, though I do have many colleagues who are. However, I've been debating long enough to know that sometimes the problem isn't my arguments, but my opponents being irrationally wedded to a specific position that they simply cannot be moved from. The fact that you keep on calling Elon Musk a dumbass (which is a laughable notion) and bringing up General Zod makes me suspect the latter, but I'll extend the benefit of the doubt a bit here.
Let me write down some arguments in favor of AI in a bit more detail. The detailed exposition should mean that if you have a good-faith problem with these arguments, you can point out specific assumptions or inferences which are bad. Note that e.g. mentioning General Zod will immediately disqualify you from good-faith engagement with the argument, because General Zod is not going to be among the assumptions or inferences.
Here's an argument to start off with:
1) Humans exist.
2) Humans are intelligent.
3) Humans are energy-efficient.
Therefore,
4) Energy-efficient human-level intelligence (in other words, strong AI) is not forbidden and indeed is expressly permitted by the laws of nature.
This rather trivial-looking argument actually does a surprising amount of work against your position here.
First of all, it implies that if strong AI is science fiction, then it's "hard" science fiction. In fact it creates a specific distinction between strong AI and something like, say, warp drive -- with warp drive, we can look at the laws of nature as we know them and say with very high confidence that even if it were possible, it would require impractical amounts of energy to realize. On the other hand, the existence of strong intelligence in nature proves there are no natural barriers to strong AI. General Zod is more akin to warp drive -- and we can thereby conclude that your General Zod analogies are broken. (Having so demonstrated, I'm going to skip over the parts of your posts where you used them.)
Second, it shows that strong AI is an engineering problem, not a physics problem. This has two upshots for the discussion we're having here: one, engineering problems are rarely outright unsolvable, and demonstrating that a particular one is unsolvable is a case of "proving a negative." This means that an affirmative anti-strong-AI argument is going to be hard -- cheap shots about General Zod won't do. Two, it opens the question up to the kinds of heuristic and circumstantial arguments I was making in a prior post. The cumulative nature of engineering knowledge means that the likelihood of a hard engineering problem being solved is (circumstantially) increased by the solution of easier, related problems. Thus my list of examples of achievements in AI research ought to increase the epistemic probability you assign to strong AI.
italofoca has a point. All of these AI that we've created are ultimately just programs doing what humans programmed them to do. In that sense, they are fundamentally on the same level as the abacus. Their computations are more complex, they are capable of doing more things, but they are still computational devices doing what human beings tell them to do.
I already addressed this argument. The idea that a computational device is "on rails" but a human being is not is an instance of the human specialness heuristic. It is not as though the computer follows rules but we fly free. We too follow inexorable rules of physics and biochemistry that we cannot break.
It's certainly the case that our rules permit a wider variety of outwardly-observable behaviors than do the computer rule sets developed to date. However, in order to turn this into an affirmative argument against strong AI, it is necessary to identify a specific fundamental barrier that would actually stop a computer from exhibiting a rule set of equal complexity. (bitterroot has been attempting to engage in a good argument against the existence of such a barrier; I won't repeat it in detail but it's the matter of directly simulating the rules of biochemistry with a computer.)
Nothing has approached the level of sentience.
"Hasn't" isn't the same as "won't" or "can't" or "is unlikely."
I think the better word is "accidentally." Yes, he might accidentally cause some good by his being a dumbass with his money.
That does not mean he is not a dumbass.
I don't understand your repeated insistence on calling Elon Musk a dumbass. You know that is an absurd notion, right? Furthermore, what business do you have declaring this to be accidental? Are you a mind-reader? It would only be accidental if Elon Musk were unaware of the near-term benefits of safe AI research. I guess starting from your unfounded assumption that he's a dumbass, you might circumstantially conclude that. I deny the assumption and your conclusion as well.
I'm glad some practical good is actually coming out of this man's stupidity, certainly.
So, just to be clear -- one of my assertions, which you contested by way of direct denial, was "Musk's millions aren't being wasted here." You now agree with that assertion?
Your talent for wordplay remains as sharp as always. Nevertheless, if you're seriously asking me whether it's more unrealistic and impractical to attempt to reinvent the human mind from scratch, instead of concerning oneself with the already extant seven billion human minds, the answer is yes.
That wasn't what I asked, though, was it? My question wasn't whether we ought to be concerned with human evil. Of course we should be concerned. Very concerned. Maximally concerned, even.
My question was whether or not it can be fixed. To make the question more specific and easier to engage with: With Musk's $10M, AI researchers can change the way a self-driving car "thinks" about the trolley problem. If we gave you the $10M to distribute to whatever organization or recipients you thought best instead, could you use it to change the way humans think about the trolley problem? What effects can we expect from your chosen intervention and how likely is it to succeed? Based on your answer to the preceding, which is ultimately the more practical use of the money?
When people think, we can see changes in the brain: both electrical activity and chemical reactions. The simple explanation is that those things are the thoughts themselves.
In order for something to be the simplest explanation, it first has to be an explanation, and in order to be an explanation, it must explain all observed facts, not just the ones you cherry-pick. Thoughts are associated with biochemical brain activity. They can also be transported from mind to mind without the associated biochemical activity. In fact, they can be archived and restored long after the original electrochemical activity has dissipated into pure entropy.
Your explanation has to account for the fact that thoughts are inorganically transportable from mind to mind.
The phenomenon of language exists purely and entirely in the minds of those involved and nowhere else.
Not so. Shakespeare's mind no longer exists. I can still reproduce some of his thoughts in my mind by reading his linguistic output. If this procedure is contingent on his mind, how can it take place when his mind does not exist?
Take the infinite monkey theorem (http://en.wikipedia.org/wiki/Infinite_monkey_theorem). Imagine the infinite monkeys as a machine. This machine will outdo humans and supercomputers in terms of intellectual production because it will produce every piece of information expressible in our languages - everything that can be written will be written by this machine.
The machine, however, does not think in any meaningful sense of what we believe "thinking" is - it is literally random input and nothing more.
I deny this thesis. We don't know how the human creative process works; randomness or pseudorandomness could be involved in a crucial way. If it is, and I deny that the output of a device with random inputs is thought, then I not only deny that Monkey Shakespeare thinks -- I may also be denying that Shakespeare himself thought! This is an absurdity. Thus the reasoning is invalid; it needs a further assumption that no randomness is involved in human thought.
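For concreteness, the monkey machine under discussion can be sketched as a toy program. The lowercase alphabet and the short target word are my own illustrative choices:

```python
import random
import string

def monkey_type(target, seed=0):
    """Draw uniformly random strings until one equals `target`.

    A toy version of the infinite monkey theorem: given unbounded
    attempts, pure chance eventually reproduces any fixed text.
    """
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if guess == target:
            return attempts

# Even a 3-letter target takes on the order of 26**3 = 17576 draws.
tries = monkey_type("ape")
```

The attempt count grows as 26^n with target length n, which is what separates "certain in the limit" from "achievable in practice" - and it is exactly this gap between random generation and directed production that the two sides here are arguing over.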
In the end, however, the machine did nothing impressive - it just produced all the combinations of letters, which only have meaning because someone gave meaning to them. Without anyone to read them, the symbols the infinite monkey machine made are meaningless, which points out that the "information" is ultimately produced by our brains when we read the symbols and not by the machine.
Certainly I agree that meaning is assigned by minds. However, that does not speak to the point. The point is that these meanings can be transferred between minds without transferring any of the organic matter or reactants! Therefore the organic chemistry was not a crucial part of the meaning. It can be reconstructed without it.
Your whole thesis here, as I see it, is that minds depend crucially on organic chemistry. You are not going to be able to prove this thesis by referring to abstract properties of minds, because those properties would be the same however a mind came to exist. You must specifically explain how organic chemistry is irreducibly a part of the process.
This is also how language works, and why, despite thoughts being "rust", they don't need to be literally moved from one mind to another for language to be possible. It's more like "rust" causing a phenomenon that produces "rust" elsewhere. We don't know precisely why and how we do that, but those chemical reactions are the best answer for it at the moment, imo.
Again, the rust never leaves your mind, but the thoughts do. You can reconstruct the thoughts long after the rust is gone. Thus the thoughts may be caused by the rust, but they certainly aren't the rust, and you certainly have no basis for concluding that the rust is necessary.
I don't think a machine can effectively simulate us the way we can't simulate a machine.
I don't think just switching two words around in a sentence constitutes an argument...
Overall I find it very weird that you guys are considering thinking and information as something abstract/incorporeal.
It's not like reasoning alone is behind human thinking. Our feelings are part of our thinking process just like reasoning, or at least that's what the psychology people I know believe (and something I tend to agree with, from my own experience).
...
This is getting very close to a "computers don't have souls" argument.
As far as I know the specialness heuristic does not imply anthropocentrism. I can be accused of one but not the other. I could be wrong though...
Quite true. I drop any charge of anthropocentrism.
It's entirely possible that my arguments haven't been stellar -- I'm not an AI expert or even that interested in it, though I do have many colleagues who are. However, I've been debating long enough to know that sometimes the problem isn't my arguments, but my opponents being irrationally wedded to a specific position that they simply cannot be moved from. The fact that you keep on calling Elon Musk a dumbass (which is a laughable notion) and bringing up General Zod makes me suspect the latter, but I'll extend the benefit of the doubt a bit here.
Dude, I'm more than willing to meet you halfway here. Just be warned that if your aim is to demonstrate this guy's expenditure of 10 million dollars to fight killer robots was a smart purchase on his part, you've got a LOT of ground to cover to meet me at said halfway point.
Let me write down some arguments in favor of AI in a bit more detail.
Strong AI, mind. Regular AI we already have. In fact, I played against five AI not too long ago in DotA.
The detailed exposition should mean that if you have a good-faith problem with these arguments, you can point out specific assumptions or inferences which are bad. Note that e.g. mentioning General Zod will immediately disqualify you from good-faith engagement with the argument, because General Zod is not going to be among the assumptions or inferences.
Now hang on there. General Zod was brought up because he is a fictional character, as is the Terminator. General Zod, therefore, is a completely valid analogy. In both cases, a person is expending large amounts of resources against a nonexistent, fictional threat.
Now, evidently your aim is to demonstrate that spending money to fight the Terminator is neither as frivolous, nor as foolish as fighting General Zod. You're welcome to do so, but recognize that the burden of proof is on you there.
Here's an argument to start off with:
1) Humans exist.
2) Humans are intelligent.
3) Humans are energy-efficient.
Therefore,
4) Energy-efficient human-level intelligence (in other words, strong AI) is not forbidden and indeed is expressly permitted by the laws of nature.
Umm... Take a look at that argument again. Even if we were to grant you premise #3 (I'm not myself sure what you mean by "energy-efficient"), you have a rather conspicuous problem. Namely, you are using the existence of humans — who, as you say, have human-level intelligence — to say that human-level artificial intelligence is permitted by nature.
Except, that doesn't follow, because humans aren't artificial intelligence. Matter of fact, they're precisely the opposite.
This rather trivial-looking argument actually does a surprising amount of work against your position here.
First of all, no it doesn't, as I demonstrated above.
Second of all, whether the existence of AI is "permitted by nature" was never a claim I took issue with. I have no issue with the claim that it is physically possible that a strong AI could exist.
First of all, it implies that if strong AI is science fiction, then it's "hard" science fiction.
*Shrug* Rather a meaningless distinction.
In fact it creates a specific distinction between strong AI and something like, say, warp drive -- with warp drive, we can look at the laws of nature as we know them and say with very high confidence that even if it were possible, it would require impractical amounts of energy to realize. On the other hand, the existence of strong intelligence in nature proves there are no natural barriers to strong AI. General Zod is more akin to warp drive -- and we can thereby conclude that your General Zod analogies are broken. (Having so demonstrated, I'm going to skip over the parts of your posts where you used them.)
No, because in both cases, still ridiculous.
If I am afraid of Darth Vader coming after me, I am being ridiculous, because Darth Vader is fictional.
If I am afraid of Roy Batty coming after me, I am being ridiculous, because Roy Batty is fictional.
Whether your choice of boogeyman comes from the hard or soft science fiction genre doesn't make any difference. If I added Khal Drogo to that list, still ridiculous. If I added the boogeyman to that list, still ridiculous. They are each of them ridiculous.
Second, it shows that strong AI is an engineering problem, not a physics problem.
I have never stated otherwise.
This means that an affirmative anti-strong-AI argument is going to be hard -- cheap shots about General Zod won't do.
Not liking an analogy does not stop it from being pertinent, Crashing.
Two, it opens the question up to the kinds of heuristic and circumstantial arguments I was making in a prior post. The cumulative nature of engineering knowledge means that the likelihood of a hard engineering problem being solved is (circumstantially) increased by the solution of easier, related problems. Thus my list of examples of achievements in AI research ought to increase the epistemic probability you assign to strong AI.
The fact that you solved a previous problem does nothing to suggest that you can solve the next.
Further, as I've said before, no computer in existence has ever approached sentience. Surely there is a world of difference between an abacus and a calculator, a calculator and an early computer, an early computer and a computer in the 1990s, a computer in the 1990s and Watson. This cannot be denied, and the achievements of computer scientists in this regard are to be lauded.
That being said, italofoca has a point, and that point stands: At no point has a computer transcended the model of the computational device that does what the human beings tell it to do.
I already addressed this argument. The idea that a computational device is "on rails" but a human being is not is an instance of the human specialness heuristic. It is not as though the computer follows rules but we fly free. We too follow inexorable rules of physics and biochemistry that we cannot break.
Irrelevant nonissue. Something is sentient or it is not. Computers are not sentient. Saying, "Well sentience ain't all it's cracked up to be," does nothing to change this.
It's certainly the case that our rules permit a wider variety of outwardly-observable behaviors than do the computer rule sets developed to date. However, in order to turn this into an affirmative argument against strong AI, it is necessary to identify a specific fundamental barrier that would actually stop a computer from exhibiting a rule set of equal complexity.
Nonexistence is a fantastic barrier against performing most things, you will find.
I don't understand your repeated insistence on calling Elon Musk a dumbass. You know that is an absurd notion, right?
It is an absurd notion why? You're begging the question, asserting this without rationalization, whereas I have provided plenty of rationalization: he's spending large sums of resources in the hopes of fighting a nonexistent threat.
Now, maybe you didn't realize this guy actually believed he was fighting killer robots. That's fine.
But if your comments were made with that knowledge, then please point out to me why killer robots posing an existential threat to humanity within the next five years is a rational belief, that I may see Elon Musk as the Cassandra-figure that he truly is.
If I were to spend 10 million dollars to fight against Darth Vader, would you think me a fool? Of course you would.
What about the Boogeyman? 10 million dollars to fight against the Boogeyman? How about it Crashing00? No, Highroller maintains foolishness there, because he's wasting his money. The Boogeyman doesn't exist.
So have at it, Crashing00. Demonstrate my "absurd notions."
Furthermore, what business do you have declaring this to be accidental? Are you a mind-reader? It would only be accidental if Elon Musk were unaware of the near-term benefits of safe AI research. I guess starting from your unfounded assumption that he's a dumbass, you might circumstantially conclude that. I deny the assumption and your conclusion as well.
If he wants to donate the money to further the research that this organization is actually doing, then he's fine.
If he's doing it to fight killer robots, which is exactly what he said he was donating it for, then he's a moron.
Yes, his moronic actions might provide a good deal of benefit. He might accidentally do a lot of good. Still a moron.
Incidentally, what's gotten you so worked up about this?
So, just to be clear -- one of my assertions, which you contested by way of direct denial, was "Musk's millions aren't being wasted here." You now agree with that assertion?
No, of course not. Your confusion lies in who I was accusing of doing the wasting.
If Musk believes he's fighting killer robots, then he's wasting his money.
That doesn't necessarily mean the Future of Life Institute is wasting the money Musk gave them.
Your talent for wordplay remains as sharp as always. Nevertheless, if you're seriously asking me whether it's more unrealistic and impractical to attempt to reinvent the human mind from scratch, instead of concerning oneself with the already extant seven billion human minds, the answer is yes.
That wasn't what I asked, though, was it?
I feel it speaks exactly to what you asked.
My question was whether or not it can be fixed.
Depends on what you mean by fixed. It can certainly be made better.
To make the question more specific and easier to engage with: With Musk's $10M, AI researchers can change the way a self-driving car "thinks" about the trolley problem. If we gave you the $10M to distribute to whatever organization or recipients you thought best instead, could you use it to change the way humans think about the trolley problem? What effects can we expect from your chosen intervention and how likely is it to succeed? Based on your answer to the preceding, which is ultimately the more practical use of the money?
I could probably change the way human beings think about the trolley problem, were that my aim, by donating the 10 million dollars to a think tank of people who are thinking about the trolley problem, as opposed to people who must first create an AI advanced enough to conceive of the trolley problem, and then program morality into it. Right?
Again, I don't want to make it seem like I'm anti-computer science or whatever, because I'm not. I don't want to make it seem like I have anything against people seeking to create AI who can conceive of trolley problems either, because I'm not. But if you're asking which is more impractical and/or more unrealistic — which is what the word "quixotic" means — there's really only one answer.
Okay, so 90% of your post is garbage about General Zod and Darth Vader again. I'm going to have to skip that. If this is to be a criticism of some argument of mine, I can only engage with responses to premises or conclusions that I've stated. Unfortunately, when we clear away the detritus from your post there isn't much left to work with:
Umm... Take a look at that argument again. Even if we were to grant you premise #3 (I'm not myself sure what you mean by "energy-efficient"), you have a rather conspicuous problem. Namely, you are using the existence of humans — who, as you say, have human-level intelligence — to say that human-level artificial intelligence is permitted by nature.
Except, that doesn't follow, because humans aren't artificial intelligence. Matter of fact, they're precisely the opposite.
So you're denying premise 3? That humans are energy-efficient? Our average power output is 100 watts. Granted, "energy-efficient" is a vague term, but I'm not asking it to do very much work. I only require that the laws of nature allow intelligent systems to exist without, say, having to burn up whole stars to run them.
This premise is undeniable. It seems that you're alleging another problem, which is that "artificiality" is relevant here. It's not. Just modify the argument to say "intelligence" on both sides rather than "artificial intelligence." There is no reliance on how the intelligence was constructed, only that it exists.
Second of all, whether the existence of AI is "permitted by nature" was never a claim I took issue with. I have no issue with the claim that it is physically possible that a strong AI could exist.
Okay, so I trust I will not see further comparisons with things that can't exist, like General Zod?
Not liking an analogy does not stop it from being pertinent, Crashing.
Show me where I said your analogy is invalid because I don't like it, and then maybe you'll have something here.
The fact that you solved a previous problem does nothing to suggest that you can solve the next.
The feasibility of powered flight was inferred from studying kites and gliders. Don't worry, though; I'll borrow General Zod's time-phone and ring up the Wright brothers and tell them Highroller says they're full of *****.
Irrelevant nonissue. Something is sentient or it is not. Computers are not sentient. Saying, "Well sentience ain't all it's cracked up to be," does nothing to change this.
At the present time, computers are not sentient. This claim is wholly conceded to you and entirely uncontested. Now show me a premise I've stated or a conclusion I've made that depends on it!
Nonexistence is a fantastic barrier against performing most things, you will find.
"Airplanes are nonexistent, therefore they are impossible." Spot the problem with this argument. You will probably need a hint, so here it is: imagine making it in 1902.
It is an absurd notion why?
Because 140 is probably a conservative estimate of his IQ? I mean, I can't find an authoritative source on that (the best I can find is that at age 16 he scored the highest nationwide on an IBM engineering exam) but the chances of a person who is literally stupid having his profile of achievements are simply astronomical.
However absurd killer robots are, the idea of Elon Musk being stupid is still more absurd.
Incidentally, what's gotten you so worked up about this?
About what, bad arguments? I'm always "worked up" about those (though I'm not sure "worked up" is the right phrase). Probably 75% of the posts I make on here are because I read something that I think is nonsense and my love for the truth compels me to say something. About you calling Elon Musk a moron over and over again? Well, I could say some things about planks and motes; I think you can probably fill in the details yourself.
No, of course not. Your confusion lies in who I was accusing of doing the wasting.
Right, I'm not going to argue any more about Musk's psychology -- the claim I put on the table was that the money was not ultimately being wasted, and I think we've resolved that issue; it's not, and my claim stands.
I could probably change the way human beings think about the trolley problem, were that my aim, by donating the 10 million dollars to a think tank of people who are thinking about the trolley problem, as opposed to people who must first create an AI advanced enough to conceive of the trolley problem, and then program morality into it. Right?
No, no, no. Of course you can get a room full of people to think about the trolley problem by putting a pile of money in the room and distributing it to everyone who agrees to think about the trolley problem. That's not the question.
Remember -- one nice thing about computers is that they are universal, so that if the AI researchers are successful in generating insight with the $10M, every self-driving car everywhere will make better decisions as a result, because the software can be installed on all of them. So the challenge isn't just to get some people to think about it for awhile -- it's to get many people to actually behave better in real moral quandaries.
What I'm trying to get at is the underlying value to society. Upping the "moral IQ" of self-driving cars has real, redeemable value in that they will make better decisions and cause fewer injuries. Getting a room full of people to temporarily think about the trolley problem until you run out of money does not necessarily result in any value at all. You only get value if your intervention is able to produce long-term, reliable changes in human moral decision making, and not just for the 10 people in the room.
Hell, there has already been plenty of well-funded and quality research about the trolley problem in both moral philosophy and neuroscience, and so far, general human behavior has not improved one whit as a result.
Okay, so 90% of your post is garbage about General Zod and Darth Vader again. I'm going to have to skip that.
Once again, not liking the analogy does not mean it is not valid. And in this case, it certainly is.
If this is to be a criticism of some argument of mine,
You're saying I'm wrong for saying that Elon Musk is a dumbass, so yes, it is a criticism of an argument of yours. The fact that you're threatening ragequitting on the grounds of my bringing up fictional characters indicates to me that you are offended. It is as though you have some sense that I am insulting your intelligence by bringing up absurdities. Does this accurately sum up your feelings toward the situation?
If so, keep in mind that these absurdities are perfectly analogous to Musk's frame of mind. So, before you rush to defend him next time, do consider this carefully, and then maybe you won't find yourself in this situation.
So you're denying premise 3? That humans are energy-efficient?
No, I am not denying premise 3. Reread what I actually said. I said I have no idea what you were saying with premise 3. That is not denying premise 3.
Our average power output is 100 watts. Granted, "energy-efficient" is a vague term, but I'm not asking it to do very much work. I only require that the laws of nature allow intelligent systems to exist without, say, having to burn up whole stars to run them.
This premise is undeniable.
Ok, great. Premise 3 is full of win. Irrelevant really, considering the problem with the conclusion.
It seems that you're alleging another problem, which is that "artificiality" is relevant here. It's not. Just modify the argument to say "intelligence" on both sides rather than "artificial intelligence." There is no reliance on how the intelligence was constructed, only that it exists.
I could do that, but it would say absolutely nothing of interest, because all I would be saying is, "Human-level intelligence is possible due to the existence of the human-level intelligences that humans have." Which says nothing about AI, the subject of our discussion.
And all of THAT is pointless because, once again, I never said that strong AI were physically impossible.
Second of all, whether the existence of AI is "permitted by nature" was never a claim I took issue with. I have no issue with the claim that it is physically possible that a strong AI could exist.
Okay, so I trust I will not see further comparisons with things that can't exist, like General Zod?
Why? It remains science fiction, doesn't it? It is therefore perfectly analogous to Musk's scenario of killer robots, and therefore Zod remains part of the discussion. (Although I'm perfectly willing to switch him with a different comic book villain if you'd prefer.)
Not liking an analogy does not stop it from being pertinent, Crashing.
Show me where I said your analogy is invalid because I don't like it, and then maybe you'll have something here.
If you agree that my analogy is valid, then you have no business complaining about it.
If you disagree, demonstrate how it isn't instead of complaining about it.
If you cannot demonstrate how it isn't, then from whence comes your disagreement?
The fact that you solved a previous problem does nothing to suggest that you can solve the next.
The feasibility of powered flight was inferred from studying kites and gliders. Don't worry, though; I'll borrow General Zod's time-phone and ring up the Wright brothers and tell them Highroller says they're full of *****.
Why would I deny powered flight? I see it all the time.
Once again, just because you've solved a problem doesn't mean you can solve the next one.
Irrelevant nonissue. Something is sentient or it is not. Computers are not sentient. Saying, "Well sentience ain't all it's cracked up to be," does nothing to change this.
At the present time, computers are not sentient. This claim is wholly conceded to you and entirely uncontested. Now show me a premise I've stated or a conclusion I've made that depends on it!
Why the claim that the quoted statement was in response to, of course.
'I already addressed this argument. The idea that a computational device is "on rails" but a human being is not is an instance of the human specialness heuristic. It is not as though the computer follows rules but we fly free. We too follow inexorable rules of physics and biochemistry that we cannot break.'
Which is totally irrelevant. "Well human thinking has limitations" does nothing to change the fact that humans are sentient and computers aren't. Saying, "Well we too follow rules" is therefore a non-sequitur.
Nonexistence is a fantastic barrier against performing most things, you will find.
"Airplanes are nonexistent, therefore they are impossible." Spot the problem with this argument. You will probably need a hint, so here it is: imagine making it in 1902.
No, actually the problem with this argument is you, once again, are attempting to make me out as saying that strong AI were impossible when I already said that I do not believe that they are. Misrepresenting my arguments seems to be a trend with you.
Because 140 is probably a conservative estimate of his IQ?
And this makes him incapable of being foolish? Really now?
I mean, I can't find an authoritative source on that (the best I can find is that at age 16 he scored the highest nationwide on an IBM engineering exam) but the chances of a person who is literally stupid having his profile of achievements are simply astronomical.
I wasn't aware being an engineer made one incapable of being foolish.
However absurd killer robots are, the idea of Elon Musk being stupid is still more absurd.
Except you've already conceded he believes in blatantly absurd things, and yet you're still not going to concede that he's a fool?
About what, bad arguments? I'm always "worked up" about those (though I'm not sure "worked up" is the right phrase). Probably 75% of the posts I make on here are because I read something that I think is nonsense and my love for the truth compels me to say something. About you calling Elon Musk a moron over and over again? Well, I could say some things about planks and motes; I think you can probably fill in the details yourself.
Your "love for the truth"? Really? So what does that love for the truth say when you assess the threat posed by humanity towards killer robots over the next five years?
Let's not confuse love of the truth with love of winning, Crashing00.
Right, I'm not going to argue any more about Musk's psychology
Because you've recognized he's a dumbass, yes. Thank you.
the claim I put on the table was that the money was not ultimately being wasted, and I think we've resolved that issue; it's not, and my claim stands.
No, it doesn't. Musk is wasting his money. That the organization he's giving it to might do good things with it doesn't change that.
If I believe that the world is controlled by lizard people and the only way to defeat them is to throw $100 bills at them, I am wasting my money because I am being a moron. That doesn't mean the (very confused) people I just tossed $100 bills at cannot pick them up, keep them, and use them in fiscally responsible and benevolent ways. Nor does this change the fact that I'm still a moron who has lost a great deal of money even if they were to do so. Their behavior does not make mine retroactively intelligent.
No, no, no. Of course you can get a room full of people to think about the trolley problem by putting a pile of money in the room and distributing it to everyone who agrees to think about the trolley problem. That's not the question.
What is the question and how is it relevant to whether or not Musk is a dumbass, which is what I thought the original topic of discussion was?
What I'm trying to get at is the underlying value to society.
Then I repeat what I said before. If it's purely utilitarian value you wanted, invest those 10 million into medical research. You could help cure one of the myriad health conditions that kill more people annually than car crashes.
Now, once again, because I think you are still laboring under this misinterpretation of my arguments, I will clarify I'm not saying everyone in general who donates money to this organization, or anyone who donates money to AI research in general, is a dumbass or someone who is wasting their money. I don't believe that at all.
However, you did ask me which is more quixotic, and, once again, there is only one correct answer to that.
In order for something to be the simplest explanation, it first has to be an explanation, and in order to be an explanation, it must explain all observed facts, not just the ones you cherry-pick. Thoughts are associated with biochemical brain activity. They can also be transported from mind to mind without the associated biochemical activity. In fact, they can be archived and restored long after the original electrochemical activity has dissipated into pure entropy.
Your explanation has to account for the fact that thoughts are inorganically transportable from mind to mind.
The phenomenon of language exists purely and entirely in the minds of those involved, and nowhere else.
Not so. Shakespeare's mind no longer exists. I can still reproduce some of his thoughts in my mind by reading his linguistic output. If this procedure is contingent on his mind, how can it take place when his mind does not exist?
I deny this thesis. We don't know how the human creative process works; randomness or pseudorandomness could be involved in a crucial way. If it is, and I deny that the output of a device with random inputs is thought, then I not only deny that Monkey Shakespeare thinks -- I may also be denying that Shakespeare himself thought! This is an absurdity. Thus the reasoning is invalid; it needs a further assumption that no randomness is involved in human thought.
In the end, however, the machine did nothing impressive: it just produced all the combinations of letters, which only have meaning because someone gave meaning to them. Without anyone to read them, the symbols the infinite monkey machine made are meaningless, which points out that the "information" is ultimately produced by our brains when we read the symbols, not by the machine.
Certainly I agree that meaning is assigned by minds. However, that does not speak to the point. The point is that these meanings can be transferred between minds without transferring any of the organic matter or reactants! Therefore the organic chemistry was not a crucial part of the meaning. It can be reconstructed without it.
Your whole thesis here, as I see it, is that minds depend crucially on organic chemistry. You are not going to be able to prove this thesis by referring to abstract properties of minds, because those properties would be the same however a mind came to exist. You must specifically explain how organic chemistry is irreducibly a part of the process.
Again, the rust never leaves your mind, but the thoughts do. You can reconstruct the thoughts long after the rust is gone. Thus the thoughts may be caused by the rust, but they certainly aren't the rust, and you certainly have no basis for concluding that the rust is necessary.
I don't know what field of science actually treats thoughts like that.
Thoughts can't be inorganically transported, and we can't reproduce Shakespeare's thoughts through language. When someone writes, he alters physical reality by producing symbols, and these symbols are read and interpreted by another mind, which creates new thoughts of its own. This is why different human minds may give different interpretations to the same piece of information. Language is a peculiar phenomenon: humans have developed the craft of altering the world in specific ways in an attempt to produce the thoughts they desire in another person's mind. When someone writes literature or music, the author is trying to cause certain thoughts or sensations in other people, but does not have perfect control over the reactions.
Of course you can try to be as objective as you can, as in teaching, laws, and scientific papers. This is why more precise languages are created in those cases, so that thoughts can be reproduced with higher clarity, in the hope that everyone is interpreting the information in the same fashion. Language is a thought > information > thought' phenomenon; thought and thought' are not the same thing (though they can be equal, it is not necessary), and the information, the thing between the two thoughts, is very obviously physical: sequences of letters on paper, patterns carved in stone, or pixels on an electronic screen.
Even electronic machines that are built not to misinterpret information are unable to circulate thoughts inorganically. Computers literally send information through electricity, radio waves, and other methods.
I deny this thesis. We don't know how the human creative process works; randomness or pseudorandomness could be involved in a crucial way. If it is, and I deny that the output of a device with random inputs is thought, then I not only deny that Monkey Shakespeare thinks -- I may also be denying that Shakespeare himself thought! This is an absurdity. Thus the reasoning is invalid; it needs a further assumption that no randomness is involved in human thought.
Randomness can be involved (and probably is), but by "not work like the infinite monkey machine" I mean that the random experiment of human thinking is not the same random experiment that defines the infinite monkey machine (they can have the same sample space and the same sigma-algebra, but certainly not the same probability measure). Otherwise, if I took a sample of your thoughts, I would find something like "efkvb3iu4gn91u3ign3iu4n81ugbreb".
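The distinction being drawn here, two random experiments sharing a sample space but not a probability measure, can be sketched numerically. The letter weights below are invented for illustration and are not real English frequencies:

```python
import random

# Two "experiments" over the same sample space (strings of 10 letters)
# but with different probability measures: uniform vs. English-like.
# random.choices does not require weights to sum to 1.
UNIFORM = {c: 1 / 26 for c in "abcdefghijklmnopqrstuvwxyz"}
ENGLISH_LIKE = dict(UNIFORM)
for c, w in [("e", 0.12), ("t", 0.09), ("a", 0.08), ("o", 0.075)]:
    ENGLISH_LIKE[c] = w  # bump common letters (illustrative values)

def sample(measure, n, rng):
    """Draw an n-letter string under the given probability measure."""
    chars = list(measure)
    weights = [measure[c] for c in chars]
    return "".join(rng.choices(chars, weights=weights, k=n))

rng = random.Random(42)
print(sample(UNIFORM, 10, rng))       # uniform gibberish
print(sample(ENGLISH_LIKE, 10, rng))  # still random, but skewed toward e/t/a/o
```

Both calls draw from the same set of possible strings; only the measure differs, which is exactly why samples of human thought don't look like samples from the monkey machine.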
I don't think just switching two words around in a sentence constitutes an argument...
This was not an argument; it was a statement of my belief, as I felt I had to clarify my position.
Information is incorporeal...
It's as incorporeal as the unit circle defined by the set {(x,y) in R^2 : x^2+y^2=1}. You can certainly interpret it as an incorporeal thing, and you won't be missing anything if you do so. Information theory gains nothing by pondering whether information is written on paper or carved in stone. However, just because information theory wisely chooses to abstract away properties of no consequence to it doesn't mean those properties are not real or don't exist. And all this has no bearing on the whole AI debate, as I never challenged the idea that computers can produce/reproduce certain pieces of information (which would be an utterly stupid position to hold, as my own infinite monkey example would contradict it).
Also, I said "thinking" and "information". While I maintain that both are entirely physical, making information incorporeal does not challenge thinking being corporeal.
...
This is getting very close to a "computers don't have souls" argument.
It's more that computers are not organic machines like us, so there's no guarantee they can do everything we can, while assuming they can't is consistent with certain positions in neuroscience and chemistry.
If you just use a little imagination, it is easy to see why Elon Musk spent his money. Humans are advancing faster and faster every year. If we have a computer that can learn how to play games and is constantly looking for better plays, it will not take that long to create human-level A.I.
It's topics like these that make me think mankind is stupid enough to turn today's science fiction into tomorrow's science, or, more precisely, into today's science fact, given the unprecedented rate at which technology is advancing. If companies like Google and Boston Dynamics are the Skynet people have feared since James Cameron's Terminator series, then I think we should stop and ask ourselves what drives our curiosity to endanger our own species in a way that could become an existential threat to humanity. It's one thing to be afraid of what we don't understand, but why do it in the first place if we already know what the overall consequences will be?
Don't you think that instead of assessing the threat of a strong AI, we should spend our time researching the threat level of more and more complex versions of the programs that only do what they're told to, as italofoca put it, and what would happen if they got into the wrong hands? You know, an actual threat we're dealing with right now?
But maybe I'm being too harsh. Yes, some dumbass spent 10 million dollars to fulfill his fantasy of fighting SkyNet, but in fairness to him, there have been far bigger wastes of money in that regard.
http://futureoflife.org/misc/AI
I suspect there are a lot of ethical questions about AI that will be relevant in the near future. For example, it's certainly plausible that a self-driving car could find itself in a version of the classic Trolley Problem, where it has to choose between two potentially fatal collisions. You can look at some of their proposed research topics here:
http://futureoflife.org/grants/large/initial
Not a lot of Skynet in there.
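For a concrete sense of the research question, the self-driving-car version of the Trolley Problem can be posed as a toy expected-harm minimization. The maneuvers, probabilities, and casualty counts below are invented for illustration, and whether "expected harm" is even the right objective is precisely what such research debates:

```python
# Toy decision rule: pick the maneuver with the lowest expected harm.
# All numbers are hypothetical; real systems would need far richer models.
def choose_maneuver(options):
    """options: {name: (probability_of_collision, estimated_casualties)}"""
    return min(options, key=lambda m: options[m][0] * options[m][1])

options = {
    "brake_straight": (0.9, 1),  # likely collision, one person at risk
    "swerve_left": (0.4, 2),     # less likely collision, two at risk
    "swerve_right": (0.3, 3),    # barrier with three occupants at risk
}
print(choose_maneuver(options))  # "swerve_left": 0.4 * 2 = 0.8 is the minimum
```

Even this cartoon raises the hard questions: who supplies the probabilities, and is minimizing an expectation morally acceptable at all?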
I agree with that. That's why we should be worrying about human villainy (the kind where villainy is contingent on intelligence) and not banana, dog, or computer villainy.
1) This goes for all chemical processes, not only biochemical ones. You can understand chemistry, model it in equations, and run simulations on computers. But at the end of the day, virtual water and iron do not produce real rust. You can make a machine that mixes water and iron and produces rust, but the process itself is not done by the computer, just the mixing. The issue with biochemical phenomena is that we are not looking at the mixing process (like the machine above), but at the chemical reactions themselves. Thinking is not like pumping blood or generating motion; it's more like fermenting: something machines cannot do without microorganisms.
2) It comes from the belief that human thoughts ARE biochemical reactions. They are not the result of them. Producing human thoughts would be like producing rust: you can't compute that, you need real iron and water. Of course you can simulate something with similar or better results. I don't challenge the idea of AI. I challenge the idea that AI can go wrong in the sense that a program made to make new music will evolve into a computer supervillain that will hack our bank accounts or something like that.
I will leave my skepticism behind when the poker-playing program suddenly, out of nowhere, starts playing another game. Or when the cooking robot starts to make other robots by itself, with no original lines of code telling it to do so.
Wow, so much imagination in interpreting the origin of my skepticism...
Sorry for the irony, but unless I say I think we are the center of the universe, I would not like to see people claiming my arguments come from that belief.
Organic chemistry IS special because it is the chemistry of carbon chains and complexes, and it reaches unique results because of that. You can argue those results can be replicated, but so far they can't be, and believing they will be in the future is wishful thinking.
BGU Control
R Aggro
Standard - For Fun
BG Auras
So maybe the better thing to be doing than trying to make a 'self-aware' AI is to continue trying to program and design machines that do specific things humans are still better at doing than computers. Following that, if it is desired for some reason to make something human-equaling or surpassing in all the tasks we've designed computers to do better than humans, all those things could be sought to be integrated into a singular machine.
We find ourselves in agreement about what computer programs are and are not good at doing; compared to humans, they surpass us at some tasks and can't yet reach us at others. Perhaps we would do well to define which tasks we would like a computer to be capable of doing. A good place to start would be necessarily useful tasks, of both simple and complex structure, that humans usually find tedious or otherwise don't like doing. Off the top of my head, though, it seems like most of the major ones (like math and translating) have already been worked out for computer programs to do. One thing I think would be immensely useful isn't even something humans dislike doing: generating ideas. Ideas as in solutions to problems, concepts of inventions, all of the things that we do in a strange-loop-like manner. It would need to be more than something that just generates random permutations and combinations of the data it takes in as input, though. I wonder just how much programming structure we'd have to put behind such a task so that it generates ideas like a human does. This could be useful in that humans have, throughout all of history, advanced themselves with ideas: how to think, what to build, what to design, what forms of art to make, what decisions to take. Not all ideas are useful, but there are often ones that propel us forward. We could seek to streamline that idea generation with filters that specify chosen categories of data combination, the fastest processors we can design, and do it significantly faster than a million of our most creative individuals could.
Of course, my general thought about the above is that it may inherently require a self-aware, human-equaling AI to do. More to the point, it would be desirable to have a machine that generates ideas like a human as fast as theoretically possible and with a memory space that allows for something more complex than any one of us can singularly envision.
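The "combinations plus filters" generator described above can be sketched as a minimal toy. The concept list and the plausibility filter are both invented purely for illustration; the whole point of the paragraph is that a real version would need vastly more structure than this:

```python
from itertools import combinations

# Toy "idea generator": combine known concepts, keep only combinations
# that pass a filter. Concepts and filter are hypothetical examples.
concepts = ["solar panel", "window", "paint", "battery", "road", "roof tile"]

def plausible(pair):
    """Filter: keep pairs mixing exactly one energy concept with a surface."""
    energy = {"solar panel", "battery"}
    surface = {"window", "paint", "road", "roof tile"}
    a, b = pair
    return (a in energy) != (b in energy) and (a in surface or b in surface)

ideas = [(a, b) for a, b in combinations(concepts, 2) if plausible((a, b))]
for a, b in ideas:
    print(f"idea: {a} + {b}")
```

With 2 energy and 4 surface concepts this yields 8 candidate "ideas"; scaling the concept pool and sharpening the filters is where all the real difficulty lives.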
Clearly strong AI isn't here today; that's certainly not being contested. My beef is with the affirmative arguments people are making for the positions that strong AI is unlikely. They've all been pretty bad.
Sure, let's agree that figuring out how to stop the Russians from nuking the world is more important than figuring out how to stop a totally hypothetical computer from nuking the world. Then again, let's also agree that figuring out how to stop the Russians from nuking the world is more important than producing another season of Duck Dynasty. What of it? People are not required to invest their thought and money into only the single most important problem that can be imagined at a given moment.
Moreover, I'd incorporate by reference what Tiax said above. Elon Musk's indulgence of his "fantasy of fighting SkyNet" might incidentally result in a solution for problems in, say, the behavior of self-driving cars in life-threatening situations.
Again, this is like saying people should stop devoting time to composing music, writing books, or painting because there's evil in the world that needs fixing. Human endeavor just doesn't work this way. Let the AI researchers research safe AI, let the Duck Dynasty people make Duck Dynasty, and let the people who are supposed to stop humans from destroying the world do that. We can all agree that the latter of the three is the most important job right now. That doesn't abnegate or reduce the value of the other jobs.
Also, your exhortation to work on intermediate technologies appears to be just a description of what safe AI researchers actually do. Apparently they're working on real things (e.g. self-driving cars) before they get going on Skynet.
(However, if we're going to try to engage with this question in some deeper way -- I'd begin by asking which is the more quixotic pursuit: attempting to design a moral AI (a project for which, mind you, you have the luxury of starting from a clean room and empty blueprint) or fixing the evil lurking in the depths of the human psyche?)
This is a totally unsubstantiated assertion! How do you know the thinking is in the organic process itself, rather than being an epiphenomenon or result of the process?
Thoughts are not "rust." We can infer this by an analysis of the phenomenon of language. I can write down my thoughts and send them to you and in so doing I produce corresponding thoughts in your mind, but at no time do I send you any of the "rust" from inside my brain.
Now, it could still be that the "rust" was a necessary precursor to the thoughts, but this is exactly the position I was asking you to argue for and you have yet to substantiate it.
If you agree that I can effectively simulate the organic precursors, and you agree that a mind based on the organic precursors is capable of maliciously hacking your bank account, then what, exactly, is the barrier that stops a simulated mind from maliciously hacking your bank account?
Much like you, I'll believe it when I see it.
Certainly I understand your desire not to be replaced by a straw man; however, that can hardly be the case when you freely cop to the position I was putting you on in the very next line.
Your position IS based on a specialness heuristic! You've just said so! Of course, you believe your specialness heuristic is justified (though I maintain that all you've made are bare assertions along those lines), but it is no less a specialness heuristic for all that.
Which if thou dost not use for clearing away the clouds from thy mind
It will go and thou wilt go, never to return.
What about organic chemistry makes the thoughts generated by organic brains categorically different from the thoughts generated by silicon brains?
Is there some reason why a silicon brain could never achieve the complexity of a carbon brain?
Your argument is like saying there's something magically different about data stored in a hard disk drive versus a solid state drive. In one case, the data is represented as magnetic moments. In the other case, it's represented as states of electronic circuit loops. But the data represents the same "thing" (a program/document/image/etc.) regardless of how it's stored. If we stored the data in an organic medium (say, DNA) it would again represent the exact same thing.
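The substrate-independence point above can be made concrete: the same byte string can be "stored" as a mutable byte array, an immutable byte string, or a 2-bits-per-base DNA-style string, and all three decode back to identical data. The encoding scheme below is a simplified illustration, not a real DNA-storage format:

```python
import hashlib

# One message, three "storage media". The data is identical regardless
# of substrate, which is the point being argued above.
message = b"the data represents the same thing"

magnetic_disk = bytearray(message)  # stand-in for HDD magnetic moments
solid_state = bytes(message)        # stand-in for SSD circuit states
# Toy DNA encoding: 2 bits per base, most significant bits first.
dna_strand = "".join("ACGT"[(b >> s) & 3] for b in message for s in (6, 4, 2, 0))

def dna_to_bytes(strand):
    """Decode the toy 2-bits-per-base encoding back into bytes."""
    lut = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | lut[ch]
        out.append(b)
    return bytes(out)

# All three representations carry the very same content.
assert bytes(magnetic_disk) == solid_state == dna_to_bytes(dna_strand)
print(hashlib.sha256(solid_state).hexdigest()[:16])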
italofoca has a point. All of these AI that we've created are ultimately just programs doing what humans programmed them to do. In that sense, they are fundamentally on the same level as the abacus. Their computations are more complex, they are capable of doing more things, but they are still computational devices doing what human beings tell them to do. Nothing has approached the level of sentience.
And don't get me wrong, I don't in any way wish to diminish or disrespect the achievements of those involved in computer science. Quite the contrary, these accomplishments are amazing. That said, I'm quite comfortable putting the strong AI concept as "science fiction."
No, of course they're not. That does not mean investing millions of dollars to fight SkyNet isn't aboundingly idiotic. SkyNet does not exist. Nor does General Zod. To go about your day actually planning to fight them is therefore idiotic.
I think the better word is "accidentally." Yes, he might accidentally cause some good by his being a dumbass with his money.
That does not mean he is not a dumbass.
No, that's not even close to what it's like. It's instead like worrying about monsters underneath one's bed. They don't exist. To worry about them is foolishness. Why? Because it is foolish to worry about nonexistent threats.
I have no problem with people researching practical applications with AI. Indeed, I have no problem with someone being a dumbass with ten million dollars. But that person is still a dumbass with his money.
I'm glad some practical good is actually coming out of this man's stupidity, certainly.
Your talent for wordplay remains as sharp as always. Nevertheless, if you're seriously asking me whether it's more unrealistic and impractical to attempt to reinvent the human mind from scratch, instead of concerning oneself with the already extant seven billion human minds, the answer is yes.
When people think, we can see changes in the brain: both electrical activity and chemical reactions. The simplest explanation is that those things are the thoughts themselves.
You can always believe that thoughts are something else, unknown, and that what we observe are precursors or results of them. But that's jumping through hoops, imo.
The phenomenon of language exists purely and entirely in the minds of those involved, and nowhere else. Let me clarify my position.
Take the infinite monkey theorem (http://en.wikipedia.org/wiki/Infinite_monkey_theorem). Imagine the infinite monkeys as a machine. This machine will outdo humans and supercomputers in terms of intellectual production because it will produce every piece of information expressible in our languages: everything that can be written will be written by this machine.
The machine, however, does not think in any meaningful sense of what we believe "thinking" is; it is literally random inputs and nothing more. In the end, the machine did nothing impressive: it just produced all the combinations of letters, which only have meaning because someone gave meaning to them. Without anyone to read them, the symbols the infinite monkey machine made are meaningless, which points out that the "information" is ultimately produced by our brains when we read the symbols, not by the machine.
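The "infinite monkey machine" is easy to sketch, along with the combinatorics showing why its output is cheap: a burst of uniform random keystrokes over a 27-symbol alphabet needs, on average, 27^len(target) independent attempts to reproduce even a short phrase. The helper names below are my own:

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "  # 26 letters plus space

def monkey_attempt(length, rng):
    """One burst of uniform random keystrokes over the 27-symbol alphabet."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def expected_attempts(target):
    """Expected number of independent bursts before one matches `target`."""
    return len(ALPHABET) ** len(target)

print(monkey_attempt(10, random.Random(0)))  # meaningless gibberish
# Even the 8-character phrase "to be or" needs ~27^8 bursts on average:
print(expected_attempts("to be or"))  # 282429536481
```

The machine "produces" everything only in the sense that a lottery "produces" winners; the meaning is supplied entirely by the reader, as argued above.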
Of course, the way computers solve problems is essentially the way this machine solves them, except more efficiently. Instead of trying everything, a computer only tries the things that can possibly work (things pre-determined by the code).
This is no different for computers. They don't hold any information until some human reads what is on the screen. In the end, it's the human brain that produces the information as a reaction to seeing/reading/hearing the symbols. This is also how language works and why, despite thoughts being "rust", they don't need to be literally moved from one mind to another for language to be possible. It's more like "rust" causing a phenomenon that produces "rust" elsewhere. We don't know precisely why and how we do that, but those chemical reactions are the best answer for it at the moment, imo.
My skepticism about AI as you see it is that the computer would have to be able to interpret the symbols like we do for it to exist. It cannot work on the same fundamentals the infinite monkey machine does (which are the fundamentals on which all computers, at least the ones I know, work). But how an organism has to work in order for that to be possible is unknown at the end of the day. The only hint I know of is brain chemistry, so I feel it's reasonable to assume it is an important part of this largely unknown process.
I don't think a machine can effectively simulate us, in the same way we can't simulate a machine.
Overall, I find it very weird that you guys are treating thinking and information as something abstract/incorporeal. It's not as though reasoning alone is behind human thinking. Our feelings are part of our thinking process just like reasoning, or at least that's what the psychology people I know believe (and something I tend to agree with, from my own experience).
As far as I know, specialness heuristics do not imply anthropocentrism. I can be accused of one but not the other. I could be wrong, though...
You didn't respond to my last post, so I'll rephrase the concept: why would it be impossible to replicate the electrical structures you've identified using non-organic circuits? The "electricity and reactions" you're discussing represent physical states of matter in the brain, which obey the laws of physics. If we built a very fast and complex silicon computer that accurately simulated these physical states of matter (and thereby generated thoughts and ideas) why would this not allow the computer to "think" the way a human does?
There are probably more efficient ways to allow a computer to achieve human-like intelligence than brute-force simulation of a human brain, but why should this not work in principle?
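To illustrate what "simulating the physical states of matter" means at the very smallest scale, here is a leaky integrate-and-fire neuron, a standard textbook cartoon, stepped forward with Euler integration on ordinary silicon. The parameters are illustrative, and this is in no way a claim that brains reduce to this model:

```python
# Leaky integrate-and-fire neuron: membrane voltage v obeys
# dv/dt = (-(v - v_rest) + i_in) / tau, spiking when v crosses threshold.
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the model neuron spikes."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau  # forward-Euler update
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset  # fire and reset
    return spikes

spikes = simulate_lif([1.5] * 200)  # constant drive above threshold
print(len(spikes), spikes[:3])
```

Under this constant drive the voltage climbs geometrically toward 1.5 and fires every 11 steps; the point is only that physical state evolution under fixed laws is exactly the kind of thing computers step forward.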
First, AI could come from "brute force", a mechanical analogue of a human brain. I say this is improbable because we don't know how human brains work, and the little we do know involves organic chemistry. Talking about a replica human brain is like talking about a machine bacterium or a machine stomach. It could be possible, I never denied that; I just say we are pretty far from that and maybe we will never accomplish it.
Second, AI may not work precisely like our brain but may have the same results (BS's original counterargument to my post). My argument is that human cognition is able to give new meaning to the information it receives, while computers are limited to giving the feedback their code tells them to give. Learning algorithms are able to write extra lines or rewrite old ones in the procedure, but that doesn't change the fact that the feedback was prescribed. So even with a very intelligent AI, programmers know what it will end up doing (poker bots playing poker, music-creator bots creating music, etc.).
The way I see it, an AI turning malicious and conquering the world is as likely as someone jamming random keyboard inputs into a PC and ending up hacking world leaders' mailboxes and causing WW3. Hey, it's POSSIBLE, right? But hell, I'm not worried about that. Arguing that an AI is more likely to do that than a random program doesn't hold up, because while the AI will produce well-formed behavior (unlike a random one), it will produce well-formed behavior and not do what it was not meant to do.
BGU Control
R Aggro
Standard - For Fun
BG Auras
I don't think malicious AI are a problem right now, but they're a conceivable problem as soon as we create computers with human-like intelligence.
You acknowledge it's at least theoretically possible to create a computer with human-like intelligence, at least by using the "brute force" method I described (and probably by other methods as well).
As I mentioned in an earlier post, we currently possess computers that are significantly faster than human brains. So it should be relatively trivial to run an "overclocked" human brain on a computer, once we have a normal human brain simulated. A brain that works many times faster than any human brain would naturally be much smarter, meaning we humans could then be outsmarted by an AI. All that's needed is for the simulated brain to have malicious intentions, and we're in trouble.
It's entirely possible that my arguments haven't been stellar -- I'm not an AI expert or even that interested in it, though I do have many colleagues who are. However, I've been debating long enough to know that sometimes the problem isn't my arguments, but my opponents being irrationally wedded to a specific position that they simply cannot be moved from. The fact that you keep on calling Elon Musk a dumbass (which is a laughable notion) and bringing up General Zod makes me suspect the latter, but I'll extend the benefit of the doubt a bit here.
Let me write down some arguments in favor of AI in a bit more detail. The detailed exposition should mean that if you have a good-faith problem with these arguments, you can point out specific assumptions or inferences which are bad. Note that e.g. mentioning General Zod will immediately disqualify you from good-faith engagement with the argument, because General Zod is not going to be among the assumptions or inferences.
Here's an argument to start off with:
1) Humans exist.
2) Humans are intelligent.
3) Humans are energy-efficient.
Therefore,
4) Energy-efficient human-level intelligence (in other words, strong AI) is not forbidden and indeed is expressly permitted by the laws of nature.
This rather trivial-looking argument actually does a surprising amount of work against your position here.
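For what it's worth, the skeleton of that argument is just existential generalization; a minimal Lean sketch (my notation, with `Being`, `Intelligent`, and `EnergyEfficient` as placeholder names):

```lean
-- From a witness (a human) having both properties, conclude that
-- energy-efficient intelligence is instantiated in nature.
example {Being : Type} (Intelligent EnergyEfficient : Being → Prop)
    (human : Being)
    (h1 : Intelligent human)
    (h2 : EnergyEfficient human) :
    ∃ x, Intelligent x ∧ EnergyEfficient x :=
  ⟨human, h1, h2⟩
```

The conclusion says nothing about how the witness came to exist, which is the point pressed below: the laws of nature demonstrably permit such a system.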
First of all, it implies that if strong AI is science fiction, then it's "hard" science fiction. In fact it creates a specific distinction between strong AI and something like, say, warp drive -- with warp drive, we can look at the laws of nature as we know them and say with very high confidence that even if it were possible, it would require impractical amounts of energy to realize. On the other hand, the existence of strong intelligence in nature proves there are no natural barriers to strong AI. General Zod is more akin to warp drive -- and we can thereby conclude that your General Zod analogies are broken. (Having so demonstrated, I'm going to skip over the parts of your posts where you used them.)
Second, it shows that strong AI is an engineering problem, not a physics problem. This has two upshots for the discussion we're having here: one, engineering problems are rarely outright unsolvable, and demonstrating that a particular one is unsolvable is a case of "proving a negative." This means that an affirmative anti-strong-AI argument is going to be hard -- cheap shots about General Zod won't do. Two, it opens the question up to the kinds of heuristic and circumstantial arguments I was making in a prior post. The cumulative nature of engineering knowledge means that the likelihood of a hard engineering problem being solved is (circumstantially) increased by the solution of easier, related problems. Thus my list of examples of achievements in AI research ought to increase the epistemic probability you assign to strong AI.
I already addressed this argument. The idea that a computational device is "on rails" but a human being is not is an instance of the human specialness heuristic. It is not as though the computer follows rules but we fly free. We too follow inexorable rules of physics and biochemistry that we cannot break.
It's certainly the case that our rules permit a wider variety of outwardly-observable behaviors than do the computer rule sets developed to date. However, in order to turn this into an affirmative argument against strong AI, it is necessary to identify a specific fundamental barrier that would actually stop a computer from exhibiting a rule set of equal complexity. (bitterroot has been attempting to engage in a good argument against the existence of such a barrier; I won't repeat it in detail but it's the matter of directly simulating the rules of biochemistry with a computer.)
"Hasn't" isn't the same as "won't" or "can't" or "is unlikely."
I don't understand your repeated insistence on calling Elon Musk a dumbass. You know that is an absurd notion, right? Furthermore, what business do you have declaring this to be accidental? Are you a mind-reader? It would only be accidental if Elon Musk were unaware of the near-term benefits of safe AI research. I guess starting from your unfounded assumption that he's a dumbass, you might circumstantially conclude that. I deny the assumption and your conclusion as well.
So, just to be clear -- one of my assertions, which you contested by way of direct denial, was "Musk's millions aren't being wasted here." You now agree with that assertion?
That wasn't what I asked, though, was it? My question wasn't whether we ought to be concerned with human evil. Of course we should be concerned. Very concerned. Maximally concerned, even.
My question was whether or not it can be fixed. To make the question more specific and easier to engage with: With Musk's $10M, AI researchers can change the way a self-driving car "thinks" about the trolley problem. If we gave you the $10M to distribute to whatever organization or recipients you thought best instead, could you use it to change the way humans think about the trolley problem? What effects can we expect from your chosen intervention and how likely is it to succeed? Based on your answer to the preceding, which is ultimately the more practical use of the money?
Which if thou dost not use for clearing away the clouds from thy mind
It will go and thou wilt go, never to return.
In order for something to be the simplest explanation, it first has to be an explanation, and in order to be an explanation, it must explain all observed facts, not just the ones you cherry-pick. Thoughts are associated with biochemical brain activity. They can also be transported from mind to mind without the associated biochemical activity. In fact, they can be archived and restored long after the original electrochemical activity has dissipated into pure entropy.
Your explanation has to account for the fact that thoughts are inorganically transportable from mind to mind.
Not so. Shakespeare's mind no longer exists. I can still reproduce some of his thoughts in my mind by reading his linguistic output. If this procedure is contingent on his mind, how can it take place when his mind does not exist?
I deny this thesis. We don't know how the human creative process works; randomness or pseudorandomness could be involved in a crucial way. If it is, and I deny that the output of a device with random inputs is thought, then I not only deny that Monkey Shakespeare thinks -- I may also be denying that Shakespeare himself thought! This is an absurdity. Thus the reasoning is invalid; it needs a further assumption that no randomness is involved in human thought.
Certainly I agree that meaning is assigned by minds. However, that does not speak to the point. The point is that these meanings can be transferred between minds without transferring any of the organic matter or reactants! Therefore the organic chemistry was not a crucial part of the meaning. It can be reconstructed without it.
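A small illustration of that transfer point (my own example, nothing to do with Shakespeare specifically): the same message can be moved through two physically unrelated carriers and reconstructed identically, with none of the original medium surviving the trip.

```python
# One piece of information, two unrelated physical carriers.
message = "to be or not to be"

as_bytes = message.encode("utf-8")        # carrier 1: a byte string
as_int = int.from_bytes(as_bytes, "big")  # carrier 2: a single integer

# Reconstruct the message from the integer carrier alone.
n_bytes = (as_int.bit_length() + 7) // 8
recovered = as_int.to_bytes(n_bytes, "big").decode("utf-8")

print(recovered == message)  # True: the carrier changed, the content did not
```

Nothing about the byte string persists in the integer except the pattern, yet the pattern suffices to restore the message exactly.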
Your whole thesis here, as I see it, is that minds depend crucially on organic chemistry. You are not going to be able to prove this thesis by referring to abstract properties of minds, because those properties would be the same however a mind came to exist. You must specifically explain how organic chemistry is irreducibly a part of the process.
Again, the rust never leaves your mind, but the thoughts do. You can reconstruct the thoughts long after the rust is gone. Thus the thoughts may be caused by the rust, but they certainly aren't the rust, and you certainly have no basis for concluding that the rust is necessary.
I don't think just switching two words around in a sentence constitutes an argument...
Information is incorporeal...
...
This is getting very close to a "computers don't have souls" argument.
Quite true. I drop any charge of anthropocentrism.
Strong AI, mind. Regular AI we already have. In fact, I played against five AI not too long ago in DotA.
Now hang on there. General Zod was brought up because he is a fictional character, as is the Terminator. General Zod, therefore, is a completely valid analogy. In both cases, a person is expending large amounts of resources against a nonexistent, fictional threat.
Now, evidently your aim is to demonstrate that spending money to fight the Terminator is neither as frivolous, nor as foolish as fighting General Zod. You're welcome to do so, but recognize that the burden of proof is on you there.
Umm... Take a look at that argument again. Even if we were to grant you premise #3 (I'm not myself sure what you mean by "energy-efficient"), you have a rather conspicuous problem. Namely, you are using the existence of humans — who, as you say, have human-level intelligence — to say that human-level artificial intelligence is permitted by nature.
Except, that doesn't follow, because humans aren't artificial intelligence. Matter of fact, they're precisely the opposite.
First of all, no it doesn't, as I demonstrated above.
Second of all, whether the existence of AI is "permitted by nature" was never a claim I took issue with. I have no issue with the claim that it is physically possible that a strong AI could exist.
*Shrug* Rather a meaningless distinction.
No, because in both cases, still ridiculous.
If I am afraid of Darth Vader coming after me, I am being ridiculous, because Darth Vader is fictional.
If I am afraid of Roy Batty coming after me, I am being ridiculous, because Roy Batty is fictional.
Whether your choice of boogeyman comes from the hard or soft science fiction genre doesn't make any difference. If I added Khal Drogo to that list, still ridiculous. If I added the boogeyman to that list, still ridiculous. They are each of them ridiculous.
I have never stated otherwise.
Not liking an analogy does not stop it from being pertinent, Crashing.
The fact that you solved a previous problem does nothing to suggest that you can solve the next.
Further, as I've said before, no computer in existence has ever approached sentience. Surely there is a world of difference from an abacus to a calculator, from a calculator to an early computer, from an early computer to a computer in the 1990s, and from a computer in the 1990s to Watson. This cannot be denied, and the achievements of computer scientists in this regard are to be lauded.
That being said, italofoca has a point, and that point stands: At no point has a computer transcended the model of the computational device that does what the human beings tell it to do.
Irrelevant nonissue. Something is sentient or it is not. Computers are not sentient. Saying, "Well sentience ain't all it's cracked up to be," does nothing to change this.
Nonexistence is a fantastic barrier against performing most things, you will find.
It is an absurd notion why? You're begging the question, asserting this without rationalization, whereas I have provided plenty of rationalization: he's spending large sums of resources in the hopes of fighting a nonexistent threat.
I just want to clarify here: you are aware that this guy has gone on record saying that he actually believes that this is a realistic threat, right? Now, maybe he is just trolling, and maybe this is all some elaborate act. But I don't think it is. I think he actually does believe what he's saying. So, again, I really have no problems whatsoever calling him a dumbass, and would like you to cite your reasons why calling him a dumbass is an "absurd notion."
Now, maybe you didn't realize this guy actually believed he was fighting killer robots. That's fine.
But if your comments were made with that knowledge, then please point out to me why killer robots posing an existential threat to humanity within the next five years is a rational belief, that I may see Elon Musk as the Cassandra-figure that he truly is.
If I were to spend 10 million dollars to fight against Darth Vader, would you think me a fool? Of course you would.
What about the Boogeyman? 10 million dollars to fight against the Boogeyman? How about it Crashing00? No, Highroller maintains foolishness there, because he's wasting his money. The Boogeyman doesn't exist.
So have at it, Crashing00. Demonstrate my "absurd notions."
If he wants to donate the money to further the research that this organization is actually doing, then he's fine.
If he's doing it to fight killer robots, which is exactly what he said he was donating it for, then he's a moron.
Yes, his moronic actions might provide a good deal of benefit. He might accidentally do a lot of good. Still a moron.
Incidentally, what's gotten you so worked up about this?
No, of course not. Your confusion lies in who I was accusing of doing the wasting.
If Musk believes he's fighting killer robots, then he's wasting his money.
That doesn't necessarily mean the Future of Life Institute is wasting the money Musk gave them.
I feel it speaks to exactly what you asked.
Depends on what you mean by fixed. It can certainly be made better.
I could probably change the way human beings think about the trolley problem, were that my aim, by donating the 10 million dollars to a think tank of people who are thinking about the trolley problem, as opposed to people who must first create an AI advanced enough to conceive of the trolley problem, and then program morality into it. Right?
Again, I don't want to make it seem like I'm anti-computer science or whatever, because I'm not. I don't want to make it seem like I have anything against people seeking to create AI who can conceive of trolley problems either, because I'm not. But if you're asking which is more impractical and/or more unrealistic — which is what the word "quixotic" means — there's really only one answer.
So you're denying premise 3? That humans are energy-efficient? Our average power output is 100 watts. Granted, "energy-efficient" is a vague term, but I'm not asking it to do very much work. I only require that the laws of nature allow intelligent systems to exist without, say, having to burn up whole stars to run them.
This premise is undeniable. It seems that you're alleging another problem, which is that "artificiality" is relevant here. It's not. Just modify the argument to say "intelligence" on both sides rather than "artificial intelligence." There is no reliance on how the intelligence was constructed, only that it exists.
Okay, so I trust I will not see further comparisons with things that can't exist, like General Zod?
Show me where I said your analogy is invalid because I don't like it, and then maybe you'll have something here.
The feasibility of powered flight was inferred from studying kites and gliders. Don't worry, though; I'll borrow General Zod's time-phone and ring up the Wright brothers and tell them Highroller says they're full of *****.
At the present time, computers are not sentient. This claim is wholly conceded to you and entirely uncontested. Now show me a premise I've stated or a conclusion I've made that depends on it!
"Airplanes are nonexistent, therefore they are impossible." Spot the problem with this argument. You will probably need a hint, so here it is: imagine making it in 1902.
Because 140 is probably a conservative estimate of his IQ? I mean, I can't find an authoritative source on that (the best I can find is that at age 16 he scored the highest nationwide on an IBM engineering exam), but the chances of a person who is literally stupid having his profile of achievements are astronomically small.
However absurd killer robots are, the idea of Elon Musk being stupid is still more absurd.
About what, bad arguments? I'm always "worked up" about those (though I'm not sure "worked up" is the right phrase). Probably 75% of the posts I make on here are because I read something that I think is nonsense and my love for the truth compels me to say something. About you calling Elon Musk a moron over and over again? Well, I could say some things about planks and motes; I think you can probably fill in the details yourself.
Right, I'm not going to argue any more about Musk's psychology -- the claim I put on the table was that the money was not ultimately being wasted, and I think we've resolved that issue; it's not, and my claim stands.
No, no, no. Of course you can get a room full of people to think about the trolley problem by putting a pile of money in the room and distributing it to everyone who agrees to think about the trolley problem. That's not the question.
Remember -- one nice thing about computers is that they are universal, so that if the AI researchers are successful in generating insight with the $10M, every self-driving car everywhere will make better decisions as a result, because the software can be installed on all of them. So the challenge isn't just to get some people to think about it for awhile -- it's to get many people to actually behave better in real moral quandaries.
What I'm trying to get at is the underlying value to society. Upping the "moral IQ" of self-driving cars has real, redeemable value in that they will make better decisions and cause fewer injuries. Getting a room full of people to temporarily think about the trolley problem until you run out of money does not necessarily result in any value at all. You only get value if your intervention is able to produce long-term, reliable changes in human moral decision making, and not just for the 10 people in the room.
Hell, there has already been plenty of well-funded and quality research about the trolley problem in both moral philosophy and neuroscience, and so far, general human behavior has not improved one whit as a result.
You're saying I'm wrong for saying that Elon Musk is a dumbass, so yes, it is a criticism of an argument of yours. The fact that you're threatening ragequitting on the grounds of my bringing up fictional characters indicates to me that you are offended. It is as though you have some sense that I am insulting your intelligence by bringing up absurdities. Does this accurately sum up your feelings toward the situation?
If so, keep in mind that these absurdities are perfectly analogous to Musk's frame of mind. So, before you rush to defend him next time, do consider this carefully, and then maybe you won't find yourself in this situation.
No, I am not denying premise 3. Reread what I actually said. I said I have no idea what you were saying with premise 3. That is not denying premise 3.
Ok, great. Premise 3 is full of win. Irrelevant really, considering the problem with the conclusion.
I could do that, but it would say absolutely nothing of interest, because all I would be saying is, "Human-level intelligence is possible due to the existence of the human-level intelligences that humans have." Which says nothing about AI, the subject of our discussion.
And all of THAT is pointless because, once again, I never said that strong AI were physically impossible.
Why? It remains science fiction, doesn't it? It is therefore perfectly analogous to Musk's scenario of killer robots, and therefore Zod remains part of the discussion. (Although I'm perfectly willing to switch him with a different comic book villain if you'd prefer.)
If you agree that my analogy is valid, then you have no business complaining about it.
If you disagree, demonstrate how it isn't instead of complaining about it.
If you cannot demonstrate how it isn't, then from whence comes your disagreement?
Why would I deny powered flight? I see it all the time.
Once again, just because you've solved a problem doesn't mean you can solve the next one.
Why the claim that the quoted statement was in response to, of course.
'I already addressed this argument. The idea that a computational device is "on rails" but a human being is not is an instance of the human specialness heuristic. It is not as though the computer follows rules but we fly free. We too follow inexorable rules of physics and biochemistry that we cannot break.'
Which is totally irrelevant. "Well human thinking has limitations" does nothing to change the fact that humans are sentient and computers aren't. Saying, "Well we too follow rules" is therefore a non-sequitur.
No, actually the problem with this argument is you, once again, are attempting to make me out as saying that strong AI were impossible when I already said that I do not believe that they are. Misrepresenting my arguments seems to be a trend with you.
And this makes him incapable of being foolish? Really now?
I wasn't aware being an engineer made one incapable of being foolish.
Except you've already conceded he believes in blatantly absurd things, and yet you're still not going to concede that he's a fool?
Your "love for the truth"? Really? So what does that love for the truth say when you assess the threat posed by humanity towards killer robots over the next five years?
Let's not confuse love of the truth with love of winning, Crashing00.
Because you've recognized he's a dumbass, yes. Thank you.
No, it doesn't. Musk is wasting his money. That the organization he's giving it to might do good things with it doesn't change that.
If I believe that the world is controlled by lizard people and the only way to defeat them is to throw $100 bills at them, I am wasting my money because I am being a moron. That doesn't mean the (very confused) people I just tossed $100 bills at cannot pick them up, keep them, and use them in fiscally responsible and benevolent ways. Nor does this change the fact that I'm still a moron who has lost a great deal of money even if they were to do so. Their behavior does not make mine retroactively intelligent.
What is the question and how is it relevant to whether or not Musk is a dumbass, which is what I thought the original topic of discussion was?
Then I repeat what I said before. If it's purely utilitarian value you wanted, invest those 10 million into medical research. You could help cure one of the myriad health conditions that kill more people annually than car crashes.
Now, once again, because I think you are still laboring under this misinterpretation of my arguments, I will clarify I'm not saying everyone in general who donates money to this organization, or anyone who donates money to AI research in general, is a dumbass or someone who is wasting their money. I don't believe that at all.
However, you did ask me which is more quixotic, and, once again, there is only one correct answer to that.
I feel like it would mostly just require him to read the list of things they said they're gonna do with his money.
I don't know what field of science actually treats thoughts like that.
Thoughts can't be inorganically transported, and we can't reproduce Shakespeare's thoughts through language. When someone writes, he alters physical reality by producing symbols, and these symbols are read and interpreted by another mind, which creates new thoughts of its own. This is the reason why different human minds may give different interpretations to the same piece of information. Language is a peculiar phenomenon because humans have developed the craft of altering the world in specific ways in an attempt to produce the thoughts they desire in another person's mind. When someone writes literature or music, the author is trying to cause certain thoughts or sensations in other people, but does not have perfect control over the reactions.
Of course you can try to be as objective as you can, as in teaching, laws and scientific papers. This is why more precise languages are created in those cases, so that thoughts can be reproduced with higher clarity, in an attempt to ensure everyone is actually interpreting the info in the same fashion. Language is a thought > information > thought' phenomenon; thought and thought' are not the same thing (although they can be equal, but not necessarily), and the information, the thing between the two thoughts, is very obviously physical: letter sequences on paper, patterns carved in stone, or pixels on an electronic screen.
Even electronic machines that are built not to misinterpret information are unable to circulate thoughts inorganically. Computers literally send information through electricity, radio waves and other methods.
Randomness can be involved (probably IS), but by "not work like the infinite monkey machine" I mean that the random experiment of human thinking is not the same random experiment that defines the infinite monkey machine (they can have the same sample space and the same sigma-algebra, but certainly not the same probability measure). Otherwise, if I took a sample of your thoughts I would find something like "efkvb3iu4gn91u3ign3iu4n81ugbreb".
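That distinction (same sample space, different probability measure) fits in a few lines of Python; the skewed weights below are made up for illustration, not real English letter frequencies.

```python
import random

random.seed(42)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def monkey_sample(n):
    # The infinite-monkey experiment: uniform measure over ALPHABET.
    return "".join(random.choices(ALPHABET, k=n))

# Same sample space, different measure: some letters weighted up.
WEIGHTS = [8 if c in "etaoins " else 1 for c in ALPHABET]

def skewed_sample(n):
    return "".join(random.choices(ALPHABET, weights=WEIGHTS, k=n))

print(monkey_sample(40))  # uniform gibberish
print(skewed_sample(40))  # gibberish drawn from a different distribution
```

Both experiments draw from exactly the same set of outcomes; only the measure over those outcomes differs, which is why a sample from one process looks nothing like a sample from the other.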
This was not an argument; it was a statement of my belief, as I felt I had to clarify my position.
It's as incorporeal as the unit circle defined by the set {(x,y) in R^2 : x^2+y^2=1}. You can certainly interpret it as an incorporeal thing, and you won't be missing anything if you do so. Information theory gains nothing by pondering whether information is written on paper or carved in stone. However, just because information theory wisely chooses to abstract away properties of no consequence to it doesn't mean those properties are not real or don't exist. And all this has no bearing on the whole AI debate, as I never challenged that computers could produce/reproduce certain pieces of information (which would be an utterly stupid position to have, as my very own infinite monkey example would contradict it).
Also, I said "thinking" and "information". While I stand by both being entirely physical, making information incorporeal does not challenge thinking being corporeal.
It's more that computers are not organic machines like us, so there's no guarantee they can do everything we can; and assuming they can't do everything we can is consistent with certain positions in neuroscience and chemistry.
I never talked about any soul.
http://www.newyorker.com/tech/elements/deepmind-artificial-intelligence-video-games
If you just use a little imagination, it is easy to see why Elon Musk spent his money. Humans are advancing faster and faster every year. If we have a computer that can learn how to play games and is constantly looking for better plays, it will not take that long to create human-level AI.
You know what the robot thought the second it became self-aware: "Damn... these people are crazy."
From the imaginary stories that we tell each other for entertainment, you say we know what the consequences will be?