This is, to quote Pauli, not even wrong. A definition is a substitution; anywhere you see a defined thing, you may replace it with its definition. Definitions are neither true nor false. Axioms, on the other hand, are basic presupposed truths at the bottom of an ontology. Crucially, changing definitions never changes what's true (other than by the superficial replacement entailed by the definitional swap). Changing axioms, however, can change what's true.
You know more about axioms than I do. Since I have always operated under the assumption that definitions were a form of axiom, I spent some time today and read this article to re-educate myself: http://www.phy.duke.edu/~rgb/Philosophy/axioms/axioms/node27.html
I hope it was correct; it does--at least--seem to explain where my misinformation was coming from.
Anyway, you (and Sam) are asking me to accept your definition of "ethics" as self-evident. As he puts it in that article you linked:
Again, the same can be said about medicine, or science as a whole. As I point out in my book, science is based on values that must be presupposed—like the desire to understand the universe, a respect for evidence and logical coherence, etc. One who doesn’t share these values cannot do science. But nor can he attack the presuppositions of science in a way that anyone should find compelling. Scientists need not apologize for presupposing the value of evidence, nor does this presupposition render science unscientific. In my book, I argue that the value of well-being—specifically the value of avoiding the worst possible misery for everyone—is on the same footing. There is no problem in presupposing that the worst possible misery for everyone is bad and worth avoiding and that normative morality consists, at an absolute minimum, in acting so as to avoid it. To say that the worst possible misery for everyone is “bad” is, on my account, like saying that an argument that contradicts itself is “illogical.” Our spade is turned. Anyone who says it isn’t simply isn’t making sense. The fatal flaw that Blackford claims to have found in my view of morality could just as well be located in science as a whole—or reason generally. Our “oughts” are built right into the foundations. We need not apologize for pulling ourselves up by our bootstraps in this way. It is far better than pulling ourselves down by them.
He (and you) are asking that we presuppose the definition of "ethics" to be the one he puts forth, since the logical outcome of that definition is one we can all agree is correct: the worst possible misery for everyone is "bad."
As I said, I agree with Sam's conclusion. I agree that the worst possible misery for everyone is "bad;" however, it does not then follow that "ethics" must be presupposed definitionally as "the well-being of 'conscious creatures.'" It might well be that that IS what ethics is, but I see no reason we can't find a reason for it. We don't need to just assume it to be true, or declare the definition as such. We can find the reason behind that fact.
I will restate: my claim is that to fail to recognize the application of the word "good" to the well-being of conscious creatures is to either misuse or redefine the word, making the argument semantic from that point forward. This claim does not have the form "P is not known to be true, therefore it must be false" for any P.
There is nothing to be said in a semantic argument that could ever make one party right and the other wrong.
Have you ever agreed with a theist to define God as "omnibenevolent" and then tried to argue His actions aren't moral?
The right definitions can very much result in one party being right or wrong.
Nor can they be "rejected" in the sense that an actual truth claim could. You must accept my definition of ethics in order to understand my claims. That doesn't mean you have to agree with my claims or that my claims can't be wrong -- it means that I intend them to be interpreted in a certain way and I am making that manifest by providing the definitions.
Right, I must understand your definition in order to understand your claims; I don't need to accept it as correct.
I understand what you and Sam are saying; pretty sure I do anyway.
In an argument they certainly can. If you agree that "measured IQ" is defined as "intelligence"--for example--then you've agreed that Whites are smarter than Blacks. If you were trying to argue that isn't true, you lose the argument as soon as you agree to that definition.
Getting your partner to agree to certain favorable definitions is a pretty standard debate tactic, Crashing00. One of the reasons I get very very suspicious when someone starts throwing statements around like "self-evident" as Sam does.
A person who defines ethics as coming from God still has to prove that his newly-defined ethics exists and can be used to draw conclusions. That is what an intelligent discussion with a person who defines ethics as coming from God would entail. Such a discussion might end with the other party saying "the thing named by your definition doesn't exist" or "your definition doesn't mesh with empirical reality" -- but it could never end with "your definition is wrong/circular/question-begging." Arguments are question-begging, not definitions.
If the argument was that God isn't ethical, then it would most certainly beg the question. What if I told you one of the properties I am defining the Christian God to have is "exists"? As in, "He exists" is part of His definition. Would you tell me that definition can't be wrong?
Now once you come to understand the claims I am making by plugging in the definitions I'm telling you I'm using, then I am happy to proceed forward and argue that my claims make sense: that the well-being of conscious creatures is a thing that exists and can be increased and/or decreased in various ways. But I thought you already believed that.
I understand the claims you and Sam are making, pretty sure I do. I even agree with your conclusions. I just don't find your understanding of what ethics is "self-evidently true." But--to be fair--I find very little self-evidently true.
I think there is a reason why we feel ethics is what it is. That is what this thread is meant to explore.
I can find nothing in this paragraph that questions the reasoning at hand. Can you cite the exact argument of Harris' that you're criticising with this and point to where it applies?
I find the argument he makes in Chapter 1 that starts with "While I do not think anyone sincerely believes that this kind of moral skepticism makes sense, there is no shortage of people who will press this point with a ferocity that often passes for sincerity." to be incorrect. It has consciousness necessarily assigning meaning to itself, because it rests on the assumption that only consciousness can assign meaning.
I don't see how this helps. Whatever your framework purports to be of value, if it arises from an empirical world state, the experience machine can give you an experience that will increase your perception of that value. If telos comes from evolution, then we'll load a program into the experience machine that gives you the experience of being vastly more evolutionarily-fit. Then what?
This is why the experience machine is such a *****. You can't really cheat your way around it like this. I mean, if this answer worked, anybody who had ever faced an experience-machine problem (including Harris himself) could just say the same thing: you're not really making the world happier by jumping into a fake world where everyone is happy, are you?
The experience machine can make me think wrong is right, but it can't make wrong right... unless you are assuming that whatever you think is right is right.
When consciousness is assigning value to itself first, then you don't have anything before consciousness grounded in reality. Or--to put it another way--all other considerations are secondary to the consciousness's happiness. However, I am putting something else BEFORE the consciousness's well-being as having more value than that well-being. While you can use your machine to trick me, all you will be doing is making me objectively wrong about something. But under Sam's setup that is not true, since all he cares about is that the consciousness's well-being is increasing.
The machine is truly fulfilling Sam's ultimate goal, but it's not truly fulfilling mine. That's the difference, and why Sam's argument has more trouble with the machine than mine does.
There is a direction to e.g. gravity as well, and even an end "goal" if you insist on abusing that word, namely the center of mass of a system. This does not imply teleology. The only things capable of generating teleology by satisfying the finality requirements thereof are conscious beings; beings who can verify statements of the form "I want this thing to do this."
The direction of gravity has nothing to do directly with the sharpness or dullness of the rock, while the direction of evolution has everything to do with the fitness of the species. As you well know.
Whether or not you want to accuse me of abusing words, evolution has a direction and we are a direct outcome of that.
Why should I start from a different set of axioms? I'm perfectly happy with the ones I've got now and you've pointed me to no consequence of my axioms which gives me even the slightest pause to reconsider them. In fact, better my set than yours, because yours seem to evince some truly devastating errors that mine do not.
...
If you don't want to debate about the definition of ethics, why are you in a thread about debating the definition of ethics?
I'm honestly beginning to doubt you understand what I'm saying at this point, not the other way around.
In fact, let's unwind this back to topic. I didn't actually come here to make an affirmative claim about the underpinnings of ethics in the first place. The question I raised is whether or not your affirmatively-stated basis for ethics has irrefragable ontological (and now possibly epistemological) problems. I hereby drop all my affirmative claims about the basis for ethics until such time as we have settled that question.
K. Seems reasonable given the nature of the thread.
I don't accept the premise. (Well, I do accept the premise about the creatures existing without empathy; I don't accept the premise about empathy being necessary for the recognition of others' well-being.) Why do I need empathy to reach a conclusion that tells me to act to avoid another's discomfort? Why couldn't I have a rule that says "avoid others' discomfort" irrespective of my own ability to process discomfort or emotion? A blind man can't see color, but he can still know that there are colors.
An unempathetic creature can certainly understand another's discomfort, but they have no natural reason to care about it.
A sadist--for example--certainly understands someone else's pain, but not as something they'd want to prevent.
Well, then they are immoral. Full stop. Their behavior reads directly as the negation of morality. Was that supposed to be hard? I mean, honestly, given all the good questions that can be asked of any account of morality -- experience machines, trolley problems, etc etc -- isn't this one just a complete waste of time?
You're confusing my argument with the one Sam talks about in his book dealing with human sociopaths.
But, I do agree if we accept your definition, they're just that.
You know more about axioms than I do. Since I have always operated under the assumption that definitions were a form of axiom, I spent some time today and read this article to re-educate myself: http://www.phy.duke.edu/~rgb/Philosophy/axioms/axioms/node27.html
I hope it was correct; it does--at least--seem to explain where my misinformation was coming from.
That is indeed a good resource; I particularly commend your attention to the bulleted points where he distinguishes a definition from an axiom. I am going to take the liberty of quoting from that article in the sequel.
Anyway, you (and Sam) are asking me to accept your definition of "ethics" as self-evident.
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
He (and you) are asking that we presuppose the definition of "ethics" to be the one he puts forth
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
As I said, I agree with Sam's conclusion. I agree that the worst possible misery for everyone is "bad;" however, it does not then follow that "ethics" must be presupposed definitionally as "the well-being of 'conscious creatures.'"
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
(I'm getting RSI from pushing command-v...)
It might well be that that IS what ethics is, but I see no reason we can't find a reason for it. We don't need to just assume it to be true, or declare the definition as such. We can find the reason behind that fact.
I don't think even Sam Harris would object to an attempt to derive his views from something further anterior. The objections that you seem to be levying, however, are either of the "get back in the swamp" kind (above) or of the kind that equally well impale your own alternative (below).
Which is one of the reasons you shouldn't try to define what ethics is at the start of a debate about what ethics is.
There can be no debate about ethics in the absence of a shared understanding of what the word "ethics" means. Such a debate has no ground on which it can be fought. Ethics is ponies. Debate over, I win. There is no way for you to say I'm wrong; even bringing in your own definition is pointless because then you will just be talking about non-ponies while I rattle on about ponies.
Debates can only be had on the ground of shared definitions. The debate then becomes whether the thing named by the definition is coherent, extant, relevant, logical, et cetera.
You could say "the thing you have named ethics is incoherent" or "the thing you have named ethics is nonexistent" or "the thing you have named ethics is irrelevant."
Have you ever agreed with a theist to define God as "omnibenevolent" and then try to argue His actions aren't moral?
In debates about God I accept without any quibble whatsoever the theist's preferred definition of God. If it then turns out that they claim God exists and contradictions result therefrom, I say that by reductio, God as they have defined him doesn't exist.
The right definitions can very much result in one party being right or wrong.
Quote from Duke article »
You cannot prove that "penny" stands for slivers that might be copper, zinc, or whatever, produced by an authorized governmental institution, with one of several possible classes of history and morphology; you can only assign the word to refer to that class of actual objects (each of which is a unique individual with its own specific differences) by means of a sufficiently precise definition.
Definitions can't be debated, can't be proven right or wrong, and don't determine truth. Sorry!
Right, I must understand your definition in order to understand your claims; I don't need to accept it as correct.
Category mistake. Definitions can't be correct or incorrect.
In an argument they certainly can. If you agree that "measured IQ" is defined as "intelligence"--for example--then you've agreed that Whites are smarter than Blacks. If you were trying to argue that isn't true, you lose the argument as soon as you agree to that definition.
If I define intelligence as measured IQ, then I can use command-F in my word processor and replace everything like "intelligent" or "smart" with "measured IQ." After doing that, we find that I am arguing about the measured IQ of whites and blacks. If it turns out that the measured IQ of whites is higher than blacks, then why would anyone interested in the truth about measured IQ ever deny that fact?
Getting your partner to agree to certain favorable definitions is a pretty standard debate tactic, Crashing00.
If you mean it can be used to trick or ensnare incompetents, then yes. That is not how I am using it, and I don't think Harris is using it that way either. I take you for a peer, not a fool. I have no doubt that you are capable of reaching an understanding of basic epistemology, metaphysics, and the differences between them. I (usually) debate for dialectical purposes, not rhetorical ones. I am not interested in convincing you "at any cost" or fooling you.
One of the reasons I get very very suspicious when someone starts throwing statements around like "self-evident" as Sam does.
You certainly should always be suspicious of the phrase "self-evident." That is not the problem.
If the argument was that God isn't ethical, then it would most certainly beg the question.
You simply can't define God to be ethical and then argue that he isn't. When I disagree with a proponent of divine-command ethics, I am not saying that his definition is wrong or that God, by his lights, is actually unethical. I am saying that the consequences of his choice of worldview should give him pause. If he's really okay with the genocide of the Amalekites, then he's a moral monster unreachable by rational argument. The fact that his closed worldview is self-endorsing is not something that I can correct from outside. (Didn't we talk about this in the worldview thread?)
Ultimately, an appeal to a divine command theorist to change his view on ethics isn't (and can't be) an internal attack on his own axioms; rather, it's an appeal to his own ethical intuition and its collision therewith. It's almost an ad misericordiam argument. (Sometimes you can get them with their own axioms, though; for Christians, you can use that scripture that says "the law is written on your heart" or whatever.)
What if I told you one of the properties I am defining the Christian God to have is "exists"? As in, "He exists" is part of His definition. Would you tell me that definition can't be wrong?
This is the basis of one of the original ontological arguments for God. It's a tricky question; so tricky that Goedel himself thought the argument was valid, and it has been reformulated in more modern times by Alvin Plantinga into a still much deeper argument about the axiomatization of modal logic. A very terse summary of my answer is that I would say that your definition is incoherent because existence is not a predicate. To paraphrase the Duke article, definitions are purely descriptive -- they can't call things into existence, they can only apply (or fail to apply) to things that already exist. (If you want a longer discussion of this issue you are going to have to make a new thread.)
I understand the claims you and Sam are making, pretty sure I do. I even agree with your conclusions. I just don't find your understanding of what ethics is "self-evidently true." But--to be fair--I find very little self-evidently true.
Is it true that 1+1=2?
I think there is a reason why we feel ethics is what it is. That is what this thread is meant to explore.
Right, and let's get back to that. I still haven't heard an argument from you on the basic point that makes sense.
Once again, the foundation or explanation for our ethics can't be evolution. It's just a description of a natural process, not an explanatory terminus of anything. And that's not because of any of the definitional faffing around that we've been doing -- it's because of the ontology of evolution itself, which, as far as I can tell, we both agree on. Even under your definitions and axioms, it still doesn't work.
Evolution selects against behaviors/traits because they are not conducive to reproduction. The behavior/trait has to already be not conducive to reproduction before evolution can "identify" it as such. An evolutionary explanation for ethics would still have to identify those behaviors not conducive to reproduction and explain why that is so -- and in that sense, adding evolution does nothing. You still have to answer the same question that other empirical ethicists have to answer, only you've just added some unnecessary junk in between, because now your theory is totally inapplicable to things that may not have evolved.
The response to this that you registered two posts ago was a kind of tu quoque: you'd only accept this if Sam Harris's answer was somehow better. Well, let's say you're right and it isn't -- so what? That doesn't absolve you from having to square your own theory with basic ontology.
I find the argument he makes in Chapter 1 that starts with "While I do not think anyone sincerely believes that this kind of moral skepticism makes sense, there is no shortage of people who will press this point with a ferocity that often passes for sincerity." to be incorrect. It has consciousness necessarily assigning meaning to itself, because it rests on the assumption that only consciousness can assign meaning.
That's not an assumption. What else can assign meaning besides consciousness? We were talking in the other thread and your response to the point that only conscious creatures can realize moral theories was something like "well, duh." Why "duh?" Because I think your "duh" and Harris's "duh" are the same "duh." Really, Harris is making a reductio here -- the state of affairs in which there are no conscious creatures leaves no room for evaluative judgments.
The experience machine can make me think wrong is right, but it can't make wrong right...
Not the point! Okay, both you and Harris are trying to articulate an empirical theory of morality. That's the appeal, right? No mumbo-jumbo, no mysticism, no Gods, no appeal to anything outside of what can be observed -- you can measure morality solely by examining the state of the universe.
Well, that means that whatever empirical combination of states you are assigning the word "good" to, the experience machine can give to you in its entirety. It's not that the experience machine is changing what you think is good -- it's that it's giving you a thing that you cannot distinguish from "good" exactly as you have defined it without the appeal to something non-empirical.
When consciousness is assigning value to itself first, then you don't have anything before consciousness grounded in reality. Or--to put it another way--all other considerations are secondary to the consciousness's happiness. However, I am putting something else BEFORE the consciousness's well-being as having more value than that well-being.
Is the thing you are putting before well-being empirical and grounded in observation and experience, or is it not? If it is, then the experience machine can by definition give it to you and you are just as vulnerable to the experience machine as anything else.
The machine is truly fulfilling Sam's ultimate goal, but it's not truly fulfilling mine. That's the difference, and why Sam's argument has more trouble with the machine than mine does.
Then your ultimate goal cannot truly be empirical because the experience machine can duplicate everything empirical. So if you rely on this kind of symmetry-breaking, you're introducing non-empirical elements into your ethical theory. Now I don't know whether you care, but your theory certainly immediately loses its appeal to anybody looking for a better version of Harris.
Why? What are you basing this assertion on?
Because unconscious matter doesn't have wishes, desires, inclinations, or anything else that could constitute teleological finality. And how do I know that? Well, a photon doesn't have enough internal states to encode a wish, desire, or inclination, nor does an electron, and so on and so forth. It's only when you get up to systems as complex as a nervous system (and I won't quibble about the boundary; just that it's a lot higher than brute matter) that you get the ability to encode the kind of information that you need to verify a teleology.
The direction of gravity has nothing to do directly with the sharpness or dullness of the rock, while the direction of evolution has everything to do with the fitness of the species. As you well know.
I beg your pardon? I am sure that the direction of gravity came into play many times in the collisions that formed our knife-rock's history and eventually gave it its knifelike shape.
If you don't want to debate about the definition of ethics, why are you in a thread about debating the definition of ethics?
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable
(Dammit, just as my RSI was starting to subside. Go figure.)
I'm honestly beginning to doubt you understand what I'm saying at this point, not the other way around.
I doubt that even you understand what you're saying, at least in its fullest implications.
An unempathetic creature can certainly understand another's discomfort, but they have no natural reason to care about it.
A sadist--for example--certainly understands someone else's pain, but not as something they'd want to prevent.
Yes, but you are speaking as though sadism is a necessary state of affairs; that anyone lacking in empathy would perforce become a sadist. I surely don't deny that it is possible that an unempathetic creature (or even an empathetic creature) winds up being sadistic and therefore immoral. I simply claim that there's no reason it couldn't be the other way around. Blind painters are a thing.
You know more about axioms than I do. Since I have always operated under the assumption that definitions were a form of axiom, I spent some time today and read this article to re-educate myself: http://www.phy.duke.edu/~rgb/Philosophy/axioms/axioms/node27.html
I hope it was correct; it does--at least--seem to explain where my misinformation was coming from.
That is indeed a good resource; I particularly commend your attention to the bulleted points where he distinguishes a definition from an axiom. I am going to take the liberty of quoting from that article in the sequel.
Anyway, you (and Sam) are asking me to accept your definition of "ethics" as self-evident.
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
He (and you) are asking that we presuppose the definition of "ethics" to be the one he puts forth
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
As I said, I agree with Sam's conclusion. I agree that the worst possible misery for everyone is "bad;" however, it does not then follow that "ethics" must be presupposed definitionally as "the well-being of 'conscious creatures.'"
Quote from Duke article »
Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
(I'm getting RSI from pushing command-v...)
And therein lies the problem with your argument.
If you were just whitewashing the word 'ethics' to mean nothing but "caring about conscious creatures" I would have no problem with it; I would just point out that it's not the 'ethics' I'm talking about. However--unless I am very much misreading you--you have chosen the word "ethics" because it also implies things like "stuff you should do." You ARE NOT simply specifying an object with your definition, you are calling forth other properties that word traditionally refers to and attempting to link it to "caring about conscious creatures."
THAT is what I'm objecting to.
If you simply want to label "caring about conscious creatures," why not make up a new, hitherto meaningless, word instead? Why pick one already so loaded with meaning?
I don't think even Sam Harris would object to an attempt to derive his views from something further anterior. The objections that you seem to be levying, however, are either of the "get back in the swamp" kind (above) or of the kind that equally well impale your own alternative (below).
I'm attempting to lead us out of the swamp with something more legitimate than "we should just leave the swamp," because I don't think that's a good enough reason. Or--at least--explain why we have an inclination to leave the swamp.
Such a debate has no ground on which it can be fought. Ethics is ponies. Debate over, I win. There is no way for you to say I'm wrong; even bringing in your own definition is pointless because then you will just be talking about non-ponies while I rattle on about ponies.
I hope this part is meant to be ironic, since this is what you're trying to do with your loaded definition. You're mocking yourself, not me.
If I define intelligence as measured IQ, then I can use command-F in my word processor and replace everything like "intelligent" or "smart" with "measured IQ." After doing that, we find that I am arguing about the measured IQ of whites and blacks. If it turns out that the measured IQ of whites is higher than blacks, then why would anyone interested in the truth about measured IQ ever deny that fact?
You should have kept reading: "This definition itself is expressed in words that require definition. Ultimately any given dictionary is circular - it defines words in terms of other words in the dictionary and cannot be understood unless you already understand those words."
You would 'command-F' the very premise of the debate and make it so you lost before you even started.
If you mean it can be used to trick or ensnare incompetents, then yes. That is not how I am using it,
Not to sound obstructive, but--yes--yes you are, for this debate. You might not know that you are, but you are.
This is a debate about how one arrives at ethics from other--for lack of a better term--more scientific principles. You are attempting to just skip to it (which would also end the debate entirely, since that's the end goal not the starting point for this).
Evolution selects against behaviors/traits because they are not conducive to reproduction. The behavior/trait has to already be not conducive to reproduction before evolution can "identify" it as such. An evolutionary explanation for ethics would still have to identify those behaviors not conducive to reproduction and explain why that is so -- and in that sense, adding evolution does nothing. You still have to answer the same question that other empirical ethicists have to answer, only you've just added some unnecessary junk in between, because now your theory is totally inapplicable to things that may not have evolved.
As I outlined in the OP, the point would be to use ethics to aid social evolution. Social evolution is going to happen anyway; societies more fit (with "better behaviors") than other societies have historically beaten out less fit ones, and will continue to. The point of my version of ethics is simply to acknowledge this fact and use it in guiding society's behavior--what society should or shouldn't do in that context.
The response to this that you registered two posts ago was a kind of tu quoque: you'd only accept this if Sam Harris's answer was somehow better. Well, let's say you're right and it isn't -- so what? That doesn't absolve you from having to square your own theory with basic ontology.
I'm becoming more and more skeptical that you even know what I'm arguing at this point. I feel like I should be asking for a recap of my position in your own words, like the one I gave of yours.
That's not an assumption. What else can assign meaning besides consciousness? We were talking in the other thread and your response to the point that only conscious creatures can realize moral theories was something like "well, duh." Why "duh?" Because I think your "duh" and Harris's "duh" are the same "duh." Really, Harris is making a reductio here -- the state of affairs in which there are no conscious creatures leaves no room for evaluative judgments.
Just because I can't think of any does not mean there aren't any (you're making an argumentum ad ignorantiam, and while I don't remember doing that on some other thread, maybe I did).
Anyway--even assuming this to be the case--it does not then follow that consciousness is required to assign value to itself first, or even at all. Sam--being a neuroscientist--wants to say consciousness is most important. I--being a physicist--want to say the physical is most important. That is what I'm choosing to assign value to first: reality. Because of my physical limitations, I am forced to perceive reality through my consciousness, but that does not mean IT is more important than reality.
And I wish I did not have to rely on such a ****ty filter.
Well, that means that whatever empirical combination of states you are assigning the word "good" to, the experience machine can give to you in its entirety. It's not that the experience machine is changing what you think is good -- it's that it's giving you a thing that you cannot distinguish from "good" exactly as you have defined it without the appeal to something non-empirical.
Right, it can trick me into believing something untrue by presenting me with something counterfeit I am unable to distinguish from the empirical.
It can't change reality, only my perception of it.
Is the thing you are putting before well-being empirical and grounded in observation and experience, or is it not? If it is, then the experience machine can by definition give it to you and you are just as vulnerable to the experience machine as anything else.
I am just as vulnerable to the experience machine as Sam is--yes--but my theory isn't. However, I wouldn't know that, sure.
It would take an outside observer to see that Sam's goal is being fulfilled and mine isn't.
Because unconscious matter doesn't have wishes, desires, inclinations, or anything else that could constitute teleological finality. And how do I know that? Well, a photon doesn't have enough internal states to encode a wish, desire, or inclination, nor does an electron, and so on and so forth. It's only when you get up to systems as complex as a nervous system (and I won't quibble about the boundary; just that it's a lot higher than brute matter) that you get the ability to encode the kind of information that you need to verify a teleology.
I'd like to rewrite my statements in ways that wouldn't imply teleology, and should probably try harder.
I completely understand that such phrasing weakens my argument, which is why I put that first '*' in the OP.
I beg your pardon? I am sure that the direction of gravity came into play many times in the collisions that formed our knife-rock's history and eventually gave it its knifelike shape.
An unempathetic creature can certainly understand another's discomfort, but they have no natural reason to care about it.
A sadist--for example--certainly understands someone else's pain, but not as something they'd want to prevent.
Yes, but you are speaking as though sadism is a necessary state of affairs; that anyone lacking in empathy would perforce become a sadist. I surely don't deny that it is possible that an unempathetic creature (or even an empathetic creature) winds up being sadistic and therefore immoral. I simply claim that there's no reason it couldn't be the other way around. Blind painters are a thing.
Certainly in the case of humans the more common path to flourishing leads away from sadism, but we're not talking about humans in this part. One could construct our "created creatures" such that sadism is the natural state for them; in fact, the only way they can survive and reproduce is by indulging in their sadism. The only way they can be fit as a species is thusly.
Under your definition they would be fundamentally "evil." Your ethics would tell them they "should" die out as quickly as they can.
If you were just whitewashing the word 'ethics' to mean nothing but "caring about conscious creatures" I would have no problem with it; I would just point out that it's not the 'ethics' I'm talking about.
Let's get back on track. As I said before, I am (attempting to) drop my affirmative claims about the basis for ethics. Nor am I going to spend any more time defending Harris. Harris could be totally wrong, but I promise you he is not as wrong as you are. This thread is about your claims, and I'm going to skip over everything that doesn't speak to your claims or underlying notions of logic that we need to deal with them.
You ARE NOT simply specifying an object with your definition, you are calling forth other properties that word traditionally refers to and attempting to link it to "caring about conscious creatures."
THAT is what I'm objecting to.
Then object to yourself! Anyone trying to lay out a theory of ethics is eventually going to run into very big problems with other people's purported definitions. Most people think ethics constitutes "what God has set forth." They're going to reply to you the same way you're replying to me, and what the hell are you going to do about it then?
The one sense in which it is "legitimate" to "debate" definitions is under the auspices of descriptive linguistics: one can say that a term as defined is not what "most people mean" when they use the same combination of symbols. Well, most people mean "divine command ethics" when they say ethics. So Harris, yourself, me, and every moral realist ever might as well just pack it in.
If you want to simply label "caring about conscious creatures" why not make up a new, hitherto meaningless, word instead? Why pick one already so loaded with meaning?
Ditto "the behaviors that maximize fitness."
And it should be a neutral definition, not a definition that includes your conclusions.
For as long as you believe definitions can include conclusions, your positions will be ultimately unintelligible to those operating under the auspices of basic logic and philosophy.
I hope this part is meant to be ironic, since this is what you're trying to do with your loaded definition. You're mocking yourself, not me.
No, it's intended to get you to understand something about the ground on which we are met. Definitions can only be analyzed in terms of the arguments and conclusions that make use of them. The only reason that you think anyone is being mocked here is because you don't know what is being talked about.
You should have kept reading: "This definition itself is expressed in words that require definition. Ultimately any given dictionary is circular - it defines words in terms of other words in the dictionary and cannot be understood unless you already understand those words."
You would 'command-F' the very premise of the debate and make it so you lost before you even started.
Complete non-sequitur. He's right, of course -- definitions are ultimately circular. That's why it's a good thing that definitions don't constitute truth claims, arguments, proofs, premises, or assumptions.
Yes, and I'm going to go out on a limb and say you can also prove it's true.[1]
That argument correctly proves using the Peano axioms that S0 + S0 = SS0. I'm not sure why I should believe that S0 = 1 or SS0 = 2. Can you show me why I should believe those?
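As an aside for readers: the derivation in question really is mechanically checkable. Here is a minimal sketch in Lean 4 (which is an assumption on my part, not anything either poster used), writing the successor S as Lean's built-in `Nat.succ`:

```lean
-- Under the Peano axioms, addition is defined by recursion on the
-- second argument: n + 0 = n and n + succ m = succ (n + m).
-- So S0 + S0 unfolds to S(S0 + 0) = SS0, and the proof is by
-- definitional equality alone.
example : Nat.succ 0 + Nat.succ 0 = Nat.succ (Nat.succ 0) := rfl
```

Note this sidesteps the objection being raised: the formal system proves S0 + S0 = SS0, but the claim that S0 "is" 1 and SS0 "is" 2 is an interpretation of the symbols, not a theorem.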
Did you read the OP?
But, now I'm wondering how you're even arguing with me if you don't think I'm making any claims.
This argument is going to go the way of our last one if you keep engaging in bad-faith debate or straw manning me. How would a conscientious reader of my position ever think that it entails that you are not making any claims?
As I outlined in the OP, the point would be to use ethics to aid social evolution.
Social evolution is going to happen anyway; societies more fit (with 'better behaviors') than other societies have historically beaten out--and will continue to beat out--ones less fit. The point of my version of ethics is simply to acknowledge this fact and use it in guiding society's behavior: what society should or shouldn't do in that context.
Okay, if I understood this to be your claim I don't know that I'd be debating, because it's circular on the face of it. Ethics aids evolution which is ontologically prior to ethics? I thought you were claiming to be able to derive normative principles only from evolution. Specifically, this:
Quote from OP »
Now--with his telos in hand--we can evaluate the behavior of man. Certain things that man does would take away from his fitness and some things man does would make him more fit. This is not to say that man should be evolving his genes, because that would be changing the nature of man; changing his telos. However, working within the structure already provided by evolution, behaviors can be seen to increase man's fitness.
And--indeed--this has already happened. If a behavior works better within that nature, those men following it thrive. This fitness is something physical and can be physically quantified. Thus, it can be modeled and studied by the scientific method. Science can tell us which behaviors would work within our nature (our genes) to make us more fit, and which would not. Within this behavioral structure we can start to call some behaviors "good" and some "bad" (or if you want to be more dramatic, "evil").
Thus, starting from evolution, we can now evaluate actions in a moral way. A normative ethics system is formed.
This is the claim I am addressing. Do you still maintain this?
I'm becoming more and more skeptical you even know what I'm arguing at this point.
Again, I don't think either of us knows what you're arguing. There appears to be a deeper problem, which is that I don't think you can know what you're arguing, because of some confusion on basic philosophical concepts. However, that does not preclude the very real possibility that I don't know what you're arguing. To be clear, I thought (putting aside our sub-arguments) your general assertion here is what I just quoted. Am I wrong?
Right, it can trick me into believing something untrue by presenting me with something counterfeit I am unable to distinguish from the empirical. It can't change reality, only my perception of it.
What do you mean, "untrue?" Scientific truth, to the extent it exists, is grounded in the empirical. Of course taking basic skepticism into account, we never declare anything to be true with certainty. But the experience machine gets you as close to truth as you could ever possibly get. Again, core point: There is no empirical basis for asserting that the experience machine is "fooling" you. Although "fooling" is usually an empirical thing, the way you're using it here, it isn't. So if your notion of ethics relies on your ability to distinguish this kind of "fooling," it's non-empirical.
I am just as vulnerable to the experience machine as Sam is--yes--but my theory isn't. However, I wouldn't know that, sure.
No, your theory is too. Any theory that attempts to give ethics an empirical basis is vulnerable to having that basis ripped out.
It would take an outside observer to see that Sam's goal is being fulfilled and mine isn't.
Oh, an "outside observer" eh? Someone who was completely separated from the observational context of the universe you believe yourself to inhabit? Nothing non-empirical there, no sir.
I'd like to rewrite my statements in ways that wouldn't imply teleology, and should probably try harder. I completely understand that such phrasing weakens my argument, which is why I put that first '*' in the OP.
It seems to me that to the extent that your argument expressly depends on telos, it doesn't just weaken your argument -- it undermines it altogether.
As a byproduct, not a product.
Again, pardon? There is no such distinction in the universe-of-rocks. Everything in nature is a product. The idea of a byproduct only makes sense when there's something that's interested in particular products but disinterested in others, and can label the ones he's disinterested in as byproducts. You have to presuppose telos to get byproducts.
Certainly in the case of humans the more common path to flourishing leads away from sadism, but we're not talking about humans in this part. One could construct our "created creatures" such that sadism is the natural state for them; in fact, the only way they can survive and reproduce is by indulging in their sadism. The only way they can be fit as a species is thusly.
Right; something that is hypothesized to meet all the conditions of immorality is immoral. There's nothing for that.
Under your definition they would be fundamentally "evil." Your ethics would tell them they "should" die out as quickly as they can.
Hold on there, slick. I know I am supposed to be dropping my ethical claims, but this is such a calumny that I felt compelled to respond. Ethical creatures would recognize even these sadists as conscious and would therefore not kill them. And I most certainly don't assert any deontological principle that says "anyone who is unethical must kill themselves off and/or fail to reproduce." That actually seems like it might be your thing, what with your connection between reproduction and ethics -- a connection that could only arise incidentally on my position.
I think the fate of those sadists under my sort of ethics would be something like imprisonment or exile; the isolation of all sadist-creatures in a place where they could do their thing without involving any non-sadistic creatures.
Anyone trying to lay out a theory of ethics is eventually going to run into very big problems with other people's purported definitions. Most people think ethics constitutes "what God has set forth." They're going to reply to you the same way you're replying to me, and what the hell are you going to do about it then?
It depends on the nature of the discussion. But, I think my posts around the debate forum would be good examples of how I would conduct myself in different debates, depending on what's being asked and how it's being asked.
The one sense in which it is "legitimate" to "debate" definitions is under the auspices of descriptive linguistics: one can say that a term as defined is not what "most people mean" when they use the same combination of symbols. Well, most people mean "divine command ethics" when they say ethics. So Harris, yourself, me, and every moral realist ever might as well just pack it in.
If you assume -> Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
Then, yes, you might as well give up at that point.
I am trying to derive the term, not trying to define it at the start. I am using an argument and reasoning chain to arrive at what I mean by "ethics." I am not handing out some definition that must either be agreed on or rejected right at the start.
You can agree with or reject my conclusion if you want, but just be aware it is a conclusion, not a definition.
This argument is going to go the way of our last one if you keep engaging in bad-faith debate or straw manning me. How would a conscientious reader of my position ever think that it entails that you are not making any claims?
I am most certainly not doing that, and I wasn't doing it last time either, as I attempted to explain in PM.
But, I am getting very tired of you treating me like some kind of "debate criminal" every time we interact. Can't we have a conversation without you accusing me of a "bad-faith debate?"
Okay, if I understood this to be your claim I don't know that I'd be debating, because it's circular on the face of it. Ethics aids evolution which is ontologically prior to ethics?
We can use ethics to aid social evolution, not biological evolution.
Biological evolution exists ontologically prior to ethics, social evolution exists in tandem with ethics.
What do you mean, "untrue?" Scientific truth, to the extent it exists, is grounded in the empirical. Of course taking basic skepticism into account, we never declare anything to be true with certainty. But the experience machine gets you as close to truth as you could ever possibly get. Again, core point: There is no empirical basis for asserting that the experience machine is "fooling" you. Although "fooling" is usually an empirical thing, the way you're using it here, it isn't. So if your notion of ethics relies on your ability to distinguish this kind of "fooling," it's non-empirical.
A person in an insane asylum might think he is performing empirical studies, and might not be able to tell that he is not; however, his conviction doesn't make it true. His inability to distinguish between imagination and reality does not mean that there isn't a difference. Or, are you claiming this isn't true?
Sam's theory cares only about the well-being of the mind. So the person in the insane asylum, as long as their consciousness is experiencing the sensation of well-being, is fulfilling that requirement.
I am just as vulnerable to the experience machine as Sam is--yes--but my theory isn't. However, I wouldn't know that, sure.
No, your theory is too. Any theory that attempts to give ethics an empirical basis is vulnerable to having that basis ripped out.
It is possible I could wake up in a Matrix Pod, but my theory would say I should then remove myself from the machinery. Sam's says he should plug himself back in if doing so will cause him to forget the whole experience. That's the difference.
Oh, an "outside observer" eh? Someone who was completely separated from the observational context of the universe you believe yourself to inhabit? Nothing non-empirical there, no sir.
And you claim you're not mocking me.....
I am not talking about some timeless, nonphysical God. I am saying YOU, standing between me and Sam as we are plugged into the Matrix, could see that my goal of fitness is not being fulfilled and Sam's goal of consciousness's well-being IS being fulfilled.
To follow our goals, I should take the red pill and Sam should take the blue pill.
It seems to me that to the extent that your argument expressly depends on telos, it doesn't just weaken your argument -- it undermines it altogether.
You are pushing me closer to the fence on this one, but I'm not there yet. I believe I can recast the argument to remove any teleology.
The whole "telos" bit was added after I made the thread when BS brought it up (each of those divides is from a separate edit that was included as the thread progressed and I attempted to make my theory more coherent). When I first wrote the OP I did not even know what telos was. You are convincing me its inclusion may have been a mistake.
Right; something that is hypothesized to meet all the conditions of immorality is immoral. There's nothing for that.
Under your definition they would be fundamentally "evil." Your ethics would tell them they "should" die out as quickly as they can.
Hold on there, slick. I know I am supposed to be dropping my ethical claims, but this is such a calumny that I felt compelled to respond. Ethical creatures would recognize even these sadists as conscious and would therefore not kill them. And I most certainly don't assert any deontological principle that says "anyone who is unethical must kill themselves off and/or fail to reproduce."
I'm not saying you are saying they should kill themselves. I am saying that you are saying that they 'should' stop engaging in sadism, which is vital to their survival.
That actually seems like it might be your thing, what with your connection between reproduction and ethics -- a connection that could only arise incidentally on my position.
No, I would say they 'should' do what they need to do to survive as a species. They 'should not' stop engaging in an activity that is vital to their survival as a species.
I think the fate of those sadists under my sort of ethics would be something like imprisonment or exile; the isolation of all sadist-creatures in a place where they could do their thing without involving any non-sadistic creatures.
Except, they--themselves--are "conscious creatures;" so--unless I misunderstand what you've been saying--allowing them to perform their sadism on each other violates your ethics as much as them performing it on any other "conscious creature."
(Again, for the record, I'm dropping my own foundational ethical claims and any defense of Harris and moving back towards the thread topic, so if I skipped something that's probably why.)
I am trying to derive the term, not trying to define it at the start.
...You can't derive a definition. Okay, here's what I think you mean, being charitable: you are going to start from some definition of ethics that you consider to be "neutral" and attempt to derive some properties from it.
That's a fine way to argue. So: Socratic method. Let's do it. What's ethics, on your position? (To spare us from having to go back and forth a dozen times, you may want to define any subsidiary terms as well. For instance, if you define ethics as "good behavior" I'm just going to ask you to define "good" so you might as well just do it up front.)
It depends on the nature of the discussion. But, I think my posts around the debate forum would be good examples of how I would conduct myself in different debates, depending on what's being asked and how it's being asked.
How could you possibly miss the point so badly? The point is: what will you do when confronted with a misunderstanding of the very kind that you possess? (We'll soon find out if we carry out the Socratic method above.) If you think that the contextually-previous exchange in this line is a legitimate way of responding to arguments (it's not!) then a sufficiently clever or stubborn opponent will be able to stymie all your arguments using the same nonsense.
If you assume -> Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions. Then, yes, you might as well give up at that point.
So, if I assume the workings of basic logic, I might as well give up in every debate that utilizes definitions? You're not seeing a problem here?
I am trying to derive the term, not trying to define it at the start. I am using an argument and reasoning chain to arrive at what I mean by "ethics." I am not handing out some definition that must either be agreed on or rejected right at the start.
Again, can't be done. Whatever your argument or derivation is, there is no compulsion on anyone's part to agree to call the thing on the bottom line "ethics."
But, I am getting very tired of you treating me like some kind of "debate criminal" every time we interact. Can't we have a conversation without you accusing me of a "bad-faith debate?"
Only if you don't do it. And while we're venting, I have a similar rant: you seem to be extraordinarily resistant to any attempt to educate you. I may not know more than anyone else about ethics, and I don't blame you for calling me out on any mistakes I make there, which are, I'm sure, very likely. But when it comes to basic logic, I know what I'm talking about. If you don't believe you have something to learn from me on that score, then you're mistaken. You continue to flatly contradict me every time I tell you what you're doing wrong, even after you've independently sourced my claims from another professional. I have no idea how to respond to someone who just sticks "not" in front of everything I say and repeats it back to me. It's ridiculous.
Read what I'm saying, understand it, multi-source it if you don't trust me, but once you've done so, straighten yourself out -- don't just keep doing the same thing over and over. If it chafes you to have to back down on some faulty piece of logic because you feel you'd be losing a debate, then take some time, use the various online and offline resources at your disposal, and evaluate it in a peaceful non-debate context. If we can't get logic right, there's just absolutely no chance of making any progress on ethics.
Biological evolution exists ontologically prior to ethics, social evolution exists in tandem with ethics.
Okay, before, I wrote that murder isn't "bad" because evolution selects against it; rather, evolution selects against it because it's "bad." Murder had to be bad before evolution could identify it as such; the measurements relevant to the evolutionary process are done against a background that already exists. Do you accept this? If so, then there is a clear direction of ontological priority between social evolution and ethics -- ethics is prior.
A person in an insane asylum might think he is performing empirical studies, and might not be able to tell that he is not; however, his conviction doesn't make it true. His inability to distinguish between imagination and reality does not mean that there isn't a difference. Or, are you claiming this isn't true?
Well, in a sense it depends on what's wrong with his mind. If his mind literally was an experience machine, producing a reality that nobody outside could impinge on in any way, then there is obviously no empirical way for him to know that he is insane. Any empirical theories he develops will be based on his being "fooled," so he could never develop one that concludes he's not being "fooled." I don't think that's really possible for a human, but if it is, then I would say that there is an almost-literal incarnation of a Cartesian demon at work, and of course those are irresolvable problems in philosophy.
(Or to be more glib about it: prove you're not insane.)
Sam's theory cares only about the well-being of the mind. So the person in the insane asylum, as long as their consciousness is experiencing the sensation of well-being, is fulfilling that requirement.
Harris is off the table till we figure out what's going on with your theory.
It is possible I could wake up in a Matrix Pod, but my theory would say I should then remove myself from the machinery.
Okay. You're in one right now. What steps do you take on a daily basis to remove yourself?
Sam's says he should plug himself back in if doing so will cause him to forget the whole experience. That's the difference.
Does it? I think forcible removal of memories might trigger some red flags on Harris' account. But no more defending Harris until we get you straightened out.
I am not talking about some timeless, nonphysical God. I am saying YOU, standing between me and Sam as we are plugged into the Matrix, could see that my goal of fitness is not being fulfilled and Sam's goal of consciousness's well-being IS being fulfilled.
I'm a nonphysical being with respect to you. In your Matrix pod, there's no way for you to interact with me. I might as well be God. Your evaluation of ethics depends on you being able to "reach outside" of your matrix pod and talk to me for an answer. This is non-empirical and I can't believe that you are denying it. What is going on here?
To follow our goals, I should take the red pill and Sam should take the blue pill.
The red-vs-blue pill scenario depends on there being a creature able to transcend realities that can offer you the ability to transcend realities. Highly non-empirical. What is going on here?
You are pushing me closer to the fence on this one, but I'm not there yet. I believe I can recast the argument to remove any teleology.
I highly recommend that you do so.
Gravity is indirectly involved in the sharpness of the rock. Evolution is directly involved in the fitness of a species.
First, what are the criteria for whether something is "directly" or "indirectly" involved in something? Second, evolution pushes things toward greater fitness, but it doesn't make the fitness what it is. What would you say if I told you that a thermometer was directly involved in producing the temperature it's measuring?
Only if you don't do it. And while we're venting, I have a similar rant: you seem to be extraordinarily resistant to any attempt to educate you. I may not know more than anyone else about ethics, and I don't blame you for calling me out on any mistakes I make there, which are, I'm sure, very likely. But when it comes to basic logic, I know what I'm talking about. If you don't believe you have something to learn from me on that score, then you're mistaken. You continue to flatly contradict me every time I tell you what you're doing wrong, even after you've independently sourced my claims from another professional. I have no idea how to respond to someone who just sticks "not" in front of everything I say and repeats it back to me. It's ridiculous.
If you think that's what I'm doing, I believe we are done here. I did extra legwork to read about the definition of "definition" and attempted to understand where I went wrong. I linked you the article, asked for your opinion, and adjusted what I was saying accordingly. I was reading the exact definition of "teleology" and thinking about how best to remove it from my argument as well. But, I guess you don't really care about that, because my apparent obstinacy offends your sensibilities.
But, if acknowledging your superiority (which I've already done a few times on this and other threads) isn't enough for you, and if you feel that me daring to continue to converse so that I might better see your point is just me being "ridiculous," then I don't think dragging this out will be worth either of our time.
If you're committed to talking down to me and acting as if I am incapable of learning, then I really don't wish to continue this discussion. This kind of snarky high-horse attitude--even in the face of submission--is exactly the reason I left cutting-edge academia.
......
Sigh........ *deep breath*
......Ok, I needed to get that out of my system.....
Let me try one more time, because--while I might be dumb as rocks--there are still two points of mine you seem to be misunderstanding, and I hate misunderstandings.
1) When I say "ethics," I am talking about what we "should" or "shouldn't" do. I am not attempting to define the term "ethics" starting with a physical basis; I am attempting to say what we "should" or "should not" do starting with a physical basis. I am attempting to answer the question posed by ethics, "What should we do?", and arrive at an answer using physical principles.
I don't know if you care or not--because you keep saying you're abandoning Sam's argument and then coming back to it--but Sam is saying we should be presupposing that the answer to the question "what should we do?" is "care about the well-being of conscious creatures." I would rather not presuppose that.
2) I can't prove I'm not insane. The point of the "experience machine" example is that if all you care about is the well-being of your mind then it fulfills your requirements on all levels. You have no reason to want to leave it--assuming you could--because it can make you feel the sensation of well-being, which is your ultimate goal.
However, the machine cannot 'really' increase your fitness, which is my goal.
That is moot--of course--if you can't leave. But, if you assumed you could, Sam's theory would say you "shouldn't," while mine says you "should."
__________________
Now--if you don't mind--I have a lot of being unemployed I need to get back to. Because real life isn't harrowing enough for me... I need to go online to be reminded of my mental shortcomings.
If you think that's what I'm doing, I believe we are done here. I did extra legwork to read about the definition of "definition" and attempted to understand where I went wrong. I linked you the article, asked for your opinion, and adjusted what I was saying accordingly. I was reading the exact definition of "teleology" and thinking about how best to remove it from my argument as well. But, I guess you don't really care about that, because my apparent obstinacy offends your sensibilities.
It's up to you whether you want to end things for your own reasons, and I get that. I'm happy to continue only if I am getting traction on foundational issues.
But, if acknowledging your superiority (which I've already done a few times on this and other threads) isn't enough for you,
How could you have gotten me so wrong? I am not looking for acknowledgement of superiority. You could have left that out altogether for all the difference it makes. I skip over it when reading. I am looking for traction on issues, or evidence that I am having some effect.
If you're committed to talking down to me and acting as if I am incapable of learning, then I really don't wish to continue this discussion. This kind of snarky high-horse attitude--even in the face of submission--is exactly the reason I left cutting-edge academia.
Well, I can't help that. Oftentimes academics are guilty of expecting people to learn, and too quickly for their own good at that. However, if I may be so bold, I think your mistake might be in the emphasis you place on submission, which is wholly irrelevant. Evincing submission is different from evincing an attempt to learn.
1) When I say "ethics" I am talking about what we "should" or "shouldn't" do. I am not attempting to define the term "ethics" starting from a physical basis; I am attempting to say what we "should" or "should not" do starting from a physical basis. I am attempting to answer the question posed by ethics--"What should we do?"--and arrive at an answer using physical principles.
Okay, so ethics on your position is "that which we should do." What's "should?"
Hints: Dictionary says "must" or "ought." What's "ought?" Dictionary says "expression of duty, moral obligation, justice, moral rightness, propriety." I won't belabor you with more details but suffice it to say that all of these things come back around to "ethics" in the end. (This is the circular-definition thing.)
I don't know if you care or not--because you keep saying you're abandoning Sam's argument and then coming back to it--but Sam is saying we should be presupposing that the answer to the question "what should we do?" is "care about the well-being of conscious creatures." I would rather not presuppose that.
You can't make arguments without presupposing things. (Or rather, any argument you did make under such conditions would be a tautology.) You will see, as we unwind the Socratic method, that you are in fact presupposing things. You may not like what Harris is presupposing, but how is that a basis for any sort of objection? You will find that if you allow that to be a valid response to argumentation, then it can be applied to yours as well.
2) I can't prove I'm not insane. The point of the "experience machine" example is that if all you care about is the well-being of your mind then it fulfills your requirements on all levels. You have no reason to want to leave it--assuming you could--because it can make you feel the sensation of well-being, which is your ultimate goal.
However, the machine cannot 'really' increase your fitness, which is my goal.
No, the point of the experience machine is that it can produce any conceivable experience. So if your theories are ultimately grounded in empirics, it can validate any conceivable condition of your theory and therefore give you exactly what you say you want. If your goal is something the machine cannot give you, then your goal is not empirical!
I feel that you are not answering this argument. You are just repeating what you said in the first place and not taking into account any answer I've given.
That is moot--of course--if you can't leave. But, if you assumed you could, Sam's theory would say you "shouldn't," while mine says you "should."
Name the empirical state your theory evaluates as morally good. Once that state is named, call it X, program the experience machine to give you (and everyone else) exactly X. On what evaluative basis do you refuse to take exactly X when it's offered to you?
You introduce a "real" versus "not actually real" symmetry-breaking to underwrite this. But this is necessarily non-empirical, since you can't by any empirical measurement determine which of these states you are in. Furthermore, if the symmetry-break is open to you, it is open to Sam Harris as well; he can say he only wants "real" happiness, and that the mental states induced by the experience machine are not "real."
Again, all of these arguments I have stated previously. I think you must answer my objections specifically before you can continue to assert the original claims.
Okay, so ethics on your position is "that which we should do." What's "should?"
Hints: Dictionary says "must" or "ought." What's "ought?" Dictionary says "expression of duty, moral obligation, justice, moral rightness, propriety." I won't belabor you with more details but suffice it to say that all of these things come back around to "ethics" in the end. (This is the circular-definition thing.)
Right. You're right. It was foolish of me to think I could turn an "is" into an "ought." I was attempting to give evolution some kind of motive, which is a teleology and incorrect.
I see that now.
You can't make arguments without presupposing things. (Or rather, any argument you did make under such conditions would be a tautology.) You will see, as we unwind the Socratic method, that you are in fact presupposing things. You may not like what Harris is presupposing, but how is that a basis for any sort of objection? You will find that if you allow that to be a valid response to argumentation, then it can be applied to yours as well.
I am aware that you need to presuppose things. If you go back over our dissection you will see me say time and time again that things must be presupposed. I was trying to do it further back in the reasoning chain than Sam, because he was presupposing the answer to what I think is the key question. I "don't like" his method because it answers what--to me--is the only question worth answering in the discussion with an ad hoc assumption.
"What should we do?"
No, the point of the experience machine is that it can produce any conceivable experience. So if your theories are ultimately grounded in empirics, it can validate any conceivable condition of your theory and therefore give you exactly what you say you want. If your goal is something the machine cannot give you, then your goal is not empirical!
I feel that you are not answering this argument. You are just repeating what you said in the first place and not taking into account any answer I've given.
I believe we are talking about different "experience machine" thought experiments. However--despite this--I think I am beginning to understand what you're saying. In yours, the person isn't aware that the experience machine isn't "real life"?
You introduce a "real" versus "not actually real" symmetry-breaking to underwrite this. But this is necessarily non-empirical, since you can't by any empirical measurement determine which of these states you are in. Furthermore, if the symmetry-break is open to you, it is open to Sam Harris as well; he can say he only wants "real" happiness, and that the mental states induced by the experience machine are not "real."
I was assuming that since the machine dealt only with the mind, the well-being Harris got from the machine was the same as "real" well-being, but my fitness wasn't the same as "real" fitness.
This is because the mental state of well-being would be equivalent in both, but the physical quality of fitness wouldn't be.
I guess I am still being pig-headed on this one because, while I understand the subject in the machine might not know the difference, there still is a difference. As far as my inferior intellect can deduce, anyway.
Right. You're right. It was foolish of me to think I could turn an "is" into an "ought." I was attempting to give evolution some kind of motive, which is a teleology and incorrect.
I see that now.
Well, let's not be too hasty. Keep following along here, I think this line of thinking will ultimately be constructive.
That being said, you've basically sized up the current situation. In fact, as of where we are right now in this little Socratic diversion, it's even worse than that. You can't get an "ought" from an anything -- our theory is totally vacuous and gives us no way of making ethical assertions because it doesn't define ethics in terms of anything but itself. We're in the swamp. However, I think it's premature (or at least trivializing) to just accept that.
There may be many ways out of the swamp, but they all take the same form: Here is where we need to make a presupposition or assumption in order to get things moving. Is there any particular one you would have us make?
I am aware that you need to presuppose things. If you go back over our dissection you will see me say time and time again that things must be presupposed. I was trying to do it further back in the reasoning chain than Sam, because he was presupposing the answer to what I think is the key question. I "don't like" his method because it answers what--to me--is the only question worth answering in the discussion with an ad hoc assumption.
"What should we do?"
The only way to answer any question is (ultimately) with an ad hoc assumption. How do we manipulate numbers? Well, our choice of the field axioms isn't (a priori) any less ad hoc than what's going on here. It's only a question of where our choices lead. We "like" the field axioms because when utilized, they solve useful problems and/or produce novel consequences. We may later on find that we "liked" those axioms too much, and some incompatible axioms exist which also give useful answers or novel consequences.
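For concreteness, here is a sampling of the field axioms being alluded to--statements we presuppose rather than derive, and "like" only because of where they lead:

```latex
% A sampling of the field axioms: presupposed, not derived.
\begin{align*}
  (a+b)+c &= a+(b+c) & a+b &= b+a & a+0 &= a & a+(-a) &= 0 \\
  (ab)c   &= a(bc)   & ab  &= ba  & a \cdot 1 &= a & a\,a^{-1} &= 1 \ (a \neq 0) \\
  a(b+c)  &= ab + ac
\end{align*}
```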
I believe we are talking about different "experience machine" thought experiments.
I have read that link. Actually, when I was in undergrad, I read the original paper by Nozick as well as some of the responses from his interlocutors.
I am using the same premise as Nozick -- namely, that there's a machine that can produce any conceivable experience. However, Nozick (says I and some of his respondents) stops short in enumerating the conclusions that follow from the existence of such a thing. He checks his swing. In fairness to him, he's only going after a particularly brain-dead variety of hedonism, but the exact hypothesis he's using can underwrite a much more devastating conclusion that he simply elects not to mention.
However--despite this--I think I am beginning to understand what you're saying. In yours, the person isn't aware that the experience machine isn't "real life"?
He may or may not be aware that it's an experience machine (I'm happy to suppose that he is if that's what you want in this particular case, and I think Nozick intends on awareness, so let's say he is aware) -- but insofar as his wishes are empirical in nature, what he's after can be provided by the machine. If he has an additional wish that his experiences have some property called "real" then that wish is non-empirical.
Put it this way: you could be in an experience machine right now. Does your worry about the possible "unrealness" of everything you know and do color your every ethical decision? I should hope not. Why not? If you can answer that then you get what I'm saying.
I was assuming that since the machine dealt only with the mind, the well-being Harris got from the machine was the same as "real" well-being, but my fitness wasn't the same as "real" fitness.
I don't buy this assumption. It's the word "real" that kills it for me. Re fitness, why are all of the babies you have in the experience machine not "real" offspring? Again, you could be in an experience machine right now. I don't think if you have children you are constantly wondering whether or not they're real in any evaluative sense. I don't think if Morpheus came and offered you the red pill, you'd love any children you had any less. In fact, psychology may indicate the opposite -- there appears to be quite some question in experimental moral philosophy concerning whether you would take the red pill at all under such conditions!
I guess I am still being pig-headed on this one because, while I understand the subject in the machine might not know the difference, there still is a difference.
It's not necessarily that you don't know the difference (although each such belief must stand up to scrutiny under the possibility that you might not) -- it's that if "real" versus "not-real" is something you care about then that's a non-empirical thing you care about.
I've been doing a lot of thinking about the is/ought problem lately, and this is what I've arrived at. Maybe it will help.
Moral statements, e.g., "Destroying statues is wrong," are not facts and have no truth value. They seem to have truth values only insofar as the listener is able to infer an implicit value referent from them.
But this leaves room for ambiguity. (Ambiguity is the wind that sustains the heat and duration of various philosophical conversations. Philosophical conversations are supposed to be resolved. If they don't resolve, something may be very wrong.)
When subject to value explication, these statements become facts (that can be true or false). Not only that, but they become rather mundane facts, e.g., "It does not serve the interests of those who value utmost the preservation of all artistic and historical works to destroy statues."
Philosophical conversations reduce to science + semantics unless they're polluted by ambiguity, or deontology, or incoherence.
If you have certain interests, e.g., the preservation of mankind, the conquest of the world by salamanders, the diversity of species, the monolithic unity of all life on the planet, the extinction of all life on the planet, truth and honesty at the expense of all else, satisfaction at the expense of all else, etc., and you wish for others to share those interests, you are free to campaign. It turns out that one of the most effective interest-campaign strategies is to lie to people by convincing them that the interest is not a mere interest, but a magical moral object pervading the world, or a vase on God's coffee table. This is why the false concept of "intrinsic human rights" is so ubiquitously considered true, and sacrosanct.
I've been doing a lot of thinking about the is/ought problem lately, and this is what I've arrived at. Maybe it will help.
Moral statements, e.g., "Destroying statues is wrong," are not facts and have no truth value. They seem to have truth values only insofar as the listener is able to infer an implicit value referent from them.
To name one common objection: ethics appears to transform sanely under propositional logic in the way you'd expect it to (e.g. "if you ought not do X to Y, and a Z is a Y, then you ought not do X to Z") so ethical statements appear to reside well within the domain of propositional logic. But the domain of propositional logic is propositions. Such a statement would be incoherent in the semantics of propositional logic if its atoms couldn't be assigned truth values, and yet it appears perfectly coherent.
If you have certain interests, e.g., the preservation of mankind, the conquest of the world by salamanders, the diversity of species, the monolithic unity of all life on the planet, the extinction of all life on the planet, truth and honesty at the expense of all else, satisfaction at the expense of all else, etc., and you wish for others to share those interests, you are free to campaign. It turns out that one of the most effective interest-campaign strategies is to lie to people by convincing them that the interest is not a mere interest, but a magical moral object pervading the world, or a vase on God's coffee table. This is why the false concept of "intrinsic human rights" is so ubiquitously considered true, and sacrosanct.
I think this is an illicit accusation of rhetorical intent -- as if every ethicist had as his actual goal to capture the audience or get votes. Remove the imputation of ulterior motive from this argument and it becomes empty.
To name one common objection: ethics appears to transform sanely under propositional logic in the way you'd expect it to (e.g. "if you ought not do X to Y, and a Z is a Y, then you ought not do X to Z") so ethical statements appear to reside well within the domain of propositional logic. But the domain of propositional logic is propositions. Such a statement would be incoherent in the semantics of propositional logic if its atoms couldn't be assigned truth values, and yet it appears perfectly coherent.
Those statements are incoherent if you deliberately refuse to impute into them any inferred value referent. "Ought" has no meaning except in terms of a goal or interest.
Consider the statement, "Mushroom pizza is tasty." Does that statement have a truth value? No, it doesn't, unless you do it the favor of filling in its missing preference referent, probably by inference. An inferred appendage like, "to me," or "to you," or "to some," or "to most." Things aren't just "tasty" with no tastiness-judges.
Consider the statement, "X + 5 = 2." Does that have a truth value? No, it does not. In order to assign it a truth value, I need to make a referent appeal. And if that appeal returns, "X = 1 + Y," then I haven't resolved anything, and must make still more appeals.
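The dangling-variable analogy can be made concrete in code (a minimal sketch of my own; the names are illustrative): an open formula has no truth value until its free variable is bound, and an appeal that returns another variable merely defers the question.

```python
# An open formula like "X + 5 == 2" is neither true nor false on its own;
# it only acquires a truth value once X is supplied a referent.
open_formula = lambda X: X + 5 == 2

# An unresolved appeal: "X = 1 + Y" merely defers the referent to Y.
deferred = lambda Y: open_formula(1 + Y)

print(open_formula(-3))  # True: the binding X = -3 closes the formula
print(open_formula(0))   # False
print(deferred(-4))      # True: X = 1 + (-4) = -3
```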
Now, this doesn't mean that there are no objective components to morality about which we can speak. Once you are given a value referent, you can talk about the mechanics of optimizing it. Non-cognitivism doesn't mean morality exists purely as taste declarations without any world-ties. In the link you provided, they seemed stumped by the following:
'She does not realize that eating meat is wrong.' ... 'Attempts to translate these sentences in an emotivist framework seem to fail (e.g. "She does not realize, 'Boo on eating meat!'")'
The person pondering this "seeming" failure is confused by the fact that in that sentence, not only is a value referent missing, but it's also the kind of moral sentence we typically use when talking about working against your own interests (which is an objective action). The translation into coherent-speak is something like this:
"She does not realize that eating meat is counterproductive in terms of what she values."
I think this is an illicit accusation of rhetorical intent -- as if every ethicist had as his actual goal to capture the audience or get votes. Remove the imputation of ulterior motive from this argument and it becomes empty.
I don't mean to imply that this lie is deliberate. "We have libertarian free will," "moral realism is true," etc. are lies that are nearly ubiquitously and sincerely held as true. They are lies, but that doesn't mean their proliferators are intentionally lying, or malicious about it, or what have you.
When I talk about ethical campaigning, I mean things like this:
A conservation group producing a TV special to highlight the wonder of creatures in nature.
A preacher telling his congregation that homosexuality is an abomination.
An anti-abortion protest.
But I have a broad view of ethics, such that I'd even consider this ethical campaigning:
Kellogg's, through an advertisement, telling you that you ought to buy their cereal.
Now, to diminish my earlier claim a bit, most of these appeal to shared values and are just trying to convince you of supposed facts about how the world works. For example:
"We both agree that being heart-healthy is good, because that helps you stay alive, and staying alive is desirable. Kellogg's cereal has vitamins in it that will make you maximally heart-healthy."
"We both agree that conforming to God's opinions is absolutely essential. Getting mad at homosexuals is by far the best way to do that. Also, sacrificing 10 chickens every other Thursday."
"We both agree that maintaining the stability of our natural world is great. Animal conservation is necessary for accomplishing that, because step on the wrong Amazonian beetle, and our entire food chain could collapse."
Those statements are incoherent if you deliberately refuse to impute into them any inferred value referent. "Ought" has no meaning except in terms of a goal or interest.
Consider the statement, "Mushroom pizza is tasty." Does that statement have a truth value? No, it doesn't, unless you do it the favor of filling in its missing preference referent, probably by inference. An inferred appendage like, "to me," or "to you," or "to some," or "to most." Things aren't just "tasty" with no tastiness-judges.
Consider the statement, "X + 5 = 2." Does that have a truth value? No, it does not. In order to assign it a truth value, I need to make a referent appeal. And if that appeal returns, "X = 1 + Y," then I haven't resolved anything, and must make still more appeals.
You're playing soccer on the football field. I agree with your statements about variable quantification. You've defined "ought" as a ternary predicate, and it's of course a basic tenet of first-order predicate logic that a sentence that fails to provide sufficient inputs to match the arity of its predicates is non-grammatical. So if you read out a sentence from a binary account of "ought" literally and interpret the words in the context of a ternary "ought," you get nonsense.
However, a charge of incoherence against an opposing position can only be met by the lights of the purportedly-incoherent position (or some agreed-upon background material). While you've defined "ought" as ternary, as you are free to do, you may not charge someone defining "ought" as binary with incoherence merely because he defines it otherwise. Rather, you must show that his own definition forces upon him logical errors or renders his claims unintelligible against a background you can both agree on.
This is what I see as the force of the original objection; there is nothing evinced by binary "ought" claims that outrages logic or would forbid them from having a truth value -- in fact, they seem to participate in the same "logical dance" that other truth claims do, and only an incurious person would dismiss that as a complete coincidence. Your reply here doesn't appear to contradict that objection by asserting anything like "binary ought leads to internal, irreparable unintelligibilities" -- rather, it appears to be saying "if you adopt my stance, which defines ought as ternary, you can't continue to use ought as if it were binary and still have your sentences make sense." While I agree with that statement, as it is obvious, it is not exactly what I'd call an answer to the objection.
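The arity point can be sketched in code (my own illustration, not anything from either position; the toy world model and names are invented): under a ternary definition, an ought-claim that omits its goal argument is ill-formed rather than false.

```python
# Hypothetical sketch of the arity argument. Under a ternary reading,
# "ought" relates an agent, an action, and a goal; the claim is only
# evaluable when all three slots are filled.
def ought(agent, action, goal):
    # Toy world model: an action is obligatory iff it furthers the goal.
    furthers = {("save", "preserve life"), ("donate", "reduce suffering")}
    return (action, goal) in furthers

print(ought("Alice", "save", "preserve life"))  # True: all three slots filled

# Reading a binary sentence ("Alice ought to save") against the ternary
# definition leaves a slot unfilled -- ungrammatical, not false.
try:
    ought("Alice", "save")
except TypeError as err:
    print("ill-formed under the ternary account:", err)
```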
On the matter of mushroom pizza being tasty, this appears to be an instance of impedance mismatch between colloquial and precise use of language. Colloquially, I read "mushroom pizza is tasty" as "I like the taste of mushroom pizza," which is most certainly a truth claim. In the context of precise language, I'd be inclined to agree that it is ambiguous -- however, it certainly isn't necessarily incapable of holding a truth value. Someone that wished to articulate an objective or realistic theory of taste (grounded in neuroscience, say) might well be able to articulate a coherent one-place "tasty" predicate that does not require a referent. Something like "Lights up this taste sub-center of the brain without fail," for instance.
Non-cognitivism doesn't mean morality exists purely as taste declarations without any world-ties. In the link you provided, they seemed stumped by the following:
'She does not realize that eating meat is wrong.' ... 'Attempts to translate these sentences in an emotivist framework seem to fail (e.g. "She does not realize, 'Boo on eating meat!'")'
I believe this objection is directed at a different variety of non-cognitivism which defines ethical statements to be nothing more than expressions of emotional revulsion. I agree that it is not a very good objection to your variant.
I don't mean to imply that this lie is deliberate. "We have libertarian free will," "moral realism is true," etc. are lies that are nearly ubiquitously and sincerely held as true. They are lies, but that doesn't mean their proliferators are intentionally lying, or malicious about it, or what have you.
It seems to me that in order for an accusation of lying to stick to someone at all, it must have been the case that they were making a truth claim in the first place. A moral realist, insofar as his ontology is grounded in axioms of the form "you ought to do (insert something morally realistic)," can't possibly be lying (at least not about the entirety of his position) so long as it is being denied that binary oughts can constitute truth claims. I think you might say that he's rambling or babbling, if you believe his statements to be fundamentally incoherent.
Now, to diminish my earlier claim a bit, most of these appeal to shared values and are just trying to convince you of supposed facts about how the world works. For example:
I am certainly prepared to acknowledge that a great deal of means-end reasoning can ultimately be grounded in fact. However, I would deny that ethics is entirely reducible to this brand of means-end, ternary-ought reasoning. For instance, by way of an appeal to descriptive linguistics, people are wont to classify questions of whether particular ends justify particular means as ethical questions. If such questions are intelligible and could potentially have negative answers, that would mean that there would be some true ternary-ought sentences of the form "X ought to Y if he wants Z" that are also "unethical" (albeit in some possibly-different evaluative context). And that, to me, does seem like fertile ground for internal incoherence.
Rather, you must show that his own definition forces upon him logical errors or renders his claims unintelligible against a background you can both agree on.
My approach has basically been this: Treat "ought" as I'm treating it -- a thing that demands a referent, analogous to an equation with a dangling variable. When you do, the is/ought problem is completely and elegantly explained and is no longer a confounding mystery. Furthermore, various non-realist moral theories can be elegantly reduced to it, and various realist moral theories and their confusing mysticism can be elegantly explained as failures to ascertain it.
Nobody has to agree with me. In fact, when I explain that this is the root of the issue, I sometimes get "that's not what morality is" responses. This is because morality is laden with folk baggage. While I'd like morality to mean only "right decisionmaking," it actually means, to most,
Right decisionmaking, plus
a weight of significance, plus
a leaning toward the interests of "others" or "society," plus
an array of nonsensical or counterproductive rules, which remain around due to memetic momentum.
On the matter of mushroom pizza being tasty, this appears to be an instance of impedance mismatch between colloquial and precise use of language. Colloquially, I read "mushroom pizza is tasty" as "I like the taste of mushroom pizza," which is most certainly a truth claim. In the context of precise language, I'd be inclined to agree that it is ambiguous -- however, it certainly isn't necessarily incapable of holding a truth value. Someone that wished to articulate an objective or realistic theory of taste (grounded in neuroscience, say) might well be able to articulate a coherent one-place "tasty" predicate that does not require a referent. Something like "Lights up this taste sub-center of the brain without fail," for instance.
Similarly, one can define "moral rightness" as something like "that which conforms to the Slab of Dictates." The problem is the same, here: imprecision. You can solve the problem by explication (making moral statements mundane mechanical facts, e.g., "Doing X optimizes value Y") or by universal adoption of a definition of "tasty" or "ought" that is commonly understood to be loaded with a mundane, fact-based referent. Either way, the solution is complete information-transmission such that the statement "comes in for a landing" as a mundane mechanical fact.
Here's a thought experiment I wrote a few weeks ago called "The Fuchsia Fez":
While visiting the market one day, you notice a married couple scoffing and pointing at a man across the plaza wearing a bright, fuchsia fez. The husband says to his wife, loudly with the intent that others hear him, "Atipo is wearing the fuchsia fez he borrowed from the shaman! What immorality!"
Which of these might describe the husband's view of morality, such that he would call what Atipo did immoral?
"According to the Slab of Dictates, it is forbidden to wear the sacred clothing of the shaman."
"It is gravely inconsiderate to ask to borrow anything from a shaman, since according to their religion, shamans must always give when asked."
"Traditional social convention considers it dishonorable to wear hats in the marketplace."
"A fuchsia fez is proper only for women like the shaman. Atipo is a man. Wearing women's clothing is abominable."
"A fuchsia fez might enrage the nearby woolly bull; wearing one recklessly puts lives and property at risk."
The husband's view of morality might be one or more of these. Furthermore, the husband's view of morality might reject one or more of these; the husband might passionately eschew the fetters of religion and tradition, for example, and thus reject the first four views, accepting only the fifth.
Many people overheard the husband (to the husband's delight)--32 people, in fact--each one adopting a different combination of the above views as those the husband might accept or reject.
The result is that, even though what the husband said was completely audible and seemingly understandable on the surface, the ambiguity of the word "immoral" catalyzed a grossly imperfect transmission of information.
It seems to me that in order for an accusation of lying to stick to someone at all, it must have been the case that they were making a truth claim in the first place.
The false claim would be, "There is an objective standard, independent of the preferences of preferencers, against which morality is wholly defined."
If such questions are intelligible and could potentially have negative answers, that would mean that there would be some true ternary-ought sentences of the form "X ought to Y if he wants Z" that are also "unethical"
They would be "unethical" only against !Z. A mundane explication would be, "X ought[Z] to do Y if he wants Z, but I find Z abhorrent, so X ought[!Z] not to do Y."
Or, in other words, if Y leads to Z, then "ought[Z] = Y" and "ought[!Z] = !Y." Once the [] referent in the "ought" is defined, it becomes a mundane mechanical question of whether Y leads to Z.
(I say "mundane" not to mean "actually boring or simple or easy," but to mean, "lacking the folk mystique of traditional ideas of morality.")
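The ought[Z] bracket notation can be rendered as an ordinary two-place function over a toy causal model (an illustrative sketch; the model and names are mine): once the referent is fixed, the question collapses into the mundane mechanical one of whether the action leads to the goal.

```python
# Toy causal model: which outcome each action leads to.
leads_to = {"do_Y": "Z", "refrain_Y": "not_Z"}

def ought(action, goal):
    """ought[goal](action): true iff the action brings about the goal."""
    return leads_to[action] == goal

# If Y leads to Z, then ought[Z] endorses Y and ought[!Z] endorses not-Y.
print(ought("do_Y", "Z"))           # True
print(ought("do_Y", "not_Z"))       # False
print(ought("refrain_Y", "not_Z"))  # True
```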
There may be many ways out of the swamp, but they all take the same form: Here is where we need to make a presupposition or assumption in order to get things moving. Is there any particular one you would have us make?
I believe I already stated I was presupposing Evolution had value.
I don't buy this assumption. It's the word "real" that kills it for me. Re fitness, why are all of the babies you have in the experience machine not "real" offspring? Again, you could be in an experience machine right now. I don't think if you have children you are constantly wondering whether or not they're real in any evaluative sense. I don't think if Morpheus came and offered you the red pill, you'd love any children you had any less. In fact, psychology may indicate the opposite -- there appears to be quite some question in experimental moral philosophy concerning whether you would take the red pill at all under such conditions!
There are two frames of references that statements about well-being and fitness can be evaluated in. One is within the sensory world set up by the experience machine, which I am colloquially calling "the Matrix." The other is outside of the experience machine, which I am colloquially calling "the real world."
In the Matrix frame of reference, both my goals and Sam's goals are being met.
In the real world frame of reference, only Sam's goals are being met.
Moral statements, e.g., "Destroying statues is wrong," are not facts and have no truth value. They seem to have truth values only insofar as the listener is able to infer an implicit value referent from them.
Not in a vacuum, right. But if you asserted earlier that destruction is evil, you can then evaluate that statement as true. "Should" and "shouldn't" need context to be evaluated. In the presence of goals, for example, they can be evaluated for truth... which I guess you state plainly in your next post:
Man I feel dumb. Is there some book I can read that'll help me understand what you two are talking about?
Try Essays on Moral Realism edited by Geoffrey Sayre-McCord for a survey of opinions, or R. M. Hare's The Language of Morals for an in-depth articulation of a variety of non-cognitivism similar to what stan seems to be espousing.
Right. And the point of this thread was to give scientific context to moral statements.
This can be done just by doing science. Many studies are about dissecting complicated correlative relationships in order to find out how the world works, especially the "brains" and "social structures" parts of the world. We use that information to make decisions that more effectively meet our goals.
The sticky wicket is introduced when someone proclaims that you can extract a referent-less ought ("ought[]") from surveys about what folks value. For instance, we can, like Sam Harris, proclaim that objective morality means that which optimizes the feelings of satisfaction enjoyed by people. But we need only lightly exercise our sci-fi brains to find many reasons why "satisfaction" cannot be "ultimate." For example, that optimized circumstance may result in the extinction of humanity. If we find that morally objectionable, we now have to proclaim that both satisfaction and persistence are important and must be optimized (even though they are frequently circumstantially incommensurable). But what about intense satisfaction disparities required by the optimal engine (for instance, perhaps enslaving a few hundred thousand people optimizes general satisfaction and persistence for humanity)? So, your value array grows to something like, "lots of satisfaction," "persistence of species," and "we must maintain a satisfaction minimum for everyone." But what if a device is invented that keeps humans immortal, asleep, and happy? Some might say that's morally fine, others might find that morally abominable.
I don't think these considerations are so absurd as to be meaningless. I think they demonstrate how arbitrary, at the end of the day, the proclamations of Harris-style "objective morality" truly are.
I have largely come to some of the same conclusions as extremestan (which I hope is a Taleb reference) in my musings on ethics. I was about to write a lengthy reply with a much more inelegant and inarticulate version of the fuchsia fez narrative, but extremestan beat me to the punch with his lucid commentary.
Still, I have a couple of comments that I think tie into the subject of the thread.
The false claim would be, "There is an objective standard, independent of the preferences of preferencers, against which morality is wholly defined."
This is what I think the argument boils down to. The truth value of an ought statement is dependent on a goal or a value referent, call it "Z." "One ought not borrow the shaman's fez" really means "One ought not borrow the shaman's fez [because Z]" or "One ought[Z] not borrow the shaman's fez." We have to assign a value to Z in order to determine the truth value of these statements. Without "Z", these are not well-formed propositions of binary logic.
Taylor's argument says "Z = maximizing evolutionary fitness." Crashing's and Harris's arguments imply "Z = maximizing the well-being of conscious creatures." Maybe someone else thinks "Z = helping salamanders dominate the world."
Why should I select Taylor's Z or Crashing's Z or any particular Z for that matter? In order for an objective "Z" to exist, it must not only be devoid of internal contradiction, but must provide a reason for rejecting all other possible Zs.
Why should I select Taylor's Z or Crashing's Z or any particular Z for that matter? In order for an objective "Z" to exist, it must not only be devoid of internal contradiction, but must provide a reason for rejecting all other possible Zs.
My argument was based on the idea that--whether or not we acknowledge it--social evolution happened and is happening. The "fitter" societies beat out less fit ones. The morals of human society have been evolving, and less beneficial behaviors die out over time. If a society has morals that make it more able to survive than other societies, given time it will expand or its morals will be adopted.
(It's essentially based on the argument Robert Wright makes at the end of The Evolution of God.)
What are we going for? Population size?
EDIT: By the above question, I'm implying that there's no innate "goodness" to "things that are good at surviving and spreading both survive and spread." Algae has us beat. So do ants. And man alive, do we suck compared to bacteria.
Anyway,
We are going for behaviors that increase the 'social fitness' of a society (the ability of that society to propagate its morals). Certain actions taken by individuals and/or groups can take away from the fitness of the society they find themselves in; those actions would be "bad." Actions that improve the fitness of a society would be considered "good."
I don't understand... are you saying you want mores to be more stagnant, or more volatile? And the follow-up to either would be, "Why?" Neither is necessarily preferable to the other. Plant mutations are relatively volatile and animal mutations are (typically) relatively stagnant, and both do their thing.
I'm not talking about mutations; I am talking about 'social fitness,' which I am defining as the ability of that society to propagate its morals/culture. The fitness of its cultural meme.
Morals that are beneficial for the society that holds them will cause that society to flourish, while morals that aren't will cause them to die out.
I'm not talking about mutations; I am talking about 'social fitness,' which I am defining as the ability of that society to propagate its morals/culture. The fitness of its cultural meme.
That's what I was talking about, too. I was talking about the stagnation vs. volatility of meme mutation. By "propagate its morals," do you mean, the capacity to maintain a conservative zeitgeist against what might be risky mutations, or do you mean a quickly-moving progressive zeitgeist, daring to try new things?
I agree that things that will happen will happen. That may be the most pathetic moral claim ever, though (in the sense that it is a non-moral redundancy).
As I said, I agree with Sam's conclusion. I agree that the worst possible misery for everyone is "bad;" however, it does not then follow that "ethics" must be presupposed definitionally as "the well-being of 'conscious creatures.'" It might well be that IS what ethics is, but I see no reason we can't find a reason for it. We don't need to just assume it to be true, or declare the definition as such. We can find the reason behind that fact.
It was the form of argumentum ad ignorantiam known as 'argument from personal incredulity': "I can't imagine P being false, therefore P is true."
Noted.
Which is one of the reasons you shouldn't try to define what ethics is at the start of a debate about what ethics is.
Have you ever agreed with a theist to define God as "omnibenevolent" and then tried to argue His actions aren't moral?
The right definitions can very much result in one party being right or wrong.
Right, I must understand your definition in order to understand your claims, I don't need to accept it as correct.
I understand what you and Sam are saying; pretty sure I do anyway.
In an argument they certainly can. If you agree that "measured IQ" is defined as "intelligence"--for example--then you've agreed that Whites are smarter than Blacks. If you were trying to argue that isn't true, you lose the argument as soon as you agree to that definition.
Getting your partner to agree to certain favorable definitions is a pretty standard debate tactic, Crashing00. One of the reasons I get very very suspicious when someone starts throwing statements around like "self-evident" as Sam does.
If the argument was that God isn't ethical, then it most certainly would beg the question. What if I told you one of the properties I am defining the Christian God to have is "exists"? As in, "He exists" is part of His definition. Would you tell me that definition can't be wrong?
I understand the claims you and Sam are making, pretty sure I do. I even agree with your conclusions. I just don't find your understanding of what ethics is "self-evidently true." But--to be fair--I find very little self-evidently true.
I think there is a reason why we feel ethics is what it is. That is what this thread is meant to explore.
I find the argument he makes in Chapter 1 that starts with "While I do not think anyone sincerely believes that this kind of moral skepticism makes sense, there is no shortage of people who will press this point with a ferocity that often passes for sincerity." to be incorrect. Consciousness necessarily assigns meaning to itself only because of the assumption that only consciousness can assign meaning.
Then I'm objectively wrong.
The experience machine can make me think wrong is right, but it can't make wrong right... unless you are assuming that whatever you think is right is right.
When consciousness is assigning value to itself first, then you don't have anything before consciousness grounded in reality. Or--to put it another way--all other considerations are secondary to the consciousness's happiness. However, I am putting something else BEFORE the consciousness's well-being as having more value than that well-being. While you can use your machine to trick me, all you will be doing is making me objectively wrong about something. But under Sam's setup that is not true, since all he cares about is that the consciousness's well-being is increasing.
The machine is truly fulfilling Sam's ultimate goal, but it's not truly fulfilling mine. That's the difference, and why Sam's argument has more trouble with the machine than mine does.
Why? What are you basing this assertion on?
The direction of gravity has nothing to do directly with the sharpness or dullness of the rock, while the direction of evolution has everything to do with the fitness of the species. As you well know.
Whether or not you want to accuse me of abusing words, evolution has a direction and we are a direct outcome of that.
...
If you don't want to debate about the definition of ethics, why are you in a thread about debating the definition of ethics?
I'm honestly beginning to doubt you understand what I'm saying at this point, not the other way around. K. Seems reasonable given the nature of the thread.
An unempathetic creature can certainly understand another's discomfort, but they have no natural reason to care about it.
A sadist--for example--certainly understands someone else's pain, but not as something they'd want to prevent.
You're confusing my argument with the one Sam talks about in his book dealing with human sociopaths.
But, I do agree if we accept your definition, they're just that.
That is indeed a good resource; I particularly commend your attention to the bulleted points where he distinguishes a definition from an axiom. I am going to take the liberty of quoting from that article in the sequel.
(I'm getting RSI from pushing command-v...)
I don't think even Sam Harris would object to an attempt to derive his views from something further anterior. The objections that you seem to be levying, however, are either of the "get back in the swamp" kind (above) or of the kind that equally well impale your own alternative (below).
There can be no debate about ethics in the absence of a shared understanding of what the word "ethics" means. Such a debate has no ground on which it can be fought. Ethics is ponies. Debate over, I win. There is no way for you to say I'm wrong; even bringing in your own definition is pointless because then you will just be talking about non-ponies while I rattle on about ponies.
Debates can only be had on the ground of shared definitions. The debate then becomes whether the thing named by the definition is coherent, extant, relevant, logical, et cetera.
You could say "the thing you have named ethics is incoherent" or "the thing you have named ethics is nonexistent" or "the thing you have named ethics is irrelevant."
In debates about God I accept without any quibble whatsoever the theist's preferred definition of God. If it then turns out that they claim God exists and contradictions result therefrom, I say that by reductio, God as they have defined him doesn't exist.
Definitions can't be debated, can't be proven right or wrong, and don't determine truth. Sorry!
Category mistake. Definitions can't be correct or incorrect.
If I define intelligence as measured IQ, then I can use command-F in my word processor and replace everything like "intelligent" or "smart" with "measured IQ." After doing that, we find that I am arguing about the measured IQ of whites and blacks. If it turns out that the measured IQ of whites is higher than blacks, then why would anyone interested in the truth about measured IQ ever deny that fact?
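The command-F point can be made literal in a few lines. This is a toy illustration of my own; the claim sentence and the `expand` helper are invented for the demonstration.

```python
# A definition is a substitution: swapping a defined term for its
# definition changes the wording of a claim, never its truth conditions.
definition = {"intelligence": "measured IQ"}

claim = "group A has higher intelligence than group B"

def expand(sentence, defs):
    """Mechanically replace each defined term with its definition."""
    for term, meaning in defs.items():
        sentence = sentence.replace(term, meaning)
    return sentence

expanded = expand(claim, definition)
print(expanded)  # "group A has higher measured IQ than group B"
# Whether the expanded claim is true is now an empirical question about
# measured IQ; the substitution itself settled nothing.
```

The substitution is purely syntactic, which is the point: agreeing to a definition commits you to a vocabulary, not to a verdict.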
If you mean it can be used to trick or ensnare incompetents, then yes. That is not how I am using it, and I don't think Harris is using it that way either. I take you for a peer, not a fool. I have no doubt that you are capable of reaching an understanding of basic epistemology, metaphysics, and the differences between them. I (usually) debate for dialectical purposes, not rhetorical ones. I am not interested in convincing you "at any cost" or fooling you.
You certainly should always be suspicious of the phrase "self-evident." That is not the problem.
You simply can't define God to be ethical and then argue that he isn't. When I disagree with a proponent of divine-command ethics, I am not saying that his definition is wrong or that God, by his lights, is actually unethical. I am saying that the consequences of his choice of worldview should give him pause. If he's really okay with the genocide of the Amalekites, then he's a moral monster unreachable by rational argument. The fact that his closed worldview is self-endorsing is not something that I can correct from outside. (Didn't we talk about this in the worldview thread?)
Ultimately, an appeal to a divine command theorist to change his view on ethics isn't (and can't be) an internal attack on his own axioms; rather, it's an appeal to his own ethical intuition and its collision therewith. It's almost an ad misericordiam argument. (Sometimes you can get them with their own axioms, though; for Christians, you can use that scripture that says "the law is written on your heart" or whatever.)
This is the basis of one of the original ontological arguments for God. It's a tricky question; so tricky that Goedel himself thought the argument was valid, and it has been reformulated in more modern times by Alvin Plantinga into a still much deeper argument about the axiomatization of modal logic. A very terse summary of my answer is that I would say that your definition is incoherent because existence is not a predicate. To paraphrase the Duke article, definitions are purely descriptive -- they can't call things into existence, they can only apply (or fail to apply) to things that already exist. (If you want a longer discussion of this issue you are going to have to make a new thread.)
Is it true that 1+1=2?
Right, and let's get back to that. I still haven't heard an argument from you on the basic point that makes sense.
Once again, the foundation or explanation for our ethics can't be evolution. It's just a description of a natural process, not an explanatory terminus of anything. And that's not because of any of the definitional faffing around that we've been doing -- it's because of the ontology of evolution itself which as far as I can tell, we both agree on. Even under your definitions and axioms, it still doesn't work.
Evolution selects against behaviors/traits because they are not conducive to reproduction. The behavior/trait has to already be not conducive to reproduction before evolution can "identify" it as such. An evolutionary explanation for ethics would still have to identify those behaviors not conducive to reproduction and explain why that is so -- and in that sense, adding evolution does nothing. You still have to answer the same question that other empirical ethicists have to answer, only you've just added some unnecessary junk in between, because now your theory is totally inapplicable to things that may not have evolved.
The response to this that you registered two posts ago was a kind of tu quoque: you'd only accept this if Sam Harris's answer was somehow better. Well, let's say you're right and it isn't -- so what? That doesn't absolve you from having to square your own theory with basic ontology.
That's not an assumption. What else can assign meaning besides consciousness? We were talking in the other thread and your response to the point that only conscious creatures can realize moral theories was something like "well, duh." Why "duh?" Because I think your "duh" and Harris's "duh" are the same "duh." Really, Harris is making a reductio here -- the state of affairs in which there are no conscious creatures leaves no room for evaluative judgments.
Not the point! Okay, both you and Harris are trying to articulate an empirical theory of morality. That's the appeal, right? No mumbo-jumbo, no mysticism, no Gods, no appeal to anything outside of what can be observed -- you can measure morality solely by examining the state of the universe.
Well, that means that whatever empirical combination of states you are assigning the word "good" to, the experience machine can give to you in its entirety. It's not that the experience machine is changing what you think is good -- it's that it's giving you a thing that you cannot distinguish from "good" exactly as you have defined it without the appeal to something non-empirical.
Is the thing you are putting before well-being empirical and grounded in observation and experience, or is it not? If it is, then the experience machine can by definition give it to you and you are just as vulnerable to the experience machine as anything else.
Then your ultimate goal cannot truly be empirical because the experience machine can duplicate everything empirical. So if you rely on this kind of symmetry-breaking, you're introducing non-empirical elements into your ethical theory. Now I don't know whether you care, but your theory certainly immediately loses its appeal to anybody looking for a better version of Harris.
Because unconscious matter doesn't have wishes, desires, inclinations, or anything else that could constitute teleological finality. And how do I know that? Well, a photon doesn't have enough internal states to encode a wish, desire, or inclination, nor does an electron, and so on and so forth. It's only when you get up to systems as complex as a nervous system (and I won't quibble about the boundary; just that it's a lot higher than brute matter) that you get the ability to encode the kind of information that you need to verify a teleology.
I beg your pardon? I am sure that the direction of gravity came into play many times in the collisions that formed our knife-rock's history and eventually gave it its knifelike shape.
(Dammit, just as my RSI was starting to subside. Go figure.)
I doubt that even you understand what you're saying, at least in its fullest implications.
Yes, but you are speaking as though sadism is a necessary state of affairs; that anyone lacking in empathy would perforce become a sadist. I surely don't deny that it is possible that an unempathetic creature (or even an empathetic creature) winds up being sadistic and therefore immoral. I simply claim that there's no reason it couldn't be the other way around. Blind painters are a thing.
And therein lies the problem with your argument.
If you were just whitewashing the word 'ethics' to mean nothing but "caring about conscious creatures" I would have no problem with it; I would just point out that it's not the 'ethics' I'm talking about. However--unless I am very much misreading you--you have chosen the word "ethics" because it also implies things like "stuff you should do." You ARE NOT simply specifying an object with your definition, you are calling forth other properties that word traditionally refers to and attempting to link it to "caring about conscious creatures."
THAT is what I'm objecting to.
If you want to simply label "caring about conscious creatures" why not make up a new, hitherto meaningless, word instead? Why pick one already so loaded with meaning?
I'm attempting to lead us out of the swamp with something more legitimate than "we should just leave the swamp," because I don't think that's a good enough reason. Or--at least--explain why we have an inclination to leave the swamp.
And it should be a neutral definition, not a definition that includes your conclusions.
I hope this part is meant to be ironic, since this is what you're trying to do with your loaded definition. You're mocking yourself, not me.
You should have kept reading: "This definition itself is expressed in words that require definition. Ultimately any given dictionary is circular - it defines words in terms of other words in the dictionary and cannot be understood unless you already understand those words."
You would 'command-F' the very premise of the debate and make it so you lost before you even started.
Not to sound obstructive, but--yes--yes you are for this debate. You might not know that you are, but you are.
This is a debate about how one arrives at ethics from other--for lack of a better term--more scientific principles. You are attempting to just skip to it (which would also end the debate entirely, since that's the end goal not the starting point for this).
And I appreciate that and have been extending the same courtesy.
Yes, and I'm going to go out on a limb and say you can also prove it's true.[1]
Did you read the OP?
But, now I'm wondering how you're even arguing with me if you don't think I'm making any claims.
As I outlined in the OP, the point would be to use ethics to aid social evolution. Social evolution is going to happen anyway; societies more fit (with 'better behaviors') than other societies have historically beaten out--and will continue to beat out--ones less fit. The point of my version of ethics is to simply acknowledge this fact and use it in guiding society's behavior, what society should or shouldn't do in that context.
I'm becoming more and more skeptical you even know what I'm arguing at this point. I feel like I should be asking for a recap of my position in your own words, like I gave for you.
Just because I can't think of any does not mean there aren't any (you're making an argumentum ad ignorantiam, and while I don't remember doing that on some other thread, maybe I did).
Anyway--even assuming this to be the case--it does not then follow that consciousness is required to assign value to itself first, or even at all. Sam--being a neuroscientist--wants to say consciousness is most important. I--being a physicist--want to say the physical is most important. That is what I'm choosing to assign value to first: reality. Because of my physical limitations, I am forced to perceive reality through my consciousness, but that does not mean IT is more important than reality.
And I wish I did not have to rely on such a ****ty filter.
Right, it can trick me into believing something untrue by presenting me with something counterfeit I am unable to distinguish from the empirical.
It can't change reality, only my perception of it.
I am just as vulnerable to the experience machine as Sam is--yes--but my theory isn't. However, I wouldn't know that, sure.
It would take an outside observer to see that Sam's goal is being fulfilled and mine isn't.
I'd like to rewrite my statements in ways that wouldn't imply teleology, and I probably should try harder.
I completely understand that such phrasing weakens my argument, which is why I put that first '*' in the OP.
As a byproduct, not a product.
Possible, but I am trying.
Certainly in the case of humans the more common path to flourishing leads away from sadism, but we're not talking about humans in this part. One could construct our "created creatures" such that sadism is the natural state for them. In fact, the only way they can survive and reproduce is by indulging their sadism. The only way they can be fit as a species is thus.
Under your definition they would be fundamentally "evil." Your ethics would tell them they "should" die out as quickly as they can.
Let's get back on track. As I said before, I am (attempting to) drop my affirmative claims about the basis for ethics. Nor am I going to spend any more time defending Harris. Harris could be totally wrong, but I promise you he is not as wrong as you are. This thread is about your claims, and I'm going to skip over everything that doesn't speak to your claims or underlying notions of logic that we need to deal with them.
Then object to yourself! Anyone trying to lay out a theory of ethics is eventually going to run into very big problems with other people's purported definitions. Most people think ethics constitutes "what God has set forth." They're going to reply to you the same way you're replying to me, and what the hell are you going to do about it then?
The one sense in which it is "legitimate" to "debate" definitions is under the auspices of descriptive linguistics: one can say that a term as defined is not what "most people mean" when they use the same combination of symbols. Well, most people mean "divine command ethics" when they say ethics. So Harris, yourself, me, and every moral realist ever might as well just pack it in.
Ditto "the behaviors that maximize fitness."
For as long as you believe definitions can include conclusions, your positions will be ultimately unintelligible to those operating under the auspices of basic logic and philosophy.
No, it's intended to get you to understand something about the ground on which we are met. Definitions can only be analyzed in terms of the arguments and conclusions that make use of them. The only reason that you think anyone is being mocked here is because you don't know what is being talked about.
Complete non-sequitur. He's right, of course -- definitions are ultimately circular. That's why it's a good thing that definitions don't constitute truth claims, arguments, proofs, premises, or assumptions.
That argument correctly proves using the Peano axioms that S0 + S0 = SS0. I'm not sure why I should believe that S0 = 1 or SS0 = 2. Can you show me why I should believe those?
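The Peano derivation of S0 + S0 = SS0 can be checked mechanically. Below is a toy encoding of my own (not anything either poster wrote): naturals are nested successor terms, and addition is defined by the two standard recursion equations.

```python
# Toy Peano arithmetic: naturals are Z, S(Z), S(S(Z)), ...
# Addition is defined by the two Peano recursion equations:
#   a + Z    = a
#   a + S(b) = S(a + b)

Z = ("Z",)

def S(n):
    """Successor constructor: wrap a term one level deeper."""
    return ("S", n)

def add(a, b):
    """Compute a + b by structural recursion on b."""
    if b == Z:
        return a            # a + 0 = a
    return S(add(a, b[1]))  # a + S(b') = S(a + b')

one = S(Z)
two = S(S(Z))

# S0 + S0 = SS0 follows purely from the recursion equations;
# whether "S0" deserves the name "1" is a separate, definitional question.
print(add(one, one) == two)  # True
```

Note what the sketch does and doesn't settle: the equation between terms is forced by the axioms, while attaching the numerals "1" and "2" to those terms is exactly the kind of definitional substitution discussed above.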
This argument is going to go the way of our last one if you keep engaging in bad-faith debate or straw manning me. How would a conscientious reader of my position ever think that it entails that you are not making any claims?
Okay, if I understood this to be your claim I don't know that I'd be debating, because it's circular on the face of it. Ethics aids evolution which is ontologically prior to ethics? I thought you were claiming to be able to derive normative principles only from evolution. Specifically, this:
This is the claim I am addressing. Do you still maintain this?
Again, I don't think either of us knows what you're arguing. There appears to be a deeper problem, which is that I don't think you can know what you're arguing, because of some confusion on basic philosophical concepts. However, that does not preclude the very real possibility that I don't know what you're arguing. To be clear, I thought (putting aside our sub-arguments) your general assertion here is what I just quoted. Am I wrong?
What do you mean, "untrue?" Scientific truth, to the extent it exists, is grounded in the empirical. Of course, taking basic skepticism into account, we never declare anything to be true with certainty. But the experience machine gets you as close to truth as you could ever possibly get. Again, core point: there is no empirical basis for asserting that the experience machine is "fooling" you. Although "fooling" is usually an empirical thing, the way you're using it here, it isn't. So if your notion of ethics relies on your ability to distinguish this kind of "fooling," it's non-empirical.
No, your theory is too. Any theory that attempts to give ethics an empirical basis is vulnerable to having that basis ripped out.
Oh, an "outside observer" eh? Someone who was completely separated from the observational context of the universe you believe yourself to inhabit? Nothing non-empirical there, no sir.
It seems to me that to the extent that your argument expressly depends on telos, it doesn't just weaken your argument -- it undermines it altogether.
Again, pardon? There is no such distinction in the universe-of-rocks. Everything in nature is a product. The idea of a byproduct only makes sense when there's something that's interested in particular products but disinterested in others, and can label the ones he's disinterested in as byproducts. You have to presuppose telos to get byproducts.
Right; something that is hypothesized to meet all the conditions of immorality is immoral. There's nothing for that.
Hold on there, slick. I know I am supposed to be dropping my ethical claims, but this is such a calumny that I felt compelled to respond. Ethical creatures would recognize even these sadists as conscious and would therefore not kill them. And I most certainly don't assert any deontological principle that says "anyone who is unethical must kill themselves off and/or fail to reproduce." That actually seems like it might be your thing, what with your connection between reproduction and ethics -- a connection that could only arise incidentally on my position.
I think the fate of those sadists under my sort of ethics would be something like imprisonment or exile; the isolation of all sadist-creatures in a place where they could do their thing without involving any non-sadistic creatures.
Which if thou dost not use for clearing away the clouds from thy mind
It will go and thou wilt go, never to return.
It depends on the nature of the discussion. But I think my posts around the debate forum would be good examples of how I would conduct myself in different debates, depending on what's being asked and how it's being asked.
If you assume -> Definitions basically specify the objects upon which the axioms act or the nature of that action. They are purely descriptive and hence also unprovable, but they are also not assumptions.
Then, yes, you might as well give up at that point. I am trying to derive the term, not trying to define it at the start. I am using an argument and reasoning chain to arrive at what I mean by "ethics." I am not handing out some definition that must either be agreed on or rejected right at the start.
You can agree with or reject my conclusion if you want, but just be aware it is a conclusion, not a definition.
I am most certainly not doing that, and I wasn't doing it last time either, as I attempted to explain in PM.
But, I am getting very tired of you treating me like some kind of "debate criminal" every time we interact. Can't we have a conversation without you accusing me of a "bad-faith debate?"
We can use ethics to aid social evolution, not biological evolution.
Biological evolution exists ontologically prior to ethics, social evolution exists in tandem with ethics.
Well, I now see that the teleology wording makes it weaker, but other than that I'll say "yes."
You're correct; that is my general assertion.
A person in an insane asylum might think he is performing empirical studies, and might not be able to tell that he is not. However, his conviction doesn't make it true, and his inability to distinguish between imagination and reality does not mean that there isn't a difference. Or are you claiming this isn't true?
Sam's theory cares only about the well-being of the mind. So the person in the insane asylum, as long as their consciousness is experiencing the sensation of well-being, is fulfilling that requirement.
It is possible I could wake up in a Matrix pod, but my theory would say I should then remove myself from the machinery. Sam's says he should plug himself back in if doing so will cause him to forget the whole experience. That's the difference.
And you claim you're not mocking me.....
I am not talking about some timeless, nonphysical God. I am saying YOU, standing between me and Sam as we are plugged into the Matrix, could see that my goal of fitness is not being fulfilled and Sam's goal of conscious well-being IS being fulfilled.
To follow our goals, I should take the red pill and Sam should take the blue pill.
You are pushing me closer to the fence on this one, but I'm not there yet. I believe I can recast the argument to remove any teleology.
The whole "telos" bit was added after I made the thread, when BS brought it up (each of those dividers is from a separate edit made as the thread progressed and I attempted to make my theory more coherent). When I first wrote the OP I did not even know what telos was. You are convincing me its inclusion may have been a mistake.
Gravity is indirectly involved in the sharpness of the rock. Evolution is directly involved in the fitness of a species. I'm not saying you are saying they should kill themselves. I am saying that you are saying that they 'should' stop engaging in sadism, which is vital to their survival.
No, I would say they 'should' do what they need to do to survive as a species. They 'should not' stop engaging in an activity that is vital to their survival as a species.
Except, they--themselves--are "conscious creatures;" so--unless I misunderstand what you've been saying--allowing them to perform their sadism on each other violates your ethics as much as them performing it on any other "conscious creature."
...You can't derive a definition. Okay, here's what I think you mean, being charitable: you are going to start from some definition of ethics that you consider to be "neutral" and attempt to derive some properties from it.
That's a fine way to argue. So: Socratic method. Let's do it. What's ethics, on your position? (To spare us from having to go back and forth a dozen times, you may want to define any subsidiary terms as well. For instance, if you define ethics as "good behavior" I'm just going to ask you to define "good" so you might as well just do it up front.)
How could you possibly miss the point so badly? The point is: what will you do when confronted with a misunderstanding of the very kind that you possess? (We'll soon find out if we carry out the Socratic method above.) If you think that the contextually-previous exchange in this line is a legitimate way of responding to arguments (it's not!) then a sufficiently clever or stubborn opponent will be able to stymie all your arguments using the same nonsense.
So, if I assume the workings of basic logic, I might as well give up in every debate that utilizes definitions? You're not seeing a problem here?
Again, can't be done. Whatever your argument or derivation is, there is no compulsion on anyone's part to agree to call the thing on the bottom line "ethics."
Only if you don't do it. And while we're venting, I have a similar rant: you seem to be extraordinarily resistant to any attempt to educate you. I may not know more than anyone else about ethics, and I don't blame you for calling me out on any mistakes I make there, which are, I'm sure, very likely. But when it comes to basic logic, I know what I'm talking about. If you don't believe you have something to learn from me on that score, then you're mistaken. You continue to flatly contradict me every time I tell you what you're doing wrong, even after you've independently sourced my claims from another professional. I have no idea how to respond to someone who just sticks "not" in front of everything I say and repeats it back to me. It's ridiculous.
Read what I'm saying, understand it, multi-source it if you don't trust me, but once you've done so, straighten yourself out -- don't just keep doing the same thing over and over. If it chafes you to have to back down on some faulty piece of logic because you feel you'd be losing a debate, then take some time, use the various online and offline resources at your disposal, and evaluate it in a peaceful non-debate context. If we can't get logic right, there's just absolutely no chance of making any progress on ethics.
Okay, before, I wrote that murder isn't "bad" because evolution selects against it; rather, evolution selects against it because it's "bad." Murder had to be bad before evolution could identify it as such; the measurements relevant to the evolutionary process are done against a background that already exists. Do you accept this? If so, then there is a clear direction of ontological priority between social evolution and ethics -- ethics is prior.
Well, in a sense it depends what's wrong with his mind. If his mind literally were an experience machine, producing a reality that nobody outside could impinge on in any way, then there is obviously no empirical way for him to know that he is insane. Any empirical theories he develops will be based on his being "fooled," so he could never develop one that concludes he's not being "fooled." I don't think that's really possible for a human, but if it is, then I would say that there is an almost-literal incarnation of a Cartesian demon at work, and of course those are irresolvable problems in philosophy.
(Or to be more glib about it: prove you're not insane.)
Harris is off the table till we figure out what's going on with your theory.
Okay. You're in one right now. What steps do you take on a daily basis to remove yourself?
Does it? I think forcible removal of memories might trigger some red flags on Harris' account. But no more defending Harris until we get you straightened out.
I'm a nonphysical being with respect to you. In your Matrix pod, there's no way for you to interact with me. I might as well be God. Your evaluation of ethics depends on you being able to "reach outside" of your matrix pod and talk to me for an answer. This is non-empirical and I can't believe that you are denying it. What is going on here?
The red-vs-blue pill scenario depends on there being a creature able to transcend realities that can offer you the ability to transcend realities. Highly non-empirical. What is going on here?
I highly recommend that you do so.
First, what are the criteria for whether something is "directly" or "indirectly" involved in something? Second, evolution pushes things toward greater fitness, but it doesn't make the fitness what it is. What would you say if I told you that a thermometer was directly involved in producing the temperature it's measuring?
If you think that's what I'm doing, I believe we are done here. I did extra legwork to read about the definition of "definition" and attempted to understand where I went wrong. I linked you the article, asked for your opinion, and adjusted what I was saying accordingly. I was reading the exact definition of "teleology" and thinking about how best to remove it from my argument as well. But I guess you don't really care about that, because my apparent obstinance offends your sensibilities.
But if acknowledging your superiority (which I've already done a few times on this and other threads) isn't enough for you, and if you feel that me daring to continue to converse so that I might better see your point is just me being "ridiculous," then I don't think dragging this out will be worth either of our time.
If you're committed to talking down to me and acting as if I am incapable of learning, then I really don't wish to continue this discussion. This kind of snarky high-horse attitude--even in the face of submission--is exactly the reason I left cutting-edge academia.
......
Sigh........
*deep breath*
......Ok, I needed to get that out of my system.....
Let me try one more time, because--while I might be dumb as rocks--there are still two points of mine you seem to be misunderstanding, and I hate misunderstandings.
1) When I say "ethics" I am talking about what we "should" or "shouldn't" do. I am not attempting to define the term "ethics" starting with a physical basis; I am attempting to say what we "should" or "should not" do starting with a physical basis. I am attempting to answer the question posed by ethics--"What should we do?"--and arrive at an answer using physical principles.
I don't know if you care or not--because you keep saying you're abandoning Sam's argument and then coming back to it--but Sam is saying we should be presupposing that the answer to the question "what should we do?" is "care about the well-being of conscious creatures." I would rather not presuppose that.
2) I can't prove I'm not insane. The point of the "experience machine" example is that if all you care about is the well-being of your mind then it fulfills your requirements on all levels. You have no reason to want to leave it--assuming you could--because it can make you feel the sensation of well-being, which is your ultimate goal.
However, the machine cannot 'really' increase your fitness, which is my goal.
That is moot--of course--if you can't leave. But, if you assumed you could, Sam's theory would say you "shouldn't," while mine says you "should."
__________________
Now--if you don't mind--I have a lot of being unemployed I need to get back to.
Because real life isn't harrowing enough for me... I need to go online to be reminded of my mental shortcomings.
It's up to you whether you want to end things for your own reasons, and I get that. I'm happy to continue only if I am getting traction on foundational issues.
How could you have gotten me so wrong? I am not looking for acknowledgement of superiority. You could have left that out altogether for all the difference it makes. I skip over it when reading. I am looking for traction on issues, or evidence that I am having some effect.
Well, I can't help that. Oftentimes academics are guilty of expecting people to learn, and too quickly for their own good at that. However, if I may be so bold, I think your mistake might be in the emphasis you place on submission, which is wholly irrelevant. Evincing submission is different from evincing an attempt to learn.
Okay, so ethics on your position is "that which we should do." What's "should?"
Hints: Dictionary says "must" or "ought." What's "ought?" Dictionary says "expression of duty, moral obligation, justice, moral rightness, propriety." I won't belabor you with more details but suffice it to say that all of these things come back around to "ethics" in the end. (This is the circular-definition thing.)
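The circularity can be pictured as a directed graph of dictionary lookups. A minimal sketch in Python -- the particular word chain below is illustrative, not an exact dictionary trace:

```python
# Hypothetical lookup chain: each word maps to the key term
# appearing in its dictionary definition (illustrative only).
defines = {
    "ethics": "should",
    "should": "ought",
    "ought": "moral rightness",
    "moral rightness": "ethics",
}

def definition_chain(word, defines):
    """Follow lookups until a word repeats (a cycle) or a dead end is hit."""
    seen = []
    while word in defines and word not in seen:
        seen.append(word)
        word = defines[word]
    # Return the chain followed and whether it looped back on itself.
    return seen, (word in seen)

chain, circular = definition_chain("ethics", defines)
```

Following the chain from "ethics" loops back to "ethics" without ever bottoming out in a term defined by something outside the cluster -- which is exactly why a presupposition is needed to break out.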
You can't make arguments without presupposing things. (Or rather, any argument you did make under such conditions would be a tautology.) You will see, as we unwind the Socratic method, that you are in fact presupposing things. You may not like what Harris is presupposing, but how is that a basis for any sort of objection? You will find that if you allow that to be a valid response to argumentation, then it can be applied to yours as well.
No, the point of the experience machine is that it can produce any conceivable experience. So if your theories are ultimately grounded in empirics, it can validate any conceivable condition of your theory and therefore give you exactly what you say you want. If your goal is something the machine cannot give you, then your goal is not empirical!
I feel that you are not answering this argument. You are just repeating what you said in the first place and not taking into account any answer I've given.
Name the empirical state your theory evaluates as morally good. Once that state is named, call it X, program the experience machine to give you (and everyone else) exactly X. On what evaluative basis do you refuse to take exactly X when it's offered to you?
You introduce a "real" versus "not actually real" symmetry-breaking to underwrite this. But this is necessarily non-empirical, since you can't by any empirical measurement determine which of these states you are in. Furthermore, if the symmetry-break is open to you, it is open to Sam Harris as well; he can say he only wants "real" happiness, and that the mental states induced by the experience machine are not "real."
Again, all of these arguments I have stated previously. I think you must answer my objections specifically before you can continue to assert the original claims.
Right. You're right. It was foolish of me to think I could turn an "is" into an "ought." I was attempting to give evolution some kind of motive, which is a teleology and incorrect.
I see that now.
I am aware that you need to presuppose things. If you go back over our dissection you will see me say, time and time again, that things must be presupposed. I was trying to do it further back in the reasoning chain than Sam, because he was presupposing the answer to what I think is the key question. I "don't like" his method because it answers what--to me--is the only question worth answering in the discussion with an ad hoc assumption.
"What should we do?"
Read the link I first linked when I brought it up:
http://en.wikipedia.org/wiki/Experience_machine
I believe we are talking about different "experience machine" thought experiments. However--despite this--I think I am beginning to understand what you're saying. In yours, the person isn't aware that the experience machine isn't "real life"?
I was assuming that since the machine dealt only with the mind, the well-being Harris got from the machine was the same as "real" well-being, but my fitness wasn't the same as "real" fitness.
This is because the mental state of well-being would be equivalent in both, but the physical quality of fitness wouldn't be.
I guess I am still being pig-headed on this one because, while I understand the subject in the machine might not know the difference, there still is a difference. As far as my inferior intellect can deduce, anyway.
Well, let's not be too hasty. Keep following along here, I think this line of thinking will ultimately be constructive.
That being said, you've basically sized up the current situation. In fact, as of where we are right now in this little Socratic diversion, it's even worse than that. You can't get an "ought" from anything -- our theory is totally vacuous and gives us no way of making ethical assertions because it doesn't define ethics in terms of anything but itself. We're in the swamp. However, I think it's premature (or at least trivializing) to just accept that.
There may be many ways out of the swamp, but they all take the same form: Here is where we need to make a presupposition or assumption in order to get things moving. Is there any particular one you would have us make?
The only way to answer any question is (ultimately) with an ad hoc assumption. How do we manipulate numbers? Well, our choice of the field axioms isn't (a priori) any less ad hoc than what's going on here. It's only a question of where our choices lead. We "like" the field axioms because when utilized, they solve useful problems and/or produce novel consequences. We may later on find that we "liked" those axioms too much, and some incompatible axioms exist which also give useful answers or novel consequences.
I have read that link. Actually, when I was in undergrad, I read the original paper by Nozick as well as some of the responses from his interlocutors.
I am using the same premise as Nozick -- namely, that there's a machine that can produce any conceivable experience. However, Nozick (says I and some of his respondents) stops short in enumerating the conclusions that follow from the existence of such a thing. He checks his swing. In fairness to him, he's only going after a particularly brain-dead variety of hedonism, but the exact hypothesis he's using can underwrite a much more devastating conclusion that he simply elects not to mention.
He may or may not be aware that it's an experience machine (I'm happy to suppose that he is if that's what you want in this particular case, and I think Nozick intends on awareness, so let's say he is aware) -- but insofar as his wishes are empirical in nature, what he's after can be provided by the machine. If he has an additional wish that his experiences have some property called "real" then that wish is non-empirical.
Put it this way: you could be in an experience machine right now. Does your worry about the possible "unrealness" of everything you know and do color your every ethical decision? I should hope not. Why not? If you can answer that then you get what I'm saying.
I don't buy this assumption. It's the word "real" that kills it for me. Re fitness, why are all of the babies you have in the experience machine not "real" offspring? Again, you could be in an experience machine right now. I don't think if you have children you are constantly wondering whether or not they're real in any evaluative sense. I don't think if Morpheus came and offered you the red pill, you'd love any children you had any less. In fact, psychology may indicate the opposite -- there appears to be quite some question in experimental moral philosophy concerning whether you would take the red pill at all under such conditions!
It's not necessarily that you don't know the difference (although each such belief must stand up to scrutiny under the possibility that you might not) -- it's that if "real" versus "not-real" is something you care about then that's a non-empirical thing you care about.
Moral statements, e.g., "Destroying statues is wrong," are not facts and have no truth value. They seem to have truth values only insofar as the listener is able to infer an implicit value referent from them.
But this leaves room for ambiguity. (Ambiguity is the wind that sustains the heat and duration of various philosophical conversations. Philosophical conversations are supposed to be resolved. If they don't resolve, something may be very wrong.)
When subject to value explication, these statements become facts (that can be true or false). Not only that, but they become rather mundane facts, e.g., "It does not serve the interests of those who value utmost the preservation of all artistic and historical works to destroy statues."
Philosophical conversations reduce to science + semantics unless they're polluted by ambiguity, or deontology, or incoherence.
If you have certain interests, e.g., the preservation of mankind, the conquest of the world by salamanders, the diversity of species, the monolithic unity of all life on the planet, the extinction of all life on the planet, truth and honesty at the expense of all else, satisfaction at the expense of all else, etc., and you wish for others to share those interests, you are free to campaign. It turns out that one of the most effective interest-campaign strategies is to lie to people by convincing them that the interest is not a mere interest, but a magical moral object pervading the world, or a vase on God's coffee table. This is why the false concept of "intrinsic human rights" is so ubiquitously considered true, and sacrosanct.
How do you respond to the usual critiques of non-cognitivism? (of which you seem to be articulating a variant)
To name one common objection: ethics appears to transform sanely under propositional logic in the way you'd expect it to (e.g. "if you ought not do X to Y, and a Z is a Y, then you ought not do X to Z") so ethical statements appear to reside well within the domain of propositional logic. But the domain of propositional logic is propositions. Such a statement would be incoherent in the semantics of propositional logic if its atoms couldn't be assigned truth values, and yet it appears perfectly coherent.
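The "logical dance" can be made concrete with a toy model. This is a sketch under assumed inputs -- the is-a hierarchy and the single prohibition are made up for illustration -- showing that once "ought not" atoms are given truth values, the instantiation inference goes through mechanically:

```python
# Toy model of: "if you ought not do X to any Y, and a Z is a Y,
# then you ought not do X to any Z." The categories and the one
# prohibition below are illustrative assumptions, not claims.
is_a = {"dog": "animal", "oak": "plant"}   # Z is-a Y facts
prohibited = {("harm", "animal")}          # "one ought not do X to Y"

def ought_not(action, kind):
    """True if the action is prohibited for the kind or any superkind."""
    while kind is not None:
        if (action, kind) in prohibited:
            return True
        kind = is_a.get(kind)  # walk up the is-a chain
    return False
```

Harming is prohibited for animals, a dog is an animal, so harming a dog comes out prohibited, while harming an oak does not -- the ought-atoms behave like ordinary truth-valued propositions under instantiation, which is the behavior the objection points to.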
I think this is an illicit accusation of rhetorical intent -- as if every ethicist had as his actual goal to capture the audience or get votes. Remove the imputation of ulterior motive from this argument and it becomes empty.
Those statements are incoherent if you deliberately refuse to impute into them any inferred value referent. "Ought" has no meaning except in terms of a goal or interest.
Consider the statement, "Mushroom pizza is tasty." Does that statement have a truth value? No, it doesn't, unless you do it the favor of filling in its missing preference referent, probably by inference. An inferred appendage like, "to me," or "to you," or "to some," or "to most." Things aren't just "tasty" with no tastiness-judges.
Consider the statement, "X + 5 = 2." Does that have a truth value? No, it does not. In order to assign it a truth value, I need to make a referent appeal. And if that appeal returns, "X = 1 + Y," then I haven't resolved anything, and must make still more appeals.
Now, this doesn't mean that there are no objective components to morality about which we can speak. Once you are given a value referent, you can talk about the mechanics of optimizing it. Non-cognitivism doesn't mean morality exists purely as taste declarations without any world-ties. In the link you provided, they seemed stumped by the following:
'She does not realize that eating meat is wrong.' ... 'Attempts to translate these sentences in an emotivist framework seem to fail (e.g. "She does not realize, 'Boo on eating meat!'")'
The person pondering this "seeming" failure is confused by the fact that in that sentence, not only is a value referent missing, but it's also the kind of moral sentence we typically use when talking about working against your own interests (which is an objective action). The translation into coherent-speak is something like this:
"She does not realize that eating meat is counterproductive in terms of what she values."
I don't meant to imply that this lie is deliberate. "We have libertarian free will," "moral realism is true," etc. are lies that are nearly ubiquitously and sincerely held as true. They are lies, but that doesn't mean their proliferators are intentionally lying, or malicious about it, or what have you.
When I talk about ethical campaigning, I mean things like this:
But I have a broad view of ethics, such that I'd even consider this ethical campaigning:
Now, to diminish my earlier claim a bit, most of these appeal to shared values and are just trying to convince you of supposed facts about how the world works. For example:
"We both agree that being heart-healthy is good, because that helps you stay alive, and staying alive is desirable. Kellogg's cereal has vitamins in it that will make you maximally heart-healthy."
"We both agree that conforming to God's opinions is absolutely essential. Getting mad at homosexuals is by far the best way to do that. Also, sacrificing 10 chickens every other Thursday."
"We both agree that maintaining the stability of our natural world is great. Animal conservation is necessary for accomplishing that, because step on the wrong Amazonian beetle, and our entire food chain could collapse."
You're playing soccer on the football field. I agree with your statements about variable quantification. You've defined "ought" as a ternary predicate, and it's of course a basic tenet of first-order predicate logic that a sentence that fails to provide sufficient inputs to match the arity of its predicates is non-grammatical. So if you read out a sentence from a binary account of "ought" literally and interpret the words in the context of a ternary "ought," you get nonsense.
However, a charge of incoherence against an opposing position can only be met by the lights of the purportedly-incoherent position (or some agreed-upon background material). While you've defined "ought" as ternary as you are free to do, you may not charge someone defining "ought" as binary with incoherence merely because he defines it otherwise. Rather, you must show that his own definition forces upon him logical errors or renders his claims unintelligible against a background you can both agree on.
This is what I see as the force of the original objection; there is nothing evinced by binary "ought" claims that outrages logic or would forbid them from having a truth value -- in fact, they seem to participate in the same "logical dance" that other truth claims do, and only an incurious person would dismiss that as a complete coincidence. Your reply here doesn't appear to contradict that objection by asserting anything like "binary ought leads to internal, irreparable unintelligibilities" -- rather, it appears to be saying "if you adopt my stance, which defines ought as ternary, you can't continue to use ought as if it were binary and still have your sentences make sense." While I agree with that statement, as it is obvious, it is not exactly what I'd call an answer to the objection.
On the matter of mushroom pizza being tasty, this appears to be an instance of impedance mismatch between colloquial and precise use of language. Colloquially, I read "mushroom pizza is tasty" as "I like the taste of mushroom pizza," which is most certainly a truth claim. In the context of precise language, I'd be inclined to agree that it is ambiguous -- however, it certainly isn't necessarily incapable of holding a truth value. Someone that wished to articulate an objective or realistic theory of taste (grounded in neuroscience, say) might well be able to articulate a coherent one-place "tasty" predicate that does not require a referent. Something like "Lights up this taste sub-center of the brain without fail," for instance.
I believe this objection is directed at a different variety of non-cognitivism which defines ethical statements to be nothing more than expressions of emotional revulsion. I agree that it is not a very good objection to your variant.
It seems to me that in order for an accusation of lying to stick to someone at all, it must have been the case that they were making a truth claim in the first place. A moral realist, insofar as his ontology is grounded in axioms of the form "you ought to do (insert something morally realistic)," can't possibly be lying (at least not about the entirety of his position) so long as it is being denied that binary oughts can constitute truth claims. I think you might say that he's rambling or babbling, if you believe his statements to be fundamentally incoherent.
I am certainly prepared to acknowledge that a great deal of means-end reasoning can ultimately be grounded in fact. However, I would deny that ethics is entirely reducible to this brand of means-end, ternary-ought reasoning. For instance, by way of an appeal to descriptive linguistics, people are wont to classify questions of whether particular ends justify particular means as ethical questions. If such questions are intelligible and could potentially have negative answers, that would mean that there would be some true ternary-ought sentences of the form "X ought to Y if he wants Z" that are also "unethical" (albeit in some possibly-different evaluative context). And that, to me, does seem like fertile ground for internal incoherence.
My approach has basically been this: Treat "ought" as I'm treating it -- a thing that demands a referent, analogous to an equation with a dangling variable. When you do, the is/ought problem is completely and elegantly explained and is no longer a confounding mystery. Furthermore, various non-realist moral theories can be elegantly reduced to it, and various realist moral theories and their confusing mysticism can be elegantly explained as failures to ascertain it.
Nobody has to agree with me. In fact, when I explain that this is the root of the issue, I sometimes get "that's not what morality is" responses. This is because morality is laden with folk baggage. While I'd like morality to mean only "right decisionmaking," it actually means, to most,
Similarly, one can define "moral rightness" as something like "that which conforms to the Slab of Dictates." The problem is the same, here: imprecision. You can solve the problem by explication (making moral statements mundane mechanical facts, e.g., "Doing X optimizes value Y") or by universal adoption of a definition of "tasty" or "ought" that is commonly understood to be loaded with a mundane, fact-based referent. Either way, the solution is complete information-transmission such that the statement "comes in for a landing" as a mundane mechanical fact.
Here's a thought experiment I wrote a few weeks ago called "The Fuchsia Fez":
The false claim would be, "There is an objective standard, independent of the preferences of preferencers, against which morality is wholly defined."
They would be "unethical" only against !Z. A mundane explication would be, "X ought[Z] to do Y if he wants Z, but I find Z abhorrent, so X ought[!Z] not to do Y."
Or, in other words, if Y leads to Z, then "ought[Z] = Y" and "ought[!Z] = !Y." Once the [] referent in the "ought" is defined, it becomes a mundane mechanical question of whether Y leads to Z.
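On this ternary reading, an "ought[Z]" claim reduces to the mundane mechanical question of whether the action leads to Z. A sketch, with an illustrative (made-up) leads-to table:

```python
# ought[goal](action) is true iff the action leads to the goal.
# The leads_to table is an assumption for illustration only.
leads_to = {
    "exercise": {"fitness", "fatigue"},
    "eat cake": {"pleasure"},
}

def ought(action, goal):
    """Ternary 'ought' with the goal referent filled in."""
    return goal in leads_to.get(action, set())
```

Relative to the referent "fitness," the claim "you ought to exercise" comes out true; relative to "pleasure," the same sentence comes out false. Change the referent and the truth value can flip -- which is the ambiguity that explication removes.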
(I say "mundane" not to mean "actually boring or simple or easy," but to mean, "lacking the folk mystique of traditional ideas of morality.")
I believe I already stated I was presupposing Evolution had value.
There are two frames of references that statements about well-being and fitness can be evaluated in. One is within the sensory world set up by the experience machine, which I am colloquially calling "the Matrix." The other is outside of the experience machine, which I am colloquially calling "the real world."
In the Matrix frame of reference, both my goals and Sam's goals are being met.
In the real world frame of reference, only Sam's goals are being met.
Not in a vacuum, right. But if you asserted earlier that destruction is evil, you can then evaluate that statement as true. "Should" and "shouldn't" need context to be evaluated. In the presence of goals, for example, they can be evaluated for truth... which I guess you state plainly in your next post:
Right. And the point of this thread was to give scientific context to moral statements.
Wiki? (and then all the links on that page, and those pages, and those....)
Try Essays on Moral Realism edited by Geoffrey Sayre-McCord for a survey of opinions, or R. M. Hare's The Language of Morals for an in-depth articulation of a variety of non-cognitivism similar to what stan seems to be espousing.
candidus inperti; si nil, his utere mecum. ("share it frankly; if not, use these with me" -- Horace, Epistles I.6)
This can be done just by doing science. Many studies are about dissecting complicated correlative relationships in order to find out how the world works, especially the "brains" and "social structures" parts of the world. We use that information to make decisions that more effectively meet our goals.
The sticky wicket is introduced when someone proclaims that you can extract a referent-less ought ("ought[]") from surveys about what folks value. For instance, we can, like Sam Harris, proclaim that objective morality means that which optimizes the feelings of satisfaction enjoyed by people. But we need only lightly exercise our sci-fi brains to find many reasons why "satisfaction" cannot be "ultimate." For example, that optimized circumstance may result in the extinction of humanity. If we find that morally objectionable, we now have to proclaim that both satisfaction and persistence are important and must be optimized (even though they are frequently circumstantially incommensurable). But what about intense satisfaction disparities required by the optimal engine (for instance, perhaps enslaving a few hundred thousand people optimizes general satisfaction and persistence for humanity)? So, your value array grows to something like, "lots of satisfaction," "persistence of species," and "we must maintain a satisfaction minimum for everyone." But what if a device is invented that keeps humans immortal, asleep, and happy? Some might say that's morally fine, others might find that morally abominable.
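The growing-value-array problem can be made concrete with a toy example. All the numbers, value names, and candidate "worlds" below are invented purely for illustration; the point is only that which world counts as "optimal" flips depending on which weighting of the value array you presuppose:

```python
# Toy illustration (invented numbers): each candidate world is scored
# on two presupposed values: average "satisfaction" and the satisfaction
# "floor" (the minimum anyone enjoys). Which world comes out "objectively
# best" depends entirely on the weighting, which must itself be presupposed.

worlds = {
    "slavery_engine": {"satisfaction": 0.9, "floor": 0.1},  # high average, low minimum
    "egalitarian":    {"satisfaction": 0.6, "floor": 0.6},
}

def best(weights):
    """Return the world with the highest weighted score."""
    score = lambda w: sum(weights[v] * worlds[w][v] for v in weights)
    return max(worlds, key=score)

print(best({"satisfaction": 1.0, "floor": 0.0}))  # slavery_engine
print(best({"satisfaction": 0.5, "floor": 0.5}))  # egalitarian
```

Nothing inside the optimization tells you which weighting is correct; the "objective" answer is downstream of an arbitrary choice of value array, which is the arbitrariness the paragraph above describes.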
I don't think these considerations are so absurd as to be meaningless. I think they demonstrate how arbitrary, at the end of the day, the proclamations of Harris-style "objective morality" truly are.
Still, I have a couple of comments that I think tie into the subject of the thread.
This is what I think the argument boils down to. The truth value of an ought statement is dependent on a goal or a value referent, call it "Z." "One ought not borrow the shaman's fez" really means "One ought not borrow the shaman's fez [because Z]" or "One ought[Z] not borrow the shaman's fez." We have to assign a value to Z in order to determine the truth value of these statements. Without "Z", these are not well-formed propositions of binary logic.
Taylor's argument says "Z = maximizing evolutionary fitness." Crashing's and Harris's arguments imply "Z = maximizing the well-being of conscious creatures." Maybe someone else thinks "Z = helping salamanders dominate the world."
Why should I select Taylor's Z or Crashing's Z or any particular Z for that matter? In order for an objective "Z" to exist, it must not only be devoid of internal contradiction, but must provide a reason for rejecting all other possible Zs.
My argument was based on the idea that--whether or not we acknowledge it--social evolution happened and is happening. The "fitter" societies beat out less fit ones. The morals of human society have been evolving, and less beneficial behaviors die out over time. If a society has morals that make it better able to survive than other societies, given time it will expand or its morals will be adopted.
(It's essentially based on the argument Robert Wright makes at the end of The Evolution of God.)
What are we going for? Population size?
EDIT: By the above question, I'm implying that there's no innate "goodness" to "things that are good at surviving and spreading both survive and spread." Algae has us beat. So do ants. And man alive, do we suck compared to bacteria.
Anyway,
We are going for behaviors that increase the "social fitness" of a society (the ability of that society to propagate its morals). Certain actions taken by individuals and/or groups can take away from the fitness of the society they find themselves in; those actions would be "bad." Actions that improve the fitness of a society would be considered "good."
Morals that are beneficial for the society that holds them will cause that society to flourish, while morals that aren't will cause them to die out.
It will happen anyway.
That's what I was talking about, too. I was talking about the stagnation vs. volatility of meme mutation. By "propagate its morals," do you mean, the capacity to maintain a conservative zeitgeist against what might be risky mutations, or do you mean a quickly-moving progressive zeitgeist, daring to try new things?
So... you ARE talking about population size? What on Earth is the metric, here?
I agree that things that will happen will happen. That may be the most pathetic moral claim ever, though (in the sense that it is a non-moral redundancy).