I agree with this statement, but I would point out it implies that the concept of a universal "ought" doesn't really exist. If morality evolved to maximize evolutionary fitness and "[t]here is no 'should' here," then all moral oughts can be rephrased as instrumental statements of the form "if you want X then do Y." For example, "if you want [survival/flourishing of the species], then [behave morally]."
This type of justification for morality doesn't really answer the question: "why should I behave morally if I don't care about the outcomes that morality is trying to achieve?" (a.k.a. "why should Dahmer care about morality?"). The answer is that Dahmer doesn't and shouldn't care. It is only by punishing or otherwise deterring him that society can force him to care.
I don't understand this "I don't care" argument. I hear it a lot, but it doesn't make any sense to me.
We can all agree in advance that a person is free to say "I don't care" to whatever he wants. I can present someone with a proof of the Pythagorean theorem and they can say "I don't care." It doesn't make the Pythagorean theorem false or impugn the objectivity of mathematics as a discipline. It just means that person doesn't care about geometry. Well, so be it.
The same is true for ethics. If someone doesn't care about ethics, that is entirely a statement about that person's character and has nothing whatsoever to do with the validity of a particular ethical theory.
"Care" is a lazy word choice. Let me rephrase.
One can objectively prove the truth of the Pythagorean theorem, and any rational agent must accept this theorem as true. It is also "binding" on everyone in the sense that it is just as true from my perspective as it is from your perspective. I do not need to accept any special premises or assumptions in order to arrive at the conclusion that the Pythagorean theorem is true.
One cannot objectively prove the truth of "ought" moral statements in the same way (if you can, I would love to see it). To prove moral statements, we must first agree about certain value propositions (X is "good," Y is "bad"). But there is no reason why you and I must necessarily agree about these value propositions. You say human flourishing is good, Dahmer says human flourishing is bad, someone else says human flourishing is neither good nor bad.
So I am free to simply say "I disagree with your conception of value" in response to any moral proof you construct, and there is no rational way for you to convince me. That's not the case for the Pythagorean theorem. If I am rational, I must agree that the Pythagorean theorem is true.
Interestingly, universal deterrents can exist only if universal morality does. Why? Well, what's a deterrent? It's basically something people don't want done to them. If a universal deterrent exists, then there is, roughly speaking, a way people don't want to be treated. From this, we can construct what Sam Harris calls "the worst possible misery for everyone": a state of affairs in which everyone is being treated in this way. This, in turn, yields a metric over states of affairs: the further the distance from the worst possible misery, the better. That metric is (well, modulo some admittedly very important details) essentially an objective moral utility function.
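The "distance from the worst possible misery" idea can be made concrete with a toy model. To be clear, this sketch is my own illustration, not anything from the post: it assumes a state of affairs can be summarized as per-person well-being scores in [0, 1], with 0 standing in for the universally unwanted treatment.

```python
import math

# Toy formalization (illustrative only): a "state of affairs" is a tuple of
# per-person well-being scores in [0, 1], where 0 is the hypothesized
# universal "way people don't want to be treated."

WORST_MISERY = 0.0  # every person at the floor

def utility(state):
    """Distance from the worst-possible-misery state: the further, the better."""
    return math.sqrt(sum((w - WORST_MISERY) ** 2 for w in state))

everyone_miserable = (0.0, 0.0, 0.0)
mixed              = (0.9, 0.2, 0.5)
everyone_thriving  = (1.0, 1.0, 1.0)

assert utility(everyone_miserable) == 0.0
assert utility(everyone_miserable) < utility(mixed) < utility(everyone_thriving)
```

The "admittedly very important details" live in the choices this sketch hides: how to score well-being, how to aggregate across people, and which distance metric to use.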
I'm not making the claim that universal deterrents exist, so maybe this isn't directed to me.
Assuming we can construct or define "the worst possible misery for everyone," we may be able to use this to derive some objective moral utility function. But it does not follow that this "objective moral utility function" has any bearing on or resemblance to morality, as we typically think about it. The furthest distance from suffering is not necessarily optimally moral; indeed some moral viewpoints would consider a maximally pleasurable life to be immoral.
Further, we could try to minimize the distance from "the worst possible misery" instead, and that would also be an objective moral utility function. Why is it objectively correct to define suffering as morally bad rather than as morally good?
I don't understand this "I don't care" argument. I hear it a lot, but it doesn't make any sense to me.
We can all agree in advance that a person is free to say "I don't care" to whatever he wants. I can present someone with a proof of the Pythagorean theorem and they can say "I don't care." It doesn't make the Pythagorean theorem false or impugn the objectivity of mathematics as a discipline. It just means that person doesn't care about geometry. Well, so be it.
The same is true for ethics. If someone doesn't care about ethics, that is entirely a statement about that person's character and has nothing whatsoever to do with the validity of a particular ethical theory.
I'm not entirely sure what you mean by this, and which way you're leaning. Can you elaborate?
In any case, it's interesting that you bring up ethics. Ethics aren't universal, they're just a codification of social norms regarding fairness and conduct. As such, people can disagree with them or run counter to them, and they're only 'wrong' in the sense of the judgment of the people who believe in ethics. It's people's belief, not any kind of universal constant, that makes ethics valid. Nothing happens to someone who violates ethical principles and doesn't get caught. Ethics, and fairness, are only important in a cooperative social structure.
Interestingly, universal deterrents can exist only if universal morality does. Why? Well, what's a deterrent? It's basically something people don't want done to them. If a universal deterrent exists, then there is, roughly speaking, a way people don't want to be treated. From this, we can construct what Sam Harris calls "the worst possible misery for everyone": a state of affairs in which everyone is being treated in this way. This, in turn, yields a metric over states of affairs: the further the distance from the worst possible misery, the better. That metric is (well, modulo some admittedly very important details) essentially an objective moral utility function.
This isn't what bitterroot was saying. No one is arguing that a universal deterrent exists, because it doesn't. People have wildly different reactions to deterrents.
One can objectively prove the truth of the Pythagorean theorem, and any rational agent must accept this theorem as true. It is also "binding" on everyone in the sense that it is just as true from my perspective as it is from your perspective. I do not need to accept any special premises or assumptions in order to arrive at the conclusion that the Pythagorean theorem is true.
You don't think being a rational agent is a special premise or assumption? I know vanishingly few rational agents (I don't even know that I'd call myself one) and I don't even know many people who strive to be such. There are things that communities of people dedicated to striving for rationality do (like worrying about existential risk from sentient AI) that just seem bizarre to "normal" people.
One cannot objectively prove the truth of "ought" moral statements in the same way (if you can, I would love to see it). To prove moral statements, we must first agree about certain value propositions (X is "good," Y is "bad"). But there is no reason why you and I must necessarily agree about these value propositions. You say human flourishing is good, Dahmer says human flourishing is bad, someone else says human flourishing is neither good nor bad.
So I am free to simply say "I disagree with your conception of value" in response to any moral proof you construct, and there is no rational way for you to convince me. That's not the case for the Pythagorean theorem. If I am rational, I must agree that the Pythagorean theorem is true.
The dark "secret" of epistemology -- any epistemology, even one that confines itself to pure logic and mathematics -- is that it's never finally grounded in anything, it's never going to be a closed, self-justifying system, and you can't reject or accept a particular epistemic notion without doing epistemology to explain why.
For example, suppose someone comes along who claims that the truth or falsity of a statement is decided by its length (in some fixed, formal semantic encoding) -- if it's an even number of characters, it's true, otherwise it's false. Let's say it turns out that the Pythagorean theorem encodes to an odd number of characters and is therefore false by that person's lights.
Of course, someone who is already rational can see that this violates an axiom of rationality. But how would I show that person that he is being irrational? Well, write down whatever brilliant argument I come up with to persuade him in formal language. Regardless of what I wrote down, it's only going to be persuasive to the person in question if all the sentences have an even number of characters! Otherwise he's going to tell me I've just argued from the basis of something false, which by his lights, I have. There's simply no way for me to get underneath the reasoning process of such a person and fix it. In order to do rational epistemics, you must already accept rational epistemics.
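The self-sealing quality of the even-length criterion becomes vivid if you write it down as a tiny program. This is my own toy construction under the post's assumptions, using the raw string as a stand-in for the "fixed, formal semantic encoding":

```python
# Toy "truth criterion" from the thought experiment: a statement is true
# iff its encoding has an even number of characters.

def parity_true(statement: str) -> bool:
    return len(statement) % 2 == 0

claim    = "a^2 + b^2 = c^2"                      # 15 characters: odd
rebuttal = "length has nothing to do with truth"  # 35 characters: odd

assert parity_true(claim) is False     # the theorem comes out "false"
assert parity_true(rebuttal) is False  # ...and so does the argument against the rule
```

Notice that `parity_true` is applied to the rebuttal itself: any argument against the criterion gets judged by the criterion, which is exactly the "can't get underneath it" problem.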
So, going back to what you said: yes, if you're rational, you must agree that it's true. All of the elaborate machinery required for you to arrive at that truth is built into the word "rational."
I submit to you that the word "ethical" works in exactly the same way as "rational." Just as there is an elaborate set of axioms and definitions and epistemic notions underlying rationality, the same is true for ethics. Those who reject the epistemic machinery associated with rationality, we call irrational. Those who reject the epistemic machinery associated with ethics, we call unethical!
Quote from bitterroot »
I'm not making the claim that universal deterrents exist, so maybe this isn't directed to me.
I was perhaps reading too much into the following:
The answer is that Dahmer doesn't and shouldn't care. It is only by punishing or otherwise deterring him that society can force him to care.
If you won't claim there's a universal deterrent, then I will. Prison is a universal deterrent, or at least within an epsilon of being such. Nobody who is aware that prison is a consequence of a particular action (outside of a negligible collection of sociopaths) weights it positively when evaluating the payoff from doing said action.
This is in the penumbra of an underlying moral concept, which can be summed up as: people flourish when untrammeled.
Quote from bitterroot »
Assuming we can construct or define "the worst possible misery for everyone," we may be able to use this to derive some objective moral utility function. But it does not follow that this "objective moral utility function" has any bearing on or resemblance to morality, as we typically think about it. The furthest distance from suffering is not necessarily optimally moral; indeed some moral viewpoints would consider a maximally pleasurable life to be immoral.
Further, we could try to minimize the distance from "the worst possible misery" instead, and that would also be an objective moral utility function. Why is it objectively correct to define suffering as morally bad rather than as morally good?
I spoke of an objective moral utility function. Surely if f is objectively definable, then so is -f. But need -f be moral if f is? No, says I. The moral direction is, by definition, the direction pointing away from misery.
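The f versus -f point is just the observation that definability is symmetric under negation. A two-line sketch (mine, purely illustrative) makes it explicit:

```python
# If a utility function f is objectively definable, its negation is too;
# definability alone does not pick out the "moral" direction.

def f(state):
    return sum(state)   # stand-in for "distance from misery"

def neg_f(state):
    return -f(state)    # equally well-defined, opposite orientation

s = (0.4, 0.7, 0.1)
assert abs(f(s) - 1.2) < 1e-9   # tolerance for floating-point summation
assert neg_f(s) == -f(s)
```

Which of the two counts as "moral" is supplied by the definition ("away from misery"), not by the mathematics.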
The ultimate epistemic notion underlying these sorts of arguments seems to be that anyone can simply call anything they want ethical. There's a sense in which that's true, and it's the one I just described above. I'll grant that if you permit that, then this is where you end up. However, I don't think we ought to permit that (there's me using ethics to justify ethics) and I further submit that you already know why.
I refer you to this debate, in which you were (very cogently, I thought) arguing about the notion of a free market. When someone offered an alternative conceptualization of a free market, your response was to point out that the definition wasn't "useful" in a sense that you went on to specify.
We can certainly do the same thing with conceptual formulations of ethics. It's precisely that sense of "usefulness" that I apply (and I urge you to apply as well) when ruling out these empty, nihilistic ethical "theories." They don't make any useful predictions about a world in which they are true!
Treat ethical theories the same way you treat economic theories and you will convert yourself away from ethical nihilism quite rapidly. Maybe there are still a bunch of ethical theories legitimately left standing after such a winnowing, but "ethics is underpinned by arbitrary subjective values" won't be among them.
I'm not entirely sure what you mean by this, and which way you're leaning. Can you elaborate?
I'm not sure exactly what you're asking me to elaborate on, so here's a short brain dump on ethics: I believe ethics is or at least will soon become a proper science that produces objective results, but at the moment it is only in its infancy as a science, and the "common man," if you will, has some ideas about it which are turning out to be fairly serious errors.
Imagine chemistry in the 1600s. The "common man" still believes in alchemy, phlogiston, and Galen's four humors, while there are some weirdos out on the fringe, with their then-newfangled microscopes and vacuum pumps and barometers and so forth, who are getting just the barest glimpse into what's really going on.
That's how ethics is now. Once the philosophy and technology develops a bit more, the idea that ethics is just a melange of people's subjective preferences will make about as much sense as phlogiston theory does.
Quote from Jay13x »
Ethics aren't universal, they're just a codification of social norms regarding fairness and conduct.
How does the former follow from the latter?
Quote from Jay13x »
As such, people can disagree with them or run counter to them, and they're only 'wrong' in the sense of the judgment of the people who believe in ethics. It's people's belief, not any kind of universal constant, that makes ethics valid.
You've got it backwards. We were successful because we developed a sense of duty to one another. There is no 'should' here. The only way the human race could work is if we had co-evolved the system of social norms, morals, sense of duty, etc we needed.
I'll note that "the system of social norms that we needed" stands in contrast to other sets of social norms, presumably of the kind we didn't need. Ethics is buried in the distinction between the social norms we need and the ones we don't.
Evolution is an optimization algorithm running against an anterior function: the environment. The result of an evolutionary process (which you're saying here that propensity for ethical behavior is -- and I agree) is a solution to an optimization problem. When you look at the result of an evolutionary process you are seeing a local maximum of an objectively-determined function!
So the universe did "determine" ethics, in the sense that it is from the universe that we take our environment, which is the objective function against which evolution is running.
"Don't kill people" isn't a rule that some guy made up; it's built into the universe, because if you kill enough people, the universe will "respond" by discontinuing your society.
Quote from Jay13x »
Nothing happens to someone who violates ethical principles and doesn't get caught.
Then how did evolution select for particular ethical propensities, as you suggest it did? I submit that it could only do so if the universe does, in fact, "punish" at least certain types of unethical behavior.
Quote from Jay13x »
Ethics, and fairness, are only important in a cooperative social structure.
True, but irrelevant as far as I can tell. Social structures needn't be arbitrary, subjective, or relativistic.
Quote from Jay13x »
This isn't what bitterroot was saying. No one is arguing that a universal deterrent exists, because it doesn't. People have wildly different reactions to deterrents.
If he doesn't, I will. Prisons are (up to a factor of epsilon) a universal deterrent. Nobody wants to go to prison, and everyone who is aware that he might go to prison as a result of an action factors it negatively into his evaluation of the payoff.
Not everyone assigns the same negative value to going to prison, and not everyone evaluates these kinds of formulae in the same way, but modulo an epsilon, nobody is saying "Yay! Prison time!" The basic concept -- "being imprisoned is contrary to flourishing" -- is sound.
If I were to believe that God doesn't exist, I'd pretty much immediately adopt a Randian/Objectivist philosophy of morality. That seems to be the most logical way to view a world without God if you ask me. If God does not exist, then selfishness is the ultimate virtue. In a world without the existence of God the idea of making any sort of sacrifice for others is illogical.
Based on what reasoning?
[Edit:] I just realized I replied to a post from the first page and there are 4 pages. Ignore this if it's already been answered.
One can objectively prove the truth of the Pythagorean theorem, and any rational agent must accept this theorem as true. It is also "binding" on everyone in the sense that it is just as true from my perspective as it is from your perspective. I do not need to accept any special premises or assumptions in order to arrive at the conclusion that the Pythagorean theorem is true.
You don't think being a rational agent is a special premise or assumption? I know vanishingly few rational agents (I don't even know that I'd call myself one) and I don't even know many people who strive to be such. There are things that communities of people dedicated to striving for rationality do (like worrying about existential risk from sentient AI) that just seem bizarre to "normal" people.
One cannot objectively prove the truth of "ought" moral statements in the same way (if you can, I would love to see it). To prove moral statements, we must first agree about certain value propositions (X is "good," Y is "bad"). But there is no reason why you and I must necessarily agree about these value propositions. You say human flourishing is good, Dahmer says human flourishing is bad, someone else says human flourishing is neither good nor bad.
So I am free to simply say "I disagree with your conception of value" in response to any moral proof you construct, and there is no rational way for you to convince me. That's not the case for the Pythagorean theorem. If I am rational, I must agree that the Pythagorean theorem is true.
The dark "secret" of epistemology -- any epistemology, even one that confines itself to pure logic and mathematics -- is that it's never finally grounded in anything, it's never going to be a closed, self-justifying system, and you can't reject or accept a particular epistemic notion without doing epistemology to explain why.
For example, suppose someone comes along who claims that the truth or falsity of a statement is decided by its length (in some fixed, formal semantic encoding) -- if it's an even number of characters, it's true, otherwise it's false. Let's say it turns out that the Pythagorean theorem encodes to an odd number of characters and is therefore false by that person's lights.
Of course, someone who is already rational can see that this violates an axiom of rationality. But how would I show that person that he is being irrational? Well, write down whatever brilliant argument I come up with to persuade him in formal language. Regardless of what I wrote down, it's only going to be persuasive to the person in question if all the sentences have an even number of characters! Otherwise he's going to tell me I've just argued from the basis of something false, which by his lights, I have. There's simply no way for me to get underneath the reasoning process of such a person and fix it. In order to do rational epistemics, you must already accept rational epistemics.
So, going back to what you said: yes, if you're rational, you must agree that it's true. All of the elaborate machinery required for you to arrive at that truth is built into the word "rational."
I submit to you that the word "ethical" works in exactly the same way as "rational." Just as there is an elaborate set of axioms and definitions and epistemic notions underlying rationality, the same is true for ethics. Those who reject the epistemic machinery associated with rationality, we call irrational. Those who reject the epistemic machinery associated with ethics, we call unethical!
Rationality is an assumption that underlies every valid proof of anything. Rationality is not an arbitrary or hand-wavey assumption, it is based on our observations of the mechanics of objective truth in the real world.
Using a system of thought based on rationality (e.g. quantum mechanics), we can predict real-world experimental outcomes with extremely high accuracy and reproducibility. These are objective facts that anyone with sufficient time and interest can verify for themselves. If we subject other notions of truth (e.g. the number of characters in a sentence determines its truth value) to this type of empirical test, then we would find the predictive power of this method to be highly unreliable and not reproducible. This indicates that there is something objectively correct about rationality and that there is something objectively wrong with other metrics for truth.
Unlike rationality, we cannot empirically test the objective validity of moral and ethical value statements. How would one show that "causing harm is bad" is a true moral statement while "causing harm is good" is a false moral statement? The vast majority of people feel this to be true on a gut level, but a bunch of people feeling that something is true does not make it true. Moreover, "bad" and "good" are inherently subjective concepts that cannot be objectively measured; what is "bad" and what is "good" depends on who you ask.
I also want to point out another problem with your line of thinking, namely the idea that ethics is an alternative set of epistemic machinery. Ethics is not an alternative to rationality; ethical arguments rely on the epistemology of rational thought plus the epistemology of ethics on top of that. So ethics is necessarily constrained to obey the laws of rationality; it is subsidiary to rationality.
Quote from bitterroot »
I'm not making the claim that universal deterrents exist, so maybe this isn't directed to me.
I was perhaps reading too much into the following:
The answer is that Dahmer doesn't and shouldn't care. It is only by punishing or otherwise deterring him that society can force him to care.
If you won't claim there's a universal deterrent, then I will. Prison is a universal deterrent, or at least within an epsilon of being such. Nobody who is aware that prison is a consequence of a particular action (outside of a negligible collection of sociopaths) weights it positively when evaluating the payoff from doing said action.
Most people don't want to go to prison. But, as you said, there's a "negligible collection" of people who either enjoy prison or are indifferent to it for whatever reason. The existence of even a single person who is not deterred by prison means prison is not a universal deterrent.
And even if we were to assume arguendo that prison is a deterrent to everyone, the degree to which prison is a deterrent varies significantly from person to person. Accordingly, it might not represent "the worst possible misery for everyone," meaning that it might not be useful as a means for constructing a utility function. Maximizing the utilitarian distance from prison might not result in maximal utility.
There's another problem with the "maximize the distance" idea as well: under most formulations, morality is path-dependent. In other words, how you get from point A to point B is just as important as the distance between those points when we talk about morality. Maybe we could increase total societal happiness by painlessly euthanizing all the unhappy people. This may move us to a "better" position on the utility function, but most would agree it's morally wrong to kill innocent people.
This is in the penumbra of an underlying moral concept, which can be summed up as: people flourish when untrammeled.
I tend to believe this is true, but certainly many people disagree. And surely you would agree that there is a subset of people who flourish more when their personal freedom is restricted in some way (a heroin addict may be better off if he is prevented from getting heroin).
But at the end of the day, even if I agree with your statement above, you are still begging the question. The debate is not about how to maximize flourishing. The debate is about how you demonstrate that maximizing flourishing is "moral" or "good." Indeed, how would you prove that anything is "moral" or "immoral" in an objective, empirically-verifiable way?
I spoke of an objective moral utility function. Surely if f is objectively definable, then so is -f. But need -f be moral if f is? No, says I. The moral direction is, by definition, the direction pointing away from misery.
If misery is only immoral "by definition" (i.e. by fiat), then morality is inherently arbitrary. I can substitute a different definition and arrive at a different result. There is no rational or empirical reason to conclude that your definition is better than mine.
This stands in stark contrast to rationality, which we can expect to outperform all its competitors when put to the empirical test. If "your rationality" is different from "my rationality," we can put these to the test in the real world and see which one reliably cashes out by correctly predicting real-world results.
The ultimate epistemic notion underlying these sorts of arguments seems to be that anyone can simply call anything they want ethical. There's a sense in which that's true, and it's the one I just described above. I'll grant that if you permit that, then this is where you end up. However, I don't think we ought to permit that (there's me using ethics to justify ethics) and I further submit that you already know why.
I refer you to this debate, in which you were (very cogently, I thought) arguing about the notion of a free market. When someone offered an alternative conceptualization of a free market, your response was to point out that the definition wasn't "useful" in a sense that you went on to specify.
We can certainly do the same thing with conceptual formulations of ethics. It's precisely that sense of "usefulness" that I apply (and I urge you to apply as well) when ruling out these empty, nihilistic ethical "theories." They don't make any useful predictions about a world in which they are true!
Treat ethical theories the same way you treat economic theories and you will convert yourself away from ethical nihilism quite rapidly. Maybe there are still a bunch of ethical theories legitimately left standing after such a winnowing, but "ethics is underpinned by arbitrary subjective values" won't be among them.
I do, in fact, treat ethics and economics the same. Both these disciplines are only useful if you and your audience agree on the goals you wish to achieve. Both economics and ethics are underpinned by arbitrary subjective values. Thus "should" or "ought" statements in both economics and ethics reduce to statements of the form "if you want to accomplish X, do Y."
In the example you linked, I was arguing about the definition of an abstract model called a free market, and the assumptions needed to make this model "useful." However, I acknowledge that "useful" is not an objective concept; it is based on the types of goals you are trying to achieve. In this particular example, I was trying to achieve the goal of modeling "economic efficiency," which essentially equates to "maximizing total social utility."
If my audience agrees that maximizing total social utility is a worthwhile goal, then we can use my model to have a conversation about that. However, there is no objective reason why my audience must agree to this. For example, maximizing total social utility does not include any notion of "fairness" or "social justice," so someone who thinks these things are valuable might not find my argument persuasive. There is no way for me to prove that they are wrong to care about social justice. There is no way for them to prove that I am wrong to not care about social justice.
Economic and ethical arguments are only useful if we agree at the outset about the goals we are trying to achieve. But there is no way to prove that one set of goals is "better" than another set of goals.
I agree with this statement, but I would point out it implies that the concept of a universal "ought" doesn't really exist. If morality evolved to maximize evolutionary fitness and "[t]here is no 'should' here," then all moral oughts can be rephrased as instrumental statements of the form "if you want X then do Y." For example, "if you want [survival/flourishing of the species], then [behave morally]."
This type of justification for morality doesn't really answer the question: "why should I behave morally if I don't care about the outcomes that morality is trying to achieve?"
Yes it does answer that question: "You shouldn't."
If you want X, then do Y.
If you don't do Y, you won't get X.
So, if you don't want X, then you really have no objective reason to do Y. If you don't want to be good or moral, then you don't have a reason to act good or moral. I don't see what the issue is. However,
We're linking [acting moral] to [human survival]. So, "if you want [survival/flourishing of the species], then [behave morally]." If Dahmer cares about the survival of his species, then he should act moral. Now, maybe Dahmer is shortsighted and doesn't care. If that is the case, then -like the computer BS mentioned- he doesn't have a reason to act moral.
Now, while assuming a shortsighted Dahmer, "We" -as a species/group- do want [survival/flourishing]. So, "We" DO have reason to make shortsighted Dahmer act moral. We ought to find something shortsighted Dahmer does care about, and link that to him acting moral. (If he is acting at all -unlike the computer- then we can assume he cares about something.)
Because "We" -collectively- want [survival/flourishing of the species], "We" should enforce [moral behavior]. It would be like saying:
"If you don't do what God wants, you will go to hell."
"But, I want to go to hell, so what should I do?"
"Not what God wants."
So, going back to what you said: yes, if you're rational, you must agree that it's true. All of the elaborate machinery required for you to arrive at that truth is built into the word "rational."
I submit to you that the word "ethical" works in exactly the same way as "rational." Just as there is an elaborate set of axioms and definitions and epistemic notions underlying rationality, the same is true for ethics. Those who reject the epistemic machinery associated with rationality, we call irrational. Those who reject the epistemic machinery associated with ethics, we call unethical!
The Bible uses a similar argument in Romans 1. Basically those who reject the epistemic and moral machinery of Biblical Christianity are said to be "suppressing the truth in unrighteousness." So in the end it all boils down to questions of ultimate authorities. An ultimate authority must be self-authenticating. Logic can't be a self-authenticating authority because that would be circular reasoning, which is why logic can never appeal to itself as the ultimate authority without also simultaneously destroying its own foundation as an ultimate authority.
But the Bible is a self-authenticating authority. It testifies that it itself is inspired of God, and within its pages you'll find the worldview with the best explanation for why human beings are intelligible, rational, and moral beings.
I agree with this statement, but I would point out it implies that the concept of a universal "ought" doesn't really exist. If morality evolved to maximize evolutionary fitness and "[t]here is no 'should' here," then all moral oughts can be rephrased as instrumental statements of the form "if you want X then do Y." For example, "if you want [survival/flourishing of the species], then [behave morally]."
This type of justification for morality doesn't really answer the question: "why should I behave morally if I don't care about the outcomes that morality is trying to achieve?"
Yes it does answer that question: "You shouldn't."
If you want X, then do Y.
If you don't do Y, you won't get X.
So, if you don't want X, then you really have no objective reason to do Y. If you don't want to be good or moral, then you don't have a reason to act morally. I don't see what the issue is. However,
We're linking [acting morally] to [human survival]. So, "if you want [survival/flourishing of the species], then [behave morally]." If Dahmer cares about the survival of his species, then he should act morally. Now, maybe Dahmer is shortsighted and doesn't care. If that is the case, then -like the computer BS mentioned- he doesn't have a reason to act morally.
Now, while assuming a shortsighted Dahmer, "We" -as a species/group- do want [survival/flourishing]. So, "We" DO have reason to make shortsighted Dahmer act morally. We ought to find something shortsighted Dahmer does care about, and link that to him acting morally. (If he is acting at all -unlike the computer- then we can assume he cares about something.)
Because "We" -collectively- want [survival/flourishing of the species], "We" should enforce [moral behavior].
I basically agree with this, but it's an acknowledgement that "should" has no objective, universal meaning. When most people talk about morality (including OP, I assume), they are talking about rules they claim exist independent of particular goals. In other words, a Christian would likely say that the rule "you should not kill innocent people" is an objectively true statement for all people. The truth of the statement does not depend on the preferences or goals of society or of the person hearing the statement. It is an inherent property of the universe (and/or of God), the same way the mass of the electron or the speed of light is an inherent property of the universe.
If morality is entirely goal-oriented, then there is nothing special about moral ought statements. There is no particular reason you should follow a moral statement if you determine it is not in your interest to do so. That's the point of the thread, I believe. If someone wants to behave immorally, and either thinks they can escape the societal consequences or is willing to bear those consequences, why should they behave morally rather than immorally? You presumably would say "there's no reason to act morally in that situation." The Christian would say "because behaving immorally is transgressive against God and the universe, it is always inherently bad to behave immorally."
It would be like saying:
"If you don't do what God wants, you will go to hell."
"But, I want to go to hell, so what should I do?"
"Not what God wants."
Right, and this is what most Christians believe. Hell is a "choice" to separate yourself from God. This is how they solve the problem of "if God is omnibenevolent, why does he sentence people to eternity in hell?" The answer is that God doesn't choose to send them there, they choose to send themselves there.
But my point is that hell is an inherently bad place compared with heaven. This implies that there is some objective moral order to the universe that creates a distinction between goodness and badness; "bad" actions have "bad" eternal consequences while "good" actions have "good" eternal consequences. (I acknowledge this is an oversimplification of Christian doctrine).
If you don't believe in some variation of God, heaven, hell, karma, etc, then there is no objective standard that defines "good." It is a subjective, pragmatic standard based on what happens to benefit the survival and flourishing of your society at this particular time. Some moral ideas will tend to be constant between societies, like "killing innocent people is bad," but the fact that a lot of societies come to the same subjective moral conclusions is not the same thing as saying those conclusions are objectively true.
If the person in question goes to Hell, he might realize it's not where he really wants to be. However, without this enlightened future knowledge, he could currently believe he wants to go to Hell, even if Hell is truly "universally subjectively intolerable." He is objectively wrong about Hell, but he doesn't know that. Thus, he would conclude -incorrectly- he should not do what God wants.
Likewise, shortsighted Dahmer can claim he really wants to do immoral actions even in the face of "if you want [survival/flourishing of the species], then [behave morally]." But, we don't know if he is truly aware of the consequences of his actions. If faced with the undeniable reality of humanity's extinction, he might realize he finds it undesirable.
Thus -despite it being objectively true he should behave morally- without this enlightened self-interest, he claims he shouldn't.
But his actions or beliefs don't change the existence of the universal truth. As with the chess example: if you want to win, there are objectively moves you should make and ones you shouldn't make. But that doesn't mean you are aware of this. You might truly want to win, but still think you should make moves that -in reality- you shouldn't.
Likewise -with perfect knowledge- there might be a universal morality system no sentient creature could deny. But, nothing would prevent them from denying it without that knowledge.
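The chess point can actually be made concrete in code. For a game small enough to solve outright -tic-tac-toe here, purely as a stand-in for chess, which is far too large for this brute-force approach- a minimax search computes the objectively correct result relative to the goal of winning, whether or not any player is aware of it. A minimal sketch:

```python
# Minimax on tic-tac-toe: relative to the goal "win," some moves are
# objectively correct and others are mistakes, independent of whether
# the player knows it. Board is a 9-character string, ' ' = empty.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move, under perfect play:
    +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == ' ']
    return max(values) if player == 'X' else min(values)

# With perfect play from both sides, tic-tac-toe is a draw:
print(minimax(' ' * 9, 'X'))  # 0
```

The point isn't the game itself; it's that the value of each move is a fact about the game tree, fixed by the goal, regardless of what any player believes about it.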
Likewise, shortsighted Dahmer can claim he really wants to do immoral actions even in the face of "if you want [survival/flourishing of the species], then [behave morally]." But, we don't know if he is truly aware of the consequences of his actions. If faced with the undeniable reality of humanity's extinction, he might realize he finds it undesirable.
Taking an action that is detrimental to the flourishing of humanity will not necessarily cause humanity's extinction. In fact, unless undertaken on an enormous scale, an immoral action is almost certain to not cause the extinction of humanity. So it is completely possible for someone to conclude correctly that it is in their best interest to take a particular immoral action. The harm to society is diffuse and only affects them a small amount, but the benefit they receive from the action is relatively large. Absent some kind of punishment or deterrent, acting immorally is the rational choice under those circumstances.
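That diffuse-harm arithmetic is easy to make explicit. The numbers below are entirely invented for illustration; the shape of the calculation is what matters: the harm is divided across everyone, the benefit is not, so absent a deterrent the act "pays."

```python
# Toy free-rider arithmetic with invented numbers: the harm an immoral
# act imposes on society is spread across everyone, while the benefit
# is concentrated on the actor.

population = 1_000_000
benefit_to_actor = 100.0   # gain to the actor from the immoral act
total_harm = 10_000.0      # total cost imposed on society

harm_per_person = total_harm / population        # each person bears 0.01
actor_net = benefit_to_actor - harm_per_person   # actor bears only his share

print(actor_net)                      # 99.99 -> positive: the act pays
print(total_harm - benefit_to_actor)  # 9900.0 -> society as a whole loses

# A deterrent changes the calculation: with probability p of being
# caught and a fine F, the actor's expected payoff is actor_net - p * F.
p, F = 0.5, 500.0
print(actor_net - p * F)              # negative: now immorality doesn't pay
```

Which is exactly the "absent some kind of punishment or deterrent" caveat: the deterrent is what flips the sign of the actor's expected payoff.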
On the other hand, assuming no one would ever rationally choose to go to hell (or whatever we say the negative cosmic consequences are), then no one could ever rationally choose to act immorally. Acting immorally would always be a mistake.
That's how ethics is now. Once the philosophy and technology develops a bit more, the idea that ethics is just a melange of people's subjective preferences will make about as much sense as phlogiston theory does.
The problem here is that unlike the hard sciences, human beings don't operate purely rationally, and there is no physical barrier to 'defying' ethics. Further, unlike hard sciences, which only change as we learn more, ethics change based on group consensus. The majority of the human race believing the world to be flat didn't make it flat, but the majority of the human race believing slavery is wrong makes it wrong. Yes, there are general guiding principles towards group cohesion, but there is a hell of a lot of wiggle room in between. A society where all but one guy are slaves is a society that has just as much chance at success as one where everyone is free.
Humans as a society can, and have, survived and thrived off principles we would consider unethical. And they'll do it again in the future. We know this because we've done it for thousands of years. To suggest otherwise is ethnocentrism at its finest.
I'll note that "the system of social norms that we needed" stands in contrast to other sets of social norms, presumably of the kind we didn't need. Ethics is buried in the distinction between the social norms we need and the ones we don't.
You're using 'need' here, but it's not really the case. They're social norms that work. Our current set of social norms may work, but so have others.
"Don't kill people" isn't a rule that some guy made up; it's built into the universe, because if you kill enough people, the universe will "respond" by discontinuing your society.
Or a country built on genocide and slaves will become the most powerful on Earth. It's hard to make judgments like this because all societies eventually change. What worked once can stop working, and then work again later. Ethics are just the codification of what is currently working.
Nothing happens to someone who violates ethical principles and doesn't get caught.
Then how did evolution select for particular ethical propensities, as you suggest it did? I submit that it could only do so if the universe does, in fact, "punish" at least certain types of unethical behavior.
...Because enough people get caught? That was the point of the 'doesn't get caught' part of that statement. As a whole there is backlash from those in the social group when the norms are violated, but this rule only applies in the broadest sense. People violating the established 'rules' are good for societies, because it forces them to evolve. Otherwise we'd stagnate.
Nobody wants to go to prison, and everyone who is aware that he might go to prison as a result of an action factors it negatively into his evaluation of the payoff.
People often don't take prison into consideration when they commit a crime, and people DO want to go to prison, in some cases, especially old timers who got used to prison and are paroled or released much, much later in their lives. Some people think of prison as a second home, they're in and out so much. You can say that generally prison is a deterrent, but it's not by any stretch of the imagination a universal deterrent, because there is likely at least one person out there who has no feelings about going to prison either way.
I think we're mostly on the same page here, I just don't think you're taking into consideration that ethics change as social norms do, and they're only right or wrong depending on what social norm perspective you're viewing them from.
But the Bible is a self-authenticating authority. It testifies that it itself is inspired of God, and within its pages you'll find the worldview with the best explanation for why human beings are intelligible, rational, and moral beings.
I say that my word is the word of God and what I say is the best explanation for why human beings are often unintelligible, irrational and sometimes moral beings.
But that's circular reasoning, just saying it is because I say so is meaningless.
Taking an action that is detrimental to the flourishing of humanity will not necessarily cause humanity's extinction. In fact, unless undertaken on an enormous scale, an immoral action is almost certain to not cause the extinction of humanity. So it is completely possible for someone to conclude correctly that it is in their best interest to take a particular immoral action. The harm to society is diffuse and only affects them a small amount, but the benefit they receive from the action is relatively large. Absent some kind of punishment or deterrent, acting immorally is the rational choice under those circumstances.
But, if each individual acted immorally, it could very well lead to the extinction of humanity. If we're looking for some kind of "universal law," we should be looking at the problem universally.
Anyway, if an individual who claims he wishes to act immorally were made truly aware of the totality of the issue, it might become clear to him that he shouldn't be acting immorally. I mean, many forms of Hell in fiction are just that: people being shown -over and over- all of the negative consequences of their actions and forced to feel what they did to others.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
Logic doesn't claim to be the ultimate authority. If you think that logic is the sort of thing that even could be an authority, you misunderstand what logic is. It's like thinking you could wear a mathematical theorem as a hat -- they're just different categories of thing entirely.
...which means it too "can never appeal to itself as the ultimate authority without also simultaneously destroying its own foundation as an ultimate authority". Honestly, cloudman, double standard much? It doesn't even matter that your claim about best explanations is nonsensical (since you'd need some sort of outside authority to evaluate explanations as "better" or "worse"). That's just a detail of the postmortem for why this particular argument fails. And we don't need to get into such details. You yourself have pointed out the general reason why no proclaimed authority can ever self-authenticate: because that would be circular reasoning.
Rationality is an assumption that underlies every valid proof of anything. Rationality is not an arbitrary or hand-wavey assumption, it is based on our observations of the mechanics of objective truth in the real world.
Using a system of thought based on rationality, (e.g. quantum mechanics) we can predict real-world experimental outcomes with extremely high accuracy and reproducibility. These are objective facts that anyone with sufficient time and interest can verify for themselves. If we subject other notions of truth (e.g. the number of words in a sentence determines its truth value) to this type of empirical test, then we would find the predictive power of this method to be highly unreliable and not reproducible. This indicates that there is something objectively correct about rationality and that there is something objectively wrong with other metrics for truth.
Oh, come now. Turn your mind to criticizing rationality using the same pattern of argument you're using against ethics, and you will see that both disciplines fall if you insist on fairly applying the "I don't care" argument to both cases.
Here: Suppose I don't care about accuracy and reproducibility, since both of those words end with 'y.' What then?
Caring about accuracy and reproducibility and correspondence with the outside world is the very basis of rationality (in fact, it's sometimes taken as the definition thereof.) So your ability to convince me of the "correctness" of rationality depends crucially on me having already agreed to a particular standard for correctness -- namely, that associated with being rational!
I emphasize again that this is a game that anyone can play with any epistemic theory, and the game can be played until the heat death of the universe. Generally, people can see it's pointless in every other context except ethics.
Quote from bitterroot »
Unlike rationality, we cannot empirically test the objective validity of moral and ethical value statements. How would one show that "causing harm is bad" is a true moral statement while "causing harm is good" is a false moral statement?
If you wanted to test that, you could do something like this: go to the historical record, look at societies where causing harm was endorsed, contrast with societies where causing harm was forbidden, and look at how well those societies did. Probably you could get pretty far just doing regression on the binary variable "Does this society still exist?" Obviously a lot of details need to be filled in to make this a sound experiment, but the basis of it would be empirical data.
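A crude version of that test can be sketched in a few lines. The data below is entirely invented (coding real historical societies would be the hard part); with each society recorded as (endorsed_harm, still_exists), a regression on a single binary predictor reduces to comparing survival rates across the two groups:

```python
# Toy version of the proposed empirical test. The records are INVENTED,
# purely to show the shape of the comparison: each society is a pair
# (endorsed_harm, still_exists).

societies = [
    (True, False), (True, False), (True, True),  (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

def survival_rate(endorsed_harm):
    """Fraction of societies in the given group that still exist."""
    group = [exists for harm, exists in societies if harm == endorsed_harm]
    return sum(group) / len(group)

print(survival_rate(True))   # 0.25 -> harm-endorsing societies
print(survival_rate(False))  # 0.75 -> harm-forbidding societies
```

A real version would need a proper sample, controls for confounders, and something like logistic regression rather than raw rates, but the point stands: the evidence would be empirical.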
Quote from bitterroot »
Moreover, "bad" and "good" are inherently subjective concepts that cannot be objectively measured; what is "bad" and what is "good" depends on who you ask.
You can't infer subjectivity of a thing from subjectivity of opinion about that thing. Ask a hundred random people how the double-slit experiment works and you'll get a hundred different answers. It doesn't mean it's subjective.
Quote from bitterroot »
I also want to point out another problem with your line of thinking, namely the idea that ethics is an alternative set of epistemic machinery. Ethics is not an alternative to rationality; ethical arguments rely on the epistemology of rational thought plus the epistemology of ethics on top of that. So ethics is necessarily constrained to obey the laws of rationality; it is subsidiary to rationality.
I fail to see how this could possibly represent a problem with my line of thinking, since I agree with it. I don't propose that ethics is disjoint from rationality. I do point out that, as an epistemic theory, it is not self-justifying and it depends on the acceptance of axioms and definitions. If an anti-ethicist lodges an epistemic objection to ethics (e.g. by saying that its definitions and axioms exist only by fiat) then any such objection also applies to rationality, because it is also an epistemic theory. (and, e.g., its definitions and axioms exist only by fiat as well)
Thus, by contraposition, a person who lodges no epistemic objections to rationality should, if he wishes to be consistent, lodge no epistemic objections to ethics. (Of course, this leaves open non-epistemic objections and objections to particular theories of ethics.)
Quote from bitterroot »
There's another problem with the "maximize the distance" idea as well; namely that morality is path-dependent under most formulations. In other words, how you get from point A to point B is just as important as the distance between those points when we talk about morality. Maybe we could increase total societal happiness by painlessly euthanizing all the unhappy people. This may move us to a "better" position on the utility function, but most would agree it's morally wrong to kill innocent people.
This is an entirely sensible non-epistemic objection, and certainly a subject worth discussing.
Big picture, though -- if you're agreeing that there's an objective system of utility out there, and your objection is just through what means that utility gets maximized, then you're essentially conceding the better part of the argument to me. When it comes to the actual details of the utility function and how to maximize it, I'm perfectly prepared to admit my ignorance. Like I said, moral science is extremely primitive in its present state.
Quote from bitterroot »
And surely you would agree that there is a subset of people who flourish more when their personal freedom is restricted in some way (a heroin addict may be better off if he is prevented from getting heroin).
Granted. Your agreement that there is an actual fact of the matter concerning whether the heroin addict is flourishing or not is all that I ask for.
Quote from bitterroot »
But at the end of the day, even if I agree with your statement above, you are still begging the question. The debate is not about how to maximize flourishing. The debate is about how you demonstrate that maximizing flourishing is "moral" or "good." Indeed, how would you prove that anything is "moral" or "immoral" in an objective, empirically-verifiable way?
Whatever behaviors make the heroin addict flourish are the moral ones, and those that don't are the immoral ones. You just seemingly acknowledged that there was a fact of the matter about which of these two things is happening, presumably based on the physical state of the heroin addict himself. Which is empirical. What is left to explain?
Quote from bitterroot »
If misery is only immoral "by definition" (i.e. by fiat), then morality is inherently arbitrary. I can substitute a different definition and arrive at a different result. There is no rational or empirical reason to conclude that your definition is better than mine.
Now we're back to the epistemic objections that don't work. I can do the same thing with rationality: substitute a different definition of rationality that doesn't care about accuracy or empiricism at all. Then what?
All definitions exist only by fiat. A definition is just a shortcut. Changing definitions doesn't change what's true.
Quote from bitterroot »
This stands in stark contrast to rationality, which we can expect to outperform all its competitors when put to the empirical test. If "your rationality" is different from "my rationality," we can put these to the test in the real world and see which one reliably cashes out by correctly predicting real-world results.
Imagine you're talking to an "epistemic Jeffrey Dahmer." In response to your proposed test, comparing his brand of rationality to yours, he simply informs you that he doesn't care about real-world results. What then?
Quote from bitterroot »
In the example you linked, I was arguing about the definition of an abstract model called a free market, and the assumptions needed to make this model "useful." However, I acknowledge that "useful" is not an objective concept; it is based on the types of goals you are trying to achieve. In this particular example, I was trying to achieve the goal of modeling "economic efficiency," which essentially equates to "maximizing total social utility."
No, your response to your interlocutor in that debate was a lot more specific than that. It was substantially objective. You said:
What do I mean "it's not a useful definition?" I mean that it doesn't tell us anything about the market. Is the market regulated? Maybe. It might be regulated by a cartel, or a corporation, or a church, just not by a "monolithic legal entity." Is the market efficient? Maybe. There might be huge, terrible externalities that make the market's allocation of resources super inefficient and harmful. Is it a competitive market? Maybe. Or maybe the entire market consists of a single, gigantic monopolistic corporation. So I dislike your definition because it's meaningless. We can't make a single useful prediction about how your "free market" behaves, or what it's like to live there.
By the same token, an ethical theory that refuses even to define what ethics is, and expressly rejects all attempts to do so, tells us nothing about a world in which that ethical theory holds.
Quote from bitterroot »
If my audience agrees that maximizing total social utility is a worthwhile goal, then we can use my model to have a conversation about that. However, there is no objective reason why my audience must agree to this. For example, maximizing total social utility does not include any notion of "fairness" or "social justice," so someone who thinks these things are valuable might not find my argument persuasive. There is no way for me to prove that they are wrong to care about social justice. There is no way for them to prove that I am wrong to not care about social justice.
Sure there is. If it weren't unethical to perform an experiment that would test this theory, I would be all for it, because I bet a room full of SJWs would eat each other inside of a week.
Quote from bitterroot »
Economic and ethical arguments are only useful if we agree at the outset about the goals we are trying to achieve. But there is no way to prove that one set of goals is "better" than another set of goals.
Sure there is. Example: The famines of Stalinist Russia were, in large part, the result of implementing a particular economic theory. Does this not count as evidence for the proposition that that particular economic theory is bad?
If not, why not? If so, then insofar as that economic theory constitutes a "set of goals," well, we've just proven that set of goals is bad.
But the Bible is a self-authenticating authority. It testifies that it itself is inspired of God, and within its pages you'll find the worldview with the best explanation for why human beings are intelligible, rational, and moral beings.
The Bible is a mere collection of squiggles on paper absent an epistemology sufficiently powerful to interpret it, and any such system is powerful enough to recognize circular logic. Thus the very tool that allows you to perceive the Bible's self-endorsement in the first place is the same tool that punctures an irreparable hole in it.
Logic doesn't claim to be the ultimate authority. If you think that logic is the sort of thing that even could be an authority, you misunderstand what logic is. It's like thinking you could wear a mathematical theorem as a hat -- they're just different categories of thing entirely.
...which means it too "can never appeal to itself as the ultimate authority without also simultaneously destroying its own foundation as an ultimate authority". Honestly, cloudman, double standard much? It doesn't even matter that your claim about best explanations is nonsensical (since you'd need some sort of outside authority to evaluate explanations as "better" or "worse"). That's just a detail of the postmortem for why this particular argument fails. And we don't need to get into such details. You yourself have pointed out the general reason why no proclaimed authority can ever self-authenticate: because that would be circular reasoning.
Using the Bible to prove the Bible is not a problem because it is Biblical. It is perfectly consistent within itself to do so. The fact that it is circular is trivial because the Bible is not concerned with proving its claims in a logical fashion. That is not the case with logic. Using logic to prove logic is a problem because that is illogical. So no it is not a double standard.
When you place logic as the ultimate authority over scripture you have placed a worldview that is an absurdity to begin with over the Bible. My worldview does not have to be sucked into the problems of your worldview.
The Bible is a mere collection of squiggles on paper absent an epistemology sufficiently powerful to interpret it, and any such system is powerful enough to recognize circular logic. Thus the very tool that allows you to perceive the Bible's self-endorsement in the first place is the same tool that punctures an irreparable hole in it.
You have no basis for knowing that logic is even intelligible in your worldview, and yet you just assume it to be true. So you are guilty of the same circularity that you accuse me of. The difference is your circularity nullifies your ultimate epistemic authority while my circularity is consistent with and does not nullify my ultimate epistemic authority.
The difference between our worldviews is I actually have a basis for accounting for the existence of logic. Logic exists because human beings are image bearers of God designed to think God's thoughts after him. And that is how the Bible accounts for reliability and the existence of logic.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
Probably not intolerable to Dahmer. He witnessed firsthand a large part of the suffering his actions caused, and it didn't seem to bother him.
It didn't bother him because he lacked empathy. You know, "the ability to understand and share the feelings of another." If he were to have that understanding -that knowledge- I think it's safe to say it would bother him.
I'm pretty sure giving him perfect understanding -including empathy- and then making him analyze the full consequences of his actions would be rather intolerable. I would even call it "universally subjectively intolerable."
You have no basis for knowing that logic is even intelligible in your worldview, and yet you just assume it to be true. So you are guilty of the same circularity that you accuse me of. The difference is your circularity nullifies your ultimate epistemic authority while my circularity is consistent with and does not nullify my ultimate epistemic authority.
On the contrary, since I expressly acknowledge that my position is not ultimately self-justifying, I am not guilty of any circularity at all. And since you claim yours is, you are guilty of circularity. You just don't care about circularity, because God, or whatever. Well, this is exactly the kind of "I don't care" argument that I'm currently debating with bitterroot about. If you don't want to read the extremely long posts (and I'm not sure I'd blame you) the TLDR is that such arguments undermine the very basis for examining arguments and are therefore pointless.
The difference between our worldviews is I actually have a basis for accounting for the existence of logic.
Why do you think I don't? Logic is proto-language and its origins are similar to the origins of language -- sentient, social species will naturally develop it; ideas cannot be communicated from mind to mind without it.
Logic exists because human beings are image bearers of God designed to think God's thoughts after him. And that is how the Bible accounts for reliability and the existence of logic.
I say again that you couldn't read or apprehend the Bible without logic. Saying the Bible justifies logic is like saying the cart justifies the horse.
Rationality is an assumption that underlies every valid proof of anything. Rationality is not an arbitrary or hand-wavey assumption, it is based on our observations of the mechanics of objective truth in the real world.
Using a system of thought based on rationality, (e.g. quantum mechanics) we can predict real-world experimental outcomes with extremely high accuracy and reproducibility. These are objective facts that anyone with sufficient time and interest can verify for themselves. If we subject other notions of truth (e.g. the number of words in a sentence determines its truth value) to this type of empirical test, then we would find the predictive power of this method to be highly unreliable and not reproducible. This indicates that there is something objectively correct about rationality and that there is something objectively wrong with other metrics for truth.
Oh, come now. Turn your mind to criticizing rationality using the same pattern of argument you're using against ethics, and you will see that both disciplines fall if you insist on fairly applying the "I don't care" argument to both cases.
Here: Suppose I don't care about accuracy and reproducibility, since both of those words end with 'y.' What then?
Caring about accuracy and reproducibility and correspondence with the outside world is the very basis of rationality (in fact, it's sometimes taken as the definition thereof.) So your ability to convince me of the "correctness" of rationality depends crucially on me having already agreed to a particular standard for correctness -- namely, that associated with being rational!
I emphasize again that this is a game that anyone can play with any epistemic theory, and the game can be played until the heat death of the universe. Generally, people can see it's pointless in every other context except ethics.
I acknowledge that anyone can say "I don't care" about anything, including rationality. But you and I are using rational discourse to conduct this debate. And you concede later in your post: "I don't propose that ethics is disjoint from rationality." In other words, accepting the validity of rationality is a necessary prerequisite to discussing ethics.
So while I agree that the epistemic validity of rationality is not self-justifying, we can regard it as such for purposes of this debate because we have both conceded its validity in order to have this debate in the first place.
Quote from bitterroot »
Unlike rationality, we cannot empirically test the objective validity of moral and ethical value statements. How would one show that "causing harm is bad" is a true moral statement while "causing harm is good" is a false moral statement?
If you wanted to test that, you could do something like this: go to the historical record, look at societies where causing harm was endorsed, contrast with societies where causing harm was forbidden, and look at how well those societies did. Probably you could get pretty far just doing regression on the binary variable "Does this society still exist?" Obviously a lot of details need to be filled in to make this a sound experiment, but the basis of it would be empirical data.
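To be concrete about what such a regression might look like, here's a minimal sketch. The data is entirely made up for illustration (a real study would need actual historical coding and far more controls); it just shows the shape of the test: fit a logistic model of survival against whether a society endorsed causing harm, and look at the sign of the coefficient.

```python
import math

def logistic_fit(xs, ys, lr=0.5, steps=2000):
    """Fit P(survives) = sigmoid(w*x + b) by gradient ascent on the log-likelihood."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (y - p) * x   # gradient w.r.t. the slope
            gb += (y - p)       # gradient w.r.t. the intercept
        w += lr * gw / len(xs)
        b += lr * gb / len(xs)
    return w, b

# Invented data: x = 1 if the society endorsed causing harm, y = 1 if it still exists.
endorsed_harm = [0, 0, 0, 0, 1, 1, 1, 1]
survived      = [1, 1, 1, 0, 0, 0, 0, 1]

w, b = logistic_fit(endorsed_harm, survived)
print(w < 0)  # a negative coefficient: endorsing harm predicts non-survival
```

On this toy data the fitted slope comes out negative, which is the kind of empirical signal the argument is pointing at; whether real historical data behaves this way is exactly the open question.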
Why should the existence or non-existence of a society have anything to do with morality?
If an anti-ethicist lodges an epistemic objection to ethics (e.g. by saying that its definitions and axioms exist only by fiat) then any such objection also applies to rationality, because it is also an epistemic theory. (and, e.g., its definitions and axioms exist only by fiat as well)
Thus, by contraposition, a person who lodges no epistemic objections to rationality should, if he wishes to be consistent, lodge no epistemic objections to ethics. (Of course, this leaves open non-epistemic objections and objections to particular theories of ethics.)
Am I likewise precluded from asserting objections to your concocted epistemic theories that determine truth based on the number of words in a sentence or whether a word ends in the letter "y"?
Quote from bitterroot »
There's another problem with the "maximize the distance" idea as well; namely that morality is path-dependent under most formulations. In other words, how you get from point A to point B is just as important as the distance between those points when we talk about morality. Maybe we could increase total societal happiness by painlessly euthanizing all the unhappy people. This may move us to a "better" position on the utility function, but most would agree it's morally wrong to kill innocent people.
This is an entirely sensible non-epistemic objection, and certainly a subject worth discussing.
Big picture, though -- if you're agreeing that there's an objective system of utility out there, and your objection is just through what means that utility gets maximized, then you're essentially conceding the better part of the argument to me. When it comes to the actual details of the utility function and how to maximize it, I'm perfectly prepared to admit my ignorance. Like I said, moral science is extremely primitive in its present state.
I have no problem with the idea that an objective system of utility might exist. I don't want to concede that it does exist because I'm not sure. But it might.
My issue is why utility should have any connection with morality and "ought" statements. Why "ought" one attempt to maximize utility?
Certainly there are many systems of moral thought that believe maximizing utility is not a good thing. And maximizing utility may have nothing to do with maximizing the chances of your society's survival, which you said earlier was the way we should empirically test the veracity of moral systems.
At bottom, though, my objection is based on the is-ought problem. How do you go from the statement "if you want to maximize utility, then behave morally" to the statement "you ought to behave morally?" How does one arrive at a naked ought?
Whatever behaviors make the heroin addict flourish are the moral ones, and those that don't are the immoral ones. You just seemingly acknowledged that there was a fact of the matter about which of these two things is happening, presumably based on the physical state of the heroin addict himself. Which is empirical. What is left to explain?
Why "flourishing" is objectively "good." Why one "ought" to maximize flourishing.
Quote from bitterroot »
This stands in stark contrast to rationality, which we can expect to outperform all its competitors when put to the empirical test. If "your rationality" is different from "my rationality," we can put these to the test in the real world and see which one reliably cashes out by correctly predicting real-world results.
Imagine you're talking to an "epistemic Jeffrey Dahmer." In response to your proposed test, comparing his brand of rationality to yours, he simply informs you that he doesn't care about real-world results. What then?
He is free not to care. Because rationality does not demand that you care about it. No one claims that the existence of rationality imposes any duty on anyone to do anything.
But morality, by making "ought" statements, is purporting to impose duties on people to act a particular way. If people are perfectly free not to care about these duties and not act the way they "ought" to act, then the word "ought" is meaningless. If there is no particular reason a person "ought" to care about and follow moral rules, then why do we call them moral rules?
Quote from bitterroot »
Economic and ethical arguments are only useful if we agree at the outset about the goals we are trying to achieve. But there is no way to prove that one set of goals is "better" than another set of goals.
Sure there is. Example: The famines of Stalinist Russia were, in large part, the result of implementing a particular economic theory. Does this not count as evidence for the proposition that that particular economic theory is bad?
If not, why not? If so, then insofar as that economic theory constitutes a "set of goals," well, we've just proven that set of goals is bad.
The missing step in your syllogism is proof for the statement that famines are "bad." This is the objection I keep repeating. How does one prove that anything is "bad" or "good?" How does one show that "bad" and "good" even exist as coherent, objective concepts?
Using the Bible to prove the Bible is not a problem because it is Biblical. It is perfectly consistent within itself to do so. The fact that it is circular is trivial because the Bible is not concerned with proving its claims in a logical fashion. That is not the case with logic. Using logic to prove logic is a problem because that is illogical. So no it is not a double standard.
When you place logic as the ultimate authority over scripture you have placed a worldview that is an absurdity to begin with over the Bible. My worldview does not have to be sucked in to the problems of your worldview.
What you have is not so much a "worldview" as a "blind spot".
Consider the matter of consistency. You assert that the Bible is self-consistent. (It isn't, but for the sake of argument let's grant that it is.) Furthermore, you imply that the self-consistency of the Bible is a positive thing. You rate the Bible as "good" on some scale because it is self-consistent, and you would rate it as "bad" were it not self-consistent. You do not believe other religions' scriptures, like the Qur'an, because you think they are not self-consistent -- again, "bad" on this scale. You think the Bible is better than the Qur'an, and you think self-consistency is the reason for it.
Have I reconstructed your position accurately so far?
Okay. Now ask yourself: what is this scale?
It's logic, dude. Logic is, in its entirety, the idea that consistency is good and contradiction is bad. Everything that logicians do is just variations on that theme. So when you decide that the Bible is self-consistent, you are using logic to evaluate the Bible. And because you are using self-consistency to determine whether you should accept or reject the Bible and other scriptures, your logic has overriding authority over these scriptures. If it did not, you would not have any grounds to praise the Bible for self-consistency, any more than you would have grounds to praise it for being made of tree fiber -- these properties would simply be irrelevant to the question of why the Bible is valuable. And you would not have grounds to reject the Qur'an -- its inconsistency with itself and with the Bible would likewise be irrelevant to the question of why it is not valuable.
I'm not the one imposing a logical "worldview" on the Bible. That's you. You don't realize it, because you're used to thinking of "logic" as "that nasty thought process skeptics use to challenge my faith". So as a result, whenever the idea that consistency is good and contradiction is bad reinforces your faith, you embrace it as simply natural and clear thinking, but whenever it challenges your faith, you call it "logic" and reject it. But in reality it's all the same logic. It's all simply natural and clear thinking. You're just applying the idea that consistency is good inconsistently. And, before you think you're allowed to do that because worldview or circular reasoning or whatever, remember that if you're not applying the idea that consistency is good consistently, there's no reason for you to apply it to the Bible.
PS: You should also worry that justifying yourself on the basis of differing "worldviews" is entirely too subjectivist for someone claiming to possess universal objective truth.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
Probably not intolerable to Dahmer. He witnessed firsthand a large part of the suffering his actions caused, and it didn't seem to bother him.
It didn't bother him because he lacked empathy. You know, "the ability to understand and share the feelings of another." If he were to have that understanding -that knowledge- I think it's safe to say it would bother him.
I'm pretty sure giving him perfect understanding -including empathy- and then making him analyze the full consequences of his actions would be rather intolerable. I would even call it "universally subjectively intolerable."
Why would "perfect understanding" include a feeling of empathy? Certainly it would include perfect knowledge of what the other person is experiencing. But empathy is knowledge plus a certain type of emotional reaction to that knowledge.
What you're saying is: if we somehow modified Dahmer's subjective emotional experiences, we could make harming others intolerable to him. But that is the opposite of saying that harming others is "universally subjectively intolerable." Instead, harming others is only subjectively intolerable to people who subjectively experience an emotional reaction that makes harming others intolerable to them. Other people (like Dahmer) do not experience this type of emotional reaction and do not experience harming others as subjectively intolerable. We would have to change Dahmer's subjective outlook in order to make harming others intolerable to him. This proves that harming others is not universally subjectively intolerable.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
If Dahmer didn't just have a superficial understanding of the outcome of his actions, and really and truly understood -on a deep level- those outcomes (outcomes like the emotions his actions caused in others), I don't think it's unreasonable to believe he would feel remorse for what he did.
Likewise -with perfect knowledge- there might be a universal morality system no sentient creature could deny. But, nothing would prevent them from denying it without that knowledge.
Assuming this, just because Dahmer lacked some knowledge (in this case that knowledge being empathy) doesn't somehow make his choices valid. It just means he made incorrect choices based on an incomplete understanding of the system.
Why would "perfect understanding" include a feeling of empathy?
Because empathy is a subcategory of understanding; mainly, an understanding of the emotions of others.
Empathy is not just "understanding the emotions of others," it is understanding and caring about the emotions of others. Someone could fully understand the emotions of others and simply not care.
What you're saying is: if we somehow modified Dahmer's subjective emotional experiences, we could make harming others intolerable to him. But that is the opposite of saying that harming others is "universally subjectively intolerable." Instead, harming others is only subjectively intolerable to people who subjectively experience an emotional reaction that makes harming others intolerable to them. Other people (like Dahmer) do not experience this type of emotional reaction and do not experience harming others as subjectively intolerable. We would have to change Dahmer's subjective outlook in order to make harming others intolerable to him. This proves that harming others is not universally subjectively intolerable.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
If Dahmer didn't just have a superficial understanding of the outcome of his actions, and really and truly understood -on a deep level- those outcomes (outcomes like the emotions his actions caused in others), I don't think it's unreasonable to believe he would feel remorse for what he did.
Certainly it could cause him to feel remorse, but there's no reason it must. For something to be "universally" intolerable, it's not sufficient to say it "could" be intolerable. "Universally subjectively intolerable" means intolerable to every conceivable subjective observer.
"Care" is a lazy word choice. Let me rephrase.
One can objectively prove the truth of the Pythagorean theorem, and any rational agent must accept this theorem as true. It is also "binding" on everyone in the sense that it is just as true from my perspective as it is from your perspective. I do not need to accept any special premises or assumptions in order to arrive at the conclusion that the Pythagorean theorem is true.
One cannot objectively prove the truth of "ought" moral statements in the same way (if you can, I would love to see it). To prove moral statements, we must first agree about certain value propositions (X is "good," Y is "bad"). But there is no reason why you and I must necessarily agree about these value propositions. You say human flourishing is good, Dahmer says human flourishing is bad, someone else says human flourishing is neither good nor bad.
So I am free to simply say "I disagree with your conception of value" in response to any moral proof you construct, and there is no rational way for you to convince me. That's not the case for the Pythagorean theorem. If I am rational, I must agree that the Pythagorean theorem is true.
I'm not making the claim that universal deterrents exist, so maybe this isn't directed to me.
Assuming we can construct or define "the worst possible misery for everyone," we may be able to use this to derive some objective moral utility function. But it does not follow that this "objective moral utility function" has any bearing on or resemblance to morality, as we typically think about it. The furthest distance from suffering is not necessarily optimally moral; indeed some moral viewpoints would consider a maximally pleasurable life to be immoral.
Further, we could try to minimize the distance from "the worst possible misery" instead, and that would also be an objective moral utility function. Why is it objectively correct to define suffering as morally bad rather than as morally good?
In any case, it's interesting that you bring up ethics. Ethics aren't universal, they're just a codification of social norms regarding fairness and conduct. As such, people can disagree with them or run counter to them, and they're only 'wrong' in the sense of the judgment of the people who believe in ethics. It's people's belief, not any kind of universal constant, that make ethics valid. Nothing happens to someone who violates ethical principles and doesn't get caught. Ethics, and fairness, are only important in a cooperative social structure.
This isn't what bitterroot was saying. No one is arguing that a universal deterrent exists, because it doesn't. People have wildly different reactions to deterrents.
You don't think being a rational agent is a special premise or assumption? I know vanishingly few rational agents (I don't even know that I'd call myself one) and I don't even know many people who strive to be such. There are things that communities of people dedicated to striving for rationality do (like worrying about existential risk from sentient AI) that just seem bizarre to "normal" people.
The dark "secret" of epistemology -- any epistemology, even one that confines itself to pure logic and mathematics -- is that it's never finally grounded in anything, it's never going to be a closed, self-justifying system, and you can't reject or accept a particular epistemic notion without doing epistemology to explain why.
For example, suppose someone comes along who claims that the truth or falsity of a statement is decided by its length (in some fixed, formal semantic encoding) -- if it's an even number of characters, it's true, otherwise it's false. Let's say it turns out that the Pythagorean theorem encodes to an odd number of characters and is therefore false by that person's lights.
Of course, someone who is already rational can see that this violates an axiom of rationality. But how would I show that person that he is being irrational? Well, write down whatever brilliant argument I come up with to persuade him in formal language. Regardless of what I wrote down, it's only going to be persuasive to the person in question if all the sentences have an even number of characters! Otherwise he's going to tell me I've just argued from the basis of something false, which by his lights, I have. There's simply no way for me to get underneath the reasoning process of such a person and fix it. In order to do rational epistemics, you must already accept rational epistemics.
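To make the trap concrete, here's a toy sketch of that parity "epistemology" (the encoding and the sample sentence are invented for illustration): any refutation you write down is itself judged by the very standard it is trying to refute.

```python
def parity_true(statement: str) -> bool:
    """Toy 'epistemology': a statement is true iff its character count is even."""
    return len(statement) % 2 == 0

# My attempted refutation is itself just another statement to be judged
# by the parity standard -- I can't get underneath it.
argument = "Truth does not depend on character counts."
print(parity_true(argument))  # True: this sentence happens to be 42 characters long
```

The point of the sketch is that the evaluation function is applied to the arguments against it, so there is no neutral ground from which to attack it; you can only reject it from outside, by already being rational.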
So, going back to what you said: yes, if you're rational, you must agree that it's true. All of the elaborate machinery required for you to arrive at that truth is built into the word "rational."
I submit to you that the word "ethical" works in exactly the same way as "rational." Just as there is an elaborate set of axioms and definitions and epistemic notions underlying rationality, the same is true for ethics. Those who reject the epistemic machinery associated with rationality, we call irrational. Those who reject the epistemic machinery associated with ethics, we call unethical!
I was perhaps reading too much into the following:
If you won't claim there's a universal deterrent, then I will. Prison is a universal deterrent, or at least within an epsilon of being such. Nobody outside of a negligible collection of sociopaths who is aware that prison is a consequence of a particular action weights it positively when evaluating payoff from doing said action.
This is in the penumbra of an underlying moral concept, which can be summed up as: people flourish when untrammeled.
I spoke of an objective moral utility function. Surely if f is objectively definable, then so is -f. But need -f be moral if f is? No, says I. The moral direction is, by definition, the direction pointing away from misery.
The ultimate epistemic notion underlying these sorts of arguments seems to be that anyone can simply call anything they want ethical. There's a sense in which that's true, and it's the one I just described above. I'll grant that if you permit that, then this is where you end up. However, I don't think we ought to permit that (there's me using ethics to justify ethics) and I further submit that you already know why.
I refer you to this debate, in which you were (very cogently, I thought) arguing about the notion of a free market. When someone offered an alternative conceptualization of a free market, your response was to point out that the definition wasn't "useful" in a sense that you went on to specify.
We can certainly do the same thing with conceptual formulations of ethics. It's precisely that sense of "usefulness" that I apply (and I urge you to apply as well) when ruling out these empty, nihilistic ethical "theories." They don't make any useful predictions about a world in which they are true!
Treat ethical theories the same way you treat economic theories and you will convert yourself away from ethical nihilism quite rapidly. Maybe there are still a bunch of ethical theories legitimately left standing after such a winnowing, but "ethics is underpinned by arbitrary subjective values" won't be among them.
I'm not sure exactly what you're asking me to elaborate on, so here's a short brain dump on ethics: I believe ethics is or at least will soon become a proper science that produces objective results, but at the moment it is only in its infancy as a science, and the "common man," if you will, has some ideas about it which are turning out to be fairly serious errors.
Imagine chemistry in the 1600s. The "common man" still believes in alchemy, phlogiston, and Galen's elan vital, while there are some weirdos out on the fringe, with their then-newfangled microscopes and vacuum pumps and barometers and so forth, who are getting just the barest glimpse into what's really going on.
That's how ethics is now. Once the philosophy and technology develops a bit more, the idea that ethics is just a melange of people's subjective preferences will make about as much sense as phlogiston theory does.
How does the former follow from the latter?
Allow me to quote some wisdom:
I'll note that "the system of social norms that we needed" stands in contrast to other sets of social norms, presumably of the kind we didn't need. Ethics is buried in the distinction between the social norms we need and the ones we don't.
Evolution is an optimization algorithm running against an anterior function: the environment. The result of an evolutionary process (which you're saying here that propensity for ethical behavior is -- and I agree) is a solution to an optimization problem. When you look at the result of an evolutionary process you are seeing a local maximum of an objectively-determined function!
So the universe did "determine" ethics, in the sense that it is from the universe that we take our environment, which is the objective function against which evolution is running.
"Don't kill people" isn't a rule that some guy made up; it's built into the universe, because if you kill enough people, the universe will "respond" by discontinuing your society.
Then how did evolution select for particular ethical propensities, as you suggest it did? I submit that it could only do so if the universe does, in fact, "punish" at least certain types of unethical behavior.
True, but irrelevant as far as I can tell. Social structures needn't be arbitrary, subjective, or relativistic.
If he doesn't, I will. Prisons are (up to a factor of epsilon) a universal deterrent. Nobody wants to go to prison, and everyone who is aware that he might go to prison as a result of an action factors it negatively into his evaluation of the payoff.
Not everyone assigns the same negative value to going to prison, and not everyone evaluates these kinds of formulae in the same way, but modulo an epsilon, nobody is saying "Yay! Prison time!" The basic concept -- "being imprisoned is contrary to flourishing" -- is sound.
Based on what reasoning?
[Edit:] I just realized I replied to a post from the first page and there are 4 pages. Ignore this if it's already been answered.
Rationality is an assumption that underlies every valid proof of anything. Rationality is not an arbitrary or hand-wavey assumption, it is based on our observations of the mechanics of objective truth in the real world.
Using a system of thought based on rationality (e.g. quantum mechanics), we can predict real-world experimental outcomes with extremely high accuracy and reproducibility. These are objective facts that anyone with sufficient time and interest can verify for themselves. If we subject other notions of truth (e.g. the number of words in a sentence determines its truth value) to this type of empirical test, then we would find the predictive power of this method to be highly unreliable and not reproducible. This indicates that there is something objectively correct about rationality and that there is something objectively wrong with other metrics for truth.
Unlike rationality, we cannot empirically test the objective validity of moral and ethical value statements. How would one show that "causing harm is bad" is a true moral statement while "causing harm is good" is a false moral statement? The vast majority of people feel this to be true on a gut level, but a bunch of people feeling that something is true does not make it true. Moreover, "bad" and "good" are inherently subjective concepts that cannot be objectively measured; what is "bad" and what is "good" depends on who you ask.
I also want to point out another problem with your line of thinking, namely the idea that ethics is an alternative set of epistemic machinery. Ethics is not an alternative to rationality; ethical arguments rely on the epistemology of rational thought plus the epistemology of ethics on top of that. So ethics is necessarily constrained to obey the laws of rationality; it is subsidiary to rationality.
Most people don't want to go to prison. But, as you said, there's a "negligible collection" of people who either enjoy prison or are indifferent to it for whatever reason. The existence of even a single person who is not deterred by prison means prison is not a universal deterrent.
And even if we were to assume arguendo that prison is a deterrent to everyone, the degree to which prison is a deterrent varies significantly from person to person. Accordingly, it might not represent "the worst possible misery for everyone," meaning that it might not be useful as a means for constructing a utility function. Maximizing the utilitarian distance from prison might not result in maximal utility.
There's another problem with the "maximize the distance" idea as well; namely that morality is path-dependent under most formulations. In other words, how you get from point A to point B is just as important as the distance between those points when we talk about morality. Maybe we could increase total societal happiness by painlessly euthanizing all the unhappy people. This may move us to a "better" position on the utility function, but most would agree it's morally wrong to kill innocent people.
I tend to believe this is true, but certainly many people disagree. And surely you would agree that there is a subset of people who flourish more when their personal freedom is restricted in some way (a heroin addict may be better off if he is prevented from getting heroin).
But at the end of the day, even if I agree with your statement above, you are still begging the question. The debate is not about how to maximize flourishing. The debate is about how you demonstrate that maximizing flourishing is "moral" or "good." Indeed, how would you prove that anything is "moral" or "immoral" in an objective, empirically-verifiable way?
If misery is only immoral "by definition" (i.e. by fiat), then morality is inherently arbitrary. I can substitute a different definition and arrive at a different result. There is no rational or empirical reason to conclude that your definition is better than mine.
This stands in stark contrast to rationality, which we can expect to outperform all its competitors when put to the empirical test. If "your rationality" is different from "my rationality," we can put these to the test in the real world and see which one reliably cashes out by correctly predicting real-world results.
I do, in fact, treat ethics and economics the same. Both these disciplines are only useful if you and your audience agree on the goals you wish to achieve. Both economics and ethics are underpinned by arbitrary subjective values. Thus "should" or "ought" statements in both economics and ethics reduce to statements of the form "if you want to accomplish X, do Y."
In the example you linked, I was arguing about the definition of an abstract model called a free market, and the assumptions needed to make this model "useful." However, I acknowledge that "useful" is not an objective concept; it is based on the types of goals you are trying to achieve. In this particular example, I was trying to achieve the goal of modeling "economic efficiency," which essentially equates to "maximizing total social utility."
If my audience agrees that maximizing total social utility is a worthwhile goal, then we can use my model to have a conversation about that. However, there is no objective reason why my audience must agree to this. For example, maximizing total social utility does not include any notion of "fairness" or "social justice," so someone who thinks these things are valuable might not find my argument persuasive. There is no way for me to prove that they are wrong to care about social justice. There is no way for them to prove that I am wrong to not care about social justice.
Economic and ethical arguments are only useful if we agree at the outset about the goals we are trying to achieve. But there is no way to prove that one set of goals is "better" than another set of goals.
If you want X, then do Y.
If you don't do Y, you won't get X.
So, if you don't want X, then you really have no objective reason to do Y. If you don't want to be good or moral, then you don't have a reason to act good or moral. I don't see what the issue is.
However,
We're linking [acting moral] to [human survival]. So, "if you want [survival/flourishing of the species], then [behave morally]." If Dahmer cares about the survival of his species, then he should act moral. Now, maybe Dahmer is shortsighted and doesn't care. If that is the case, then -like the computer BS mentioned- he doesn't have a reason to act moral.
Now, while assuming a shortsighted Dahmer, "We" -as a species/group- do want [survival/flourishing]. So, "We" DO have reason to make shortsighted Dahmer act moral. We ought to find something shortsighted Dahmer does care about, and link that to him acting moral. (If he is acting at all -unlike the computer- then we can assume he cares about something.)
Because "We" -collectively- want [survival/flourishing of the species], "We" should enforce [moral behavior].
It would be like saying:
"If you don't do what God wants, you will go to hell."
"But, I want to go to hell, so what should I do?"
"Not what God wants."
The Bible uses a similar argument in Romans 1. Basically those who reject the epistemic and moral machinery of Biblical Christianity are said to be "suppressing the truth in unrighteousness." So in the end it all boils down to questions of ultimate authorities. An ultimate authority must be self-authenticating. Logic can't be a self-authenticating authority because that would be circular reasoning, which is why logic can never appeal to itself as the ultimate authority without also simultaneously destroying its own foundation as an ultimate authority.
But the Bible is a self-authenticating authority. It testifies that it itself is inspired of God, and within its pages you'll find the worldview with the best explanation for why human beings are intelligible, rational, and moral beings.
I basically agree with this, but it's an acknowledgement that "should" has no objective, universal meaning. When most people talk about morality (including OP, I assume), they are talking about rules they claim exist independent of particular goals. In other words, a Christian would likely say that the rule "you should not kill innocent people" is an objectively true statement for all people. The truth of the statement does not depend on the preferences or goals of society or of the person hearing the statement. It is an inherent property of the universe (and/or of God), the same way the mass of the electron or the speed of light is an inherent property of the universe.
If morality is entirely goal-oriented, then there is nothing special about moral ought statements. There is no particular reason you should follow a moral statement if you determine it is not in your interest to do so. That's the point of the thread, I believe. If someone wants to behave immorally, and either thinks they can escape the societal consequences or is willing to bear those consequences, why should they behave morally rather than immorally? You presumably would say "there's no reason to act morally in that situation." The Christian would say "because behaving immorally is transgressive against God and the universe it is always inherently bad to behave immorally."
Right, and this is what most Christians believe. Hell is a "choice" to separate yourself from God. This is how they solve the problem of "if God is omnibenevolent, why does he sentence people to eternity in hell?" The answer is that God doesn't choose to send them there, they choose to send themselves there.
But my point is that hell is an inherently bad place compared with heaven. This implies that there is some objective moral order to the universe that creates a distinction between goodness and badness; "bad" actions have "bad" eternal consequences while "good" actions have "good" eternal consequences. (I acknowledge this is an oversimplification of Christian doctrine).
If you don't believe in some variation of God, heaven, hell, karma, etc, then there is no objective standard that defines "good." It is a subjective, pragmatic standard based on what happens to benefit the survival and flourishing of your society at this particular time. Some moral ideas will tend to be constant between societies, like "killing innocent people is bad," but the fact that a lot of societies come to the same subjective moral conclusions is not the same thing as saying those conclusions are objectively true.
Likewise, shortsighted Dahmer can claim he really wants to do immoral actions even in the face of "if you want [survival/flourishing of the species], then [behave morally]." But, we don't know if he is truly aware of the consequences of his actions. If faced with the undeniable reality of humanity's extinction, he might realize he finds it undesirable.
Thus -despite it being objectively true that he should behave morally- without this enlightened self-interest, he claims he shouldn't.
But his actions or beliefs don't change the existence of the universal truth. Like the chess example: if you want to win, there are objectively moves you should make and ones you shouldn't. But that doesn't mean you are aware of this. You might truly want to win, but still think you should make moves that -in reality- you shouldn't.
Likewise -with perfect knowledge- there might be a universal morality system no sentient creature could deny. But, nothing would prevent them from denying it without that knowledge.
Taking an action that is detrimental to the flourishing of humanity will not necessarily cause humanity's extinction. In fact, unless undertaken on an enormous scale, an immoral action is almost certain to not cause the extinction of humanity. So it is completely possible for someone to conclude correctly that it is in their best interest to take a particular immoral action. The harm to society is diffuse and only affects them a small amount, but the benefit they receive from the action is relatively large. Absent some kind of punishment or deterrent, acting immorally is the rational choice under those circumstances.
On the other hand, assuming no one would ever rationally choose to go to hell (or whatever we say the negative cosmic consequences are), then no one could ever rationally choose to act immorally. Acting immorally would always be a mistake.
Humans as a society can, and have, survived and thrived on principles we would consider unethical. And they'll do it again in the future. We know this because we've done it for thousands of years. To suggest otherwise is ethnocentrism at its finest.
You're using 'need' here, but it's not really the case. They're social norms that work. Our current set of social norms may work, but so have others.
Or a country built on genocide and slaves will become the most powerful on Earth. It's hard to make judgments like this because all societies eventually change. What worked once can stop working, and then work again later. Ethics are just the codification of what is currently working.
...Because enough people get caught? That was the point of the 'doesn't get caught' part of that statement. As a whole, there is backlash from those in the social group when the norms are violated, but this rule only applies in the broadest sense. People violating the established 'rules' are good for societies, because it forces them to evolve. Otherwise we'd stagnate.
People often don't take prison into consideration when they commit a crime, and some people DO want to go to prison, especially old-timers who got used to prison and are paroled or released much, much later in their lives. Some people think of prison as a second home; they're in and out so much. You can say that generally prison is a deterrent, but it's not by any stretch of the imagination a universal deterrent, because there is likely at least one person out there who has no feelings about going to prison either way.
I think we're mostly on the same page here, I just don't think you're taking into consideration that ethics change as social norms do, and they're only right or wrong depending on what social norm perspective you're viewing them from.
I say that my word is the word of God and what I say is the best explanation for why human beings are often unintelligible, irrational and sometimes moral beings.
But that's circular reasoning, just saying it is because I say so is meaningless.
Anyway, if an individual who claims they wish to act immorally was made truly aware of the totality of the issue, it might become clear to him that he shouldn't be acting immorally. I mean, many forms of Hell in fiction are just that: people being shown -over and over- all of the negative consequences of their actions and forced to feel what they did to others.
Complete knowledge of the outcomes of our evil actions could easily be "universally subjectively intolerable."
...which means it too "can never appeal to itself as the ultimate authority without also simultaneously destroying its own foundation as an ultimate authority". Honestly, cloudman, double standard much? It doesn't even matter that your claim about best explanations is nonsensical (since you'd need some sort of outside authority to evaluate explanations as "better" or "worse"). That's just a detail of the postmortem for why this particular argument fails. And we don't need to get into such details. You yourself have pointed out the general reason why no proclaimed authority can ever self-authenticate: because that would be circular reasoning.
Probably not intolerable to Dahmer. He witnessed firsthand a large part of the suffering his actions caused, and it didn't seem to bother him.
Oh, come now. Turn your mind to criticizing rationality using the same pattern of argument you're using against ethics, and you will see that both disciplines fall if you insist on fairly applying the "I don't care" argument to both cases.
Here: Suppose I don't care about accuracy and reproducibility, since both of those words end with 'y.' What then?
Caring about accuracy and reproducibility and correspondence with the outside world is the very basis of rationality (in fact, it's sometimes taken as the definition thereof.) So your ability to convince me of the "correctness" of rationality depends crucially on me having already agreed to a particular standard for correctness -- namely, that associated with being rational!
I emphasize again that this is a game that anyone can play with any epistemic theory, and the game can be played until the heat death of the universe. Generally, people can see it's pointless in every other context except ethics.
If you wanted to test that, you could do something like this: go to the historical record, look at societies where causing harm was endorsed, contrast with societies where causing harm was forbidden, and look at how well those societies did. Probably you could get pretty far just doing regression on the binary variable "Does this society still exist?" Obviously a lot of details need to be filled in to make this a sound experiment, but the basis of it would be empirical data.
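To make the shape of that proposed test concrete, here is a minimal sketch in Python. The data below are entirely invented placeholders, not historical findings; a real version would substitute records drawn from the historical record and would need a proper regression with controls rather than a bare group comparison:

```python
# Hypothetical sketch of the proposed empirical test: compare survival
# rates of societies that endorsed gratuitous harm against those that
# forbade it. All names and values here are invented for illustration.

# (society, endorsed_harm, still_exists)
societies = [
    ("Society A", True,  False),
    ("Society B", True,  False),
    ("Society C", True,  True),
    ("Society D", False, True),
    ("Society E", False, True),
    ("Society F", False, False),
]

def survival_rate(records, endorsed_harm):
    """Fraction of societies in the given group that still exist."""
    group = [exists for _, harm, exists in records if harm == endorsed_harm]
    return sum(group) / len(group)

rate_endorsed = survival_rate(societies, True)
rate_forbade = survival_rate(societies, False)
print(f"endorsed harm: {rate_endorsed:.2f}")
print(f"forbade harm: {rate_forbade:.2f}")
```

This is just the skeleton of the idea; the hard part of the actual experiment would be coding the historical record honestly and controlling for confounders like geography, era, and population size.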
You can't infer the subjectivity of a thing from the subjectivity of opinions about that thing. Ask a hundred random people how the double-slit experiment works and you'll get a hundred different answers. That doesn't mean the experiment itself is subjective.
I fail to see how this could possibly represent a problem with my line of thinking, since I agree with it. I don't propose that ethics is disjoint from rationality. I do point out that, as an epistemic theory, it is not self-justifying and it depends on the acceptance of axioms and definitions. If an anti-ethicist lodges an epistemic objection to ethics (e.g. by saying that its definitions and axioms exist only by fiat) then any such objection also applies to rationality, because it is also an epistemic theory. (and, e.g., its definitions and axioms exist only by fiat as well)
Thus, by contraposition, a person who lodges no epistemic objections to rationality should, if he wishes to be consistent, lodge no epistemic objections to ethics. (Of course, this leaves open non-epistemic objections and objections to particular theories of ethics.)
This is an entirely sensible non-epistemic objection, and certainly a subject worth discussing.
Big picture, though -- if you're agreeing that there's an objective system of utility out there, and your objection is just through what means that utility gets maximized, then you're essentially conceding the better part of the argument to me. When it comes to the actual details of the utility function and how to maximize it, I'm perfectly prepared to admit my ignorance. Like I said, moral science is extremely primitive in its present state.
Granted. Your agreement that there is an actual fact of the matter concerning whether the heroin addict is flourishing or not is all that I ask for.
Whatever behaviors make the heroin addict flourish are the moral ones, and those that don't are the immoral ones. You just seemingly acknowledged that there was a fact of the matter about which of these two things is happening, presumably based on the physical state of the heroin addict himself. Which is empirical. What is left to explain?
Now we're back to the epistemic objections that don't work. I can do the same thing with rationality: substitute a different definition of rationality that doesn't care about accuracy or empiricism at all. Then what?
All definitions exist only by fiat. A definition is just a shortcut. Changing definitions doesn't change what's true.
Imagine you're talking to an "epistemic Jeffrey Dahmer." In response to your proposed test, comparing his brand of rationality to yours, he simply informs you that he doesn't care about real-world results. What then?
No, your response to your interlocutor in that debate was a lot more specific than that. It was substantially objective. You said:
By the same token, an ethical theory that refuses even to define what ethics is, and expressly rejects all attempts to do so, tells us nothing about a world in which that ethical theory holds.
Sure there is. If it weren't unethical to perform an experiment that would test this theory, I would be all for it, because I bet a room full of SJWs would eat each other inside of a week.
Sure there is. Example: The famines of Stalinist Russia were, in large part, the result of implementing a particular economic theory. Does this not count as evidence for the proposition that that particular economic theory is bad?
If not, why not? If so, then insofar as that economic theory constitutes a "set of goals," well, we've just proven that set of goals is bad.
The Bible is a mere collection of squiggles on paper absent an epistemology sufficiently powerful to interpret it, and any such system is powerful enough to recognize circular logic. Thus the very tool that allows you to perceive the Bible's self-endorsement in the first place is the same tool that punctures an irreparable hole in it.
Using the Bible to prove the Bible is not a problem because it is Biblical. It is perfectly consistent within itself to do so. The fact that it is circular is trivial because the Bible is not concerned with proving its claims in a logical fashion. That is not the case with logic. Using logic to prove logic is a problem because that is illogical. So no it is not a double standard.
When you place logic as the ultimate authority over scripture you have placed a worldview that is an absurdity to begin with over the Bible. My worldview does not have to be sucked in to the problems of your worldview.
You have no basis for knowing that logic is even intelligible in your worldview, and yet you just assume it to be true. So you are guilty of the same circularity that you accuse me of. The difference is your circularity nullifies your ultimate epistemic authority while my circularity is consistent with and does not nullify my ultimate epistemic authority.
The difference between our worldviews is I actually have a basis for accounting for the existence of logic. Logic exists because human beings are image bearers of God designed to think God's thoughts after him. And that is how the Bible accounts for reliability and the existence of logic.
I'm pretty sure giving him perfect understanding -including empathy- and then making him analyze the full consequences of his actions would be rather intolerable. I would even call it "universally subjectively intolerable."
(This is pretty much the plot behind the Indigo Lantern Power Ring.)
On the contrary, since I expressly acknowledge that my position is not ultimately self-justifying, I am not guilty of any circularity at all. And since you claim yours is, you are guilty of circularity. You just don't care about circularity, because God, or whatever. Well, this is exactly the kind of "I don't care" argument that I'm currently debating with bitterroot about. If you don't want to read the extremely long posts (and I'm not sure I'd blame you) the TLDR is that such arguments undermine the very basis for examining arguments and are therefore pointless.
Why do you think I don't? Logic is proto-language and its origins are similar to the origins of language -- sentient, social species will naturally develop it; ideas cannot be communicated from mind to mind without it.
I say again that you couldn't read or apprehend the Bible without logic. Saying the Bible justifies logic is like saying the cart justifies the horse.
I acknowledge that anyone can say "I don't care" about anything, including rationality. But you and I are using rational discourse to conduct this debate. And you concede later in your post: "I don't propose that ethics is disjoint from rationality." In other words, accepting the validity of rationality is a necessary prerequisite to discussing ethics.
So while I agree that the epistemic validity of rationality is not self-justifying, we can regard it as such for purposes of this debate because we have both conceded its validity in order to have this debate in the first place.
Why should the existence or non-existence of a society have anything to do with morality?
Am I likewise precluded from asserting objections to your concocted epistemic theories that determine truth based on the number of words in a sentence or whether a word ends in the letter "y"?
I have no problem with the idea that an objective system of utility might exist. I don't want to concede that it does exist because I'm not sure. But it might.
My issue is why utility should have any connection with morality and "ought" statements. Why "ought" one attempt to maximize utility?
Certainly there are many systems of moral thought that believe maximizing utility is not a good thing. And maximizing utility may have nothing to do with maximizing the chances of your society's survival, which you said earlier was the way we should empirically test the veracity of moral systems.
At bottom, though, my objection is based on the is-ought problem. How do you go from the statement "if you want to maximize utility, then behave morally" to the statement "you ought to behave morally?" How does one arrive at a naked ought?
Why "flourishing" is objectively "good." Why one "ought" to maximize flourishing.
He is free not to care. Because rationality does not demand that you care about it. No one claims that the existence of rationality imposes any duty on anyone to do anything.
But morality, by making "ought" statements, is purporting to impose duties on people to act a particular way. If people are perfectly free not to care about these duties and not act the way they "ought" to act, then the word "ought" is meaningless. If there is no particular reason a person "ought" to care about and follow moral rules, then why do we call them moral rules?
The missing step in your syllogism is proof for the statement that famines are "bad." This is the objection I keep repeating. How does one prove that anything is "bad" or "good?" How does one show that "bad" and "good" even exist as coherent, objective concepts?
Consider the matter of consistency. You assert that the Bible is self-consistent. (It isn't, but for the sake of argument let's grant that it is.) Furthermore, you imply that the self-consistency of the Bible is a positive thing. You rate the Bible as "good" on some scale because it is self-consistent, and you would rate it as "bad" were it not self-consistent. You do not believe other religions' scriptures, like the Qur'an, because you think they are not self-consistent -- again, "bad" on this scale. You think the Bible is better than the Qur'an, and you think self-consistency is the reason for it.
Have I reconstructed your position accurately so far?
Okay. Now ask yourself: what is this scale?
It's logic, dude. Logic is, in its entirety, the idea that consistency is good and contradiction is bad. Everything that logicians do is just variations on that theme. So when you decide that the Bible is self-consistent, you are using logic to evaluate the Bible. And because you are using self-consistency to determine whether you should accept or reject the Bible and other scriptures, your logic has overriding authority over these scriptures. If it did not, you would not have any grounds to praise the Bible for self-consistency, any more than you would have grounds to praise it for being made of tree fiber -- these properties would simply be irrelevant to the question of why the Bible is valuable. And you would not have grounds to reject the Qur'an -- its inconsistency with itself and with the Bible would likewise be irrelevant to the question of why it is not valuable.
I'm not the one imposing a logical "worldview" on the Bible. That's you. You don't realize it, because you're used to thinking of "logic" as "that nasty thought process skeptics use to challenge my faith." So as a result, whenever the idea that consistency is good and contradiction is bad reinforces your faith, you embrace it as simply natural and clear thinking, but whenever it challenges your faith, you call it "logic" and reject it. But in reality it's all the same logic. It's all simply natural and clear thinking. You're just applying the idea that consistency is good inconsistently. And, before you think you're allowed to do that because of worldview or circular reasoning or whatever, remember that if you're not applying the idea that consistency is good consistently, there's no reason for you to apply it to the Bible.
PS: You should also worry that justifying yourself on the basis of differing "worldviews" is entirely too subjectivist for someone claiming to possess universal objective truth.
Why would "perfect understanding" include a feeling of empathy? Certainly it would include perfect knowledge of what the other person is experiencing. But empathy is knowledge plus a certain type of emotional reaction to that knowledge.
What you're saying is: if we somehow modified Dahmer's subjective emotional experiences, we could make harming others intolerable to him. But that is the opposite of saying that harming others is "universally subjectively intolerable." Instead, harming others is only subjectively intolerable to people who subjectively experience an emotional reaction that makes harming others intolerable to them. Other people (like Dahmer) do not experience this type of emotional reaction and do not experience harming others as subjectively intolerable. We would have to change Dahmer's subjective outlook in order to make harming others intolerable to him. This proves that harming others is not universally subjectively intolerable.
What I'm saying hasn't changed from: If Dahmer didn't just have a superficial understanding of the outcome of his actions, and really and truly understood -on a deep level- those outcomes (outcomes like the emotions his actions caused in others), I don't think it's unreasonable to believe he would feel remorse for what he did.
So, what I'm saying is: Assuming this, just because Dahmer lacked some knowledge (in this case that knowledge being empathy) doesn't somehow make his choices valid. It just means he made incorrect choices based on an incomplete understanding of the system.
Empathy is not just "understanding the emotions of others," it is understanding and caring about the emotions of others. Someone could fully understand the emotions of others and simply not care.
Certainly it could cause him to feel remorse, but there's no reason it must. For something to be "universally" intolerable, it's not sufficient to say it "could" be intolerable. "Universally subjectively intolerable" means intolerable to every conceivable subjective observer.